The Artificial Intelligence (AI) gold rush has seen the widespread proliferation of customer service chatbots. While these technological advancements have revolutionised customer interactions, they also come with challenges, as the 2024 case of Jake Moffatt v Air Canada and its AI chatbot demonstrates. In this blog, Catherine Ewings, Syndeo’s COO, delves into the detail and extracts the valuable lessons for brands interested in or currently adopting this technology.

What happened between Jake Moffatt and Air Canada’s AI chatbot?  

The Air Canada chatbot incident began innocuously enough. 

In November 2022, after the death of his grandmother, Jake Moffatt visited the Air Canada website to book a return flight from Vancouver to Toronto. Unsure of the bereavement discount policy, he opened the handy chatbot and asked it to explain.

When the chatbot told Moffatt he could claim a refund after booking his tickets, he went ahead and booked flights right away, safe in the knowledge that, within 90 days, he’d be able to claim a partial refund.

He has a screenshot to show that the chatbot’s full response was:

“If you need to travel immediately or have already travelled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form.”

Seems about as clear and encouraging as you’d hope to get in such circumstances. 

Moffatt was therefore surprised to find that his refund request was denied. Air Canada’s policy actually states that the airline won’t provide refunds for bereavement travel after the flight has been booked; the information provided by the chatbot was incorrect.

Moffatt spent months trying to get his refund, but was met with the same answer: refunds can’t be requested retroactively. Air Canada’s argument was that because the chatbot’s response included a link to a page on the site outlining the policy correctly, Moffatt should’ve known better. Its best offer was a $200 coupon. So he took the airline to court.

Moffatt filed a small claims complaint with British Columbia’s Civil Resolution Tribunal. Although the chatbot couldn’t take the stand, Air Canada argued not only that its chatbot should be considered a separate legal entity, but also that Moffatt never should have trusted it. Because, naturally, customers should in no way trust the systems companies put in place to mean what they say.

Christopher Rivers, the Tribunal member who decided the case in favour of Moffatt, called Air Canada’s defence “remarkable”.

“Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives—including a chatbot,” Rivers wrote. “It does not explain why it believes that is the case” or “why the webpage titled ‘Bereavement Travel’ was inherently more trustworthy than its chatbot.”

In the end, the Tribunal ruled that Moffatt was entitled to a partial refund of CAD $650.88 off the original fare of CAD $1,640.36, as well as additional damages to cover interest on the airfare and Moffatt’s legal fees.

Lessons for brands using AI Assistants for customer service 

As we navigate the evolving landscape of AI in customer service, incidents like Air Canada’s offer invaluable lessons. In today’s digital age, many consumers appreciate AI retail assistants, so rather than approaching AI tools with fear, brands can learn from these experiences.

Listed below are our key takeaways:  

  1. Prioritise Accuracy: AI Assistants are powerful tools, but their responses must be accurate and consistent with your brand’s policies and messaging. Regular testing and updates are crucial to maintain that accuracy. 
  2. Ensure Omnichannel Consistency: Keep messaging consistent across all your communication channels, including AI Assistants, website content and live agents. Mixed messages create confusion and erode trust. 
  3. Foster Human Oversight: Implement a robust framework for human oversight to review interactions, allowing for the timely correction of errors and the provision of live agent support when AI falls short. This safeguards against misinformation and ensures a positive customer experience. 
  4. Embrace Transparency: Clearly communicate the limitations of AI-powered tools. Let consumers know if the assistant is still under development or if complex inquiries require human intervention. Building trust fosters understanding and reduces the risk of misinterpretation. 
  5. Prepare for the Unexpected: Have a clear escalation process in place for situations where the AI Assistant fails or provides incorrect information, as sketched below. Be proactive in addressing customer concerns and demonstrating accountability. 
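
To make takeaways 3 and 5 concrete, here is a minimal, illustrative Python sketch of a confidence-based escalation guardrail. Every name in it (answer_from_vetted_policies, hand_off_to_agent, the 0.8 threshold) is a hypothetical placeholder, not a real API; treat it as a sketch of the pattern under those assumptions, not a definitive implementation.

# A minimal, illustrative sketch of a human-escalation guardrail for an
# AI assistant. All names and the threshold value are hypothetical
# placeholders, not a real platform API.

CONFIDENCE_THRESHOLD = 0.8  # below this, a human takes over


def answer_from_vetted_policies(question: str) -> tuple[str, float]:
    """Return a draft answer plus a confidence score in [0, 1].

    Stubbed for illustration; in practice this would retrieve from a
    reviewed policy store so the assistant cannot contradict the
    published policy."""
    return ("Bereavement fares must be requested before travel; refunds "
            "are not available retroactively.", 0.55)


def hand_off_to_agent(question: str) -> str:
    """Route the conversation to a live agent (stubbed)."""
    return "Let me connect you with a member of our team who can confirm this."


def respond(question: str) -> str:
    answer, confidence = answer_from_vetted_policies(question)
    if confidence < CONFIDENCE_THRESHOLD:
        # Takeaways 3 and 5: escalate rather than guess when unsure.
        return hand_off_to_agent(question)
    return answer


print(respond("Can I claim a bereavement refund after I fly?"))

The key design choice is that the assistant only answers from reviewed policy text and defers to a human whenever it is unsure; exactly the safeguard that was missing in the Air Canada case.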

 

Beyond the Refund: Reputational Damage 

The Air Canada case goes beyond the financial cost. The airline’s resistance to taking responsibility (for the sake of a mere CAD $650) and its lack of transparency damaged its reputation. The case also highlights the importance of proactive customer service and of taking responsibility for technology-related issues. 

The Path Forward 

AI Digital Assistants offer immense potential, but their implementation requires careful planning and execution. By prioritising accuracy, transparency and human oversight, brands can leverage these tools effectively while mitigating risks and building trust with their customers. 

And remember: AI Assistants are not a magic bullet. They are tools that require responsible management and integration. By learning from Air Canada’s misstep, brands can navigate the digital skies whilst safeguarding their brand and delivering exceptional customer experiences. 
