There’s a common pattern underpinning technology hype cycles: a bunch of large companies saying “New Shiny Thing will reduce our costs and improve customer experience!” People get very excited about some new and compelling technology, and then everybody scrambles to figure out how to align it with their business.
At the moment, it’s “AI”, or more specifically a small subset of artificial intelligence called Large Language Models (LLMs), which do a very good job of sounding like people while providing correct-ish information. Environmental impacts aside, these models are qualitatively better than anything that came before, and everybody who’s anybody is rushing to include them in some aspect of their business.
A few days ago, a court case was decided against Air Canada: the chatbot on their website had incorrectly advised a customer about the airline’s bereavement refund policy, and the customer sued to have Air Canada honor the chatbot’s claim.
In court, Air Canada argued that the chatbot shouldn’t be expected to provide reliable information, and that customers should be cross-checking it against other sources on the Air Canada website. This is crazy: the chatbot is a feature of the Air Canada website itself, acting as an official customer service function. Air Canada’s legal team was obviously scraping the bottom of the barrel for an angle, and any sane court would laugh them out of the room.
People make mistakes, and so do machines. Current LLMs are neither explainable nor predictable to the levels required for handling important jobs independently. They provide very convincing-sounding answers, and they tend to inspire more trust than they deserve. In my view, putting an LLM (or “chatbot”) in a public-information or customer support role is a lot like putting an intern in that position. It’s feasible, but you need to expect a lot of (possibly costly) mistakes. You have to put sufficient checks and procedures in place to catch those mistakes before they reach customers (one simple shape of such a check is sketched below), because the chatbot is representing you, and you are responsible for its actions. Air Canada did not. Most companies rushing to integrate generative AI are not.
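What do “sufficient checks” actually look like? Here’s a minimal sketch in Python of one common pattern: the model is only allowed to answer when its reply can be grounded in verified policy text, and everything else gets routed to a human. Everything here (lookup_policy, llm_complete, the policy snippet itself) is a hypothetical stand-in, not any real API or Air Canada’s actual policy.

```python
from dataclasses import dataclass

@dataclass
class PolicySnippet:
    topic: str
    text: str  # verified wording from the official policy pages
    url: str   # canonical source a human can double-check

# Tiny hand-curated source of truth. In a real system this would be a
# retrieval index over the official policy pages, maintained by the team
# that owns those pages.
VERIFIED_POLICIES = [
    PolicySnippet(
        topic="bereavement refund",
        text=(
            "Bereavement fares must be requested before travel; completed "
            "travel is not eligible for a retroactive refund."
        ),
        url="https://example.com/policies/bereavement",
    ),
]

def lookup_policy(question: str) -> PolicySnippet | None:
    """Naive keyword match standing in for a real retrieval step."""
    q = question.lower()
    for snippet in VERIFIED_POLICIES:
        if all(word in q for word in snippet.topic.split()):
            return snippet
    return None

def llm_complete(prompt: str) -> str:
    """Stand-in for a real model call; it just echoes the policy text so
    the example runs without an external service."""
    return prompt.split("Policy: ", 1)[1].split("\nQuestion:", 1)[0]

def answer_customer(question: str) -> str:
    snippet = lookup_policy(question)
    if snippet is None:
        # No verified source: the model doesn't get to improvise policy.
        return "I can't answer that reliably; routing you to an agent."
    # Constrain the model to rephrasing verified text, and always cite the
    # canonical source so the claim can be checked (and honored).
    prompt = (
        "Answer the customer's question using ONLY the policy text below.\n"
        f"Policy: {snippet.text}\nQuestion: {question}"
    )
    return f"{llm_complete(prompt)}\n(Source: {snippet.url})"

print(answer_customer("Can I get a bereavement refund after my trip?"))
print(answer_customer("What's your pet policy?"))
```

The retrieval details don’t matter; what matters is that the consequential claims (refund eligibility) never originate from the model, and anything the model can’t ground gets escalated to a person. That’s the check Air Canada apparently skipped.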
The Air Canada case feels like an important moment in the current business-culture obsession with LLMs. Magical technology that gives unpredictable results can ultimately harm your business more than it helps. Air Canada has removed the chatbot from their site, at least for now. I’m sure this won’t be the last example of its kind, and large risk-averse orgs will be taking note.