During the Paris 2024 Olympics, Google featured an ad for its generative AI tool, Gemini. In it, a father explains how he asked Gemini to help his daughter write a letter to American Olympic hurdler Sydney McLaughlin-Levrone. The ad received swift backlash, with some viewers upset that a father would teach his kid to use AI to express herself, and others voicing more general discomfort.
Google pulled the ad after just a few days, saying in a statement to multiple outlets, “We believe that AI can be a terrific tool for enhancing human creativity, but can never replace it.”
The incident highlights consumers’ wariness over AI, even as companies spend billions on it. Worldwide spending on AI and related business services has reached $235 billion, according to International Data Corporation. Marketers are also spending millions to advertise AI-powered products and services, with about $200 million spent from January to early August, TV measurement firm iSpot told CX Dive.
While acceptance of the technology is slowly growing, consumers regularly indicate that they’re skeptical of AI. Research published in the Journal of Hospitality Marketing & Management adds to that: AI terminology actually decreases customers’ purchasing intention, the study found.
“In every experiment that we have seen, if you use AI, it decreases the purchasing intention,” said Mesut Cicek, assistant professor of marketing and international business at Washington State University. “We provided them some text about the product, product descriptions, and then the only difference between the descriptions is in one, it includes AI. In the other one, it doesn’t include AI.”
Cicek and his colleague conducted a series of experiments. In the first, roughly 300 participants were shown a product description for a TV. The descriptions were nearly identical, but one TV was described as an “AI-powered TV” while the other was a “latest technology TV.”
Participants were then asked questions to determine their willingness to purchase the TV. Those who saw AI in the product description were less likely to make the purchase.
Researchers repeated the experiment with another 200 participants, this time with an “AI-powered car,” and the results were more pronounced. Purchase intention decreased significantly.
“If it is a perceived risky product, this effect is higher,” Cicek said.
In subsequent experiments on the use of “AI” to describe services, risk played a role in purchase intention as well. AI-powered customer service was perceived as lower risk, while AI-powered illness diagnosis was perceived as high risk. While both saw decreased purchasing intention, the drop was more pronounced for AI-powered illness diagnosis.
The role of trust
To Cicek, the most notable finding was the impact the AI term had on emotional trust, which may significantly affect consumer attitudes and behaviors.
“The most important finding of this study is the use of AI decreases emotional trust,” Cicek said. “The consumers have trust issues with AI, and then also it decreases the purchase intention.”
Consumers have concerns about the privacy, security and safety of companies using AI. That, coupled with the public’s fear of the unknown and questions about the impact of AI on autonomy, can all chip away at trust.
AI is an elusive concept for consumers — and in some ways a threatening one, said Audrey Chee-Read, principal analyst at Forrester.
“It feels more like an umbrella term that is going to take their job and take away their intellect,” Chee-Read said. “Over half of consumers believe AI poses a significant threat to society.”
There are two main aspects to this distrust, Chee-Read said:
- The first is a perceived threat to consumers’ ethics and morals, which includes “misinformation, disinformation, copyright infringement — what does this mean for society?”
- The other is output accuracy, which considers, “Is it actually going to do the job it’s supposed to do?”
Recent research from KPMG adds to those findings. Consumers’ top two concerns with AI services are that they won’t be able to interact with a human and the security of personal data, according to Jeff Mango, managing director of advisory customer solutions at KPMG.
“Why are these people seeing the word AI and retracting their sale or worrying about going forward with their sale?” Mango said. “Because both of those genuinely speak to risks they perceive. They perceive, ‘I’m probably not going to get the help I need because I can’t speak with a human, and I believe I need to speak with a human,’ or ‘I believe that my personal information isn’t secure.’”
But the AI label can also be a turnoff for consumers for a simpler reason: perceived complexity.
Consumers are less likely to buy something they view as complicated, said Bruce Temkin, chief humanity catalyst at temkinsight.
“The general public views AI as being complicated, so attaching a generic AI label without any further explanation would likely lead many people to think that the item on the market is complex and hard to understand or use,” Temkin said via email. “People will pay a premium for something they perceive as being easier to use, and the opposite is true; they’ll pay less for something they believe is harder.”
Products like an AI-powered car might be considered risky not only due to the higher price point, but also because they might appear harder to operate, Temkin said.
How should companies build trust?
Experts agree that the term “AI” is overused and, in some cases, has lost all meaning.
“Companies are using AI everywhere,” Cicek said, even when AI technology isn’t present.
For companies that want to build trust with consumers, accuracy and transparency are paramount.
“First and foremost, stop throwing around the term ‘AI’ like it’s marketing nirvana,” Temkin said. “Not only can it increase risk, but it’s being so overused that it adds little value for explaining the value of your offerings. If you think AI is a differentiator, try to describe that feature more explicitly, like ‘AI-powered safety brakes.’”
At the bare minimum, CX leaders need to make sure they’re following rules and regulations, Chee-Read said. She encourages companies to develop AI governance plans and to train employees on how to responsibly use AI. On a more basic CX level, leaders need to make sure that the experience they’re creating with AI is consistent with their brand and that it gives value to consumers.
CX leaders can also identify how AI can solve a need.
“People don’t want to buy ‘AI,’ but they’re probably willing to pay more if you can create more value for them using AI,” Temkin said. “So the strategy stays the same as always: focus on value first, and then determine the messaging that brings that value to life.”
If there is no value — or no clear value — consumers become distrustful.
“If I go to the regular average grocery store and the aisle is powered by AI, I don’t know what that means,” Mango said. “I don’t know why that’s going to help me. I’m just lost, and so therefore I become very distrusting.”
Brands can also ease concerns through transferable trust, Mango said. If a brand has a good reputation with consumers, that reputation is likely to transfer to its use of AI.
Building this trust should be integral to a company’s AI approach; failure to do so can harm not only customer relationships, but a company’s bottom line, too.
“If trust increases, purchasing intention increases, sales increase,” Cicek said.