Even as tech giants pour billions of dollars into what they herald as humanity’s new frontier, a recent study shows that tacking the “AI” label on products may actually drive people away.
A study published in the Journal of Hospitality Marketing & Management in June found that describing a product as using AI lowers a customer’s intention to buy it. Researchers sampled participants across various age groups and showed them the same products; the only difference was that one was described as “high tech” and the other as using AI, or artificial intelligence.
“We looked at vacuum cleaners, TVs, consumer services, health services,” said Dogan Gursoy, one of the study’s authors and the Taco Bell Distinguished Professor of hospitality business management at Washington State University, in an interview with CNN. “In every single case, the intention to buy or use the product or service was significantly lower whenever we mentioned AI in the product description.”
Despite AI’s rapid advancement in recent months, the study highlights consumers’ hesitance to incorporate AI into their daily lives – a marked divergence from the enthusiasm driving innovations in big tech.
The role of trust…
The study also examined how participants viewed products considered “low risk,” such as household appliances that use AI, and those considered “high risk,” such as self-driving cars, AI-powered investment decision-making services and medical diagnosis services.
While the percentage of people rejecting the items was greater in the high-risk group, non-buyers were the majority in both product groups.
There are two kinds of trust that the study says play a part in consumers’ less-than-rosy perception of products that describe themselves as “AI-powered.”
The first kind, cognitive trust, has to do with the higher standard people hold AI to: as a machine, they expect it to be free from human error. So when AI does slip up, that trust can quickly erode.
Take Google’s AI-generated search results overview tool, which summarizes search results for users and presents them at the top of the page. People were quick to criticize the company earlier this year for giving confusing and even blatantly false answers to users’ questions, pressuring Google to walk back some of the feature’s capabilities.
Gursoy says that limited knowledge and understanding of AI’s inner workings forces consumers to fall back on emotional trust and make their own subjective judgments about the technology.
“One of the reasons why people are not willing to use AI devices or technologies is fear of the unknown,” he said. “Before ChatGPT was introduced, not many people had any idea about AI, but AI has been running in the background for years and it’s nothing new.”
Even before the chatbot ChatGPT burst into public consciousness in 2022, artificial intelligence was already powering familiar digital services, from your phone’s autocorrect to Netflix’s algorithm for recommending movies.
And the way AI is portrayed in pop culture isn’t helping boost trust in the technology either. Gursoy added that Hollywood science fiction films casting robots as villains had a bigger impact on shaping public perception of AI than one might think.
“Way before people even heard about AI, those movies shaped people’s perception of what robots that run by AI can do to humanity,” he said.
…and a lack of transparency
Another part of the equation influencing customers is the perceived risk around AI – particularly how it handles users’ personal data.
Concerns about how companies manage customers’ data have tamped down excitement around tools meant to streamline the user experience at a time when the government is still trying to find its footing on regulating AI.
“People have worries about privacy. They don’t know what’s going on in the background, the algorithms, how they run, that raises some concern,” said Gursoy.
This lack of transparency, Gursoy warns, has the potential to sour customers’ perceptions of brands they may have already come to trust. For this reason, he cautions companies against slapping on the “AI” tag as a buzzword without explaining its capabilities.
“The most advisable thing for them to do is come up with the right messaging,” he said. “Rather than simply putting ‘AI-powered’ or ‘run by AI,’ telling people how this can help them will ease the consumer’s fears.”
Source: https://edition.cnn.com/2024/08/10/business/brands-avoid-term-customers/index.html