Published in AI

AI users suffer from a placebo effect

09 October 2023


If we tell you it is good, it will be wonderful

Humans will think an AI is good or bad depending on what a person in authority tells them about it.

In a study by Humboldt University in Berlin, participants about to interact with a mental health chatbot were told either that the bot was caring, that it was manipulative, or that it was neither and had no motive.

After using the chatbot, which was based on OpenAI's generative AI model GPT-3, most people primed to believe the AI was caring said it was. Participants who'd been told the AI had no motives agreed it had none. Yet they were all interacting with the same chatbot.

Only 24 per cent of the participants who were told the AI was trying to manipulate them into buying its service said they perceived it as malicious.

Analysing the words in conversations people had with the chatbot, the researchers found that those who were told the AI was caring had increasingly positive conversations with it, whereas the interactions became more negative among people who'd been told it was trying to manipulate them.

Researcher Thomas Kosch said the placebo effect will likely be a "big challenge in the future". For example, someone might be more careless when they think an AI is helping them drive a car, he says. His own work also shows that people take more risks when they think they are supported by an AI.

We are surprised that the researchers did not ask Apple, which has been using the placebo effect to flog its products for decades.


Last modified on 09 October 2023