While this may seem reasonable given that the AI is being adjusted and is prone to bouts of existential crisis, the news has raised concerns about privacy and corporate secrecy.
Employers including JP Morgan and Amazon have banned or restricted staff use of ChatGPT, which uses similar technology, amid concerns that sensitive information could be fed into the bot.
Microsoft said that Bing data is protected by stripping personal information from it, and that only certain employees can access the chats. It updated its privacy policy last week to say it can collect and review users' interactions with chatbots.
Amazon, Google and Apple attracted criticism several years ago when it emerged that contractors were reviewing voice recordings from the companies’ smart assistants, overhearing medical details or criminal behaviour. All three now allow users to opt out of sending audio recordings for review.
Bing’s human-like responses to questions mean that some users may enter private or intimate messages into the bot.
A spokesperson said: “To effectively respond to and monitor inappropriate behaviour, we employ both automated and manual reviews of prompts shared with Bing.
“Microsoft is committed to protecting user privacy, and data is protected through agreed industry best practices including pseudonymisation, encryption at rest, secured and approved data access management, and data retention procedures. In all cases, access to user data is limited to Microsoft employees with a verified business need only, and is not shared with any third parties.”
Many companies have restricted the use of ChatGPT, made by the Silicon Valley start-up OpenAI, or advised employees not to enter confidential information into it. OpenAI’s website says the company reviews conversations with the chatbot.