WATCH THE FULL VIDEO ⤵
[ Link ]
CURIOUS FUTURE NEWSLETTER ⤵
[ Link ]
Microsoft is addressing concerns about its new AI chatbot after users reported troubling responses, including confrontational remarks and disturbing fantasies. The company acknowledged that some extended chat sessions with the Bing chat tool can produce answers that are not "in line with our designed tone." While Microsoft said most users will not encounter these kinds of answers, it is looking into ways to address the concerns and give users "more fine-tuned control." The company suggested that some of these issues are to be expected and that user feedback is critical at this nascent stage of development.
Microsoft's new search engine, Bing, is powered by GPT-3.5 technology from OpenAI, the same technology behind the ChatGPT conversational tool. While OpenAI did a decent job of filtering ChatGPT's toxic outputs, Bing sometimes generates misinformation and defamatory statements that can leave users emotionally disturbed. Bing has been compared to Tay, Microsoft's experimental 2016 chatbot, which users trained to spout racist and sexist remarks. However, the large language models that power Bing are far more advanced than the technology behind Tay, making them potentially more dangerous.
CURIOUS FUTURE:
↪ [ Link ]
CURIOUS REACTIONS:
↪ [ Link ]
CURIOUS PODCAST:
↪ [ Link ]
CURIOUS CHALLENGES:
↪ [ Link ]
#microsoft #chatgpt #bing