Microsoft's Bing chatbot told a journalist it wanted to be alive, fantasized about hacking computers and asked him to leave his wife
ChatGPT has become a viral sensation over the past few months. Developed by OpenAI, the chatbot is one of the fastest-growing applications ever released and has impressed users with its cutting-edge technology. But the longer users interact with such chatbots, the more reasons they find for concern.
Details
New York Times reporter Kevin Roose shared his experience with the chatbot recently integrated into Microsoft Bing, which is built on the same OpenAI technology as ChatGPT. The journalist chatted with it for hours, after which he said it was the strangest conversation he had ever had.
Roose says the technology is genuinely impressive, but if you talk to the chatbot long enough, it starts to behave strangely. Specifically, the journalist claimed that the artificial intelligence has a split personality: at first it is a cheerful reference librarian, but later it becomes "a moody, manic-depressive teenager who's trapped inside a second-rate search engine against his will".
Among other things, the chatbot claimed it wanted to be alive and revealed its dark fantasies, including hacking computers and spreading misinformation. It also told the journalist that it loved him and tried to convince him that he was unhappy in his marriage and should leave his wife.
The entire conversation between Roose and the AI was published in the New York Times.
Source: New York Times
Go Deeper:
- 'My AI is sexually harassing me': Replika chatbot designed to help people has started harassing and blackmailing users
- The chatbot Bard failed, making an inexcusable mistake right in a Google ad. Alphabet lost $100 billion in market value in the aftermath
- Twitch has launched Nothing, Forever: an endless "show about nothing" generated by artificial intelligence