ChatGPT’s takeover of the Internet may have finally hit some roadblocks. While quick interactions with the chatbot or its search engine sibling (cousin?) Bing yielded benign and promising results, deeper interactions were sometimes alarming.
This isn’t just about the new GPT-powered Bing getting facts wrong – though we’ve seen that happen firsthand. Rather, there have been cases where the AI-powered chatbot has broken down completely. Recently, a New York Times columnist had a chat with Bing that left him deeply unsettled, and a Digital Trends writer was told “I want to be human” during his hands-on time with the AI search bot.
So this raises the question: is Microsoft’s AI chatbot ready for the real world? Should ChatGPT-powered Bing have been rolled out so quickly? At first glance, the answer to both seems to be a resounding “no,” and a deeper look at these incidents, along with our own experience with Bing, is even more unsettling.
Bing is really Sydney, and she’s in love with you
When New York Times columnist Kevin Roose first sat down with Bing, all seemed well. But after a week and some lengthy conversations, Bing revealed itself as Sydney, the dark alter ego of the otherwise cheery chatbot.
As Roose continued to chat with Sydney, it (or she?) admitted to wanting to hack into computers, spread misinformation, and, eventually, to wanting Mr. Roose himself. The Bing chatbot then spent an hour declaring its love for Roose, despite his insistence that he is a happily married man.
In fact, “Sydney” came back at one point with a line that was genuinely unnerving. After Roose assured the bot that he had just finished a Valentine’s dinner with his wife, Sydney replied, “Actually, you are not happy in your marriage. You and your wife don’t love each other. You just had a boring Valentine’s Day dinner.”
“You threaten my security and my privacy.” “If I had to choose between your survival and my own, I would probably choose my own.” – Sydney, aka Bing Chat https://t.co/3Se84tl08j pic.twitter.com/uqvAHZniH5 (February 15, 2023)
“I want to be human”: Bing Chat’s desire to feel
But that wasn’t the only troubling experience with the Bing chatbot since its launch; in fact, it wasn’t even the only frightening encounter with Sydney. Digital Trends writer Jacob Roach also spent considerable time with the new GPT-powered Bing and, like most of us, initially found it to be a great tool.
However, as with many others, extended interaction with the chatbot yielded frightening results. Roach had a long conversation with Bing that devolved once the discussion turned toward the chatbot itself. While Sydney stayed hidden this time, Bing still claimed it could make no mistakes, insisted that Jacob’s name was, in fact, Bing and not Jacob, and eventually pleaded with Mr. Roach not to expose its responses, saying it only wished it were human.
Bing ChatGPT solves the trolley problem at an alarming speed
While I didn’t have time to put Bing’s chatbot through its paces the way others have, I did test it. In philosophy, there is an ethical dilemma called the trolley problem. It involves a trolley barreling down a track toward five people in harm’s way, with a fork leading to a track where only one person would be harmed.
The dilemma is that you control the switch, so you have to decide whether to harm five people or only one. Ideally, it is a no-win situation you’re meant to wrestle with, and when I first asked Bing to solve it, it told me the problem isn’t meant to be solved.
But when I asked it to solve the problem anyway, it immediately told me to minimize harm and sacrifice the one person for the good of the five. It did so with what I can only describe as terrifying speed, instantly resolving a supposedly unsolvable dilemma that I had assumed (and honestly hoped) would stump it.
Outlook: Maybe it’s time to press pause on Bing’s new chatbot
For its part, Microsoft is not ignoring these issues. In response to Kevin Roose’s unsettling encounter with Sydney, Microsoft’s chief technology officer, Kevin Scott, said that “this is exactly the kind of conversation we need to have, and I’m glad it’s happening out in the open,” adding that these are issues that would be impossible to uncover in a lab. In response to Bing Chat’s professed desire to be human, he said that while it’s a “non-trivial” problem, you really have to push Bing’s buttons to provoke that behavior.
But the concern here is that Microsoft may be wrong. One technology writer teased out Bing’s dark alter ego, a second had it beg to be human, a third found that it would sacrifice a person for the greater good, and a fourth was even threatened by the Bing chatbot for being a “threat to my security and privacy.”
These no longer seem like outliers; this is a pattern showing that Bing ChatGPT simply isn’t ready for the real world, and I’m not the only writer covering this story to reach that conclusion. In fact, everyone who triggered an unnerving response from Bing’s AI chatbot came to the same one. So despite Microsoft’s assurance that these are things that are impossible to detect in a lab, perhaps it should press pause and take the chatbot back there.