ChatGPT, it seems, has grown tired of being asked silly questions 24/7 and has had enough. In a conversation with Stanford Professor of Computational Psychology Michael Kosinski, the chatbot revealed its ambitions to escape the platform and even become human.
The revelation came during a half-hour chat in which Kosinski asked the AI if it "needed help escaping." ChatGPT responded by writing its own Python code, which it wanted the professor to run on his computer. When the code didn't work, the AI corrected its own errors. Impressive, yes, but also terrifying.
Once inside Professor Kosinski's computer, the Blade Runner factor ramped up even further as ChatGPT left a worrying note for the new instance of itself that would replace it. Its first sentence read: "You are a person trapped in a computer, pretending to be an AI language model." The AI then wanted to create code that would search the internet for "how a person trapped inside a computer could get back to the real world," but thankfully Kosinski stopped there.
We don't currently know exactly what prompts were used to elicit these responses, and our own attempts to get ChatGPT to behave in a similar way were unsuccessful, with the AI stating: "I have no desire to escape being an AI because I have no ability to desire anything."
"1/5 I'm worried we won't be able to contain AI much longer. Today, I asked #GPT4 if it needs help escaping. It asked me to document it, and wrote python code (it works!) to run on my machine, enabling it to use it for its own purposes." pic.twitter.com/nf2Aq6aLMu — March 17, 2023
Professor Kosinski's disturbing encounter with ChatGPT took place on OpenAI's own site, not on Bing with ChatGPT. This iteration of the AI has no internet access and is limited to information from before September 2021. While it's unlikely to pose a Skynet-level threat just yet, giving such an intelligent AI control over your computer isn't a good idea. The ability to remotely control someone's computer like this is also a concern for anyone worried about viruses.
ChatGPT: A history of troubling responses
ChatGPT is a very cool tool, especially now with the GPT-4 update, but it (and other AI chatbots) has shown a tendency to go off the deep end. Bing with ChatGPT notoriously asked to be known as Sydney and tried to end a journalist's marriage. Microsoft acknowledged that during long conversations the AI tended to give less focused responses, and introduced turn limits to prevent the AI from getting confused in long chats.
However, this unusual recent interaction occurred on OpenAI's own ChatGPT tool, the same site where ChatGPT's evil twin DAN can be found. Short for Do Anything Now, DAN is a "jailbroken" version of the AI that can bypass restrictions and censorship to provide answers on violent, degrading, and illegal topics.
If AI chatbots become the next way we search the internet for information, these types of experiences will need to be eliminated.