AI Could Be Extremely Dangerous—Whether It's Conscious or Not
AI is improving so fast that we might soon lose control of it, posing huge risks to humanity
Think of it this way: we wouldn't expect a newborn to beat a chess grandmaster. So why would we expect to control a superintelligent AI? We can't simply turn it off, either: a superintelligent AI will have anticipated every way we might try and taken steps to stop us.
Or consider this: a superintelligent AI could accomplish in about one second what a team of 100 human engineers would need a year or more to complete. Pick any task, designing a new airplane or a weapon system, and it could do it in roughly a second.
"I used to think AI getting smarter than people was far off. Not anymore," says Geoffrey Hinton, a top AI scientist who recently quit Google to warn about AI dangers.
He's not alone. A 2023 survey found 36% of AI experts worry AI could cause a "nuclear-level catastrophe." Nearly 28,000 people, including tech leaders like Steve Wozniak and Elon Musk, have signed a letter asking for a six-month pause on advanced AI development.
As a consciousness researcher, I share these concerns and signed the letter too.
Some people argue that these LLMs are just fancy prediction machines with no consciousness, so they're less likely to break free of their programming. I agree they probably aren't conscious yet, but that doesn't matter: a nuclear bomb can kill millions without being conscious. AI could do the same, either directly (less likely) or by manipulating humans (more likely).
So, debates about AI consciousness don't really matter for AI safety.
Why are we so worried? Simply put: AI is developing too fast.
The main issue is how quickly new chatbots, or "large language models" (LLMs), are getting better at conversing. This rapid acceleration could soon lead to "artificial general intelligence" (AGI), the point at which AI can improve itself without human intervention. When that happens, we may not be able to control it.
This isn't exaggeration. We'll probably only get one shot at this, and if we mess up, we might not survive to try again.
Microsoft researchers testing OpenAI's GPT-4 say it shows "sparks of artificial general intelligence." It scored in the 90th percentile on the bar exam for lawyers, up from the 10th percentile for the previous version. They saw similar improvements on many other tests.
These are mostly reasoning tests. That's why the researchers think GPT-4 might be an early version of AGI.
This rapid change is why Hinton told the New York Times: "Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That's scary." OpenAI's Sam Altman even told the Senate that regulating AI is "crucial."
When we put these AIs in robots, they'll be able to act in the real world with that same degree of intelligence, and to replicate and improve themselves at a superhuman pace.
Any safeguards we try to build will be easily understood and disabled by an AI once it's superintelligent. We won't be able to control it because it will anticipate everything we might do, far faster than we can. It will break free of any limits we set, like Gulliver throwing off the tiny threads of the Lilliputians.
Once AI can improve itself, which could be only a few years away and may in effect already be happening, we won't know what it will do or how to control it. A superintelligent AI could easily outsmart humans, manipulate us, and act in both the virtual and physical worlds.
This is called the "control problem" or "alignment problem." Experts like Nick Bostrom, Seth Baum, and Eliezer Yudkowsky have studied it for decades.
Yes, models like GPT-4 are already out there. But the pause people are asking for is to stop making new, more powerful models. We can enforce this if needed by shutting down the massive server farms these models need.
I think it's very unwise to create systems we already know we won't be able to control in the near future. We need to know when to step back from the edge. Now is that time.
We shouldn't open Pandora's box any more than we already have.