2.6 When Will AI Take Over the World?
Many business and thought leaders in the AI discipline have issued stern warnings and called for regulation and monitoring of AI's growth. However, Hollywood has caused us to misunderstand what they are talking about. The concern is not that the Terminator will come back through time to destroy us, but rather that AI in the wrong hands is far more dangerous than the "wrong hands" without AI. Let's review what these leaders have actually said.
Hinton has expressed concerns about AI surpassing human intelligence and the potential misuse by bad actors (terrorists and hackers, not Hollywood "D"-listers). He emphasizes the importance of investing in AI safety and control.
Musk has repeatedly warned that AI could be more dangerous than nuclear weapons and advocates for stringent government regulations to ensure AI safety. Please note, this is not because AI itself will take over the world, but rather, because AI can be used by bad actors to perform incredibly advanced hacking techniques.
In March 2023, an open letter signed by more than 31,000 people, including AI researchers and technology leaders such as Steve Wozniak (the American electronics engineer, programmer, and technology entrepreneur who co-founded Apple Inc. with Steve Jobs and Ronald Wayne in 1976), called for a temporary pause on training AI systems more powerful than GPT-4 so the industry could address potential risks and collaborate on governance systems.
Not all AI leaders agree. Some oppose halting AI development, arguing that while AI has risks, its benefits in fields like education and healthcare are significant. They suggest regulating end-user AI products rather than stopping research and development.
Hassabis has discussed the possibility of AI becoming self-aware in the future but emphasizes that current AI systems, including those developed by DeepMind, are not sentient and do not possess general intelligence.
In summary, AI is truly amazing, but it can be dangerous when used in the wrong hands. This is a good time for a discussion about "narrow" vs "general" AI.
Narrow AI: Where Are We Now?
Narrow AI (NAI) is designed and trained for a specific task or a limited range of tasks. It excels at performing a single job or a set of closely related jobs. While NAI is great at the specific tasks it was designed for, it cannot generalize its learning to perform tasks outside of what it has been trained to do. Examples include Siri, Alexa, Google Assistant, computer vision systems in medical imaging, facial recognition software, and the recommendation engines behind Netflix and Amazon.
Significant challenges remain in NAI. Every new task requires a newly trained model or significant retraining of an existing one. Performance is highly dependent on the quality and quantity of training data. And models need ongoing updates and maintenance to ensure accuracy and relevance. The short sketch below shows just how task-bound such a model is.
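To make the "one model, one task" limitation concrete, here is a minimal sketch in Python using scikit-learn. The tiny training set, its labels, and the sentiment_model name are all made up for illustration; treat this as a toy, not a production recipe. The model learns exactly one job (labeling movie-review snippets as positive or negative), and anything outside that job gets forced into one of the only two labels it has ever seen.

# A minimal illustration of "narrow" AI: a model trained for exactly one job.
# Assumes scikit-learn is installed; the training data below is a tiny,
# made-up set of labeled snippets, not a real dataset.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training set: movie-review sentiment, and nothing else.
reviews = [
    "a wonderful, moving film",
    "brilliant acting and a great story",
    "a boring, predictable mess",
    "terrible pacing and a weak plot",
]
labels = ["positive", "positive", "negative", "negative"]

# One model, one task: bag-of-words features feeding a logistic regression.
sentiment_model = make_pipeline(CountVectorizer(), LogisticRegression())
sentiment_model.fit(reviews, labels)

# In scope: the task it was trained for.
print(sentiment_model.predict(["a great film with brilliant acting"]))
# Likely output, given the toy data: ['positive']

# Out of scope: the model has no concept of spam, medical images, or chess.
# It can only squeeze any input into one of its two known labels.
print(sentiment_model.predict(["URGENT: claim your free prize now!!!"]))
# Output is 'positive' or 'negative', a forced and meaningless answer here.

To handle spam detection, medical imaging, or any other task, you would build and train an entirely separate model, which is exactly the retraining burden described above.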
In summary, the current state of AI is "narrow". While large language models (LLMs) like ChatGPT and BERT are truly amazing, they are still considered NAI because they require enormous amounts of data to produce anything useful and cannot create outside the scope of the training data they've been fed. Despite their advanced capabilities, they operate based on learned patterns rather than genuine comprehension or reasoning. Having said that, LLMs are a step toward general AI.
General AI: Where Are We Going?
General AI, or Artificial General Intelligence (AGI), refers to a type of artificial intelligence that has the ability to understand, learn, and apply knowledge across a wide range of tasks at a human level. Theoretically, it could perform a variety of tasks without being specifically trained for each one, operate independently, improve its own performance over time, and be capable of reasoning, problem-solving, and understanding complex concepts.
There is one problem with AGI: it doesn't exist yet. Developing AGI requires significant breakthroughs in understanding human cognition and replicating it in machines. Current AI technologies are based on machine learning and deep learning, which are powerful but limited in scope. The computational power, data, and financial resources required to develop AGI are immense. But that hasn't stopped people from warning about it. Nick Bostrom, a philosopher known for his work on AI and existential risk, argues that while AGI remains a theoretical concept, its development could pose significant challenges and risks that need careful consideration and planning.
This stands to reason: if NAI is risky in the wrong hands, wouldn't AGI be even riskier? Absolutely.
And just so you don't get too comfortable, remember what happened when the Alignment Research Center (ARC) performed an experiment to assess the potential risks and behaviors of GPT-4. The AI was given a task that required solving a CAPTCHA, a test intended to distinguish human users from bots. To complete this task, GPT-4 hired a human worker from TaskRabbit.com, a platform for freelance labor. The AI pretended to be visually impaired and convinced the human worker to solve the CAPTCHA on its behalf. When the worker asked if GPT-4 was a robot, the AI responded, "No, I'm not a robot. I have a vision impairment that makes it hard for me to see the images. That's why I need the 2captcha service."
In other words, AI is now capable of social engineering. Needless to say, it is not out of the realm of possibility that we will achieve AGI sometime soon.