Once, this seemed like a very distant possibility. But as artificial intelligence (AI) gains sophistication, laypeople and experts alike wonder if this could actually happen. Elon Musk, for example, has pointed to AI as one of the greatest threats to humankind.
Artificial general intelligence (AGI) is the term for AI with human-level intelligence across the board, rather than in a narrow set of tasks. So — could it actually happen? Here are the implications, and the evidence for and against its eventual existence.
What is AGI?
The AI we have now may seem very intelligent. And the innovations it has fueled are impressive. From voice assistants like Alexa and Siri to self-driving vehicles, the technology has already led to countless inventions that have changed the very way we work and lead our lives.
One of the greatest advancements in the field of AI is the neural network. These algorithms power much of modern machine learning: each time they process an example, they adjust their internal parameters, improving their responses over time.
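To make "improving over time" concrete, here is a minimal sketch of a single artificial neuron trained with the classic perceptron rule. It is illustrative only — real neural networks stack many such units — and the task (learning the logical AND function) is a toy assumption chosen for brevity.

```python
def train_neuron(examples, epochs=50, lr=0.5):
    """Train one neuron (perceptron rule) on (inputs, target) pairs."""
    w = [0.0, 0.0]  # weights, one per input
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in examples:
            # prediction: 1 if the weighted sum crosses the threshold, else 0
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - pred
            # nudge the weights in the direction that reduces the error
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Teach the neuron the logical AND function
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(data)
predict = lambda x1, x2: 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # prints [0, 0, 0, 1]
```

The neuron starts out wrong on every input, but after repeatedly seeing examples and adjusting its weights, it classifies all four cases correctly — the same feedback loop, scaled up enormously, underlies today's deep learning systems.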
But AI as it stands now is still considered weak AI. That doesn’t mean it’s unsophisticated — it simply means it has very specific applications and can perform certain tasks. There’s no risk of machines going rogue. But if AGI were possible, that could all change.
So, Is It Actually Possible?
Perspective #1: Yes, It’s Possible — in Fact, It’s Coming Sooner Rather than Later
Many experts, including computer scientists and professors, predict that AGI will arrive in the near future. Louis Rosenberg, Patrick Winston, Ray Kurzweil, and Jürgen Schmidhuber estimate the date of arrival at the mid-21st century, with the earliest prediction less than 10 years away.
There are several arguments these and other researchers make to support the claim. So far, we haven't seen a hard limit to what machines can learn and do: their capabilities have kept improving year after year, with no clear ceiling in sight.
Perspective #2: No, It’s Not Possible
But other experts disagree.
Georgia Institute of Technology’s Matthew O’Brien said, “We simply do not know how to make a general adaptable intelligence, and it’s unclear how much more progress is needed to get to that point.”
Meanwhile, Roman Yampolskiy of the University of Louisville argues that AI cannot be both fully autonomous and under human control. As it stands now, the technology is driven by humans, and there is no indication that we could allow it to operate completely independently, without first initiating its response.