Human-level Intelligence and Reasoning Are the Future of AI

The brightest minds in the AI community believe that AGI is possible and will someday be achieved.

The path to human-level AI, or artificial general intelligence (AGI), is the quest to build a true thinking machine, and it is, at least for me, the most exciting topic in AI.

 

What is Artificial General Intelligence?

AGI is typically considered to be more or less synonymous with the terms human-level AI or strong AI. You’ve likely seen several examples of AGI, but they have all been in the realm of science fiction. HAL from 2001: A Space Odyssey, the Enterprise’s main computer (or Mr. Data) from Star Trek, C-3PO from Star Wars and Agent Smith from The Matrix are all examples of AGI. Each of these fictional systems would be capable of passing the Turing Test; in other words, they could carry on a conversation well enough to be indistinguishable from a human being.

AGI has really been the holy grail of the field right from the beginning, when Alan Turing published his famous paper “Computing Machinery and Intelligence,” which proposed the Turing Test: a test that would deem a computer intelligent if it could carry on a conversation in a way that made it indistinguishable from a person.

 

The Future of AGI, According to the Brightest Minds in AI

The people I interview in my book, Architects of Intelligence, are the brightest minds in the Artificial Intelligence community. Some of them have made seminal contributions that directly underlie the transformations we see all around us; others have founded companies that are pushing the frontiers of AI, robotics and machine learning.

The path to AGI is a topic I discuss in every conversation, and I think it is one of the most fascinating parts of the book. I also asked everyone for a prediction of when AGI might be achieved, and there was a huge range of answers. Ray Kurzweil thinks it will happen in 2029, or just 10 years from now; this is a very aggressive prediction. Rodney Brooks believes it will take 180 years; the others mostly fall somewhere in between.

Everyone I talked to believes AGI is possible and will someday be achieved.

There is a great deal of disagreement about the particular approaches that will someday get us to AGI. People in the deep learning camp believe it will be neural networks all the way. Others think that more traditional approaches to AI, such as symbolic reasoning, which have lately been pushed aside by the dramatic progress in deep learning, will have to be brought back into the mix in order for real progress to occur.

The upshot is that there is really no easy way to summarize the answer to this fascinating question. The conversations in Architects of Intelligence provide a wealth of “insider” information, but they are highly varied, with many sharply conflicting opinions and predictions.

 

Originally published on Quora.