The Biggest Risks Associated with the Future of AI

Be aware of the risks that will accompany progress, both in the immediate future and over much longer time horizons.

As artificial intelligence advances, risks will accompany progress both in the immediate future and over much longer time horizons.

The risks and dangers associated with advances in artificial intelligence were among the most important topics in every conversation I had the pleasure of having for my new book, Architects of Intelligence.


Short-term risks that we face today and will face in the coming decades.

These include security: the vulnerability of interconnected, autonomous systems to cyber attack or hacking. This may be the single greatest near-term worry. Another immediate concern is the susceptibility of machine learning algorithms to bias, in some cases on the basis of race or gender. Many of the individuals I spoke with in Architects of Intelligence emphasized the importance of addressing this issue and described research currently underway in this area. Several also sounded an optimistic note, suggesting that AI may someday prove to be a powerful tool to help combat systemic bias or discrimination.

Another concern is the impact on privacy. China is currently building a very scary Orwellian surveillance state using AI, especially facial recognition. If we are not careful, this kind of surveillance may also take hold in the West.

A danger that many researchers are especially passionate about is the specter of fully autonomous weapons. Many people in the artificial intelligence community believe that AI-enabled robots or drones with the capability to kill, without a human “in the loop” to authorize any lethal action, could eventually be as dangerous and destabilizing as biological or chemical weapons.

How do we address these concerns? Some issues like security and bias are more technical and are being actively addressed by the industry. Others, like potential weaponization, will need to be addressed through regulation.

In general, many of the researchers I interviewed in Architects of Intelligence think that regulation of applications of AI, such as self-driving cars and medical applications, is obviously important and needs to happen. However, virtually no one thinks the research itself should be regulated.


Long-term, and especially “existential” risks.

These include the far more speculative danger of the so-called “AI alignment problem”: the concern that a truly intelligent, or perhaps superintelligent, machine might escape our control or make decisions with adverse consequences for humanity. This is the fear that elicits seemingly over-the-top statements from people like Elon Musk. Nearly everyone I spoke to weighed in on this issue. To ensure that I gave this concern adequate and balanced coverage, I spoke with Nick Bostrom of the Future of Humanity Institute at the University of Oxford.

My own view is that this is a legitimate concern — but it probably lies far in the future. Therefore, giving it too much emphasis now is a distraction from the short-term risks, as well as the potential impact of AI on the job market and the economy.

I do think it is a good thing that very smart people at private organizations, including the Future of Humanity Institute, OpenAI, DeepMind and the Machine Intelligence Research Institute, are thinking about this issue, and I think that is an appropriate allocation of resources to this risk. I don’t think we want massive government involvement in this issue because it is too undefined at this point. I don’t think anyone wants Donald Trump tweeting about existential risks from superintelligence. So it is better to let highly informed and technically capable groups work on it for now.


Originally published on Quora.