Martina Banyay

What are the risks of AI?

Updated: Feb 27, 2019


Let’s start with the elephant in the room: Superintelligence.


What happens when we reach the Singularity, when we get machines that are smarter than us? And not just somewhat smarter, but so smart that it's as if they were a totally different breed.

Like comparing a human to an ant.


Humans may have nothing against ants, but if ants get in our way we simply get rid of them. Say, for example, that you are building a road and there is an anthill in the way. What would you do? I think we all know: you would not stop the project or try to find a way to build the road around the anthill. Instead you would wipe it away and then carry on with whatever you were doing.


What if a superintelligence did the same to us? A superintelligence doesn't have to dislike humans in order to eradicate us. So, if we create systems that are superintelligent, we'd better make sure they serve us and not the other way around.


The experts still debate if, how, and when we could have superintelligence. Some say it's only 30 years away, while others claim it may take hundreds of years, and others still think it will never happen. But most agree that the possibility of AI one day reaching superintelligence cannot be ruled out. And because of that, we need to start preparing already now, to make sure we create benevolent AI that will help us instead of destroying us.


But even though the discussion around superintelligence is engaging, a bit scary, and important, there are other risks associated with AI that are in fact much more pressing to deal with right now. At least for the broader public. Maybe the Terminator scenario has gotten a little too much airtime.


One widely used training set was shown to be 75% male and 80% white, making the AI start to behave in a prejudiced way, just like a biased person would.

First of all, we have the risk of bias. Today's AI developers are quite a homogenous group, consisting mainly of highly educated men. These guys decide what data should be used in training and how to construct the algorithms. AI is already used to make millions of decisions every minute, which is why the question of bias is so important. If we want systems that can help us reach truthful conclusions, we have to feed them a diversity of data. Otherwise we will get voice assistants that are better at recognizing male voices than female ones, minorities being screened out in recruitment processes, and creditworthy people who don't get loans just because of their ZIP code. Or a chatbot that turns into a full-blooded racist in just a couple of hours, like Microsoft's Tay bot did on Twitter back in 2016.


If we don't deal with the problem of bias, we will create intelligent systems that learn and amplify our negative preconceptions. If you want an AI system to make important decisions for you, make sure you understand how the system has been trained and what it is basing its conclusions on.


There are some steps to take in order to prevent biases from propagating. Bias can be introduced into AI through the datasets used for training and through the choice of algorithms used to build the models. To begin with, one has to understand that AI can be biased (most people don't), and then one has to ensure high-quality, representative data when training the models. And finally, one should be transparent about what data and algorithms have been used, so that others get the chance to question the outputs and provide feedback that can be used to address issues appropriately.
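To make the "quality data" step a bit more concrete, here is a minimal sketch of how one might audit a training set for representation bias before using it. It assumes a pandas DataFrame with hypothetical "gender" and "ethnicity" columns; the column names, threshold, and toy data are illustrative, not a prescribed method.

```python
# A minimal sketch of auditing a training set for representation bias.
# Assumes a pandas DataFrame with hypothetical "gender" and "ethnicity" columns.
import pandas as pd

def audit_representation(df: pd.DataFrame, column: str, threshold: float = 0.75) -> None:
    """Print the share of each group in `column` and flag dominant groups."""
    shares = df[column].value_counts(normalize=True)
    for group, share in shares.items():
        flag = "  <-- over-represented" if share >= threshold else ""
        print(f"{column}: {group}: {share:.0%}{flag}")

# Toy dataset mirroring the 75% male / 80% white figures quoted above.
data = pd.DataFrame({
    "gender": ["male"] * 75 + ["female"] * 25,
    "ethnicity": ["white"] * 80 + ["other"] * 20,
})
audit_representation(data, "gender")
audit_representation(data, "ethnicity")
```

Publishing this kind of breakdown alongside a model is also one simple way to meet the transparency step: others can see what the model was trained on and question the outputs accordingly.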


Another apparent risk is that the capabilities and applications of AI are developing so quickly that it's almost impossible to keep up. Policymakers don't understand well enough the implications of the technology they are supposed to regulate. AI is extremely powerful, and we need to start discussions around how to make the best use of it. It should not only be the AI researchers, data scientists, and computer engineers who understand the technology and its potential. We need to get people from various backgrounds involved in the discussion - from politics, ethics, philosophy, law, and the humanities - to make sure we use the technology in the best possible way.


MIT recently announced a $1 billion investment in a new AI college, with the aim to "educate the bilinguals of the future". The students will be taught biology, chemistry, politics, history, and linguistics, but they will also be skilled in the techniques of modern computing that can be applied to their different fields of study. The intent is to break up the silos that academic institutions tend to organize themselves into, and to have computing baked into the curriculum rather than stapled on. This is an excellent way of trying to ensure that we as a society understand the technology, rather than just getting hit by it. I hope more universities will look into creating something similar. The first batch of students at the MIT college starts in the autumn of 2019, so things are moving fast.


And finally, you actually take a very big risk if you do nothing at all.


AI is maybe the most powerful invention in all human history, and if you don’t get on the train you risk being hopelessly left behind.


AI will not replace companies per se. But companies that use AI will outcompete the ones who don’t.


If you want to know more about AI and can spare 5 minutes a day, you should watch this 7-day series I've created on what AI is, how it can be used, and how to get started.

Let's Talk.

Want to talk? Get in touch via e-mail or LinkedIn

martina[at]banyayconsulting.se

