Tech magnate Elon Musk recently termed Artificial Intelligence (AI) “the greatest risk we face as a civilisation.” But Google’s AI chief, John Giannandrea, has a different point of view to offer on the issue.

According to Giannandrea, it isn’t the robots we have to worry about; it is the danger lurking inside the machine-learning algorithms used to make millions of decisions every minute.

Speaking before a recent Google conference on the relationship between humans and AI systems, Giannandrea said, “The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased.”

With AI spreading like wildfire, the problem of bias in machine learning is likely to become more significant as the technology enters critical spheres of life such as law and medicine, and as people without a deep technical understanding of it are given the task of deploying it.

Some AI experts have even warned that the problem is not merely lurking on the horizon: algorithmic bias is already pervasive in many industries, and almost no one is making an effort to identify it, let alone correct it.

Giannandrea believes that one way to counter this bias is to be transparent about the training data being used and to watch for the hidden biases in it; otherwise we will be building biased systems without even knowing it.
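Neither Giannandrea nor the conference materials spell out what such a check looks like, but a minimal sketch of the idea, using a toy dataset with a made-up sensitive attribute, is simply to compare label rates across groups before any model is trained:

```python
from collections import defaultdict

# Toy training records: (sensitive_group, label). In a real dataset these
# would come from whatever attribute the model might inadvertently proxy for.
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

# Count positives and totals per group.
totals = defaultdict(int)
positives = defaultdict(int)
for group, label in records:
    totals[group] += 1
    positives[group] += label

# Compare positive-label rates; a large gap is a red flag worth investigating
# before the data is ever used for training.
rates = {g: positives[g] / totals[g] for g in totals}
print("Positive-label rate by group:", rates)
print("Largest gap between groups:", max(rates.values()) - min(rates.values()))
```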

Giving an example, Giannandrea says, “If someone is trying to sell you a black box system for medical decision support, and you don’t know how it works or what data was used to train it, then I wouldn’t trust it.”

For the uninitiated, black box machine-learning models are already having a major impact on some people’s lives. For instance, Northpointe’s COMPAS predicts defendants’ likelihood of reoffending and is used by some judges to help decide whether an inmate should be granted parole. How COMPAS works, however, remains a closely guarded secret. A recent investigation by ProPublica found evidence suggesting that the system may be biased against minority defendants.

However, experts believe the answer is not always simply publishing details of the data or the algorithm in use. Many emerging machine-learning techniques are so complex and opaque that they defy careful examination. To address this, AI researchers are working on ways to make these systems give some approximation of their workings to engineers and end users alike.
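The report does not name any particular technique, but one common illustration of “an approximation of a system’s workings” is a global surrogate: fit a simple, readable model to the predictions of the opaque one and inspect that instead. The sketch below assumes scikit-learn and entirely synthetic data, so the feature names and model choices are illustrative only:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data standing in for a real decision-support dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# The opaque model whose behaviour we want to approximate.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Fit a shallow decision tree to the black box's *predictions*, not the true
# labels: the tree then serves as a human-readable approximation of the model.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

feature_names = [f"feature_{i}" for i in range(X.shape[1])]
print(export_text(surrogate, feature_names=feature_names))
```

The surrogate is only ever an approximation, which is exactly the trade-off researchers are grappling with: enough fidelity to be trustworthy, simple enough to be readable.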

It is important to highlight the bias creeping into AI as more tech giants build the technology into their products. Google is among several big firms pitching the AI capabilities of its cloud computing platform to all sorts of businesses. These cloud-based machine-learning systems are far easier to use than the underlying algorithms, but that very ease could also make it easier for bias to slip in. Hence, it is important to give less experienced scientists and engineers the knowledge and training to spot and remove bias from their training data.
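What that training might cover is left open here, but one textbook mitigation, often called reweighing in the fairness literature, is to weight training examples so that the sensitive attribute and the label look statistically independent. A minimal sketch, reusing the same kind of made-up group names and labels as above:

```python
from collections import Counter

# Toy training records: (sensitive_group, label). Names are illustrative.
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]
n = len(records)
group_counts = Counter(g for g, _ in records)
label_counts = Counter(l for _, l in records)
pair_counts = Counter(records)

# Weight each example by P(group) * P(label) / P(group, label), so that in the
# reweighted data the group and the label appear independent of each other.
weights = [
    (group_counts[g] / n) * (label_counts[l] / n) / (pair_counts[(g, l)] / n)
    for g, l in records
]
for (g, l), w in zip(records, weights):
    print(f"{g} label={l} weight={w:.2f}")
```

These weights can then be passed to any learner that accepts per-sample weights during training.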

Karrie Karahalios, a professor of computer science at the University of Illinois, also took the podium at the conference to show how tricky it is to spot bias in even the most commonplace algorithms. Using Facebook’s news feed as an example, Karahalios pointed out that users generally have no idea how Facebook filters the posts shown in their feed. That might not seem serious on the surface, but it neatly illustrates just how difficult it is to interrogate an algorithm.

Hence, one can safely say that killer robots are the least of our problems right now. The first and foremost task is to identify bias in training data and eliminate it at the source.

This development was first reported in MIT Technology Review.
