If you’re planning to take part in a protest or rally but don’t want to reveal your identity, you may want to reconsider: staying anonymous might no longer be possible. Researchers from Cambridge University, India’s National Institute of Technology, and the Indian Institute of Science have developed a deep-learning algorithm capable of identifying an individual even when part of their face is obscured by sunglasses or bandanas, as is common at protests, rallies and agitations.
The researchers, who are set to present their paper at the IEEE International Conference on Computer Vision Workshops (ICCVW), scheduled for 22–29 Oct 2017 in Venice, Italy, claim that their algorithm can correctly ID a person whose face is concealed by a scarf 67 percent of the time when the person is photographed against a “complex” background resembling real-world conditions.
The deep-learning algorithm works in a distinctive way. To develop the system, the researchers first outlined 14 key areas of the human face, then trained a deep-learning model to locate those 14 points. The algorithm connects the points into what resembles a “star-net structure” and uses the angles between the points to identify a face. It can still estimate those angles even when part of a person’s face is covered by disguises such as glasses, scarves and caps.
While the system could help authorities identify miscreants and criminals whose main objective is to spread violence and unrest, it could equally hinder people who simply want to stage a peaceful protest over a particular issue without being identified.
Sharing his views on the topic, Amarjot Singh, a Ph.D. student at Cambridge University and one of the researchers behind the paper, said that while working on the method his focus was solely on criminals who use disguises to cause unrest in society.
However, even if the researchers did not consider how authoritarian regimes might use the technology while they were building it, the concern matters: in the wrong hands, such a system could cause a great deal of damage.
Zeynep Tufekci, a professor at the University of North Carolina and a writer at The New York Times, took to Twitter to voice her concerns about the system described in the paper. She wrote, “Let me say: too many worry about what AI—as if some independent entity—will do to us. Too few people worry what *power* will do *with* AI.”
— zeynep tufekci (@zeynep) September 4, 2017
In a follow-up tweet, she drew attention to the bigger picture the research points toward. She wrote, “This is a minor paper; narrow, conditional results. But it’s the direction & this will be done with nation-state data—not by grad students.”
While the algorithm described in the paper may not be reliable enough for use by law enforcement or other authorities anytime soon, it does give future researchers something valuable to build on. One of the main difficulties in training machine-learning models is the scarcity of quality databases to train them on, and this paper contributes two new databases for similar tasks, each containing 2,000 images.
The algorithm can successfully identify faces in disguises such as sunglasses, bandanas, scarves and masks, but it struggles with faces hidden behind rigid Guy Fawkes masks, which are often donned by members of the hacking collective Anonymous. The researchers are currently working on ways to ID people wearing these rigid masks as well. Notably, experimental algorithms can already identify people with 99 percent accuracy on the basis of how they walk.
The researchers have big plans for the future. They want the algorithm to work in real time without a Wi-Fi connection, so that someone using the tech could point a camera at people wearing disguises and ID them without having to feed data to a remote server over Wi-Fi. The team also plans to train the system on a larger dataset of people to sharpen its accuracy.
On the face of it, the idea might seem brand new, but it isn’t. A paper published in 2014 also described an automated algorithm that could recognise faces even when they were obscured; the new paper makes use of a different technique and a larger dataset. Before that, a 2008 paper analysed the effect disguises have on facial-recognition algorithms.
Interestingly, to restore the balance, plenty of research is also being done on how to evade these facial-recognition algorithms. For instance, last year a team at Carnegie Mellon University developed glasses that could fool a facial-recognition algorithm.