An Art Piece That Explores Modern Physiognomy Through an AI That Classifies Dangerous Faces


It’s unlikely that artificial intelligence will ever harm you directly, but that doesn’t mean AI isn’t dangerous. Like any other tool, AI in the wrong hands can become a means for humans to harm one another. The ethical ramifications are enormous, and we’ve already seen real-world violations in the form of AI profiling systems. To explore those ethical concerns, artist Marta Revuelta constructed AI Facial Profiling, Levels of Paranoia, an installation that classifies people as dangerous based solely on their appearance.

Revuelta was inspired by Xiaolin Wu and Xi Zhang of Shanghai Jiao Tong University, who claimed in a 2016 research paper that they could use artificial intelligence to determine a person’s criminality from nothing more than a photo of their face. It’s a modern form of physiognomy, the practice of assessing an individual’s character, intelligence, or personality based on their physical appearance. Physiognomy is wholly without scientific merit and was predominantly used as a way to justify racial stereotyping.

Physiognomy, and its modern artificial intelligence-based incarnations, can at best find correlations, never causes. And that’s exactly what Revuelta’s interactive art piece does. An operator scans a person’s face with a “weapon-camera,” and the image is fed into the machine, where a convolutional neural network analyzes it against a known set of photos and videos.
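Revuelta hasn’t published the model itself, but the general shape of such a face-to-risk-score classifier is easy to imagine. The PyTorch sketch below is purely illustrative: the architecture, the 64×64 input size, and the single “dangerousness” output are assumptions for demonstration, not the installation’s actual network.

```python
# A minimal, hypothetical sketch of the kind of classifier described above.
# Nothing here is Revuelta's actual code: the architecture, input size, and
# the single "dangerousness" score are all illustrative assumptions.
import torch
import torch.nn as nn

class ToyFaceProfiler(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # RGB face crop in
            nn.ReLU(),
            nn.MaxPool2d(2),  # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 1)

    def forward(self, x):
        x = self.features(x)          # extract convolutional features
        x = x.flatten(1)              # flatten to a feature vector
        return torch.sigmoid(self.classifier(x))  # "risk" score in [0, 1]

# A random 64x64 "face" yields a score that looks authoritative but means
# nothing at all, which is exactly the property the art piece satirizes.
score = ToyFaceProfiler()(torch.rand(1, 3, 64, 64))
print(f"dangerousness: {score.item():.2f}")
```

The unsettling part is how little machinery it takes: a few convolutional layers can turn any face into an official-looking number, regardless of whether that number means anything.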

Through that analysis, the system determines two traits: how skilled the individual is with a firearm, and how likely they are to be dangerous. A classification card declaring the individual’s risk level is then printed and stamped. The system’s conclusions are, of course, completely arbitrary and almost certainly erroneous, but that’s the point the art piece is designed to make. Artificial intelligence isn’t magic, and even the most sophisticated neural networks can come to dangerous conclusions if we allow them to. What’s important is that AI developers, and the organizations that use the systems they create, understand that and avoid these scientifically unsupported ethical pitfalls.

Cameron Coward
Writer for Hackster News. Proud husband and dog dad. Maker and serial hobbyist. Check out my YouTube channel: Serial Hobbyism