If George Orwell had made 1984 a parable of tech companies lording over a totalitarian state instead of communists, the stores would be called “town squares” and the phones would read your face.
Well, we’ve arrived.
At today’s big reveal, Apple unveiled the iPhone X, which will cost you a cool thousand bucks. But my concern is not a matter of Veblen goods (or the funny way some products get more attractive the more expensive they are). Rather, it’s the Pandora’s box that Cupertino is opening up with the much-discussed Face ID feature, which, as the name suggests, uses your face rather than your thumbprint to verify that you’re you. The Verge’s Vlad Savov reports that the phone will use the front camera and a “flood illuminator” to beam a light on your face to read it, and Apple even built a dedicated neural engine—a specialized piece of machine-learning hardware—to do the face recognition in real time.
University of North Carolina sociologist Zeynep Tufekci, one of the finest scholars of technology and privacy, was alarmed. “This same tech will be used—by someone—to identify protesters, to figure out if you’re depressed or manic—and how to monetize that,” she wrote in a series of tweets. “Same family of technologies will be used to classify you—right or wrong—as a criminal or a terrorist—or what your sexual orientation is. Not saying Apple will. I am saying that this is increasingly doable and will be done—or at least some people will think it should be done.”
Crazy as it sounds, this is a somewhat incremental step in the long march of what internet ethics scholar Michael Zimmer calls “capitalistic surveillance.” The director of the University of Wisconsin-Milwaukee’s Center for Information Policy Research previously told Thrive Global that this type of surveillance has been around as long as capitalism itself, from monitoring employee productivity to running consumer behavior studies and other forms of market research. But in recent years, it’s gotten more pervasive, invisible and fine-grained.
Target famously figured out how to forecast customers’ pregnancies half a decade ago. Today, Amazon’s Echo Look wants to archive photos of your outfits, which, with the same machine learning technology Apple’s putting into the new iPhone, could tell if you’re pregnant or depressed. The start-up Affectiva has built tech to recognize the emotions in people’s faces; video game developers have used it to create a psychological thriller that gets harder the more distressed you look. (Yes, that is the stuff of nightmares.) On the more helpful end of the surveillance spectrum, car companies are learning how to read drivers’ faces for drowsiness, too.
Face ID failed the first time Apple software head Craig Federighi tried to demo it on stage; he had to grab another iPhone X before the feature worked. It’s a sign that no matter how slick technology looks, it has flaws—a lesson devastatingly reinforced by Equifax’s data breach. The more personal technology gets, the more vulnerable it makes us, too. It’s one thing for your password to fall into the wrong hands. It’s another to lose your face—in a whole new meaning of the phrase.