Last time we looked at the “Pretty Please” etiquette-teaching feature for the Google Assistant, noting that the smart speaker simultaneously (and paradoxically) invites you to “Make Google do it.” If kids are treating their Assistants and Alexas like inhuman tools to be abused, might they not start treating people the same way, including their parents?
As the tech/human distinction slips, kids boss their parents around in the same ways their parents boss around their technology. And as technology increasingly and deliberately blurs the lines of sociability, it pushes into realms far more serious than basic manners. As people themselves become human plug-ins for “sociable services,” the way we treat machines boomerangs to define the way we treat each other.
Consider the following anecdotes revealing how Uber drivers are sometimes treated by passengers:
An enraged Bronx woman threatens to falsely accuse an Uber driver of assault after he tells her he doesn’t have a charger for her phone.
A man berates his driver for not having picked up a rideshare before him, an apparently faster option according to his app. The driver explains that he receives pickups in the order they come in. The man insists that the app is right and the driver is wrong, and gives him a one-star rating.
An inebriated executive assaults his Uber driver after the driver tells the executive his directions are incomprehensible. The driver captures the attack on his dash cam. When the footage surfaces online, the executive sues the driver for $5 million for invasion of privacy.
In each of these cases, passengers’ expectations reduce the human driver to a mere extension of a digital social service. The rage these “users” exhibit seems more akin to striking a device on the fritz than to encountering a fellow human.
Now consider how these technology-mediated (anti)social interactions change Uber drivers’ own behavior and self-concept:
“It gets to a point where the app sort of takes over your motor functions in a way…It becomes almost like a hypnotic experience,” Herb Coakley, a longtime driver explains. “You can talk to drivers and you’ll hear them say things like, ‘I just drove a bunch of Uber pools for two hours. I probably picked up 30-40 people and I have no idea where I went.’ In that state, they are literally just listening to the sounds [of the drivers’ apps]. Stopping when they say stop, picking up when they say pick up, turning when they say turn. You get into a rhythm of that, and you begin to feel almost like an android.”
Half of Uber drivers quit in under a year.
Peter Kahn, a developmental psychologist studying human-robot interaction at the University of Washington, observes that new “sociable” technologies like virtual assistants and robots are “creating a new category of being,” a “personified non-animal, semi-conscious half-agent.” When children interact with Kahn’s robots, they say things like, “He’s like, he’s half living, half not.” Could kids soon be saying the same of the new class of android humans seamlessly slaved to the technology? My driver is half living, half not, and I yell at it/her just like I do at Alexa.
Which brings us to one final observation. Automation is increasingly replacing the people we would once have said “Please” and “Thank you” to. Someday, in the not-too-distant future, we may no longer have a human Uber-android to abuse. All that will be left to disrespect is the autonomous technology itself. And there’s evidence that we’ve already begun to exploit it.
Volvo considered leaving its prototype self-driving cars unmarked during their London trials so that human drivers wouldn’t take advantage of the robotic vehicles’ risk-averse programming. Erik Coelingh, Volvo’s senior technical leader, explains, “I’m pretty sure that people will challenge them if they are marked by doing really harsh braking…or putting themselves in the way.”
One analysis of “humans bullying mild-mannered autonomous cars” goes on to point out that “Google has already experienced similar problems firsthand. Some of its cars found it difficult to pull away from stop signs, because they were too timid: other cars simply whistled by while they sat stranded.”
Think about that. How many humans will be swerving to cut off, or even stepping out in front of, say, a fully automated Google delivery vehicle? We know it has to stop, right? We can make it stop, no? Google told us we can “make Google do it.” We’re human. It’s not. And until we have any reason to respect this machine, why would we?
Might this be why Google is teaching kids to be polite to their technology? Not just because the blurring lines between human and machine interactions may lead to less polite human interactions. That’s a critique as old as Marx. Rather, might we be teaching kids to be polite to technology because our inability to be civil to machines may be a serious limiting factor in realizing a fully automated tech utopia, one we could completely control to the point of abuse but must choose not to?
Maybe “Pretty Please” is just the beginning of a far greater project of enculturation than mere manners. Maybe Alexa and Google Assistant will soon be handing out star ratings to family members, along with accolades for being polite. If so, the technology of the future may well be saying “Thank you” to a generation educated by machines in the new etiquette of literally yielding to machines.
Sit with that. Pretty please.
To read the first part of this discussion, click here.
Ted Florea is Global Chief Strategy Officer at Kirshenbaum Bond Senecal + Partners (KBS), the Creative Communications and Technologies agency network headquartered in New York. KBS is committed to “creating human purpose in a tech-obsessed world” and works with innovative advertising, marketing and technology partners to build systems that put timeless human nature and timely human needs at their core. This piece was written in collaboration with KBS’s Strategy Group.