Roboethics: Can Code Be Humanized?

As we explore how robotics can support our daily lives, we have to ponder ethical considerations in order to stay committed to products that truly sustain us.

The Thrive Global Community welcomes voices from many spheres on our open platform. We publish pieces as written by outside contributors with a wide range of opinions, which don’t necessarily reflect our own. Community stories are not commissioned by our editorial team and must meet our guidelines prior to being published.

Anyone who used Microsoft Office in the late 1990s and early 2000s knows Clippy, the animated paper clip that “helped” users navigate the software. For example, if you opened MS Word and typed “Dear,” Clippy might interject, “I see you’re writing a letter. Would you like help with that?” But no one wanted Clippy’s help, and it was eventually removed from the MS Office suite.

So why did it come about? Because research at Stanford led Microsoft to conclude that if users could depend on human-like support, they wouldn’t yell at their screens and walk away from their computers in frustration. That conclusion has since been characterized as a “tragic misunderstanding,” and Clippy has been declared “one of the worst software design blunders in the annals of computing.” When it came to being human, Clippy failed.

Clippy is obviously small potatoes and a very early attempt at human-like interaction. But the question remains: how do you program a machine to act like a human? And how do we want programming to shape user behavior?

Already, certain applications disable themselves if they think we’re driving (Waze), operating system updates promise to limit our screen time, and forthcoming software updates will require kids to mind their Ps and Qs. As we move into driverless cars and voice-activated robotics, anticipating every possible scenario with human-like instincts becomes more important. It’s one thing to complete a simple task, as a vacuuming robot does, and quite another to design complex products with etiquette and ethics in mind.

In the effort to build driverless cars, safety is paramount. The car needs to detect a stoplight and know whether the light is red, green or yellow. If you’re driving your own car and approaching a light that’s turning yellow, you could make it through or you could stop; in a driverless car, programmers make that decision. The same is true for turning right on a red light: when is it safe, and where is it legal? Everything will need to be embedded in the code, down to the county level.
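To make that concrete, here is a minimal sketch of how such decisions might be encoded. Everything in it is hypothetical: the deceleration threshold is illustrative physics, not a real control policy, and the per-county legality table is made-up data standing in for the legal lookup the paragraph describes.

```python
# Hypothetical sketch: encoding a yellow-light decision and a right-on-red
# legality lookup. Thresholds and table entries are illustrative only.

RIGHT_ON_RED_LEGAL = {
    # county -> whether a right turn on red is permitted (made-up data)
    "Santa Clara, CA": True,
    "New York, NY": False,
}

def should_stop_on_yellow(distance_to_line_m: float, speed_mps: float,
                          comfortable_decel_mps2: float = 3.0) -> bool:
    """Stop if the car can brake comfortably before the line; otherwise proceed."""
    if speed_mps == 0:
        return True
    # Basic kinematics: distance needed to brake to a halt at constant deceleration.
    stopping_distance = speed_mps ** 2 / (2 * comfortable_decel_mps2)
    return stopping_distance <= distance_to_line_m

def may_turn_right_on_red(county: str, cross_traffic_clear: bool) -> bool:
    """Legality is looked up per county; safety still requires clear cross traffic."""
    return RIGHT_ON_RED_LEGAL.get(county, False) and cross_traffic_clear

print(should_stop_on_yellow(distance_to_line_m=50.0, speed_mps=15.0))   # True: room to brake
print(may_turn_right_on_red("New York, NY", cross_traffic_clear=True))  # False: illegal there
```

Even this toy version shows the point of the paragraph: the legality table is a legal question, the braking threshold is an engineering one, and the code has to hold both.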

Whoever thought engineers would need to study the legal code as it pertains to driving? Or that lawyers would help design cars? Okay, it’s not that simple, but there is certainly overlap, and we will see (and eventually experience) the consequences of that. To put it in perspective, let’s imagine a different scenario in which innovators chose speed over safety. In the case of Autopilot, a Tesla feature where the car drives itself for short periods of time, Tesla engineers could have programmed the cars to go slowly, upping safety. Or they could have programmed them to go fast, the better to get you where you need to be. Instead, they programmed the cars to follow the speed limit, minimizing Tesla’s risk of liability should something go awry.

Law trumps everything. Yet sometimes technology moves faster than the law. This is where ethics comes in, which brings science fiction writer Isaac Asimov’s Three Laws of Robotics to mind:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
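The three laws are, in effect, a strict priority ordering: a lower law only applies when no higher law is at stake. A minimal sketch of that precedence, using a purely hypothetical action model (real robots would have to predict these outcome flags, which is the genuinely hard part), might look like:

```python
# Hypothetical sketch: Asimov's Three Laws as checks in strict priority order.
# An "action" here is just a dict of illustrative outcome flags.

def permitted(action: dict) -> bool:
    """Return True if the action passes the Three Laws, checked highest-priority first."""
    # First Law: the action may not harm a human being.
    if action.get("harms_human"):
        return False
    # Second Law: orders must be obeyed; refusing is allowed only when obeying
    # would violate the First Law.
    if action.get("refuses_order") and not action.get("order_would_harm_human"):
        return False
    # Third Law: self-preservation, subordinate to the first two laws.
    if action.get("endangers_self") and not (action.get("ordered") or action.get("saves_human")):
        return False
    return True

# Endangering itself to save a human: the First Law outranks the Third.
print(permitted({"endangers_self": True, "saves_human": True}))  # True
# Refusing a lawful order with no First-Law justification: forbidden.
print(permitted({"refuses_order": True}))  # False
```

The ordering of the `if` checks is the whole design: each law is consulted only in terms the laws above it leave open, which is exactly the hierarchy Asimov specified.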

In making these rules, Asimov imagined robots would be human-serving androids, a scenario that applies in some cases and doesn’t in others. This has led to the creation of ethical codes for robots in Japan, the European Union and elsewhere. And it can lead us down interesting paths that are worth considering.

Let’s consider another kind of robot, one that’s programmed to bathe people who can no longer do it themselves. The robot may be fully functional, but what if you don’t want a bath? By Asimov’s second rule, the robot would have to obey the human; but if the human never bathed, wouldn’t the human come to harm through the robot’s inaction, engaging Asimov’s first rule? According to EU civil law, humans can refuse care from a robot, making the bathing conundrum a non-issue. But so far the US, and much of the world, lacks such rules, leaving these questions unanswered.

Except that Halo Top’s dystopian ice cream commercial, in which a robotic caretaker insists that a human eat ice cream she doesn’t want, gets to the heart of the matter.

On a serious note, we have a lot more work to do. One layer of learning unveils another; the integration of robotics and automated vehicles into society will continue to evolve. Like all innovation, it’s iterative. Let’s hope it doesn’t end up like Halo Top.

