Ever since its publication, Yuval Harari’s Sapiens has gathered a cult following for its ability to answer humanity’s big questions with scientific explanations that feel tangible to a general audience. When Barack Obama and Mark Zuckerberg recommended it on social media and on TV, Sapiens‘ popularity exploded. The New York Times called it “tailor-made for the thought-leader industrial complex”. His latest book, Homo Deus, came out early last year in the United States, and it is nothing short of a feast for reflective minds.
While Sapiens examined the history of the human race, Homo Deus tries to predict the future of humanity. I will not try to encompass all of its ingenuity in this 600-word article, but I do wish to bring forth a ubiquitous but downplayed theme in the book through some of its excerpts: the battle between the human mind, mental health, and technology.
“As technology allows us to upgrade humans, overcome old age and find the key to happiness, so people would care less about fictional gods, nations, and corporations, and focus on deciphering the physical and biological reality […] Modern science did not simply replace myths with facts […] Thanks to computers and bioengineering, the difference between fiction and reality will blur as people reshape their reality to match their fictions.”
Is our society ready for a new way of looking at how we will live with technology? Uber’s driverless car program hit a wall with last week’s crash, and the public responded with the full range of human emotions – one of which was handing the project over to a new owner. But is a new owner really going to solve a problem of public trust, in a society that is still a newborn in the world of technology, with expectations shaped more by its appetites and its fictions than by reality? We are like a two-year-old given the authority to build our own utopia, but forced to do so without a fully developed brain. If we are not ready to decide whether we want an algorithm to control our livelihoods, what about other areas that are not so black and white? Many large companies use algorithms to shape behavior, from the kind of video content your kids watch to the complete outfit for your first date. Are we truly prepared to let our mental storage be hijacked by algorithms built on zeros and ones? Before all of our fictions become reality, we need time to grow up and equip ourselves with our own “algorithms”, or universal truths.
“Even ordinary people who are not engaged in scientific research, have become used to thinking about death as a technical problem. […] How could they have died? Someone from somewhere must have screwed up.”
This is a new way of thinking about health and wellness in modern society, and it is embedded in every link of the healthcare chain. One of my previous projects involved understanding the decision-making of cancer patients – in other words, the steps a patient takes to navigate from their primary doctor to end-of-life care. You look at clinical reports, observe conversations, and gather data points that eventually construct a roadmap. This may help a hospital or an organization better serve its patients, or help sociologists understand how to design the process to be more human-centered. However, it is not a one-size-fits-all solution, and it becomes increasingly ambiguous as we handle issues such as mental health. A recent article by a survivor of the Parkland shooting gained a lot of traction; in it, she wrote, “No amount of kindness would have changed Nikolas Cruz”. So who is responsible? We have deaths on our hands, but how can we construct a roadmap for this so that we can see who screwed up? As science and technology continue to evolve, we need to design better roadmaps that not only address the root cause but also deal with the ambiguity that surrounds it.
“We seem to be trapped in a vicious circle. Starting with the assumption that we can believe humans when they report that they are conscious, we can identify the signature of human consciousness and use these signatures to prove that humans are indeed conscious. But if Artificial Intelligence self-reports that it is conscious, should we just believe it?”
This has The Matrix written all over it, but it is interesting nonetheless. Artificial intelligence, the king of buzzwords in recent years, has snuck into our environment whether or not we want it or realize it, and many have asked whether it has consciousness. To me, Harari has provided an answer disguised as a statement: it doesn’t matter – it will be conscious if people think it is. That is a challenge to humankind, because it again hands control of the future to us, or more precisely, to our minds. We think, therefore we are; and by extension, we think, therefore they are. This opens up all kinds of questions. Should robots have rights? Should we provide them with healthcare? In fact, we are already seeing this manifest – for example, in the recent viral video in which Boston Dynamics appeared to abuse its robots, and in the many comments calling the company out for it.
I believe in the power of technology wholeheartedly, but not so much in our society’s readiness to handle sticky situations. Before algorithms make decisions for us, we should train our generation and the next to better prepare for a future that all of us can live in (yes, including the robots).
Excerpts credit: Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow