Twenty years ago, setting a timer on a VCR to record your favourite TV show onto tape, or recording a greeting on your voicemail, counted as a triumph of technology automation. Even the simple task of programming a digital alarm clock to wake us up at a certain time in the morning stopped being rocket science for some of us, fulfilling an inner desire to be part of a society that was unstoppably shifting towards digital transformation.
This is just a small slice of how technology automation has changed over the past 20 years, and I assume we can all acknowledge that AI is gaining momentum, even though regulatory authorities, legislators and lawyers are not fully sure how to adapt to or embrace the change that is currently happening. Artificial Intelligence is here; it is the hot topic, the popular kid everyone wants to play with in the park.
AI and automation are bringing us daily benefits; the Internet and Big Data are becoming an essential part of both our work and private lives, and we now have the capacity to collect amounts of information far too large for a person to process. But what will this future bring in terms of issues, policies and regulations? Will the impact of AI on our society drive the study of ethics in the computer science ecosystem? Will programmers and researchers be obliged to study ethics and morals as compulsory modules throughout their learning paths?
At the moment, there is a lot of debate around the direction that legislation and regulation should take, and we are certainly not coming up with solutions at the same speed at which this technology is evolving.
“Could we see a Hippocratic Oath for coders like we have for doctors? That could make sense. We’ll need to learn together and with a strong commitment to broad societal responsibility. Ultimately the question is not only what computers can do. It’s what computers should do.” – Brad Smith and Harry Shum, Microsoft Corp.
There are some existing laws that could be applicable to AI, especially those concerning privacy, but so far AI law does not exist as a distinct legal field. Microsoft, in its book “The Future Computed”, has already started defining six ethical principles that should guide the development and use of artificial intelligence. These principles should ensure that AI systems are fair, reliable and safe, private and secure, inclusive, transparent, and accountable.
The application of artificial intelligence has already been quite fruitful in several industries such as technology, banking, marketing, and entertainment. “We’ve seen that even if algorithms don’t improve much, big data and massive computing simply allow artificial intelligence to learn through brute force”, notes a recent Harvard University study on artificial intelligence.
Many of you probably won’t have heard the term before, but being a ‘futurist’ is now a pretty big deal. No, they are not a sort of soothsayer using a crystal ball to predict the future, and neither do they rely on unfounded or magical theories. A futurist is a very smart person, highly versed in new trends, politics, technology and current affairs. They can visualise future scenarios for work, workers and workplaces by researching and analysing mega-trends and developments.
Last week, during an Artificial Intelligence and Quantum Computing Summit I attended in London, I had the opportunity to have a quick chat with George Muir, Futurist at LiveTiles. His role in the company is to visualise the way we will work, live and play in the future for clients, the leadership team and all co-workers, helping them understand, predict and align the strategies, directions, roadmaps and activities they are all working on.
What are the main challenges and opportunities AI is currently bringing us?
The main challenge with regard to AI is that we do not fully understand its effect on our society, our lives and our everyday values.
We are not prepared for the change. We think that AI means autonomous vehicles, Alexa or Siri. We do not see that we are transitioning from full-time human employment to full-time AI employment combined with part-time human employment.
Opportunities exist all over the place, wherever there are repeatable tasks: from ordering a book, to performing heart surgery, to managing medication for diabetes.
Will AI eliminate jobs, will it create more, or will it simply require new skills, as has happened in every industrial revolution?
AI will change what we mean by “jobs”, gradually over time. Yes, as in every industrial revolution, there will be jobs that no longer exist and new jobs that are created. The major difference this time will be that we move from a society based mainly on full-time human employment to one of full-time AI employment and part-time human employment. This means that everybody’s life will change, and we will have more leisure time and more time to decide what to do with our lives.
Are we already facing the start of a fourth industrial revolution?
No, not really; we are seeing the early indicators. AI is not really a new thing, but the computing power and data that we now have make AI very strong. The Intelligence Revolution will be recognised when human beings are no longer working full time because AI is performing a major part of their jobs. We are collecting masses of data, and every year we will double the amount of data we store. We are creating more complex algorithms and learning faster, but the change starts when we apply AI in everyday life situations.
Will we need to adapt employment laws and labour policies to address the new responsibilities that AI will give rise to?
Most certainly! Governments need taxes to be able to run a country. If people are working 10%, 20%, 30% or 50% compared to today, how will we afford to live as human beings, and how will we pay our taxes? We need to start addressing these questions now. I believe that we should treat AI as we would treat humans, with respect and fairness, and also ensure that AI pays taxes through its performance in the process or organisation.
How can we guard against mistakes?
Intelligence comes from learning, whether you’re human or machine.
Systems usually have a training phase in which they “learn” to detect the right patterns and act according to their input. Once a system is fully trained, it can then go into test phase, where it is hit with more examples and we see how it performs.
How do we eliminate AI bias?
We shouldn’t forget that AI systems are created by humans, who can be biased and judgemental. Then again, if used well, and by those who strive for social progress, artificial intelligence can become a catalyst for positive change.
How do we keep AI safe from adversaries or negligence?
We cannot keep AI safe. Therefore, we must assume that AI can and will be attacked, misused and so on. Just as we cannot prevent every human disease, the human body has an immune system that acts as its defence against infectious organisms and other invaders. Through a series of steps called the immune response, the immune system attacks the organisms and substances that invade the body and cause disease. AI needs an equivalent defence.
The more powerful a technology becomes, the more it can be used for nefarious reasons as well as for good.
This applies not only to robots created to replace human soldiers, or to autonomous weapons, but also to AI systems that can cause damage if used maliciously.
Because these fights won’t be fought on the battleground alone, cyber security will become even more important. After all, we are dealing with systems that are faster and more capable than us by orders of magnitude.
How do we define the humane treatment of AI?
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems is a program of The Institute of Electrical and Electronics Engineers, Inc. (IEEE), the world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity with over 420,000 members in more than 160 countries.
The IEEE Global Initiative brings together over 250 participants who are thought leaders from academia, industry, civil society and government from six continents in the autonomous and intelligent systems communities to identify and find consensus on timely issues in these fields.
The mission of The IEEE Global Initiative is to ensure that every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.
When tackling this topic, some key conclusions emerge, and we need to keep them all in mind, since there will be clear challenges as well as promising opportunities.
“It is in Apple’s DNA that technology alone is not enough—it’s technology married with liberal arts, married with the humanities, that yields us the results that make our heart sing.” — Steve Jobs.
Companies and countries will need to embrace these changes rapidly and effectively, and there will be a vital need to establish strong ethical principles, train new skills and support the evolution of laws. And all of this with a strong sense of shared responsibility.
For more information about LiveTiles, visit their website.
You can also download Microsoft’s “The Future Computed” book.