March 12, 2019

Ethics in the Age of Augmentation


Artificial Intelligence (AI) may sound intimidating, but we have, in fact, been using AI in our daily lives for years. Familiar applications include the Smart Reply feature in Gmail, Facebook’s facial recognition, product recommendations in online shopping, music recommendations on YouTube, mobile banking apps with bill-pay reminders, and even Google Maps.

Some of us think AI makes our lives more efficient every day. Others worry that it will slowly make us more dependent and, ultimately, destroy humankind. We keep pressing the issue of the ethics of artificial intelligence for fear of being wiped clean off the face of the earth. But is this the right question to ask? Do we fully understand AI?

What is AI?

AI adapts through progressive learning algorithms, letting the data do the programming. With a method called deep learning, machines can use huge neural networks with many layers of processing units, drawing on advances in computing power and improved training techniques to learn complex patterns from large amounts of data.

In short, AI is similar to human intelligence: it learns by processing and responding to data. Broadly speaking, the bigger the dataset, the more capable the AI becomes.
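The idea of "letting the data do the programming" can be illustrated with a toy example. The sketch below (my own illustration, using a single artificial neuron rather than a deep network) learns the logical AND rule purely from examples; no AND logic is ever written by hand.

```python
# Training data: inputs and desired outputs for logical AND.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    # The neuron fires (outputs 1) when its weighted sum is positive.
    total = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if total > 0 else 0

# Learning: repeatedly nudge the weights toward the examples
# (the classic perceptron rule) until the predictions match.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in examples])  # prints [0, 0, 0, 1]
```

The program is not told what AND means; the behavior emerges from the data. Deep learning applies the same principle with millions of such neurons and vastly larger datasets.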

This ability to process and learn from large datasets empowers machines to outperform humans at many tasks. Let’s use the game of chess as an example.

In 1997, IBM’s chess program Deep Blue beat Garry Kasparov, the reigning world chess champion. Since then, chess-playing computer programs have built upon Deep Blue’s advances to become even more proficient and efficient.

Yet it was not until 2017 that the world truly witnessed the intelligence of machines. After learning chess from scratch in only four hours, Google’s AlphaZero program managed to defeat Stockfish 8, the world computer chess champion, a program that drew on centuries of accumulated human experience and decades of computer chess experience.

Using deep learning, AlphaZero learned by playing against itself, without the help of any human guide or human intelligence. Some of its winning moves and strategies were unconventional, even outright genius, to human eyes. This shows that human beings are no longer the smartest players in certain fields.
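The core of AlphaZero’s training, learning from games against oneself, can be sketched with a much simpler game. In this toy Python example (my own illustration, not AlphaZero’s actual algorithm), an agent is given only the rules of Nim, where players alternately take one or two stones and whoever takes the last stone wins, and improves solely by playing itself and rewarding the moves that led to wins.

```python
import random

random.seed(0)
PILE = 10
# value[(pile, move)]: estimated chance of winning after making `move`
# when `pile` stones remain. Start with no knowledge at all (0.5).
value = {(p, m): 0.5 for p in range(1, PILE + 1) for m in (1, 2) if m <= p}

def choose(pile, explore):
    moves = [m for m in (1, 2) if m <= pile]
    if explore and random.random() < 0.2:  # occasionally try a random move
        return random.choice(moves)
    return max(moves, key=lambda m: value[(pile, m)])

# Self-play: the same value table plays both sides of 20,000 games.
for _ in range(20_000):
    pile, history = PILE, []
    while pile > 0:
        move = choose(pile, explore=True)
        history.append((pile, move))
        pile -= move
    # The player who made the last move won. Walking backwards through
    # the game, credit alternates between winner (1) and loser (0).
    reward = 1.0
    for state in reversed(history):
        value[state] += 0.1 * (reward - value[state])
        reward = 1.0 - reward

# After training, play greedily from the learned values.
print("opening move from 10 stones:", choose(10, explore=False))
```

With enough self-play, the learned values tend toward Nim’s known optimal strategy (leave your opponent a multiple of three stones), echoing in miniature how AlphaZero discovered strong chess strategies with no human examples.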

While AlphaZero and chess may seem narrow in scope, it is hard to fathom how far the power of deep learning will ultimately spread into wider, more general aspects of human life.

According to Dr. Le Viet Quoc, a Google scientist, in the next five to ten years AI and deep learning will become even more advanced and cover more ground, in fields such as transportation, healthcare, and education.

Race against the machines?

It is the uncertainty surrounding AI’s power that triggers various concerns. The two most prominent questions are: (1) “Will AI replace the human workforce?” and (2) “Can we impose ethics on AI?”

In an interview, AI expert Kai-Fu Lee said he believes 40% of the world’s jobs will be replaced by robots capable of automating tasks. This vision of the future affects not only blue-collar professions but also white-collar ones. Some experts have even gone so far as to identify the top professions that will be replaced by AI and machine learning.

Jobs that involve driving are the most likely to disappear in the next 15 years with the introduction of driverless automobiles. If you need to apply for a loan in the future, you will more likely deal with machines, which know, and can quickly process, everything about your income, assets, debts, and credit. Even the creative space will not be safe: an AI called Aiva is already capable of composing emotional soundtracks for films, video games, commercials, and other entertainment content.

There is not yet enough concrete data to give a definitive answer to whether AI will replace the human workforce. What we do know is that certain tasks will be automated. That, however, need not be a problem.

We, Homo sapiens, have survived history by constantly adapting and reinventing ourselves. We invented jobs that we could not even have fathomed before the Industrial Revolution. Now, with help from AI and machine learning, future jobs will continue to be reinvented, enriched, and elevated.

Instead of trying to limit the growth of AI, we may consider collaborating with it. As Dr. Christopher Nguyen, President & CEO at Arimo, a Panasonic Company, has said, we are reaching the age of augmentation, in which we can combine human intelligence with artificial intelligence.

Therefore, we may need to turn to the other side of the problem and ask a more relevant question: how can we design more jobs and prepare for the uncertainty?

This also gives rise to a new concern: the ethics of AI. Take the jobs above, which will likely be replaced by AI, as examples. AI-driven automobiles can be hacked to wreak havoc in terrorist attacks. AI loan management can create unnecessary racial bias because of insufficient or skewed data. Aiva may potentially face copyright infringement lawsuits.

We keep asking for ethics of AI; but is AI the ultimate defendant? 

According to Dr. Le Viet Quoc, AI and deep learning are not as dangerous as Hollywood movies make them appear. AI experts view AI and deep learning as a programming tool.

If we see AI and deep learning in this light, AI is similar to fire, electricity, or even that most controversial discovery, nuclear power. If AI is a tool, is it fair to ask only the AI to be ethical? Should we, Homo sapiens, also be held accountable for unethical uses of a tool?

The ultimate defendant in this debate over the ethics of AI has to be us, human beings. To avoid destructive consequences when deploying AI, we, as its de facto users, need to understand ethics and act in an ethical way.

Rise above the machines

We have turned the questions around; but one concern remains: How? How can we prepare for the unforeseeable? How can we make sure that we can act in an ethical way? The answer is both easy and difficult: knowledge. 

In his book 21 Lessons for the 21st Century, Yuval Noah Harari stresses that students in the 21st century do not need teachers to teach them more information; they can get access to new information anywhere, anytime. What students, and all of us, need then is the ability “to make sense of information, to tell the difference between what is important and what is unimportant, and above all, to combine many bits of information into a broad picture of the world.”

We also need to foster more public discussion of ethics. Basic ethical principles can help us tell right from wrong and provide guidance for our lives. Yet ethics does not seem to get the attention it deserves. In an age of growth, some even view ethics as a factor that hinders innovation. When tools as powerful as AI come into play, however, ethics has to be the focus.

In her opening speech launching the Moral Leadership Speaker Series, Dam Bich Thuy, President of Fulbright University Vietnam, emphasized that ethics needs to be built into the core curriculum. Fulbright students are equipped not only with the right set of skills to prepare for an ever-changing future, but also with the right values to become people of integrity and dignity.

With the aim of initiating more discussion of ethics and AI, the next speaker in Fulbright’s “21st Century Ethics” Speaker Series will be Dr. Christopher Nguyen, President & CEO at Arimo, a Panasonic Company, and a board member of the Trust for University Innovation in Vietnam. The talk, themed “Should AI Care about Ethics,” will be held on 18 March 2019 at the Fulbright University Vietnam campus.

Thach Thao
