Sunday, September 27, 2020

AI and taking responsibility

A classic ethical issue in AI development is having to choose between killing a passenger and killing a pedestrian. Imagine a situation where an autonomous car with a passenger is about to hit a pedestrian. If it swerves to avoid the pedestrian, it will go off a cliff, killing its passenger. Should the autonomous car be designed to prioritise the safety of its passenger? Or the safety of other road users?

In the exact same situation, a human driver has to make a split-second decision, one that comes down to instinct as well as the ethical upbringing of the driver. Whatever choice is made can ultimately be attributed to the driver, and because every person's upbringing differs, the range of decisions will reflect the broad spectrum that is human ethics. And once a decision is made, he or she may be required to defend that decision (if still alive).

But for an autonomous car, that decision will be based on its training data, and that training data is chosen by a limited few. So what training data should be used? And who is responsible for the outcomes of that decision? Is it the company that develops the autonomous car's software? The specific engineering team that selected the training data? Or the owner of the car, even though he or she may have no control over the car itself?

Life-and-death situations may not be so commonplace, but the issue of taking responsibility remains. For example, if translation or interpretation software produces output that causes a company to lose millions of dollars because of a wrong choice of words or an ambiguous phrase, who is to be held responsible? Is it the company that developed the software? The specific team that selected the training data? Or the end user? As we grow to depend more and more on AI in our lives, this is a question we need an answer to, and quickly. Consider that "turn right before turning left" and "turn left before turning right" make a huge difference if one leads to the destination while the other leads off a cliff. If a phrase meaning the former is mistakenly translated into the latter, who is responsible? Who will pay compensation?
 
Because human society has this concept of responsibility. And we have come to understand responsibility as each person being responsible for his or her own actions, or for those under his or her charge. This includes compensation for actions that cause hurt, damage, or death.

If AI is a tool, and we understand that the user is responsible for the use of his or her tools, then does our end user have the knowledge of and control over those tools required to properly exercise responsibility? Because it is not fair to ask someone to shoulder responsibility without knowledge and control. It is like asking someone to take responsibility for an unknown dog without a leash. You don't know if the dog has been trained, and there is no way for you to control it.

But if not, can we teach AI the concept of responsibility? And how can AI shoulder liabilities unless it is allowed the right to own assets? This again goes back to the debate over what rights will eventually be afforded to AI. Today, even the most advanced AI is arguably only as intelligent as a pet, so it is unlikely that we will get AI to take responsibility for anything, just as we do not ask our pets to assume responsibility. But AI development is moving at blazing speed, and it won't be long before we hit a stage where AI is near, on par with, or even surpassing us in intelligence. What then?

Before that, we need to determine who has to bear responsibility for the outcomes of AI as we continue to use it as a tool. Whatever we decide needs to be universally accepted, and whoever is tasked with bearing responsibility will then need to be given the knowledge and means of control to fulfill that responsibility.
