Wednesday, September 16, 2020

Can we control self-learning AI?

AI Houkai (AI崩壊) is a movie about an AI system embedded with code that triggers it to run amok, screening people by various attributes to decide who should live and who should die.

While this is a movie, I think the writer did a fair amount of research into recent deep learning techniques, so it is a somewhat accurate portrayal of how AI works. But I won't go into the technical details, since picking at a movie for technological accuracy isn't what this post is about. Rather, I want to talk about one question on my mind: can we control a self-learning AI?

Currently, self-learning AI is limited to specific tasks. In general, a self-learning algorithm is programmed to learn one task from the training data it is given. This is already "old" technology; such systems have become quite adept at recognising images and the like.
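
To make that concrete, here is a minimal sketch of this "old" kind of self-learning in Python, using scikit-learn's bundled handwritten-digits dataset (the library, dataset, and model choices are my own, purely for illustration):

```python
# A minimal sketch of task-specific self-learning: a small neural
# network fits itself to labelled examples of one task (digit
# recognition) and nothing else.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 1,797 labelled 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

# Nobody hand-codes the recognition rules; the model adjusts its own
# weights until its predictions fit the training data.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```

The result is adept at digits and useless at everything else, which is exactly the limitation described above.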

The next step is transfer learning, where certain aspects of the knowledge obtained from learning one task are retained by the AI to help it learn another task in a shorter time. This has also been achieved to some extent. Then there is one-shot learning, where an AI is able to learn something from just a single example.
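
A hedged sketch of the transfer-learning idea, using PyTorch and torchvision (the specific network and the 10-class new task are assumptions of mine):

```python
import torch.nn as nn
from torchvision import models

# Start from a network whose weights were already learnt on ImageNet.
model = models.resnet18(pretrained=True)

# Freeze the feature extractor: this is the knowledge "retained"
# from the first task.
for param in model.parameters():
    param.requires_grad = False

# Swap in a fresh final layer for a new, hypothetical 10-class task;
# training now only has to fit this small head, so it needs far less
# data and time than learning from scratch.
model.fc = nn.Linear(model.fc.in_features, 10)
```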

And finally, there is general AI: a system able to learn and do anything, much like a human. Like Alice in SAO.

The steps are similar to how a human baby grows into an adult. A baby starts by learning simple tasks: how to crawl, then walk; how to make certain sounds, and eventually how to string those sounds into words, then sentences. It then learns to talk while walking, and picks up new skills such as reading. As it grows, it applies that reading skill to learn mathematics, science, other languages, work skills, and so on.

And therein lies the problem. Parents all know we cannot control our children; they eventually grow up with minds of their own. If a general self-learning AI is anything similar, we will eventually be unable to control what it learns on its own. This is a problem because, unlike traditional programs whose instructions are hard-coded, a self-learning algorithm is a set of initial instructions that grows ever more complex as it learns. It may eventually even write its own expansion code, letting it act on the new things it has learnt.

It means that a general self-learning AI may grow into something we cannot control, just like our children. We have already seen how AI has unknowingly picked up racism because it was trained on real-life data biased against people of color. Or how a recruitment tool screened out women because the past hiring data used for training came mostly from men. These were unintended outcomes of training on real-life data. Now imagine a self-learning AI with access to far more real-life data: how do we control what it learns from it? Accurate learning requires large amounts of training data, yet producing data in such quantities is difficult, and policing its contents is harder still.
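
A toy, entirely synthetic illustration of the recruitment example (all the data and numbers below are made up by me, just to show the mechanism):

```python
# Synthetic demonstration of how bias in historical data becomes
# bias in a trained model. Nothing here is real data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)            # the attribute that *should* matter
gender = rng.integers(0, 2, size=n)   # 0 = woman, 1 = man
# Historical hiring decisions that favoured men regardless of skill:
hired = (skill + 1.5 * gender + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)
print("coefficients [skill, gender]:", model.coef_[0])
# The gender coefficient comes out large and positive: the model has
# faithfully "learnt" the bias baked into its training data.
```

Nothing in the algorithm is malicious; it simply reproduces whatever regularities the data contains, which is why the contents of training data matter so much.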

And who decides what goes into training data and what does not? This will eventually become a question of ethical values and even ideology.

We want computers to learn on their own instead of being told what to do. But given that we have education ministries and other agencies that set learning curricula for our children, we should also have similar agencies that set standards for the training of self-learning AI.

Otherwise, we may one day develop an AI that decides, entirely on its own, that human mortality is a weakness and that such "inferior" beings should be culled to avoid wasting resources...
