Friday, January 04, 2019

My thoughts on artificial intelligence development in 2019

AI is all the hype now, and it will probably remain in the spotlight for years to come; 2019 is no exception. So what are the issues we need to be thinking about?

Well, imitation learning is making headway, with computers now able to learn movements by watching videos of humans performing them. Deep learning has always been good at imitation; AI is very much about computers trying to imitate humans in the first place. The next step in imitation learning is likely to be the inculcation of human values. Computers learn from data, and data is value-neutral; there is no "right" and "wrong", no "good" or "bad". But if AI is to be useful in the real world, it must be able to understand human values, and learn what we humans perceive as good and bad, right and wrong. Then, it must be able to apply these values in the same way that humans would.
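To give a flavour of what "learning by watching" looks like at its simplest, here is a minimal behavioural-cloning sketch: a small network is trained to reproduce the actions a human took for each observation. It assumes PyTorch, and the data shapes, feature counts and network sizes are purely illustrative placeholders, not any real system.

```python
# Minimal behavioural cloning: learn to map observations (e.g. pose features
# extracted from video frames) to the actions the human demonstrator took.
# All shapes and sizes below are hypothetical, for illustration only.
import torch
import torch.nn as nn

observations = torch.randn(1000, 32)   # 1000 frames, 32 features each (placeholder data)
actions = torch.randn(1000, 8)         # 8 joint commands per frame (placeholder data)

policy = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 8),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()
    predicted = policy(observations)    # the policy's guess at the human action
    loss = loss_fn(predicted, actions)  # penalise deviation from the human
    loss.backward()
    optimizer.step()
```

Of course, nothing in this objective says anything about values; the network simply copies whatever the demonstrations contain, which is exactly the gap discussed above.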

A classic and very practical problem is how an autonomously driven car should react when human life is in danger: when it has to choose between endangering its occupants and endangering the lives of those around it. A human driver faced with the same situation makes a split-second decision based on values, experience, and past training. AI would have past training, and probably experience, but values are something that is still being pondered. Given the varying values that human beings hold, what are the universal values, if they even exist, that should be incorporated into AI? How do we get computers to reflect these values in their actions? Should users be allowed to customize values to suit themselves, or should these values be set by the companies who develop the products? And who is ultimately responsible for the actions of AI, and why?

Another area I have been thinking about is machine translation. As a translator with an interest in AI, I find developments in machine translation a double-edged sword: I want to see AI develop to a stage where it can do proper translations, but that would mean a drop in my own income. Still, I think the current challenges for machine translation are:
1. Machine learning of languages. Maybe we should split this learning up the way we humans learn. We learn vocabulary first, so maybe AI should have a portion set aside for learning from bilingual dictionaries. We then move on to sentences, and then entire paragraphs for context. So machine learning for machine translation might have a three-stage algorithm: vocabulary, sentence formation, context (a rough sketch of this staging follows the list).
2. Bilingual texts. We know the Bible is widely used for machine learning in machine translation, because it is one of the few texts available in many languages. Still, the Bible does not cover the full scope of human experience, and its context and vocabulary may be somewhat dated. The challenge, then, is how to take existing books that are available in more than one language and convert them into usable training texts in an automated way. This requires natural language processing with text/sentence matching; a simple alignment sketch also follows the list. This is not machine translation itself, but it is part of the development essential for machine translation.
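
To make the three-stage idea in point 1 concrete, here is a rough sketch of what such a curriculum might look like in code. The TranslationModel stub, the train_step method and the sample data are all hypothetical placeholders for illustration; real neural MT systems are not necessarily trained in discrete stages like this.

```python
# Hypothetical three-stage curriculum: feed the learner progressively larger
# units of text, from word pairs to sentence pairs to paragraph pairs.
# TranslationModel is a placeholder, not a real translation model.

class TranslationModel:
    """Stub standing in for a real trainable translation model."""
    def train_step(self, source: str, target: str) -> None:
        pass  # a real model would update its parameters here


def train_in_stages(model, dictionary_pairs, sentence_pairs, paragraph_pairs):
    # Stage 1: vocabulary, e.g. entries from a bilingual dictionary.
    for src, tgt in dictionary_pairs:
        model.train_step(src, tgt)
    # Stage 2: sentence formation, from aligned sentence pairs.
    for src, tgt in sentence_pairs:
        model.train_step(src, tgt)
    # Stage 3: context, from whole aligned paragraphs.
    for src, tgt in paragraph_pairs:
        model.train_step(src, tgt)
    return model


model = train_in_stages(
    TranslationModel(),
    dictionary_pairs=[("chat", "cat")],
    sentence_pairs=[("Le chat dort.", "The cat is sleeping.")],
    paragraph_pairs=[("Le chat dort. Il rêve.", "The cat is sleeping. It is dreaming.")],
)
```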
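
And for point 2, the text/sentence matching could start from something as simple as length-based alignment: a sentence and its translation tend to have similar relative lengths, the observation behind the classic Gale-Church aligner. The sketch below is a deliberately naive one-to-one version with a made-up example pair, not production alignment code.

```python
# Naive sentence alignment for turning a bilingual book into training pairs:
# walk through both sentence lists in order and keep pairs whose lengths are
# roughly proportional. Real aligners handle merged, split and omitted
# sentences; this sketch assumes a clean one-to-one correspondence.
import re

def split_sentences(text: str) -> list[str]:
    # Very rough splitter on ., ! or ? followed by whitespace.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def align(source_text: str, target_text: str, max_ratio: float = 1.8):
    source = split_sentences(source_text)
    target = split_sentences(target_text)
    pairs = []
    for src, tgt in zip(source, target):
        ratio = max(len(src), len(tgt)) / max(1, min(len(src), len(tgt)))
        if ratio <= max_ratio:        # lengths are similar enough to trust
            pairs.append((src, tgt))  # keep as a candidate training pair
    return pairs

pairs = align(
    "Le chat dort. Il pleut dehors.",
    "The cat is sleeping. It is raining outside.",
)
```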

Lastly, I would like to see the current trend of applying AI to social good continue. We know that AI is not a magic cure, but its scope of application is getting wider by the day. What are the types of social issues that AI can be most effective in solving? How do we adapt AI to solve those issues? It may be time to take stock of this. While this is not a technical development, findings in this area will help to shape the focus of technical development in AI.

Looking forward to a wonderful year of AI development in 2019!
