There has been a lot of progress in AI lately on developing neural networks that can be applied more generically.
Measuring abstract reasoning in neural networks
Capture the Flag: the emergence of complex cooperative agents
OpenAI Five Benchmark: Results
A common feature of AI that uses neural networks for deep learning is the training that must be done. The problem is that most AI can only learn to do one thing: play a certain game, for example. An AI trained to play DOTA won't be able to play Quake.
It got me thinking: how can we train an AI to be able to do different things?
Maybe it is as simple as having three distinct portions of the learning algorithm.
An input/output (I/O) layer which is tailored for the task being learned. This takes in raw data from the "environment" and outputs actions back into the "environment."
An I/O conversion layer that acts as a translator between the actual AI and the I/O layer. Nothing really fancy here, although this is probably where features need to be sorted out from labels (rewards?). This conversion layer changes data into concepts for use by the actual AI.
And the actual AI layer(s), which does the learning and prediction. Since it works with concepts rather than raw representations, it should be better able to adapt to different circumstances and reuse learnt concepts in new environments.
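The three-portion split above could be sketched roughly like this. This is purely illustrative: the class names (IOLayer, ConversionLayer, ConceptAgent) and the toy "bucket features into low/high" conceptualization are my own assumptions, not any real framework's API, and a real system would use learned encoders and policies rather than lookup tables.

```python
class IOLayer:
    """Task-specific adapter: raw environment data in, raw actions out."""
    def __init__(self, encode, decode):
        self.encode = encode  # raw observation -> numeric task features
        self.decode = decode  # abstract action -> concrete environment action


class ConversionLayer:
    """Translates task features into task-agnostic 'concepts'."""
    def to_concepts(self, features):
        # Toy conceptualization: bucket each feature into "low" or "high".
        return tuple("high" if f > 0.5 else "low" for f in features)


class ConceptAgent:
    """Learns over concepts only, so it can be reused across tasks."""
    def __init__(self):
        self.policy = {}  # concept tuple -> preferred abstract action

    def act(self, concepts):
        return self.policy.get(concepts, "explore")

    def learn(self, concepts, action, reward):
        # Trivial learning rule: remember actions that paid off.
        if reward > 0:
            self.policy[concepts] = action


# Wiring the three layers together for one hypothetical task:
io = IOLayer(
    encode=lambda obs: [obs["speed"], obs["threat"]],
    decode=lambda a: {"explore": "move_random", "flee": "move_away"}[a],
)
conv = ConversionLayer()
agent = ConceptAgent()

obs = {"speed": 0.9, "threat": 0.8}
concepts = conv.to_concepts(io.encode(obs))   # ("high", "high")
agent.learn(concepts, "flee", reward=1.0)
print(io.decode(agent.act(concepts)))          # -> move_away
```

The point of the sketch is that only `IOLayer` knows about this particular environment; the agent's policy is keyed on concepts, so in principle the same `ConceptAgent` could be plugged into a different task by swapping in a different I/O layer.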
It also means researchers can split their work into distinct focus areas: the concept-learning AI itself, which is the holy grail of machine learning, or the conceptualizing conversion layer, which is enabling research that directly affects the AI downstream, since the quality of the conceptualizing determines the concepts available for learning.
Whatever the case, I am looking forward to seeing how AI research progresses in the future.