Machine Learning’s ‘Amazing’ Ability to Predict Chaos
I came across this article while searching for "chaos complexity deep learning", on an impulse sparked by reading about chaos and complexity theory: can chaos and complexity be applied to artificial intelligence in a way similar to deep learning?
While the article is still very much about existing machine learning techniques, I think deep learning using layered networks (the current method) will eventually hit a wall: the number of layers is inherently built into the design. Instead of organizing neurons into layers and having them more or less hardwired to each other, what if the neurons were represented as agents in a local environment, forming and breaking bonds/connections with each other? The depth of connections would then no longer be limited by a fixed number of layers.
If this idea is to be explored further, there must be a way to define how inputs are fed into the system and how outputs are represented. These could still take the form of the input and output layers used in deep learning today. There must then be a rule for forming connections: how does a neuron choose which neuron in its neighborhood to connect to (the size of the neighborhood, i.e. the neuron's reach, must also be defined), and which existing connections to break? Backpropagation is probably still needed to tell neurons which links to keep.
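To make the idea a little more concrete, here is a minimal Python sketch of what such neural agents might look like. Everything in it is invented for illustration (the NeuronAgent class, the reach parameter, the rewiring rules); in particular, a simple Hebbian-style weight update stands in for the backpropagation-derived signal that would decide which links to keep, since how to backpropagate through an ever-changing graph is exactly the open question.

```python
# A minimal, hypothetical sketch of "neurons as local agents".
# All names here are invented for illustration; none of this
# comes from the article itself.

import math
import random

random.seed(0)

class NeuronAgent:
    """A neuron at a 2D position that connects only within its reach."""
    def __init__(self, x, y, reach=0.3):
        self.x, self.y = x, y
        self.reach = reach            # size of the local neighborhood
        self.links = {}               # neighbor -> connection weight
        self.activation = 0.0

    def distance(self, other):
        return math.hypot(self.x - other.x, self.y - other.y)

    def maybe_connect(self, others, max_links=4):
        """Form a link to a random in-reach neighbor if there is capacity."""
        if len(self.links) >= max_links:
            return
        candidates = [n for n in others
                      if n is not self
                      and n not in self.links
                      and self.distance(n) <= self.reach]
        if candidates:
            self.links[random.choice(candidates)] = random.uniform(-1, 1)

    def prune(self, threshold=0.05):
        """Break connections whose weight has decayed below a threshold."""
        self.links = {n: w for n, w in self.links.items()
                      if abs(w) >= threshold}


def step(agents, inputs, ticks=5, lr=0.1):
    """Propagate activity for a few ticks, then apply a Hebbian-style
    update as a stand-in for a real learning signal."""
    # Designated input agents receive external values -- a fixed
    # input "layer", as speculated above.
    for agent, value in inputs.items():
        agent.activation = value

    for _ in range(ticks):
        new_acts = {}
        for a in agents:
            total = sum(w * n.activation for n, w in a.links.items())
            new_acts[a] = a.activation if a in inputs else math.tanh(total)
        for a, v in new_acts.items():
            a.activation = v

    # Strengthen links between co-active agents, decay the rest,
    # then let each agent rewire locally.
    for a in agents:
        for n in list(a.links):
            a.links[n] += lr * a.activation * n.activation - lr * 0.1 * a.links[n]
        a.prune()
        a.maybe_connect(agents)


# Scatter agents in the unit square and run a few update cycles.
agents = [NeuronAgent(random.random(), random.random()) for _ in range(50)]
for a in agents:
    a.maybe_connect(agents)

input_agents = {agents[0]: 1.0, agents[1]: -1.0}   # arbitrary input "layer"
for _ in range(20):
    step(agents, input_agents)

print("average links per agent:",
      sum(len(a.links) for a in agents) / len(agents))
```

Notice that no layer count appears anywhere: effective depth is simply however far activity travels along chains of links during the propagation ticks, and the topology keeps rewiring itself as the network runs.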
Such a neural network formed by neural agents may even help us learn more about how our brains actually work. After all, our brains are made of neurons, but they are not organized into fixed layers.
Of course, I can only imagine the amount of computing power this will need...