My first thought was: what is intelligence? If intelligence is about being able to learn and apply new things, then whatever we try to come up with needs to be able to learn on its own and then apply that knowledge. The way this happens in the animal kingdom (including human beings) is the use of a brain. Neurons and synapses in the brain form a network, over time and stimulus, that allows learning and the application of learning.
In the past, getting machines to do things was also based on this model of learning and applying. But the learning was not autonomous; the machine learned through the program fed in by the programmer. Simple programs evolved into more complex ones that simulated learning through more sophisticated mathematical models and concepts.
As computing power increased, the implementation of simple artificial neural networks simulating the way neurons function in animal brains became possible. So now, we have deep learning as another field in machine learning, and we see progress each day. Machines are now able to learn complex games and beat humans at them.
But the model is still the same: a neuron connected to other neurons by synapses. The only difference is that human intelligence is limited by the number of neurons and synapses we can fit into our skulls, while machine intelligence is limited by the number of artificial neurons and synapses we can fit into hardware and software; that is, by our current computing technology.
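The model described above can be captured in a minimal sketch, purely for illustration: an artificial neuron sums its weighted inputs (its "synapses") and fires through an activation function, and neurons chain together by feeding one another's outputs. The function name and numbers here are made up for the example.

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, squashed by a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Two neurons chained: the output of the first feeds the second,
# like a synapse connecting two biological neurons.
hidden = neuron([0.5, -1.0], [0.8, 0.2], 0.1)
output = neuron([hidden], [1.5], -0.3)
```

Scaling this up to billions of such units and connections is, in essence, what deep learning does; the model itself is unchanged.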
So is there a better model for learning and applying? We can understand an AI model for learning and applying that is based on the human brain, because the brain is familiar to us. But is our human/animal model the best model? Are there alternative models? Are there better ones?
And if there are better models, what is the implication of their very existence? Does it mean the human brain can further evolve to that better model? Or does it mean that there may be a day when intelligent life based on this other model can appear?