The latest advance in artificial intelligence lies in the effort to reduce compute requirements by introducing spiking processing, which increases the efficiency of computation and thus lowers energy costs.
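For readers new to the term, here is a minimal sketch of the idea behind spiking processing: a leaky integrate-and-fire neuron, the textbook spiking model. This is illustrative Python under my own assumptions, not drawn from any particular framework, and every name in it is hypothetical.

```python
# A minimal leaky integrate-and-fire (LIF) neuron. The neuron integrates
# input current into a membrane potential that slowly leaks away; it emits
# a discrete spike (a 1-bit event) only when the potential crosses a
# threshold, then resets. Downstream work happens only on spikes, which is
# where the claimed energy savings come from.

def simulate_lif(inputs, tau=20.0, threshold=1.0, v_reset=0.0, dt=1.0):
    """Return a spike train (0/1 per step) for a sequence of input currents."""
    v = v_reset
    spikes = []
    for current in inputs:
        v += dt * (-v / tau + current)  # leak toward rest, integrate input
        if v >= threshold:              # threshold crossing: emit a spike
            spikes.append(1)
            v = v_reset                 # reset after firing
        else:
            spikes.append(0)
    return spikes

if __name__ == "__main__":
    constant_drive = [0.06] * 100       # steady sub-threshold input current
    train = simulate_lif(constant_drive)
    print(f"{sum(train)} spikes in {len(train)} steps")
```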
However, the underlying problem is that the designers of current neural networks have not yet reflected on the correspondence between the ends they seek and the models/metaphors they are using.
In short, they haven’t figured out how to incorporate the need for sleep. This is, of course, because they do not want their technology to have such a downcycle built into it; it would likely be considered an unacceptable loss of productivity. Still, if you’re going to use a metaphor as your model (which a neural network is), it should be obvious that the consideration must be more than superficial.
This reality is acknowledged in some areas, with many neural network developers and engineers studying neurology in depth to understand how the activation and potential systems in the human brain work. However, few seem to consider the holistic nature of the system, or to expand their thinking beyond the immediate province of activation and potential into the full cycle of the processor.
To me, it seems that the next plateau will rest not in some mythical method by which we can find infinite (or even progressive) efficiencies, but in spiking neural networks finally incorporating the complementary metaphors of concepts like “retrospection” and “introspection,” “meditation” and “consideration,” and, most importantly, “rest cycles” and “sleep.”
Just as the mind and brain do not constantly work at the threshold of full consciousness, the neural network has to hold the capacity for modeling all three “layers” of brain activity: prefrontal, mid, and limbic. I suspect a metaphor for each will arise in the near future and drive progress in ways nothing else manages.
Likewise, the recursion of “hidden variables” into those “mid” and “limbic” metaphors will introduce what today’s neural networks lack to replicate the full cycle of activation and potential management. The baseline concepts, along with their weights and balances, will move deeper into this “brain” metaphor once formulas and algorithms have passed from the initial and experimental phases into being systems of reliance.
Similarly, the weights and balances of encoded concepts of culture and society (that is, the biases of the humans who design these systems) will become the future “mid” systems à la the hippocampus and amygdala, with only the surface layer remaining as the more malleable “PFC,” resulting in a neural network largely indistinguishable from its human counterpart.
In this metaphor/model, the machine must sleep in order to learn, even if those sleep cycles are on the order of a human blink. This aligns with historical methods like the batch process, the cron job, and the script… and it is just as important to the balanced function of the machine/network.
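To make the batch-process analogy concrete, here is a toy sketch of a wake/sleep loop: the system buffers experience while “awake” and consolidates it during a brief, cron-like rest cycle. This is purely illustrative Python under my own assumptions; every class and function name is hypothetical, and real consolidation schemes (experience replay, offline distillation) are far richer.

```python
import random

# Toy wake/sleep training loop. During "wake" the system reacts to incoming
# examples and stashes them in short-term memory; during a periodic "sleep"
# phase (a batch job, in the older idiom) it replays that stored experience
# offline to consolidate it into a long-term store.

class WakeSleepLearner:
    def __init__(self):
        self.memory = []        # experiences awaiting consolidation
        self.knowledge = {}     # crude consolidated "long-term" store

    def wake_step(self, example):
        """Handle an input immediately and buffer it for later consolidation."""
        self.memory.append(example)

    def sleep_cycle(self):
        """Offline pass: replay buffered experience, then clear the buffer."""
        for example in self.memory:
            self.knowledge[example] = self.knowledge.get(example, 0) + 1
        self.memory.clear()

learner = WakeSleepLearner()
for step in range(1, 101):
    learner.wake_step(random.choice("abc"))
    if step % 25 == 0:          # the cron-like rest cycle: brief and periodic
        learner.sleep_cycle()
print(learner.knowledge)
```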
It may well be that getting to sleep is the next (currently unknown) plateau for artificial intelligence and neural network processing… which makes the one beyond it even more interesting: To dream.
The video that tickled this out of my brain today: