AI has a power problem. Even a single graphics processor running AI workloads consumes several hundred watts. But the most powerful computer we use for machine learning? That title goes to Frontier, which draws around 20 megawatts of power – roughly $40 million of electricity a year.
Frontier boasts an exaflop of compute: a billion billion (10^18) floating-point calculations every second – or about the same amount of compute as our brain. The difference is that our brains need just 20 watts of energy – the same as a lightbulb.
This energy constraint is increasingly hindering the advancement of AI. But what if we could take a cue from the way our brains process information? We recently spoke to Nigel Toon, the founder of IPU chipmaker Graphcore, to get some answers.
Using Our Noggins: Is the Key in the ‘Spike’?
AI algorithms process and analyse data by navigating complex, multidimensional graphs – an architecture of connections that closely resembles the brain's. As Nigel Toon describes,
“That connectedness is what your brain does. You’ve got the neurons connected by axons, the synapses, and, actually, there’s no memory in your brain. Everything you know is stored on the connections between your neurons, and the importance of what you know is based on how the neuron will determine whether that piece of information is important for the processing that it’s trying to do. A neuron might have 10,000 connections coming into it. Of those 10,000 things, which are the ones that are important for this particular decision?”
In the brain, that task of processing information depends on binary electrical impulses called spikes. Each spike lasts around a millisecond and always has the same amplitude; the information lies in the timing of the intervals between spikes.
That differs from conventional AI systems, which multiply large matrices of real numbers and store information in the exact values – and that is where the compute requirements and massive energy drain come from.
So could AI research leverage this brain trick? Researchers at FAU believe so. They focus on artificial nerve cells called long short-term memory units (LSTMs), which they have modified to imitate the membrane potential of biological cells. This lets them behave like biological neurons, transmitting and processing information through spikes. And the results are promising.
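To make the spiking idea concrete, here is a minimal leaky integrate-and-fire neuron in Python. It is a deliberately simplified sketch of a membrane potential that accumulates binary input spikes and fires when it crosses a threshold – illustrative only, not the FAU model, and all parameters (time constant, threshold, firing rates) are assumed values.

```python
import numpy as np

# Toy leaky integrate-and-fire (LIF) neuron: a simplified sketch of the
# membrane-potential idea described above, not the FAU group's actual model.
# All parameters (tau, threshold, weights, rates) are illustrative assumptions.

rng = np.random.default_rng(0)

tau = 20.0          # membrane time constant (ms)
v_threshold = 1.0   # fire a spike when the potential crosses this value
v_reset = 0.0       # potential after a spike
dt = 1.0            # simulation step (ms)

n_inputs = 100
weights = rng.normal(0.0, 0.1, n_inputs)   # synaptic strengths

v = 0.0
output_spikes = []
for t in range(200):
    # Incoming binary spikes: each input fires with 5% probability per step.
    in_spikes = rng.random(n_inputs) < 0.05
    # Leak towards rest, plus weighted spike input. The only "values"
    # exchanged are 0/1 events; information lives in their timing.
    v += dt * (-v / tau) + weights @ in_spikes
    if v >= v_threshold:
        output_spikes.append(t)   # record the spike time
        v = v_reset

print("output spike times (ms):", output_spikes)
```

Unlike a dense matrix multiply over real-valued activations, most inputs here are silent at any given step, which is where the potential energy savings of spiking hardware come from.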
What About Quantum Computing?
While researchers look to the human brain for answers to AI’s power problem, others tout the possibilities of quantum computing. The hype has reached almost stratospheric levels in some quarters, with IBM recently claiming the ‘paradigm-shifting abilities’ of quantum artificial intelligence (QAI) enable a ‘nearly limitless possibility’.
But do we need to be more realistic?
Probably.
Quantum computing uses qubits – quantum bits made from atoms, superconducting circuits, or other physical systems – that encode data not just as 0s or 1s, but also in states that represent both simultaneously. This state, called a superposition, is what gives quantum machines their capacity: a register of n qubits can hold a superposition spanning all 2^n classical bit patterns at once.
Accordingly, quantum processors could solve certain complex calculations much faster than classical computers, potentially reducing computation times from thousands of years to minutes. Rather than computing each step deterministically and in sequence, a quantum computer manipulates many computational paths in superposition and uses interference so that the most likely measurement outcome is the correct answer.
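To see where that exponential capacity comes from, here is a small classical simulation of a qubit register in Python. It is a sketch for intuition only: it tracks the 2^n complex amplitudes of an n-qubit state and applies a Hadamard gate to each qubit, leaving the register in an equal superposition of all 2^n bit patterns.

```python
import numpy as np

# Minimal state-vector illustration of superposition: n qubits are described
# by 2**n complex amplitudes. This is a classical simulation for intuition,
# not a quantum algorithm.

n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                                 # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def apply_single_qubit_gate(state, gate, qubit, n):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
    full = np.array([[1.0]])
    for q in range(n):
        full = np.kron(full, gate if q == qubit else np.eye(2))
    return full @ state

# Put every qubit into superposition: the register now holds all 2**n = 8
# basis states, each with equal probability 1/8.
for q in range(n):
    state = apply_single_qubit_gate(state, H, q, n)

probs = np.abs(state)**2
for basis, p in enumerate(probs):
    print(f"|{basis:0{n}b}> probability {p:.3f}")
```

Note that the classical simulation must track all 2^n amplitudes explicitly – exactly the bookkeeping a real quantum register gets for free, and the reason simulating large quantum machines classically becomes intractable.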
It sounds like the ideal solution to AI’s increasingly demanding compute requirements, right?
But there is a problem.
Qubits are inherently error-prone – so much so that a useful machine might need to correct system errors 10 billion or more times per second. As Nigel Toon explains, the problem lies in the following question: ‘Can you come up with a structure that allows you to replicate qubits in such a way that you don’t actually replicate the error functions faster than you grow the size of the machine?’
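For a rough sense of scale, consider the back-of-envelope arithmetic below. The figures – 10,000 physical qubits and a one-microsecond correction cycle – are assumed, illustrative values, not specifications of any real machine, but they show how quickly the correction workload reaches the order of magnitude mentioned above.

```python
# Back-of-envelope arithmetic behind the "10 billion corrections per second"
# figure. The numbers here are illustrative assumptions, not measured specs.

physical_qubits = 10_000    # a modest error-corrected machine (assumed)
cycle_time_s = 1e-6         # one error-correction cycle per microsecond (assumed)

corrections_per_second = physical_qubits / cycle_time_s
print(f"{corrections_per_second:.1e} corrections per second")   # 1.0e+10
```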
And no one has done it yet.
Moving to the Molecular
A more promising avenue might be molecular computing. The idea is to use molecules to perform computations, leveraging their chemical and physical properties to process information and solve problems at a molecular level. As Nigel Toon explains,
‘There’s research going on around using DNA to store data and information, and about how you would use a protein structure and cause it to change shape, which you could represent as a switch, like a transistor. The problem is to exercise that protein to change its shape, you’ve got to insert some chemical. And how do you insert the chemical in such a way that it becomes scalable and you can grow it?’
Yet scientists have already created a biocomputing chip that performs calculations on a DNA substrate, including the mathematical operations key to AI training and big data processing.
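To illustrate the data-storage idea Toon mentions, here is a toy Python encoding that packs two bits into each DNA base. Real DNA-storage schemes are far more sophisticated – they add error correction and avoid sequences that are hard to synthesise, such as long runs of the same base – so treat this purely as a sketch of the underlying mapping.

```python
# Toy illustration of DNA data storage: map 2 bits to each of the four bases.
# Real encoding schemes add error correction and avoid problematic sequences;
# this is only a sketch of the basic idea.

BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {b: bits for bits, b in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA strand, 4 bases per byte."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i+2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand."""
    bits = "".join(BITS_FOR_BASE[base] for base in strand)
    return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

strand = encode(b"AI")
print(strand)                  # CAACCAGC
assert decode(strand) == b"AI"
```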
Where Next?
As we grapple with the energy-intensive demands of contemporary AI systems, one potentially transformative path is emerging: mimicking the energy efficiency of the human brain.
Meanwhile, quantum and molecular computing hint at a future where AI could operate not only with greater computational power but also with the kind of energy efficiency that nature has perfected over millions of years.
This blend of biological inspiration and technological innovation may just hold the key to overcoming the current limitations of AI, ushering in a new era of sustainable, powerful computing solutions.
AI at Macro Hive
At Macro Hive, our vision is to become the financial industry’s most trusted partner in harnessing the synergy of natural and artificial intelligence to revolutionise market insights and decision-making.
We aim to lead the industry by setting the highest standards for accuracy, reliability, and innovation, empowering institutions to navigate the complexities of the financial markets with unparalleled precision and foresight.
We have already developed a wealth of AI-enhanced financial analysis tools that are helping our clients interpret markets and boost returns. You can find out more about our efforts here.
Matthew Tibble is Commissioning Editor at Macro Hive. He has worked as an editorial consultant and freelance editor for companies such as RiskThinking.AI, JDI Research, and FutureScape248.