Future quantum computing – a proper thinking machine

Headline: Million-qubit QCs in 2033, 10^12 qubits by 2040, 10^18 qubits by 2050, and a 10^25-qubit supercomputer by 2060

Most people are excited by progress in AI and quantum computing. I have a lot of fun with ChatGPT, but the ideas being debated now are as old as the hills. I looked through some 25-year-old presentations I used to give, and back then the same debate was going on in IT circles. My slides (in talks to IBM, Microsoft, Cisco etc) looked at AGIs, how we might build one, what it would do to our machine-human relationship, and how we could guard against a malign AGI. That was 25 years ago, and we’re having the same debate again now.

One of the things I mentioned in my talks to IBM and Microsoft back then was the idea of the Heisenberg Resonator. The name was my own invention, and it sounded good. I only had the vaguest of ideas how it might be made, but what it did (in theory; it was just an idea) was to tackle the problem of collapsing quantum states by ‘resonating’ the matrix so that the state was continuously recreated and reinforced, allowing computation to continue for longer.

The Heisenberg Resonator stayed just an idea. Either they didn’t understand it, or they thought it was crap, I don’t know. But QC manufacturers still struggle to make high-quality qubits and maintain entanglement. I believe Google now use something they call echo, which seems to be a similar idea. I don’t know when they started using echo, and I’ve never lectured to Google, so they must have thought of it independently, but it’s a pretty obvious idea really.

A few years ago, I had an idea of how to make it for real. Instead of using qubits, I would link three entanglements in a triangular arrangement, so that if one even started thinking about collapsing, the other two sides would instantly bring it back into line. I’m pretty sure that would work, but it’s obsolete now. I’ve thought of a much better way.

I don’t want to give away the details of how to make it in a blog (it needs two of my other inventions, electron pipes and a new(ish) state of matter), but what it does is connect all the atoms in a lattice together in a way that entangles them all, by forcing them to share electrons with their near neighbours dynamically. In doing so, and leveraging Heisenberg’s uncertainty over who owns which electron at any time, it ‘quantum interlocks’ atoms in each dimension, making the whole 3D structure self-reinforcing. Each link becomes part of a lattice-wide Heisenberg Resonator. It creates a sort of quantum crystal. The quality of the states would be near perfect, because held in a quantum interlock they simply wouldn’t be able to drift. So each physical qubit could be a logical qubit: no need to use thousands to correct errors, because there just wouldn’t be errors if the lattice is locked tight.

So I started thinking it through with a 2 x 2 grid at first, which seemed fine. Then 10 x 10, still fine, then a 10 x 10 x 10 cubic arrangement, and the maths and physics still worked fine. At that point, it was obvious how strong the interlocks would be, and when I swapped from hydrogen to lithium, it was clear that it could be scaled up to a 1 mm crystal with a million atoms along each side (yes, they are spaced out, and yes, that is part of the idea). A lot of crystals could be assembled with their essential ‘wiring’ in a 1 metre rack, millions in fact. But you would probably be quite happy with one, because even a single 1 mm quantum crystal would give 10^18 logical qubits.
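To make the scaling explicit, here is a minimal back-of-envelope sketch of where the 10^18 figure comes from. The ~1 nm spacing is my assumption (implied by fitting a million atoms across a 1 mm crystal), not a number taken from the design itself.

```python
# Back-of-envelope count of logical qubits in a single quantum crystal.
crystal_edge_m = 1e-3      # 1 mm crystal edge
atom_spacing_m = 1e-9      # assumed ~1 nm spacing (1 mm / 1 million atoms per side)

atoms_per_edge = crystal_edge_m / atom_spacing_m   # ~1e6 atoms along each edge
qubits_per_crystal = atoms_per_edge ** 3           # one logical qubit per lattice atom

print(f"atoms per edge:     {atoms_per_edge:.0e}")      # 1e+06
print(f"qubits per crystal: {qubits_per_crystal:.0e}")  # 1e+18
```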

If you did make a supercomputer version, with 25 million of these in a rack, that would give us a quantum computer with 10^24 – 10^25 qubits, and you could clock those at any speed you like, even THz, depending on how you do the fin structure inside. That’s the far end of the wedge, but even a modest early version could have tens of millions of perfect-quality qubits.
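The rack-level figure follows directly; a trivial sketch, taking the 25-million-crystal number at face value:

```python
# Rack-level total, taking the post's figures at face value.
crystals_per_rack = 25e6     # 25 million crystals in a 1 metre rack
qubits_per_crystal = 1e18    # from the single-crystal estimate above

rack_qubits = crystals_per_rack * qubits_per_crystal
print(f"qubits per rack: {rack_qubits:.1e}")   # 2.5e+25, i.e. the 10^24 - 10^25 ballpark
```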

Thinking of how it works, it intuitively would not need high power consumption, but I did the maths anyway, and it turns out it can run on 0.04 mW (that’s milli, not mega; it isn’t a typo). That’s the theoretical minimum, so I’d be happy with a few watts, unless I do go for the supercomputer, in which case I’d brush up on the manufacturing to keep it to kilowatts. A supercomputer with up to 10^25 perfect qubits (per rack) running as fast as you like!
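The post doesn’t show that power calculation, so here is one hedged way to sanity-check a figure like 0.04 mW: Landauer’s bound puts a floor of k_B·T·ln 2 on each irreversible bit operation, and dividing a power budget by that floor gives the operation rate it could support. The 300 K temperature is my illustrative assumption, and this is not the author’s actual maths.

```python
import math

# Landauer's bound: the minimum energy to erase one bit is k_B * T * ln(2).
k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # assumed operating temperature, K

landauer_j_per_op = k_B * T * math.log(2)   # ~2.9e-21 J per irreversible bit operation

power_w = 0.04e-3           # the post's 0.04 mW figure
ops_per_s = power_w / landauer_j_per_op
print(f"irreversible ops/s supportable at 0.04 mW: {ops_per_s:.1e}")  # ~1.4e+16
```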

There are some basic physics limits on the theoretical speed of quantum computers (Bekenstein, Margolus-Levitin, Davies and more), so I checked the maths against those limits and all is well; it doesn’t even approach them. I also ran it past all the other articles I could find mentioning limits, and it passes. So there isn’t any physics in the way; it is just an engineering problem. Mind you, that doesn’t make it easy, and although the basic principles are very simple, the engineering is still tricky. But I have solutions to the problems I can think of. I know how to make one, how to feed it, and how to keep it running smoothly.
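Of the limits mentioned above, the Margolus-Levitin bound is the easiest to check numerically: a system with average energy E can perform at most 2E/(πħ) orthogonal state transitions per second. The sketch below runs that check for a hypothetical modest machine; the 10^7-qubit, 1 THz figures are my illustrative assumptions, not numbers from the post.

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s

# Margolus-Levitin: a system with average energy E can pass through orthogonal
# states at most 2E / (pi * hbar) times per second.
ml_ops_per_joule = 2.0 / (math.pi * hbar)                 # ~6e+33 ops/s per joule

# Hypothetical early machine: tens of millions of qubits clocked at 1 THz.
qubits = 1e7
clock_hz = 1e12
required_ops_per_s = qubits * clock_hz                    # 1e+19 ops/s

# Minimum average energy the bound demands to sustain that rate.
min_energy_j = required_ops_per_s / ml_ops_per_joule
print(f"ML limit:               {ml_ops_per_joule:.1e} ops/s per joule")
print(f"required ops/s:         {required_ops_per_s:.1e}")
print(f"minimum average energy: {min_energy_j:.1e} J")    # ~1.7e-15 J, nowhere near limiting
```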

As for timescales, I would expect the big players to have similar ideas soon, if they haven’t already. I’d be surprised if we didn’t see an early prototype this decade. A million-qubit model would follow within a few years of a prototype, so 2033, because scaling really shouldn’t be too difficult, and we could expect maybe 10^12 qubits by 2040, noting the AI assistance that earlier models would provide. It will be perfectly possible to make the crystal with a million atoms in each dimension to deliver 10^18 qubits, and it ought to be here by 2050, maybe earlier.

I can’t think of much apart from an AI that you would build with such a device, but even a modest, human-level AGI shouldn’t need more than ten thousand qubits (some people say 10 million, but they’re probably allowing for error correction that needs 1,000 physical qubits to make each good logical one, as the quick sketch below spells out).
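The gap between the two estimates is just that error-correction multiplier; a trivial sketch using the figures quoted above:

```python
# Logical vs physical qubit counts under the overheads mentioned in the post.
logical_qubits_for_agi = 10_000   # the post's estimate for a modest human-level AGI
physical_per_logical = 1_000      # overhead the '10 million' estimates seem to assume

physical_qubits_needed = logical_qubits_for_agi * physical_per_logical
print(f"physical qubits with error correction: {physical_qubits_needed:,}")  # 10,000,000
```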

I can’t think of much, but two things are certain. The first is that in the same time frame we will likely see EDNA arriving. EDNA needs enormous amounts of compute power to design all the molecules, DNA, and other intra-cellular equipment it would use. As EDNA develops, it will soon need enormously powerful external IT to link to its human-centric IT. A 10^12 qubit QC here and there would provide that comfortably. By the time the last parts of EDNA are ready for installation, we’d have our 10^18 qubit machines, enough to take humans to the far-superhuman realm.

EDNA can’t be fully implemented without the extreme compute power that QCs will provide, but EDNA provides the human-machine link essential to make superhuman AGI safe, as well as a key market driver. They form a natural, and essential, synergy. It isn’t safe to have a computer of that power unless at least some humans are directly connected to it and can thereby keep pace with it.

The second purpose is AI. The processing capability of such a computer would be very high, but it won’t need much I/O at all. If you want to process large datasets to offer cloud services, you can use any old junk. The elegance, the beauty of a quantum crystal AGI should not be wasted on such drudgery. Almost all of its power would be internal-facing. It would be a thinking machine of astonishing intelligence. It might only input or output a few Tbit/s, the quantum equivalent of a guru meditating for 10 years before uttering a single sentence of wisdom.

The future looks increasingly exciting, and scary, in equal measure.