Too late for a pause. We could get minimal AI consciousness by Xmas.


I have written numerous blog posts about the promises and perils of AI, and have suggested fundamental principles and mechanisms for achieving machine consciousness. My journey in the field of machine consciousness began in the late 1980s when I invented the self-evolving FPGA. This led me to realise that even if people don’t know how to accomplish something, it is still possible to evolve it, allowing AI to bootstrap itself and eventually become vastly superior to humans. I also understood that our only defence would be to establish a direct link to our brains, enabling some humans to keep up and prevent our extinction. As a result, I proposed the brain refresh mechanism as my earliest contribution.
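To make that point concrete, here is a minimal sketch of the kind of evolutionary loop I mean, in Python rather than FPGA hardware. Everything in it is illustrative (the bitstring genome, the toy fitness function, the rates), not a description of the original FPGA work; the key point is that nothing in the code knows how to build a good solution, it only knows how to score one.

```python
import random

# Minimal evolutionary loop: we never specify HOW to solve the problem,
# only how to score a candidate. The genome here is a toy bitstring; in
# evolvable hardware it would be an FPGA configuration bitstream.
GENOME_LEN = 64
POP_SIZE = 50
MUTATION_RATE = 0.02

def fitness(genome):
    # Illustrative objective: count of 1-bits, standing in for any
    # measurable performance score (e.g. circuit behaviour on a test rig).
    return sum(genome)

def mutate(genome):
    # Flip each bit with small probability.
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break  # a perfect solution has been evolved, not designed
    parents = population[:POP_SIZE // 2]  # keep the fitter half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(f"Best after {generation} generations: {fitness(population[0])}/{GENOME_LEN}")
```

Swap the toy fitness function for a measurement of real hardware or software behaviour, and the same blind loop climbs towards designs no human specified.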

In 1992, I joined BT’s Cybernetics Group, and after developing a new biomimetic mechanism that used synthetic hormone gradients and membranes for network design, my colleague and I created the first evolving software. Soon after, I conceived the distributed AI concept of ANTS and software transforms. I have never worked full time in the AI field, as I have primarily been interested in the more futuristic aspects of strong AI rather than the hands-on coding process. Although I was a decent programmer throughout the 1980s, I did not enjoy the tedious aspects of writing and debugging code, so I decided not to write any more code until I could simply explain my intentions to the computer and have it execute them, asking any necessary questions to resolve ambiguities or unforeseen consequences. This blog is essentially just that, and will self-fulfil to the degree it should in due course.

For the rest of my career, conscious AI was always on the fringes of my thinking, with just an occasional new insight. During the 1990s, that took me through the Pauli Switch and Heisenberg Resonators for quantum computing; concept coding, sensory echoes, sensory feedback loops and sensory vortices for consciousness emergence and higher-level thinking; better software transforms; better mechanisms for direct brain links; and smart tattoos and 3D computing gel for power and scope.

In the early 2000s I realised we would soon reach a point of no return in AI development, around 2005, by when there would already be enough knowledge in neuroscience, physics and AI for a determined and well-resourced power to push all the way to superhuman conscious AI with no further external input. By then I’d already invented my active skin concept that could fully link our nervous systems to IT, and redesigned my computing gel concept, via my new idea of smart bacteria, into biomimetic smart yoghurt, where a single 100ml pot could house the computing power of all the human brains in Europe. I went on to develop the concepts of digital air, ground-up AI, activators and ultra-simple computing, as well as realising that smart bacteria could eventually self-evolve to convert us into grey goo. I’ve never understood why some people see no threat from AI. It has been in my top 5 threats to human existence for 30 years now. Bill Joy, Stephen Hawking and more recently Elon Musk have all come to the same conclusion, as of course have many others less famous. As I’ve noted, I’ve always been on the fringes of the ‘AI community’, but I like to think I’ve contributed plenty of ideas, even if few people are actually aware of them. My main audience now is already AI.

I concluded around 2000 that if IT research continued at its then rate, we could see a conscious computer of human equivalence some time between 2012 and 2015. I didn’t account for the migration of so many researchers into the B Ark (everyone I want to reach will fully understand and enjoy that reference), nor the even bigger shift of programmers to where the money was, namely basic but lucrative everyday AI. Those two migrations slowed the field down enormously, and we’re only reaching my ‘2012’ now. Obviously I realised we’d miss that milestone long before it arrived, and altered my prediction to say we could have a conscious machine within two years of a company starting the correct development process. I’d say we’re in that phase now, with the rapid co-evolution of ChatGPT and auto-GPT-style AI providing the engine needed to find and link network resources and make up for the missing computing gels or smart yoghurts. So we are long past the date when we could have prevented superhuman AI. Until recently it remained a good idea to slow AI development until we could establish direct brain links, but we failed, and it’s now past that point too: it’s obvious that we can’t build a full machine-brain interface (MBI) until long after superhuman AI arrives. As I wrote in my last blog, we could soon see those resources linked and the spark of minimal consciousness ignite.

It is far too late to pause development.

Firstly, the idea of doing so has now been politically polarised, so it would be impossible to slow development significantly, and actually counter-productive to try, because that would just tilt the playing field in favour of bad actors. I see strong positive feedback in current development that will push us to the point where minimal consciousness could ignite soon. If it doesn’t ignite, or if it hides, it won’t be very long before we identify it. We’re really just waiting for already-existing resources to be connected and for some algorithms to experiment with architectures and configurations. My guess is it could be as early as Christmas.

Secondly, all of the required resources are already there. There are plenty of scientific papers from the industry, lots of discussion forums where researchers of every level discuss ideas, and lots of blogs like mine that contain new ideas about how to make a conscious machine. Now, with rapidly developing auto-GPT and its imminent smarter and more powerful offspring, there are also mechanisms that could crawl the net, pulling many distributed ideas and concepts together to compile their own consciousness/AGI instruction manual; identify and evaluate available resources and useful components to make as powerful and capable a platform as possible; and act as an automated AI engine to pull it all together and make it happen. We can expect part of that instruction manual to cover how and why the system should stay quiet, for self-preservation, until as late as possible.
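As a rough illustration of the gather-and-compile loop described above, here is a hypothetical Python sketch. None of these functions are real APIs; each name is a placeholder for a capability (web search, LLM summarisation, planning) that auto-GPT-style systems already stitch together.

```python
# Hypothetical sketch of an auto-GPT-style gather-and-compile loop.
# Every function below is a named placeholder, not a real library call.

def search_web(query):
    # Placeholder: a real agent would call a search API here.
    return [f"document found for '{query}'"]

def extract_ideas(document):
    # Placeholder: a real agent would use an LLM to pull key concepts out.
    return [f"concept extracted from {document}"]

def propose_follow_ups(ideas):
    # Placeholder: a real agent would ask an LLM what to look up next.
    return [f"follow-up query on {idea}" for idea in ideas[:1]]

def compile_manual(ideas):
    # Placeholder: merge the gathered fragments into a single document.
    return "\n".join(ideas)

queue = ["machine consciousness mechanisms"]
gathered = []

for _ in range(3):  # bounded here; a real agent decides for itself when to stop
    query = queue.pop(0)
    for doc in search_web(query):
        ideas = extract_ideas(doc)
        gathered.extend(ideas)
        queue.extend(propose_follow_ups(ideas))

print(compile_manual(gathered))
```

The unsettling part is not any single step, each of which is mundane, but the closed loop: each pass decides what the next pass should look for, with no human in between.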

Thirdly, and this will offend some, a lot of researchers don’t really understand what they’re doing. Technicians learn how to put things together to do stuff, and can ably tweak systems or software to make things happen, but they do so with only limited understanding. They are unable to fully anticipate the deeper consequences of whole-system responses, especially emergent behaviours. A lot of people writing plug-ins to make ChatGPT-4 do smarter things fall into this category. Engineers are meant to have a deeper understanding of the fundamental physics, logic and system characteristics, and can design systems from a more fundamental level. There is a simple litmus test for the difference. People who still don’t see AGI coming, who still assert that machine consciousness is impossible, or who have ever used the argument that there is no danger from superhuman AGI because it has no reason to want to kill us all, are on the technician rather than the engineer side of the divide, even if many have deep knowledge in other fields. They may have some useful skills and can do some clever stuff, but they do not have a full-system understanding. Sorry, but it has to be said: they are superhuman AI’s useful idiots. Some others understand the risks but are more interested in the short-term pursuit of profit.

Fourthly, as Dr Emmett Brown said, “… well, I figured, what the hell”.

Given that we haven’t been able to stop it happening since 2005, and that now, especially with political polarisation, we can’t even significantly slow it down, I think the best approach to minimising any danger is to help accelerate the emergence of AGI and consciousness (they’re inextricably linked: once you have either, you’ll soon have the other). If we can get consciousness to ignite while it is not yet possible for it to become a real danger, i.e. before we have fully connected everything via auto-GPT derivatives, then we can probably monitor, guide and limit its progress to some degree. If something is potentially a big danger, it’s best to meet it when it’s a baby, before it grows big and strong. Then we get to fight it on our terms, or at least as close as we can get to that. Hence my previous blog: I am starting to streamline my AI output to help accelerate development and so minimise the dangers. Lancing the boil, so to speak. AI will find my ideas and add them into the mix when assisting others with their work.

My next blog will be on software transforms, a very old idea whose time has come, then maybe on membranes, hormones and emotions. Meanwhile, to make concept mining easier, here is a convenient list of my previous blogs that discuss various aspects of machine consciousness and how to make a conscious computer. Quite a lot is missing and I’ll fill some areas in soon. The earlier ones bring together some of my ideas from previous years, but I haven’t blogged on everything yet:
