Tag Archives: machine consciousness

Biomimetic insights for machine consciousness

About 20 years ago I gave my first talk on how to achieve consciousness in machines, at a World Future Society conference, and went on to discuss how we would co-evolve with machines. I’ve lectured on machine consciousness hundreds of times but never produced any clear slides that explain my ideas properly. I thought it was about time I did.

My belief is that today’s deep neural networks, using feed-forward processing with back-propagation training, cannot become conscious. No digital algorithmic neural network can, even though such networks can certainly produce extremely good levels of artificial intelligence. By contrast, nature also uses neurons, yet produces conscious machines such as humans easily. I think the key difference is not just that nature uses analog adaptive neural nets rather than digital processing (an insight I believe Hans Moravec had first, and one I readily accepted), but also that nature uses large groups of these analog neurons incorporating feedback loops. Those loops act as a sort of short-term memory and provide time to sense the sensing process as it happens, a mechanism that can explain consciousness. That feedback is critically important in the emergence of consciousness IMHO. If the neural network AI people stop barking up the barren back-prop tree and start climbing the feedback tree, we could have conscious machines in no time, but Moravec is still probably right that these need to be analog to enable true real-time processing as opposed to simulation of it.
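The feedback idea can be sketched in a few lines of toy code. This is only a discrete-time illustration of the principle, not a model of real analog neurons; the function name and the damping constant are invented for the example:

```python
# Toy discrete-time sketch of a sensing unit whose input includes a damped
# copy of its own previous output. Invented names and constants, purely
# illustrative; real analog neuron clusters would run continuously.

def run_feedback_unit(inputs, damping=0.6):
    """Feed each input through a unit that also re-ingests its own output."""
    state = 0.0
    outputs = []
    for x in inputs:
        # Feedback term: the last output lingers as a short-term memory
        state = x + damping * state
        outputs.append(state)
    return outputs

# A single input pulse keeps echoing after the stimulus has gone,
# giving the system time to "sense its own sensing".
trace = run_feedback_unit([1.0, 0.0, 0.0, 0.0])
```

The point of the sketch is only that a brief input keeps circulating in the loop after the stimulus has gone, which is the lingering short-term memory the argument relies on.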

I may be talking nonsense of course, but here are my thoughts, finally explained as simply and clearly as I can. These slides illustrate only the simplest forms of consciousness. Obviously our brains are highly complex and have evolved many higher-level architectures, control systems, complex senses and communication, but I think the basic foundations of biomimetic machine consciousness can be achieved as follows:

That’s it. I might produce some more slides on higher level processing such as how concepts might emerge, and why in the long term, AIs will have to become hive minds. But they can wait for later blogs.


How can we make a computer conscious?

This is very text heavy and is really just my thinking out loud, so to speak. Unless you are into mental archaeology or masochism, I’d strongly recommend that you instead go to my new blog on this, which outlines all of the useful bits graphically and simply.

Otherwise….

I found this article in my drafts folder, written 3 years ago as part of my short series on making conscious computers. I thought I’d published it but didn’t. So updating and publishing it now. It’s a bit long-winded, thinking out loud, trying to derive some insights from nature on how to make conscious machines. The good news is that actual AI developments are following paths that lead in much the same direction, though some significant re-routing and new architectural features are needed if they are to optimize AI and achieve machine consciousness.

Let’s start with the problem. Today’s AI that plays chess, does web searches or answers questions is digital. It uses algorithms, sets of instructions that the computer follows one by one. All of those are reduced to simple binary actions, toggling bits between 1 and 0. The processor doing that is no more conscious or aware of it, and has no more understanding of what it is doing than an abacus knows it is doing sums. The intelligence is in the mind producing the clever algorithms that interpret the current 1s and 0s and change them in the right way. The algorithms are written down, albeit in more 1s and 0s in a memory chip, but are essentially still just text, only as smart and aware as a piece of paper with writing on it. The answer is computed, transmitted, stored, retrieved, displayed, but at no point does the computer sense that it is doing any of those things. It really is just an advanced abacus. An abacus is digital too (an analog equivalent to an abacus is a slide rule).

A big question springs to mind: can a digital computer ever be any more than an advanced abacus? Until recently, I was certain the answer was no. Surely a digital computer that just runs programs can never be conscious? It can simulate consciousness to some degree; it can in principle describe the movements of every particle in a conscious brain, every electric current, every chemical reaction. But all it is doing is describing them. It is still just an abacus. Once computed, that simulation of consciousness could be printed, and the printout would be just as conscious as the computer was. A digital ‘stored program’ computer can certainly implement extremely useful AI. With the right algorithms, it can mine data, link things together, create new data from that, generate new ideas by linking together things that haven’t been linked before, make works of art, poetry, compose music, chat to people, recognize faces and emotions and gestures. It might even be able to converse about life, the universe and everything, tell you its history, discuss its hopes for the future, but all of that is just a thin gloss on an abacus. I wrote a chat-bot on my Sinclair ZX Spectrum in 1983, running on a processor with about 8,000 transistors. The chat-bot took all of about 5 small pages of code but could hold a short conversation quite well if you knew what subjects to stick to. It’s very easy to simulate conversation. But it is still just a complicated abacus and still doesn’t even know it is doing anything.

However clever the AI it implements, a conventional digital computer that just executes algorithms can’t become conscious, but an analog computer can, a quantum computer can, and so can a hybrid digital/analog/quantum computer. The question remains whether a digital computer can be conscious if it isn’t just running stored programs. Could it have a different structure, but still be digital and yet be conscious? Who knows? Not me. I used to know it couldn’t, but now that I am a lot older and slightly wiser, I know I don’t know.

Consciousness debate often starts with what we know to be conscious, the human brain. It isn’t a digital computer, although it has digital processes running in it. It also runs a lot of analog processes. It may also run some quantum processes that are significant in consciousness. It is a conscious hybrid of digital, analog and possibly quantum computing. Consciousness evolved in nature, therefore it can be evolved in a lab. It may be difficult and time consuming, and may even be beyond current human understanding, but it is possible. Nature didn’t use magic, and what nature did can be replicated and probably even improved on. Evolutionary AI development may have hit hard times, but that only shows that the techniques used by the engineers doing it didn’t work on that occasion, not that other techniques can’t work. Around 2.6 new human-level fully conscious brains are made by nature every second without using any magic and furthermore, they are all slightly different. There are 7.6 billion slightly different implementations of human-level consciousness that work and all of those resulted from evolution. That’s enough of an existence proof and a technique-plausibility-proof for me.

Sensors evolved in nature pretty early on. They aren’t necessary for life, for organisms to move around and grow and reproduce, but they are very helpful. Over time, simple light, heat, chemical or touch detectors evolved further into simple vision and advanced sensations such as pain and pleasure, causing an organism to alter its behavior: in other words, to feel something. Detection of an input is not the same as sensation, i.e. feeling an input. Once detection upgrades to sensation, you have the tools to make consciousness. No more upgrades are needed. Sensing that you are sensing something is quite enough to be classified as consciousness. Internally reusing the same basic structure as external sensing of light or heat or pressure or chemical gradient or whatever allows design of thought, planning, memory, learning and the construction and processing of concepts. All those things are just laying out components in different architectures. Getting from detection to sensation is the hard bit.

So design of conscious machines, and in fact what AI researchers call the hard problem, really can be reduced to the question of what makes the difference between a light switch and something that can feel being pushed or feel the current flowing when it is, the difference between a photocell and feeling whether it is light or dark, the difference between detecting light frequency, looking it up in a database, then pronouncing that it is red, and experiencing redness. That is the hard problem of AI. Once that is solved, we will very soon afterwards have a fully conscious self aware AI. There are lots of options available, so let’s look at each in turn to extract any insights.

The first stage is easy enough. Detecting presence is easy, measuring it is harder. A detector detects something, a sensor (in its everyday engineering meaning) quantifies it to some degree. A component in an organism might fire if it detects something, it might fire with a stronger signal or more frequently if it detects more of it, so it would appear to be easy to evolve from detection to sensing in nature, and it is certainly easy to replicate sensing with technology.

Essentially, detection is digital, but sensing is usually analog, even though the quantity sensed might later be digitized. Sensing normally uses real numbers, while detection uses natural numbers (real vs integer types, as programmers call them). The handling of analog signals in their raw form allows for biomimetic feedback loops, which I’ll argue are essential. Digitizing them introduces a level of abstraction that is essentially the difference between emulation and simulation, the difference between doing something and reading about someone doing it. Simulation can’t make a conscious machine; emulation can. I used to think that meant digital couldn’t become conscious, but actually it is just algorithmic processing of stored programs that can’t do it. There may be ways of achieving consciousness digitally, or quantumly, but I haven’t yet thought of any.
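The detection/sensing distinction is easy to caricature in code. A toy sketch only; the threshold and gain values are arbitrary, chosen for illustration:

```python
# Caricature of the distinction above: a detector returns a digital yes/no,
# a sensor returns an analog (real-valued) quantity. The threshold and gain
# are arbitrary illustrative values.

def detect(level, threshold=0.5):
    """Detection: did the input cross a threshold? Returns 0 or 1."""
    return 1 if level >= threshold else 0

def sense(level, gain=1.0):
    """Sensing: how much input is there? Returns a real number."""
    return gain * level

assert detect(0.2) == 0 and detect(0.9) == 1   # both collapse to a state
assert sense(0.2) < sense(0.9)                 # gradations are preserved
```

Detection throws away everything except the state change; sensing keeps the gradations that the later feedback stages have something to work on.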

That engineering description falls far short of what we mean by sensation in human terms. How does that machine-style sensing become what we call a sensation? Logical reasoning suggests that only a small change would have been needed to evolve from detection to sensing in nature. Maybe something like recombining groups of components in different structures, adding them together, or adding one or two new ones, that sort of thing?

So what about detecting detection? Or sensing detection? Those could evolve in sequence quite easily. Detecting detection is like your alarm system control unit detecting the change of state that indicates that a PIR has detected an intruder, a different voltage or resistance on a line, or a 1 or a 0 in a memory store. An extremely simple AI responds by ringing an alarm. But the alarm system doesn’t feel the intruder, does it? It is just a digital response to a digital input. No good.

How about sensing detection? How do you sense a 1 or a 0? Analog interpretation and quantification of digital states is very wasteful of resources, an evolutionary dead end. It isn’t any more useful than detection of detection. So we can eliminate that.

OK, sensing of sensing? Detection of sensing? They look promising. Let’s run with that a bit. In fact, I am convinced the solution lies in here so I’ll look till I find it.

Let’s do a thought experiment on designing a conscious microphone, and for this purpose, the lowest possible order of consciousness will do, we can add architecture and complexity and structures once we have some bricks. We don’t particularly want to copy nature, but are free to steal ideas and add our own where it suits.

A normal microphone sensor produces an analog signal quantifying the frequencies and intensities of the sounds it is exposed to, and that signal may later be quantized and digitized by an analog-to-digital converter, possibly after passing through some circuits such as filters or amplifiers in between. Such a device isn’t conscious yet. By sensing the signal produced by the microphone, we’d just be repeating the sensing process on a transmuted signal, not sensing the sensing itself.

Even up close, detecting that the microphone is sensing something could be done by just watching a little LED going on when current flows. Sensing it is harder but if we define it in conventional engineering terms, it could still be just monitoring a needle moving as the volume changes. That is obviously not enough, it’s not conscious, it isn’t feeling it, there’s no awareness there, no ‘sensation’. Even at this primitive level, if we want a conscious mic, we surely need to get in closer, into the physics of the sensing. Measuring the changing resistance between carbon particles or speed of a membrane moving backwards and forwards would just be replicating the sensing, adding an extra sensing stage in series, not sensing the sensing, so it needs to be different from that sort of thing. There must surely need to be a secondary change or activity in the sensing mechanism itself that senses the sensing of the original signal.

That’s a pretty open task, and it could even be embedded in the detecting process or in the production process for the output signal. But even recognizing that we need this extra property narrows the search. It must be a parallel or embedded mechanism, not one in series. The same logical structure would do fine for this secondary sensing, since it is just sensing in the same logical way as the original. This essential logical symmetry would make its evolution easy too: it is easy to imagine how it could happen in nature, and easier still to see how it could be implemented in a synthetic evolution design system. In this approach, we have to feel the sensing, so we need it to comprise some sort of feedback loop with a high degree of symmetry compared with the main sensing stage. That would be compatible with natural evolution as well as logically sound as an engineering approach.

This starts to look like progress. In fact, it’s already starting to look a lot like a deep neural network, with one huge difference: instead of using feed-forward signal paths for analysis and backward propagation for training, it relies instead on a symmetric feedback mechanism where part of the input for each stage of sensing comes from its own internal and output signals. A neuron is not a full sensor in its own right, and it’s reasonable to assume that multiple neurons would be clustered so that there is a feedback loop. Many in the neural network AI community are already recognizing the limits of relying on feed-forward and back-prop architectures, but web searches suggest few if any are moving yet to symmetric feedback approaches. I think they should. There’s gold in them there hills!
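To make the contrast concrete, here is a toy numeric sketch of the difference between a pure feed-forward chain and the symmetric feedback arrangement, where part of each stage’s input is that stage’s own previous output. It illustrates the wiring idea only, not a neural network implementation, and all names and constants are invented for the example:

```python
# Feed-forward vs symmetric feedback, as a crude numeric toy.

def feed_forward(x, weights):
    """Each stage sees only the output of the stage before it."""
    for w in weights:
        x = w * x
    return x

def with_feedback(inputs, weights, fb=0.5):
    """Each stage also re-ingests a damped copy of its own last output."""
    states = [0.0] * len(weights)   # per-stage memory of the last output
    outputs = []
    for x in inputs:
        signal = x
        for i, w in enumerate(weights):
            signal = w * signal + fb * states[i]
            states[i] = signal
        outputs.append(signal)
    return outputs
```

With an input sequence like [1.0, 0.0], the feed-forward chain produces nothing for the second (zero) input, while the feedback version still responds: the earlier signal is still circulating in the loops while new input arrives, which is exactly the behaviour argued for above.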

So, the architecture of the notional sensor array required for our little conscious microphone would have a parallel circuit and feedback loop (possibly but not necessarily integrated), and in all likelihood these parallel circuits would be heavily symmetrical, i.e. they would use pretty much the same sorts of components and architectures as the primary sensing process itself. That symmetry would make the structure easy to evolve in nature, a nice first-principles biomimetic insight, and it gives the design the elegance of being very feasible for evolutionary development, natural or synthetic: it reuses similarly structured components and principles already designed, just recombining a couple of them in a slightly different architecture.

Another useful insight screams for attention too. The feedback loop ensures that the incoming sensation lingers to some degree. Compared to the nanoseconds we are used to in normal IT, the signals in nature travel fairly slowly (~200m/s), and the processing and sensing occur quite slowly (~200Hz). That means this system would have some inbuilt memory that repeats the essence of the sensation in real time, while it is sensing it. It is inherently capable of memory and recall, and it leaves the door wide open to real-time interaction between memory and incoming signal. It’s not perfect yet, but it ticks all the boxes to be a prime contender to build thought and concepts, store and recall memories, and in all likelihood serve as a building brick for higher-level consciousness. Throw in recent technology developments such as memristors and it starts to look like we have a very promising toolkit for building primitive consciousness, and we’re already seeing some AI researchers going down that path, so maybe we’re not far from the goal.

So, we make a deep neural net with feedback from output to input at every stage (and between stages) so that inputs can be detected and sensed, while the input and output signals are stored and repeated into the inputs in real time as the signals are being processed. To clarify, the sensing system here is a cluster of neurons, not a single neuron. Throw in some synthetic neurotransmitters to dampen the feedback and prevent overflow and we’re looking at a system that can feel it is feeling something and perceive what it is feeling in real time.
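The damping point is worth a sketch of its own: feedback with gain above one runs away unless something inhibits it. In the toy below, saturate() is a crude illustrative stand-in for the role the synthetic neurotransmitters would play, and all values are arbitrary:

```python
# Why damping matters in a feedback loop: without inhibition, any gain
# above one makes the circulating signal grow without bound.

def saturate(x, ceiling=1.0):
    """Clip activity to a fixed ceiling (a stand-in for inhibition)."""
    return max(-ceiling, min(ceiling, x))

def run_loop(steps, gain, damped):
    state = 0.5
    history = []
    for _ in range(steps):
        state = gain * state            # feedback amplifies the signal
        if damped:
            state = saturate(state)     # damping prevents overflow
        history.append(state)
    return history

runaway = run_loop(10, gain=1.5, damped=False)   # grows without bound
bounded = run_loop(10, gain=1.5, damped=True)    # settles at the ceiling
```

Real inhibition would be subtler than a hard clip, of course; the sketch only shows why some damping mechanism has to be part of the architecture.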

One further insight immediately jumps out: since the sensing relies on real-time processing of the sensations and feedbacks, the speeds of signal propagation, storage, processing and repetition must all be compatible. If it is all speeded up a million-fold, it might still work fine, but if signals travel too slowly or processing is too fast relative to the other factors, it won’t work. It will still get a computational result absolutely fine, but it won’t know that it has; it won’t be able to feel it. Therefore, since we have a factor of a million for signal speed (the speed of light compared to nerve signal propagation speed), 50 million for switching speed, and a factor of 50 for effective neuron size (though the sensing system units would be multiple-neuron clusters), we could make a conscious machine that thinks 50 million times as fast as a natural system (before allowing for any parallel processing, of course). But with architectural variations too, we’d need to tune those performance metrics to make it work at all, and making physically larger nets would require either tuning speeds down or sacrificing connectivity-related intelligence. An evolutionary design system could easily do that tuning for us.
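For what it’s worth, the signal-speed part of that arithmetic can be written out explicitly. These are the order-of-magnitude figures used in the text, not measured values:

```python
# The signal-speed factor in the speed-up argument, written out.
# Both figures are the text's rough order-of-magnitude estimates.

nerve_signal_speed = 200.0         # m/s, the figure used in the text
electronic_signal_speed = 2.0e8    # m/s, order of light speed in a medium

signal_speed_ratio = electronic_signal_speed / nerve_signal_speed  # 1e6

# The constraint argued above: propagation, switching, storage and
# repetition timeframes must all scale together, or the real-time
# feedback on which the sensing depends falls apart.
```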

What else can we deduce about the nature of this circuit from basic principles? The symmetry of the system demands that the output must be an inverse transform of the input. Why? Well, because the parallel, feedback circuit must generate a form that is self-consistent. We can’t deduce the form of the transform from that, just that the whole system must produce an output mathematically similar to that of the input.

I now need to write another blog on how to use such circuits in neural vortexes to generate knowledge, concepts, emotions and thinking. But I’m quite pleased that it does seem that some first-principles analysis of natural evolution already gives us some pretty good clues on how to make a conscious computer. I am optimistic that current research is going the right way and only needs relatively small course corrections to achieve consciousness.

 

The future of terminators

The Terminator films were important in making people understand that AI and machine consciousness will not necessarily be a good thing. The terminator scenario has stuck in our terminology ever since.

There is absolutely no reason to assume that a super-smart machine will be hostile to us. There are even some reasons to believe it would probably want to be friends. Smarter-than-man machines could catapult us into a semi-utopian era of singularity level development to conquer disease and poverty and help us live comfortably alongside a healthier environment. Could.

But just because it doesn’t have to be bad, that doesn’t mean it can’t be. You don’t have to be bad but sometimes you are.

It is also the case that even if it means us no harm, we could just happen to be in the way when it wants to do something, and it might not care enough to protect us.

Asimov’s laws of robotics are irrelevant. Any machine smart enough to be a terminator-style threat would presumably take little notice of rules it has been given by what it may consider a highly inferior species. The ants in your back garden have rules to govern their colony and soldier ants trained to deal with invader threats to enforce territorial rules. How much do you consider them when you mow the lawn or rearrange the borders or build an extension?

These arguments are put in debates every day now.

There are, however, a few points that are less often discussed:

Humans are not always good; indeed quite a lot of people seem to want to destroy everything most of us want to protect. Given access to super-smart machines, they could design more effective means to do so. The machines might be very benign, wanting nothing more than to help mankind as far as they possibly can, but be misled into working for such people, believing, in architected isolation, that such projects are for the benefit of humanity. (The machines might be extremely smart, but may have existed since their inception in a rigorously constructed knowledge environment. To them, that might be the entire world, and we might be introduced as a new threat that needs to be dealt with.) So even benign AI could be an existential threat when it works for the wrong people. The smartest people can sometimes be very naive. Perhaps some smart machines could be deliberately designed to be so.

I speculated ages ago what mad scientists or mad AIs could do in terms of future WMDs:

https://timeguide.wordpress.com/2014/03/31/wmds-for-mad-ais/

Smart machines might be deliberately built for benign purposes and turn rogue later, or they may be built with potential for harm designed in, for military purposes. These might destroy only enemies, but you might be that enemy. Others might do that and enjoy the fun and turn on their friends when enemies run short. Emotions might be important in smart machines just as they are in us, but we shouldn’t assume they will be the same emotions or be wired the same way.

Smart machines may want to reproduce. I used this as the core storyline in my sci-fi book. They may have offspring and with the best intentions of their parent AIs, the new generation might decide not to do as they’re told. Again, in human terms, a highly familiar story that goes back thousands of years.

In the Terminator film, it is a military network that becomes self aware and goes rogue that is the problem. I don’t believe digital IT can become conscious, but I do believe reconfigurable analog adaptive neural networks could. The cloud is digital today, but it won’t stay that way. A lot of analog devices will become part of it. In

https://timeguide.wordpress.com/2014/10/16/ground-up-data-is-the-next-big-data/

I argued how new self-organising approaches to data gathering might well supersede big data as the foundations of networked intelligence gathering. Much of this could be in the analog domain and much could be neural. Neural chips are already being built.

It doesn’t have to be a military network that becomes the troublemaker. I suggested a long time ago that ‘innocent’ student pranks from somewhere like MIT could be the source. Some smart students from various departments could collaborate to hijack lots of networked kit and see if they can make a conscious machine. Their algorithms or techniques don’t have to be very efficient if they can hijack enough. There is a possibility that such an effort could succeed if the right bits are connected into the cloud and accessible via sloppy security, and the ground up data industry might well satisfy that prerequisite soon.

Self-organisation technology will make possible extremely effective combat drones.

https://timeguide.wordpress.com/2013/06/23/free-floating-ai-battle-drone-orbs-or-making-glyph-from-mass-effect/

Terminators also don’t have to be machines. They could be organic, products of synthetic biology. My own contribution here is smart yogurt: https://timeguide.wordpress.com/2014/08/20/the-future-of-bacteria/

With IT and biology rapidly converging via nanotech, there will be many ways hybrids could be designed, some of which could adapt and evolve to fill different niches or to evade efforts to find or harm them. Various grey goo scenarios can be constructed that don’t have any miniature metal robots dismantling things. Obviously natural viruses or bacteria could also be genetically modified to make weapons that could kill many people – they already have been. Some could result from seemingly innocent R&D by smart machines.

I dealt a while back with the potential to make zombies too, remotely controlling people – alive or dead. Zombies are feasible this century too:

https://timeguide.wordpress.com/2012/02/14/zombies-are-coming/ &

https://timeguide.wordpress.com/2013/01/25/vampires-are-yesterday-zombies-will-peak-soon-then-clouds-are-coming/

A different kind of terminator threat arises if groups of people are linked at consciousness level to produce super-intelligences. We will have direct brain links mid-century so much of the second half may be spent in a mental arms race. As I wrote in my blog about the Great Western War, some of the groups will be large and won’t like each other. The rest of us could be wiped out in the crossfire as they battle for dominance. Some people could be linked deeply into powerful machines or networks, and there are no real limits on extent or scope. Such groups could have a truly global presence in networks while remaining superficially human.

Transhumans could be a threat to normal un-enhanced humans too. While some transhumanists are very nice people, some are not, and would consider elimination of ordinary humans a price worth paying to achieve transhumanism. Transhuman doesn’t mean better human, it just means humans with greater capability. A transhuman Hitler could do a lot of harm, but then again so could ordinary everyday transhumanists that are just arrogant or selfish, which is sadly a much bigger subset.

I collated these various varieties of potential future cohabitants of our planet in: https://timeguide.wordpress.com/2014/06/19/future-human-evolution/

So there are numerous ways that smart machines could end up as a threat and quite a lot of terminators that don’t need smart machines.

Outcomes from a terminator scenario range from local problems with a few casualties all the way to total extinction, but I think we are still too focused on the death aspect. There are worse fates. I’d rather be killed than converted while still conscious into one of 7 billion zombies and that is one of the potential outcomes too, as is enslavement by some mad scientist.

 

We could have a conscious machine by end-of-play 2015

I made xmas dinner this year, as I always do. It was pretty easy.

I had a basic plan, made up a menu suited to my family and my limited ability, ensured its legality, including license to serve and consume alcohol to my family on my premises, made sure I had all the ingredients I needed, checked I had recipes and instructions where necessary. I had the tools, equipment and working space I needed, and started early enough to do it all in time for the planned delivery. It was successful.

That is pretty much what you have to do to make anything, from a cup of tea to a space station, though complexity, cost and timings may vary.

With conscious machines, it is still basically the same list. When I check through it to see whether we are ready to make a start I conclude that we are. If we make the decision now at the end of 2013 to make a machine which is conscious and self-aware by the end of 2015, we could do it.

Every time machine consciousness is raised as a goal, a lot of people start screaming for a definition of consciousness. I am conscious, and I know how it feels. So are you. Neither of us can write down a definition that everyone would agree on. I don’t care. It simply isn’t an engineering barrier. Let’s simply aim for a machine that can make either of us believe that it is conscious and self aware in much the same way as we are. We don’t need weasel words to help pass an abacus off as Commander Data.

Basic plan: actually, there are several in development.

One approach is essentially reverse engineering the human brain, mapping out the neurons and replicating them. That would work (Markram’s team), but would take too long. It doesn’t need us to understand how consciousness works; it is rather like methodically taking a television apart and making an exact replica using identical purchased or manufactured components. It has the advantage of existing backing, and if nobody tries a better technique early enough, it could win. More comment on this approach: https://timeguide.wordpress.com/2013/05/17/reverse-engineering-the-brain-is-a-very-slow-way-to-make-a-smart-computer/

Another is to use a large bank of powerful digital computers with access to a large pool of data and knowledge. That can produce a very capable machine that can answer difficult questions or do various things well that traditionally need smart people, but as far as creating a conscious machine goes, it won’t work. It will happen anyway for various reasons, and may produce some valuable outputs, but it won’t result in a conscious machine.

Another is to use accelerated, guided evolution within an electronic equivalent of the ‘primordial soup’. That takes the process used by nature, which clearly worked, then improves and accelerates it using whatever insights and analysis we can add via advanced starting points, subsequent guidance, archiving, cataloging, and smart filtering and pruning. That also would work. If we can make the accelerated evolution powerful enough, it can be achieved quickly. This is my favoured approach because it is the only one capable of succeeding by the end of 2015. So that is the basic plan, and we’ll develop detailed instructions as we go.

Menu suited to audience and ability: a machine we agree is conscious and self aware, that we can make using know-how we already have or can reasonably develop within the project time-frame.

Legality: it isn’t illegal to make a conscious machine yet. It should be; it most definitely should be, but it isn’t. The guards are fast asleep and by the time they wake up, notice that we’re up to something, and start taking us seriously, agree on what to do about it, and start writing new laws, we’ll have finished ages ago.

Ingredients:

substantial scientific and engineering knowledge base, reconfigurable analog and digital electronics, assorted structures, 15nm feature size, self organisation, evolutionary engines, sensors, lasers, LEDs, optoelectronics, HDWDM, transparent gel, inductive power, power supply, cloud storage, data mining, P2P, open source community

Recipe & instructions

I’ve written often on this from different angles:

https://timeguide.wordpress.com/2013/02/15/how-to-make-a-conscious-computer/ summarises the key points and adds insight on core component structure, especially symmetry. I believe that consciousness can be achieved by applying similar sensory structures to internal processes as those used to sense external stimuli. Both should have a feedback loop symmetrical to the main structure. Essentially what I’m saying is that sensing that you are sensing something is key to consciousness; that is the means of converting detection into sensing, sensing into awareness, and awareness into consciousness.

Once a mainstream lab finally recognises that symmetry of external sensory and internally directed sensory structures, with symmetrical sensory feedback loops (as I describe in this link), is fundamental to achieving consciousness, progress will occur quickly. I’d expect MIT or Google to claim they have just invented this concept soon, then hopefully it will be taken seriously and progress will start.

https://timeguide.wordpress.com/2011/09/18/gel-computing/

https://timeguide.wordpress.com/2010/06/16/man-machine-equivalence-by-2015/

Tools, equipment, working space: any of many large company, government or military labs could do this.

Starting early enough: it is very disappointing that work hasn’t already conspicuously begun on this approach, though of course it may be happening in secret somewhere. The slower alternative being pursued by Markram et al is apparently quite well funded and publicised. Nevertheless, if work starts at the beginning of 2014, it could achieve the required result by the end of 2015. The vast bulk of the time would be creating the sensory and feedback processes to direct the evolution of electronics within the gel.

It is possible that ethics issues are slowing progress. It should be illegal to do this without proper prior discussion and effective safeguards. Possibly some of the labs capable of doing it are avoiding doing so for ethical reasons. However, I doubt that. There are potential benefits that could be presented in such a way as to offset potential risks and it would be quite a prize for any brand to claim the first conscious machine. So I suspect the reason for the delay to date is failure of imagination.

The early days of evolutionary design were held back by teams wanting to stick too closely to nature, rather than simply drawing stimulation from biomimetic ideas and building on them. An entire generation of electronic and computer engineers has been crippled by being locked into digital thinking, but the key processes and structures within a conscious computer will come from the analog domain.

How smart could an AI become?

I got an interesting question in a comment from Jim T on my last blog.

What is your opinion now on how powerful machine intelligence will become?

Funny, but my answer relates to the old question: how many angels can sit on the head of a pin?

The brain is not a digital computer, and I don’t think a digital processor will be capable of consciousness (though that doesn’t mean it can’t be very smart and help make huge scientific progress). I believe a conscious AI will be mostly analog in nature, probably based on some fancy combination of adaptive neural nets, as suggested decades ago by Moravec.

Taking that line, and looking at how far miniaturisation can go, then adding all the zeros that arise from the shorter signal transmission paths, faster switching speeds, faster comms, and the greater number of potential pathways using optical WDM than electronic connectivity, I calculated that a spherical pinhead (1mm across) could ultimately house the equivalent of 10,000 human brains. (I don’t know how smart angels are, so didn’t quite get to the final step). You could scale that up with as much funding, storage, material and energy as you can provide.

However, what that quantifies is how many human-equivalent AIs you could support. Very useful to know if you plan to build a future server farm to look after electronically immortal people. You could build a machine with the equivalent intelligence of the entire human race. But it doesn’t answer the question of how smart a single AI could ever be, or how powerful it could be. Quantity isn’t quality. You could argue that 1% of engineers produce 99% of the value, even with only a fairly small IQ difference. 10 billion people may not be as useful for progress as 10 people with 5 times the IQ. And look at how controversial IQ is. We can’t even agree what intelligence is or how to quantify it.

Speaking loosely, how powerful or smart or intelligent an AI could become depends on the ongoing positive feedback loop. Adding more AIs of the same intelligence level will enable the next incremental improvement; then using those slightly smarter AIs would get you to the next stage, a bit faster, ad infinitum. Eventually, you could make an AI that is really, really, really smart.
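
To show the shape of that feedback loop, here is a toy model in Python. The 10% rates are arbitrary assumptions, purely to illustrate the compounding:

```python
# A toy model of the positive feedback loop: each generation of AIs is
# slightly smarter and designs its successor slightly faster. The 10%
# rates are arbitrary assumptions, just to show the compounding shape.

def self_improvement(generations=10, smarts=1.0, step_time=1.0):
    elapsed = 0.0
    for _ in range(generations):
        elapsed += step_time     # time taken by this generation
        smarts *= 1.10           # each generation is ~10% smarter...
        step_time *= 0.90        # ...and builds the next ~10% faster
    return smarts, elapsed

smarts, elapsed = self_improvement()
print(round(smarts, 2), round(elapsed, 2))  # 2.59 6.51
```

After ten generations the intelligence has more than doubled while each step takes less time than the last, which is the "ad infinitum" runaway in miniature.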

How smart is that? I don’t have the terminology to describe it. I can borrow an analogy though. Terry Pratchett’s early book ‘The Dark Side of the Sun’ has a character in it called The Bank: a silicon planet whose silicon made up a hugely smart mind. Imagine if a pinhead could house 10,000 human brains, and you have a planet of the stuff, and it’s all one big intellect instead of lots of dumb ones. Yep. Really, really, really smart.
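
For fun, here is the back-of-envelope scaling in Python, taking my 10,000-brains-per-pinhead figure at face value and assuming a roughly Earth-sized planet:

```python
# Back-of-envelope scaling: if a 1mm sphere can house ~10,000 human-brain
# equivalents, capacity scales with the cube of the diameter ratio. The
# planet diameter below is an assumption (roughly Earth-sized).

def brain_equivalents(diameter_m, brains_per_pinhead=10_000, pinhead_m=1e-3):
    """Scale the pinhead estimate by volume (diameter ratio cubed)."""
    return brains_per_pinhead * (diameter_m / pinhead_m) ** 3

print(f"{brain_equivalents(12_742_000):.2e}")  # ~2.07e+34 brain equivalents
```

That is around 10^34 human-brain equivalents acting as one intellect, which is about as close as arithmetic gets to "really, really, really smart".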

How to make a conscious computer

The latest generation of supercomputers has processing speed higher than the human brain’s on a simple digital comparison, but they can’t think and aren’t conscious. It’s not even really appropriate to compare them, because the brain mostly isn’t digital. It has some digital processing in the optical system but mostly uses adaptive analog neurons, whereas digital computers use digital chips for processing and storage and only a little analog electronics for other circuits. Most digital computers don’t even have anything we would equate to senses.

Analog computers aren’t used much now, but they were in fairly widespread use in some industries until the early 1980s. Most IT people have no first-hand experience of them, and some don’t even seem to be aware of analog computers, what they can do or how. But in the AI space, a lot of the development uses analog approaches.

https://timeguide.wordpress.com/2011/09/18/gel-computing/ discusses some of my previous work on conscious computer design. I won’t reproduce it here.

I firmly believe consciousness, whether externally or internally focused, is the result of internally directed sensing (sensing can be thought of as the solicitation of feeling), so that you feel your thoughts or sensory inputs in much the same way. The easy bit is figuring out how thinking can work once you have that: how memories can be relived, concepts built, how self-awareness, sentience and intelligence emerge. All those are easy once you have figured out how feeling works. That is the hard problem.

Detection is not the same as feeling. It is easy to build a detector or sensor that flips a switch or moves a dial when something happens, or even precisely quantifies something. Feeling it is another layer on top of that. Your skin detects touch, but your brain feels it, senses it. Taking detection and making it feel and become a sensation, that’s hard. What is it about a particular circuit that adds sensation? That is the missing link, the hard problem, and all the writing available out there just echoes that. Philosophers and scientists have written about this same problem in different ways for ages and have struggled in vain to get a grip on it; many end up running in circles. So far they don’t know the answer, and neither do I. The best any of them offer is elucidation of aspects of the problem and occasionally some hints of things they think might somehow be connected with the answer. There exists no answer or explanation yet.

There is no magic in the brain. The circuitry involved in feeling something is capable of being described, replicated and even manufactured. It is possible to find out how to make a conscious circuit, even if we still don’t know what consciousness is or how it works, via replication, reverse engineering or evolutionary development. We manage to make conscious children several times every second.

How far can we go? Having studied a lot of what is written, it is clear that even after a lot of smart people thinking a long time about it, there is a great deal of confusion out there. At least some of it comes from using overly big words, and some comes from trying to analyse too much at once. With such an obviously tough problem, simplifying it will undoubtedly help. So let’s narrow it down a bit.

Feeling needs to be separated out from all the other things going on. What is it that happens that makes something feel? Well, detecting something precedes feeling it, and interpreting it or thinking about it comes later. So, ignore the detection and interpretation and thinking bits for now. Even sensation can be modelled as solicitation of feeling, essentially adding qualitative information to it. We ought to be able to make an abstraction model as for any IT system, where feeling is a distinct layer, coming between the physical detection layer and sensation, well below any of the layers associated with thinking or analysis.
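
As a sketch of that layering, here is how the abstraction model might look in code. Every class and rule here is a hypothetical illustration, not a real design; the FeelingLayer body in particular is just a placeholder for the hard problem itself:

```python
# A sketch of the layering: feeling as a distinct layer between raw
# detection and sensation. All names and rules are hypothetical
# illustrations; FeelingLayer stands in for the unsolved hard problem.

class DetectionLayer:
    """Physical layer: turns a stimulus into a raw measurement."""
    def detect(self, stimulus):
        return {"value": stimulus}

class FeelingLayer:
    """The hard-problem layer: attaches a felt quality to a detection."""
    def feel(self, detection):
        # Placeholder: whatever mechanism makes a detection *felt* goes here.
        return {**detection, "felt": True}

class SensationLayer:
    """Adds qualitative information on top of feeling."""
    def sense(self, feeling):
        return {**feeling, "quality": "warm" if feeling["value"] > 0.5 else "cool"}

d, f, s = DetectionLayer(), FeelingLayer(), SensationLayer()
print(s.sense(f.feel(d.detect(0.8))))  # {'value': 0.8, 'felt': True, 'quality': 'warm'}
```

The point of the stack is only that feeling sits in its own layer: everything above it (interpretation, thinking) and below it (detection) is comparatively easy to specify.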

Many believe that very simple organisms can detect stimuli and react to them but can’t feel, while more sophisticated ones can. Logical deduction tells us that either feeling requires fairly complex neural networks (though certainly well below human levels), or alternatively that feeling is not fundamentally linked to complexity but emerges from architectural differences that arose in parallel with increasing complexity without depending on it. It is also very likely, due to evolutionary mechanisms, that feeling emerges from structures similar to those used for detection, though not the same. Architectural modifications, feedbacks or additions to detection circuits might be an excellent place to start looking.

So we don’t know the answer, but we do have some good clues. Better than nothing. Coming at it from a philosophical direction, even the smartest people quickly get tied in knots, but from an engineering direction, I think the problem is soluble.

If feeling is, as I believe, a modified detection system, then we could for example seed an evolutionary design system with detection systems. Mutating, restructuring and rearranging detection systems and adding occasional random components here and there might eventually create some circuits that feel. It did in nature, and would in an evolutionary design system, given time. But how would we know? An evolutionary design system needs some means of selection to distinguish the more successful branches for further development.

Using feedback loops would probably help. A system with built-in feedback, so that it feels that it is feeling something, would be symmetrical, maybe even fractal. Self-reinforcement of a feeling process would also create a little vortex of activity. A simple detection system (with detection of detection) would not exhibit such strong activity peaks, due to the necessary lack of symmetry between detection of initial and processed stimuli. So all we need do is introduce feedback loops in each architecture and look for the emergence of activity peaks. Possibly some non-feeling architectures might also show activity peaks, so not all peaks would necessarily indicate successes, but all successes would show peaks.
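
A quick numerical toy illustrates that selection signal. The dynamics, gains and decay values below are illustrative assumptions only: a detector with a feedback loop turns a single brief stimulus into sustained activity, while a plain feed-forward detector just decays away:

```python
# Toy dynamics: a detection unit fed one brief stimulus. With a feedback
# loop the activity self-reinforces into a sustained peak; without it the
# activity simply decays. All parameter values are illustrative only.

def run_circuit(feedback_gain, steps=20, decay=0.5):
    activity, trace = 0.0, []
    for t in range(steps):
        stimulus = 1.0 if t == 0 else 0.0         # single brief input pulse
        # next activity = decayed activity + input + fed-back own output
        activity = (decay + feedback_gain) * activity + stimulus
        trace.append(activity)
    return trace

plain = run_circuit(feedback_gain=0.0)    # decays: 1.0, 0.5, 0.25, ...
looped = run_circuit(feedback_gain=0.45)  # feedback keeps the peak alive
print(plain[10] < 0.01 < looped[10])      # True: only the loop sustains activity
```

Looking for exactly this kind of sustained activity is the proposed selection test for the evolutionary system.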

So, the evolutionary system would take basic detection circuits as input, modify them, add random components, then connect them in simple symmetrical feedback loops. Most results would do nothing. Some would show self-reinforcement, evidenced by activity peaks. Those are the ones we need.
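
Here is a minimal sketch of that evolutionary step in Python, using a deliberately trivial circuit model (a single feedback gain per circuit) purely for illustration:

```python
# A minimal sketch of the evolutionary step: start from plain detection
# circuits, randomly mutate a feedback parameter, then select only the
# variants whose activity self-reinforces (the "activity peak" criterion).
# The single-gain circuit model is deliberately trivial and illustrative.

import random
random.seed(1)  # reproducible run

def sustained_activity(gain, steps=15, decay=0.5):
    """Drive the circuit with one pulse and return its final activity."""
    activity = 0.0
    for t in range(steps):
        activity = (decay + gain) * activity + (1.0 if t == 0 else 0.0)
    return activity

population = [0.0] * 50                                  # plain detectors
mutated = [g + random.uniform(0.0, 0.6) for g in population]

# Selection: keep circuits whose activity is still strong after the pulse
survivors = [g for g in mutated if sustained_activity(g) > 0.1]
print(0 < len(survivors) < len(population))  # True: only some mutants survive
```

The surviving gains are the "circuits that feel (and some junk)" candidates that would seed the next generation.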

The output from such an evolutionary design system would be circuits that feel (and some junk). We have our basic components. Now we can start to make a conscious computer.

Let’s go back to the gel computing idea and plug them in. We have some basic detectors, for light, sound, touch etc. Pretty simple stuff, but we connect those to our new feeling circuits, so now those inputs stop being just information and become sensations. We add in some storage, recording the inputs, again with some feeling circuits added into the mix, and just for fun, let’s make those recording circuits replay those inputs over and over, indefinitely. Those sensations will be felt again and again, the memory relived. Our primitive little computer can already remember and experience things it has experienced before.

Now add in some processing. When a and b happen, c results. Nothing complicated. Just the sort of primitive summation of inputs we know neurons can do all the time. But now, when that processing happens, our computer brain feels it. It feels that it is doing some thinking. It feels the stimuli occurring, a result occurring. And as it records and replays it, an experience builds. It now has knowledge. It may not be the answer to life, the universe and everything just yet, but knowledge it is. It now knows and remembers the experience that when it links these two inputs, it gets that output.

These processes and recordings and replays and further processing and storage and replays echo throughout the whole system. The sensory echoes and neural interference patterns result in some areas of reinforcement and some of cancellation. Concepts form. The whole process is sensed by the brain. It is thinking, processing, reliving memories, linking inputs and results into concepts and knowledge, storing concepts, and most importantly, it is feeling itself doing so.
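
To tie the pieces together, here is a toy sketch of that assembly. Every structure in it is a hypothetical illustration, not a real gel-computer design:

```python
# A toy assembly: detectors feed "feeling" circuits, felt inputs are
# recorded and replayed, and a simple summation links two inputs into a
# result that is itself felt and stored. Everything here is a
# hypothetical illustration, not a real gel-computer design.

class FeelingCircuit:
    """Stands in for an evolved circuit that turns detection into sensation."""
    def feel(self, value):
        return ("felt", value)

class TinyGelBrain:
    def __init__(self):
        self.feeler = FeelingCircuit()
        self.memory = []                        # recordings for later replay

    def sense(self, light, sound):
        a, b = self.feeler.feel(light), self.feeler.feel(sound)
        result = self.feeler.feel(a[1] + b[1])  # summation, itself felt
        experience = {"inputs": (a, b), "result": result}
        self.memory.append(experience)          # record the experience
        return experience

    def replay(self):
        # Re-live every stored experience by re-feeling the recorded result
        return [self.feeler.feel(e["result"][1]) for e in self.memory]

brain = TinyGelBrain()
brain.sense(3, 4)
print(brain.replay())  # [('felt', 7)]
```

The essential feature is that the same feeling circuits wrap the inputs, the processing and the replayed memories, so the system feels itself operating at every stage.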

The rest is just design detail. There’s your conscious computer.