How to make a conscious computer

The latest generation of supercomputers has raw processing speed higher than the human brain’s on a simple digital comparison, but they can’t think and aren’t conscious. It isn’t even really appropriate to compare them, because the brain mostly isn’t digital. It has some digital processing in the visual system but mostly uses adaptive analog neurons, whereas digital computers use digital chips for processing and storage, with only a little analog electronics in supporting circuits. Most digital computers don’t even have anything we would equate to senses.

Analog computers aren’t used much now, but they were in fairly widespread use in some industries until the early 1980s. Most IT people have no first-hand experience of them, and some don’t even seem to be aware of analog computers, what they can do or how they work. In the AI space, though, a lot of the development uses analog approaches.

http://timeguide.wordpress.com/2011/09/18/gel-computing/ discusses some of my previous work on conscious computer design, so I won’t reproduce it here.

I firmly believe that consciousness, whether externally or internally focused, is the result of internally directed sensing (sensing can be thought of as the solicitation of feeling), so that you feel your thoughts and sensory inputs in much the same way. The easy bit is figuring out how thinking can work once you have that: how memories can be relived, how concepts are built, how self-awareness, sentience and intelligence emerge. All of those are easy once you have figured out how feeling works. Feeling is the hard problem.

Detection is not the same as feeling. It is easy to build a detector or sensor that flips a switch or moves a dial when something happens, or even precisely quantifies it. Feeling is another layer on top of that. Your skin detects touch, but your brain feels it, senses it. Taking detection and making it feel, making it become a sensation, is hard. What is it about a particular circuit that adds sensation? That is the missing link, the hard problem, and the writing available out there just echoes it. Philosophers and scientists have written about this same problem in different ways for ages and have struggled in vain to get a grip on it; many end up going in circles. So far they don’t know the answer, and neither do I. The best any of them offer is elucidation of aspects of the problem and, occasionally, hints of things they think might somehow be connected with the answer. There is no answer or explanation yet.
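
To make the contrast concrete, here is a minimal sketch of pure detection (a toy illustration of my own, not any real sensor design): a ‘sensor’ that flips a switch at a threshold. Everything in it is trivially easy, and nothing in it feels anything.

```python
class ThresholdDetector:
    """Pure detection: flip a 'switch' when a signal crosses a threshold."""

    def __init__(self, threshold):
        self.threshold = threshold

    def detect(self, signal):
        # Detection quantifies and reports; nothing here feels anything.
        return signal >= self.threshold

detector = ThresholdDetector(threshold=0.5)
print(detector.detect(0.7))  # True: detected, but not felt
```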

There is no magic in the brain. The circuitry involved in feeling something is capable of being described, replicated and even manufactured. Via replication, reverse engineering or evolutionary development, it is possible to find out how to make a conscious circuit even while we still don’t know what consciousness is or how it works. After all, we manage to make conscious children several times every second.

How far can we go? Having studied a lot of what is written, it is clear that even after many smart people have thought about it for a long time, there is a great deal of confusion out there. At least some of it comes from using overly big words, and some comes from trying to analyse too much at once. When a problem is so obviously tough, simplifying it will undoubtedly help. So let’s narrow it down a bit.

Feeling needs to be separated out from all the other things going on. What is it that happens that makes something feel? Well, detecting something comes before feeling it, and interpreting it or thinking about it comes after. So ignore the detection, interpretation and thinking bits for now. Even sensation can be modelled as the solicitation of feeling, essentially adding qualitative information to it. We ought to be able to make an abstraction model, as for any IT system, in which feeling is a distinct layer sitting between the physical detection layer and sensation, well below any of the layers associated with thinking or analysis.
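
As a loose illustration of that layering (the layer names and interfaces below are my own invention, purely an abstraction exercise, and the feeling layer is a placeholder for the part nobody knows how to build):

```python
class DetectionLayer:
    def process(self, stimulus):
        # Physical layer: raw measurement only.
        return {"value": stimulus}

class FeelingLayer:
    def process(self, detection):
        # Hypothetical layer: whatever turns a detection into something felt.
        # This is the unexplained step; here it is just a placeholder flag.
        return {**detection, "felt": True}

class SensationLayer:
    def process(self, feeling):
        # Adds qualitative information to the felt detection.
        return {**feeling, "quality": "warm" if feeling["value"] > 0.5 else "cool"}

# Stack the layers: detection at the bottom, thinking layers (not shown) above.
stack = [DetectionLayer(), FeelingLayer(), SensationLayer()]
signal = 0.8
for layer in stack:
    signal = layer.process(signal)
print(signal)  # {'value': 0.8, 'felt': True, 'quality': 'warm'}
```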

Many believe that very simple organisms can detect stimuli and react to them but cannot feel, while more sophisticated ones can. Logical deduction tells us either that feeling requires fairly complex neural networks (though certainly well below human levels), or that feeling is not fundamentally linked to complexity at all but emerges from architectural differences that arose in parallel with increasing complexity without depending on it. Evolutionary mechanisms also make it very likely that feeling emerges from structures similar to those used for detection, though not identical to them. Architectural modifications, feedbacks or additions to detection circuits would be an excellent place to start looking.

So we don’t know the answer, but we do have some good clues, which is better than nothing. Coming at it from a philosophical direction, even the smartest people quickly get tied in knots; coming at it from an engineering direction, I think the problem is soluble.

If feeling is, as I believe, a modified detection system, then we could, for example, seed an evolutionary design system with detection systems. Mutating, restructuring and rearranging those detection systems, and adding occasional random components here and there, might eventually create some circuits that feel. It happened that way in nature, and given time it would happen in an evolutionary design system too. But how would we know? An evolutionary design system needs some means of selection to distinguish the more successful branches for further development.

Using feedback loops would probably help. A system with built-in feedback, so that it feels that it is feeling something, would be symmetrical, maybe even fractal. Self-reinforcement of a feeling process would also create a little vortex of activity. A simple detection system (even with detection of detection) would not exhibit such strong activity peaks, because of the necessary lack of symmetry between detection of the initial and the processed stimuli. So all we need to do is introduce feedback loops into each architecture and look for the emergence of activity peaks. Possibly some non-feeling architectures would also show activity peaks, so not every peak would indicate a success, but every success would show a peak.
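
To illustrate that selection signal, here is a toy numerical model (the gain values are arbitrary choices of mine): a unit whose output feeds back into its own input self-reinforces into a little vortex of activity, while a plain feed-forward detector stays flat.

```python
def run(feedback_gain, steps=50, stimulus=1.0):
    """Toy unit: activity = stimulus + feedback_gain * previous activity."""
    activity, peak = 0.0, 0.0
    for _ in range(steps):
        activity = stimulus + feedback_gain * activity
        peak = max(peak, activity)
    return peak

print(run(feedback_gain=0.0))  # 1.0: plain detection, no activity peak
print(run(feedback_gain=0.9))  # ~10: self-reinforcement builds a strong peak
```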

So, the evolutionary system would take basic detection circuits as input, modify them, add random components, then connect them in simple symmetrical feedback loops. Most results would do nothing. Some would show self-reinforcement, evidenced by activity peaks. Those are the ones we need.
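
Put together, the evolutionary loop might look something like the sketch below. The circuit model, mutation operator and peak test are all stand-ins of my own; a real system would mutate actual analog circuit topologies rather than a couple of parameters.

```python
import random

def mutate(circuit):
    # Nudge the feedback strength, and occasionally add a random component.
    child = dict(circuit)
    child["feedback"] = min(0.99, max(0.0, child["feedback"] + random.gauss(0, 0.1)))
    if random.random() < 0.2:
        child["extra_gain"] = random.random()
    return child

def activity_peak(circuit, steps=50):
    # Fitness: does the circuit self-reinforce into an activity peak?
    activity, peak = 0.0, 0.0
    for _ in range(steps):
        activity = 1.0 + circuit["feedback"] * activity + circuit.get("extra_gain", 0.0)
        peak = max(peak, activity)
    return peak

population = [{"feedback": 0.0} for _ in range(20)]  # seed with plain detectors
for generation in range(100):
    population = [mutate(random.choice(population)) for _ in range(20)]
    population.sort(key=activity_peak, reverse=True)  # select on activity peaks
    population = population[:5] * 4                   # keep the best branches

print(activity_peak(population[0]))  # strong peaks emerge over the generations
```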

The output from such an evolutionary design system would be circuits that feel (and some junk). We have our basic components. Now we can start to make a conscious computer.

Let’s go back to the gel computing idea and plug them in. We have some basic detectors, for light, sound, touch and so on. Pretty simple stuff, but we connect those to our new feeling circuits, so now those inputs stop being just information and become sensations. We add in some storage, recording the inputs, again with some feeling circuits added into the mix, and, just for fun, let’s make those recording circuits replay their inputs over and over, indefinitely. Those sensations will be felt again and again, the memory relived. Our primitive little computer can already remember and re-experience things it has experienced before.

Now add in some processing. When a and b happen, c results. Nothing complicated, just the sort of primitive summation of inputs we know neurons do all the time. But now, when that processing happens, our computer brain feels it. It feels that it is doing some thinking. It feels the stimuli occurring and a result occurring. And as it records and replays all this, an experience builds. It now has knowledge. It may not be the answer to life, the universe and everything just yet, but knowledge it is. It now knows and remembers the experience that when it links these two inputs, it gets that output.

These processes, recordings, replays, further processing and storage echo throughout the whole system. The sensory echoes and neural interference patterns produce some areas of reinforcement and some of cancellation. Concepts form. The whole process is sensed by the brain. It is thinking, processing, reliving memories, linking inputs and results into concepts and knowledge, storing concepts, and, most importantly, feeling itself doing so.
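
As a toy end-to-end sketch of that assembly (every class here is a hypothetical placeholder, and FeelingCircuit in particular stands in for the evolved circuits we don’t yet know how to build):

```python
class FeelingCircuit:
    """Placeholder for an evolved circuit that feels its input."""
    def feel(self, label, value):
        print(f"feeling {label}: {value}")
        return value

class ReplayingMemory:
    """Records felt inputs and replays them, so sensations are re-lived."""
    def __init__(self, feeler):
        self.feeler, self.trace = feeler, []

    def record(self, label, value):
        self.trace.append((label, value))

    def replay(self):
        # In the design proper this would loop indefinitely; one pass shown here.
        for label, value in self.trace:
            self.feeler.feel(f"memory of {label}", value)

feeler = FeelingCircuit()
memory = ReplayingMemory(feeler)

# Sensors: inputs become sensations rather than plain data.
a = feeler.feel("light", 0.9)
b = feeler.feel("sound", 0.4)
memory.record("light", a)
memory.record("sound", b)

# Primitive processing: summation of inputs, itself felt as it happens.
c = feeler.feel("thought: light + sound", a + b)
memory.record("light + sound -> c", c)

memory.replay()  # the experience is re-lived, building knowledge
```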

The rest is just design detail. There’s your conscious computer.


10 responses to “How to make a conscious computer”

  1. It is not possible to create conscious A.I., or even if it is possible, it will take 1,000 or 10,000 years!!! The human brain is so great and only God can create such a fantastic thing. I bet £10,000 that you are wrong!

    • Nobody disputes that the brain is great, but in raw processing terms we already have faster machines, just as we have machines that are stronger and capable of more precise movement. Some AI works well on digital machines such as those, but many or even most AI researchers think that digital computers (specifically, Turing machines) will never become conscious. They can probably simulate it one day, but not emulate it, or perform or experience conscious behaviour. The analog domain allows the use of a much broader suite of components and architectures and isn’t limited to Turing machine capability. We can’t make a human-equivalent AI yet, but it is achievable, and with the approach I suggest it could be done using technology available in the next few years. Someone might do it using a totally different approach; mine isn’t the only way. I don’t want to offend your religious beliefs, but normal natural reproduction is already proof that people can make new human brains, even without fully understanding the processes involved. There is no need to assume any magical or supernatural effects. A future AI may well use some architectures that nature uses to achieve consciousness, just as planes use wings. Engineers often take ideas from nature, learn the underlying principles, improve on them, and then build something better. On that basis, even your bet doesn’t make sense, because you could forever argue that the AI we build isn’t really ‘artificial’ but uses ‘real’ consciousness, or techniques borrowed from nature, or some intellectual property generated by your god, and it may well do so. I am an engineer. I will be perfectly happy when we end up with a conscious machine, and religious people can debate for eternity over whether it is artificial or not.

  2. Interesting article, Ian! What is your opinion now on how powerful machine intelligence will become? Is it increasing in power exponentially, as Ray Kurzweil, for example, claims, or is it increasing logarithmically, as an analysis by former BT technical head Peter Cochrane suggests?

    • Kurzweil, Cochrane and I would all agree that incremental changes happen more or less exponentially. Increasing speed of development increases the speed of development further; that is well known in engineering as a positive feedback loop. (‘Logarithmically’ is being used with the same meaning in this context. I know Peter, and that’s what he means. The other variant is ‘geometrically’, which isn’t the same, but there is no sound basis for that in this case.) However, when a breakthrough happens, it can sometimes cause a large step change in knowledge, especially if it breaks a bottleneck that has been holding everything back. So we have background exponential growth with occasional jumps or blockages.
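
      For what the distinction amounts to numerically, here is a quick sketch with arbitrary constants, purely to show the shapes of the curves: positive feedback, where the rate of progress is proportional to the current level, gives an exponential, while logarithmic growth flattens out.

      ```python
      import math

      for t in range(1, 6):
          exponential = math.exp(0.5 * t)   # positive feedback: growth rate ∝ level
          linear      = 1.0 + 0.5 * t       # constant rate of progress
          logarithmic = 1.0 + math.log(t)   # diminishing returns
          print(f"t={t}: exp={exponential:6.2f}  lin={linear:4.2f}  log={logarithmic:4.2f}")
      ```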

      • Ian – Peter’s view does seem to be a bit different. See, e.g., this from http://www.imperica.com/in-conversation-with/peter-cochrane-ambient-intelligence

        “If we leap to the futurists and to people like Ray Kurzweil and the priesthood of the singularity, their calculations and projections are based on a very coarse product of processing power multiplied by memory. What I am saying is, that is not the key feature. The key feature that dictates intelligence, is input sensory capability and the output actual capability. The flow that I have produced does not predict an exponential growth in intelligence of machines, but much more logarithmic or, at best, linear growth which is a lot slower. The expectation that we have had in getting intelligent machines has been on the wrong hypothesis. The hypothesis has been an exponential growth in operating speeds, processing powers, and memory, will give us exponentially faster capability and an exponential growth in intelligence. That’s not the case.”

        And from http://www.techrepublic.com/blog/cio-insights/peter-cochranes-blog-why-ai-fails-to-outsmart-us/39747348

        “There seems to be a very good reason why artificial intelligence is advancing far more slowly than we expected… This finding implies that overall machine intelligence is growing linearly with time.”

        I confess I don’t follow Peter’s argument, and I suspect it is mistaken. But even Peter, in the same article, goes on to say the following:

        “So the obvious question is what happens when a large number of intelligent machines are networked? If they are sufficient, and their numbers grow exponentially, then, and only then, will we see an exponential growth in intelligence.”

      • OK, I’ll need to chat with Peter and discuss it. I didn’t know he’d changed his view. Network complexity does increase geometrically, so if connectivity were key, that would follow. Consciousness isn’t just a matter of speed, memory or complexity; we need to connect the right bits in the right way, and so far we don’t really know which bits we need. It will be a good thing if we are all in the right forest and just barking up different trees; if that is the case, we’ll get there faster. Adding memristors to the toolkit a couple of years back may turn out to be key, but we don’t know for sure yet. I do think evolutionary design, with the shortcuts I suggest, would get there fairly quickly, since it bypasses a lot of bottlenecks that arise from human engineering prejudices.
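
        As a quick arithmetic aside on that point (standard combinatorics, not anything specific to Peter’s model): the number of possible pairwise links grows with the square of the node count, while the number of possible subnetworks grows exponentially.

        ```python
        for n in [10, 100, 1000]:
            links = n * (n - 1) // 2  # possible pairwise connections
            subsets = 2 ** n          # possible subnetworks (subsets of nodes)
            print(f"n={n}: links={links}, subnetworks≈{subsets:.3e}")
        ```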

  3. Hi Peter,

    Your answer reminds me of the arguments that people used to make that AIs could never handle the amount of processing required to drive a car. See the analysis by Erik Brynjolfsson and Andrew P. McAfee in e.g. http://www.nytimes.com/2011/10/24/technology/economists-see-more-jobs-for-machines-not-people.html?_r=0 – here’s an extract:

    - In 2004, two leading economists, Frank Levy and Richard J. Murnane, published “The New Division of Labor,” which analyzed the capabilities of computers and human workers. Truck driving was cited as an example of the kind of work computers could not handle, recognizing and reacting to moving objects in real time.
    - But last fall, Google announced that its robot-driven cars had logged thousands of miles on American roads with only an occasional assist from human back-seat drivers. The Google cars, Mr. Brynjolfsson said, are but one sign of the times.

    In similar ways, people used to be sure that AI could never outplay grandmasters at chess, or come up with patentable inventions, or understand sufficient real-world knowledge to be able to win in quiz shows such as Jeopardy.

    If evolution can come up with a human brain and consciousness, what is to prevent “intelligent design” (i.e. humans assisted by technology) from doing the same?

  4. Pingback: Saturday – A Singularitarian Utopia Or A New Dark Age? « The Laughing Programmer

  5. Pingback: Reverse engineering the brain is a very slow way to make a smart computer | The more accurate guide to the future

  6. Pingback: We could have a conscious machine by end-of-play 2015 | The more accurate guide to the future
