We could have a conscious machine by end-of-play 2015

I made xmas dinner this year, as I always do. It was pretty easy.

I had a basic plan, made up a menu suited to my family and my limited ability, ensured its legality (including that I was allowed to serve alcohol to my family and have it consumed on my premises), made sure I had all the ingredients I needed, and checked I had recipes and instructions where necessary. I had the tools, equipment and working space I needed, and started early enough to do it all in time for the planned delivery. It was successful.

That is pretty much what you have to do to make anything, from a cup of tea to a space station, though complexity, cost and timings may vary.

With conscious machines, it is still basically the same list. When I check through it to see whether we are ready to make a start I conclude that we are. If we make the decision now at the end of 2013 to make a machine which is conscious and self-aware by the end of 2015, we could do it.

Every time machine consciousness is raised as a goal, a lot of people start screaming for a definition of consciousness. I am conscious, and I know how it feels. So are you. Neither of us can write down a definition that everyone would agree on. I don’t care. It simply isn’t an engineering barrier. Let’s just aim for a machine that can make either of us believe that it is conscious and self-aware in much the same way as we are. We don’t need weasel words to help pass an abacus off as Commander Data.

Basic plan: actually, there are several in development.

One approach is essentially reverse engineering the human brain, mapping out the neurons and replicating them. That would work (Markram’s team is pursuing it), but it would take too long. It doesn’t need us to understand how consciousness works; it is rather like methodically taking a television apart and making an exact replica using identical purchased or manufactured components. It has the advantage of existing backing, and if nobody tries a better technique early enough, it could win. More comment on this approach: https://timeguide.wordpress.com/2013/05/17/reverse-engineering-the-brain-is-a-very-slow-way-to-make-a-smart-computer/

Another is to use a large bank of powerful digital computers with access to a large pool of data and knowledge. That can produce a very capable machine that can answer difficult questions or do various things well that traditionally need smart people, but as far as creating a conscious machine goes, it won’t work. It will happen anyway for various reasons, and may produce some valuable outputs, but it won’t result in a conscious machine.

Another is to use accelerated, guided evolution within an electronic equivalent of the ‘primordial soup’. That takes the process used by nature, which clearly worked, then improves and accelerates it using whatever insights and analysis we can add via advanced starting points, subsequent guidance, archiving, cataloguing and smart filtering and pruning. That also would work. If we can make the accelerated evolution powerful enough, it can be achieved quickly. This is my favoured approach because it is the only one capable of succeeding by the end of 2015. So that is the basic plan, and we’ll develop detailed instructions as we go.
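To make that concrete, here is a minimal Python sketch of a guided evolutionary loop of the kind described above. It is purely illustrative: the bit-string encoding, the fitness test, the archive size and the guidance step are my own assumptions for the example, not part of any existing project.

```python
import random

GENOME_LENGTH = 32
TARGET = [1] * GENOME_LENGTH    # stand-in for "a configuration that performs the task well"

def fitness(candidate):
    """Score a candidate configuration; here we simply count matching elements."""
    return sum(1 for a, b in zip(candidate, TARGET) if a == b)

def mutate(candidate, rate=0.05):
    """Random experimentation: flip a few elements at random."""
    return [1 - bit if random.random() < rate else bit for bit in candidate]

def evolve(generations=300, population_size=50):
    population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(population_size)]
    archive = []                        # library of part-solutions, never simply forgotten
    for generation in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        best = ranked[0]
        archive.append(best)            # catalogue the best of each generation
        archive = sorted(archive, key=fitness, reverse=True)[:20]   # smart pruning
        if fitness(best) == GENOME_LENGTH:
            return best, generation
        # guidance: breed the next generation from current winners plus archived ancestors
        parents = ranked[:10] + archive[:5]
        population = [mutate(random.choice(parents)) for _ in range(population_size)]
    return best, generations

if __name__ == "__main__":
    solution, used = evolve()
    print("best fitness", fitness(solution), "after", used, "generations")
```

The archive is the point of departure from plain natural selection: nothing useful is ever thrown away, and guidance can reinject old part-solutions at any time.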

Menu suited to audience and ability: a machine we agree is conscious and self aware, that we can make using know-how we already have or can reasonably develop within the project time-frame.

Legality: it isn’t illegal to make a conscious machine yet. It should be; it most definitely should be, but it isn’t. The guards are fast asleep, and by the time they wake up, notice that we’re up to something, start taking us seriously, agree on what to do about it, and start writing new laws, we’ll have finished ages ago.

Ingredients:

a substantial scientific and engineering knowledge base, reconfigurable analog and digital electronics, assorted structures, 15nm feature size, self-organisation, evolutionary engines, sensors, lasers, LEDs, optoelectronics, HDWDM, transparent gel, inductive power, power supply, cloud storage, data mining, P2P, open source community

Recipe & instructions:

I’ve written often on this from different angles:

https://timeguide.wordpress.com/2013/02/15/how-to-make-a-conscious-computer/ summarises the key points and adds insight on core component structure – especially symmetry. I believe that consciousness can be achieved by applying sensory structures to internal processes similar to those used to sense external stimuli. Both should have a feedback loop symmetrical to the main structure. Essentially, I’m saying that sensing that you are sensing something is key to consciousness, and that this is the means of converting detection into sensing, sensing into awareness, and awareness into consciousness.
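As a toy illustration of that symmetry, here is a short Python sketch of my own (the class and method names are invented for the example, not taken from the linked post): the same sensing structure is pointed once at the outside world and once at the sensing activity itself, with a feedback path mirroring the forward path.

```python
class Sensor:
    """A generic sensing stage: turns an input into a labelled detection."""
    def __init__(self, name):
        self.name = name
        self.history = []

    def sense(self, signal):
        detection = {"sensed_by": self.name, "content": signal}
        self.history.append(detection)
        return detection


class ReflectiveAgent:
    """External and internal sensing share the same structure (the symmetry).
    The internal sensor takes the external sensor's activity as its input,
    so the agent 'senses that it is sensing'."""
    def __init__(self):
        self.external = Sensor("external")   # directed at the world
        self.internal = Sensor("internal")   # directed at the sensing process itself

    def step(self, stimulus):
        detection = self.external.sense(stimulus)        # detection of an external stimulus
        awareness = self.internal.sense(detection)       # sensing the act of sensing
        self.external.sense(awareness)                   # symmetrical feedback into the main loop
        return awareness


if __name__ == "__main__":
    agent = ReflectiveAgent()
    print(agent.step("bright light"))
```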

Once a mainstream lab finally recognises that this symmetry of external and internally directed sensory structures, with symmetrical sensory feedback loops (as I describe in that link), is fundamental to achieving consciousness, progress will occur quickly. I’d expect MIT or Google to claim to have just invented the concept soon; then hopefully it will be taken seriously and progress will start.

https://timeguide.wordpress.com/2011/09/18/gel-computing/

https://timeguide.wordpress.com/2010/06/16/man-machine-equivalence-by-2015/

Tools, equipment, working space: any of many large corporate, government or military labs could do this.

Starting early enough: it is very disappointing that work hasn’t already conspicuously begun on this approach, though of course it may be happening in secret somewhere. The slower alternative being pursued by Markram et al is apparently quite well funded and publicised. Nevertheless, if work starts at the beginning of 2014, it could achieve the required result by the end of 2015. The vast bulk of the time would go on creating the sensory and feedback processes to direct the evolution of electronics within the gel.

It is possible that ethics issues are slowing progress. It should be illegal to do this without proper prior discussion and effective safeguards. Possibly some of the labs capable of doing it are avoiding it for ethical reasons. However, I doubt that. The potential benefits could be presented in a way that offsets the potential risks, and it would be quite a prize for any brand to claim the first conscious machine. So I suspect the reason for the delay to date is failure of imagination.

The early days of evolutionary design were held back by teams wanting to stick too closely to nature, rather than simply drawing stimulation from biomimetic ideas and building on them. An entire generation of electronic and computer engineers has been crippled by being locked into digital thinking, but the key processes and structures within a conscious computer will come from the analog domain.

28 responses to “We could have a conscious machine by end-of-play 2015”

  1. Pingback: Futureseek Daily Link Review; 29 December 2013 | Futureseek Link Digest

  2. Only 2 years of work??? How many scientists, and what is the needed budget? Do you think we have the necessary technology, or do we need more advanced breakthrough technologies?

    • Two years for a decent electronics lab, e.g. 50 good engineers with access to a fabrication plant producing 15nm electronics. I don’t think we need any more breakthroughs, just to work with what we already have. Budget depends on who asks. A lot of people would work for less than the going rate to be part of such a team.

  3. I love your gel computing ideas and I hope it comes true (I believe it will). Now I have a few questions. I was reading the strong A.I. “criteria” and I wanted your input on how the gel computing idea matches it.

    Reason – using strategy and solving puzzles
    Represent Knowledge – common sense, using information and applying it to the real world
    Planning – The ability to set goals and achieve them
    Learning – The ability to acquire new information, knowledge, and skills, or to modify and reinforce existing ones
    Communicating In Natural Language – The ability to read and understand the language humans speak
    Creativity – Outputting knowledge, skills, and information not initially programmed into the machine
    Consciousness – Having subjective experience and thought
    Sentience – The ability to feel perceptions and emotions
    Self Awareness – To be aware of one’s own thoughts

    I know you have answered some of these already, but I would like to hear it in a more detailed way. I also have a question concerning the feedback loop. If a user feeds a picture of a rabbit into the feedback system over and over again, is the system forming a memory of the rabbit, or is it just computing an input over and over again without knowing? Similarly, if I play a sound of me crying to the computer over and over again and it senses it, does it know what the crying means, what it applies to, and what it should do about it, or is it just sensing and computing the information over and over again, forming nothing useful? Don’t get me wrong, I think the gel computing idea is brilliant, but I am just curious about this.

    • Gel computing is a broad field that could cover all of these depending on how you use it. In the simplest approach, it could be a simple suspension of digital chips, using optical interconnects instead of wiring but otherwise indistinguishable from any other digital computer. Or if you go the whole way, you could evolve a soup of reconfigurable analog and digital components to mimic anything you see in nature, including human brain functions if you want, or go further and evolve superhuman abilities. All the things you mention are possible for a gel computer, and it could be conscious and have feelings in exactly the way you do. If it learns about the world from humans (and it could do that in a few minutes of internet browsing), then it would fully understand our languages and cultures, and be able to understand and relate to the experiences we draw from them.

      • Interesting. Do you think it would be preferable to have a system of networks replicating human brain functions (but with the speed and power benefits of computers), interlinking, communicating, sharing knowledge and information, and thinking? For example, you would have a visual system with recognizers for light, shapes, colors, textures etc. (along with a thinking processor), and an emotion system with feelings, moods, thinking processors, memory modules etc. And it learns about a tiger or lion. It can study how it looks and works (using the visual, cognitive, auditory etc. systems) and give how it feels about it (not the best example) to the user and any other external source. So not only is it learning about lions, it is learning about different shapes, sounds, colors, other life, and more.

        It could learn emotions by looking up words like scared and stupid on the internet, studying what they mean and relate to, and then storing them in the memory and feedback loops and sending them to the emotion memory modules and feeling systems, so that when something that draws on those emotions comes up, those emotions act on the machine. So do you think that for a strong A.I., it would be better to have a computer with human brain functions and regions?

      • There are certainly some merits in doing so, but also some disadvantages. I think it is generally better to integrate data about real world objects, because then it is easy for the mind to access and process it from any angle. If it is stored under particular titles, that tends to erode the integrity of cross links. The brain does seem to be made up of a number of regions for different tasks, and that obviously does work fine, but even in humans we see significant improvement in creativity from people whose brain regions are better interconnected. So I think the more stuff is mixed up, the better it will be. Your idea of using the internet for meanings and then using that to identify and catalog data so that it can be cross linked better is sound, but that should be done early in the sensory process so that it doesn’t make later integration more difficult.

  4. Dear Pearson, I like your idea. I asked some experts I know and they said it is a nice approach, but that it will not work in reality. That is why no one has tried it and no one will try. And the timeframe is too short.

    • It’s safe to say opinion is diverse in the whole field of strong AI. Some are convinced we will never have a conscious machine; some of us are certain that we will. I know from my own experience that a lot of the ideas I mentioned have been tried in a few teams to some degree and didn’t work, but there are good reasons for that. In part, some teams tried to stick too religiously to copying nature precisely, which isn’t really what biomimetic engineering is about, and, without being too cruel, some teams had too much prejudice driving them down the wrong paths. However, we do know from nature that evolution has produced a conscious machine – us – so it would be nonsense to claim it can’t work. As for the timeframe, two years is how long it would take a good team if they started right now with a decent budget. Many teams couldn’t do it that fast, and many have fundamentally the wrong approach so wouldn’t be able to do it at all. They can still be experts, just with expertise in other areas.

  5. I actually like this idea very much but I am confused about one thing. On the gel computing blog page, it says that the gel computer would try out enormous numbers of algorithms and architectures to achieve a large number of tasks. Are the tasks given by humans, and the gel has to solve them? Does this mean that some of the architectures would have to be fixed? For example, if it wants to learn to control limbs and muscles, it would also obviously have to have a certain pathway to get signals down to the muscles and limbs. So is it all random or is it fixed, and is the library of tasks it adapts to and solves inputted by the creators?

    • Hi Trey, it isn’t built yet of course, so this is to some extent open to engineering decision during implementation. However: yes, humans would give it some simple tasks, such as find a good way to get a signal from here to there, or move that leg, or control a sensor. The architectures don’t have to be fixed, but the gel could make a library of known solutions or even part-solutions, so that it can try them later for other purposes, or carry on evolving by retrying older attempts with different avenues of exploration. Its attempts would be quite random, and most would fail, but when a partial success is achieved, it can store that and use it in lots of further attempts to get a better success. The design of the evolutionary algorithms is as important as the hardware they run on. We know evolution works in nature, and there have been successes using it already in software development, but the progress has been disappointing. I don’t think that means it can’t work; I think it needs more effort to figure out what needs to change to make it work better.

      • So are the algorithms created by the gel computer or by the humans? Also, how would it move on from learning simple tasks and functions to complex things like language and emotion?

      • Some crude ones might be seeded by humans, but most of the creation is by the gel. It would be perfectly capable of supporting conventional computing too, and that would be a good starting point for coping with language. A lot of conventional AI could be used, again potentially as a starting point for later algorithm refinement. Emotions can be implemented in many ways. I suggested using beams of light in a gel that uses optical interconnection. The beams could bias the strength of signals, thereby creating the basis for emotion. Magnetism, chemicals or radio signals could equally be used.
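A crude Python toy of that biasing idea (the numbers and the emotion-to-gain mapping are my own assumptions for illustration; in the gel the bias would be the intensity of a light beam rather than a variable):

```python
def transmit(signal_strength, emotion_level):
    """An 'emotion' acts as a global bias on signal strength, much as a beam
    flooding a region of gel could raise or lower every local signal."""
    return signal_strength * (1.0 + emotion_level)

# calm state: the signal passes through roughly unchanged
print(transmit(0.5, emotion_level=0.0))   # 0.5
# heightened state: the same signal is amplified, so related responses dominate
print(transmit(0.5, emotion_level=0.8))   # 0.9
```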

  6. Last question. As I said in the previous questions, would the gel computer use the internet and gain knowledge from knowledge bases, and would humans direct it? Or would it get to a point where it can do all of this by itself? The other last question is that I am confused about the feedback loop and how it creates consciousness. Does it just take in inputs and tell the gel computer that it detected something, or is it something else?

    • If I were doing it then I would certainly connect it to the net at some point and it would learn quickly, but like a human baby, it needs to learn the basics before it can understand things that depend on them. Humans might instruct it, or it could self-teach. There are lots of teaching resources on the net. I tried to explain the feedback loops in consciousness in this blog; I think it will start off with the same sorts of structures used in other sensing – one of the areas where I’d expect it to make libraries of possible solutions/starting points. Learning to interpret signals from its senses is one of the earliest things it has to experiment with. There is a natural feedback loop checkpoint in realizing that you are conscious. When it achieves that, it will know, and it will be sentient.
      So basically, we have to give it some basic starting points and evolutionary algorithms, wait until things start to emerge that we might think are early hints of sensory functions and awareness, and guide it a bit. Guided evolution will accelerate it through the early stages, and then, as it takes over its own development, it will end up running all by itself and self-directing, if we let it.

  7. Interesting. The only question I have is this: is the “guided evolution” optimization or invention? I say this because most evolutionary techniques today come directly from optimization heuristics. You start out with a group of solutions, and then combine them, make random changes etc. until the best one is found. So my question is, in the process of generating the abilities of the AI, is it finding the best of the available designs and techniques, or is it somehow inventing a solution to a problem (assuming it has an idea of what to do)?

    • The guided evolution would use a mix of tools. Of course some optimisation for picking winners for certain functions, but I think one important key is to be able to store and recall good candidates and not be religious about mimicking nature, which forgets old solutions. The idea is to build up libraries of techniques that can produce outcomes, then recombine these randomly, optimise, experiment more and constantly build up the function libraries. The process should invite intervention from people or AIs to guide it along promising paths, but use its own randomisation and optimisation on those paths. Another thing I would avoid is being strict with genetic algorithms, where 50-50 combinations are used. Why not 100 parents sometimes, or 90-10, or just generally being flexible? If you know one parent is pretty good, why throw most of it away instead of just tweaking small sections? It’s just about using common sense, picking winners where appropriate and letting randomisation do the rest.
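A small Python sketch of the kind of flexible recombination meant here (the encoding and the 90-10 weighting are illustrative assumptions of mine): instead of a rigid 50-50 crossover, a child keeps most of a known good parent and splices in only small variations, and nothing stops us mixing more than two parents.

```python
import random

def weighted_crossover(good_parent, other_parent, keep_fraction=0.9):
    """Keep roughly 90% of the known good parent and splice in roughly 10%
    from the other, rather than the textbook 50-50 mix."""
    return [g if random.random() < keep_fraction else o
            for g, o in zip(good_parent, other_parent)]

def multi_parent_crossover(parents):
    """Use any number of parents: each gene is drawn from a random parent."""
    return [random.choice(genes) for genes in zip(*parents)]

# usage: a strong candidate is mostly preserved, with a little injected variation
strong = [1, 1, 1, 1, 1, 1, 1, 1]
weak   = [0, 0, 0, 0, 0, 0, 0, 0]
print(weighted_crossover(strong, weak))
print(multi_parent_crossover([strong, weak, [0, 1] * 4]))
```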

  8. So is the building up of the function libraries strictly done by humans, who then allow the AI to randomize, optimize, etc. to find the best techniques to achieve those functions, or does the AI just get a few cues and then create a solution to achieve the function? Or is it a little bit of both?

    • Well, both in the short term, until AIs become sufficiently advanced. People and machines will have different merits for some years yet, so using the best of both makes sense. There are no rules really, and researchers should be free to use any techniques to make progress. I think that being too focused on precisely mimicking nature is one of the reasons the field has made less progress than we originally hoped. Nature has produced some great ideas, but it is often possible to improve on them.

  9. Awesome. Could it also be possible to have a couple of components that then interact and come together to perform a function? Like if it had separate components for a lens, cornea, iris, fovea, optic nerve, etc., the AI could randomize and optimize to achieve a certain function (in this case the function of an eye) and produce an eyeball. This way the basis is created by humans but the overall function is created by the AI.

    • Yes, if researchers want to make their own solutions, it is certainly still useful to use evolutionary techniques to design components or to extend solution libraries. After adding them together into systems, they can then be further optimised.

  10. Pingback: Ground up data is the next big data | The more accurate guide to the future

  11. Novozymes tried, but it did not work!

  12. So, was it done by the end of 2015? Or if it was completed, does the world not know about it yet?

    • No, it is running late. I estimate AI development has been running about 35% slower than expected since around 2000. Disappointing. We have some AIs that are better than humans in small niches, but nothing that could be described as conscious yet. Current approaches to AGI will not succeed; groupthink seems to be causing everyone in the field to bark up the same empty trees. I still think it is entirely possible, but the right approach is needed, and it will take at least a couple of years once that approach is under way. Carrying on with the wrong approach will not succeed.

  13. Pingback: Too late for a pause. Minimal AI consciousness by Xmas. | Futurizon: the future before it comes over the horizon
