Monthly Archives: June 2010

Man-machine equivalence by 2015?

Sometimes it is embarrassing being a futurologist. I make predictions on when things should appear based on my own experience as an engineer and how long I reckon it ought to take. Occasionally someone gets there much earlier than I expect, with a radically different solution from the one I would have used. I have no problem with that. I am a competent engineer but there are plenty of others who are a lot more competent and I learned to live with that a long time ago. What does annoy me is when things don’t happen on time. Not only do I look bad, but it throws my whole picture of the future out of line and I have to adjust my mindset accordingly. Worse still, it means we don’t get the benefits of the new development I expected.

There are a few examples. The worst error I ever made was predicting that virtual reality would surpass TV in terms of consumption of recreational time by the year 2000. It didn’t. It still hasn’t, not even if you count all the virtual worlds in computer games, which fall far short of immersive VR. I would feel a lot worse about that if I hadn’t got some other stuff right, but this is not a brag blog. Now I am in danger of being wrong on man-machine equivalence in terms of intellect. I am on record any number of times scheduling it around 2015.

As I just blogged, supercomputers have passed human brains in terms of their raw power as measured in instructions per second, though the comparison is a bit apple-orangey. That is more or less on time I guess, but I also hoped that by now we would have a lot more insight into human consciousness than we do and would be able to use the superior raw computer fire-power to come up with computers almost as smart as people in overall terms. I think here we have fallen a bit behind. I have no right to moan. My own work on the topic has sat on the back burner now for several years and still is nowhere near publishable. But surely someone ought to be working on it and getting results? If one supercomputer can do 3 x 10^15 instructions per second, that ought to be better than 25 billion brain cells firing at 200Hz with 1000 equivalent instructions per firing, especially given that only a small fraction of brain cells are ever involved actively in thinking at any one time. With all the nanotech we have now, and brain electrical activity monitoring stuff capable of millimetre resolutions, we should be getting loads of insight into the sorts of processes we need to emulate, or at least with which to seed some sort of evolution engine. Scientists are making progress, but we aren’t there yet.
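For what it’s worth, the arithmetic above is easy to sanity-check. This little Python sketch uses the figures from the paragraph; the 10% active fraction is my own illustrative assumption, since the text gives no precise figure:

```python
# Sanity check of the raw numbers quoted above. The 10% active
# fraction is an illustrative assumption, not a figure from the text.

neurons = 25e9           # brain cells
firing_rate_hz = 200     # firings per second per cell
ops_per_firing = 1000    # equivalent instructions per firing
active_fraction = 0.1    # assumed fraction of cells active at once

brain_raw_ips = neurons * firing_rate_hz * ops_per_firing
brain_active_ips = brain_raw_ips * active_fraction
supercomputer_ips = 3e15  # the ~3 x 10^15 figure quoted above

print(f"brain, all cells firing: {brain_raw_ips:.1e} ips")
print(f"brain, 10% active:       {brain_active_ips:.1e} ips")
print(f"supercomputer:           {supercomputer_ips:.1e} ips")
```

So with every cell firing the brain comes out at 5 x 10^15 equivalent instructions per second, a shade above the supercomputer, but once you allow for only a fraction of cells being active at any moment, the machine comfortably wins on raw throughput.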

Even in AI, the progress is frustrating. There are impressive developments for sure, but where are the ‘hero’ demonstrations most engineers are so fond of? Craig Venter is all over the place jumping up and down with glee after claiming the first artificial life, or at least a bacterium with synthetic DNA. Where’s your ambition? Why aren’t we seeing AI engines registering for GCSEs yet, even in Maths? You would think that basic text recognition and basic sentence parsing would allow at least enough questions on a GCSE Maths exam to be understood and answered to provide a pass mark by now. Instead, we see lots of industrial examples of AI that are in totally different spheres. Come on guys, you’re making us futurologists look bad by not achieving all the things we promised you would. It is no excuse that you never agreed our targets in the first place.

I can only suspect that we are actually seeing a whole lot of relevant progress but it just isn’t visible or connected in the right ways yet. University A is probably doing great, as are B, C and D. Loads of IT and biotech companies are probably doing their bits too, as no doubt are a few secret military research centres. But of course they probably all have their own plans and their own objectives and won’t want to share results until it is in their interests to do so. Perhaps the economic or military potential is just too great to throw it all away sharing the knowledge too early just to grab a few cheap headlines. Or maybe they aren’t. Maybe all the engineers have given up because it looks too hard a problem, so they are spending their efforts elsewhere. I hope the latter isn’t true! Or maybe the engineers are too wrapped up in the real work to waste time on silly demos. Better.

I am still getting laughed at regularly because I refuse to adjust my prediction of man-machine equivalence by 2015. But I haven’t given up; I still believe it can happen that soon. If we are still only 1% of the way there, we might still be on schedule. That is the nature of IT and biotech development. The positive feedback loop means that almost all the progress happens in the last couple of years. That’s what happened with the sequencing of the human genome. The first years achieved 1%; the last 99% happened in the final 2 years, just after the laughter about the claims was starting to die down. The trouble of course is that without knowing all the detail of all the work going on in all the relevant establishments, it is very hard to say when the last 2 years starts!
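The ‘almost all the progress happens at the end’ effect is just a property of exponential growth. A quick Python sketch (the doubling times are hypothetical, purely for illustration) shows how being only 1% of the way there can still mean just a couple of years to go:

```python
import math

def years_to_close(gap_factor, doubling_time_years):
    """Years needed to multiply capability by gap_factor,
    assuming capability doubles every doubling_time_years."""
    return math.log2(gap_factor) * doubling_time_years

# being "only 1% of the way there" means a 100x gap to close
for dt in (1.0, 0.5, 0.3):
    print(f"doubling every {dt} years: "
          f"{years_to_close(100, dt):.1f} years to go")
```

With capability doubling annually, a 100x gap takes about 6.6 years to close; with a doubling time of around four months, it closes in roughly two years, which is exactly the point about the final stretch.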

My major concern is that a few of the main components seem to be missing. Firstly, most computer scientists seem to be locked into digital thinking. We have a whole generation of computer scientists who have never seen an analog computer. The brain is more like an analog computer than a digital one, but there is nowhere near enough effort invested now in analog processing – though there certainly is some. Ask a young engineer to design a simple thermostat and I am convinced very few would even consider a basic physics approach such as a bimetallic strip. The rest would probably go straight to the chip catalogues and use a few megalines of code too. Secondly, there is a lack of cross-discipline teaching. Many students do biotech or IT, but too few do both. Those who are educated in both probably have their focus on making better bionics for prostheses or other such obviously worthwhile projects. Thirdly, the evolutionary computing avenue seems to have been largely abandoned long before it was properly explored, and biomimetics is sometimes too rigid in its approach, trying to emulate nature too closely instead of just using it for initial stimulation. But none of these problems is universal, and there are many good scientists and engineers to whom they simply aren’t relevant barriers. So I haven’t given up hope; I still hope that the delays are imaginary rather than real.

I think we will find out pretty soon if that is the case though. If 2015 is not a completely wrong date for man-machine equivalence, then we will start to see very impressive results appearing soon from research labs. We will start seeing clear indications that we are on the right track, and scientists and engineers finally willing to make their own grand claims of impending successes.

If that doesn’t happen, I guess I will eventually have to write it off to experience and accept that at least one more futurologist has set overly optimistic dates for breakthroughs. If it does happen on time, I will never stop yelling “I told you so!”


A recently announced Chinese supercomputer apparently achieves 2.6 peta-instructions per second. I once calculated that the human brain has about a third as much power in raw processing terms. However, the computer uses fundamentally different approaches to achieving its tasks compared with the brain.

Artificial intelligence is already used to create sophisticated virus variants, and autonomous AI entities will eventually become potential threats in their own right. Today, computers act only on instruction from people, but tomorrow they will become a lot more independent. The assumption that people will always write their software is no longer valid. It is entirely feasible to develop design techniques that harness evolutionary and random-chance principles, and these could become much more sophisticated than today’s primitive genetic algorithms. Many people underestimate the potential for AI-based threats because they assume that all machines and their software must be designed by people, who have limited knowledge, but that is already becoming untrue and will become increasingly so as time goes on. So someone intent on mischief could create a piece of software and release it onto the net, where it could evolve, adapt and take on a life of its own, creating problems for companies while hiding behind anonymity, encryption and distribution. It could be very difficult to find and destroy many such entities.
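To make the ‘primitive genetic algorithms’ remark concrete, here is a minimal sketch of one in Python: random mutation plus selection, evolving a bit string towards an arbitrary target. Every name and parameter here is illustrative, not anyone’s real design:

```python
import random

random.seed(42)

TARGET = [1] * 20  # arbitrary goal: a string of twenty 1s

def fitness(genome):
    # number of positions matching the target
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # flip each bit with small probability
    return [1 - g if random.random() < rate else g for g in genome]

# random starting population of 30 bit strings
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # elitism: keep the top 5 unchanged, refill the rest of the
    # population with mutated copies of those survivors
    elites = population[:5]
    population = elites + [mutate(random.choice(elites)) for _ in range(25)]

print("best fitness:", fitness(population[0]), "of", len(TARGET))
```

Nothing in this toy knows anything about the problem; it just mutates and keeps what works. The worry in the paragraph above is what happens when the same blind recipe is given real goals, real resources and a network to live on.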

Nightmare AI scenarios do not necessarily require someone to be intent on creating mischief. Student pranks or curiosity could be enough. For example, suppose that some top psychology students, synthetic biology students and a few decent hackers spend some time over a few drinks debating whether it is possible to create a conscious AI entity. Even though none of them has any deep understanding of how human consciousness works, or how to make an alternative kind of consciousness, they may have enough combined insight to start a large-scale zombie network, and to seed some crude algorithms as the base for an evolutionary experiment. Their lack of industrial experience also translates into a lack of design prejudice. Given some basic start-point ideas coupled with imaginative thinking, a powerful distributed network of such machines would provide a formidable platform on which to run such an experiment. By making random changes to both algorithms and architecture, and perhaps using a ‘guided evolution’ approach, such an experiment might stumble across some techniques that offer promise, and eventually achieve a crude form of consciousness or advanced intelligence, both of which are dangerous. This might continue its development on its own, out of the direct control of the students. Even if the techniques it uses are very crude by comparison with those used by nature, the processing power and storage available to such a network offer vastly more raw scope than is available even in the human brain, and would perhaps allow an inefficient intelligence to still be superior to that of humans.

Once an AI reaches a certain level of intelligence, it would be capable of hiding, using distribution and encryption to disperse itself around the net. By developing its own techniques to capture more processing resources, it could benefit from a positive feedback loop, accelerating quickly towards a vastly superhuman entity. Although there is no reason to assume that it would necessarily be malicious, there is equally no reason to assume it would be benign. Driven by its own curiosity, it might make humans unintentional victims of its activities, in much the same way as insects on a building site.