Sometimes it is embarrassing being a futurologist. I make predictions on when things should appear based on my own experience as an engineer and how long I reckon it ought to take. Occasionally someone gets there much earlier than I expect, with a radically different solution than I would have used. I have no problem with that. I am a competent engineer but there are plenty of others who are a lot more competent, and I learned to live with that a long time ago. What does annoy me is when things don’t happen on time. Not only do I look bad, but it throws my whole mindset out of line and I have to adjust my picture of what the future looks like. Worse still, it means we don’t get the benefits of the new development I expected.
There are a few examples. The worst error I ever made was predicting that virtual reality would surpass TV in terms of consumption of recreational time by the year 2000. It didn’t. It still hasn’t, not even if you count all the virtual worlds in computer games, which fall far short of immersive VR. I would feel a lot worse about that if I hadn’t got some other stuff right, but this is not a brag blog. Now I am in danger of being wrong on man-machine equivalence in terms of intellect. I am on record any number of times scheduling it around 2015.
As I just blogged, supercomputers have passed human brains in terms of their raw power as measured in instructions per second, though the comparison is a bit apple-orangey. That is more or less on time I guess, but I also hoped that by now we would have a lot more insight into human consciousness than we do and would be able to use the superior raw computer fire-power to come up with computers almost as smart as people in overall terms. I think here we have fallen a bit behind. I have no right to moan. My own work on the topic has sat on the back burner now for several years and still is nowhere near publishable. But surely someone ought to be working on it and getting results? If one supercomputer can do 3 x 10^15 instructions per second, that ought to be better than 25 billion brain cells firing at 200Hz with 1000 equivalent instructions per firing, especially given that only a small fraction of brain cells are ever involved actively in thinking at any one time. With all the nanotech we have now, and brain electrical activity monitoring stuff capable of millimetre resolutions, we should be getting loads of insight into the sorts of processes we need to emulate, or at least with which to seed some sort of evolution engine. Scientists are making progress, but we aren’t there yet.
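For what it’s worth, the sum behind that comparison can be sketched in a few lines. This is just the back-of-envelope arithmetic using the figures above; the 10% active fraction is an illustrative guess to stand in for “only a small fraction of brain cells”, not a measured number:

```python
# Back-of-envelope comparison using the figures quoted above.
# Assumptions: 25 billion neurons, 200 Hz firing rate, 1000 equivalent
# instructions per firing. The 10% active fraction is illustrative only.
supercomputer_ips = 3e15  # instructions per second

neurons = 25e9
firing_rate_hz = 200
instructions_per_firing = 1000

brain_ips_all = neurons * firing_rate_hz * instructions_per_firing
print(f"Brain (all neurons firing): {brain_ips_all:.1e} ips")  # 5.0e+15

active_fraction = 0.1  # illustrative guess, not from any measurement
brain_ips_active = brain_ips_all * active_fraction
print(f"Brain (10% active): {brain_ips_active:.1e} ips")  # 5.0e+14

print(f"Supercomputer advantage: {supercomputer_ips / brain_ips_active:.0f}x")  # 6x
```

Note that with every neuron firing the brain would still edge ahead on these numbers; the supercomputer only wins once you allow for the small active fraction, which is exactly the point being made above.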
Even in AI, the progress is frustrating. There are impressive developments for sure, but where are the ‘hero’ demonstrations most engineers are so fond of? Craig Venter is all over the place jumping up and down with glee after claiming the first artificial life, or at least a bacterium with synthetic DNA. Where’s your ambition? Why aren’t we seeing AI engines registering for GCSEs yet, even in Maths? You would think that basic text recognition and basic sentence parsing would allow at least enough questions on a GCSE Maths exam to be understood and answered to provide a pass mark by now. Instead, we see lots of industrial examples of AI that are in totally different spheres. Come on guys, you’re making us futurologists look bad by not achieving all the things we promised you would. It is no excuse that you never agreed our targets in the first place.
I can only suspect that we are actually seeing a whole lot of relevant progress but it just isn’t visible or connected in the right ways yet. University A is probably doing great, as are B, C and D. Loads of IT and biotech companies are probably doing their bits too, as no doubt are a few secret military research centres. But of course they probably all have their own plans and their own objectives and won’t want to share results until it is in their interests to do so. Perhaps the economic or military potential is just too great to throw it all away by sharing the knowledge too early just to grab a few cheap headlines. Or maybe they aren’t. Maybe all the engineers have given up because it looks too hard a problem, so they are spending their efforts elsewhere. I hope that isn’t true! Or maybe the engineers are too wrapped up in the real work to waste time on silly demos. Better.
I am still getting laughed at regularly because I refuse to adjust my prediction of man-machine equivalence by 2015. But I haven’t given up; I still believe it can happen that soon. If we are still only 1% of the way there, we might still be on schedule. That is the nature of IT and biotech development. The virtuous circle of positive feedback means that almost all the progress happens at the end. That’s what happened with the mapping of the human genome: the first 1% seemed to take forever, and the last 99% happened in the final 2 years, just after the laughter about the claims was starting to die down. The trouble of course is that without knowing all the detail of all the work going on in all the relevant establishments, it is very hard to say when those final 2 years start!
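The arithmetic of that positive feedback loop is worth spelling out. If you assume, purely for illustration, that capability doubles every year, then being at 1% today is nothing like being 99 years away:

```python
import math

# Illustrative assumption: capability doubles every year.
# How many doublings take you from 1% of the goal to 100%?
progress = 0.01
years = 0
while progress < 1.0:
    progress *= 2
    years += 1

print(years)  # 7 doubling periods: 0.01 -> 1.28

# The closed form: log2(100) is about 6.64, so under seven periods.
print(math.log2(1 / 0.01))
```

And by the same logic, half of all the progress arrives in the very last doubling period, which is why a project can look hopeless right up until it suddenly isn’t.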
My major concern is that one or two of the main components seem to be missing. Firstly, most computer scientists seem to be locked into digital thinking. We have a whole generation of computer scientists who have never seen an analog computer. The brain is more like an analog computer than a digital one, but nowhere near enough effort is invested in analog processing now, though there certainly is some. Ask young engineers to design a simple thermostat and I am convinced very few would even consider a basic physics approach such as a bimetallic strip; the rest would probably go straight to the chip catalogues and use a few megalines of code too. Secondly, there is a lack of cross-discipline teaching. Many students do biotech or IT, but too few do both, and those who are educated in both probably have their focus on making better bionics for prostheses or other such obviously worthwhile projects. Thirdly, the evolutionary computing avenue seems to have been largely abandoned long before it was properly explored, and biomimetics is sometimes too rigid in its approach, trying to emulate nature too closely instead of just using it for initial stimulation. But none of these problems is universal, and there are many good scientists and engineers to whom they simply aren’t relevant barriers. So I haven’t given up hope; I still hope that the delays are imaginary rather than real.
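To show what I mean by the evolutionary computing avenue, here is a toy sketch of the core loop: mutate a candidate, keep the child if it is at least as fit. The target string, mutation rate and generation cap are all arbitrary illustrative choices, not anything from real research:

```python
import random

random.seed(42)

# Toy fitness: how many bits of the genome match a fixed target.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Flip each bit independently with the given probability.
    return [1 - g if random.random() < rate else g for g in genome]

# A minimal (1+1) evolutionary loop: one parent, one child per generation.
genome = [random.randint(0, 1) for _ in TARGET]
for generation in range(1000):
    child = mutate(genome)
    if fitness(child) >= fitness(genome):
        genome = child
    if fitness(genome) == len(TARGET):
        break

print(genome == TARGET)  # converges comfortably within 1000 generations
```

Trivial, of course, but the point stands: nothing in that loop needs to know how the answer works, only how to score it, which is exactly why it seems a shame the approach was shelved before anyone pushed it hard at the consciousness problem.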
I think we will find out pretty soon whether that is the case. If 2015 is not a completely wrong date for man-machine equivalence, then we will start to see very impressive results appearing from research labs soon. We will start seeing clear indications that we are on the right track, and scientists and engineers will finally be willing to make their own grand claims of impending success.
If that doesn’t happen, I guess I will eventually have to write it off to experience and accept that at least one more futurologist has set overly optimistic dates for breakthroughs. If it does happen on time, I will never stop yelling “I told you so!”