Category Archives: AI

The future of terminators

The Terminator films were important in making people understand that AI and machine consciousness will not necessarily be a good thing. The terminator scenario has stuck in our terminology ever since.

There is absolutely no reason to assume that a super-smart machine will be hostile to us. There are even some reasons to believe it would probably want to be friends. Smarter-than-man machines could catapult us into a semi-utopian era of singularity level development to conquer disease and poverty and help us live comfortably alongside a healthier environment. Could.

But just because it doesn’t have to be bad, that doesn’t mean it can’t be. You don’t have to be bad but sometimes you are.

It is also the case that even if it means us no harm, we could just happen to be in the way when it wants to do something, and it might not care enough to protect us.

Asimov’s laws of robotics are irrelevant. Any machine smart enough to be a terminator-style threat would presumably take little notice of rules it has been given by what it may consider a highly inferior species. The ants in your back garden have rules to govern their colony and soldier ants trained to deal with invader threats to enforce territorial rules. How much do you consider them when you mow the lawn or rearrange the borders or build an extension?

These arguments are aired in debates every day now.

There are, however, a few points that are less often discussed.

Humans are not always good; indeed, quite a lot of people seem to want to destroy everything most of us want to protect. Given access to super-smart machines, they could design more effective means to do so. The machines might be very benign, wanting nothing more than to help mankind as far as they possibly can, yet be misled into working for such people, believing, in their architected isolation, that the projects are for the benefit of humanity. (The machines might be extremely smart, but may have existed since their inception in a rigorously constructed knowledge environment. To them, that might be the entire world, and we might be introduced as a new threat that needs to be dealt with.) So even benign AI could be an existential threat when it works for the wrong people. The smartest people can sometimes be very naive. Perhaps some smart machines could be deliberately designed to be so.

I speculated ages ago what mad scientists or mad AIs could do in terms of future WMDs:

http://timeguide.wordpress.com/2014/03/31/wmds-for-mad-ais/

Smart machines might be deliberately built for benign purposes and turn rogue later, or they may be built with potential for harm designed in, for military purposes. These might destroy only enemies, but you might be that enemy. Others might do that and enjoy the fun and turn on their friends when enemies run short. Emotions might be important in smart machines just as they are in us, but we shouldn’t assume they will be the same emotions or be wired the same way.

Smart machines may want to reproduce. I used this as the core storyline in my sci-fi book. They may have offspring and with the best intentions of their parent AIs, the new generation might decide not to do as they’re told. Again, in human terms, a highly familiar story that goes back thousands of years.

In the Terminator film, it is a military network that becomes self-aware and goes rogue that is the problem. I don’t believe digital IT can become conscious, but I do believe reconfigurable analog adaptive neural networks could. The cloud is digital today, but it won’t stay that way. A lot of analog devices will become part of it. In

http://timeguide.wordpress.com/2014/10/16/ground-up-data-is-the-next-big-data/

I argued how new self-organising approaches to data gathering might well supersede big data as the foundations of networked intelligence gathering. Much of this could be in the analog domain and much could be neural. Neural chips are already being built.

It doesn’t have to be a military network that becomes the troublemaker. I suggested a long time ago that ‘innocent’ student pranks from somewhere like MIT could be the source. Some smart students from various departments could collaborate to hijack lots of networked kit and see whether they can make a conscious machine. Their algorithms or techniques don’t have to be very efficient if they can hijack enough kit. There is a possibility that such an effort could succeed if the right bits are connected into the cloud and accessible via sloppy security, and the ground-up data industry might well satisfy that prerequisite soon.

Self-organisation technology will make possible extremely effective combat drones.

http://timeguide.wordpress.com/2013/06/23/free-floating-ai-battle-drone-orbs-or-making-glyph-from-mass-effect/

Terminators also don’t have to be machines. They could be organic, products of synthetic biology. My own contribution here is smart yogurt: http://timeguide.wordpress.com/2014/08/20/the-future-of-bacteria/

With IT and biology rapidly converging via nanotech, there will be many ways hybrids could be designed, some of which could adapt and evolve to fill different niches or to evade efforts to find or harm them. Various grey goo scenarios can be constructed that don’t have any miniature metal robots dismantling things. Obviously natural viruses or bacteria could also be genetically modified to make weapons that could kill many people – they already have been. Some could result from seemingly innocent R&D by smart machines.

I dealt a while back with the potential to make zombies too, remotely controlling people – alive or dead. Zombies are feasible this century too:

http://timeguide.wordpress.com/2012/02/14/zombies-are-coming/ &

http://timeguide.wordpress.com/2013/01/25/vampires-are-yesterday-zombies-will-peak-soon-then-clouds-are-coming/

A different kind of terminator threat arises if groups of people are linked at consciousness level to produce super-intelligences. We will have direct brain links mid-century so much of the second half may be spent in a mental arms race. As I wrote in my blog about the Great Western War, some of the groups will be large and won’t like each other. The rest of us could be wiped out in the crossfire as they battle for dominance. Some people could be linked deeply into powerful machines or networks, and there are no real limits on extent or scope. Such groups could have a truly global presence in networks while remaining superficially human.

Transhumans could be a threat to normal un-enhanced humans too. While some transhumanists are very nice people, some are not, and would consider elimination of ordinary humans a price worth paying to achieve transhumanism. Transhuman doesn’t mean better human, it just means humans with greater capability. A transhuman Hitler could do a lot of harm, but then again so could ordinary everyday transhumanists that are just arrogant or selfish, which is sadly a much bigger subset.

I collated these various varieties of potential future cohabitants of our planet in: http://timeguide.wordpress.com/2014/06/19/future-human-evolution/

So there are numerous ways that smart machines could end up as a threat and quite a lot of terminators that don’t need smart machines.

Outcomes from a terminator scenario range from local problems with a few casualties all the way to total extinction, but I think we are still too focused on the death aspect. There are worse fates. I’d rather be killed than converted while still conscious into one of 7 billion zombies and that is one of the potential outcomes too, as is enslavement by some mad scientist.

 

Ground up data is the next big data

This one sat in my draft folder since February, so I guess it’s time to finish it.

Big Data – I expect you’re as sick of hearing that term as I am. Gathering loads of data on everything that you, your company, or anything else you can access can detect, measure or record, then analyzing the hell out of it using data mining, an equally irritating term.

I long ago had a quick Twitter exchange with John Hewitt, who suggested “What is sensing but the energy-constrained competition for transmission to memory, as memory is but that for expression?”. Neurons compete to see who gets listened to. Yeah, but I am still not much wiser as to what sensing actually is. Maybe I need a brain upgrade. (It’s like magnets. I used to be able to calculate the magnetic field densities around objects with complicated shapes – it was part of my first job in missile design – but even though I could do all the equations around EM theory, even general relativity, I still am no wiser how a magnetic field actually becomes a force on an object. I have an office littered with hundreds of neodymium magnets and I spend hours playing with them and I still don’t understand). I can read about neurons all day but I still don’t understand how a bunch of photons triggering a series of electro-chemical reactions results in me experiencing an image. How does the physical detection become a conscious experience?

Well, I wrote some while back that we could achieve a conscious computer within two years. It’s still two years because nobody has started using the right approach yet. I have to stress the ‘could’, because nobody actually intends to do it in that time frame, but I really believe some half-decent lab could if they tried.  (Putting that into perspective, Kurzweil and his gang at Google are looking at 2029.) That two years estimate relies heavily on evolutionary development, for me the preferred option when you don’t understand how something works, as is the case with consciousness. It is pretty easy to design conscious computers at a black box level. The devil is in the detail. I argued that you could make a conscious computer by using internally focused sensing to detect processes inside the brain, and using a sensor structure with a symmetrical feedback loop. Read it:

http://timeguide.wordpress.com/2013/12/28/we-could-have-a-conscious-machine-by-end-of-play-2015/

In a nutshell, if you can feel thoughts in the same way as you feel external stimuli, you’d be conscious. I think. The symmetrical feedback loop bit is just a small engineering insight.
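
As a toy illustration of that insight (my own sketch, not a tested design), imagine one sensing pathway shared by outward-facing sensors and by sensors pointed back at the machine’s own internal state, so that ‘thoughts’ arrive through the same channel as sensations:

```python
import random

def sense(signal):
    # One shared sensing pathway: external stimuli and internal state
    # both pass through the same noisy transformation.
    return [min(1.0, max(0.0, s + random.gauss(0, 0.01))) for s in signal]

internal_state = [0.5, 0.5, 0.5]   # toy 'thought' variables

for step in range(10):
    external = [random.random() for _ in range(3)]   # outward-facing sensors
    felt_world = sense(external)                     # feeling the world
    felt_self = sense(internal_state)                # feeling its own processes
    # Symmetrical feedback: the next internal state is shaped by both streams,
    # so the system's next 'thoughts' are partly sensations of its last ones.
    internal_state = [0.5 * w + 0.5 * s for w, s in zip(felt_world, felt_self)]
```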

The missing link in that is still the same one: how does sensing work? How do you feel?

At a superficial level, you point a sensor at something and it produces a signal in some sort of relationship to whatever it is meant to sense. We can do that bit. We understand that. Your ear produces signals according to the frequencies and amplitudes of incoming sound waves, a bit like a microphone. Just the same so far. However, it is by some undefined processes later that you consciously experience the sound. How? That is the hard problem in AI. It isn’t just me that doesn’t know the answer. ‘How does red feel?’ is a more commonly used variant of the same question.

When we solve that, we will replace big data as ‘the next big thing’. If we can make sensor systems that experience or feel something rather than just producing a signal, that’s valuable already. If those sensors pool their shared experience, another similar sensor system could experience that. Basic data quickly transmutes into experience, knowledge, understanding, insight and very quickly, value, lots of it. Artificial neural nets go some way to doing that, but they still lack consciousness. Simulated neural networks can’t even get beyond a pretty straightforward computation, putting all the inputs into an equation. The true sensing bit is missing. The complex adaptive analog neural nets in our brain clearly achieve something deeper than a man-made neural network.
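
To see how shallow that computation is, here is essentially everything a simulated neuron does, sketched in a few lines:

```python
import math

def artificial_neuron(inputs, weights, bias):
    # The whole 'computation': multiply, add, squash. A number goes in,
    # a number comes out. Nothing in here experiences anything.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))   # sigmoid squashing function

print(artificial_neuron([0.2, 0.9, 0.4], [1.5, -0.8, 0.3], bias=0.1))
```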

Meanwhile, most current AI work barks up a tree in a different forest. IBM’s Watson will do great things; Google’s search engine AI will too. But they aren’t conscious and can’t be. They’re just complicated programs running on digital processors, with absolutely zero awareness of anything they are doing. Digital programs on digital computers will never achieve any awareness, no matter how fast the chips are.

However, back in the biological realm, nature manages just fine. So biomimetics offers a lot of hope. We know we didn’t get from a pool of algae to humans in one go. At some point, organisms started moving according to light, chemical gradients, heat, touch. That most basic process of sensing may have started out coupled to internal processes that caused movement without any consciousness. But if we can understand the analog processes (electrochemical, electronic, mechanical) that take the stimulus through to a response, and can replicate it using our electronic technology, we would already have actuator circuits, even if we don’t have any form of sensation or consciousness yet. A great deal of this science has been done already of course. The computational side of most chemical and physical processes can be emulated electronically by some means or another. Actuators will be a very valuable part of the cloud, but we already have the ability to make actuators by more conventional means, so doing it organically or biomimetically just adds more actuation techniques to the portfolio. Valuable but not a terribly important breakthrough.

Looking at the system a bit further along the evolutionary timeline, where eyes start to develop, where the most primitive nervous systems and brains start, where higher level processing is obviously occurring and inputs are starting to become sensations, we should be able to see what is changed or changing. It is the emergence of sensation we need to identify, even if the reaction is still an unconscious reflex. We don’t need to reverse engineer the human brain. Simple organisms are simpler to understand. Feeding the architectural insights we gain from studying those primitive systems into our guided evolution engines is likely to be a far faster means of generating true machine consciousness and strong AI. That’s how we could develop consciousness in a couple of years rather than 15.

If we can make primitive sensing devices that work like those in primitive organisms, and that can respond to specific sorts of sensory input, then that is a potential way of increasing the coverage of cloud sensing and even actuation. It would effectively be a highly distributed direct response system. With clever embedding of emergent phenomena techniques (such as cellular automata, flocking etc.), it could be a quite sophisticated way of responding to quite complex distributed inputs, avoiding some of the need for big data processing. If we can gather the outputs from these simple sensors and feed them into others, that will be an even better sort of biomimetic response system. That sort of direct experience of a situation is very different from a data-mined result, especially if actuation capability is there too. The philosophical question as to whether the inclusion of that second bank of sensors makes the system in any way conscious remains, but it would certainly be very useful and valuable. The architecture we end up with via this approach may look like neurons, and could even be synthetic neurons, but that may be only one solution among many. Biology may have gone the neuron route but that doesn’t necessarily mean it is the only possibility. It may be that we could one day genetically modify bacteria to produce their own organic electronics to emulate the key processes needed to generate sensation, and to power them by consuming nutrients from their environment. I suggested smart yogurt based on this idea many years ago, and believe that it could achieve vast levels of intelligence.
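
For a flavour of how simple those emergent phenomena techniques can be, here is a minimal sketch of a one-dimensional cellular automaton (Wolfram’s rule 110): each cell responds only to its two immediate neighbours, yet rich global patterns emerge.

```python
def step(cells, rule=110):
    # Each cell looks at (left, self, right); the rule number's bits encode
    # the response to each of the 8 possible neighbourhood patterns.
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 40 + [1] + [0] * 40          # one live cell in the middle
for _ in range(20):
    print(''.join('#' if c else '.' for c in cells))
    cells = step(cells)
```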

Digitizing and collecting the signals from the system at each stage would generate lots of data, and that may be used by programs to derive other kinds of results, or to relay the inputs to other analog sensory systems elsewhere. (It isn’t always necessary to digitize signals to transmit them, but it helps limit signal degradation, quickly becomes important if the signal is to travel far, and is essential if it is to be recorded for later use or time shifting.) However, I strongly suspect that most of the value in analog sensing and direct response is local, coupled to direct action or local processing and storage.

If we have these sorts of sensors liberally spread around, we’d create a truly smart environment, with local sensing and some basic intelligence able to relay sensation remotely to other banks of sensors elsewhere for further processing or even ultimately consciousness. The local sensors could be relatively dumb like nerve endings on our skin, feeding in signals to a more connected virtual nervous system, or a bit smarter, like retinal neurons, doing a lot of analog pre-processing before relaying signals via ganglion cells, and maybe forming part of a virtual brain. If they are also capable of or connected to some sort of actuation, then we would be constructing a kind of virtual organism, with tendrils covering potentially the whole globe, and able to sense and interact with its environment in an intelligent way.

I use the term virtual not because the sensors wouldn’t be real, but because their electronic nature allows connectivity to many systems, overlapping, hierarchical or distinct. Any number of higher level systems could ‘experience’ them as part of its system, rather as if your fingers could be felt by the entire human population. Multiple higher level virtual organisms could share the same basic sensory/data inputs. That gives us a whole different kind of cloud sensing.
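
A crude sketch of that sharing (illustrative only; the names are invented): one sensor publishes the same raw sensation to any number of overlapping higher-level systems, none of which owns it exclusively.

```python
class SharedSensor:
    def __init__(self, name):
        self.name = name
        self.subscribers = []                  # higher-level 'organisms'

    def publish(self, reading):
        # The same raw sensation feeds every overlapping system at once.
        for organism in self.subscribers:
            organism.experience(self.name, reading)

class VirtualOrganism:
    def __init__(self, name):
        self.name = name

    def experience(self, sensor, reading):
        print(f"{self.name} feels {reading} via {sensor}")

fingertip = SharedSensor("fingertip-4711")
fingertip.subscribers += [VirtualOrganism("city-monitor"), VirtualOrganism("global-mind")]
fingertip.publish(0.73)
```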

By doing processing locally, in the analog domain, and dealing with some of the response locally, a lot of traffic across the network is avoided, along with a lot of remote processing. Any post-processing that does occur can therefore build on a higher-level foundation. A nice side effect of avoiding all the extra transmission and processing is increased environmental friendliness.

So, we’d have a quite different sort of data network, collecting higher quality data, essentially doing by instinct what data mining does with huge server farms and armies of programmers. Cloudy, but much smarter than a straightforward sensor net.

… I think.

It isn’t without risk though. I had a phone discussion yesterday on the dangers of this kind of network. In brief, it’s dangerous.

The future of questions

The late Douglas Adams had many great ideas. One of the best was the computer Deep Thought, built to answer the ultimate question of ‘life, the universe and everything’, which took 7.5 million years to come up with the answer 42. It then had to design a far bigger machine to determine what the question actually was.

Finding the right question is often much harder than answering it. Much of observational comedy is based on asking the simplest questions that we just happen never to have thought of asking before.

A good industrial illustration is in network design. A long time ago I used to design computer communication protocols, actually a pretty easy job for junior engineers. While doing one design, I discovered a flaw in a switch manufacturer’s design that would allow data networks to be pushed into a gross overload situation and crashed repeatedly by a single phone call. I simply asked a question that hadn’t been asked before. My question was “can computer networks be made to resonate dangerously?” That’s the sort of question bridge designers have asked every time they’ve built a bridge since Roman times, with the notable exception of the designers of London’s Millennium Bridge, who had to redesign theirs. All I did was apply a common question from one engineering discipline to another. I did that because I was trained as a systems engineer, not as a specialist. It only took a few seconds to answer in my head and a few hours to prove it via simulation, so it was a pretty simple question to answer (yes they can), but it had taken many years before anyone bothered to ask it.
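
The effect is easy to reproduce in a toy model. This sketch is my illustration rather than the original simulation: when every failed sender shares the same retransmission timeout, an overload burst re-arrives intact at regular intervals, and the network ‘rings’.

```python
CAPACITY, TIMEOUT, BACKGROUND, TICKS = 50, 5, 45, 40
retries = {0: 100}   # one burst of excess traffic (the 'single phone call')

for tick in range(TICKS):
    arrivals = BACKGROUND + retries.pop(tick, 0)
    served = min(arrivals, CAPACITY)
    dropped = arrivals - served
    if dropped:
        # All failed senders share the same fixed timeout, so the excess
        # arrives together again TIMEOUT ticks later: resonance.
        retries[tick + TIMEOUT] = retries.get(tick + TIMEOUT, 0) + dropped
    print(f"tick {tick:2d}: load={arrivals:3d} dropped={dropped:3d}")
```

In this toy version the ringing slowly damps; add duplicated retransmissions or alarm traffic feeding back in and it grows instead.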

More importantly, that question couldn’t have been asked much before the 20th century, because the basic knowledge or concept of a computer network wasn’t there yet. It isn’t easy to think of a question that doesn’t derive from existent culture (which includes the full extent of fiction of course). As new ideas are generated by asking and answering questions, so the culture gradually extends, and new questions become possible. But we don’t ask them all, only a few. Even with the culture and knowledge we already have at any point, it is possible to ask far more questions, and some of them will lead to very interesting answers and a few of those could change the world.

Last night I had a dream where I was after-dinner speaking to some wealthy entrepreneurs (that sort of thing is my day job). One of them challenged me that ideas were hard to come by and as proof of his point asked me why the wheel had never been reinvented (actually it is often reinvented, just like the bicycle – all decent engineers have reinvented the bicycle to some degree at some point, and if you haven’t yet, you probably will. You aren’t allowed to die until you have). Anyway, I invented the plasma caterpillar track there and then as my answer to show that ideas are ten a penny and that being an entrepreneur is about having the energy and determination to back them, not the idea itself. That’s why I stick with ideas, much less work. Dreams often violate causality, at least mine do, and one department of my brain obviously contrived that situation to air an idea from the R&D department, but in the dream it was still the question that caused the invention. Plasma caterpillar tracks are a dream-class invention. Once daylight appears, you can see that they need work, but in this case, I also can see real potential, so I might do that work, or you can beat me to it. If you do and you get rich, buy me a beer. Sorry, I’m rambling.

How do you ask the right question? How do you even know what area to ask the right question in? How do you discover what questions are possible to ask? Question space may be infinite, but we only have a map of a small area with only a few paths and features on it. Some tools are already known to work well and thousands of training staff use them every day in creativity courses.

One solution is to try to peel back and ask what it is you are really trying to solve. Maybe the question isn’t ‘what logo should we use?’ but ‘what image do we want to present?’, or is it ‘how can we appeal to those customers?’ or ‘how do we improve our sales?’ or ‘how do we get more profit?’ or ‘how can we best serve shareholders?’. Each layer generates different kinds of answers.

Another mechanism I use personally is to matrix solutions and industries, applying questions or solutions from one industry to another, or notionally combining random industries. A typical example: Take TV displays and ask why can’t makeup also change 50 times a second? If the answer isn’t obvious, look at how nature does displays, can some of those techniques be dragged into makeup? Yes, they can, and you could make smart makeup using similar micro-structures to those that butterflies and beetles use and use the self-organisation developing in materials science to arrange the particles automatically.

Dragging solutions and questions from one area to another often generates lots of ideas. Just list every industry sector you can think of (and nature), and all the techs or techniques or procedures they use and cross reference every box against every other. By the time you’ve filled in every box, it will be long overdue to start again because they’ll all have moved on.
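
That matrix is trivially mechanisable. A sketch, with placeholder lists:

```python
from itertools import product

sectors = ["TV displays", "makeup", "farming", "banking", "nature"]
techniques = ["self-organisation", "micro-structured surfaces",
              "swarming", "resonance analysis"]

# Cross every sector/technique pair against every other sector and ask the
# question; most combinations are junk, but a few will be gold.
for (source, technique), target in product(product(sectors, techniques), sectors):
    if source != target:
        print(f"Could {target} borrow the {technique} of {source}?")
```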

But however effective they are, these mechanistic techniques only fill in some of the question space, and some of that can be addressed at least partly by AI. There is still a vast area unexplored, even with existing knowledge. Following paths is fine, but you need to explore off-road too. Group-think and cultural immersion stand in the way of true creativity. You can’t avoid your mind being directed in particular directions that have been ingrained since birth, some of them genetic.

That leads some people to the conclusion that you need young fresh minds rather than older ones, but it isn’t just age that determines creativity, it is susceptibility to authority too, essentially thinking how you’re told to think. Authority isn’t just parents and teachers, or government, but colleagues and friends, mainly your peer group. People often don’t see peers as authority but needing their approval is as much a force as any. I am frequently amused spotting young people on the tube that clearly think they are true individuals with no respect for authority. They stick out a mile because they wear the uniform that all the young people who are individuals and don’t respect authority wear. It’s almost compulsory. They are so locked in by the authority and cultural language of those they want to impress by being different that they all end up being the same. Many ‘creatives’ suffer the same problem, you can often spot them from a distance too, and it’s a fairly safe bet that their actual creativity is very bounded. The fact is that some people are mentally old before they leave school and some die of old age yet still young in mind and heart.

How do you solve that? Well, apart from being young, one aspect of being guided down channels via susceptibility to authority is understanding the rules. If you are too new in a field to know how it works, who everyone is, how the tools work, or even most of the basic fundamental knowledge of the field, then you are in an excellent position to ask the right questions. Some of my best ideas have come when I have just started in a new area. I do work in every sector now, so my mind is spread very thinly, and it’s always easy to generate new ideas when you aren’t prejudiced by in-depth knowledge. If I don’t know that something can’t work, that you tried it ages ago and it didn’t, so you put it away and forgot about it, then I might think of it, and the technology might well have moved on since then and it might work now, or in 10 years’ time when I know the tech will catch up. I forget stuff very quickly too, and although that can be a real nuisance it also minimizes prejudices, so it can aid this ‘creativity via naivety’.

So you could make sure that staff get involved in other people’s projects regularly, often with those in different parts of the company. Make sure they go on occasional workshops with others to ensure cross-fertilization. Make sure you have coffee areas and coffee times that make people mix and chat. The coffee break isn’t time wasted. It won’t generate new products or ideas every day but it will sometimes.

Cultivating a questioning culture is good too. Just asking obvious questions as often as you can is good. Why is that there? How does it work? What if we changed it? What if the factory burned down tomorrow, how would we rebuild it? Why the hell am I filling in this form?

Yet another one is to give people ‘permission’ to think outside the box. Many people have to follow procedures in their jobs for very good reasons, so they don’t always naturally challenge the status quo, and many even pursue careers that tend to be structured and ordered. There is nothing wrong with that, each to their own, but sometimes people in any area might need to generate some new ideas. A technique I use is to present some really far future and especially seemingly wacky ones to them before making them do their workshop activity. Having listened to some moron talking probable crap and getting away with it gives them permission to generate some wacky ideas too, and some invariably turn out to be good ones.

These techniques can improve everyday creativity but they still can’t generate enough truly out of the box questions to fill in the map.

I think what we need is the random question generator. There are a few random question generators out there now. Some ask mathematical questions to give kids practice before exams. Some just ask random pre-written questions from a database. They aren’t the sort we need though. We won’t be catapulted into a new era of enlightenment by being asked the answer to 73+68, or questions that were already on a list. Maybe I should have explored more pages on Google, but most seemed to bark up the wrong tree. The better approach might be to copy random management jargon generators. Tech jargon ones exist too. Some are excellent fun. They are the sort we need. They combine various words from long categorized lists in grammatically plausible sequences to come out with plausible-sounding terms. I am pretty sure that’s how they write MBA courses.

We can extend that approach to use a full vocabulary. If a question generator asks random questions using standard grammatical rules and a basic dictionary attack (a first-stage filtration process), most of the questions filtering through would still not make sense (e.g. why are all moons square?). But now we have AI engines that can parse sentences and filter out nonsensical ones, or ones that simply contradict known facts, and the web is getting a lot better at being machine-comprehensible. Careful though, some of those facts might not be facts any more.
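
A first-stage sketch of such a generator, assuming nothing more than word lists and a few grammatical templates; the AI filtration would sit downstream of this:

```python
import random

subjects   = ["moons", "bridges", "bacteria", "networks", "memories"]
properties = ["square", "conscious", "resonant", "self-repairing", "contagious"]
templates  = ["Why aren't all {s} {p}?",
              "What would happen if {s} became {p}?",
              "Could {s} be made {p} deliberately?"]

def random_question():
    # Grammatically plausible by construction; making sense is the next
    # stage's problem, as is checking whether it was asked before.
    return random.choice(templates).format(s=random.choice(subjects),
                                           p=random.choice(properties))

for _ in range(5):
    print(random_question())
```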

After this AI filtration stage, we’d have a lot of questions that do make sense. A next stage filtration could discover which ones have already been asked and which of those have also been answered, and which of those answers have been accepted as valid. These stages will reveal some questions still awaiting proper responses, or where responses are dubious or debatable. Some will be about trivia, but some will be in areas that might seem to be commercially or socially valuable.

Some of the potentially valuable ones would be suited to machines to answer too. So they could start using spare cycles on machines to increase knowledge that way. Companies already do this internally with their big data programs for their own purposes, but it could work just as well as a global background task for humanity as a whole, with the whole of the net as one of its data sources. Machines could process data and identify potential new markets or products or identify social needs, and even suggest how they could be addressed and which companies might be able to do them. This could increase employment and GDP and solve some social issues that people weren’t even aware of.

Many would not be suited to AI, and humans could search them for inspiration. Maybe we could employ people in developing countries as part of aid programs. That provides income and utilizes the lack of prejudice that comes with unfamiliarity with our own culture. Another approach is to put the growing question database online, so people could make apps that deliver randomly selected questions to inspire you when you’re bored. There would be enough questions to make sure you are usually the first ever to have seen yours. When you see one, you could rate it as meaningless, don’t care, interesting, or wow that’s a really good question, maybe some other boxes. Obviously you could also produce answers and link to them too. Lower markings would decrease a question’s reappearance probability, whereas really interesting ones would be seen by lots of people and some would be motivated to produce great answers.

Would it work? How could this be improved? What techniques might lead us to the right questions? Well, I just asked those ones and this blog is my first attempt at an answer. Feel free to add yours.

 

 

The future of Jelly Babies

Another frivolous ‘future of’, recycled from 10 years ago.

I’ve always loved Jelly Babies (Jelly Bears would work as well if you prefer those), and remember that Dr Who used to eat them a lot too. Perhaps we all have a mean streak, but I’m sure most of us sometimes bite off their heads before eating the rest. But that might all change. I must stress at this point that I have never even spoken to anyone from Bassetts, who make the best ones, and I have absolutely no idea what plans they might have, and they might even strongly disapprove of my suggestions, but they certainly could do this if they wanted, as could anyone else who makes Jelly Babies or Jelly Bears or whatever.

There will soon be various forms of edible electronics. Some electronic devices can already be swallowed, including a miniature video camera that can take pictures all the way as it proceeds through your digestive tract (I don’t know whether they bother retrieving them though). Some plastics can be used as electronic components. We also have loads of radio frequency identification (RFID) tags around now. Some tags work in groups, recording whether they have been separated from each other at some point, for example. With nanotech, we will be able to make tags using little more than a few well-designed molecules, and few materials are so poisonous that a few molecules can do you much harm, so they should be sweet-compliant. So, extrapolating a little, it seems reasonable to expect that we might be able to eat things that have specially made RFID tags in them. It would make a lot of sense. They could be used on fruit so that someone buying an apple could ingest the RFID tag on it without concern. As well as RFID tags, many other electronic devices can be made very small, and out of fairly safe materials too.

So I propose that Jelly Baby manufacturers add three organic RFID tags to each jelly baby (legs, head and body), some processing, and a simple communications device. When someone bites the head off a jelly baby, the jelly baby would ‘know’, because the tags would now be separated. The other electronics in the jelly baby could then come into play, setting up a wireless connection to the nearest streaming device and screaming through the loudspeakers. It could also link to the rest of the jelly babies left in the packet, sending out a radio distress call. The other jelly babies, and any other friends they can solicit help from via the internet, could then use their combined artificial intelligence to organise a retaliatory strike on the person’s home computer. They might be able to trash the hard drive, upload viruses, or post a stroppy complaint on social media about the person’s cruelty.
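
In pseudo-firmware terms, the decapitation detector is just a group-integrity check. An entirely hypothetical sketch, since no such tag API exists:

```python
# Hypothetical decapitation detector: three tags, each able to report
# whether it still senses the other two nearby.
TAGS = {"head", "body", "legs"}

def tags_in_range(tag):
    # Stand-in for a real RFID proximity query; here we simulate a
    # freshly bitten-off head.
    return set() if tag == "head" else TAGS - {"head"}

def check_integrity():
    for tag in TAGS:
        missing = TAGS - {tag} - tags_in_range(tag)
        if missing:
            return f"ALERT: {', '.join(missing)} separated. Sending distress call!"
    return "All parts present. The jelly baby rests easy."

print(check_integrity())
```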

This would make eating jelly babies even more fun than today. People used to spend fortunes going on safari to shoot lions. I presume it was exciting at least in part because there was always a risk that you might not kill the lion and it might eat you instead. With our environmentally responsible attitudes, it is no longer socially acceptable to hunt lions, but jelly babies could be the future replacement. As long as you eat them in the right order, with the appropriate respect and ceremony and so on, you would just enjoy eating a nice sweet. If you get it wrong, your life is trashed for the next day or two. That would level the playing field a bit.

Jelly Baby anyone?

The future of I

Me, myself, I, identity, ego, self, lots of words for more or less the same thing. The way we think of ourselves evolves just like everything else. Perhaps we are still cavemen with better clothes and toys. You may be a man, a dad, a manager, a lover, a friend, an artist and a golfer, and those are all just descendants of caveman, dad, tribal leader, lover, friend, cave drawer and stone thrower. When you play Halo as Master Chief, that is not very different from acting or putting a tiger skin on for a religious ritual. There have always been many aspects of identity and people have always occupied many roles simultaneously. Technology changes, but it still pushes the same buttons that we evolved hundreds of thousands of years ago.

Will we develop new buttons to push? Will we create any genuinely new facets of ‘I’? I wrote a fair bit about aspects of self when I addressed the related topic of gender, since self perception includes perceptions of how others perceive us and attempts to project chosen identity to survive passing through such filters:

http://timeguide.wordpress.com/2014/02/14/the-future-of-gender-2/

Self is certainly complex. Using ‘I’ simplifies the problem. When you say ‘I’, you are communicating with someone, (possibly yourself). The ‘I’ refers to a tailored context-dependent blend made up of a subset of what you genuinely consider to be you and what you want to project, which may be largely fictional. So in a chat room where people often have never physically met, very often, one fictional entity is talking to another fictional entity, with each side only very loosely coupled to reality. I think that is different from caveman days.

Since chat rooms started, virtual identities have come a long way. As well as acting out manufactured characters such as the heroes in computer games, people fabricate their own characters for a broad range of ‘shared spaces’, designing personalities and acting them out. They may run that personality instance in parallel with many others, possibly dozens at once. Putting on an act is certainly not new, and friends easily detect acts in normal interactions when they have known a real person a long time, but online interactions can mean that the fictional version is presented as the only manifestation of self that the group sees. With no other means to know that person by face-to-face contact, that group has to take them at face value and interact with them as such, though they know that may not represent reality.

These designed personalities may be designed to give away as little as possible of the real person wielding them, and may exist for a range of reasons, but in such a case the person inevitably presents a shallow image. Probing below the surface must inevitably lead to leakage of the real self. New personality content must be continually created and remembered if the fictional entity is to maintain a disconnect from the real person. Holding the in-depth memory necessary to recall full personality aspects and history for numerous personalities and executing them is beyond most people. That means that most characters in shared spaces take on at least some characteristics of their owners.

But back to the point. These fabrications should be considered as part of that person. They are an ‘I’ just as much as any other ‘I’. Only their context is different. Those parts may only be presented to subsets of the role population, but by running them, the person’s brain can’t avoid internalizing the experience of doing so. They may be partly separated but they are fully open to the consciousness of that person. I think that as augmented and virtual reality take off over the next few years, we will see their importance grow enormously. As virtual worlds start to feel more real, so their anchoring and effects in the person’s mind must get stronger.

More than a decade ago, AI software agents started inhabiting chat rooms too, and in some cases these ‘bots’ become a sufficient nuisance that they get banned. The front that they present is shallow but can give an illusion of reality. In some degree, they are an extension of the person or people that wrote their code. In fact, some are deliberately designed to represent a person when they are not present. The experiences that they have can’t be properly internalized by their creators, so they are a very limited extension to self. But how long will that be true? Eventually, with direct brain links and transhuman brain extensions into cyberspace, the combined experiences of I-bots may be fully available to consciousness just the same as first hand experiences.

Then it will get interesting. Some of those bots might be part of multiple people. People’s consciousnesses will start to overlap. People might collect them, or subscribe to them. Much as you might subscribe to my blog, maybe one day, part of one person’s mind, manifested as a bot or directly ‘published’, will become part of your mind. Some people will become absorbed into the experience and adopt so many that their own original personality becomes diluted to the point of disappearance. They will become just an interference pattern of numerous minds. Some will be so infectious that they will spread widely. For many, it will be impossible to die, and for many others, their minds will be spread globally. The hive minds of Dr Who, then later the Borg on Star Trek are conceptual prototypes but as with any sci-fi, they are limited by the imagination of the time they were conceived. By the time they become feasible, we will have moved on and the playground will be far richer than we can imagine yet.

So, ‘I’ has a future just as everything else. We may have just started to add extra facets a couple of decades ago, but the future will see our concept of self evolve far more quickly.

Postscript

I got asked by a reader whether I worry about this stuff. Here is my reply:

It isn’t the technology that worries me so much as that humanity doesn’t really have any fixed anchor to keep human nature in place. Genetics fixed our biological nature, and our values and morality were largely anchored by the main religions. We in the West have thrown our religion in the bin and are already seeing a 30-year cycle in moral judgments, which puts our value sets on something of a random walk, with no destination, the current direction governed solely by media interpretation of, and political reaction to, the happenings of the day. Political correctness enforces subscription to that value set even more strictly than any bishop ever forced religious compliance. Anyone that thinks religion has gone away just because people don’t believe in God any more is blind.

Then as genetics technology truly kicks in, we will be able to modify some aspects of our nature. Who knows whether some future busybody will decree that a particular trait must be filtered out because it doesn’t fit his or her particular value set? Throwing AI into the mix as a new intelligence alongside us will introduce another degree of freedom. So there are already several forces acting on us in pretty randomized directions that can combine to drag us quickly anywhere. Then add the stuff above, which allows us to share and swap personality. Sure I worry about it. We are like young kids being handed a big chemistry set for Christmas without the instructions, not knowing that adding the blue stuff to the yellow stuff and setting it alight will go bang.

I am certainly no technotopian. I see the enormous potential that the tech can bring and it could be wonderful and I can’t help but be excited by it. But to get that you need to make the right decisions, and when I look at the sorts of leaders we elect and the sorts of decisions that are made, I can’t find the confidence that we will make the right ones.

On the good side, engineers and scientists are usually smart and can see most of the issues and prevent most of the big errors by using common industry standards, so there is a parallel self-regulatory system in place that politicians rarely have any interest in. On the other side, those smart guys will unfortunately usually follow the same value sets as the rest of the population. So we’re quite likely to avoid major accidents and blowing ourselves up or being taken over by AIs. But we’re unlikely to avoid the random walk values problem, and that will be our downfall.

So it could be worse, but it could be a whole lot better too.

 

The future of death

This one is a cut and paste from my book You Tomorrow.

Although age-related decline can be postponed significantly, it will eventually come. But that is just biological decline. In a few decades, people will have their brains linked to the machine world and much of their mind will be online, and that opens up the strong likelihood that death is not inevitable, and in fact anyone who expects to live past 2070 biologically (and rich people who can get past 2050) shouldn’t need to face death of their mind. Their bodies will eventually die, but their minds can live on, and an android body will replace the biological one they’ve lost.

Death used to be one of the great certainties of life, along with taxes. But unless someone under 35 now is unfortunate enough to die early from accident or disease, they have a good chance of not dying at all. Let’s explore that.

Genetics and other biotechnology will work with advanced materials technology and nanotechnology to limit and even undo damage caused by disease and age, keeping us young for longer, eventually perhaps forever. It remains to be seen how far we get with that vision in the next century, but we can certainly expect some progress in that area. We won’t get biological immortality for a good while, but if you can move into a high quality android body, who cares?

With this combination of technologies locked together with IT in a positive feedback loop, we will certainly eventually develop the technology to enable a direct link between the human brain and the machine, i.e. the descendants of today’s computers. On the computer side, neural networks are already the routine approach to many problems and are based on many of the same principles that neurons in the brain use. As this field develops, we will be able to make a good emulation of biological neurons. As it develops further, it ought to be possible on a sufficiently sophisticated computer to make a full emulation of a whole brain. Progress is already happening in this direction.

Meanwhile, on the human side, nanotechnology and biotechnology will also converge so that we will have the capability to link synthetic technology directly to individual neurons in the brain. We don’t know for certain that this is possible, but it may be possible to measure the behaviour of each individual neuron using this technology and to signal this behaviour to the brain emulation running in the computer, which could then emulate it. Other sensors could similarly measure and allow emulation of the many chemical signalling mechanisms that are used in the brain. The computer could thus produce an almost perfect electronic equivalent of the person’s brain, neuron by neuron. This gives us two things.

Firstly, by doing this, we would have a ‘backup’ copy of the person’s brain, so that in principle, they can carry on thinking, and effectively living, long after their biological body and brain has died. At this point we could claim effective immortality. Secondly, we have a two way link between the brain and the computer which allows thought to be executed on either platform and to be signalled between them.

There is an important difference between the brain and computer already that we may be able to capitalise on. In the brain’s neurons, signals travel at hundreds of metres per second. In a free space optical connection, they travel at hundreds of millions of metres per second, millions of times faster. Switching speeds are similarly faster in electronics. In the brain, cells are also very large compared to the electronic components of the future, so we may be able to reduce the distances over which the signals have to travel by another factor of 100 or more. But this assumes we take an almost exact representation of brain layout. We might be able to do much better than this. In the brain, we don’t appear to use all the neurons, (some are either redundant or have an unknown purpose) and those that we do use in a particular process are often in groups that are far apart. Reconfigurable hardware will be the norm in the 21st century and we may be able to optimize the structure for each type of thought process. Rearranging the useful neurons into more optimal structures should give another huge gain.

This means that our electronic emulation of the brain should behave in a similar way but much faster – maybe billions of times faster! It may be able to process an entire lifetime’s thoughts in a second or two. But even then, there are several opportunities for vast improvement. The brain is limited in size by a variety of biological constraints. Even if there were more space available, it could not be made much more efficient by making it larger, because of the need for cooling, energy and oxygen supply taking up ever more space and making distances between processors larger. In the computer, these constraints are much more easily addressable, so we could add large numbers of additional neurons to give more intelligence. In the brain, many learning processes stop soon after birth or in childhood. There need be no such constraints in computer emulations, so we could learn new skills as easily as in our infancy. And best of all, the computer is not limited by the memory of a single brain – it has access to all the world’s information and knowledge, and huge amounts of processing outside the brain emulation. Our electronic brain could be literally the size of the planet – the whole internet and all the processing and storage connected to it.
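
The back-of-envelope arithmetic behind that claim, with order-of-magnitude guesses for the inputs:

```python
neural_speed = 2e2     # m/s: signal speed along neurons (order of magnitude)
optical_speed = 3e8    # m/s: free-space optical signalling
speed_gain = optical_speed / neural_speed     # ~1.5 million times faster
distance_gain = 100    # shorter paths thanks to far smaller components

print(f"combined gain ~ {speed_gain * distance_gain:.0e}x")   # ~2e+08
# Faster switching and optimized neuron layouts push this toward billions.
```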

With all these advances, the computer emulation of the brain could be many orders of magnitude superior to its organic equivalent, and yet it might be connected in real time to the original. We would have an effective brain extension in cyberspace, one that gives us immeasurably improved performance and intelligence. Most of our thoughts might happen in the machine world, and because of the direct link, we might experience them as if they had occurred inside our head.

Our brains are in some ways equivalent in nature to how computers were before the age of the internet. They are certainly useful, but communication between them is slow and inefficient. However, when our brains are directly connected to machines, and those machines are networked, then everyone else’s brains are also part of that network, so we have a global network of people’s brains, all connected together, with all the computers too.

So we may soon eradicate death. By the time today’s children are due to die, they will have been using brain extensions for many years, and backups will be taken for granted. Death need not be traumatic for our relatives. They will soon get used to us walking around in an android body. Funerals will be much more fun as the key participant makes a speech about what they are expecting from their new life. Biological death might still be unpleasant, but it need no longer be a career barrier.

In terms of timescales, rich people might have this capability by 2050 and the rest of us some time before 2070. Your life expectancy biologically is increasing every year, so even if you are over 35, you have a pretty good chance of surviving long enough to gain. Half the people alive today are under 35 and will almost certainly not die fully. Many more are under 50 and some of them will live on electronically too. If you are over 50, the chances are that you will be the last generation of your family ever to have a full death.

As a side-note, there are more conventional ways of achieving immortality. Some Egyptian pharaohs are remembered because of their great pyramids. A few philosophers, artists, engineers and scientists have left such great works that they are remembered millennia later. And of course, on a small scale, for the rest of us, making an impression on those around us keeps your memory going a few generations. Writing a book immortalises your words. And you may have a multimedia headstone on your grave, or one that at least links into augmented reality to bring up your old web page or social networking site profile. But frankly, I am with Woody Allen on this one: “I don’t want to achieve immortality through my work; I want to achieve immortality through not dying”. I just hope the technology arrives early enough.

The future of creativity

Another future of… blog.

I can play simple tunes on a guitar or keyboard. I compose music, mostly just bashing out some random sequences till a decent one happens. Although I can’t offer any Mozart-level creations just yet, doing that makes me happy. Electronic keyboards raise an interesting point for creativity. All I am actually doing is pressing keys, I don’t make sounds in the same way as when I pick at guitar strings. A few chips monitor the keys, noting which ones I hit and how fast, then producing and sending appropriate signals to the speakers.

The point is that I still think of it as my music, even though all I am doing is telling a microprocessor what to do on my behalf. One day, I will be able to hum a few notes or tap a rhythm with my fingers to give the computer some idea of a theme, and it will produce beautiful works based on my idea. It will still be my music, even when 99.9% of the ‘creativity’ is done by an AI. We will still think of the machines and software just as tools, and we will still think of the music as ours.

The other arts will be similarly affected. Computers will help us build on the merest hint of human creativity, enhancing our work and enabling us to do much greater things than we could achieve by our raw ability alone. I can’t paint or draw for toffee, but I do have imagination. One day I will be able to produce good paintings, design and make my own furniture, design and make my own clothes. I could start with a few downloads in the right ballpark. The computer will help me to build on those and produce new ones along divergent lines. I will be able to guide it with verbal instructions. ‘A few more trees on the hill, and a cedar in the foreground just here, a bit bigger, and move it to the left a bit’. Why buy a mass produced design when you can have a completely personal design?

These advances are unlikely to make a big dent in conventional art sales. Professional artists will always retain an edge, maybe even by producing the best seeds for computer creativity. Instead, computer assisted and computer enhanced art will make our lives more artistically enriched, and ourselves more fulfilled as a result. We will be able to express our own personalities more effectively in our everyday environment, instead of just decorating it with a few expressions of someone else’s.

However, one factor that seems to be overrated is originality. Anyone can immediately come up with many original ideas in seconds. Stick a safety pin in an orange and tie a red string through the loop. There, can I have my Turner prize now? There is an infinitely large field to pick from and only a small number of ideas have ever been realized, so coming up with something from the infinite set that still hasn’t been thought of is easy, and therefore of little intrinsic value. Ideas are ten a penny. It is only when an idea is combined with judgement or skill in making it real that it becomes valuable. Here again, computers will be able to assist. Analyzing a great many existing pictures or works of art should give some clues as to what most people like and dislike. IBM’s new neural chip is the sort of development that will accelerate this trend enormously. Machines will learn how to decide whether a picture is likely to be attractive to people or not. It should be possible for a computer to automatically create new pictures in a particular style or taste by either recombining appropriate ideas, or just randomly mixing any ideas together and then filtering the new pictures according to ‘taste’.
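
The recombine-and-filter loop is the easy part to sketch; the hard part, the learned ‘taste’ model, is stubbed out here with a random score as a placeholder:

```python
import random

def recombine(ideas):
    # Mix fragments of existing works into a new candidate piece.
    return tuple(random.choice(ideas) for _ in range(3))

def taste_score(candidate):
    # Placeholder for a model trained on many human-rated pictures.
    return random.random()

ideas = ["cedar", "hillside", "red string", "safety pin", "orange", "mist"]
candidates = [recombine(ideas) for _ in range(1000)]
keepers = [c for c in candidates if taste_score(c) > 0.99]
print(f"{len(keepers)} of {len(candidates)} survive the taste filter")
```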

Augmented reality and other branches of cyberspace offer greater flexibility. Virtual objects and environments do not have to conform to laws of physics, so more elaborate and artistic structures are possible. Adding in 3D printing extends virtual graphics into the physical domain, but physics will only apply to the physical bits, and with future display technology, you might not easily be able to see where the physical stops and the virtual begins.

So, with machine assistance, human creativity will no longer be as limited by personal skill and talent. Anyone with a spark of creativity will be able to achieve great works, thanks to machine assistance. So long as you aren’t competitive about it, (someone else will always be able to do it better than you) your world will feel nicer, more friendly and personal, you’ll feel more in control, empowered, and your quality of life will improve. Instead of just making do with what you can buy, you’ll be able to decide what your world looks, sounds, feels, tastes and smells like, and design personality into anything you want too.

The future of bacteria

Bacteria have already taken the prize for the first synthetic organism. Craig Venter’s team claimed the first synthetic bacterium in 2010.

Bacteria are being genetically modified for a range of roles, such as converting materials for easier extraction (e.g. coal to gas, or concentrating elements in landfill sites to make extraction easier), making new food sources (alongside algae), carbon fixation, pollutant detection and other sensory roles, decorative, clothing or cosmetic roles based on color changing, special surface treatments, biodegradable construction or packing materials, self-organizing printing… There are many others, even ignoring all the military ones.

I have written many times on smart yogurt now, and it has to be the highlight of the bacterial future: one of the greatest hopes for human survival, as well as one of the greatest potential dangers. Here is an extract from a previous blog:

Progress is continuing to harness bacteria to make components of electronic circuits (after which the bacteria are dissolved to leave the electronics). Bacteria can also have genes added to emit light or electrical signals. They could later be enhanced so that as well as being able to fabricate electronic components, they could power them too. We might add various other features too, but eventually, we’re likely to end up with bacteria that contain electronics and can connect to other bacteria nearby that contain other electronics to make sophisticated circuits. We could obviously harness self-assembly and self-organisation, which are also progressing nicely. The result is that we will get smart bacteria, collectively making sophisticated, intelligent, conscious entities of a wide variety, with lots of sensory capability distributed over a wide range. Bacteria Sapiens.

I often talk about smart yogurt using such an approach as a key future computing solution. If it were to stay in a yogurt pot, it would be easy to control. But it won’t. A collective bacterial intelligence such as this could gain a global presence, and could exist on land, in the sea and in the air, maybe even in space. Giving it lots of different biological properties would let it colonize every niche. In fact, the first few generations of bacteria sapiens might be smart enough to design their own offspring. They could probably buy or gain access to equipment to fabricate them and release them to multiply. It might be impossible for humans to stop this once it gets to a certain point. Accidents happen, as do rogue regimes, terrorism and general mad-scientist mischief.

Transhumanists seem to think their goal is the default path for humanity, that transhumanism is inevitable. Well, it can’t easily happen without going first through transbacteria research stages, and that implies that we might well have to ask transbacteria for their consent before we can develop true transhumans.

Self-organizing printing is a likely future enhancement for 3D printing. If a 3D printer can print bacteria (onto the surface of another material being laid down, as an ingredient in a suspension used as the extrusion material itself, or even as a bacterial paste), and the bacteria can then generate or modify other materials, or use self-organisation principles to form special structures or patterns, then the range of objects that can be printed will extend greatly. In some cases, the bacteria may be involved in the construction and then die or be dissolved away.

Estimating IoT value? Count ALL the beans!

In this morning’s news:

http://www.telegraph.co.uk/technology/news/11043549/UK-funds-development-of-world-wide-web-for-machines.html

A £1.6M investment by the UK Technology Strategy Board in the Internet-of-Things HyperCat standard, which the article says will add £100Bn to the UK economy by 2020.

Gartner says that IoT has reached the peak of its hype cycle, and I agree. Connecting machines together, and especially adding networked sensors, will certainly increase technology capability across many areas of our lives, but the appeal is often overstated and the dangers often overlooked. Value should not be measured in purely financial terms either. If you value health, wealth and happiness, don’t just measure the wealth; we value other things too. It is too tempting just to count the most conspicuous beans. For IoT, which really just adds a layer of extra functionality onto an already technology-rich environment, that is rather like estimating the value of a chili con carne by counting the kidney beans in it.

The headline negatives of privacy and security have been addressed often enough that I don’t need to explore them much more here, but let’s look at a couple of typical examples from the news article. Allowing remotely controlled washing machines will obviously impact your personal choice of laundry scheduling. The many similar shifts of control of your life to other agencies will all add up. Another one: ‘motorists could benefit from cheaper insurance if their vehicles were constantly transmitting positioning data’. Really? Insurance companies won’t want to earn less, so motorists on average will give them at least as much profit as before. What will happen is that insurance companies will enforce driving styles and car maintenance regimes that reduce your likelihood of a claim, or use that data to avoid paying out in some cases. If you have to rigidly obey lots of rules all of the time, driving will become far less enjoyable. Having to remember to check the tyre pressures and oil level every two weeks on pain of having your insurance voided is not one of the beans listed in the article, but it is entirely analogous to the typical home insurance rule that all your windows must have locks, and that they must all be locked and the keys hidden out of sight before the insurer will pay up on a burglary.

Overall, IoT will add functionality, but it certainly will not always be used to improve our lives. Look at the way the web developed. Think about the cookies, the pop-ups, the tracking and the incessant virus protection updates needed because of the extra functions built into browsers. You didn’t want those; they were added to increase capability and revenue for the paying site owners, not for the non-paying browsers. IoT will be the same. Some things will make minor aspects of your life easier, but the price will be that you are far more controlled, with far less freedom, less privacy and less security. Most of the data collected for business use or to enhance your life will also be available to government and police. We see every day the nonsense of the statement that if you have done nothing wrong, then you have nothing to fear. If you buy all that home kit with energy monitoring and the like, how long before the data is hacked and you are put on militant environmentalist blacklists for leaving devices on standby? For every area where IoT saves you time or money or improves your control, there will be many others where it does the opposite, forcing you to do more security checks, spend more money on car, home and IoT maintenance, spend more time following administrative procedures and even follow health regimes enforced by government or insurance companies. IoT promises milk and honey, but will deliver them only as part of a much bigger and unwelcome lifestyle change. Sure, you can have a little more control, but only if you relinquish much more control elsewhere.

As IoT starts rolling out, these and many more issues will hit the press, and people will start to realise the downside. That will reduce the attractiveness of owning or installing such kit, or subscribing to services that use it, and the economic value actually realised will fall very significantly short of the hype. Yes, we could do it all and get the headline economic benefit, but the cost in greatly reduced quality of life is too high, so we won’t.

Counting the kidney beans in your chili is fine, but it won’t tell you how hot it is, and when you start eating it you may decide the beans just aren’t worth the pain.

I still agree that IoT can be a good thing, but the evidence of web implementation suggests we’re more likely to go through decades of abuse and grief before we get the promised benefits. Being honest at the outset about the true costs and lifestyle trade-offs will help people decide, and maybe we can get to the good times faster if that process leads to better controls and better implementation.

Ultra-simple computing: Part 2

Chip technology

My everyday PC uses an Intel Core i7-3770 processor running at 3.4GHz. It has 4 cores running 8 threads on 1.4 billion 22nm transistors, on just 160mm^2 of chip. It has an NVIDIA GeForce GTX660 graphics card and 16GB of main memory. It is OK most of the time, but although processor and memory utilisation rarely rise above 30%, its response is often far from instant.

Let me compare it briefly with my (subjectively, at the time I owned it) best-ever computer, my Macintosh IIfx, RIP, which I got in 1991, the computer on which I first documented both the active contact lens and text messaging, and on which I suppose I also started this project. The Mac IIfx ran a 68030 processor at 40MHz, with 273,000 transistors, 4MB of RAM and an 80MB hard drive. Every computer I’ve used since then has given me extra function at the expense of lower performance, wasted time and frustration.

Although its OS is stored on a 128GB solid state disk, my current PC takes several seconds longer to boot than my Macintosh IIfx did – that went from cold to fully operational in 14 seconds; yes, I timed it. On my PC today, clicking a browser icon and reaching the first page usually takes a few seconds. Opening a Word document took a couple of seconds back then. It still does now. Both computers gave real-time response to typing, and both featured occasional unexplained delays. I didn’t need a firewall or virus checker back then, but now I run tedious maintenance routines a few times every week. (The only virus I had before 2000 was nVIR, which came on the Mac II system disks.) I still don’t get many viruses, but the significant time I spend avoiding them has to be counted too.

Going back further still, to my first ever computer in 1981: it was an Apple II, with only 9000 transistors running at 2.5MHz and a piddling 32kB of memory. The OS was tiny. Nevertheless, on it I wrote my own spreadsheet, graphics programs, lens design programs, and an assortment of missile, aerodynamic and electromagnetic simulations. Using the same transistor density as the i7, you could fit 1000 such processors in a single square millimetre!

Of course some things are better now. My PC has amazing graphics and image processing capabilities, though I rarely make full use of them. It lets me browse the net (and see video ads). If I don’t mind telling Google who I am, I can also watch videos on YouTube, or I could tell the BBC or some other video provider who I am and watch theirs. I could theoretically play quite sophisticated computer games, but it is my work machine, so I don’t. I do use it as a music player and to show photos. But mostly, I use it to write, just like my Apple II and my Mac IIfx. Subjectively, it is about the same speed for those tasks. Graphics and video are the main things that differ.

I’m not suggesting going back to an Apple II or even a IIfx. However, using i7 chip technology, a 9000-transistor processor would run 1360 times faster than my old Apple II while taking up only 1/1000th of a square millimetre, and would still let me write documents and simulations. I could fit another 150,000 of them in the same chip space as the i7. Alternatively, I could have 5128 Mac IIfx equivalents, each running at 85 times the original speed, for a tiny fraction of the price. There are certainly a few promising trees in the forest that nobody seems to have barked up. As an interesting aside, that 22nm Apple II chip would be only about ten times bigger than a skin cell, probably less now, since my PC is already several months old.
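
For anyone who wants to check that arithmetic, here it is as a few lines of Python, using the figures quoted in this post rather than official vendor datasheets:

    # Figures as quoted above, not vendor datasheets.
    i7_transistors = 1.4e9       # Core i7-3770
    i7_area_mm2    = 160.0
    i7_clock_hz    = 3.4e9

    apple2_transistors = 9000    # my Apple II, as quoted
    apple2_clock_hz    = 2.5e6

    iifx_transistors = 273000    # Macintosh IIfx (68030)
    iifx_clock_hz    = 40e6

    density = i7_transistors / i7_area_mm2          # ~8.75M transistors per mm^2
    print(density / apple2_transistors)             # ~972 Apple II processors per mm^2
    print(i7_clock_hz / apple2_clock_hz)            # 1360x clock speed-up
    print(i7_transistors / apple2_transistors)      # ~155,000 on the whole i7 die
    print(i7_transistors / iifx_transistors)        # ~5128 Mac IIfx equivalents
    print(i7_clock_hz / iifx_clock_hz)              # 85x clock speed-up

Clock ratio is of course a crude proxy for real speed, but it is good enough for the order-of-magnitude point being made here.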

At the very least, that raises the question of what all this extra processing is needed for, and why there is still ever any noticeable delay in spite of it. Each of those earlier machines was perfectly adequate for everyday tasks such as typing or spreadsheeting. All the extra speed has an impact only on some things, and much of it is wasted by poor code. Some of the delays we had 20 and 30 years ago still affect us just as badly today.

The main point, though, is that if you can make thousands of processors on a standard-sized chip, you don’t have to run multitasking. Each task could have a processor all to itself.

The operating system currently runs programs to check all the processes that need attention, determine their priorities, schedule processing for them, and copy their data in and out of memory. That is not needed if each process can have its own dedicated processor and memory all the time. There are lots of ways of using basic physics to allocate processes to processors, relying on basic statistics to ensure that collisions rarely occur. No code is needed at all.
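
As a purely illustrative sketch (my numbers, not a design): suppose each new task simply grabbed one of a large pool of processors at random. The familiar birthday-problem arithmetic shows how rarely two tasks would clash, and a clash could simply trigger another random grab rather than invoking any scheduler code:

    def p_all_distinct(tasks, processors):
        # Exact probability that every task lands on a different processor
        # when each one picks uniformly at random (the birthday problem).
        p = 1.0
        for i in range(tasks):
            p *= (processors - i) / processors
        return p

    POOL = 150_000   # processor pool of roughly the size estimated above
    for k in (10, 100, 1000):
        print(k, 'tasks:', 1 - p_all_distinct(k, POOL), 'chance of any clash')

With a pool of 150,000 processors, even 100 simultaneous launches clash only about 3% of the time, and even with 1000, where some clash is near certain, only a handful of retries are expected across the whole pool. Statistics does the scheduling for free.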

An ultra-simple computer could therefore have a large pool of powerful, free processors, each with its own memory, allocated on demand using simple physical processes. (I will describe a few options for the basic physics processes later.) With no competition for memory or processing, a lot of delays would be eliminated too.