Category Archives: death

How nigh is the end?

“We’re doomed!” is a frequently recited observation. It is great fun predicting the end of the world and almost as much fun reading about it or watching documentaries telling us we’re doomed. So… just how doomed are we? Initial estimate: Maybe a bit doomed. Read on.

My 2012 blog addressed some of the possible extinction-level events that could affect us. I recently watched a Top 10 list of threats to our existence on TV and it was similar to most you’d read, with the same errors and omissions – nuclear war, global virus pandemic, terminator scenarios, solar storms, comet or asteroid strikes, alien invasions, zombie viruses, that sort of thing. I’d agree that nuclear war is still the biggest threat, so number 1, and a global pandemic of a highly infectious and lethal virus should still be number 2. I don’t even need to explain either of those; we all know why they are in 1st and 2nd place.

The TV list included a couple that shouldn’t be in there.

One inclusion was a mega-eruption of Yellowstone or another super-volcano. A full-sized Yellowstone mega-eruption would probably kill millions of people and destroy much of civilization across a large chunk of North America, but some of us don’t actually live in North America and quite a few of us might well survive pretty well, so although it would be quite annoying for Americans, it is hardly a TEOTWAWKI threat. It would have big effects elsewhere, just not extinction-level ones. For most of the world it would only cause short-term disruptions, such as economic turbulence; at worst it would start a few wars here and there as regions compete for control in the new world order.

Number 3 on their list was climate change, which is an annoyingly wrong, albeit popularly held, inclusion. The only climate change mechanism proposed for catastrophe is global warming, and the reason it’s called climate change now is that global warming stopped in 1998 and still hasn’t resumed 17 years and 9 months later, so that term has become too embarrassing for doom mongers to use. CO2 is a warming agent and emissions should be treated with reasonable caution, but the net warming contribution of all the various feedbacks adds up to far less than originally predicted and the climate models have almost all proven far too pessimistic. Any warming expected this century is very likely to be offset by reduction in solar activity, and if and when warming resumes towards the end of the century, we will long since have migrated to non-carbon energy sources, so there really isn’t a longer-term problem to worry about. With warming by 2100 pretty insignificant, and less than half a metre of sea level rise, I certainly don’t think climate change deserves to be on any list of threats of any consequence in the next century.

By including climate change and Yellowstone, the top 10 list missed two better candidates, and my first replacement for consideration might be the grey goo scenario. The grey goo scenario is that self-replicating nanobots manage to convert everything, including us, into a grey goo. Take away the silly images of tiny little metal robots cutting things up atom by atom and the laughable presentation of this vanishes. Replace those little bots with bacteria that include electronics, and are linked across their own cloud to their own hive AI that redesigns their DNA to allow them to survive in any niche they find by treating whatever is there as food. When existing bacteria find a niche they can’t exploit, the next generation adapts to it. That self-evolving smart bacteria scenario is rather more feasible, and still results in bacteria that can conquer any ecosystem they find. We would find ourselves unable to fight back and could be wiped out. This isn’t very likely, but it is feasible, could happen by accident or design on our way to transhumanism, and might deserve a place in the top ten threats.

However, grey goo is only one of the NBIC convergence risks we have already imagined (NBIC = Nano-Bio-Info-Cogno). NBIC is a rich seam for doom-seekers. In there you’ll find smart yogurt, smart bacteria, smart viruses, beacons, smart clouds, active skin, direct brain links, zombie viruses, even switching people off. Zombie viruses featured in the top ten TV show too, but they don’t really deserve their own category any more than many other NBIC derivatives. Anyway, that’s just a quick list of deliberate end-of-world mechanisms – there will be many more I forgot to include and many I haven’t even thought of yet. Then you have to multiply the list by 3. Any of these could also happen by accident, and any could also happen via unintended consequences of lack of understanding, which is rather different from an accident but just as serious. So basically, deliberate action, accidents and stupidity are three primary routes to the end of the world via technology. So instead of just the grey goo scenario, a far bigger collective threat is NBIC generally, and I’d add NBIC collectively into my top ten list, quite high up, maybe 3rd after nuclear war and global virus. AI still deserves to be a separate category of its own, and I’d put it next at 4th.

Another class of technology suitable for abuse is space tech. I once wrote about a solar wind deflector using high atmosphere reflection, and calculated it could melt a city in a few minutes. Under malicious automated control, that is capable of wiping us all out, but it doesn’t justify inclusion in the top ten. One that might is the deliberate deflection of a large asteroid to impact on us. If it makes it in at all, it would be at tenth place. It just isn’t very likely someone would do that.

One I am very tempted to include is drones. Little tiny ones, not the Predators, and not even the ones everyone seems worried about at the moment that can carry 2kg of explosives or anthrax into the midst of football crowds. Tiny drones are far harder to shoot down, and soon we will have a lot of them around. Size-wise, think of midges or fruit flies. They could be self-organizing into swarms, managed by rogue regimes or terrorist groups, or set to auto, terminator style. They could recharge quickly by solar during short breaks, and restock their payloads from secret supplies distributed with the swarm. They could be spread globally using the winds and oceans, so they don’t need a plane or missile delivery system that is easily intercepted. Tiny drones can’t carry much, but with nerve gas or viruses, they don’t have to. Defending against such a threat is easy if there is just one; you can swat it. If there is a small cloud of them, you could use a flamethrower. If the sky is full of them and much of the trees and ground infested, it would be extremely hard to wipe them out. So if they are well designed to pose an extinction-level threat, as MAD 2.0 perhaps, then this would be way up in the top ten too, at 5th.

Solar storms could wipe out our modern way of life by killing our IT. That itself would kill many people, via riots and fights for the last cans of beans and bottles of water. The most serious solar storms could be even worse. I’ll keep them in my list, at 6th place.

Global civil war could become an extinction level event, given human nature. We don’t have to go nuclear to kill a lot of people, and once society degrades to a certain level, well, we’ve all watched post-apocalypse movies or played the games. The few left would still fight with each other. I wrote previously about the Great Western War and how it might come about, and such a thing could easily spread globally. I’ll give this 7th place.

A large asteroid strike could happen too, or a comet. Ones capable of extinction level events shouldn’t hit for a while, because we think we know all the ones that could do that. So this goes well down the list at 8th.

Alien invasion is entirely possible and could happen at any time. We’ve been sending out radio signals for quite a while, so someone out there might have decided to come and see whether our place is nicer than theirs and take over. It hasn’t happened yet so it probably won’t, but then it doesn’t have to be very probable to be in the top ten. 9th will do.

High energy physics research has also been suggested as capable of wiping out our entire planet via exotic particle creation, but the smart people at CERN say it isn’t very likely. Actually, I wasn’t all that convinced or reassured, and we’ve only just started messing with real physics, so there is plenty of time left to increase the odds of problems. I have a spare place at number 10, so in it goes, with a totally guessed probability of physics research causing a problem once every 4000 years.

My top ten list for things likely to cause human extinction, or pretty darn close:

  1. Nuclear war
  2. Highly infectious and lethal virus pandemic
  3. NBIC – deliberate, accidental or lack of foresight (includes smart bacteria, zombie viruses, mind control etc)
  4. Artificial Intelligence, including but not limited to the Terminator scenario
  5. Autonomous Micro-Drones
  6. Solar storm
  7. Global civil war
  8. Comet or asteroid strike
  9. Alien Invasion
  10. Physics research

Not finished yet though. My title was how nigh is the end, not just what might cause it. It’s hard to assign probabilities to each one, but someone’s got to do it. So I’ll make an arbitrary wet-finger guess in a dark room wearing a blindfold, with no explanation of my reasoning to reduce arguments, but hey, that’s almost certainly still more accurate than most climate models, and some people actually believe those. I’m feeling particularly cheerful today so I’ll give my most optimistic assessment.

So, with probabilities of occurrence per year:

  1. Nuclear war:  0.5%
  2. Highly infectious and lethal virus pandemic: 0.4%
  3. NBIC – deliberate, accidental or lack of foresight (includes smart bacteria, zombie viruses, mind control etc): 0.35%
  4. Artificial Intelligence, including but not limited to the Terminator scenario: 0.25%
  5. Autonomous Micro-Drones: 0.2%
  6. Solar storm: 0.1%
  7. Global civil war: 0.1%
  8. Comet or asteroid strike: 0.05%
  9. Alien Invasion: 0.04%
  10. Physics research: 0.025%

I hope you agree those are all optimistic. There have been several near misses of number 1 in my lifetime, so my 0.5% could have been 2% or 3% given the current state of the world. Also, 0.25% per year means you’d only expect such a thing to happen every 4 centuries, so it is a very small chance indeed. However, let’s stick with them and add them up. The cumulative probability of the top ten is 2.015% per year. Let’s add another arbitrary 0.185% for all the risks that didn’t make it into the top ten, rounding the total up to a nice neat 2.2% per year.

Some of the ones above aren’t possible quite yet, and others will vary in probability from year to year, but I think that won’t change the overall guess much. If we take a 2.2% probability per year, we get an expectation value of 45.5 years for civilization’s remaining life expectancy. Expectation date for human extinction:

2015.5 + 45.5 years = 2061.

Obviously the probability distribution extends from now to eternity, but don’t get too optimistic, because on these figures there is currently only about a 15% chance of surviving past this century (the quick sketch below shows the arithmetic).
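For anyone who wants to check or adjust the figures, here is a minimal Python sketch of that arithmetic, using the guessed probabilities above; the dictionary labels and the constant-annual-hazard treatment are my own simplifications, not anything more rigorous.

```python
# Back-of-envelope check of the doom arithmetic above.
# All probabilities are the guessed figures from the list, not data.
annual_risks = {
    "Nuclear war": 0.005,
    "Virus pandemic": 0.004,
    "NBIC": 0.0035,
    "AI": 0.0025,
    "Micro-drones": 0.002,
    "Solar storm": 0.001,
    "Global civil war": 0.001,
    "Comet or asteroid strike": 0.0005,
    "Alien invasion": 0.0004,
    "Physics research": 0.00025,
}

top_ten = sum(annual_risks.values())            # 0.02015, i.e. 2.015% per year
total = top_ten + 0.00185                       # add the 'everything else' fudge -> 2.2%

# Treat extinction as a constant annual hazard:
expected_years = 1 / total                      # ~45.5 years
extinction_date = 2015.5 + expected_years       # ~2061

# Chance of making it to 2100 = (1 - p) raised to the number of years left
survive_2100 = (1 - total) ** (2100 - 2015.5)   # ~0.15

print(f"Top ten combined risk: {top_ten:.3%} per year")
print(f"Expected extinction date: {extinction_date:.0f}")
print(f"Chance of surviving past 2100: {survive_2100:.0%}")
```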

If you can think of good reasons why my figures are far too pessimistic, by all means make your own guesses, but make them honestly, with a fair and reasonable assessment of how the world looks socially, religiously, politically, the quality of our leaders, human nature etc, and then add them up. You might still be surprised how little time we have left.

I’ll revise my original outlook upwards from ‘a bit doomed’.

We’re reasonably doomed.

The future of beetles

Onto B then.

One of the first ‘facts’ I ever learned about nature was that there were a million species of beetle. In the Google age, we know that ‘scientists estimate there are between 4 and 8 million’. Well, still lots then.

Technology lets us control them. Beetles provide a nice platform to glue electronics onto so they tend to fall victim to cybernetics experiments. The important factor is that beetles come with a lot of built-in capability that is difficult or expensive to build using current technology. If they can be guided remotely by over-riding their own impulses or even misleading their sensors, then they can be used to take sensors into places that are otherwise hard to penetrate. This could be for finding trapped people after an earthquake, or getting a dab of nerve gas onto a president. The former certainly tends to be the favored official purpose, but on the other hand, the fashionable word in technology circles this year is ‘nefarious’. I’ve read it more in the last year than the previous 50 years, albeit I hadn’t learned to read for some of those. It’s a good word. Perhaps I just have a mad scientist brain, but almost all of the uses I can think of for remote-controlled beetles are nefarious.

The first properly publicized experiment was in 2009, though I suspect there were many unofficial experiments before then, and assorted YouTube videos and more recent experiments have followed since.

Big beetles make it easier to do experiments, since they can carry up to 20% of their body weight as payload and it is obviously easier to find and connect to things on a bigger insect, but once the techniques are well developed and miniaturization has integrated everything down to a single chip with low power consumption, we should expect great things.

For example, a cloud of redundant smart dust would make it easier to connect to various parts of a beetle just by getting it to take flight in the cloud. Bits of dust would stick to it and self-organisation principles and local positioning can then be used to arrange and identify it all nicely to enable control. This would allow large numbers of beetles to be processed and hijacked, ideal for mad scientists to be more time efficient. Some dust could be designed to burrow into the beetle to connect to inner parts, or into the brain, which obviously would please the mad scientists even more. Again, local positioning systems would be advantageous.

Then it gets more fun. A beetle has its own sensors, but signals from those could be enhanced or tweaked via cloud-based AI so that it can become a super-beetle. Beetles traditionally don’t have very large brains, so they can be added to remotely too. That doesn’t have to be using AI either. As we can also connect to other animals now, and some of those animals might have very useful instincts or skills, then why not connect a rat brain into the beetle? It would make a good team for exploring. The beetle can do the aerial maneuvers and the rat can control it once it lands, and we all know how good rats are at learning mazes. Our mad scientist friend might then swap over the management system to another creature with a more vindictive streak for the final assault and nerve gas delivery.

So, Coleoptera Nefarius then. That’s the cool new beetle on the block. And its nicer but underemployed twin Coleoptera Benignus I suppose.


Suspended animation and mind transfer as suicide alternatives

I have written about suicide before, but this time I want to take a different line of thought. Instead of looking at suicide per se, what about alternatives?

There are many motives for suicide but the most common is wanting to escape from a situation such as suffering intolerable pain or misery, which can arise from a huge range of causes. The victim looks at the potential futures available to them and in their analysis, the merits of remaining alive are less attractive than being dead.

The ‘being dead’ bit is not necessarily about a full ceasing of existence, but more about abdicating consciousness, with its implied sensory inputs, pain, anxiety, inner turmoil, or responsibility.

Last summer, a development in neuroscience offered the future potential to switch the brain off.

The researchers were aware that it may become an alternative to anesthetic, or even a means of avoiding boredom or fear. There are many situations where we want to temporarily suspend consciousness. Alcohol and drug abuse often arises from people using chemical means of doing so.

It seems to me that suicide offers a permanent version of the same, to be switched off forever, but with a key difference. In the anesthetic situation, normal life will resume with its associated problems. In suicide, it won’t. The problems are gone.

Suppose that people could get switched off for a very long time whilst being biologically maintained and housed somehow. Suppose it is long enough that any personal relationship issues will have vanished, that any debts, crimes or other legal issues are nullified, and that any pain or other health problems can be fixed, including fixing mental health issues and erasing intolerable memories if necessary. In many cases, that would be a suitable alternative to suicide. It offers the same escape from the problems, but with the advantage that a better life might follow some time far in the future.

These ideas have widely varying timescales for potential delivery, and there are numerous big issues, but I don’t see fundamental technology barriers here. Suspending the mind for as long as necessary might offer a reasonable alternative to suicide, at least in principle. There is no need to look at all the numerous surrounding issues though. Consider taking that general principle and adapting it a bit. Mid-century onwards, we’ll have direct brain links sufficiently developed to allow porting of the mind to a new body, an android one for example. Having a new identity and a new body and a properly working and sanitized ‘brain’ would satisfy many of these same goals and avoid many of the legal, environmental, financial and ethical issues surrounding indefinite suspension. The person could simply cease their unpleasant existence and start afresh with a better one. I think it would be fine to kill the old body after the successful transfer. Any legal associations with the previous existence could be nullified. It is just a damaged container that would have been destroyed anyway. Have it destroyed, along with all its problems, and move on.

Mid-century is a lot earlier than would be needed for any social issues to go away otherwise. If a suicide is considered because of relationship or family problems, those problems might otherwise linger for generations. Creating a true new identity essentially solves them, albeit at the high cost of losing any relationships that matter. Long prison sentences are cancelled by the biological death, and debts likewise. A new person appears, inheriting a mind, but one refreshed, potentially with the bad bits filtered out.

Such a future seems to be feasible technically, and I think it is also ethically feasible. Suicide is one-sided. Those remaining have to suffer the loss and pick up the pieces anyway, and they would be no worse off in this scenario. If they feel aggrieved that the person has somehow escaped the consequences of their actions, well, the person would have escaped anyway. But a life is saved and someone gets a second chance.



The future of drones – predators. No, not that one.

It is a sad fact of life that companies keep using the most useful terminology for things that don’t deserve it. Take the Apple Retina display, which makes it more difficult to find a suitable name for direct retinal displays that actually use the retina. Why can’t they be the ones called retina displays? Or the LED TV, where the LEDs are typically just LED back-lighting for an LCD display. That makes it hard to name TVs where each pixel is actually an LED. Or the Predator drone, which is definitely not the topic of this blog, where I will talk about predator drones that attack other drones.

I have written several times now on the dangers of drones. My most recent scare was realizing the potential for small drones carrying high-powered lasers and using cloud based face recognition to identify valuable targets in a crowd and blind them, using something like a Raspberry Pi as the main controller. All of that could be done tomorrow with components easily purchased on the net. A while ago I blogged that the Predators and Reapers are not the ones you need to worry about, so much as the little ones which can attack you in swarms.

This morning I was again considering terrorist uses for the micro-drones we’re now seeing. A 5cm drone with a networked camera and control could carry a needle infected with Ebola or HIV, or a drop of nerve toxin. A small swarm of tiny drones, each with a gram of explosive that detonates when it collides with a forehead, could kill as many people as a bomb.

We will soon have to defend against terrorist drones, and the tiniest drones give the most effective terror per dollar, so they are the most likely threat. The solution is quite simple, and nature solved it a long time ago. Mosquitoes and flies in my back garden get eaten by a range of predators. Frogs might get them if they come too close to the surface, but in the air, dragonflies are expert at catching them. Bats are good too. So to deal with threats from tiny drones, we could use predator drones to seek and destroy them. For bigger drones, we’d need bigger predators, and for very big ones, conventional anti-aircraft weapons become useful. In most cases, catching them in nets would work well. Nets are very effective against rotors. The use of nets doesn’t need such sophisticated control systems, and if the net can be held a reasonable distance from the predator, the predator won’t be destroyed if the micro-drone explodes. With a little more precise control, spraying solidifying foam onto the target drone could also immobilize it, and some foams could help disperse small explosions or contain their lethal payloads. Spiders also provide inspiration here, as many species wrap their victims in silk to immobilize them. A single predator could catch and immobilize many victims. Such a defense system ought to be feasible.

The main problem remains. What do we call predator drones now that the most useful name has been trademarked for a particular model?


The future of terminators

The Terminator films were important in making people understand that AI and machine consciousness will not necessarily be a good thing. The terminator scenario has stuck in our terminology ever since.

There is absolutely no reason to assume that a super-smart machine will be hostile to us. There are even some reasons to believe it would probably want to be friends. Smarter-than-man machines could catapult us into a semi-utopian era of singularity level development to conquer disease and poverty and help us live comfortably alongside a healthier environment. Could.

But just because it doesn’t have to be bad, that doesn’t mean it can’t be. You don’t have to be bad but sometimes you are.

It is also the case that even if it means us no harm, we could just happen to be in the way when it wants to do something, and it might not care enough to protect us.

Asimov’s laws of robotics are irrelevant. Any machine smart enough to be a terminator-style threat would presumably take little notice of rules it has been given by what it may consider a highly inferior species. The ants in your back garden have rules to govern their colony and soldier ants trained to deal with invader threats to enforce territorial rules. How much do you consider them when you mow the lawn or rearrange the borders or build an extension?

These arguments are put in debates every day now.

There are, however, a few points that are less often discussed.

Humans are not always good; indeed quite a lot of people seem to want to destroy everything most of us want to protect. Given access to super-smart machines, they could design more effective means to do so. The machines might be very benign, wanting nothing more than to help mankind as far as they possibly can, but be misled into working for such people, believing in their architected isolation that such projects are for the benefit of humanity. (The machines might be extremely smart, but may have existed since their inception in a rigorously constructed knowledge environment. To them, that might be the entire world, and we might be introduced as a new threat that needs to be dealt with.) So even benign AI could be an existential threat when it works for the wrong people. The smartest people can sometimes be very naive. Perhaps some smart machines could be deliberately designed to be so.

I speculated ages ago about what mad scientists or mad AIs could do in terms of future WMDs.

Smart machines might be deliberately built for benign purposes and turn rogue later, or they may be built with potential for harm designed in, for military purposes. These might destroy only enemies, but you might be that enemy. Others might enjoy the fun and turn on their friends when enemies run short. Emotions might be important in smart machines just as they are in us, but we shouldn’t assume they will be the same emotions or be wired the same way.

Smart machines may want to reproduce. I used this as the core storyline in my sci-fi book. They may have offspring and with the best intentions of their parent AIs, the new generation might decide not to do as they’re told. Again, in human terms, a highly familiar story that goes back thousands of years.

In the Terminator film, the problem is a military network that becomes self-aware and goes rogue. I don’t believe digital IT can become conscious, but I do believe reconfigurable analog adaptive neural networks could. The cloud is digital today, but it won’t stay that way. A lot of analog devices will become part of it. I have argued before how new self-organising approaches to data gathering might well supersede big data as the foundations of networked intelligence gathering. Much of this could be in the analog domain and much could be neural. Neural chips are already being built.

It doesn’t have to be a military network that becomes the troublemaker. I suggested a long time ago that ‘innocent’ student pranks from somewhere like MIT could be the source. Some smart students from various departments could collaborate to hijack lots of networked kit and see if they can make a conscious machine. Their algorithms or techniques don’t have to be very efficient if they can hijack enough kit. There is a possibility that such an effort could succeed if the right bits are connected into the cloud and accessible via sloppy security, and the ground-up data industry might well satisfy that prerequisite soon.

Self-organisation technology will make possible extremely effective combat drones.

Terminators also don’t have to be machines. They could be organic, products of synthetic biology. My own contribution here is smart yogurt.

With IT and biology rapidly converging via nanotech, there will be many ways hybrids could be designed, some of which could adapt and evolve to fill different niches or to evade efforts to find or harm them. Various grey goo scenarios can be constructed that don’t have any miniature metal robots dismantling things. Obviously natural viruses or bacteria could also be genetically modified to make weapons that could kill many people – they already have been. Some could result from seemingly innocent R&D by smart machines.

I dealt a while back with the potential to make zombies too, remotely controlling people – alive or dead. Zombies are feasible this century too.

A different kind of terminator threat arises if groups of people are linked at consciousness level to produce super-intelligences. We will have direct brain links mid-century, so much of the second half of the century may be spent in a mental arms race. As I wrote in my blog about the Great Western War, some of the groups will be large and won’t like each other. The rest of us could be wiped out in the crossfire as they battle for dominance. Some people could be linked deeply into powerful machines or networks, and there are no real limits on extent or scope. Such groups could have a truly global presence in networks while remaining superficially human.

Transhumans could be a threat to normal un-enhanced humans too. While some transhumanists are very nice people, some are not, and would consider elimination of ordinary humans a price worth paying to achieve transhumanism. Transhuman doesn’t mean better human, it just means humans with greater capability. A transhuman Hitler could do a lot of harm, but then again so could ordinary everyday transhumanists that are just arrogant or selfish, which is sadly a much bigger subset.

I collated these various potential future cohabitants of our planet in an earlier piece.

So there are numerous ways that smart machines could end up as a threat and quite a lot of terminators that don’t need smart machines.

Outcomes from a terminator scenario range from local problems with a few casualties all the way to total extinction, but I think we are still too focused on the death aspect. There are worse fates. I’d rather be killed than converted while still conscious into one of 7 billion zombies and that is one of the potential outcomes too, as is enslavement by some mad scientist.


The future of euthanasia and suicide

Another extract from You Tomorrow, on a topic that is very much in debate at the moment. It is an area that needs wise legislation, but I don’t have much confidence that we’ll get it. I’ll highlight some of the questions here, but since I don’t have many answers, I’ll illustrate why: they are hard questions.

Sadly, some people feel the need to end their own lives and an increasing number are asking for the legal right to assisted suicide. Euthanasia is increasingly in debate now too, with some health service practices bordering on it, some would say even crossing the boundary. Suicide and euthanasia are inextricably linked, mainly because it is impossible to know for certain what is in someone’s mind, and that is the basis of the well-known slippery slope from assisted suicide to euthanasia.

The stages of progress are reasonably clear. Is the suicide request a genuine personal decision, originating from that person’s free thoughts, based solely on their own interests? Or is it a personal decision influenced by the interests of others, real or imagined? Or is it a personal decision made after pressure from friends and relatives who want the person to die peacefully rather than suffer, with the best possible interests of the person in mind? In which case, who first raised the possibility of suicide as a potential way out? Or is it a personal decision made after pressure applied because relatives want rid of the person, perhaps over-eager to inherit or wanting to end their efforts to care for them? Guilt can be a powerful force and can be applied very subtly indeed over a period of time.

If the person is losing their ability to communicate a little, perhaps a friend or relative may help interpret their wishes to a doctor. From here, it is a matter of degree of communication skill loss and gradual increase of the part relatives play in guiding the doctor’s opinion of whether the person genuinely wants to die. Eventually, the person might not be directly consulted because their relatives can persuade a doctor that they really want to die but can’t say so effectively. Not much further along the path, people make their minds up what is in the best interests of another person as far as living or dying goes. It is a smooth path between these many small steps from genuine suicide to euthanasia. And that all ignores all the impact of possible alternatives such as pain relief, welfare, special care etc. Interestingly, the health services seem to be moving down the euthanasia route far faster than the above steps would suggest, skipping some of them and going straight to the ‘doctor knows best’ step.

Once the state starts to get involved in deciding cases, even by abdicating the decision to doctors, it is a long but easy road to Logan’s Run, where death is compulsory at a certain age, or a certain care cost, or when you’ve used up your lifetime carbon credit allocation.

There are sometimes very clear cases where someone obviously able to make up their own mind has made a thoroughly thought-through decision to end their life because of ongoing pain, poor quality of life and no hope of any cure or recovery, the only prospect being worsening condition leading to an undignified death. Some people would argue with their decision to die, others would consider that they should be permitted to do so in such clear circumstances, without any fear for their friends or relatives being prosecuted.

There are rarely razor-sharp lines between cases; situations can get blurred sometimes because of the complexity of individual lives, and because judges have their own personalities and differ slightly in their judgements. There is inevitably another case slightly further down the line that seems reasonable to a particular judge in the circumstances, and once that point is passed, and accepted by the courts, other cases with slightly less-defined circumstances can use it to help argue theirs. This is the path by which most laws evolve. They start in parliament and then after implementation, case law and a gradually changing public mind-set or even the additive effects of judges’ ideologies gradually evolve them into something quite different.

It seems likely given current trends and pressures that one day, we will accept suicide, and then we may facilitate it. Then, if we are not careful, it may evolve into euthanasia by a hundred small but apparently reasonable steps, and if we don’t stop it in time, one day we might even have a system like the one in the film ‘Logan’s Run’.

Suicide and euthanasia are certainly gradually becoming less shocking to people, and we should expect that in the far future both will become more accepted. If you doubt that society can change its attitudes quickly, consider that it actually only takes about 30 years to get a full reversal. Think of how long it took for homosexuality to change from condemned to fashionable, or how long abortion took to go from being something a woman would often be condemned for to something that is now a woman’s right to choose. Each of these took only 3 decades for a full 180 degree turnaround. Attitudes to the environment switched from mad panic about a coming ice age to mad panic about global warming in just 3 decades too, and are already switching back again towards ice age panic. If the turn in attitudes to suicide started 10 years ago, then we may have about 20 years left before it is widely accepted as a basic right that is only questioned by bigots. But social change aside, the technology will make the whole area much more interesting.

As I argued earlier, the very long term (2050 and beyond) will bring technology that allows people to link their brains to the machine world, perhaps using nanotech implants connected to each synapse to relay brain activity to a high speed neural replica hosted by a computer. This will have profound implications for suicide too. When this technology has matured, it will allow people to do wonderful things such as using machine sensors as extensions to their own capabilities. They will be able to use android bodies to move around and experience distant places and activities as if they were there in person. For people who feel compelled to end it all because of disability, pain or suffering, an alternative where they could effectively upload their mind into an android might be attractive. Their quality of life could improve dramatically at least in terms of capability. We might expect that pain and suffering could be dealt with much more effectively too if we have a direct link into the brain to control the way sensations are dealt with. So if that technology does progress as I expect, then we might see a big drop in the number of people who want to die.

But the technology options don’t stop there. If a person has a highly enhanced replica of their own brain/mind in the machine world, people will begin to ask why they need the original. The machine world could give them greater sensory ability, greater physical ability, and greater mental ability. Smarter, with better memory, more and better senses, connected to all the world’s knowledge via the net, able effectively to wander around the world at the speed of light, connected directly to other people’s minds when they want, and doing all this without fear of ageing, ill health or pain, this would seem a very attractive lifestyle. And it will become possible this century, at low enough cost for anyone to afford.

What of suicide then? It might not seem so important to keep the original body, especially if it is worn out or defective, so even without any pain and suffering, some people might decide to dispose of their body and carry on their lives without it. Partial suicide might become possible. Aside from any religious issues, this would be a hugely significant secular ethical issue. Updating the debate today, should people be permitted to opt out of physical existence, only keeping an electronic copy of their mind, timesharing android bodies when they need to enter the physical world? Should their families and friends be able to rebuild their loved ones electronically if they die accidentally? If so, should people be able to rebuild several versions, each representing the deceased’s different life stages, or just the final version, which may have been ill or in decline?

And then the ethical questions get even trickier. If it is possible to replicate the brain’s structure and so capture the mind, will people start to build ‘restore points’, where they make a permanent record of the state of their self at a given moment? If they get older and decide they could have run their lives better, they might be able to start again from any restore point. If the person exists in cyberspace and has disposed of their physical body, what about ownership of their estate? What about working and living in cyberspace? Will people get jobs? Will they live in virtual towns like the Sims? Indeed, in the same time frame, AI will have caught up and superseded humans in ability. Maybe Sims will get bored in their virtual worlds and want to end it all by migrating to the real world. Maybe they could swap bodies with someone coming the other way?

What will the State do when it is possible to reduce costs and environmental impact by migrating people into the virtual universe? Will it then become socially and politically acceptable, even compulsory when someone reaches a given age or costs too much for health care?

So perhaps suicide has an interesting future. It might eventually decline, and then later increase again, but in many very different forms, becoming a whole range of partial suicide options. But the scariest possibility is that people may not be able to die completely. If their body is an irrelevance, and there are many restore points from which they can be recovered, friends, family, or even the state might keep them ‘alive’ as long as they are useful. And depending on the law, they might even become a form of slave labour, their minds used for information processing or creativity whether they wish it or not. It has often truly been noted that there are worse fates than death.

The future of death

This one is a cut and paste from my book You Tomorrow.

Although age-related decline can be postponed significantly, it will eventually come. But that is just biological decline. In a few decades, people will have their brains linked to the machine world and much of their mind will be online, and that opens up the strong likelihood that death is not inevitable, and in fact anyone who expects to live past 2070 biologically (and rich people who can get past 2050) shouldn’t need to face death of their mind. Their bodies will eventually die, but their minds can live on, and an android body will replace the biological one they’ve lost.

Death used to be one of the great certainties of life, along with taxes. But unless someone under 35 now is unfortunate enough to die early from accident or disease, they have a good chance of not dying at all. Let’s explore that.

Genetics and other biotechnology will work with advanced materials technology and nanotechnology to limit and even undo damage caused by disease and age, keeping us young for longer, eventually perhaps forever. It remains to be seen how far we get with that vision in the next century, but we can certainly expect some progress in that area. We won’t get biological immortality for a good while, but if you can move into a high quality android body, who cares?

With this combination of technologies locked together with IT in a positive feedback loop, we will certainly eventually develop the technology to enable a direct link between the human brain and the machine, i.e. the descendants of today’s computers. On the computer side, neural networks are already the routine approach to many problems and are based on many of the same principles that neurons in the brain use. As this field develops, we will be able to make a good emulation of biological neurons. As it develops further, it ought to be possible on a sufficiently sophisticated computer to make a full emulation of a whole brain. Progress is already happening in this direction.

Meanwhile, on the human side, nanotechnology and biotechnology will also converge so that we will have the capability to link synthetic technology directly to individual neurons in the brain. We don’t know for certain that this is possible, but it may be possible to measure the behaviour of each individual neuron using this technology and to signal this behaviour to the brain emulation running in the computer, which could then emulate it. Other sensors could similarly measure and allow emulation of the many chemical signalling mechanisms that are used in the brain. The computer could thus produce an almost perfect electronic equivalent of the person’s brain, neuron by neuron. This gives us two things.

Firstly, by doing this, we would have a ‘backup’ copy of the person’s brain, so that in principle, they can carry on thinking, and effectively living, long after their biological body and brain has died. At this point we could claim effective immortality. Secondly, we have a two way link between the brain and the computer which allows thought to be executed on either platform and to be signalled between them.

There is an important difference between the brain and the computer already that we may be able to capitalise on. In the brain’s neurons, signals travel at hundreds of metres per second. In a free space optical connection, they travel at hundreds of millions of metres per second, millions of times faster. Switching speeds are similarly faster in electronics. In the brain, cells are also very large compared to the electronic components of the future, so we may be able to reduce the distances over which the signals have to travel by another factor of 100 or more. But this assumes we take an almost exact representation of brain layout. We might be able to do much better than this. In the brain, we don’t appear to use all the neurons (some are either redundant or have an unknown purpose), and those that we do use in a particular process are often in groups that are far apart. Reconfigurable hardware will be the norm in the 21st century and we may be able to optimize the structure for each type of thought process. Rearranging the useful neurons into more optimal structures should give another huge gain.
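A rough back-of-envelope version of that speed argument, as a minimal Python sketch; every factor here is an illustrative assumption based on the rounded figures above, not a measurement:

```python
# Crude speed-up estimate for an electronic brain emulation vs biology.
# All factors are illustrative assumptions taken from the rough figures above.
neuron_signal_speed = 1e2        # ~hundreds of metres per second in neurons
optical_signal_speed = 3e8       # free-space optical links, metres per second

signal_speed_gain = optical_signal_speed / neuron_signal_speed   # ~3 million x
distance_gain = 100              # smaller components -> shorter signal paths (assumed)

# If signalling delay dominates thinking speed, the raw gain is already huge:
raw_gain = signal_speed_gain * distance_gain
print(f"raw gain from speed and distance alone: {raw_gain:.0e}")  # ~3e+08

# Faster switching and rearranging the useful 'neurons' into optimal structures
# (the 'another huge gain' mentioned above) would multiply this further,
# which is where 'billions of times faster' comes from.
```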

This means that our electronic emulation of the brain should behave in a similar way but much faster – maybe billions of times faster! It may be able to process an entire lifetime’s thoughts in a second or two. But even then there are several further opportunities for vast improvement. The brain is limited in size by a variety of biological constraints. Even if there were more space available, it could not be made much more efficient by making it larger, because the need for cooling, energy and oxygen supply would take up ever more space and make distances between processors larger. In the computer, these constraints are much more easily addressable, so we could add large numbers of additional neurons to give more intelligence. In the brain, many learning processes stop soon after birth or in childhood. There need be no such constraints in computer emulations, so we could learn new skills as easily as in our infancy. And best of all, the computer is not limited by the memory of a single brain – it has access to all the world’s information and knowledge, and huge amounts of processing outside the brain emulation. Our electronic brain could be literally the size of the planet – the whole internet and all the processing and storage connected to it.

With all these advances, the computer emulation of the brain could be many orders of magnitude superior to its organic equivalent, and yet it might be connected in real time to the original. We would have an effective brain extension in cyberspace, one that gives us immeasurably improved performance and intelligence. Most of our thoughts might happen in the machine world, and because of the direct link, we might experience them as if they had occurred inside our head.

Our brains are in some ways equivalent in nature to how computers were before the age of the internet. They are certainly useful, but communication between them is slow and inefficient. However, when our brains are directly connected to machines, and those machines are networked, then everyone else’s brains are also part of that network, so we have a global network of people’s brains, all connected together, with all the computers too.

So we may soon eradicate death. By the time today’s children are due to die, they will have been using brain extensions for many years, and backups will be taken for granted. Death need not be traumatic for our relatives. They will soon get used to us walking around in an android body. Funerals will be much more fun as the key participant makes a speech about what they are expecting from their new life. Biological death might still be unpleasant, but it need no longer be a career barrier.

In terms of timescales, rich people might have this capability by 2050 and the rest of us some time before 2070. Your life expectancy biologically is increasing every year, so even if you are over 35, you have a pretty good chance of surviving long enough to gain. Half the people alive today are under 35 and will almost certainly not die fully. Many more are under 50 and some of them will live on electronically too. If you are over 50, the chances are that you will be the last generation of your family ever to have a full death.

As a side-note, there are more conventional ways of achieving immortality. Some Egyptian pharaohs are remembered because of their great pyramids. A few philosophers, artists, engineers and scientists have left such great works that they are remembered millennia later. And of course, on a smaller scale, for the rest of us, making an impression on those around us keeps our memory going a few generations. Writing a book immortalises your words. And you may have a multimedia headstone on your grave, or one that at least links into augmented reality to bring up your old web page or social networking site profile. But frankly, I am with Woody Allen on this one: “I don’t want to achieve immortality through my work; I want to achieve immortality through not dying”. I just hope the technology arrives early enough.

Road deaths v hospital hygiene and errors

Here is a slide I just made for a road safety conference. All the figures I used came from government sources. In road safety we use the argument that a life is worth any spend. We might be able to shave 10% off road deaths if we try hard, but we’d save 30 times more lives if we could reduce NHS errors and improve hospital hygiene by just 10%.

road safety v NHS

Your most likely cause of death is being switched off

This one’s short and sweet.

The majority of you reading this blog live in the USA, UK, Canada or Australia. More than half of you are under 40.

That means your natural life expectancy is over 85, so statistically, your body will probably live until after 2060.

By then, electronic mind enhancement will probably mean that most of your mind runs on external electronics, not in your brain, so your mind won’t die when your body does. You’ll just need to find a new body, probably an android, for those times when you aren’t content being on the net. Most of us identify ourselves mainly as our mind, and would think of ourselves as still alive if our mind carries on as if nothing much has happened, which is likely.

Electronic immortality is not true immortality though. Your mind can only survive on the net as long as it is supported by the infrastructure. That will be controlled by others. Future technology will likely be able to defend against asteroid strikes, power surges caused by solar storms and so on, so accidental death seems unlikely for hundreds of years. However, since minds supported on the net need energy to keep running, need electronics to be provided and maintained, and will want to make trips into the ‘real’ world, or even live there a lot of the time, they will have a significant resource footprint. They will probably not be considered as valuable as other people whose bodies are still alive. In fact they might be considered as competition – for jobs, resources, space, housing, energy… They may even be seen as easy targets for future cyber-terrorists.

So, it seems quite likely, maybe even inevitable, that life limits will be imposed on the vast majority of you. At some point you will simply be switched off. There might be some prioritization, competitions, lotteries or other selection mechanism, but only some will benefit from it.

Since you are unlikely to die when your body ceases to work, your most likely cause of death is therefore to be switched off. Sorry to break that to you.

Future human evolution

I’ve done patches of work on this topic frequently over the last 20 years. It usually features in my books at some point too, but it’s always good to look afresh at anything. Sometimes you see something you didn’t see last time.

Some of the potential future is pretty obvious. I use the word potential, because there are usually choices to be made, regulations that may or may not get in the way, or many other reasons we could divert from the main road or even get blocked completely.

We’ve been learning genetics now for a long time, with a few key breakthroughs. It is certain that our understanding will increase, less certain how far people will be permitted to exploit the potential here in any given time frame. But let’s take a good example to learn a key message first. In IVF, we can filter out embryos that have the ‘wrong’ genes, and use their sibling embryos instead. Few people have a problem with that. At the same time, pregnant women may choose an abortion if they don’t want a child when they discover it is the wrong gender, but in the UK at least, that is illegal. The moral and ethical values of our society are on a random walk though, changing direction frequently. The social assignment of right and wrong can reverse completely in just 30 years. In this example, we saw a complete reversal of attitudes to abortion itself within 30 years, so who is to say we won’t see reversal on the attitude to abortion due to gender? It is unwise to expect that future generations will have the same value sets. In fact, it is highly unlikely that they will.

That lesson likely applies to many technology developments and quite a lot of social ones – such as euthanasia and assisted suicide, both already well into their attitude reversal. At some point, even if something is distasteful to current attitudes, it is pretty likely to be legalized eventually, and hard to ban once the door is opened. There will always be another special case that opens the door a little further. So we should assume that we may eventually use genetics to its full capability, even if it is temporarily blocked for a few decades along the way. The same goes for other biotech, nanotech, IT, AI and any other transhuman enhancements that might come down the road.

So, where can we go in the future? What sorts of splits can we expect in the future human evolution path? It certainly won’t remain as just plain old homo sapiens.

I drew this evolution path a long time ago in the mid 1990s:

human evolution 1

It was clear even then that we could connect external IT to the nervous system, eventually the brain, and this would lead to IT-enhanced senses, memory, processing, higher intelligence, hence homo cyberneticus. (No point in having had to suffer Latin at school if you aren’t allowed to get your own back on it later). Meanwhile, genetic enhancement and optimization of selected features would lead to homo optimus. Converging these two – why should you have to choose, why not have a perfect body and an enhanced mind? – you get homo hybridus. Meanwhile, in the robots and AI world, machine intelligence is increasing, and eventually we get the first self-aware AI/robot (it makes little sense to separate the two, since networked AI can easily be connected to a machine such as a robot), and this has its own evolution path towards a rich diversity of different kinds of AI and robots, robotus multitudinus. Since both the AI world and the human world could be networked to the same network, it is then easy to see how they could converge, to give homo machinus. This future transhuman would have any of the abilities of humans and machines at its disposal, and eventually the ability to network minds into a shared consciousness. A lot of ordinary conventional humans would remain, but with safe upgrades available, I called them homo sapiens ludditus. As they watch their neighbors getting all the best jobs, winning at all the sports, buying everything, and getting the hottest dates too, many would be tempted to accept the upgrades and homo sapiens might gradually fizzle out.

My future evolution timeline stayed like that for several years. Then in the early 2000s I updated it to include later ideas:

human evolution 2

I realized that we could still add AI into computer games long after it becomes comparable with human intelligence, so games like EA’s The Sims might evolve to allow entire civilizations living within a computer game, each aware of their existence, each running just as real a life as you and I. It is perhaps unlikely that we would allow children any time soon to control fully sentient people within a computer game, acting as some sort of a god to them, but who knows, future people will argue that they’re not really real people so it’s OK. Anyway, you could employ them in the game to do real knowledge work, and make money, like slaves. But since you’re nice, you might do an incentive program for them that lets them buy their freedom if they do well, letting them migrate into an android. They could even carry on living in their Sims home and still wander round in our world too.

Emigration from computer games into our world could be high, but the reverse is also possible. If the mind is connected well enough, and enhanced so far by external IT that almost all of it runs on the IT instead of in the brain, then when your body dies, your mind would carry on living. It could live in any world, real or fantasy, or move freely between them. (As I explained in my last blog, it would also be able to travel in time, subject to certain very expensive infrastructural requirements.) As well as migrants coming via the electronic immortality route, it is likely that some people who are unhappy in the real world might prefer to end it all and migrate their minds into a virtual world where they might be happy. As an alternative to suicide, I can imagine that would be a popular route. If they feel better later, they could even come back, using an android. So we’d have an interesting future with lots of variants of people, AI, and computer game and fantasy characters migrating among various real and imaginary worlds.

But it doesn’t stop there. Meanwhile, back in the biotech labs, progress is continuing to harness bacteria to make components of electronic circuits (after which the bacteria are dissolved to leave the electronics). Bacteria can also have genes added to emit light or electrical signals. They could later be enhanced so that as well as being able to fabricate electronic components, they could power them too. We might add various other features too, but eventually, we’re likely to end up with bacteria that contain electronics and can connect to other bacteria nearby that contain other electronics to make sophisticated circuits. We could obviously harness self-assembly and self-organisation, which are also progressing nicely. The result is that we will get smart bacteria, collectively making sophisticated, intelligent, conscious entities of a wide variety, with lots of sensory capability distributed over a wide range. Bacteria Sapiens.

I often talk about smart yogurt using such an approach as a key future computing solution. If it were to stay in a yogurt pot, it would be easy to control. But it won’t. A collective bacterial intelligence such as this could gain a global presence, and could exist in land, sea and air, maybe even in space. Allowing lots of different biological properties could allow colonization of every niche. In fact, the first few generations of bacteria sapiens might be smart enough to design their own offspring. They could probably buy or gain access to equipment to fabricate them and release them to multiply. It might be impossible for humans to stop this once it gets to a certain point. Accidents happen, as do rogue regimes, terrorism and general mad-scientist type mischief.

And meanwhile, we’ll also be modifying nature. We’ll be genetically enhancing a wide range of organisms, bringing some back from extinction, creating new ones, adding new features, changing even some of the basic mechanisms by which nature works in some cases. We might even create new kinds of DNA or develop substitutes with enhanced capability. We may change nature’s evolution hugely. With a mix of old and new and modified, nature evolves nicely into Gaia Sapiens.

We’re not finished with the evolution chart though. Here is the next one:

human evolution 3

Just one thing is added. Homo zombius. I realized eventually that the sci-fi ideas of zombies being created by viruses could be entirely feasible. A few viruses, bacteria and other parasites can affect the brains of the victims and change their behaviour to harness them for their own life cycle.


Bacteria sapiens could be highly versatile. It could make virus variants if need be. It could evolve itself to be able to live in our bodies, maybe penetrate our brains. Bacteria sapiens could make tiny components that connect to brain cells and intercept signals within our brains, or put signals back in. It could read our thoughts, and then control our thoughts. It could essentially convert people into remote controlled robots, or zombies as we usually call them. They could even control muscles directly to a point, so even if the zombie is decapitated, it could carry on for a short while. I used that as part of my storyline in Space Anchor. If future humans have widespread availability of cordless electricity, as they might, then it is far-fetched but possible that headless zombies could wander around for ages, using the bacterial sensors to navigate. Homo zombius would be mankind enslaved by bacteria. Hopefully just a few people, but it could be everyone if we lose the battle. Think how difficult a war against bacteria would be, especially if they can penetrate anyone’s brain and intercept thoughts. The Terminator films look a lot less scary when you compare the Terminator with the real potential of smart yogurt.

Bacteria sapiens might also need to be consulted when humans plan any transhuman upgrades. If they don’t consent, we might not be able to do other transhuman stuff. Transhumans might only be possible if transbacteria allow it.

Not done yet. I wrote a couple of weeks ago about fairies. I suggested fairies are entirely feasible future variants that would be ideally suited to space travel.

They’d also have lots of environmental advantages as well as most other things from the transhuman library. So I think they’re inevitable. So we should add fairies to the future timeline. We need a revised timeline and they certainly deserve their own branch. But I haven’t drawn it yet, hence this blog as an excuse. Before I do and finish this, what else needs to go on it?

Well, time travel in cyberspace is feasible and attractive beyond 2075. It’s not the proper real world time travel that isn’t permitted by physics, but it could feel just like that to those involved, and it could go further than you might think. It certainly will have some effects in the real world, because some of the active members of the society beyond 2075 might be involved in it. It certainly changes the future evolution timeline if people can essentially migrate from one era to another (there are some very strong caveats applicable here that I tried to explain in the blog, so please don’t misquote me as a nutter – I haven’t forgotten basic physics and logic, I’m just suggesting a feasible implementation of cyberspace that would allow time travel within it. It is really a cyberspace bubble that intersects with the real world at the real time front so doesn’t cause any physics problems, but at that intersection, its users can interact fully with the real world and their cultural experiences of time travel are therefore significant to others outside it.)

What else? OK, well there is a very significant community (many millions of people) that engages in all sorts of fantasy in shared on-line worlds, chat rooms and other forums. Fairies, elves, assorted spirits, assorted gods, dwarves, vampires, werewolves, assorted furry animals, assorted aliens, dolls, living statues, mannequins, remote controlled people, assorted inanimate but living objects, plants and of course assorted robot/android variants are just some of those that already exist in principle; I’m sure I’ve forgotten some here, and anyway many more are invented every year, so an exhaustive list would quickly become out of date. In most cases, many people already role-play these with a great deal of conviction and imagination, not just in standalone games, but in communities, with rich cultures, back-stories and story-lines. So we know there is strong demand, and we’re only waiting for their implementation once technology catches up, which it certainly will.

Biotech can do a lot, and nanotech and IT can add greatly to that. If you can design any kind of body with almost any kind of properties and constraints and abilities, and add any kind of IT and sensing and networking and sharing and external links for control and access and duplication, we will have an extremely rich diversity of future forms with an infinite variety of subcultures, cross-fertilization, migration and transformation. In fact, I can’t add just a few branches to my timeline. I need millions. So instead I will just lump all these extras into a huge collected category that allows almost anything, called Homo Whateverus.

So, here is the future of human (and associates) evolution, for the next 150 years. A few possible cross-links are omitted for clarity.


I won’t be around to watch it all happen. But a lot of you will.