Category Archives: longevity

How nigh is the end?

“We’re doomed!” is a frequently recited observation. It is great fun predicting the end of the world and almost as much fun reading about it or watching documentaries telling us we’re doomed. So… just how doomed are we? Initial estimate: Maybe a bit doomed. Read on.

My 2012 blog addressed some of the extinction-level events that could conceivably affect us. I recently watched a Top 10 list of threats to our existence on TV and it was similar to most you’d read, with the same errors and omissions – nuclear war, global virus pandemic, terminator scenarios, solar storms, comet or asteroid strikes, alien invasions, zombie viruses, that sort of thing. I’d agree that nuclear war is still the biggest threat, so number 1, and a global pandemic of a highly infectious and lethal virus should still be number 2. I don’t even need to explain either of those; we all know why they are in 1st and 2nd place.

The TV list included a couple that shouldn’t be in there.

One inclusion was a mega-eruption of Yellowstone or another super-volcano. A full-sized Yellowstone mega-eruption would probably kill millions of people and destroy much of civilization across a large chunk of North America, but some of us don’t actually live in North America and quite a few of us might well survive pretty well, so although it would be quite annoying for Americans, it is hardly a TEOTWAWKI threat. It would have big effects elsewhere, just not extinction-level ones. For most of the world it would only cause short-term disruptions, such as economic turbulence; at worst it would start a few wars here and there as regions compete for control in the new world order.

Number 3 on their list was climate change, which is an annoyingly wrong, albeit popularly held, inclusion. The only climate change mechanism proposed for catastrophe is global warming, and the reason it’s called climate change now is that global warming stopped in 1998 and still hasn’t resumed 17 years and 9 months later, so that term has become too embarrassing for doom mongers to use. CO2 is a warming agent and emissions should be treated with reasonable caution, but the net warming contribution of all the various feedbacks adds up to far less than originally predicted, and the climate models have almost all proven far too pessimistic. Any warming expected this century is very likely to be offset by reduced solar activity, and if and when warming resumes towards the end of the century, we will long since have migrated to non-carbon energy sources, so there really isn’t a longer-term problem to worry about. With warming by 2100 pretty insignificant, and less than half a metre of sea level rise, I certainly don’t think climate change deserves to be on any list of threats of any consequence in the next century.

By including climate change and Yellowstone, the top 10 list left out two worthier candidates, and my first replacement candidate for consideration is the grey goo scenario. The grey goo scenario is that self-replicating nanobots manage to convert everything, including us, into a grey goo. Take away the silly images of tiny little metal robots cutting things up atom by atom and the idea stops being laughable. Replace those little bots with bacteria that include electronics, and are linked across their own cloud to their own hive AI that redesigns their DNA to allow them to survive in any niche they find by treating whatever is there as food. When existing bacteria find a niche they can’t exploit, the next generation adapts to it. That self-evolving smart bacteria scenario is rather more feasible, and still results in bacteria that can conquer any ecosystem they find. We would find ourselves unable to fight back and could be wiped out. This isn’t very likely, but it is feasible, could happen by accident or design on our way to transhumanism, and might deserve a place in the top ten threats.

However, grey goo is only one of the NBIC convergence risks we have already imagined (NBIC = Nano-Bio-Info-Cogno). NBIC is a rich seam for doom-seekers. In there you’ll find smart yogurt, smart bacteria, smart viruses, beacons, smart clouds, active skin, direct brain links, zombie viruses, even switching people off. Zombie viruses featured in the top ten TV show too, but they don’t really deserve their own category any more than many other NBIC derivatives do. Anyway, that’s just a quick list of deliberate end-of-world scenarios – there will be many more I forgot to include and many I haven’t even thought of yet. Then you have to multiply the list by 3. Any of these could also happen by accident, and any could also happen via unintended consequences of lack of understanding, which is rather different from an accident but just as serious. So basically, deliberate action, accidents and stupidity are three primary routes to the end of the world via technology. So instead of just the grey goo scenario, a far bigger collective threat is NBIC generally, and I’d add NBIC collectively into my top ten list, quite high up, maybe 3rd after nuclear war and global virus. AI still deserves to be a separate category of its own, and I’d put it next at 4th.

Another class of technology suitable for abuse is space tech. I once wrote about a solar wind deflector using high atmosphere reflection, and calculated it could melt a city in a few minutes. Under malicious automated control, that is capable of wiping us all out, but it doesn’t justify inclusion in the top ten. One that might is the deliberate deflection of a large asteroid to impact on us. If it makes it in at all, it would be at tenth place. It just isn’t very likely someone would do that.

One I am very tempted to include is drones. Little tiny ones, not the Predators, and not even the ones everyone seems worried about at the moment that can carry 2kg of explosives or anthrax into the midst of football crowds. Tiny drones are far harder to shoot down, and soon we will have a lot of them around. Size-wise, think of midges or fruit flies. They could be self-organizing into swarms, managed by rogue regimes or terrorist groups, or set to auto, terminator style. They could recharge quickly by solar during short breaks, and restock their payloads from secret supplies that distribute with the swarm. They could be distributed globally using the winds and oceans, so they don’t need a plane or missile delivery system that is easily intercepted. Tiny drones can’t carry much, but with nerve gas or viruses, they don’t have to. Defending against such a threat is easy if there is just one; you can swat it. If there is a small cloud of them, you could use a flamethrower. If the sky is full of them and much of the trees and ground infested, it would be extremely hard to wipe them out. So if they are well designed to pose an extinction-level threat, as MAD 2.0 perhaps, then this would be way up in the top ten too, 5th.

Solar storms could wipe out our modern way of life by killing our IT. That itself would kill many people, via riots and fights for the last cans of beans and bottles of water. The most serious solar storms could be even worse. I’ll keep them in my list, at 6th place.

Global civil war could become an extinction level event, given human nature. We don’t have to go nuclear to kill a lot of people, and once society degrades to a certain level, well, we’ve all watched post-apocalypse movies or played the games. The few left would still fight with each other. I wrote in an earlier blog about the Great Western War and how it might come about, and such a thing could easily spread globally. I’ll give this 7th place.

A large asteroid strike could happen too, or a comet. Ones capable of extinction level events shouldn’t hit for a while, because we think we know all the ones that could do that. So this goes well down the list at 8th.

Alien invasion is entirely possible and could happen at any time. We’ve been sending out radio signals for quite a while so someone out there might have decided to come see whether our place is nicer than theirs and take over. It hasn’t happened yet so it probably won’t, but then it doesn’t have to be very probable to be in the top ten. 9th will do.

High energy physics research has also been suggested as capable of wiping out our entire planet via exotic particle creation, but the smart people at CERN say it isn’t very likely. Actually, I wasn’t all that convinced or reassured and we’ve only just started messing with real physics so there is plenty of time left to increase the odds of problems. I have a spare place at number 10, so there it goes, with a totally guessed probability of physics research causing a problem every 4000 years.

My top ten list for things likely to cause human extinction, or pretty darn close:

  1. Nuclear war
  2. Highly infectious and lethal virus pandemic
  3. NBIC – deliberate, accidental or lack of foresight (includes smart bacteria, zombie viruses, mind control etc)
  4. Artificial Intelligence, including but not limited to the Terminator scenario
  5. Autonomous Micro-Drones
  6. Solar storm
  7. Global civil war
  8. Comet or asteroid strike
  9. Alien Invasion
  10. Physics research

Not finished yet though. My title was ‘how nigh is the end’, not just what might cause it. It’s hard to assign probabilities to each one, but someone’s got to do it. So I’ll make an arbitrary wet-finger guess, in a dark room, wearing a blindfold, with no explanation of my reasoning to reduce arguments, but hey, that’s almost certainly still more accurate than most climate models, and some people actually believe those. I’m feeling particularly cheerful today so I’ll give my most optimistic assessment.

So, with probabilities of occurrence per year:

  1. Nuclear war:  0.5%
  2. Highly infectious and lethal virus pandemic: 0.4%
  3. NBIC – deliberate, accidental or lack of foresight (includes smart bacteria, zombie viruses, mind control etc): 0.35%
  4. Artificial Intelligence, including but not limited to the Terminator scenario: 0.25%
  5. Autonomous Micro-Drones: 0.2%
  6. Solar storm: 0.1%
  7. Global civil war: 0.1%
  8. Comet or asteroid strike 0.05%
  9. Alien Invasion: 0.04%
  10. Physics research: 0.025%

I hope you agree those are all optimistic. There have been several near misses of number 1 in my lifetime, so my 0.5% could have been 2% or 3% given the current state of the world. Also, 0.25% per year means you’d only expect such a thing to happen every 4 centuries, so it is a very small chance indeed. However, let’s stick with them and add them up. The cumulative probability of the top ten is 2.015%. Let’s add another arbitrary 0.185% for all the risks that didn’t make it into the top ten, rounding the total up to a nice neat 2.2% per year.

Some of the ones above aren’t possible quite yet, and others will vary in probability from year to year, but I don’t think that changes the overall guess much. If we take a 2.2% probability per year, we get an expectation value of 45.5 years for civilization life expectancy from now. Expectation date for human extinction:

2015.5 + 45.5 years = 2061.

Obviously the probability distribution extends from now to eternity, but don’t get too optimistic, because on these figures there currently is only a 15% chance of surviving past this century.
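
For anyone who wants to check the arithmetic, here is a minimal Python sketch using the guessed figures above; the risk names are just shorthand for the list entries.

```python
# Minimal sketch of the doom arithmetic, using the guessed per-year probabilities above.
risks = {
    "nuclear war": 0.005,
    "virus pandemic": 0.004,
    "NBIC": 0.0035,
    "AI": 0.0025,
    "micro-drones": 0.002,
    "solar storm": 0.001,
    "global civil war": 0.001,
    "comet or asteroid": 0.0005,
    "alien invasion": 0.0004,
    "physics research": 0.00025,
}

p_top_ten = sum(risks.values())      # 0.02015, i.e. 2.015% per year
p_total = p_top_ten + 0.00185        # arbitrary allowance for everything else -> 2.2% per year

life_expectancy = 1 / p_total        # mean wait for a 2.2%-per-year event: ~45.5 years
expected_date = 2015.5 + life_expectancy

p_survive_century = (1 - p_total) ** (2100 - 2015.5)  # chance nothing happens before 2100

print(f"Cumulative annual probability: {p_total:.3%}")            # 2.200%
print(f"Expected extinction date: {expected_date:.0f}")           # 2061
print(f"Chance of surviving past 2100: {p_survive_century:.0%}")  # ~15%
```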

If you can think of good reasons why my figures are far too pessimistic, by all means make your own guesses, but make them honestly, with a fair and reasonable assessment of how the world looks socially, religiously, politically, the quality of our leaders, human nature etc, and then add them up. You might still be surprised how little time we have left.

I’ll revise my original outlook upwards from ‘a bit doomed’.

We’re reasonably doomed.

Suspended animation and mind transfer as suicide alternatives

I last wrote about suicide a while back, but this time I want to take a different line of thought. Instead of looking at suicide per se, what about alternatives?

There are many motives for suicide but the most common is wanting to escape from a situation such as suffering intolerable pain or misery, which can arise from a huge range of causes. The victim looks at the potential futures available to them and in their analysis, the merits of remaining alive are less attractive than being dead.

The ‘being dead’ bit is not necessarily about a full ceasing of existence, but more about abdicating consciousness, with its implied sensory inputs, pain, anxiety, inner turmoil, or responsibility.

Last summer, a development in neuroscience offered the future potential to switch the brain off.

The researchers were aware that it may become an alternative to anesthetic, or even a means of avoiding boredom or fear. There are many situations where we want to temporarily suspend consciousness. Alcohol and drug abuse often arises from people using chemical means of doing so.

It seems to me that suicide offers a permanent version of the same, to be switched off forever, but with a key difference. In the anesthetic situation, normal life will resume with its associated problems. In suicide, it won’t. The problems are gone.

Suppose that people could get switched off for a very long time whilst being biologically maintained and housed somehow. Suppose it is long enough that any personal relationship issues will have vanished, that any debts, crimes or other legal issues are nullified, and that any pain or other health problems can be fixed, including fixing mental health issues and erasing intolerable memories if necessary. In many cases, that would be a suitable alternative to suicide. It offers the escape from the problems, with the added advantage that a better life might follow some time far in the future.

These requirements have widely varying timescales for potential delivery, and there are numerous big issues, but I don’t see fundamental technology barriers here. Suspending the mind for as long as necessary might offer a reasonable alternative to suicide, at least in principle. There is no need to look at all the numerous surrounding issues though. Consider taking that general principle and adapting it a bit. Mid-century onwards, we’ll have direct brain links sufficiently developed to allow porting of the mind to a new body, an android one for example. Having a new identity, a new body and a properly working and sanitized ‘brain’ would satisfy many of these same goals and avoid many of the legal, environmental, financial and ethical issues surrounding indefinite suspension. The person could simply cease their unpleasant existence and start afresh with a better one. I think it would be fine to kill the old body after the successful transfer. Any legal associations with the previous existence could be nullified. It is just a damaged container that would have been destroyed anyway. Have it destroyed, along with all its problems, and move on.

Mid-century is a lot earlier than would be needed for any social issues to go away otherwise. If a suicide is considered because of relationship or family problems, those problems might otherwise linger for generations. Creating a true new identity essentially solves them, albeit at a high cost of losing any relationships that matter. Long prison sentences would be wiped out by the biological death, and debts likewise. A new person appears, inheriting a mind, but one refreshed, potentially with the bad bits filtered out.

Such a future seems to be feasible technically, and I think it is also ethically feasible. Suicide is one-sided. Those remaining have to suffer the loss and pick up the pieces anyway, and they would be no worse off in this scenario; if they feel aggrieved that the person has somehow escaped the consequences of their actions, well, the person would have escaped anyway. But a life is saved and someone gets a second chance.



Citizen wage and why under 35s don’t need pensions

I recently blogged about the citizen wage and how under 35s in developed countries won’t need pensions. I cut and pasted it below this new pic for convenience. The pic contains the argument so you don’t need to read the text.

Economic growth makes citizen wage feasible and pensions irrelevant

If you do want to read it as text, here is the blog cut and pasted:

I introduced my calculations for a UK citizen wage in an earlier blog, and I wrote about the broader topic of changing capitalism a fair bit in my book Total Sustainability. A recent article reminded me of my thoughts on the topic, and having just spoken at an International Longevity Centre event, ageing and pensions were on my mind, so I joined a few dots. We won’t need pensions much longer. They would be redundant if we have a citizen wage/universal wage.

I argued that it isn’t economically feasible yet, and that only a £10k income could work today in the UK, which isn’t enough to live on comfortably, but I also worked out that with expected economic growth, a citizen wage equal to the UK average income today (£30k) would be feasible in 45 years. That level will be feasible sooner in richer countries such as Switzerland, which has already had a referendum on it, though they decided they aren’t ready for such a change yet. Maybe in a few years they’ll vote again and accept it.

The citizen wage I’m talking about has various names around the world, such as universal income. The idea is that everyone gets it. With no restrictions, there is little running cost, unlike today’s welfare which wastes a third on admin.

Imagine if everyone got £30k each, in today’s money. You, your parents, kids, grandparents, grand-kids… Now ask why you would need to have a pension in such a system. The answer is pretty simple: you won’t need one. A retired couple with £60k coming in can live pretty comfortably, with no mortgage left, and no young kids to clothe and feed. Let’s look at dates and simple arithmetic:

45 years from now is 2060, and that is when a £30k per year citizen wage will be feasible in the UK, given expected economic growth averaging around 2.5% per year. There are lots of reasons why we need it and why it is very likely to happen around then, give or take a few years – automation, AI, the decline of pure capitalism, and the need to reduce migration pressures, to name just a few.
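
As a quick sanity check on that 45-year figure, here is a minimal sketch of the compound growth arithmetic; the £10k starting point and the 2.5% growth rate are the figures quoted above.

```python
# Minimal sketch: a ~£10k citizen wage affordable today, compounded at ~2.5% annual growth.
feasible_today = 10_000   # roughly what the UK could afford now, per the earlier calculation
growth_rate = 0.025       # assumed average annual economic growth
years = 45                # 2015 to 2060

feasible_2060 = feasible_today * (1 + growth_rate) ** years
print(f"Affordable citizen wage in 2060 (today's money): about £{feasible_2060:,.0f}")
# prints roughly £30,000 - i.e. about today's UK average income
```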

Those due to retire in 2060 at age 70 would have been born in 1990. If you were born before that, you would either need a small pension to make up to £30k per year or just accept a lower standard of living for a few years. Anyone born in 1990 or later would be able to stop working, with no pension, and receive the citizen wage. Anyone else could stop too and receive it as well. That won’t cause economic collapse, since most people will welcome work that gives them a higher standard of living, but you could simply not work and live on what today we think of as the average wage, and by then, you’ll be able to get more with it thanks to costs falling via automation.

So, everyone after 2060 can choose to work or not to work, but either way they could live at least comfortably. Anyone less than 25 today does not need to worry about pensions. Anyone less than 35 really doesn’t have to worry much about them, because at worst they’ll only face a small shortfall from that comfort level and only for a few years. I’m 54, I won’t benefit from this until I am 90 or more, but my daughter will.


Are you under 25 and living in any developed country? Then don’t pay into a pension; you won’t need one.

Under 35, consider saving a little over your career, but only enough to last you a few years.

The future of I

Me, myself, I, identity, ego, self – lots of words for more or less the same thing. The way we think of ourselves evolves just like everything else. Perhaps we are still cavemen with better clothes and toys. You may be a man, a dad, a manager, a lover, a friend, an artist and a golfer, and those are all just descendants of caveman, dad, tribal leader, lover, friend, cave drawer and stone thrower. When you play Halo as Master Chief, that is not very different from acting or putting a tiger skin on for a religious ritual. There have always been many aspects of identity and people have always occupied many roles simultaneously. Technology changes, but it still pushes the same buttons that we evolved hundreds of thousands of years ago.

Will we develop new buttons to push? Will we create any genuinely new facets of ‘I’? I wrote a fair bit about aspects of self when I addressed the related topic of gender, since self-perception includes perceptions of how others perceive us and attempts to project a chosen identity that survives passing through such filters.

Self is certainly complex. Using ‘I’ simplifies the problem. When you say ‘I’, you are communicating with someone, (possibly yourself). The ‘I’ refers to a tailored context-dependent blend made up of a subset of what you genuinely consider to be you and what you want to project, which may be largely fictional. So in a chat room where people often have never physically met, very often, one fictional entity is talking to another fictional entity, with each side only very loosely coupled to reality. I think that is different from caveman days.

Since chat rooms started, virtual identities have come a long way. As well as acting out manufactured characters such as the heroes in computer games, people fabricate their own characters for a broad range of ‘shared spaces’, designing personalities and acting them out. They may run that personality instance in parallel with many others, possibly dozens at once. Putting on an act is certainly not new, and friends easily detect acts in normal interactions when they have known a real person a long time, but online interactions can mean that the fictional version is presented as the only manifestation of self that the group sees. With no way to know that person by face-to-face contact, the group has to take them at face value and interact with them as such, though they know that may not represent reality.

These designed personalities may be designed to give away as little as possible of the real person wielding them, and may exist for a range of reasons, but in such a case the person inevitably presents a shallow image. Probing below the surface must inevitably lead to leakage of the real self. New personality content must be continually created and remembered if the fictional entity is to maintain a disconnect from the real person. Holding the in-depth memory necessary to recall full personality aspects and history for numerous personalities and executing them is beyond most people. That means that most characters in shared spaces take on at least some characteristics of their owners.

But back to the point. These fabrications should be considered as part of that person. They are an ‘I’ just as much as any other ‘I’. Only their context is different. Those parts may only be presented to subsets of the role population, but by running them, the person’s brain can’t avoid internalizing the experience of doing so. They may be partly separated but they are fully open to the consciousness of that person. I think that as augmented and virtual reality take off over the next few years, we will see their importance grow enormously. As virtual worlds start to feel more real, so their anchoring and effects in the person’s mind must get stronger.

More than a decade ago, AI software agents started inhabiting chat rooms too, and in some cases these ‘bots’ became enough of a nuisance that they got banned. The front that they present is shallow but can give an illusion of reality. To some degree, they are an extension of the person or people that wrote their code. In fact, some are deliberately designed to represent a person when they are not present. The experiences that they have can’t be properly internalized by their creators, so they are a very limited extension to self. But how long will that be true? Eventually, with direct brain links and transhuman brain extensions into cyberspace, the combined experiences of I-bots may be fully available to consciousness just the same as first-hand experiences.

Then it will get interesting. Some of those bots might be part of multiple people. People’s consciousnesses will start to overlap. People might collect them, or subscribe to them. Much as you might subscribe to my blog, maybe one day, part of one person’s mind, manifested as a bot or directly ‘published’, will become part of your mind. Some people will become absorbed into the experience and adopt so many that their own original personality becomes diluted to the point of disappearance. They will become just an interference pattern of numerous minds. Some will be so infectious that they will spread widely. For many, it will be impossible to die, and for many others, their minds will be spread globally. The hive minds of Dr Who, then later the Borg on Star Trek are conceptual prototypes but as with any sci-fi, they are limited by the imagination of the time they were conceived. By the time they become feasible, we will have moved on and the playground will be far richer than we can imagine yet.

So, ‘I’ has a future just as everything else. We may have just started to add extra facets a couple of decades ago, but the future will see our concept of self evolve far more quickly.


I got asked by a reader whether I worry about this stuff. Here is my reply:

It isn’t the technology that worries me so much as the fact that humanity doesn’t really have any fixed anchor to keep human nature in place. Genetics fixed our biological nature, and our values and morality were largely anchored by the main religions. We in the West have thrown our religion in the bin and are already seeing a 30-year cycle in moral judgments, which puts our value sets on something of a random walk, with no destination, the current direction governed solely by media interpretation of, and political reaction to, the happenings of the day. Political correctness enforces subscription to that value set even more strictly than any bishop ever forced religious compliance. Anyone who thinks religion has gone away just because people don’t believe in God any more is blind.

Then as genetics technology truly kicks in, we will be able to modify some aspects of our nature. Who knows whether some future busybody will decree that a particular trait must be filtered out because it doesn’t fit his or her particular value set? Throwing AI into the mix as a new intelligence alongside us will introduce another degree of freedom. So there are already several forces acting on us in fairly random directions that could combine to drag us quickly anywhere. Then add the stuff above that allows us to share and swap personality? Sure, I worry about it. We are like young kids being handed a big chemistry set for Christmas without the instructions, not knowing that adding the blue stuff to the yellow stuff and setting it alight will go bang.

I am certainly no technotopian. I see the enormous potential that the tech can bring and it could be wonderful and I can’t help but be excited by it. But to get that you need to make the right decisions, and when I look at the sorts of leaders we elect and the sorts of decisions that are made, I can’t find the confidence that we will make the right ones.

On the good side, engineers and scientists are usually smart and can see most of the issues and prevent most of the big errors by using common industry standards, so there is a parallel self-regulatory system in place that politicians rarely have any interest in. On the other side, those smart guys will unfortunately usually follow the same value sets as the rest of the population. So we’re quite likely to avoid major accidents and blowing ourselves up or being taken over by AIs, but we’re unlikely to avoid the random walk values problem, and that will be our downfall.

So it could be worse, but it could be a whole lot better too.


The future of euthanasia and suicide

Another extract from You Tomorrow, on a topic that is very much in debate at the moment. It is an area that needs wise legislation, but I don’t have much confidence that we’ll get it. I’ll highlight some of the questions here, but since I don’t have many answers, I’ll illustrate why: they are hard questions.

Sadly, some people feel the need to end their own lives and an increasing number are asking for the legal right to assisted suicide. Euthanasia is increasingly in debate now too, with some health service practices bordering on it, some would say even crossing the boundary. Suicide and euthanasia are inextricably linked, mainly because it is impossible to know for certain what is in someone’s mind, and that is the basis of the well-known slippery slope from assisted suicide to euthanasia.

The stages of progress are reasonably clear. Is the suicide request a genuine personal decision, originating from that person’s free thoughts, based solely on their own interests? Or is it a personal decision influenced by the interests of others, real or imagined? Or is it a personal decision made after pressure from friends and relatives who want the person to die peacefully rather than suffer, with the best possible interests of the person in mind? In which case, who first raised the possibility of suicide as a potential way out? Or is it a personal decision made after pressure applied because relatives want rid of the person, perhaps over-eager to inherit or wanting to end their efforts to care for them? Guilt can be a powerful force and can be applied very subtly indeed over a period of time.

If the person is losing their ability to communicate a little, perhaps a friend or relative may help interpret their wishes to a doctor. From here, it is a matter of degree of communication skill loss and a gradual increase in the part relatives play in guiding the doctor’s opinion of whether the person genuinely wants to die. Eventually, the person might not be directly consulted because their relatives can persuade a doctor that they really want to die but can’t say so effectively. Not much further along the path, people make up their minds about what is in the best interests of another person as far as living or dying goes. It is a smooth path of many small steps from genuine suicide to euthanasia. And that all ignores the impact of possible alternatives such as pain relief, welfare, special care etc. Interestingly, the health services seem to be moving down the euthanasia route far faster than the above steps would suggest, skipping some of them and going straight to the ‘doctor knows best’ step.

Once the state starts to get involved in deciding cases, even by abdicating it to doctors, it is a long but easy road to Logan’s Run, where death is compulsory at a certain age, or a certain care cost, or when you’ve used up your lifetime carbon credit allocation.

There are sometimes very clear cases where someone obviously able to make up their own mind has made a thoroughly thought-through decision to end their life because of ongoing pain, poor quality of life and no hope of any cure or recovery, the only prospect being worsening condition leading to an undignified death. Some people would argue with their decision to die, others would consider that they should be permitted to do so in such clear circumstances, without any fear for their friends or relatives being prosecuted.

There are rarely razor-sharp lines between cases; situations can get blurred sometimes because of the complexity of individual lives, and because judges have their own personalities and differ slightly in their judgements. There is inevitably another case slightly further down the line that seems reasonable to a particular judge in the circumstances, and once that point is passed, and accepted by the courts, other cases with slightly less-defined circumstances can use it to help argue theirs. This is the path by which most laws evolve. They start in parliament and then after implementation, case law and a gradually changing public mind-set or even the additive effects of judges’ ideologies gradually evolve them into something quite different.

It seems likely given current trends and pressures that one day, we will accept suicide, and then we may facilitate it. Then, if we are not careful, it may evolve into euthanasia by a hundred small but apparently reasonable steps, and if we don’t stop it in time, one day we might even have a system like the one in the film ‘Logan’s Run’.

Suicide and euthanasia are certainly gradually becoming less shocking to people, and we should expect that in the far future both will become more accepted. If you doubt that society can change its attitudes quickly, it actually only takes about 30 years to get a full reversal. Think of how long it took for homosexuality to change from condemned to fashionable, or how long abortion took from being something a woman would often be condemned for to something that is now a woman’s right to choose. Each of these took only 3 decades for a full 180 degree turnaround. Attitudes to the environment switched from mad panic about a coming ice age to mad panic about global warming in just 3 decades too, and are already switching back again towards ice age panic. If the turn in attitudes to suicide started 10 years ago, then we may have about 20 years left before it is widely accepted as a basic right that is only questioned by bigots. But social change aside, the technology will make the whole area much more interesting.

As I argued earlier, the very long term (2050 and beyond) will bring technology that allows people to link their brains to the machine world, perhaps using nanotech implants connected to each synapse to relay brain activity to a high speed neural replica hosted by a computer. This will have profound implications for suicide too. When this technology has matured, it will allow people to do wonderful things such as using machine sensors as extensions to their own capabilities. They will be able to use android bodies to move around and experience distant places and activities as if they were there in person. For people who feel compelled to end it all because of disability, pain or suffering, an alternative where they could effectively upload their mind into an android might be attractive. Their quality of life could improve dramatically at least in terms of capability. We might expect that pain and suffering could be dealt with much more effectively too if we have a direct link into the brain to control the way sensations are dealt with. So if that technology does progress as I expect, then we might see a big drop in the number of people who want to die.

But the technology options don’t stop there. If a person has a highly enhanced replica of their own brain/mind in the machine world, people will begin to ask why they need the original. The machine world could give them greater sensory ability, greater physical ability, and greater mental ability. Smarter, with better memory, more and better senses, connected to all the world’s knowledge via the net, able effectively to wander around the world at the speed of light, connected directly to other people’s minds when they want, and doing all this without fear of ageing, ill health or pain – this would seem a very attractive lifestyle. And it will become possible this century, at low enough cost for anyone to afford.

What of suicide then? It might not seem so important to keep the original body, especially if it is worn out or defective, so even without any pain and suffering, some people might decide to dispose of their body and carry on their lives without it. Partial suicide might become possible. Aside from any religious issues, this would be a hugely significant secular ethical issue. Updating the debate today, should people be permitted to opt out of physical existence, only keeping an electronic copy of their mind, timesharing android bodies when they need to enter the physical world? Should their families and friends be able to rebuild their loved ones electronically if they die accidentally? If so, should people be able to rebuild several versions, each representing the deceased’s different life stages, or just the final version, which may have been ill or in decline?

And then the ethical questions get even trickier. If it is possible to replicate the brain’s structure and so capture the mind, will people start to build ‘restore points’, where they make a permanent record of the state of their self at a given moment? If they get older and decide they could have run their lives better, they might be able to start again from any restore point. If the person exists in cyberspace and has disposed of their physical body, what about ownership of their estate? What about working and living in cyberspace? Will people get jobs? Will they live in virtual towns like the Sims? Indeed, in the same time frame, AI will have caught up and superseded humans in ability. Maybe Sims will get bored in their virtual worlds and want to end it all by migrating to the real world. Maybe they could swap bodies with someone coming the other way?

What will the State do when it is possible to reduce costs and environmental impact by migrating people into the virtual universe? Will it then become socially and politically acceptable, even compulsory when someone reaches a given age or costs too much for health care?

So perhaps suicide has an interesting future. It might eventually decline, and then later increase again, but in many very different forms, becoming a whole range of partial suicide options. But the scariest possibility is that people may not be able to die completely. If their body is an irrelevance, and there are many restore points from which they can be recovered, friends, family, or even the state might keep them ‘alive’ as long as they are useful. And depending on the law, they might even become a form of slave labour, their minds used for information processing or creativity whether they wish it or not. It has often truly been noted that there are worse fates than death.

Your most likely cause of death is being switched off

This one’s short and sweet.

The majority of you reading this blog live in the USA, UK, Canada or Australia. More than half of you are under 40.

That means your natural life expectancy is over 85, so statistically, your body will probably live until after 2060.

By then, electronic mind enhancement will probably mean that most of your mind runs on external electronics, not in your brain, so your mind won’t die when your body does. You’ll just need to find a new body, probably an android, for those times you aren’t content being on the net. Most of us identify ourselves mainly with our mind, and would still think of ourselves as alive if our mind carries on as if nothing much has happened, which is likely.

Electronic immortality is not true immortality though. Your mind can only survive on the net as long as it is supported by the infrastructure. That will be controlled by others. Future technology will likely be able to defend against asteroid strikes, power surges caused by solar storms and so on, so accidental death seems unlikely for hundreds of years. However, since minds supported on the net need energy to keep running and electronics to be provided and maintained, and will want to make trips into the ‘real’ world, or even live there a lot of the time, they will have a significant resource footprint. They will probably not be considered as valuable as other people whose bodies are still alive. In fact they might be considered as competition – for jobs, resources, space, housing, energy… They may even be seen as easy targets for future cyber-terrorists.

So, it seems quite likely, maybe even inevitable, that life limits will be imposed on the vast majority of you. At some point you will simply be switched off. There might be some prioritization, competitions, lotteries or other selection mechanism, but only some will benefit from it.

Since you are unlikely to die when your body ceases to work, your most likely cause of death is therefore to be switched off. Sorry to break that to you.

Future human evolution

I’ve done patches of work on this topic frequently over the last 20 years. It usually features in my books at some point too, but it’s always good to look afresh at anything. Sometimes you see something you didn’t see last time.

Some of the potential future is pretty obvious. I use the word potential, because there are usually choices to be made, regulations that may or may not get in the way, or many other reasons we could divert from the main road or even get blocked completely.

We’ve been learning genetics now for a long time, with a few key breakthroughs. It is certain that our understanding will increase, less certain how far people will be permitted to exploit the potential here in any given time frame. But let’s take a good example to learn a key message first. In IVF, we can filter out embryos that have the ‘wrong’ genes, and use their sibling embryos instead. Few people have a problem with that. At the same time, pregnant women may choose an abortion if they don’t want a child when they discover it is the wrong gender, but in the UK at least, that is illegal. The moral and ethical values of our society are on a random walk though, changing direction frequently. The social assignment of right and wrong can reverse completely in just 30 years. In this example, we saw a complete reversal of attitudes to abortion itself within 30 years, so who is to say we won’t see reversal on the attitude to abortion due to gender? It is unwise to expect that future generations will have the same value sets. In fact, it is highly unlikely that they will.

That lesson likely applies to many technology developments and quite a lot of social ones – such as euthanasia and assisted suicide, both already well into their attitude reversal. At some point, even if something is distasteful to current attitudes, it is pretty likely to be legalized eventually, and hard to ban once the door is opened. There will always be another special case that opens the door a little further. So we should assume that we may eventually use genetics to its full capability, even if it is temporarily blocked for a few decades along the way. The same goes for other biotech, nanotech, IT, AI and any other transhuman enhancements that might come down the road.

So, where can we go in the future? What sorts of splits can we expect in the future human evolution path? It certainly won’t remain as just plain old homo sapiens.

I drew this evolution path a long time ago in the mid 1990s:

human evolution 1

It was clear even then that we could connect external IT to the nervous system, eventually the brain, and this would lead to IT-enhanced senses, memory, processing and higher intelligence, hence homo cyberneticus. (No point in having had to suffer Latin at school if you aren’t allowed to get your own back on it later.) Meanwhile, genetic enhancement and optimization of selected features would lead to homo optimus. Converging these two – why should you have to choose, why not have a perfect body and an enhanced mind? – you get homo hybridus. Meanwhile, in the robots and AI world, machine intelligence is increasing and eventually we get the first self-aware AI/robot (it makes little sense to separate the two since networked AI can easily be connected to a machine such as a robot), and this has its own evolution path towards a rich diversity of different kinds of AI and robots, robotus multitudinus. Since both the AI world and the human world could be networked to the same network, it is then easy to see how they could converge, to give homo machinus. This future transhuman would have any of the abilities of humans and machines at its disposal, and eventually the ability to network minds into a shared consciousness. A lot of ordinary conventional humans would remain even with safe upgrades available; I called them homo sapiens ludditus. As they watch their neighbours getting all the best jobs, winning at all the sports, buying everything, and getting the hottest dates too, many would be tempted to accept the upgrades, and homo sapiens might gradually fizzle out.

My future evolution timeline stayed like that for several years. Then in the early 2000s I updated it to include later ideas:

human evolution 2

I realized that we could still add AI into computer games long after it becomes comparable with human intelligence, so games like EA’s The Sims might evolve to allow entire civilizations living within a computer game, each aware of their existence, each running just as real a life as you and I. It is perhaps unlikely that we would allow children any time soon to control fully sentient people within a computer game, acting as some sort of a god to them, but who knows, future people will argue that they’re not really real people so it’s OK. Anyway, you could employ them in the game to do real knowledge work, and make money, like slaves. But since you’re nice, you might do an incentive program for them that lets them buy their freedom if they do well, letting them migrate into an android. They could even carry on living in their Sims home and still wander round in our world too.

Emigration from computer games into our world could be high, but the reverse is also possible. If the mind is connected well enough, and enhanced so far by external IT that almost all of it runs on the IT instead of in the brain, then when your body dies, your mind would carry on living. It could live in any world, real or fantasy, or move freely between them. (As I explained in my last blog, it would also be able to travel in time, subject to certain very expensive infrastructural requirements.) As well as migrants coming via the electronic immortality route, it is likely that some people who are unhappy in the real world might prefer to end it all and migrate their minds into a virtual world where they might be happy. As an alternative to suicide, I can imagine that would be a popular route. If they feel better later, they could even come back, using an android. So we’d have an interesting future with lots of variants of people, AI and computer game and fantasy characters migrating among various real and imaginary worlds.

But it doesn’t stop there. Meanwhile, back in the biotech labs, progress is continuing to harness bacteria to make components of electronic circuits (after which the bacteria are dissolved to leave the electronics). Bacteria can also have genes added to emit light or electrical signals. They could later be enhanced so that as well as being able to fabricate electronic components, they could power them too. We might add various other features too, but eventually, we’re likely to end up with bacteria that contain electronics and can connect to other bacteria nearby that contain other electronics to make sophisticated circuits. We could obviously harness self-assembly and self-organisation, which are also progressing nicely. The result is that we will get smart bacteria, collectively making sophisticated, intelligent, conscious entities of a wide variety, with lots of sensory capability distributed over a wide range. Bacteria Sapiens.

I often talk about smart yogurt using such an approach as a key future computing solution. If it were to stay in a yogurt pot, it would be easy to control. But it won’t. A collective bacterial intelligence such as this could gain a global presence, and could exist in land, sea and air, maybe even in space. Allowing lots of different biological properties could allow colonization of every niche. In fact, the first few generations of bacteria sapiens might be smart enough to design their own offspring. They could probably buy or gain access to equipment to fabricate them and release them to multiply. It might be impossible for humans to stop this once it gets to a certain point. Accidents happen, as do rogue regimes, terrorism and general mad-scientist type mischief.

And meanwhile, we’ll also be modifying nature. We’ll be genetically enhancing a wide range of organisms, bringing some back from extinction, creating new ones, adding new features, and in some cases changing even the basic mechanisms by which nature works. We might even create new kinds of DNA or develop substitutes with enhanced capability. We may change nature’s evolution hugely. With a mix of old and new and modified, nature evolves nicely into Gaia Sapiens.

We’re not finished with the evolution chart though. Here is the next one:

human evolution 3

Just one thing is added. Homo zombius. I realized eventually that the sci-fi ideas of zombies being created by viruses could be entirely feasible. A few viruses, bacteria and other parasites can affect the brains of the victims and change their behaviour to harness them for their own life cycle.

See for fun.

Bacteria sapiens could be highly versatile. It could make virus variants if need be. It could evolve itself to be able to live in our bodies, maybe penetrate our brains. Bacteria sapiens could make tiny components that connect to brain cells and intercept signals within our brains, or put signals back in. It could read our thoughts, and then control our thoughts. It could essentially convert people into remote-controlled robots, or zombies as we usually call them. They could even control muscles directly to a point, so even if the zombie is decapitated, it could carry on for a short while. I used that as part of my storyline in Space Anchor. If future humans have widespread availability of cordless electricity, as they might, then it is far-fetched but possible that headless zombies could wander around for ages, using the bacterial sensors to navigate. Homo zombius would be mankind enslaved by bacteria. Hopefully just a few people, but it could be everyone if we lose the battle. Think how difficult a war against bacteria would be, especially if they can penetrate anyone’s brain and intercept thoughts. The Terminator films look a lot less scary when you compare the Terminator with the real potential of smart yogurt.

Bacteria sapiens might also need to be consulted when humans plan any transhuman upgrades. If they don’t consent, we might not be able to do other transhuman stuff. Transhumans might only be possible if transbacteria allow it.

Not done yet. I wrote a couple of weeks ago about fairies. I suggested fairies are entirely feasible future variants that would be ideally suited to space travel.

They’d also have lots of environmental advantages as well as most other things from the transhuman library. So I think they’re inevitable. So we should add fairies to the future timeline. We need a revised timeline and they certainly deserve their own branch. But I haven’t drawn it yet, hence this blog as an excuse. Before I do and finish this, what else needs to go on it?

Well, time travel in cyberspace is feasible and attractive beyond 2075. It’s not proper real-world time travel, which physics doesn’t permit, but it could feel just like that to those involved, and it could go further than you might think. It certainly will have some effects in the real world, because some of the active members of the society beyond 2075 might be involved in it. It certainly changes the future evolution timeline if people can essentially migrate from one era to another. (There are some very strong caveats applicable here that I tried to explain in the blog, so please don’t misquote me as a nutter – I haven’t forgotten basic physics and logic, I’m just suggesting a feasible implementation of cyberspace that would allow time travel within it. It is really a cyberspace bubble that intersects with the real world at the real-time front, so it doesn’t cause any physics problems, but at that intersection, its users can interact fully with the real world, and their cultural experiences of time travel are therefore significant to others outside it.)

What else? OK, well there is a very significant community (many millions of people) that engages in all sorts of fantasy in shared online worlds, chat rooms and other forums. Fairies, elves, assorted spirits, assorted gods, dwarves, vampires, werewolves, assorted furry animals, assorted aliens, dolls, living statues, mannequins, remote-controlled people, assorted inanimate but living objects, plants and of course assorted robot/android variants are just some of those that already exist in principle; I’m sure I’ve forgotten some, and anyway, many more are invented every year, so an exhaustive list would quickly become out of date. In most cases, many people already role-play these with a great deal of conviction and imagination, not just in standalone games, but in communities, with rich cultures, back-stories and story-lines. So we know there is strong demand, and we’re only waiting for technology to catch up enough to implement them, which it certainly will.

Biotech can do a lot, and nanotech and IT can add greatly to that. If you can design any kind of body with almost any kind of properties and constraints and abilities, and add any kind of IT and sensing and networking and sharing and external links for control and access and duplication, we will have an extremely rich diversity of future forms with an infinite variety of subcultures, cross-fertilization, migration and transformation. In fact, I can’t add just a few branches to my timeline. I need millions. So instead I will just lump all these extras into a huge collected category that allows almost anything, called Homo Whateverus.

So, here is the future of human (and associates) evolution, for the next 150 years. A few possible cross-links are omitted for clarity.


I won’t be around to watch it all happen. But a lot of you will.


Will population grow again after 2050? To 15Bn?

We’ve been told for decades now that population will level off, probably around 2050, and will likely decline after that. The world population will peak around 2050 at about 9.5 billion. That’s pretty much the accepted wisdom at the moment.

The reasoning is pretty straightforward and seems sound, and the evidence follows it closely. People are becoming wealthier. Wealthier people have fewer kids. If you don’t expect your kids to die from disease or starvation before they’re grown up, you don’t need to make as many.

But what if it’s based on fallacy? What if it is just plain wrong? What if the foundations of that reasoning change dramatically by 2050 and it no longer holds true? Indeed. What if?

Before I continue, let me say that my book ‘Total Sustainability’, and my various optimistic writings and blogs about population growth, all agree with the view that population will level off around 2050 and then slowly decline, while food supply and resource use improve thanks to better technologies, thereby helping us to restore the environment. If population is instead going to increase again, I and many others will have to rethink.

The reason I am concerned now is that I have just made another cross-link with the trend of rising wealth, which will allow even the most basic level of welfare to be set at a high level. It is like the citizen payment that the Swiss voted on recently. I suggested it a couple of years ago myself and in my books, and am in favour of it. Everyone would receive the same monthly payment from the state whether they work or not. The taxes due would then be calculated on total income, regardless of how you get it, and I would use a flat tax for that too. Quite simple and fair. Only wealthier people would pay any net tax, and then according to how wealthy they are. My calculations say that by 2050, everyone in the UK could get £30,000 a year each (in today’s money) based on the typical level of growth we’ve seen in recent decades (ignoring the recession years). In some countries it would be even higher, in some less, but the cost of living is also lower in many countries. In many countries welfare could be as generous as average wages are today.
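
To illustrate how a universal payment plus a flat tax on total income would combine, here is a minimal sketch. The £30k payment is the figure above; the 40% flat rate is purely my own illustrative assumption, not a rate proposed in the text.

```python
# Illustrative sketch: universal citizen payment plus a flat tax on total income.
# The £30k payment comes from the text above; the 40% flat rate is an assumption.
CITIZEN_WAGE = 30_000
FLAT_RATE = 0.40  # hypothetical rate, chosen only for illustration

def net_transfer(earned: float) -> float:
    """Citizen payment received minus flat tax paid on total income.
    Positive means net recipient; negative means net taxpayer."""
    tax = FLAT_RATE * (earned + CITIZEN_WAGE)
    return CITIZEN_WAGE - tax

for earned in (0, 20_000, 45_000, 100_000):
    print(f"earned £{earned:>7,}: net transfer £{net_transfer(earned):>+10,.0f}")

# Break-even earned income = CITIZEN_WAGE * (1 - FLAT_RATE) / FLAT_RATE = £45,000,
# so under these assumptions only people earning above that are net taxpayers.
```

Under those assumptions, someone earning nothing keeps £18k net, and the break-even point falls at £45k of earned income; the exact numbers obviously depend on the rate chosen.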

So by 2050, people in many countries could have an income that allows them to survive reasonably comfortably, even without having a job. That won’t stop everyone working, but it will make it much easier for people who want to raise a family to do so without economic concerns or having to go out to work. It will become possible to live comfortably without working and raise a family.

We know that people tend to have fewer kids as they become wealthier, but there are a number of possible reasons for that. One is the better survival chances for children. That may still have an effect in the developing world, but has little effect in richer countries, so it probably won’t have any impact on future population levels in those countries. Another is the need to work to sustain the higher standard of living one has become used to, to maintain a social status and position, and the parallel reluctance to have kids that would make that more difficult. A small number of people have kids as a means to solicit state support, but that must be tiny compared to the numbers who have fewer so that they can sustain themselves. Another reason is that having kids impedes personal freedom, impacts social life and sex life, and adds perhaps unwelcome responsibility. These reasons are all vulnerable to the changes caused by increasing welfare and the attitudes that follow. There are probably many other reasons too.

Working and having fewer kids allows a higher standard of living than having kids and staying at home to look after them, but most people are prepared to compromise on material quality of life to some degree to get the obvious emotional rewards of having kids. Perhaps people are having fewer kids as they get wealthier because the drop in standard of living is too great, or the risks too high. If the guaranteed basic standard of living is comfortable, there is little risk. If a lot of people choose not to work and just live on that, there will also be less social stigma in not working, and more social opportunities from having more people in the same boat. So perhaps we may reasonably deduce that making it less uncomfortable to stop work and have more kids would create a virtuous circle of more and more people having more kids.

I won’t go as far as saying that will happen, just that it might. I don’t know enough about the relative forces that make someone decide whether to have another child. It is hard to predetermine the social attitudes that will prevail in 2050 and beyond, whether people will feel encouraged or deterred from having more kids.

My key point here is that the drop in fertility we see today due to increasing wealth might only hold true up to a certain point, beyond which it reverses. It may simply be that the welfare and social floor is too low to offer a sufficient safety net for those considering having kids, so they choose not to. If the floor is raised thanks to improving prosperity, as it might well be, then population could start to rise quickly again. The assumption that population will peak at 9 or 9.5 billion and then fall might be wrong. It could rise to up to 15 billion, at which point other factors will start to reassert themselves. If our assumptions on age of death are also underestimates, it could go even higher.

And another new book: You Tomorrow, 2nd Edition

I wrote You Tomorrow two years ago. It was my first ebook, and pulled together a lot of material I’d written on the general future of life, with some gaps then filled in. I was quite happy with it as a book, but I could see I’d allowed quite a few typos to get into the final work, and a few other errors too.

However, two years is a long time, and I’ve thought about a lot of new areas in that time. So I decided a few months ago to do a second edition. I deleted a bit, rearranged it, and then added quite a lot. I also wrote the partner book, Total Sustainability. It includes a lot of my ideas on future business and capitalism, politics and society that don’t really belong in You Tomorrow.

So, now it’s out on sale on Amazon in paper, at £9.00 and in ebook form at £3.81 (guessing the right price to get a round number after VAT is added is beyond me. Did you know that paper books don’t have VAT added but ebooks do?)

And here’s a pretty picture:


Future gender equality – legally recognise everyone’s male and female sides

My writing on the future of gender and same-sex reproduction now forms a section of my new book You Tomorrow, Second Edition, on the future of humanity, gender, lifestyle and our surroundings. Available from Amazon as paper and ebook.