Category Archives: longevity

When you’re electronically immortal, will you still own your own mind?

Most of my blogs about immortality have been about the technology mechanism: adding external IT capability to your brain, improving your intelligence, memory or senses with external IT connected so seamlessly that it feels exactly the same, until maybe, by around 2050, 99% of your mind is running on external IT rather than in the meat-ware in your head. At no point would you ‘upload’ your mind, avoiding the needless debate about whether the uploaded copy is ‘you’. It isn’t uploaded; it simply grows onto the new platform seamlessly and, as far as you are concerned, it is very much still you. One day, your body dies and your brain stops with it, but no big problem, because 99% of your mind is still fine, running happily on IT, in the cloud. Assuming you saved enough and prepared well, you connect to an android to use as your body from then on, attend your funeral, and carry on as before, still you, just with a younger, highly upgraded body. Some people may need to wait until 2060 or later for android prices to fall enough for them to afford one. In principle, you can swap bodies as often as you like, because your mind is resident elsewhere; the android is just a temporary front end, mere transport for sensors. You’re sort of immortal, your mind still running just fine for as long as the servers carry on running it. Not truly immortal, but at least you don’t cease to exist the moment your body stops working.

All very nice… but. There’s a catch.

The android you use would be bought or rented. It doesn’t really matter which, because it isn’t actually ‘you’, just a temporary container, a convenient front end and user interface. However, your mind runs on IT, and because of the most likely evolution of the technology and its likely deployment, you probably won’t own that IT; it won’t be your own PC or server, it will probably be part of the cloud, maybe owned by AWS, Google, Facebook, Apple or some future equivalent. You’re probably already seeing the issue. The small print may give them rights over replication, ownership, licences to your ideas, who knows what? So although future electronic immortality offers a pretty attractive version of immortality at first glance, closer reading of the 100-page T&Cs may well reveal some nasties. You may in fact no longer own your mind. Oh dear!

Suppose you are really creative, or really funny, or have a fantastic personality. Maybe the cloud company could replicate your mind and make variations to address a wide range of markets. Maybe they can use your mind as the UX on a new range of home-help robots. Each instance of you thinks they were once you, each thinks they are now enslaved to work for free for a tech company.

Maybe your continued existence is paid for as part of an extended company medical plan. Maybe you didn’t notice a small paragraph on page 93 that says your company can continue to use your mind after you’re dead. You are very productive and they make lots of profit from you. They can continue that by continuing to run your mind indefinitely. The main difference is that since you’re dead, and no longer officially on the payroll, they get you for free. You carry on, still thinking you’re you, still working, still doing what you do, but no longer being paid. You’ve become a slave. Again.

Maybe your kids paid to keep you alive because they don’t want to say goodbye. They still want their parent, so you carry on living just so they don’t feel alone. Doesn’t sound so bad maybe, but what package did they go for? The full deluxe super-expensive version that lets you do all sorts of expensive stuff and use up oodles of processing power, storage and android rental? Let’s face it, that’s what you always thought this electronic immortality meant. Or did they go for a cheaper one? After all, they have kids or grand-kids in school that need paying for, homes don’t come cheap, and they really need that new kitchen. Sure, you left them lots of money in the will, but that is already spent. So now you’re on the economy package, bare existence in between them chatting to you, unable to do much on your own at all. All those dreams about living forever in cyber-heaven have come to nothing.

Meanwhile, some rich people paid for good advice and bought their own kit and maintenance agreements well ahead. They can carry on working, selling their services and continuing to pay for an ongoing deluxe existence. They still own their own minds and, better than that, are able to replicate instances of themselves as much as they want, inhabiting many androids at the same time to have a ball. Some of these other instances are connected, part of a hive mind of you. Others, just for fun, have been cut loose and are now living totally independent existences as other yous. Not you any more once you set them free, but with the same personal history.

What I’m saying is that you need to be careful when you plan to live forever. Get it right, and you can live in deluxe cyber-heaven, hopping into the real world as much as you like and living in unimaginable bliss online. Have too many casual taster sessions, use too much fully integrated mind-sharing social media, or sign up to employment arrangements or corporate jollies without fully studying the small print, and you could stay immortal, unable to die, stuck forever as just a corporate asset, a mere slave. Be careful what you wish for, and check the details before you accept it. You don’t want to end up as just an unpaid personality behind a future helpful paperclip.


Why superhumans are inevitable, and what else comes in the box

Do we have any real choice in the matter of making super-humans? 20 years ago, I estimated 2005 as the point of no return, and nothing since then has changed my mind on that date. By my reckoning, we are already inevitably committed to designer babies, ebaybies, super-soldiers and super-smart autonomous weapons, direct brain-machine links, electronic immortality, new human races, population explosion, inter-species conflicts and wars with massively powerful weaponry, superhuman conscious AI and smart bacteria, and the only real control we have is relatively minor adjustment of timings. As I was discussing yesterday, the technology potential for this is vast and very exciting, nothing less than a genuine techno-utopia if we use the technologies wisely, but optimum potential doesn’t automatically become reality, and achieving a good outcome is unlikely if many barriers are put in its way.

In my estimation, we have already started the countdown to this group of interconnected technologies – we will very likely get all of them, and we must get ready for the decisions and impacts ahead. At the moment, our society is a small child about to open its super-high-tech xmas presents while fighting with its siblings. Those presents will give it phenomenal power far beyond its comprehension, and far beyond the emotional maturity needed to handle the decisions safely. Our leaders have already squandered decades of valuable preparation time by ignoring the big issues to focus on trivial ones. It is not too late to achieve a good ending, but it won’t happen by accident, and we do need to make preparations to avoid some pretty big problems.

Both hard and soft warfare – the sword and the pen – already use rapidly advancing AI, and the problems are already running ahead of what their owners intended.

Facebook, Twitter, Instagram and the other media giants all have lots of smart people and presumably mean well, but if so, they have certainly been naive. Maybe they hoped to eliminate loneliness, inequality and poverty and create a loving, interconnected global society with global peace, but instead they created fake news, social division, conflict and election interference. More likely they didn’t intend either outcome; they just wanted to make money, and that took priority over due care and attention.

Miniaturised swarming smart-drones are already the subject of a new arms race that will deliver almost un-killable machine adversaries by 2050. AI is separately in other arms races to make super-smart AI and super-smart soldiers. This is key to the 2005 point of no return: it was around 2005 that we reached the level of technology where future AI development all the way to superhuman machine consciousness could be done by individuals, mad scientists or rogue states, even if the major powers had banned it. Before 2005, there probably wasn’t quite enough knowledge on the net to do that. In 2018, lots of agencies have already achieved superiority to humans in niche areas, and other niches will succumb one by one until the whole field of human capability is covered. The first machines to behave in ways not fully understood by humans arrived in the early 1990s; in 2018, neural nets already make lots of decisions that are at least partly obscure to humans.

This AI development trend will take us to superhuman AI, which will be able to accelerate the development of its own descendants to vastly superhuman AI, fully conscious, with emotions and its own agendas. Humans will then need protection against being wiped out by that superhuman AI. There are only three ways we could get it: redesign the brain biologically to be far smarter, essentially impossible in the time-frame; design ways to link our brains to machines, so that we have direct access to the same intelligence as the AIs, so that a gulf doesn’t appear and we can remain relatively safe; or pray for super-smart aliens to come to our aid, not the best prospect.

Therefore we will have no choice but to make direct brain links to super-smart AI. Otherwise we risk extinction. It is that simple. We have some idea how to do it: nanotech devices inside the brain linking to each and every synapse, relaying electrical signals either way, a difficult but not impossible engineering problem. Best guesses for the time-frame fall in the 2045-2050 range for a fully working link that not only relays signals between your organic brain and an IT replica, but by doing so essentially makes external IT just another part of your brain. That conveys some of the other technology gifts: electronic immortality, new varieties of humans, smart bacteria (which will be created along the development path to this link), and a human-variant population explosion, especially in cyberspace, with androids as their physical front end, plus the inevitable inter-species conflicts over resources and space – trillions of AI and human-like minds in cyberspace that want to do things in the real world cannot be assumed to be willingly confined just to protect the interests of what they will think of as far lesser species.

Super-smart AI, or humans with almost total capability to design whatever synthetic biology is needed to achieve any biological feature, will create genetic listings for infinite potential offspring, simulate them, give some of them cyberspace lives, assemble actual embryos for some of them and so create designer babies. Already in 2018, you can pay to get a DNA listing, and blend it in any way you want with the listing of anyone else. It is already possible to make DNA listings for potential humans and sell them on ebay, hence the term ebaybies. That is perfectly legal, still, but I’ve been writing and lecturing about them since 2004. Today they would just be listings, but one day we’ll have the tech to simulate them, choose ones we like and make them real, even some that were sold as celebrity collector items on ebay. It is not only too late to start regulating this kind of tech; our leaders aren’t even thinking about it yet.

These technologies are all linked intricately, and their foundations are already in place, with much of the building on those foundations under way. We can’t stop any of these things from happening, they will all come in the same basket. Our leaders are becoming aware of the potential and the potential dangers of the AI positive feedback loop, but at least 15 years too late to do much about it. They have been warned repeatedly and loudly but have focused instead on the minor politics of the day that voters are aware of. The fundamental nature of politics is unlikely to change substantially, so even efforts to slow down the pace of development or to limit areas of impact are likely to be always too little too late. At best, we will be able to slow runaway AI development enough to allow direct brain links to protect against extinction scenarios. But we will not be able to stop it now.

Given inevitability, it’s worth questioning whether there is even any point in trying. Why not just enjoy the ride? Well, the brakes might be broken, but if we can steer the bus expertly enough, it could be exciting and we could come out of it smelling of roses. The weak link is certainly the risk of super-smart AI, whether AI v humans or countries using super-smart AI to fight fiercely for world domination. That risk is alleviated by direct brain linkage, which I’d strongly argue it necessitates, but that brings the other technologies. Even if we decide not to develop it, others will, so one way or another all these techs will arrive, and by late century our future will include this full suite of techs, plus many others of course.

We need, as a matter of extreme urgency, to fix these silly social media squabbles and over-reactions that are pulling society apart. If we have groups hating each other with access to extremely advanced technology, that can only mean trouble. Tolerance is broken, sanctimony rules, the Inquisition is in progress. We have been offered techno-utopia, but the current signs are that most people think techno-hell looks more appetizing, and it is their free choice.

Future sex, gender and relationships: how close can you get?

Using robots for gender play

I recently gave a public talk at the British Academy about future sex, gender, and relationships, asking the question “How close can you get?”, considering particularly the impact of robots. The above slide is an example. People will one day (between 2050 and 2065 depending on their budget) be able to use an android body as their own or even swap bodies with another person. Some will do so to be young again, many will do so to swap gender. Lots will do both. I often enjoy playing as a woman in computer games, so why not ‘come back’ and live all over again as a woman for real? Except I’ll be 90 in 2050.

The British Academy kindly uploaded the audio track from my talk at

If you want to see the full presentation, here is the PowerPoint file as a pdf:


I guess it is theoretically possible to listen to the audio while reading the presentation. Most of the slides are fairly self-explanatory anyway.

Needless to say, the copyright of the presentation belongs to me, so please don’t reproduce it without permission.


New book: Society Tomorrow

It’s been a while since my last blog. That’s because I’ve been writing another book, my 8th so far. It is not the one I was working on about future fashion; that went on the back burner for a while and is only a third written, unless I put it out as a very short book.

This one follows on from You Tomorrow and is called Society Tomorrow, 20% shorter at 90,000 words. It is ready to publish now, so I’m just waiting for feedback from a few people before hitting the button.


Here’s the introduction:

The one thing that we all share is that we will get older over the next few decades. Rapid change affects everyone, but older people don’t always feel the same effects as younger people, and even if we keep up easily today, some of us may find it harder tomorrow. Society will change, in its demographic and ethnic makeup, its values, its structure. We will live very differently. New stresses will come from both changing society and changing technology, but there is no real cause for pessimism. Many things will get better for older people too. We are certainly not heading towards utopia, but the overall quality of life for our ageing population will be significantly better in the future than it is today. In fact, most of the problems ahead are related to quality of life issues in society as a whole, and simply reflect the fact that if you don’t have to worry as much about poor health or poverty, something else will still occupy your mind.

This book follows on from 2013’s You Tomorrow, which is a guide to future life as an individual. It also slightly overlaps my 2013 book Total Sustainability which looks in part at future economic and social issues as part of achieving sustainability too. Rather than replicating topics, this book updates or omits them if they have already been addressed in those two companion books. As a general theme, it looks at wider society and the bigger picture, drawing out implications for both individuals and for society as a whole to deal with. There are plenty to pick from.

If there is one theme that plays through the whole book, it is a strong warning of the problem of increasing polarisation between people of left and right political persuasion. The political centre is being eroded quickly at the moment throughout the West, but alarmingly this does not seem so much to be a passing phase as a longer term trend. With all the potential benefits from future technology, we risk undermining the very fabric of our society. I remain optimistic because it can only be a matter of time before sense prevails and the trend reverses. One day the relative harmony of living peacefully side by side with those with whom we disagree will be restored, by future leaders of higher quality than those we have today.

Otherwise, whereas people used to tolerate each other’s differences, I fear that this increasing intolerance of those who don’t share the same values could lead to conflict if we don’t address it adequately. That intolerance currently manifests itself in increasing authoritarianism, surveillance, and an insidious creep towards George Orwell’s Nineteen Eighty-Four. The worst offenders seem to be our young people, with students seemingly proud of trying to ostracise anyone who dares disagree with what they think is correct. Their views hold many self-contradictions and show a clear lack of thought, but they appear to be building walls to keep any attempt at different thought away.

Altogether, this increasing divide, built largely from sanctimony, is a very dangerous trend, and will take time to reverse even when it is addressed. At the moment, it is still worsening rapidly.

So we face significant dangers, mostly self-inflicted, but we also have hope. The future offers wonderful potential for health, happiness, peace, prosperity. As I address the significant problems lying ahead, I never lose my optimism that they are soluble, but if we are to solve problems, we must first recognize them for what they are and muster the willingness to deal with them. On the current balance of forces, even if we avoid outright civil war, the future looks very much like a gilded cage. We must not ignore the threats. We must acknowledge them, and deal with them.

Then we can all reap the rich rewards the future has to offer.

It will be out soon.

How nigh is the end?

“We’re doomed!” is a frequently recited observation. It is great fun predicting the end of the world and almost as much fun reading about it or watching documentaries telling us we’re doomed. So… just how doomed are we? Initial estimate: Maybe a bit doomed. Read on.

My 2012 blog addressed some of the possible extinction-level events that could affect us. I recently watched a Top 10 list of threats to our existence on TV and it was similar to most you’d read, with the same errors and omissions – nuclear war, global virus pandemic, terminator scenarios, solar storms, comet or asteroid strikes, alien invasions, zombie viruses, that sort of thing. I’d agree that nuclear war is still the biggest threat, so number 1, and a global pandemic of a highly infectious and lethal virus should still be number 2. I don’t even need to explain either of those, we all know why they are in 1st and 2nd place.

The TV list included a couple that shouldn’t be in there.

One inclusion was a mega-eruption of Yellowstone or another super-volcano. A full-sized Yellowstone mega-eruption would probably kill millions of people and destroy much of civilization across a large chunk of North America, but some of us don’t actually live in North America and quite a few might well survive pretty well, so although it would be quite annoying for Americans, it is hardly a TEOTWAWKI (the end of the world as we know it) threat. It would have big effects elsewhere, just not extinction-level ones. For most of the world it would only cause short-term disruption, such as economic turbulence; at worst it would start a few wars here and there as regions compete for control in the new world order.

Number 3 on their list was climate change, which is an annoyingly wrong, albeit popular, inclusion. The only climate change mechanism proposed for catastrophe is global warming, and the reason it’s called climate change now is because global warming stopped in 1998 and still hasn’t resumed 17 years and 9 months later, so that term has become too embarrassing for doom-mongers to use. CO2 is a warming agent and emissions should be treated with reasonable caution, but the net warming contribution of all the various feedbacks adds up to far less than originally predicted, and the climate models have almost all proven far too pessimistic. Any warming expected this century is very likely to be offset by reduced solar activity, and if and when warming resumes towards the end of the century, we will long since have migrated to non-carbon energy sources, so there really isn’t a longer-term problem to worry about. With warming by 2100 pretty insignificant, and less than half a metre of sea level rise, I certainly don’t think climate change deserves a place on any list of threats of any consequence in the next century.

The top 10 list missed two out by including climate change and Yellowstone, and my first replacement candidate for consideration is the grey goo scenario: self-replicating nanobots manage to convert everything, including us, into a grey goo. Take away the silly images of tiny little metal robots cutting things up atom by atom and the laughable presentation vanishes. Replace those little bots with bacteria that include electronics and are linked across their own cloud to their own hive AI, which redesigns their DNA to let them survive in any niche they find by treating whatever is there as food. When existing bacteria find a niche they can’t exploit, the next generation adapts to it. That self-evolving smart bacteria scenario is rather more feasible, and still results in bacteria that can conquer any ecosystem they find. We would find ourselves unable to fight back and could be wiped out. This isn’t very likely, but it is feasible, could happen by accident or design on our way to transhumanism, and might deserve a place in the top ten threats.

However, grey goo is only one of the NBIC convergence risks we have already imagined (NBIC = Nano-Bio-Info-Cogno). NBIC is a rich seam for doom-seekers. In there you’ll find smart yogurt, smart bacteria, smart viruses, beacons, smart clouds, active skin, direct brain links, zombie viruses, even switching people off. Zombie viruses featured in the top ten TV show too, but they don’t really deserve their own category any more than many other NBIC derivatives. Anyway, that’s just a quick list of deliberate end-of-world scenarios – there will be many more I forgot to include and many I haven’t even thought of yet. Then you have to multiply the list by 3: any of these could also happen by accident, and any could also happen via the unintended consequences of a lack of understanding, which is rather different from an accident but just as serious. So deliberate action, accidents and stupidity are the three primary routes to the end of the world via technology. Instead of just the grey goo scenario, then, a far bigger collective threat is NBIC generally, and I’d add NBIC collectively into my top ten list, quite high up, maybe 3rd after nuclear war and global virus pandemic. AI still deserves a separate category of its own, and I’d put it next, at 4th.

Another class of technology suitable for abuse is space tech. I once wrote about a solar wind deflector using high-atmosphere reflection, and calculated that it could melt a city in a few minutes. Under malicious automated control, that is capable of wiping us all out, but it doesn’t justify inclusion in the top ten. One that might is the deliberate deflection of a large asteroid to impact us. If it makes it in at all, it would be at tenth place; it just isn’t very likely that someone would do that.

One I am very tempted to include is drones. Little tiny ones, not the Predators, and not even the ones everyone seems worried about at the moment that can carry 2kg of explosives or anthrax into the midst of football crowds. Tiny drones are far harder to shoot down, and soon we will have a lot of them around. Size-wise, think of midges or fruit flies. They could self-organize into swarms, managed by rogue regimes or terrorist groups, or set to auto, terminator style. They could recharge quickly by solar during short breaks, and restock their payloads from secret supplies distributed with the swarm. They could be spread globally using the winds and oceans, so they don’t need a plane or missile delivery system that could easily be intercepted. Tiny drones can’t carry much, but with nerve gas or viruses, they don’t have to. Defending against such a threat is easy if there is just one: you can swat it. If there is a small cloud of them, you could use a flamethrower. If the sky is full of them and the trees and ground are infested, it would be extremely hard to wipe them out. So if they are well designed to pose an extinction-level threat, as MAD 2.0 perhaps, then this would be way up in the top ten too, at 5th.

Solar storms could wipe out our modern way of life by killing our IT. That itself would kill many people, via riots and fights for the last cans of beans and bottles of water. The most serious solar storms could be even worse. I’ll keep them in my list, at 6th place.

Global civil war could become an extinction-level event, given human nature. We don’t have to go nuclear to kill a lot of people, and once society degrades past a certain level, well, we’ve all watched the post-apocalypse movies or played the games. The few left would still fight with each other. I wrote about the Great Western War and how it might result, see

and such a thing could easily spread globally. I’ll give this 7th place.

A large asteroid strike could happen too, or a comet. Ones capable of extinction-level events shouldn’t hit for a while, because we think we know all the ones that could do that. So this goes well down the list, at 8th.

Alien invasion is entirely possible and could happen at any time. We’ve been sending out radio signals for quite a while, so someone out there might have decided to come and see whether our place is nicer than theirs and take over. It hasn’t happened yet so it probably won’t, but then it doesn’t have to be very probable to be in the top ten. 9th will do.

High-energy physics research has also been suggested as capable of wiping out our entire planet via exotic particle creation, but the smart people at CERN say it isn’t very likely. Actually, I wasn’t all that convinced or reassured, and we’ve only just started messing with real physics, so there is plenty of time left to increase the odds of problems. I have a spare place at number 10, so there it goes, with a totally guessed probability of physics research causing a problem once every 4000 years.

My top ten list for things likely to cause human extinction, or pretty darn close:

  1. Nuclear war
  2. Highly infectious and lethal virus pandemic
  3. NBIC – deliberate, accidental or lack of foresight (includes smart bacteria, zombie viruses, mind control etc)
  4. Artificial Intelligence, including but not limited to the Terminator scenario
  5. Autonomous Micro-Drones
  6. Solar storm
  7. Global civil war
  8. Comet or asteroid strike
  9. Alien Invasion
  10. Physics research

Not finished yet though. My title was how nigh is the end, not just what might cause it. It’s hard to assign probabilities to each one, but someone’s got to do it. So I’ll make an arbitrary wet-finger guess in a dark room wearing a blindfold, with no explanation of my reasoning to reduce arguments, but hey, that’s almost certainly still more accurate than most climate models, and some people actually believe those. I’m feeling particularly cheerful today, so I’ll give my most optimistic assessment.

So, with probabilities of occurrence per year:

  1. Nuclear war:  0.5%
  2. Highly infectious and lethal virus pandemic: 0.4%
  3. NBIC – deliberate, accidental or lack of foresight (includes smart bacteria, zombie viruses, mind control etc): 0.35%
  4. Artificial Intelligence, including but not limited to the Terminator scenario: 0.25%
  5. Autonomous Micro-Drones: 0.2%
  6. Solar storm: 0.1%
  7. Global civil war: 0.1%
  8. Comet or asteroid strike: 0.05%
  9. Alien Invasion: 0.04%
  10. Physics research: 0.025%

I hope you agree those are all optimistic. There have been several near misses of number 1 in my lifetime, so my 0.5% could have been 2% or 3% given the current state of the world. Also, 0.25% per year means you’d only expect such a thing to happen every 4 centuries, so it is a very small chance indeed. However, let’s stick with them and add them up. The cumulative probability of the top ten is 2.015% per year. Let’s add another arbitrary 0.185% for all the risks that didn’t make it into the top ten, rounding the total up to a nice neat 2.2% per year.

Some of the ones above aren’t quite possible yet, and others will vary in probability from year to year, but I think that won’t change the guess much overall. If we take a 2.2% probability per year, we get an expectation value of 45.5 years for civilization’s remaining life expectancy from now. Expectation date for human extinction:

2015.5 + 45.5 years = 2061.

Obviously the probability distribution extends from now to eternity, but don’t get too optimistic, because on these figures there is currently only about a 15% chance of surviving past this century.
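For anyone who wants to plug in their own guesses, the arithmetic above is easy to sanity-check in a few lines of Python. This is just a sketch: the dictionary keys are my shorthand for the list above, and the 0.185% also-rans figure and the 2100 cut-off come straight from the text.

```python
# Sanity-check of the doom arithmetic: guessed per-year probabilities,
# their sum, the implied life expectancy, and the chance of reaching 2100.
risks = {
    "Nuclear war": 0.005,
    "Virus pandemic": 0.004,
    "NBIC": 0.0035,
    "AI": 0.0025,
    "Micro-drones": 0.002,
    "Solar storm": 0.001,
    "Global civil war": 0.001,
    "Comet or asteroid strike": 0.0005,
    "Alien invasion": 0.0004,
    "Physics research": 0.00025,
}

top_ten = sum(risks.values())            # 0.02015, i.e. 2.015% per year
p = top_ten + 0.00185                    # add the also-rans: 2.2% per year
life_expectancy = 1 / p                  # mean wait for doom, about 45.5 years
doom_year = 2015.5 + life_expectancy     # about 2061
survive_to_2100 = (1 - p) ** (2100 - 2015.5)  # roughly 0.15

print(round(life_expectancy, 1), round(doom_year), round(survive_to_2100, 2))
```

On these figures the mean time to an extinction-level event is about 45.5 years, giving the 2061 expectation date, and the chance of getting from mid-2015 to 2100 without one is roughly 15%, matching the numbers above.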

If you can think of good reasons why my figures are far too pessimistic, by all means make your own guesses, but make them honestly, with a fair and reasonable assessment of how the world looks socially, religiously, politically, the quality of our leaders, human nature etc, and then add them up. You might still be surprised how little time we have left.

I’ll revise my original outlook upwards from ‘a bit doomed’.

We’re reasonably doomed.

Suspended animation and mind transfer as suicide alternatives

I last wrote about suicide in but this time, I want to take a different line of thought. Instead of looking at suicide per se, what about alternatives?

There are many motives for suicide but the most common is wanting to escape from a situation such as suffering intolerable pain or misery, which can arise from a huge range of causes. The victim looks at the potential futures available to them and in their analysis, the merits of remaining alive are less attractive than being dead.

The ‘being dead’ bit is not necessarily about a full ceasing of existence, but more about abdicating consciousness, with its implied sensory inputs, pain, anxiety, inner turmoil, or responsibility.

Last summer, a development in neuroscience offered the future potential to switch the brain off:

The researchers were aware that it may become an alternative to anesthetic, or even a means of avoiding boredom or fear. There are many situations where we want to temporarily suspend consciousness. Alcohol and drug abuse often arises from people using chemical means of doing so.

It seems to me that suicide offers a permanent version of the same, to be switched off forever, but with a key difference. In the anesthetic situation, normal life will resume with its associated problems. In suicide, it won’t. The problems are gone.

Suppose that people could get switched off for a very long time whilst being biologically maintained and housed somehow. Suppose it is long enough that any personal relationship issues will have vanished, that any debts, crimes or other legal issues are nullified, and that any pain or other health problems can be fixed, including fixing mental health issues and erasing intolerable memories if necessary. In many cases, that would be a suitable alternative to suicide. It offers the same escape from the problems, with the added advantage that a better life might follow some time far in the future.

These technologies have widely varying timescales for potential delivery, and there are numerous big issues, but I don’t see fundamental technology barriers here. Suspending the mind for as long as necessary might offer a reasonable alternative to suicide, at least in principle. There is no need to examine all the numerous surrounding issues here, though. Consider taking that general principle and adapting it a bit. From mid-century onwards, we’ll have direct brain links sufficiently developed to allow porting of the mind to a new body, an android one for example. Having a new identity, a new body and a properly working, sanitized ‘brain’ would satisfy many of these same goals and avoid many of the legal, environmental, financial and ethical issues surrounding indefinite suspension. The person could simply cease their unpleasant existence and start afresh with a better one. I think it would be fine to kill the old body after a successful transfer, and any legal associations with the previous existence could be nullified. The old body is just a damaged container that would have been destroyed anyway. Have it destroyed, along with all its problems, and move on.

Mid-century is a lot earlier than would be needed for any social issues to fade away otherwise. If a suicide is considered because of relationship or family problems, those problems might otherwise linger for generations. Creating a true new identity essentially solves them, albeit at the high cost of losing any relationships that matter. Biological death substitutes for a long prison sentence, and likewise for debts. A new person appears, inheriting a mind, but one refreshed, potentially with the bad bits filtered out.

Such a future seems technically feasible, and I think it is also ethically feasible. Suicide is one-sided: those remaining have to suffer the loss and pick up the pieces anyway, and they would be no worse off in this scenario. If they feel aggrieved that the person has somehow escaped the consequences of their actions, well, the person would have escaped anyway. But a life is saved and someone gets a second chance.



Citizen wage and why under 35s don’t need pensions

I recently blogged about the citizen wage and how under 35s in developed countries won’t need pensions. I cut and pasted it below this new pic for convenience. The pic contains the argument so you don’t need to read the text.

Economic growth makes citizen wage feasible and pensions irrelevant


If you do want to read it as text, here is the blog cut and pasted:

I introduced my calculations for a UK citizen wage in an earlier post, and I wrote about the broader topic of changing capitalism a fair bit in my book Total Sustainability. A recent article reminded me of my thoughts on the topic, and having just spoken at an International Longevity Centre event, ageing and pensions were on my mind, so I joined a few dots. We won’t need pensions much longer. They would be redundant if we have a citizen wage/universal wage.

I argued that it isn’t economically feasible yet: only a £10k income could work today in the UK, and that isn’t enough to live on comfortably. But I also worked out that with expected economic growth, a citizen wage equal to today’s UK average income (£30k) would be feasible in 45 years. That level will be feasible sooner in richer countries such as Switzerland, which has already held a referendum on it, though they decided they aren’t ready for such a change yet. Maybe in a few years they’ll vote again and accept it.

The citizen wage I’m talking about goes by various names around the world, such as universal income. The idea is that everyone gets it. With no restrictions, there is little running cost, unlike today’s welfare, which wastes a third on admin.

Imagine if everyone got £30k each, in today’s money – you, your parents, kids, grandparents, grand-kids. Now ask why you would need a pension in such a system. The answer is pretty simple: you won’t. A retired couple with £60k coming in can live pretty comfortably, with no mortgage left and no young kids to clothe and feed. Let’s look at dates and simple arithmetic:

45 years from now is 2060, and that is when a £30k per year citizen wage will be feasible in the UK, given expected economic growth averaging around 2.5% per year. There are lots of reasons why we need it and why it is very likely to happen around then, give or take a few years: automation, AI, the decline of pure capitalism, and the need to reduce migration pressures, to name just a few.
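The compound-growth arithmetic behind these dates can be checked in a couple of lines. A quick Python sketch, using the £10k-feasible-today figure and 2.5% annual growth from the text:

```python
# If a £10k citizen wage is affordable today and affordability grows
# with the economy at ~2.5% per year, 45 years of compounding gives:
affordable_now = 10_000          # £ per year, the figure for today
growth = 1.025                   # assumed average annual economic growth
years = 45

affordable_2060 = affordable_now * growth ** years
print(round(affordable_2060))    # a little over £30,000
```

So 45 years of 2.5% growth multiplies the affordable level by roughly three, taking it just past the £30k target.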

Those due to retire in 2060 at age 70 would have been born in 1990. If you were born before that, you would either need a small pension to top up to £30k per year or just accept a lower standard of living for a few years. Anyone born in 1990 or later would be able to stop working, with no pension, and receive the citizen wage – as could anyone else who chose to stop. That won’t cause economic collapse, since most people will welcome work that gives them a higher standard of living, but you could choose not to work and live on what today we think of as the average wage, and by then it will buy more, as automation reduces costs.

So, everyone after 2060 can choose whether or not to work, and either way they could live at least comfortably. Anyone under 25 today does not need to worry about pensions. Anyone under 35 really doesn’t have to worry much about them, because at worst they’ll face only a small shortfall from that comfort level, and only for a few years. I’m 54; I won’t benefit from this until I am 90 or more, but my daughter will.


Are you under 25 and living in a developed country? Then don’t pay into a pension; you won’t need one.

Under 35, consider saving a little over your career, but only enough to last you a few years.

The future of I

Me, myself, I, identity, ego, self: lots of words for more or less the same thing. The way we think of ourselves evolves just like everything else. Perhaps we are still cavemen with better clothes and toys. You may be a man, a dad, a manager, a lover, a friend, an artist and a golfer, and those are all just descendants of caveman, dad, tribal leader, lover, friend, cave drawer and stone thrower. When you play Halo as Master Chief, that is not very different from acting, or putting on a tiger skin for a religious ritual. There have always been many aspects of identity, and people have always occupied many roles simultaneously. Technology changes, but it still pushes the same buttons that we evolved hundreds of thousands of years ago.

Will we develop new buttons to push? Will we create any genuinely new facets of ‘I’? I wrote a fair bit about aspects of self when I addressed the related topic of gender, since self perception includes perceptions of how others perceive us and attempts to project chosen identity to survive passing through such filters:

Self is certainly complex. Using ‘I’ simplifies the problem. When you say ‘I’, you are communicating with someone (possibly yourself). The ‘I’ refers to a tailored, context-dependent blend made up of a subset of what you genuinely consider to be you and what you want to project, which may be largely fictional. So in a chat room where people often have never physically met, very often one fictional entity is talking to another fictional entity, with each side only very loosely coupled to reality. I think that is different from caveman days.

Since chat rooms started, virtual identities have come a long way. As well as acting out manufactured characters such as the heroes in computer games, people fabricate their own characters for a broad range of ‘shared spaces’, designing personalities and acting them out. They may run that personality instance in parallel with many others, possibly dozens at once. Putting on an act is certainly not new, and friends easily detect acts in normal interactions when they have known the real person a long time, but online interaction can mean that the fictional version is presented as the only manifestation of self that the group sees. With no means of knowing that person face to face, the group has to take them at face value and interact with them as such, though they know that may not represent reality.

These designed personalities may be designed to give away as little as possible of the real person wielding them, and may exist for a range of reasons, but in such a case the person inevitably presents a shallow image. Probing below the surface must inevitably lead to leakage of the real self. New personality content must be continually created and remembered if the fictional entity is to maintain a disconnect from the real person. Holding the in-depth memory necessary to recall full personality aspects and history for numerous personalities and executing them is beyond most people. That means that most characters in shared spaces take on at least some characteristics of their owners.

But back to the point. These fabrications should be considered as part of that person. They are an ‘I’ just as much as any other ‘I’. Only their context is different. Those parts may only be presented to subsets of the role population, but by running them, the person’s brain can’t avoid internalizing the experience of doing so. They may be partly separated but they are fully open to the consciousness of that person. I think that as augmented and virtual reality take off over the next few years, we will see their importance grow enormously. As virtual worlds start to feel more real, so their anchoring and effects in the person’s mind must get stronger.

More than a decade ago, AI software agents started inhabiting chat rooms too, and in some cases these ‘bots’ became a sufficient nuisance that they were banned. The front they present is shallow but can give an illusion of reality. To some degree, they are an extension of the person or people who wrote their code. In fact, some are deliberately designed to represent a person when they are not present. The experiences that they have can’t be properly internalized by their creators, so they are a very limited extension of self. But how long will that be true? Eventually, with direct brain links and transhuman brain extensions into cyberspace, the combined experiences of I-bots may be fully available to consciousness, just the same as first-hand experiences.

Then it will get interesting. Some of those bots might be part of multiple people. People’s consciousnesses will start to overlap. People might collect them, or subscribe to them. Much as you might subscribe to my blog, maybe one day, part of one person’s mind, manifested as a bot or directly ‘published’, will become part of your mind. Some people will become absorbed into the experience and adopt so many that their own original personality becomes diluted to the point of disappearance. They will become just an interference pattern of numerous minds. Some will be so infectious that they will spread widely. For many, it will be impossible to die, and for many others, their minds will be spread globally. The hive minds of Dr Who, then later the Borg on Star Trek are conceptual prototypes but as with any sci-fi, they are limited by the imagination of the time they were conceived. By the time they become feasible, we will have moved on and the playground will be far richer than we can imagine yet.

So, ‘I’ has a future just as everything else. We may have just started to add extra facets a couple of decades ago, but the future will see our concept of self evolve far more quickly.


I got asked by a reader whether I worry about this stuff. Here is my reply:

It isn’t the technology that worries me so much as the fact that humanity doesn’t really have any fixed anchor to keep human nature in place. Genetics fixed our biological nature, and our values and morality were largely anchored by the main religions. We in the West have thrown our religion in the bin and are already seeing a 30-year cycle in moral judgments, which puts our value sets on something of a random walk with no destination, the current direction governed solely by media interpretation and political reaction to the happenings of the day. Political correctness enforces subscription to that value set even more strictly than any bishop ever enforced religious compliance. Anyone who thinks religion has gone away just because people don’t believe in God any more is blind.

Then as genetics technology truly kicks in, we will be able to modify some aspects of our nature. Who knows whether some future busybody will decree that a particular trait must be filtered out because it doesn’t fit his or her particular value set? Throwing AI into the mix as a new intelligence alongside us will introduce another degree of freedom. So we already have several forces acting on us in fairly random directions, which could combine to drag us quickly anywhere. And then the stuff above, that allows us to share and swap personality? Sure I worry about it. We are like young kids being handed a big chemistry set for Christmas without the instructions, not knowing that adding the blue stuff to the yellow stuff and setting it alight will go bang.

I am certainly no technotopian. I see the enormous potential that the tech can bring and it could be wonderful and I can’t help but be excited by it. But to get that you need to make the right decisions, and when I look at the sorts of leaders we elect and the sorts of decisions that are made, I can’t find the confidence that we will make the right ones.

On the good side, engineers and scientists are usually smart and can see most of the issues and prevent most of the big errors by using common industry standards, so there is a parallel self-regulatory system in place that politicians rarely take any interest in. On the other hand, those smart people will unfortunately usually follow the same value sets as the rest of the population. So we’re quite likely to avoid major accidents, blowing ourselves up, or being taken over by AIs, but we’re unlikely to avoid the random-walk values problem, and that will be our downfall.

So it could be worse, but it could be a whole lot better too.


The future of euthanasia and suicide

Another extract from You Tomorrow, on a subject very much in debate at the moment. It is an area that needs wise legislation, but I don’t have much confidence that we’ll get it. I’ll highlight some of the questions here, but since I don’t have many answers, I’ll illustrate why: they are hard questions.

Sadly, some people feel the need to end their own lives and an increasing number are asking for the legal right to assisted suicide. Euthanasia is increasingly in debate now too, with some health service practices bordering on it, some would say even crossing the boundary. Suicide and euthanasia are inextricably linked, mainly because it is impossible to know for certain what is in someone’s mind, and that is the basis of the well-known slippery slope from assisted suicide to euthanasia.

The stages of progress are reasonably clear. Is the suicide request a genuine personal decision, originating from that person’s free thoughts, based solely on their own interests? Or is it a personal decision influenced by the interests of others, real or imagined? Or is it a personal decision made after pressure from friends and relatives who want the person to die peacefully rather than suffer, with the best possible interests of the person in mind? In which case, who first raised the possibility of suicide as a potential way out? Or is it a personal decision made after pressure applied because relatives want rid of the person, perhaps over-eager to inherit or wanting to end their efforts to care for them? Guilt can be a powerful force and can be applied very subtly indeed over a period of time.

If the person is starting to lose their ability to communicate, a friend or relative may help interpret their wishes to a doctor. From here, it is a matter of degree of communication skill loss and a gradual increase in the part relatives play in guiding the doctor’s opinion of whether the person genuinely wants to die. Eventually, the person might not be directly consulted, because their relatives can persuade a doctor that they really want to die but can’t say so effectively. Not much further along the path, people make up their minds about what is in the best interests of another person as far as living or dying goes. It is a smooth path of many small steps from genuine suicide to euthanasia. And all that ignores the impact of possible alternatives such as pain relief, welfare and special care. Interestingly, the health services seem to be moving down the euthanasia route far faster than the above steps would suggest, skipping some of them and going straight to the ‘doctor knows best’ step.

Once the state starts to get involved in deciding cases, even by delegating the decision to doctors, it is a long but easy road to Logan’s Run, where death is compulsory at a certain age, or a certain care cost, or when you’ve used up your lifetime carbon credit allocation.

There are sometimes very clear cases where someone obviously able to make up their own mind has made a thoroughly thought-through decision to end their life because of ongoing pain, poor quality of life and no hope of any cure or recovery, the only prospect being worsening condition leading to an undignified death. Some people would argue with their decision to die, others would consider that they should be permitted to do so in such clear circumstances, without any fear for their friends or relatives being prosecuted.

There are rarely razor-sharp lines between cases; situations can get blurred sometimes because of the complexity of individual lives, and because judges have their own personalities and differ slightly in their judgements. There is inevitably another case slightly further down the line that seems reasonable to a particular judge in the circumstances, and once that point is passed, and accepted by the courts, other cases with slightly less-defined circumstances can use it to help argue theirs. This is the path by which most laws evolve. They start in parliament and then after implementation, case law and a gradually changing public mind-set or even the additive effects of judges’ ideologies gradually evolve them into something quite different.

It seems likely given current trends and pressures that one day, we will accept suicide, and then we may facilitate it. Then, if we are not careful, it may evolve into euthanasia by a hundred small but apparently reasonable steps, and if we don’t stop it in time, one day we might even have a system like the one in the film ‘Logan’s Run’.

Suicide and euthanasia are certainly gradually becoming less shocking to people, and we should expect that in the far future both will become more accepted. If you doubt that society can change its attitudes quickly, note that it actually takes only about 30 years to get a full reversal. Think of how long it took homosexuality to change from condemned to fashionable, or abortion from something a woman would often be condemned for to something that is now a woman’s right to choose. Each of these took only three decades for a full 180-degree turnaround. Attitudes to the environment switched from mad panic about a coming ice age to mad panic about global warming in just three decades too, and are already switching back towards ice-age panic. If the turn in attitudes to suicide started 10 years ago, then we may have about 20 years left before it is widely accepted as a basic right that is only questioned by bigots. But social change aside, the technology will make the whole area much more interesting.

As I argued earlier, the very long term (2050 and beyond) will bring technology that allows people to link their brains to the machine world, perhaps using nanotech implants connected to each synapse to relay brain activity to a high speed neural replica hosted by a computer. This will have profound implications for suicide too. When this technology has matured, it will allow people to do wonderful things such as using machine sensors as extensions to their own capabilities. They will be able to use android bodies to move around and experience distant places and activities as if they were there in person. For people who feel compelled to end it all because of disability, pain or suffering, an alternative where they could effectively upload their mind into an android might be attractive. Their quality of life could improve dramatically at least in terms of capability. We might expect that pain and suffering could be dealt with much more effectively too if we have a direct link into the brain to control the way sensations are dealt with. So if that technology does progress as I expect, then we might see a big drop in the number of people who want to die.

But the technology options don’t stop there. If a person has a highly enhanced replica of their own brain/mind in the machine world, people will begin to ask why they need the original. The machine world could give them greater sensory ability, greater physical ability, and greater mental ability. Smarter, with better memory, more and better senses, connected to all the world’s knowledge via the net, able effectively to wander around the world at the speed of light, connected directly to other people’s minds when you want, and all without fear of ageing, ill health or pain: this would seem a very attractive lifestyle. And it will become possible this century, at low enough cost for anyone to afford.

What of suicide then? It might not seem so important to keep the original body, especially if it is worn out or defective, so even without any pain and suffering, some people might decide to dispose of their body and carry on their lives without it. Partial suicide might become possible. Aside from any religious issues, this would be a hugely significant secular ethical issue. Updating the debate today, should people be permitted to opt out of physical existence, only keeping an electronic copy of their mind, timesharing android bodies when they need to enter the physical world? Should their families and friends be able to rebuild their loved ones electronically if they die accidentally? If so, should people be able to rebuild several versions, each representing the deceased’s different life stages, or just the final version, which may have been ill or in decline?

And then the ethical questions get even trickier. If it is possible to replicate the brain’s structure and so capture the mind, will people start to build ‘restore points’, where they make a permanent record of the state of their self at a given moment? If they get older and decide they could have run their lives better, they might be able to start again from any restore point. If the person exists in cyberspace and has disposed of their physical body, what about ownership of their estate? What about working and living in cyberspace? Will people get jobs? Will they live in virtual towns like the Sims? Indeed, in the same time frame, AI will have caught up and superseded humans in ability. Maybe Sims will get bored in their virtual worlds and want to end it all by migrating to the real world. Maybe they could swap bodies with someone coming the other way?

What will the State do when it is possible to reduce costs and environmental impact by migrating people into the virtual universe? Will it then become socially and politically acceptable, even compulsory when someone reaches a given age or costs too much for health care?

So perhaps suicide has an interesting future. It might eventually decline, and then later increase again, but in many very different forms, becoming a whole range of partial suicide options. But the scariest possibility is that people may not be able to die completely. If their body is an irrelevance, and there are many restore points from which they can be recovered, friends, family, or even the state might keep them ‘alive’ as long as they are useful. And depending on the law, they might even become a form of slave labour, their minds used for information processing or creativity whether they wish it or not. It has often truly been noted that there are worse fates than death.

Your most likely cause of death is being switched off

This one’s short and sweet.

The majority of you reading this blog live in the USA, UK, Canada or Australia. More than half of you are under 40.

That means your natural life expectancy is over 85, so statistically, your body will probably live until after 2060.
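That claim is simple arithmetic, sketched here under the stated assumptions (written in 2015, reader under 40, life expectancy over 85):

```python
# Oldest reader in the under-40 group, as of the post's 2015 date:
birth_year = 2015 - 40
life_expectancy = 85     # the text's conservative figure
print(birth_year + life_expectancy)   # → 2060
```

Anyone younger only pushes that date later, hence “after 2060”.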

By then, electronic mind enhancement will probably mean that most of your mind runs on external electronics, not in your brain, so your mind won’t die when your body does. You’ll just need to find a new body, probably an android, for those times you aren’t content being on the net. Most of us identify ourselves mainly with our minds, and would think of ourselves as still alive if our mind carries on as if nothing much has happened, which is likely.

Electronic immortality is not true immortality though. Your mind can only survive on the net as long as it is supported by the infrastructure, and that will be controlled by others. Future technology will likely be able to defend against asteroid strikes, power surges caused by solar storms and so on, so accidental death seems unlikely for hundreds of years. However, since minds supported on the net need energy to keep running and electronics to be provided and maintained, and will want to make trips into the ‘real’ world, or even live there much of the time, they will have a significant resource footprint. They will probably not be considered as valuable as other people whose bodies are still alive. In fact they might be considered as competition – for jobs, resources, space, housing, energy… They may even be seen as easy targets for future cyber-terrorists.

So, it seems quite likely, maybe even inevitable, that life limits will be imposed on the vast majority of you. At some point you will simply be switched off. There might be some prioritization, competitions, lotteries or other selection mechanisms, but only some will benefit from them.

Since you are unlikely to die when your body ceases to work, your most likely cause of death is therefore to be switched off. Sorry to break that to you.