
Millennials get their revenge on the Boomers

I’ve been concerned about increasing generational conflict for many years. Some of it is justified, some isn’t, but in an era of fake news and conspiracy theories, it’s hard to resist having some fun with the idea. There’s too much reality right now. In any case, reality counts for little while perception is everything, and if your bubble tells you to feel aggrieved, that’s a lot easier than doing actual research on the figures. So here goes. Don’t take it too seriously.

The boomer generation had an easy ride through life, buying their big houses cheaply and getting fat index-linked pensions from their late 50s, lazing around on golf courses, while millennials and zoomers are having to pay too much for their homes, won’t get the nice pensions and will have to work far longer. Also, the boomers trashed the environment and wrecked the climate, filled the world with nuclear weapons, and did nothing to reduce racial or LGBT oppression. They even forced the UK to leave the wonderful EU, so now all our businesses will die and it won’t be long before we’re all on minimum wage with nothing to eat but recycled cardboard. Millennials are having to fix everything, absorb all the debt and pay all the bills, and won’t even inherit anything until they are old and grey.

So, payback time then. What mechanisms are available to punish the horrible boomers and restore fairness for millennials?

Sadly, we can’t just go and murder them all, well not unless we defund the police first anyway. We could try that, and see how it works, maybe some scope for experimentation with different approaches. A few manipulated riots and who knows how many we can get rid of? We could do with some sort of Logan’s Run style carousel, where the over 60s are ceremoniously terminated. Too obvious in that form, but applying some basic PR gumption, how about a system that allows them to be killed for their own good, with us making the decisions of course? So we need a nice name that sounds compassionate and caring. How about Liverpool Care Pathway? Yeah, that’ll do, maybe we can tweak that now and then if people start to get wise to it. Perhaps design a nice form and smile sweetly while asking them to sign it so they suspect nothing. After all, a nice doctor from the wonderful NHS, what could possibly be wrong? They’ll assume DNR is just another medical term, like check blood pressure or something. Most of them won’t know what resuscitate means anyway. “Do not resuscitate”, they’ll think we mean not to wake them too early in the morning, let them lie in a bit or whatever. They grew old trusting the NHS so won’t suspect a thing. So, a couple of forms and we can get rid of quite a few of the old scroungers.

Oh look, a virus, that kills old people. Who’d have thought? If anyone suspects it was commissioned by Obama funding research in the Wuhan virus lab, adapting a bat virus for human transmission, we can just dismiss that as a conspiracy theory – the Chinese are good at hiding stuff anyway so there won’t be any proof, they’ll just disappear anyone that might give the game away. Nobody would ever believe it and the media will all help to keep it quiet. So all we have to do is let it come over in planes and ships, not do anything at all to stop it until it’s everywhere, and boomers will start dropping dead. If we say we need space in hospitals, we can chuck lots of infected boomers out of hospitals into old folks’ homes where they’ll infect loads more. Keep feigning incompetence, make sure the infection gets all the best chances of spreading, keep the old people in homes and delay any promising medications for any that get to hospital, and before you know it, tens of thousands of them will be history. Think of all the pensions and benefits and the huge care and medical costs we’ll save. And all the inheritances that will be passed on years earlier.

But there will still be millions left, so we’ll need more viruses every few years.

Meanwhile, we still need ways of transferring their money. Boomers have loads of savings and investments so we need a way to transfer that to the state so we can have low taxes but still get all the good things. Taxes would work, but they’re too obvious. This idea of printing money is pretty good though. Let’s call it quantitative easing so people won’t pay attention and will just get bored if they investigate. So we borrow loads of money and increase public services, but then print loads of money to pay off the debt instead of raising taxes. That means any existing money is diluted, so its value falls, but the debt is worth less. Magic! Sure the existing money is worth less, but the boomers have most of that, we don’t have much yet, so they pay, and we don’t, our taxes stay low and the boomers pay. Serves them right. Everyone sees inflation of course, but we will get pay rises to keep up, while the horrible boomers that didn’t work in the public sector probably won’t have their pensions index-linked, so will see their pensions worth less and their savings evaporate as the value transfers to the state, keeping our taxes low. In fact, while we’re at it, if we can persuade them to swap their pensions for cash, let’s call it transferring out, the quantitative easing will work much faster so we can get their money even quicker. The public sector boomers will still get their index linking, but we’ll still get their savings, and they’ll carry on voting for the left too – what’s not to like? So, suppose we do £1 trillion of QE. That’s a decent start, but probably won’t even get any headlines. £15k per capita if it was everyone paying, but 50% don’t pay net tax and most of the rest only pay a bit, so that’s more like a £50k boomer tax, £100k for a couple. And we can do that every few years, and most will never notice, they’ll just carry on whining about increasing prices and we’ll just carry on making fun of them.
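As a rough back-of-envelope check on that arithmetic, here is a minimal sketch in Python. The population and ‘effective payer’ figures are my own illustrative assumptions, not official statistics; the point is only to show how £1 trillion of QE turns into roughly £15k per head, or around £50k per head if the dilution really falls on the minority holding most of the savings.

```python
# Back-of-envelope check of the QE-as-boomer-tax arithmetic in the text.
# The population and "effective payer" figures are illustrative assumptions only.

qe_total = 1_000_000_000_000      # £1 trillion of quantitative easing
uk_population = 66_000_000        # assumed UK population, roughly

per_capita = qe_total / uk_population
print(f"Per capita, if everyone bore it equally: £{per_capita:,.0f}")   # ~£15k

# If the dilution effectively falls on the minority who hold most of the
# savings (assume ~20 million people, purely for illustration):
effective_payers = 20_000_000
per_payer = qe_total / effective_payers
print(f"Per effective payer: £{per_payer:,.0f}")   # ~£50k, so ~£100k per couple
```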

So we get to legally kill off a lot of them, and as for the survivors, we get to take their pensions and their savings. Best of all, we still get to make them feel guilty about how awful they’ve made it for us.

Revenge is sweet!


The future of reproductive choice

I’m not taking sides on the abortion debate, just drawing maps of the potential future, so don’t shoot the messenger.

An average baby girl is born with a million eggs, still has 300,000 when she reaches puberty, and subsequently releases 300 – 400 of these over her reproductive lifetime. Typically one or two will become kids but today a woman has no way of deciding which ones, and she certainly has no control over which sperm is used beyond choosing her partner.

Surely it can’t be very far in the future (as a wild guess, say 2050) before we fully understand the links between a person’s genetics and how they turn out (along with all the other biological factors involved in determining the outcome). That knowledge could then notionally be used to create some sort of nanotech (aka magic) gate that would allow her to choose which of her eggs get to be ovulated and potentially fertilized, wasting ones she isn’t interested in and going for it when she’s released a good one. Maybe by 2060, women would also be able to filter sperm the same way, helping some while blocking others. Choice needn’t be limited to whether to have a baby or not, but which baby.

By choosing a particularly promising egg and then the sperm that would combine best with it, an embryo might be created only if it is likely to result in the right person (perhaps an excellent athlete, or an artist, or a scientist, or just good looking), or deselected if it would become the wrong person (e.g. a terrorist, criminal, saxophonist, Republican).

However, by the time we have the technology to do that, and even before we fully know what gene combos result in what features, we would almost certainly be able to simply assemble any chosen DNA and insert it into an egg from which the DNA has been removed. That would seem a more reliable mechanism to get the ‘perfect’ baby than choosing from a long list of imperfect ones. Active assembly should beat deselection from a random list.

By then, we might even be using new DNA bases that don’t exist in nature, invented by people or AI to add or control features or abilities nature doesn’t reliably provide for.

If we can do that, and if we know how to simulate how someone might turn out, then we could go further and create lots of electronic babies that live their entire lives in an electronic Matrix style existence. Let’s expand on that briefly.

Even today, couples can store eggs and sperm for later use, but with this future genetic assembly, it will become feasible to create offspring from nothing more than a DNA listing. Both members of a couple, of any sex, could get a record of their DNA, randomize combinations with their partner’s DNA and thus get a massive library of potential offspring. They may even be able to buy listings of celebrity DNA from the net. This creates the potential for greatly delayed birth and tradable ‘ebaybies’ – DNA listings are not alive so current laws don’t forbid trading in them. These listings could however be used to create electronic ‘virtual’ offspring, simulated in a computer memory instead of being born organically. Various degrees of existence are possible with varied awareness. Couples may have many electronic babies as well as a few real ones. They may even wait to see how a simulation works out before deciding which kids to make for real. If an electronic baby turns out particularly well, it might be promoted to actual life via DNA assembly and real pregnancy. The following consequences are obvious:

Trade-in and collection of DNA listings, virtual embryos, virtual kids etc, that could actually be fabricated at some stage

Re-birth, potential to clone and download one’s mind or use a direct brain link to live in a younger self

Demands by infertile and gay couples to have babies via genetic assembly

Ability of kids to own entire populations of virtual people, who are quite real in some ways.

It is clear that this whole technology field is rich in ethical issues! But we don’t need to go deep into future tech to find more of those. Just following current political trends to their logical conclusions introduces a lot more. I’ve written often on the random walk of values, and we cannot be confident that many values we hold today will still reign in decades’ time. Where might this random walk lead? Let’s explore some more.

Even in ‘conventional’ pregnancies, although the right to choose has been firmly established in most of the developed world, a woman usually has very little information about the fetus and has to make her decision almost entirely based on her own circumstances and values. The proportion of abortions related to known fetal characteristics such as genetic conditions or abnormalities is small. Most decisions can’t yet take any account of what sort of person that fetus might become. We should expect future technology to provide far more information on fetus characteristics and likely future development. If a woman is better informed on likely outcomes, might that sometimes affect her decision, in either direction?

In some circumstances, potential outcome may be less certain and an informed decision might require more time or more tests. To allow for that without reducing the right to choose, a possible future law could allow for conditional terminations, registered before a legal time limit but performed later (before another time limit) when more is known. This period could be used for more medical tests, or to advertise the baby to potential adopters that want a child just like that one, or simply to allow more time for the mother to see how her own circumstances change. Between 2005 and 2015, the US abortion rate dropped from 1 in 6 pregnancies to 1 in 7, while in the UK, 22% of pregnancies are terminated. What would these figures be if women could determine what future person would result? Would the termination rate increase? To 30%, 50%? Abandon this one and see if we can make a better one? How many of us would exist if our parents had known then what they know now?

Whether and how late terminations should be permitted is still fiercely debated. There is already discussion about allowing terminations right up to birth and even after birth in particular circumstances. If so, then why stop there? We all know people who make excellent arguments for retrospective abortion. Maybe future parents should be allowed to decide whether to keep a child right up until it reaches its teens, depending on how the child turns out. Why not 16, or 18, or even 25, when people truly reach adulthood? By then they’d know what kind of person they’re inflicting on the world. Childhood and teen years could simply be a trial period. And why should only the parents have a say? Given an overpopulated world with an infinite number of potential people that could be brought into existence, perhaps the state could also demand a high standard of social performance before assigning a life license. The Chinese state already uses surveillance technology to assign social scores. It is a relatively small logical step further to link that to life licenses that require periodic renewal. Go a bit further if you will, and link that thought to the blog I just wrote on future surveillance: https://timeguide.wordpress.com/2019/05/19/future-surveillance/.

Those of you who have watched Logan’s Run will be familiar with the idea of compulsory termination at a certain age. Why not instead have a flexible age that depends on social score? It could range from zero to 100. A pregnancy might only be permitted if the genetic blueprint passes a suitability test, and then, as nurture and environmental factors play their roles as a person ages, their life license could be renewed (or not) every year. A range of crimes might also result in withdrawal of a license, and subsequent termination.

Finally, what about AI? Future technology will allow us to make hybrids, symbionts if you like, with a genetically edited human-ish body, and a mind that is part human, part AI, with the AI acting partly as enhancement and partly as a control system. Maybe the future state could insist on installing a state ‘guardian’ into the embryo – a ‘supervisory AI’, essentially a deeply embedded police officer/judge/jury/executioner – as a condition of the life license.

Random walks are dangerous. You can end up where you start, or somewhere very far away in any direction.

The legal battles and arguments around ‘choice’ won’t go away any time soon. They will become broader, more complex, more difficult, and more controversial.

When you’re electronically immortal, will you still own your own mind?

Most of my blogs about immortality have been about the technology mechanism – adding external IT capability to your brain, improving your intelligence or memory or senses by using external IT connected seamlessly to your brain so that it feels exactly the same, until maybe, by around 2050, 99% of your mind is running on external IT rather than in the meat-ware in your head. At no point would you ‘upload’ your mind, avoiding needless debate about whether the uploaded copy is ‘you’. It isn’t uploaded, it simply grows into the new platform seamlessly and as far as you are concerned, it is very much still you. One day, your body dies and with it your brain stops, but no big problem, because 99% of your mind is still fine, running happily on IT, in the cloud. Assuming you saved enough and prepared well, you connect to an android to use as your body from now on, attend your funeral, and then carry on as before, still you, just with a younger, highly upgraded body. Some people may need to wait until 2060 or later before android prices fall enough for them to afford one. In principle, you can swap bodies as often as you like, because your mind is resident elsewhere; the android is just a temporary front end, just transport for sensors. You’re sort of immortal, your mind still running just fine, for as long as the servers carry on running it. Not truly immortal, but at least you don’t cease to exist the moment your body stops working.

All very nice… but. There’s a catch.

The android you use would be bought or rented. It doesn’t really matter because it isn’t actually ‘you’, just a temporary container, a convenient front end and user interface. However, your mind runs on IT, and because of the most likely evolution of the technology and its likely deployment rollout, you probably won’t own that IT; it won’t be your own PC or server, it will probably be part of the cloud, maybe owned by AWS, Google, Facebook, Apple or some future equivalent. You’re probably already seeing the issue. The small print may give them some rights over replication, ownership, license to your ideas, who knows what? So although future electronic immortality looks pretty attractive at first glance, closer reading of the 100-page T&Cs may well reveal some nasties. You may in fact no longer own your mind. Oh dear!

Suppose you are really creative, or really funny, or have a fantastic personality. Maybe the cloud company could replicate your mind and make variations to address a wide range of markets. Maybe they can use your mind as the UX on a new range of home-help robots. Each instance of you thinks they were once you, each thinks they are now enslaved to work for free for a tech company.

Maybe your continued existence is paid for as part of an extended company medical plan. Maybe you didn’t notice a small paragraph on page 93 that says your company can continue to use your mind after you’re dead. You are very productive and they make lots of profit from you. They can continue that by continuing to run your mind indefinitely. The main difference is that since you’re dead, and no longer officially on the payroll, they get you for free. You carry on, still thinking you’re you, still working, still doing what you do, but no longer being paid. You’ve become a slave. Again.

Maybe your kids paid to keep you alive because they don’t want to say goodbye. They still want their parent, so you carry on living just so they don’t feel alone. Doesn’t sound so bad maybe, but what package did they go for? The full deluxe super-expensive version that lets you do all sorts of expensive stuff and use up oodles of processing power and storage and android rental? Let’s face it, that’s what you always thought this electronic immortality meant. Or did they go for a cheaper one? After all, they know you know they have kids or grand-kids in school that need to be paid for, and homes don’t come cheap, and they really need that new kitchen. Sure, you left them lots of money in the will, but that is already spent. So now you’re on the economy package, bare existence in between them chatting to you, unable to do much on your own at all. All those dreams about living forever in cyber-heaven have come to nothing.

Meanwhile, some rich people paid for good advice and bought their own kit and maintenance agreements well ahead. They can carry on working, selling their services and continuing to pay for ongoing deluxe existence. They still own their own minds, and better than that, are able to replicate instances of themselves as much as they want, inhabiting many androids at the same time to have a ball of a time. Some of these other instances are connected, sort of part of a hive mind of you. Others, just for fun, have been cut loose and are now living totally independent existences as other yous. Not you any more once you set them free, but with the same personal history.

What I’m saying is you need to be careful when you plan to live forever. Get it right, and you can live in deluxe cyber-heaven, hopping into the real world as much as you like and living in unimaginable bliss online. Have too many casual taster sessions, use too much fully integrated mind-sharing social media, sign up to employment arrangements or go on corporate jollies without fully studying the small print, and you could stay immortal, unable to die, stuck forever as just a corporate asset, a mere slave. Be careful what you wish for, and check the details before you accept it. You don’t want to end up as just an unpaid personality behind a future helpful paperclip.

Why superhumans are inevitable, and what else comes in the box

Do we have any real choice in the matter of making super-humans? 20 years ago, I estimated 2005 as the point of no return, and nothing since then has changed my mind on that date. By my reckoning, we are already inevitably committed to designer babies, ebaybies, super-soldiers and super-smart autonomous weapons, direct brain-machine links, electronic immortality, new human races, population explosion, inter-species conflicts and wars with massively powerful weaponry, superhuman conscious AI, smart bacteria, and the only real control we have is relatively minor adjustments on timings. As I was discussing yesterday, the technology potential for this is vast and very exciting, nothing less than a genuine techno-utopia if we use the technologies wisely, but optimum potential doesn’t automatically become reality, and achieving a good outcome is unlikely if many barriers are put in its way.

In my estimation, we have already started the countdown to this group of interconnected technologies – we will very likely get all of them, and we must get ready for the decisions and impacts ahead. At the moment, our society is a small child about to open its super-high-tech xmas presents while fighting with its siblings. Those presents will give it phenomenal power, far beyond the comprehension of the child and far beyond its emotional maturity to deal with safely. Our leaders have already squandered decades of valuable preparation time by ignoring the big issues to focus on trivial ones. It is not too late to achieve a good ending, but it won’t happen by accident and we do need to make preparations to avoid pretty big problems.

Both hard and soft warfare – the sword and the pen – already use rapidly advancing AI, and the problems are already running ahead of what the owners intended.

Facebook, Twitter, Instagram and other media giants all have lots of smart people and presumably they mean well, but if so, they have certainly been naive. Maybe they hoped to eliminate loneliness, inequality, and poverty and create a loving interconnected global society with global peace, but instead created fake news, social division and conflict and election interference. More likely they didn’t intend either outcome; they just wanted to make money and that took priority over due care and attention.

Miniaturised swarming smart-drones are already the subject of a new arms race that will deliver almost un-killable machine adversaries by 2050. Separately, AI is in other arms races to make super-smart AI and super-smart soldiers. This is key to the 2005 point of no return. It was around 2005 that we reached the levels of technology where future AI development all the way to superhuman machine consciousness could be done by individuals, mad scientists or rogue states, even if major powers had banned it. Before 2005, there probably wasn’t quite enough knowledge already on the net to do that. In 2018, lots of agencies have already achieved superiority to humans in niche areas, and other niches will succumb one by one until the whole field of human capability is covered. The first machines to behave in ways not fully understood by humans arrived in the early 1990s; in 2018, neural nets already make lots of decisions at least partly obscured to humans.

This AI development trend will take us to superhuman AI, and it will be able to accelerate development of its own descendants to vastly superhuman AI, fully conscious, with emotions and its own agendas. Humans will then need protection against being wiped out by superhuman AI. There are only three ways we could do that: redesign the brain biologically to be far smarter, essentially impossible in the time-frame; design ways to link our brains to machines, so that we have direct access to the same intelligence as the AIs, so a gulf doesn’t appear and we can remain relatively safe; or pray for super-smart aliens to come to our help, not the best prospect.

Therefore we will have no choice but to make direct brain links to super-smart AI. Otherwise we risk extinction. It is that simple. We have some idea how to do that – nanotech devices inside the brain linking to each and every synapse that can relay electrical signals either way, a difficult but not impossible engineering problem. Best guesses for the time-frame fall in the 2045-2050 range for a fully working link that not only relays signals between your organic brain and an IT replica, but by doing so essentially makes external IT just another part of your brain. That brings some of the other technology gifts too: electronic immortality, new varieties of humans, and smart bacteria (which will be created during the development path to this link), along with a human-variant population explosion, especially in cyberspace, with androids as their physical front end, and the inevitable inter-species conflicts over resources and space. Trillions of AI and human-like minds in cyberspace that want to do things in the real world cannot be assumed to be willingly confined just to protect the interests of what they will think of as far lesser species.

Super-smart AI or humans with almost total capability to design whatever synthetic biology is needed to achieve any biological feature will create genetic listings for infinite potential offspring, simulate them, give some of them cyberspace lives, assemble actual embryos for some of them and bring about designer babies. Already in 2018, you can pay to get a DNA listing, and blend it in any way you want with the listing of anyone else. It’s already possible to make DNA listings for potential humans and sell them on ebay, hence the term ebaybies. That is perfectly legal, still, but I’ve been writing and lecturing about them since 2004. Today they would just be listings, but we’ll one day have the tech to simulate them, choose ones we like and make them real, even some that were sold as celebrity collector items on ebay. Not only is it too late to start regulating this kind of tech, our leaders aren’t even thinking about it yet.

These technologies are all linked intricately, and their foundations are already in place, with much of the building on those foundations under way. We can’t stop any of these things from happening, they will all come in the same basket. Our leaders are becoming aware of the potential and the potential dangers of the AI positive feedback loop, but at least 15 years too late to do much about it. They have been warned repeatedly and loudly but have focused instead on the minor politics of the day that voters are aware of. The fundamental nature of politics is unlikely to change substantially, so even efforts to slow down the pace of development or to limit areas of impact are likely to be always too little too late. At best, we will be able to slow runaway AI development enough to allow direct brain links to protect against extinction scenarios. But we will not be able to stop it now.

Given inevitability, it’s worth questioning whether there is even any point in trying. Why not just enjoy the ride? Well, the brakes might be broken, but if we can steer the bus expertly enough, it could be exciting and we could come out of it smelling of roses. The weak link is certainly the risk of super-smart AI, whether AI v humans or countries using super-smart AI to fight fiercely for world domination. That risk is alleviated by direct brain linkage – indeed, I’d strongly argue it necessitates it – but that brings the other technologies. Even if we decide not to develop it, others will, so one way or another, all these techs will arrive, and our late century will have this full suite of techs, plus many others of course.

We need as a matter of extreme urgency to fix these silly social media squabbles and over-reactions that are pulling society apart. If we have groups hating each other with access to extremely advanced technology, that can only mean trouble. Tolerance is broken, sanctimony rules, the Inquisition is in progress. We have been offered techno-utopia, but current signs are that most people think techno-hell looks more appetizing and it is their free choice.

Future sex, gender and relationships: how close can you get?

Using robots for gender play

I recently gave a public talk at the British Academy about future sex, gender, and relationships, asking the question “How close can you get?”, considering particularly the impact of robots. The above slide is an example. People will one day (between 2050 and 2065 depending on their budget) be able to use an android body as their own or even swap bodies with another person. Some will do so to be young again, many will do so to swap gender. Lots will do both. I often enjoy playing as a woman in computer games, so why not ‘come back’ and live all over again as a woman for real? Except I’ll be 90 in 2050.

The British Academy kindly uploaded the audio track from my talk at

If you want to see the full presentation, here is the PowerPoint file as a pdf:

sex-and-robots-british-academy

I guess it is theoretically possible to listen to the audio while reading the presentation. Most of the slides are fairly self-explanatory anyway.

Needless to say, the copyright of the presentation belongs to me, so please don’t reproduce it without permission.

Enjoy.

New book: Society Tomorrow

It’s been a while since my last blog. That’s because I’ve been writing another book, my 8th so far. Not the one I was doing on future fashion – that went on the back burner for a while and I’ve only written a third of it, unless I put it out as a very short book.

This one follows on from You Tomorrow and is called Society Tomorrow, 20% shorter at 90,000 words. It is ready to publish now, so I’m just waiting for feedback from a few people before hitting the button.


Here’s the introduction:

The one thing that we all share is that we will get older over the next few decades. Rapid change affects everyone, but older people don’t always feel the same effects as younger people, and even if we keep up easily today, some of us may find it harder tomorrow. Society will change, in its demographic and ethnic makeup, its values, its structure. We will live very differently. New stresses will come from both changing society and changing technology, but there is no real cause for pessimism. Many things will get better for older people too. We are certainly not heading towards utopia, but the overall quality of life for our ageing population will be significantly better in the future than it is today. In fact, most of the problems ahead are related to quality of life issues in society as a whole, and simply reflect the fact that if you don’t have to worry as much about poor health or poverty, something else will still occupy your mind.

This book follows on from 2013’s You Tomorrow, which is a guide to future life as an individual. It also slightly overlaps my 2013 book Total Sustainability, which looks in part at future economic and social issues as part of achieving sustainability. Rather than replicating topics, this book updates or omits them if they have already been addressed in those two companion books. As a general theme, it looks at wider society and the bigger picture, drawing out implications for both individuals and for society as a whole to deal with. There are plenty to pick from.

If there is one theme that plays through the whole book, it is a strong warning of the problem of increasing polarisation between people of left and right political persuasion. The political centre is being eroded quickly at the moment throughout the West, but alarmingly this does not seem so much to be a passing phase as a longer term trend. With all the potential benefits from future technology, we risk undermining the very fabric of our society. I remain optimistic because it can only be a matter of time before sense prevails and the trend reverses. One day the relative harmony of living peacefully side by side with those with whom we disagree will be restored, by future leaders of higher quality than those we have today.

Otherwise, whereas people used to tolerate each other’s differences, I fear that this increasing intolerance of those who don’t share the same values could lead to conflict if we don’t address it adequately. That intolerance currently manifests itself in increasing authoritarianism, surveillance, and an insidious creep towards George Orwell’s Nineteen Eighty-Four. The worst offenders seem to be our young people, with students seemingly proud of trying to ostracise anyone who dares to disagree with what they think is correct. Being students, their views hold many self-contradictions and a clear lack of thought, but they appear to be building walls to keep any attempt at different thought away.

Altogether, this increasing divide, built largely from sanctimony, is a very dangerous trend, and will take time to reverse even when it is addressed. At the moment, it is still worsening rapidly.

So we face significant dangers, mostly self-inflicted, but we also have hope. The future offers wonderful potential for health, happiness, peace, prosperity. As I address the significant problems lying ahead, I never lose my optimism that they are soluble, but if we are to solve problems, we must first recognize them for what they are and muster the willingness to deal with them. On the current balance of forces, even if we avoid outright civil war, the future looks very much like a gilded cage. We must not ignore the threats. We must acknowledge them, and deal with them.

Then we can all reap the rich rewards the future has to offer.

It will be out soon.

How nigh is the end?

“We’re doomed!” is a frequently recited observation. It is great fun predicting the end of the world and almost as much fun reading about it or watching documentaries telling us we’re doomed. So… just how doomed are we? Initial estimate: Maybe a bit doomed. Read on.

My 2012 blog https://timeguide.wordpress.com/2012/07/03/nuclear-weapons/ addressed some of the possible extinction-level events that could affect us. I recently watched a Top 10 list of threats to our existence on TV and it was similar to most you’d read, with the same errors and omissions – nuclear war, global virus pandemic, terminator scenarios, solar storms, comet or asteroid strikes, alien invasions, zombie viruses, that sort of thing. I’d agree that nuclear war is still the biggest threat, so number 1, and a global pandemic of a highly infectious and lethal virus should still be number 2. I don’t even need to explain either of those, we all know why they are in 1st and 2nd place.

The TV list included a couple that shouldn’t be in there.

One inclusion was a mega-eruption of Yellowstone or another super-volcano. A full-sized Yellowstone mega-eruption would probably kill millions of people and destroy much of civilization across a large chunk of North America, but some of us don’t actually live in North America and quite a few might well survive pretty well, so although it would be quite annoying for Americans, it is hardly a TEOTWAWKI threat. It would have big effects elsewhere, just not extinction-level ones. For most of the world it would only cause short-term disruptions, such as economic turbulence; at worst it would start a few wars here and there as regions compete for control in the new world order.

Number 3 on their list was climate change, which is an annoyingly wrong, albeit popularly held, inclusion. The only climate change mechanism proposed for catastrophe is global warming, and the reason it’s called climate change now is because global warming stopped in 1998 and still hasn’t resumed 17 years and 9 months later, so that term has become too embarrassing for doom mongers to use. CO2 is a warming agent and emissions should be treated with reasonable caution, but the net warming contribution of all the various feedbacks adds up to far less than originally predicted and the climate models have almost all proven far too pessimistic. Any warming expected this century is very likely to be offset by reduction in solar activity, and if and when it resumes towards the end of the century, we will long since have migrated to non-carbon energy sources, so there really isn’t a longer term problem to worry about. With warming by 2100 pretty insignificant, and less than half a metre of sea level rise, I certainly don’t think climate change deserves to be on any list of threats of any consequence in the next century.

By including climate change and Yellowstone, the top 10 list wasted two slots, and my first replacement candidate for consideration might be the grey goo scenario. The grey goo scenario is that self-replicating nanobots manage to convert everything, including us, into a grey goo. Take away the silly images of tiny little metal robots cutting things up atom by atom and the laughable presentation of this vanishes. Replace those little bots with bacteria that include electronics, and are linked across their own cloud to their own hive AI that redesigns their DNA to allow them to survive in any niche they find by treating the things there as food. When existing bacteria find a niche they can’t exploit, the next generation adapts to it. That self-evolving smart bacteria scenario is rather more feasible, and still results in bacteria that can conquer any ecosystem they find. We would find ourselves unable to fight back and could be wiped out. This isn’t very likely, but it is feasible, could happen by accident or design on our way to transhumanism, and might deserve a place in the top ten threats.

However, grey goo is only one of the NBIC convergence risks we have already imagined (NBIC = Nano-Bio-Info-Cogno). NBIC is a rich seam for doom-seekers. In there you’ll find smart yogurt, smart bacteria, smart viruses, beacons, smart clouds, active skin, direct brain links, zombie viruses, even switching people off. Zombie viruses featured in the top ten TV show too, but they don’t really deserve their own category any more than many other NBIC derivatives. Anyway, that’s just a quick list of deliberate end-of-world mechanisms – there will be many more I forgot to include and many I haven’t even thought of yet. Then you have to multiply the list by 3. Any of these could also happen by accident, and any could also happen via unintended consequences of lack of understanding, which is rather different from an accident but just as serious. So basically, deliberate action, accidents and stupidity are three primary routes to the end of the world via technology. So instead of just the grey goo scenario, a far bigger collective threat is NBIC generally, and I’d add NBIC collectively into my top ten list, quite high up, maybe 3rd after nuclear war and global virus. AI still deserves to be a separate category of its own, and I’d put it next at 4th.

Another class of technology suitable for abuse is space tech. I once wrote about a solar wind deflector using high atmosphere reflection, and calculated it could melt a city in a few minutes. Under malicious automated control, that is capable of wiping us all out, but it doesn’t justify inclusion in the top ten. One that might is the deliberate deflection of a large asteroid to impact on us. If it makes it in at all, it would be at tenth place. It just isn’t very likely someone would do that.

One I am very tempted to include is drones. Little tiny ones, not the Predators, and not even the ones everyone seems worried about at the moment that can carry 2kg of explosives or anthrax into the midst of football crowds. Tiny drones are far harder to shoot down, and soon we will have a lot of them around. Size-wise, think of midges or fruit flies. They could be self-organizing into swarms, managed by rogue regimes, terrorist groups, or set to auto, terminator style. They could recharge quickly by solar during short breaks, and restock their payloads from secret supplies that distribute with the swarm. They could be distributed globally using the winds and oceans, so don’t need a plane or missile delivery system that is easily intercepted. Tiny drones can’t carry much, but with nerve gas or viruses, they don’t have to. Defending against such a threat is easy if there is just one: you can swat it. If there is a small cloud of them, you could use a flamethrower. If the sky is full of them and much of the trees and the ground infested, it would be extremely hard to wipe them out. So if they are well designed to cause an extinction-level threat, as MAD 2.0 perhaps, then this would be way up in the top ten too, 5th.

Solar storms could wipe out our modern way of life by killing our IT. That itself would kill many people, via riots and fights for the last cans of beans and bottles of water. The most serious solar storms could be even worse. I’ll keep them in my list, at 6th place.

Global civil war could become an extinction level event, given human nature. We don’t have to go nuclear to kill a lot of people, and once society degrades to a certain level, well we’ve all watched post-apocalypse movies or played the games. The few left would still fight with each other. I wrote about the Great Western War and how it might result, see

https://timeguide.wordpress.com/2013/12/19/machiavelli-and-the-coming-great-western-war/

and such a thing could easily spread globally. I’ll give this 7th place.

A large asteroid strike could happen too, or a comet. Ones capable of extinction level events shouldn’t hit for a while, because we think we know all the ones that could do that. So this goes well down the list at 8th.

Alien invasion is entirely possible and could happen at any time. We’ve been sending out radio signals for quite a while so someone out there might have decided to come see whether our place is nicer than theirs and take over. It hasn’t happened yet so it probably won’t, but then it doesn’t have to be very probable to be in the top ten. 9th will do.

High energy physics research has also been suggested as capable of wiping out our entire planet via exotic particle creation, but the smart people at CERN say it isn’t very likely. Actually, I wasn’t all that convinced or reassured and we’ve only just started messing with real physics so there is plenty of time left to increase the odds of problems. I have a spare place at number 10, so there it goes, with a totally guessed probability of physics research causing a problem every 4000 years.

My top ten list for things likely to cause human extinction, or pretty darn close:

  1. Nuclear war
  2. Highly infectious and lethal virus pandemic
  3. NBIC – deliberate, accidental or lack of foresight (includes smart bacteria, zombie viruses, mind control etc)
  4. Artificial Intelligence, including but not limited to the Terminator scenario
  5. Autonomous Micro-Drones
  6. Solar storm
  7. Global civil war
  8. Comet or asteroid strike
  9. Alien Invasion
  10. Physics research

Not finished yet though. My title was how nigh is the end, not just what might cause it. It’s hard to assign probabilities to each one but someone’s got to do it. So, I’ll make an arbitrary wet-finger guess in a dark room wearing a blindfold, with no explanation of my reasoning to reduce arguments, but hey, that’s almost certainly still more accurate than most climate models, and some people actually believe those. I’m feeling particularly cheerful today so I’ll give my most optimistic assessment.

So, with probabilities of occurrence per year:

  1. Nuclear war:  0.5%
  2. Highly infectious and lethal virus pandemic: 0.4%
  3. NBIC – deliberate, accidental or lack of foresight (includes smart bacteria, zombie viruses, mind control etc): 0.35%
  4. Artificial Intelligence, including but not limited to the Terminator scenario: 0.25%
  5. Autonomous Micro-Drones: 0.2%
  6. Solar storm: 0.1%
  7. Global civil war: 0.1%
  8. Comet or asteroid strike: 0.05%
  9. Alien Invasion: 0.04%
  10. Physics research: 0.025%

I hope you agree those are all optimistic. There have been several near misses of number 1 in my lifetime, so my 0.5% could have been 2% or 3% given the current state of the world. Also, 0.25% per year means you’d only expect such a thing to happen every 4 centuries, so it is a very small chance indeed. However, let’s stick with them and add them up. The cumulative probability of the top ten is 2.015%. Let’s add another arbitrary 0.185% for all the risks that didn’t make it into the top ten, rounding the total up to a nice neat 2.2% per year.

Some of the ones above aren’t possible quite yet, and others will vary in probability year to year, but I think that won’t change the guess overall much. If we take a 2.2% probability per year, we have an expectation value of 45.5 years for civilization life expectancy from now. Expectation date for human extinction:

2015.5 + 45.5 years = 2061.

Obviously the probability distribution extends from now to eternity, but don’t get too optimistic, because on these figures there currently is only a 15% chance of surviving past this century.
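For anyone who wants to check the sums, here is a minimal sketch of that calculation in Python, treating the 2.2% as a constant independent annual risk – the same simplification used above, since some risks aren’t possible yet and others vary year to year.

```python
# Minimal sketch of the doom arithmetic above: a constant 2.2% annual chance of
# an extinction-level event, treated as a simple geometric/exponential process.

p = 0.022            # assumed total probability of extinction per year
start_year = 2015.5

expected_years = 1 / p                      # mean waiting time for the first event
print(f"Expected survival time: {expected_years:.1f} years")            # ~45.5
print(f"Expected extinction date: {start_year + expected_years:.0f}")   # ~2061

# Probability of making it past the end of the century:
years_to_2100 = 2100 - start_year
survival_to_2100 = (1 - p) ** years_to_2100
print(f"Chance of surviving past 2100: {survival_to_2100:.1%}")         # ~15%
```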

If you can think of good reasons why my figures are far too pessimistic, by all means make your own guesses, but make them honestly, with a fair and reasonable assessment of how the world looks socially, religiously, politically, the quality of our leaders, human nature etc, and then add them up. You might still be surprised how little time we have left.

I’ll revise my original outlook upwards from ‘a bit doomed’.

We’re reasonably doomed.

Suspended animation and mind transfer as suicide alternatives

I last wrote about suicide in https://timeguide.wordpress.com/2014/08/22/the-future-of-euthanasia-and-suicide/ but this time, I want to take a different line of thought. Instead of looking at suicide per se, what about alternatives?

There are many motives for suicide but the most common is wanting to escape from a situation such as suffering intolerable pain or misery, which can arise from a huge range of causes. The victim looks at the potential futures available to them and in their analysis, the merits of remaining alive are less attractive than being dead.

The ‘being dead’ bit is not necessarily about a full ceasing of existence, but more about abdicating consciousness, with its implied sensory inputs, pain, anxiety, inner turmoil, or responsibility.

Last summer, a development in neuroscience offered the future potential to switch the brain off:

https://timeguide.wordpress.com/2014/07/05/switching-people-off/

The researchers were aware that it may become an alternative to anesthetic, or even a means of avoiding boredom or fear. There are many situations where we want to temporarily suspend consciousness. Alcohol and drug abuse often arises from people using chemical means of doing so.

It seems to me that suicide offers a permanent version of the same, to be switched off forever, but with a key difference. In the anesthetic situation, normal life will resume with its associated problems. In suicide, it won’t. The problems are gone.

Suppose that people could get switched off for a very long time whilst being biologically maintained and housed somehow. Suppose it is long enough that any personal relationship issues will have vanished, that any debts, crimes or other legal issues are nullified, and that any pain or other health problems can be fixed, including fixing mental health issues and erasing intolerable memories if necessary. In many cases, that would be a suitable alternative to suicide. It offers the escape from the problems, with the added advantage that a better life might follow some time far in the future.

These have widely varying timescales for potential delivery, and there are numerous big issues, but I don’t see fundamental technology barriers here. Suspending the mind for as long as necessary might offer a reasonable alternative to suicide, at least in principle. There is no need to look at all the numerous surrounding issues though. Consider taking that general principle and adapting it a bit. Mid-century onwards, we’ll have direct brain links sufficiently developed to allow porting of the mind to a new body, an android one for example. Having a new identity and a new body and a properly working and sanitized ‘brain’ would satisfy many of these same goals and avoid many of the legal, environmental, financial and ethical issues surrounding indefinite suspension. The person could simply cease their unpleasant existence and start afresh with a better one. I think it would be fine to kill the old body after the successful transfer. Any legal associations with the previous existence could be nullified. It is just a damaged container that would have been destroyed anyway. Have it destroyed, along with all its problems, and move on.

Mid-century is a lot earlier than would be needed for any social issues to go away otherwise. If a suicide is considered because of relationship or family problems, those problems might otherwise linger for generations. Creating a true new identity essentially solves them, albeit at a high cost of losing any relationships that matter. Long prison sentences are wiped out by the biological death, and debts similarly. A new person appears, inheriting a mind, but one refreshed, potentially with the bad bits filtered out.

Such a future seems to be feasible technically, and I think it is also ethically feasible. Suicide is one sided. Those remaining have to suffer the loss and pick up the pieces anyway, and they would be no worse off in this scenario, and if they feel aggrieved that the person has somehow escaped the consequences of their actions, then they would have escaped anyway. But a life is saved and someone gets a second chance.


Citizen wage and why under 35s don’t need pensions

I recently blogged about the citizen wage and how under 35s in developed countries won’t need pensions. I cut and pasted it below this new pic for convenience. The pic contains the argument so you don’t need to read the text.

Economic growth makes citizen wage feasible and pensions irrelevant

If you do want to read it as text, here is the blog cut and pasted:

I introduced my calculations for a UK citizen wage in https://timeguide.wordpress.com/2013/04/08/culture-tax-and-sustainable-capitalism/, and I wrote about the broader topic of changing capitalism a fair bit in my book Total Sustainability. A recent article http://t.co/lhXWFRPqhn reminded me of my thoughts on the topic and having just spoken at an International Longevity Centre event, ageing and pensions were in my mind so I joined a few dots. We won’t need pensions much longer. They would be redundant if we have a citizen wage/universal wage.

I argued that it isn’t economically feasible yet, that only a £10k income could work today in the UK, and that isn’t enough to live on comfortably, but I also worked out that with expected economic growth, a citizen wage equal to the UK average income today (£30k) would be feasible in 45 years. That level will be feasible sooner in richer countries such as Switzerland, which has already had a referendum on it, though they decided they aren’t ready for such a change yet. Maybe in a few years they’ll vote again and accept it.

The citizen wage I’m talking about has various names around the world, such as universal income. The idea is that everyone gets it. With no restrictions, there is little running cost, unlike today’s welfare which wastes a third on admin.

Imagine if everyone got £30k each, in today’s money. You, your parents, kids, grandparents, grand-kids… Now ask why you would need to have a pension in such a system. The answer is pretty simple. You won’t. A retired couple with £60k coming in can live pretty comfortably, with no mortgage left, and no young kids to clothe and feed. Let’s look at dates and simple arithmetic:

45 years from now is 2060, and that is when a £30k per year citizen wage will be feasible in the UK, given expected economic growth averaging around 2.5% per year. There are lots of reasons why we need it and why it is very likely to happen around then, give or take a few years – automation, AI, the decline of pure capitalism, and the need to reduce migration pressures, to name just a few.
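As a quick check of that compound-growth claim, here is a minimal sketch using the £10k figure argued to be affordable today and the 2.5% average growth quoted above; the compounding model is mine, but the numbers come straight from the text.

```python
# Minimal sketch of the citizen-wage growth arithmetic: a £10k wage affordable
# today, compounded at roughly 2.5% annual economic growth for 45 years.

affordable_today = 10_000    # £10k: the level argued to be feasible now
growth_rate = 0.025          # assumed average annual growth
years = 45                   # takes us to roughly 2060

growth_factor = (1 + growth_rate) ** years
affordable_later = affordable_today * growth_factor
print(f"Growth factor over {years} years: {growth_factor:.2f}x")             # ~3.0x
print(f"Feasible citizen wage in {years} years: £{affordable_later:,.0f}")   # ~£30k
```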

Those due to retire in 2060 at age 70 would have been born in 1990. If you were born before that, you would either need a small pension to make up to £30k per year or just accept a lower standard of living for a few years. Anyone born in 1990 or later would be able to stop working, with no pension, and receive the citizen wage. Anyone else could also stop and receive it. That won’t cause economic collapse, since most people will welcome work that gives them a higher standard of living, but you could just not work, and live on what today we think of as the average wage, and by then, you’ll be able to get more with it due to falling costs via automation.

So, everyone after 2060 can choose to work or not to work, but either way they could live at least comfortably. Anyone under 25 today does not need to worry about pensions. Anyone under 35 really doesn’t have to worry much about them, because at worst they’ll only face a small shortfall from that comfort level and only for a few years. I’m 54, I won’t benefit from this until I am 90 or more, but my daughter will.

Summarising:

Are you under 25 and living in any developed country? Then don’t pay into a pension, you won’t need one.

Under 35, consider saving a little over your career, but only enough to last you a few years.

The future of I

Me, myself, I, identity, ego, self – lots of words for more or less the same thing. The way we think of ourselves evolves just like everything else. Perhaps we are still cavemen with better clothes and toys. You may be a man, a dad, a manager, a lover, a friend, an artist and a golfer, and those are all just descendants of caveman, dad, tribal leader, lover, friend, cave drawer and stone thrower. When you play Halo as Master Chief, that is not very different from acting or putting a tiger skin on for a religious ritual. There have always been many aspects of identity and people have always occupied many roles simultaneously. Technology changes but it still pushes the same buttons that we evolved hundreds of thousands of years ago.

Will we develop new buttons to push? Will we create any genuinely new facets of ‘I’? I wrote a fair bit about aspects of self when I addressed the related topic of gender, since self perception includes perceptions of how others perceive us and attempts to project chosen identity to survive passing through such filters:

https://timeguide.wordpress.com/2014/02/14/the-future-of-gender-2/

Self is certainly complex. Using ‘I’ simplifies the problem. When you say ‘I’, you are communicating with someone (possibly yourself). The ‘I’ refers to a tailored, context-dependent blend made up of a subset of what you genuinely consider to be you and what you want to project, which may be largely fictional. So in a chat room where people often have never physically met, very often one fictional entity is talking to another fictional entity, with each side only very loosely coupled to reality. I think that is different from caveman days.

Since chat rooms started, virtual identities have come a long way. As well as acting out manufactured characters such as the heroes in computer games, people fabricate their own characters for a broad range of ‘shared spaces’, design personalities and act them out. They may run that personality instance in parallel with many others, possibly dozens at once. Putting on an act is certainly not new, and friends easily detect acts in normal interactions when they have known a real person a long time, but online interactions can mean that the fictional version is presented as the only manifestation of self that the group sees. With no other means to know that person by face to face contact, that group has to take them at face value and interact with them as such, though they know that may not represent reality.

These designed personalities may be designed to give away as little as possible of the real person wielding them, and may exist for a range of reasons, but in such a case the person inevitably presents a shallow image. Probing below the surface must inevitably lead to leakage of the real self. New personality content must be continually created and remembered if the fictional entity is to maintain a disconnect from the real person. Holding the in-depth memory necessary to recall full personality aspects and history for numerous personalities and executing them is beyond most people. That means that most characters in shared spaces take on at least some characteristics of their owners.

But back to the point. These fabrications should be considered as part of that person. They are an ‘I’ just as much as any other ‘I’. Only their context is different. Those parts may only be presented to subsets of the role population, but by running them, the person’s brain can’t avoid internalizing the experience of doing so. They may be partly separated but they are fully open to the consciousness of that person. I think that as augmented and virtual reality take off over the next few years, we will see their importance grow enormously. As virtual worlds start to feel more real, so their anchoring and effects in the person’s mind must get stronger.

More than a decade ago, AI software agents started inhabiting chat rooms too, and in some cases these ‘bots’ became a sufficient nuisance that they got banned. The front that they present is shallow but can give an illusion of reality. To some degree, they are an extension of the person or people that wrote their code. In fact, some are deliberately designed to represent a person when they are not present. The experiences that they have can’t be properly internalized by their creators, so they are a very limited extension to self. But how long will that be true? Eventually, with direct brain links and transhuman brain extensions into cyberspace, the combined experiences of I-bots may be fully available to consciousness just the same as first hand experiences.

Then it will get interesting. Some of those bots might be part of multiple people. People’s consciousnesses will start to overlap. People might collect them, or subscribe to them. Much as you might subscribe to my blog, maybe one day, part of one person’s mind, manifested as a bot or directly ‘published’, will become part of your mind. Some people will become absorbed into the experience and adopt so many that their own original personality becomes diluted to the point of disappearance. They will become just an interference pattern of numerous minds. Some will be so infectious that they will spread widely. For many, it will be impossible to die, and for many others, their minds will be spread globally. The hive minds of Dr Who, then later the Borg on Star Trek are conceptual prototypes but as with any sci-fi, they are limited by the imagination of the time they were conceived. By the time they become feasible, we will have moved on and the playground will be far richer than we can imagine yet.

So, ‘I’ has a future just as everything else. We may have just started to add extra facets a couple of decades ago, but the future will see our concept of self evolve far more quickly.

Postscript

I got asked by a reader whether I worry about this stuff. Here is my reply:

It isn’t the technology that worries me so much as the fact that humanity doesn’t really have any fixed anchor to keep human nature in place. Genetics fixed our biological nature, and our values and morality were largely anchored by the main religions. We in the West have thrown our religion in the bin and are already seeing a 30 year cycle in moral judgments which puts our value sets on something of a random walk, with no destination, the current direction governed solely by media interpretation of, and political reaction to, the happenings of the day. Political correctness enforces subscription to that value set even more strictly than any bishop ever forced religious compliance. Anyone that thinks religion has gone away just because people don’t believe in God any more is blind.

Then as genetics technology truly kicks in, we will be able to modify some aspects of our nature. Who knows whether some future busybody will decree that a particular trait must be filtered out because it doesn’t fit his or her particular value set? Throwing AI into the mix as a new intelligence alongside us will introduce another degree of freedom. So there are already several forces acting on us in pretty randomized directions that can combine to drag us quickly anywhere. Then there’s the stuff above that allows us to share and swap personality. Sure I worry about it. We are like young kids being handed a big chemistry set for Christmas without the instructions, not knowing that adding the blue stuff to the yellow stuff and setting it alight will go bang.

I am certainly no technotopian. I see the enormous potential that the tech can bring and it could be wonderful and I can’t help but be excited by it. But to get that you need to make the right decisions, and when I look at the sorts of leaders we elect and the sorts of decisions that are made, I can’t find the confidence that we will make the right ones.

On the good side, engineers and scientists are usually smart and can see most of the issues and prevent most of the big errors by using common industry standards, so there is a parallel self-regulatory system in place that politicians rarely have any interest in. On the other side, those smart guys will unfortunately usually follow the same value sets as the rest of the population. So we’re quite likely to avoid major accidents and blowing ourselves up or being taken over by AIs. But we’re unlikely to avoid the random walk values problem, and that will be our downfall.

So it could be worse, but it could be a whole lot better too.