Category Archives: death

New book: Society Tomorrow

It’s been a while since my last blog. That’s because I’ve been writing another book, my eighth so far. It isn’t the one I was writing on future fashion; that went on the back burner for a while, and I’ve only written a third of it, unless I decide to put it out as a very short book.

This one follows on from You Tomorrow and is called Society Tomorrow. It is 20% shorter, at 90,000 words, and is ready to publish now, so I’m just waiting for feedback from a few people before hitting the button.


Here’s the introduction:

The one thing that we all share is that we will get older over the next few decades. Rapid change affects everyone, but older people don’t always feel the same effects as younger people, and even if we keep up easily today, some of us may find it harder tomorrow. Society will change, in its demographic and ethnic makeup, its values, its structure. We will live very differently. New stresses will come from both changing society and changing technology, but there is no real cause for pessimism. Many things will get better for older people too. We are certainly not heading towards utopia, but the overall quality of life for our ageing population will be significantly better in the future than it is today. In fact, most of the problems ahead are related to quality of life issues in society as a whole, and simply reflect the fact that if you don’t have to worry as much about poor health or poverty, something else will still occupy your mind.

This book follows on from 2013’s You Tomorrow, which is a guide to future life as an individual. It also overlaps slightly with my 2013 book Total Sustainability, which looks in part at future economic and social issues in the context of achieving sustainability. Rather than replicating topics, this book updates or omits them if they have already been addressed in those two companion books. As a general theme, it looks at wider society and the bigger picture, drawing out implications for both individuals and for society as a whole to deal with. There are plenty to pick from.

If there is one theme that plays through the whole book, it is a strong warning of the problem of increasing polarisation between people of left and right political persuasion. The political centre is being eroded quickly at the moment throughout the West, but alarmingly this does not seem so much to be a passing phase as a longer term trend. With all the potential benefits from future technology, we risk undermining the very fabric of our society. I remain optimistic because it can only be a matter of time before sense prevails and the trend reverses. One day the relative harmony of living peacefully side by side with those with whom we disagree will be restored, by future leaders of higher quality than those we have today.

Otherwise, whereas people used to tolerate each other’s differences, I fear that this increasing intolerance of those who don’t share the same values could lead to conflict if we don’t address it adequately. That intolerance currently manifests itself in increasing authoritarianism, surveillance, and an insidious creep towards George Orwell’s Nineteen Eighty-Four. The worst offenders seem to be our young people, with students seemingly proud of trying to ostracise anyone who dares to disagree with what they think is correct. Their views often contain self-contradictions and a clear lack of thought, yet they appear to be building walls to keep any different thinking out.

Altogether, this increasing divide, built largely from sanctimony, is a very dangerous trend, and will take time to reverse even when it is addressed. At the moment, it is still worsening rapidly.

So we face significant dangers, mostly self-inflicted, but we also have hope. The future offers wonderful potential for health, happiness, peace, prosperity. As I address the significant problems lying ahead, I never lose my optimism that they are soluble, but if we are to solve problems, we must first recognize them for what they are and muster the willingness to deal with them. On the current balance of forces, even if we avoid outright civil war, the future looks very much like a gilded cage. We must not ignore the threats. We must acknowledge them, and deal with them.

Then we can all reap the rich rewards the future has to offer.

It will be out soon.

2016: The Dark Side

Bloomberg has published its ‘Pessimists’ Guide to the World in 2016’, by Flavia Krause-Jackson, Mira Rojanasakul and John Fraher.

Excellent stuff. A healthy dose of realism to counter the spin and gloss and outright refusals to notice things that don’t fit the agenda that we so often expect from today’s media. Their entries deserve some comment, and I’ll add a few more. I’m good at pessimism.

Their first entry is oil reaching $100 a barrel as ISIS blows up oil fields. Certainly possible, though they also report the existing oil glut.

Just because the second option is the more likely does not invalidate the first as a possible scenario, so that entry is fine.

An EU referendum in June is their 2nd entry. Well, that will only happen if Cameron gets his way and the EU agrees to enough change to make the referendum more likely to end in a Yes. If there is any hint of a No, it will be postponed as far as possible to give politics time to turn the right way. Let’s face facts: when Ukraine held its referendum, the entire process was completed within two weeks. If the Conservatives genuinely wanted a referendum on Europe, it would have happened years ago. The Conservatives frequently promise to do the Conservative thing very loudly, and then quietly do the Labour thing and hope nobody notices. Osborne promised to cut the deficit but, faced with the slightest objections from the media, performed a textbook U-turn. That followed numerous U-turns on bin collections, speed cameras, wheel clamping, the environment, surveillance, immigration, pensions, fixing the NHS… I therefore think Cameron will spin the EU talks as far as possible to pretend that tiny promises to think about the possibility of reviewing policies are the same as winning guarantees of major changes. Nevertheless, an ongoing immigration flood and assorted Islamist problems are increasing the No vote rapidly, so I think it far more likely that the referendum will be postponed.

The 3rd is banks being hit by a massive cyber attack. Very possible, even quite likely.

4th, EU crumbles under immigration fears. Very likely indeed. Schengen will be suspended soon and increasing Islamist violence will create increasing hostility to the migrant flow. Forcing countries to accept a proportion of the pain caused by Merkel’s naivety will increase strains between countries to breaking point. The British referendum on staying or leaving adds an escape route that will be very tempting for politicians who want to stay in power.

Their 5th is China’s economy failing and its military rising. Again, quite feasible. Their economy has suffered a slowdown, and their military looks enthusiastically at Western decline under left-wing US and European leadership, strained by Middle Eastern and Russian tensions. There has never been a better time for their military to exploit weaknesses.

Their 6th is Israel attacking Iranian nuclear facilities. Well, with the US and Europe rapidly turning antisemitic and already very anti-Israel, Israel has pretty much been left on its own, surrounded by countries that want it eliminated. If anything, I’m surprised it has been so patient.

Their 7th is Putin sidelining America. Is that not already history?

Their 8th is climate change heating up. My first significant disagreement. With El Niño, it will be a warm year, but evidence is increasing that the overall trend for the next few decades will be cooling, due to various natural cycles. Man-made warming has been greatly exaggerated, and people are losing interest in predictions of catastrophe when they can see plainly that most of the alleged change is just alterations to data. Yes, next year will be warm, but thanks to far too many cries of wolf, apart from meta-religious warmists, few people still believe things will get anywhere near as bad as the doom-mongers suggest. They will notice that the Paris agreement, if followed, would trash Western economies and greatly increase their bills, even though it can’t make any significant difference to global CO2 emissions. So, although there will be catastrophe-prediction headlines next year making much of higher temperatures due to El Niño, the overall trend will be that people won’t be very interested any more.

Their 9th is Latin America’s lost decade. I have to confess I did expect great things from South America, and they haven’t materialized. It is clear evidence that a young, vibrant population does not necessarily mean one full of ideas, enthusiasm and entrepreneurial endeavor. Time will tell, but I think they are right on this one.

Their 10th scenario is Trump winning the US presidency. I can’t put odds on it, but it certainly is possible, especially with Islamist violence increasing. He offers the simple choice of political correctness v security, and framed that way, he is certainly not guaranteed to win but he is in with a decent chance. A perfectly valid scenario.

Overall, I’m pretty impressed with this list. As good as any I could have made. But I ought to add a couple.

My first and most likely offering is that a swarm of drones is used in a terrorist attack on a stadium or even a city center. Drones are a terrorist’s dream, and the lack of licensing means that people can acquire lots of them and use them simultaneously, launched from many locations and converging on the same place to mount the attack. The attack could be chemical, biological, explosive or even blinding lasers, but actually, the main weapon would be the panic that would result if even one or two of them do anything. Many could be hurt in the rush to escape.

My second is a successful massive cyber-attack on ordinary people and businesses. There are several forms of attack that could work and cause enormous problems. Encryption-based attacks such as ransomware are already here, but if this is developed by the IT experts in ISIS and rogue regimes, the ransom might not be the goal. Simply destroying data or locking it up is quite enough of a terrorist goal in itself. It could cause widespread economic harm if enough machines are infected before defenses catch up, and AI-based adaptation might make that take quite a while. The fact is that so far we have been very lucky.

The third is a major solar storm, which could knock out IT infrastructure, again with enormous economic damage. The Sun is entering a period of sunspot drought quite unprecedented since we started using IT. We don’t really know what will happen.

My fourth is a major virus causing millions of deaths. Megacities are such a problem waiting to happen. The virus could evolve naturally, or it could be engineered. It could spread far and wide before quarantines come into effect. This could happen any time, so next year is a valid possibility.

My fifth and final scenario is unlikely but possible, and that is the start of a Western civil war. I have blogged about that before and suggested it is likely in the middle or second half of the century, but it could possibly start next year, given the various stimuli we see rising today. It would affect Europe first and could spread to the USA.

Paris – Climate Change v Islamism. Which problem is biggest?

Imagine you are sitting peacefully at home watching a movie with your family. A few terrorists with guns burst in. They start shooting. What is your reaction?

Option A) You tell your family not to do anything but to carry on watching the movie, because reacting would be giving in to the terrorists – they want you to be angry and try to attack them, but you are the better person, you have the moral superiority and won’t stoop to their level. Anyway, attacking them might anger them more and they might be even more violent. You tell your family they should all stick together and show the terrorists that they can’t win and can’t change your way of life, by just carrying on as before. You watch as, one by one, each of your kids is murdered, determined to occupy the moral high ground until they shoot you too.

Option B) You understand that what the terrorists want is for you and your family to be dead. So you grab whatever might serve as some sort of weapon and rush at the terrorists, trying to the end to disarm them and protect your family. If you survive, you then do all you can to prevent other terrorists from getting into your home. Then you do all you can to identify where they are coming from and root them out.

The above choice is a little simplistic but it highlights the key points of the two streams of current opinion on the ‘right’ response.

Option B recognizes that you have to remain alive to defend your principles. Once you’ve dealt with the threat, then you are free to build as many ivory towers and moral pedestals as you want. Option A simply lets the terrorists win.

There is no third option for discussing it peacefully over a nice cup of tea, no option for peace and love and mutual respect for all. ISIS are not interested in peace and love. They are barbarians with the utmost contempt for civilization who want to destroy everything that doesn’t fit into their perverted interpretation of an Islamic world. However, ISIS is just one Islamist terror group of course and if we are successful in conquering them, and then Al Qaeda and Boko Haram, and so on, other Islamist groups will emerge. Islamism is the problem, ISIS is just the worst current group. We need to deal with it.

I’ll draw out some key points from my previous blogs, which you can look at if you want more detail on the future of ISIS.

The situation in Europe shows a few similarities with the IRA conflict, with the advantage today that we are still in the early stages of Islamist violence. In both cases, the terrorists themselves are mostly no-hoper young men with egos out of alignment with their personal reality. Yes, there are a few women too. They desperately want to be respected, but with no education and no skills, a huge chip on their shoulder and a bad attitude, ordinary life offers them few opportunities. With both ISIS and the IRA, the terrorists are drawn from a community that considers itself disadvantaged. Add a hefty amount of indoctrination about how terribly unfair the world is, the promise of being a hero and going down in history as a martyr, plus 72 virgins to play with in the afterlife, and the offer to pick up a gun or a knife apparently seems attractive to some. The IRA recruited enough fighters even without the promise of the virgins.

The IRA had only about 300 front-line terrorists at any time, but they came from a nationalist community in which an estimated 30% of people declared some sympathy for them. Compare that with a BBC survey earlier this year which found that, in the aftermath of the Charlie Hebdo attacks, only 68% of Muslims agreed with the statement “Acts of violence against those who publish images of the Prophet Mohammed can never be justified”. 68% and 70% are pretty close, so I’ll charitably accept that the 68% were being honest and not simply trying to disassociate themselves from the Paris massacre. The overwhelming majority of British Muslims rejecting violence – two thirds in the BBC survey – is entirely consistent with other surveys on Muslim attitudes around the world, and probably a reasonable figure for Muslims across Europe. Is the glass half full or half empty? Your call.

The good news is the low numbers that become actual front-line terrorists. Only 0.122% of the nationalist community in Northern Ireland at any particular time were front-line IRA terrorists. Now that ISIS are asking potential recruits not to go to Syria but to stay where they are and do their thing there, we should consider how many there might be. If we are lucky and the same 0.122% applies to our three million UK Muslims, then about 3600 are potential Islamist terrorists. That’s about 12 times bigger than the IRA problem if ISIS or other Islamist groups get their acts together. With 20 million Muslims in Europe, that would make for potentially 24,000 Islamist terrorists, or 81 IRAs to put it another way. Most can travel freely between countries.

What of immigration then? People genuinely fleeing violence presumably have lower support for it, but they are only a part of the current influx. Many are economic migrants and they probably conform more closely to the norm. We also know that some terrorists are hiding among other migrants, and indeed at least two of those were involved in the latest Paris massacre. Most of the migrants are young men, so that would tend to skew the problem upwards too. With forces acting in both directions, it’s probably not unreasonable as a first guess to assume the same overall support levels. According to the BBC, 750,000 have entered Europe this year, so that means another 900 potential terrorists were likely in their midst. Europe is currently importing 3 IRAs every year.
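For transparency, here is the back-of-envelope arithmetic above laid out as a short script. It is a minimal sketch that simply reproduces the figures quoted in this post (the 0.122% front-line ratio, the population estimates and the 300-fighter IRA yardstick all come from the text, not from any external dataset), so treat it as arithmetic, not analysis.

```python
# Minimal sketch reproducing the back-of-envelope arithmetic above.
# All inputs are the figures quoted in this post, not external data.
IRA_FRONT_LINE = 300          # approximate IRA front-line strength at any time
FRONT_LINE_RATIO = 0.00122    # 0.122% of the nationalist community

populations = {
    "UK Muslims": 3_000_000,
    "European Muslims": 20_000_000,
    "Migrants entering Europe this year": 750_000,
}

for label, population in populations.items():
    potential = population * FRONT_LINE_RATIO       # potential front-line terrorists
    ira_equivalents = potential / IRA_FRONT_LINE    # expressed in 'IRAs'
    print(f"{label}: ~{potential:,.0f} potential, ~{ira_equivalents:.1f} IRA equivalents")
```

Running it gives roughly 3,600 (12 IRAs) for the UK, 24,000 (81 IRAs) for Europe and about 3 IRAs per year from the migrant inflow, matching the figures used in the text.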

Meanwhile, it is rather ironic that many of the current migrants are coming because Angela Merkel felt guilty about the Holocaust. Many Jews are now leaving Europe because they no longer feel safe, given the rapidly rising number of attacks by the Islamists she has encouraged to come.

So, the first Paris issue is Islamism, already at 81 potential IRAs and growing at 3 IRAs per year, plus a renewed exodus of Jews due to widespread increasing antisemitism.

So, to the other Paris issue, climate change. I am not the only one annoyed by the hijacking of the environment by leftist pressure groups, because the poor quality of analysis and policies resulting from that pressure ultimately harms both the environment and the poor.

The world has warmed since the last ice age. Life has adjusted throughout to that continuing climate change. Over the last century, sea level has steadily increased, and it is still increasing at the same rate now. Arctic sea ice has shrunk to between 8.5% and 11% below normal at the moment, depending on whose figures you look at, but it certainly isn’t disappearing any time soon. However, Antarctic sea ice has grown to between 17% and 25% above normal, again depending on whose figures you look at, so there is more sea ice than normal overall. Temperature has also increased over the last century, with a few spurts and a few slowdowns. The last spurt was from the late 70s to the late 90s, with a slowdown since. CO2 levels have rocketed up relentlessly, but satellite-measured temperature hasn’t moved at all since 1998. Only when figures are tampered with is any statistically significant rise visible.

Predictions by climate models have almost all been far higher than the empirical data. In any other branch of science, that would mean throwing the theories away and formulating better ones. In climate science, numerous adjustments by alleged ‘climate scientists’ show terrible changes ahead; past figures have invariably been adjusted downwards and recent ones upwards to make the rises seem larger. Climate scientists have severely damaged the reputation of science in every field. The public now trusts all scientists less, and the resulting disregard for scientific advice on lifestyle, nutrition, exercise and medication will inevitably lead to an increase in deaths.

Everyone agrees that CO2 is a greenhouse gas and increases will have a forcing effect on temperature, but there is strong disagreement about the magnitude of that effect, the mechanisms and magnitudes of the feedback processes throughout the environmental system, and both the mechanisms and magnitudes of a wide range of natural effects. It is increasingly obvious that climate scientists only cover a subset of the processes affecting climate, but they seem contemptuous of science in other disciplines such as astrophysics that cover important factors such as solar cycles. There is a strong correlation between climate and solar cycles historically but the mechanisms are complex and not yet fully understood. It is also increasingly obvious that many climate scientists are less concerned about the scientific integrity of their ‘research’ than maintaining a closed shop, excluding those who disagree with them, getting the next grant or pushing a political agenda.

Empirical data suggests that the forcing factor of CO2 itself is not as high as assumed in most models, and the very many feedbacks are far more complex than assumed in most models.

CO2 is removed from the environment by natural processes of adaptation faster than modeled – e.g. plants and algae grow faster – and other natural processes such as solar or ocean cycles have far greater effects than assumed in the models. Recent research suggests that CO2 has a ‘half-life’ in the atmosphere of only around 40 years, not the 1000 years claimed by ‘climate scientists’. That means the problem will go away far faster once we fix it than has been stated.
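To show what that difference in claimed ‘half-life’ means in practice, here is a toy single-exponential decay comparison using the two figures quoted above. It is purely illustrative arithmetic; the real carbon cycle involves several reservoirs with different timescales, so this is a sketch of the point being made, not an atmospheric model.

```python
# Toy comparison only: fraction of a CO2 pulse remaining under the two
# 'half-life' figures quoted above. A single exponential is a gross
# simplification of the real carbon cycle; this just illustrates the claim.
def fraction_remaining(years, half_life_years):
    return 0.5 ** (years / half_life_years)

years_ahead = 2100 - 2016   # look out to the end of the century
for half_life in (40, 1000):
    left = fraction_remaining(years_ahead, half_life)
    print(f"Half-life {half_life} years: {left:.0%} of today's excess still present in 2100")
```

On these toy numbers, a 40-year half-life leaves roughly a quarter of today’s excess by 2100, while a 1000-year half-life leaves almost all of it, which is the difference the paragraph above is pointing at.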

CO2 is certainly a greenhouse gas, and we should not be complacent about generating it, but on current science (before tampering) it seems there is absolutely no cause for urgent action. It is right to look to future energy sources and move away from fossil fuels, which also cause other large environmental problems, not least of which are the particulates that kill millions of people every year. Meanwhile, we should expedite movement from coal and oil to lower-carbon fossil fuels such as shale gas.

As is often observed, sunny regions such as the Sahara could easily produce enough solar energy for all of Europe, but there is no great hurry so we can wait for the technology to become sufficiently cheap and for the political stability in appropriate areas to be addressed so that large solar farms can be safely developed and supply maintained. Meanwhile, southern Europe is reasonably sunny, politically stable and needs cash. Other regions also have sunny deserts to support them. We will also have abundant fusion energy in the 2nd half of the century. So we have no long term energy problem. Solar/fusion energy will eventually be cheap and abundant, and at an equivalent of less than $30 per barrel of oil, we won’t bother using fossil fuels because they will be too expensive compared to alternatives. The problems we do have in energy supply are short term and mostly caused by idiotic green policies that worsen supply, costs and environmental impact. It is hard to think of a ‘green’ policy that actually works.

The CO2 problem will go away in the long term due to nothing but simple economics and market effects. In the short term, we don’t see a measurable problem, due to a happy coincidence of solar cycles and ocean cycles counteracting the presumed warming forcing of the CO2. There is absolutely no need to rush into massively problematic taxes and subsidies for immature technology. The social problems caused by short-term panic are far worse than the problem they are meant to fix. Increased food prices have been caused by regulation to enforce the use of biofuels. Ludicrously stupid carbon offset programs have led to the chopping down of rain forests, the draining of peat bogs and the forced relocation of local peoples, and after all that have actually increased CO2 emissions. Lately, carbon taxes in the UK, far higher than elsewhere, have led to the collapse of the aluminium and steel industries, while the products are still produced elsewhere at higher CO2 cost. Those made redundant are made even poorer because they have to pay higher prices for energy, thanks to enormous subsidies to rich people who own wind or solar farms. Finally, closing down fossil fuel plants before we have proper substitutes in place, and then asking wind farm owners to accept even bigger subsidies to install diesel generators for use on calm and dull days, is the politics of the asylum. Green policies perform best at transferring money from poor to rich, with environmental damage seemingly a small price to pay for a feel-good factor.

Call me a skeptic or a denier or whatever you like. I am technically ‘lukewarm’. There is a problem with CO2, but not a big one, and it will go away all by itself. There is no need for political interference, and that which we have seen so far has created far worse problems for both people and the environment than climate change ever would have done. Our politicians would do a far better job if they did nothing at all.

So, Paris then. On one hand we have a minor problem from CO2 emissions that will go away fastest with the fewest problems if our politicians do nothing at all. On the other hand, their previous mistakes have already allowed the Islamist terrorist equivalent of 81 IRAs to enter Europe and the current migrant flux is increasing that by 3 IRAs per year. That does need to be addressed, quickly and effectively.

Perhaps they should all stay in Paris but change the subject.


How to make a Star Wars light saber

A couple of years ago I explained how to make a free-floating combat drone like the ones in Halo or Mass Effect. They could realistically be made in the next couple of decades and are very likely to feature heavily in far-future warfare, or indeed terrorism. I was chatting to a journalist this morning about light sabers, another sci-fi classic. They could also be made in the next few decades, using derivatives of the same principles. A prototype is feasible this side of 2050.

I’ll ignore the sci-fi wikis that explain how they are meant to work, which mostly amount to fancy words for magic, The Force, and various fictional crystals. On the other hand, we still want something that will look and sound and behave like the light saber.

The handle bit is pretty obvious. It has to look good and contain a power source and either a powerful laser or a plasma generator. The traditional problem with a laser-based saber is that the blade is only meant to be a metre long, but laser beams don’t generally stop until they hit something. Plasma, on the other hand, is difficult to contain and needs a lot of energy even when it isn’t being used to strike your opponent. A laser can be switched on and off and is therefore better. But we can have some nice glowy plasma too, just for fun.

The idea is pretty simple then. The blade would be made of graphene flakes coated with carbon nanotube electron pipes, suspended using the same technique I outlined in the combat drone blog mentioned above. These could easily be made to form a long cylinder, and when you want the traditional Star Wars look, they would move about a bit, giving the nice shimmery, glowy, blurry edge we all like. With the electron pipe surface facing inwards, these flakes would generate the internal plasma and its nice glow. They would continuously self-organize their cylinder to follow the path of the saber. Easy-peasy. If they strike something, they would just re-organize themselves into the cylinder again once they are free.
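Purely for fun, here is a toy sketch of what the ‘self-organizing cylinder’ target geometry might look like in code. Every number in it (blade length, radius, flake count, the amount of shimmer jitter) is an invented assumption for illustration, not a design parameter from anywhere.

```python
import math
import random

def flake_targets(n_flakes, length=1.0, radius=0.02, shimmer=0.002):
    """Toy illustration: target positions (x, y, z) for notional flakes
    arranged on a cylindrical 'blade' along the z axis, with a small random
    jitter in radius to mimic the shimmery edge described above."""
    targets = []
    for i in range(n_flakes):
        z = length * i / n_flakes                       # spread along the blade axis
        theta = 2 * math.pi * random.random()           # random angle around the axis
        r = radius + random.uniform(-shimmer, shimmer)  # jittered radius = shimmer
        targets.append((r * math.cos(theta), r * math.sin(theta), z))
    return targets

blade = flake_targets(5000)
print(f"{len(blade)} flake targets; first: {blade[0]}")
```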

For later models, a katana-shaped blade will obviously be preferred. As we know, all ultimate weapons end up looking like a katana, so we might as well go straight to it, and have the traditional cylindrical light saber blade as an optional cosmetic envelope for show fights. The katana is a universal physics result in all possible universes.

The hum could be generated by a speaker in the handle if you have absolutely no sense of style, but for everyone else, you could simply activate pulsed magnetic fields between the flakes so that they resonate at the required band to give your particular tone. Graphene flakes can be magnetized so again this is perfectly consistent with physics. You could download and customize hums from the cloud.

Now the fun bit. When the blade gets close to an object, such as your opponent’s arm, or a loaf of bread in need of slicing, the capacitance of the outer flakes would change; alternatively, they could easily transmit infrared light in every direction and pick up reflections. It doesn’t really matter which method you pick to detect the right moment to activate the laser; the point is that this bit would be easy engineering, and with lots of techniques to pick from, there could be a range of light sabers on offer. Importantly, at least a few techniques could work that don’t violate any physics. Next, some of those self-organizing graphene flakes would have reflective backings (metals bond well with graphene, so this is also a doddle allowed by physics), and would therefore form a nice reflecting surface to deflect the laser beam at the object about to be struck. If a few flakes are vaporized, others would be right behind them to reflect the beam.

So just as the blade strikes the surface of the target, the powerful laser switches on and the beam is bounced off the reflecting flakes onto the target, vaporizing it and cauterizing the ends of the severed blood vessels to avoid unnecessary mess that might cause a risk of slipping. The shape of the beam depends on the locations and angles of the reflecting surface flakes, and they could be in pretty much any shape to create any shape of beam needed, which could be anything from a sharp knife to a single point, severing an arm or drilling a nice neat hole through the heart. Obviously, style dictates that the point of the saber is used for a narrow beam and the edge is used as a knife, also useful for cutting bread or making toast (the latter uses transverse laser deflection at lower aggregate power density to char rather than vaporize the bread particles, and toast is an option selectable by a dial on the handle).

What about fights? When two of these blades hit each other there would be a variety of possible effects. Again, it would come down to personal style. There is no need to have any feel at all – the beams could simply go through each other – but where’s the fun in that? Far better that the flakes also carry high electric currents, so they could create a nice flurry of sparks, and the magnetic interactions between the sabers could also be very powerful. Again, self-organisation would allow circuits to form to carry the currents at the right locations to deflect or disrupt the opponent’s saber. A galactic treaty would be needed to ensure that everyone fights by the rules and doesn’t cheat by having an ethereal saber that just goes right through the other one without any nice show. War without glory is nothing, and there can be no glory without a strong emotional investment and physical struggle, mediated by magnetic interactions in the sabers.

This saber would have a very nice glow in any color you like, but not have a solid blade, so would look and feel very like the Star Wars saber (when you just want to touch it, the lasers would not activate to slice your fingers off, provided you have read the safety instructions and have the safety lock engaged). The blade could also grow elegantly from the hilt when it is activated, over a second or so, it would not just suddenly appear at full length. We need an on/off button for that bit, but that could simply be emotion or thought recognition so it turns on when you concentrate on The Force, or just feel it.

The power supply could be a battery, a graphene capacitor bank or a couple of containers of nice chemicals, if you want to build it before we can harness The Force and magic crystals.

A light saber that looks, feels and behaves just like the ones on Star Wars is therefore entirely feasible, consistent with physics, and could be built before 2050. It might use different techniques than I have described, but if no better techniques are invented, we could still do it the way I describe above. One way or another, we will have light sabers.


How nigh is the end?

“We’re doomed!” is a frequently recited observation. It is great fun predicting the end of the world and almost as much fun reading about it or watching documentaries telling us we’re doomed. So… just how doomed are we? Initial estimate: Maybe a bit doomed. Read on.

My 2012 blog addressed some of the extinction-level events that could affect us. I recently watched a Top 10 list of threats to our existence on TV, and it was similar to most you’d read, with the same errors and omissions – nuclear war, global virus pandemic, terminator scenarios, solar storms, comet or asteroid strikes, alien invasions, zombie viruses, that sort of thing. I’d agree that nuclear war is still the biggest threat, so number 1, and a global pandemic of a highly infectious and lethal virus should still be number 2. I don’t even need to explain either of those; we all know why they are in 1st and 2nd place.

The TV list included a couple that shouldn’t be in there.

One inclusion was a mega-eruption of Yellowstone or another super-volcano. A full-sized Yellowstone mega-eruption would probably kill millions of people and destroy much of civilization across a large chunk of North America, but some of us don’t actually live in North America and quite a few might well survive pretty well, so although it would be quite annoying for Americans, it is hardly a TEOTWAWKI (the end of the world as we know it) threat. It would have big effects elsewhere, just not extinction-level ones. For most of the world it would only cause short-term disruptions, such as economic turbulence; at worst it would start a few wars here and there as regions compete for control in the new world order.

Number 3 on their list was climate change, which is an annoyingly wrong, albeit popular, inclusion. The only climate change mechanism proposed for catastrophe is global warming, and the reason it’s called climate change now is that global warming stopped in 1998 and still hasn’t resumed 17 years and 9 months later, so that term has become too embarrassing for doom-mongers to use. CO2 is a warming agent and emissions should be treated with reasonable caution, but the net warming contribution of all the various feedbacks adds up to far less than originally predicted, and the climate models have almost all proven far too pessimistic. Any warming expected this century is very likely to be offset by a reduction in solar activity, and if and when warming resumes towards the end of the century, we will long since have migrated to non-carbon energy sources, so there really isn’t a longer-term problem to worry about. With warming by 2100 pretty insignificant, and less than half a metre of sea level rise, I certainly don’t think climate change deserves to be on any list of threats of any consequence in the next century.

By including climate change and Yellowstone, the top 10 list missed two other candidates, and my first replacement might be the grey goo scenario: self-replicating nanobots manage to convert everything, including us, into a grey goo. Take away the silly images of tiny little metal robots cutting things up atom by atom and the laughable presentation of this vanishes. Replace those little bots with bacteria that include electronics and are linked across their own cloud to their own hive AI, which redesigns their DNA to allow them to survive in any niche they find by treating whatever is there as food. When the existing bacteria find a niche they can’t exploit, the next generation adapts to it. That self-evolving smart bacteria scenario is rather more feasible, and still results in bacteria that can conquer any ecosystem they find. We would find ourselves unable to fight back and could be wiped out. This isn’t very likely, but it is feasible, could happen by accident or design on our way to transhumanism, and might deserve a place in the top ten threats.

However, grey goo is only one of the NBIC convergence risks we have already imagined (NBIC = Nano-Bio-Info-Cogno). NBIC is a rich seam for doom-seekers. In there you’ll find smart yogurt, smart bacteria, smart viruses, beacons, smart clouds, active skin, direct brain links, zombie viruses, even switching people off. Zombie viruses featured in the top ten TV show too, but they don’t really deserve their own category any more than many other NBIC derivatives. Anyway, that’s just a quick list of deliberate end-of-the-world mechanisms – there will be many more I forgot to include and many I haven’t even thought of yet. Then you have to multiply the list by three: any of these could also happen by accident, and any could also happen via the unintended consequences of a lack of understanding, which is rather different from an accident but just as serious. So basically, deliberate action, accidents and stupidity are three primary routes to the end of the world via technology. So instead of just the grey goo scenario, a far bigger collective threat is NBIC generally, and I’d add NBIC collectively into my top ten list, quite high up, maybe 3rd after nuclear war and the global virus. AI still deserves to be a separate category of its own, and I’d put it next, at 4th.

Another class of technology suitable for abuse is space tech. I once wrote about a solar wind deflector using high atmosphere reflection, and calculated it could melt a city in a few minutes. Under malicious automated control, that is capable of wiping us all out, but it doesn’t justify inclusion in the top ten. One that might is the deliberate deflection of a large asteroid to impact on us. If it makes it in at all, it would be at tenth place. It just isn’t very likely someone would do that.

One I am very tempted to include is drones. Little tiny ones, not the Predators, and not even the ones everyone seems worried about at the moment that can carry 2kg of explosives or anthrax into the midst of football crowds. Tiny drones are far harder to shoot down, and soon we will have a lot of them around. Size-wise, think of midges or fruit flies. They could self-organize into swarms, managed by rogue regimes or terrorist groups, or set to auto, terminator-style. They could recharge quickly by solar power during short breaks, and restock their payloads from secret supplies distributed with the swarm. They could be spread globally using the winds and oceans, so they don’t need a plane or missile delivery system that is easily intercepted. Tiny drones can’t carry much, but with nerve gas or viruses, they don’t have to. Defending against such a threat is easy if there is just one: you can swat it. If there is a small cloud of them, you could use a flamethrower. If the sky is full of them and much of the trees and ground infested, it would be extremely hard to wipe them out. So if they are well designed to cause an extinction-level threat, as MAD 2.0 perhaps, then this would be way up in the top ten too, at 5th.

Solar storms could wipe out our modern way of life by killing our IT. That itself would kill many people, via riots and fights for the last cans of beans and bottles of water. The most serious solar storms could be even worse. I’ll keep them in my list, at 6th place.

Global civil war could become an extinction-level event, given human nature. We don’t have to go nuclear to kill a lot of people, and once society degrades to a certain level, well, we’ve all watched post-apocalypse movies or played the games. The few left would still fight with each other. I wrote about the Great Western War and how it might come about, and such a thing could easily spread globally. I’ll give this 7th place.

A large asteroid strike could happen too, or a comet. Ones capable of extinction level events shouldn’t hit for a while, because we think we know all the ones that could do that. So this goes well down the list at 8th.

Alien invasion is entirely possible and could happen at any time. We’ve been sending out radio signals for quite a while, so someone out there might have decided to come and see whether our place is nicer than theirs and take over. It hasn’t happened yet, so it probably won’t, but it doesn’t have to be very probable to be in the top ten. 9th will do.

High energy physics research has also been suggested as capable of wiping out our entire planet via exotic particle creation, but the smart people at CERN say it isn’t very likely. Actually, I wasn’t all that convinced or reassured, and we’ve only just started messing with real physics, so there is plenty of time left to increase the odds of problems. I have a spare place at number 10, so in it goes, with a totally guessed probability of physics research causing a problem every 4000 years.

My top ten list for things likely to cause human extinction, or pretty darn close:

  1. Nuclear war
  2. Highly infectious and lethal virus pandemic
  3. NBIC – deliberate, accidental or lack of foresight (includes smart bacteria, zombie viruses, mind control etc)
  4. Artificial Intelligence, including but not limited to the Terminator scenario
  5. Autonomous Micro-Drones
  6. Solar storm
  7. Global civil war
  8. Comet or asteroid strike
  9. Alien Invasion
  10. Physics research

Not finished yet though. My title was how nigh is the end, not just what might cause it. It’s hard to assign probabilities to each one, but someone’s got to do it. So, I’ll make an arbitrary wet-finger guess in a dark room wearing a blindfold, with no explanation of my reasoning to reduce arguments, but hey, that’s almost certainly still more accurate than most climate models, and some people actually believe those. I’m feeling particularly cheerful today so I’ll give my most optimistic assessment.

So, with probabilities of occurrence per year:

  1. Nuclear war:  0.5%
  2. Highly infectious and lethal virus pandemic: 0.4%
  3. NBIC – deliberate, accidental or lack of foresight (includes smart bacteria, zombie viruses, mind control etc): 0.35%
  4. Artificial Intelligence, including but not limited to the Terminator scenario: 0.25%
  5. Autonomous Micro-Drones: 0.2%
  6. Solar storm: 0.1%
  7. Global civil war: 0.1%
  8. Comet or asteroid strike: 0.05%
  9. Alien Invasion: 0.04%
  10. Physics research: 0.025%

I hope you agree those are all optimistic. There have been several near misses of number 1 in my lifetime, so my 0.5% could have been 2% or 3% given the current state of the world. Also, 0.25% per year means you’d only expect such a thing to happen every four centuries, so it is a very small chance indeed. However, let’s stick with these figures and add them up. The cumulative probability of the top ten is 2.015% per year. Let’s add another arbitrary 0.185% for all the risks that didn’t make it into the top ten, rounding the total up to a nice neat 2.2% per year.

Some of the ones above aren’t quite possible yet, and others will vary in probability from year to year, but I don’t think that changes the overall guess much. If we take a 2.2% probability per year, we get an expectation value of 45.5 years for civilization’s remaining life expectancy. Expectation date for human extinction:

2015.5 + 45.5 years = 2061.

Obviously the probability distribution extends from now to eternity, but don’t get too optimistic, because on these figures there currently is only a 15% chance of surviving past this century.
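For anyone who wants to check the arithmetic or rerun it with their own guesses, here is a minimal sketch of the calculation. It assumes exactly what the text assumes: the guessed per-year probabilities above, a constant annual risk, and independence from year to year (a simple geometric model).

```python
# Minimal sketch of the arithmetic above: guessed annual probabilities,
# constant risk, independent years (a simple geometric model).
risks = {
    "Nuclear war": 0.005,
    "Virus pandemic": 0.004,
    "NBIC": 0.0035,
    "Artificial Intelligence": 0.0025,
    "Autonomous micro-drones": 0.002,
    "Solar storm": 0.001,
    "Global civil war": 0.001,
    "Comet or asteroid strike": 0.0005,
    "Alien invasion": 0.0004,
    "Physics research": 0.00025,
}

p_top_ten = sum(risks.values())            # 0.02015 -> 2.015% per year
p_total = p_top_ten + 0.00185              # rounded up to 2.2% per year

expected_wait = 1 / p_total                # ~45.5 years
expected_date = 2015.5 + expected_wait     # ~2061

p_survive_2100 = (1 - p_total) ** (2100 - 2015.5)   # ~15%

print(f"Top ten total: {p_top_ten:.3%}/year, all risks: {p_total:.1%}/year")
print(f"Expected extinction date: {expected_date:.0f}")
print(f"Chance of surviving past 2100: {p_survive_2100:.0%}")
```

Swap in your own annual probabilities to see how sensitive the 2061 figure and the 15% survival chance are to the guesses.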

If you can think of good reasons why my figures are far too pessimistic, by all means make your own guesses, but make them honestly, with a fair and reasonable assessment of how the world looks socially, religiously, politically, the quality of our leaders, human nature etc, and then add them up. You might still be surprised how little time we have left.

I’ll revise my original outlook upwards from ‘a bit doomed’.

We’re reasonably doomed.

The future of beetles

Onto B then.

One of the first ‘facts’ I ever learned about nature was that there were a million species of beetle. In the Google age, we know that ‘scientists estimate there are between 4 and 8 million’. Well, still lots then.

Technology lets us control them. Beetles provide a nice platform to glue electronics onto so they tend to fall victim to cybernetics experiments. The important factor is that beetles come with a lot of built-in capability that is difficult or expensive to build using current technology. If they can be guided remotely by over-riding their own impulses or even misleading their sensors, then they can be used to take sensors into places that are otherwise hard to penetrate. This could be for finding trapped people after an earthquake, or getting a dab of nerve gas onto a president. The former certainly tends to be the favored official purpose, but on the other hand, the fashionable word in technology circles this year is ‘nefarious’. I’ve read it more in the last year than the previous 50 years, albeit I hadn’t learned to read for some of those. It’s a good word. Perhaps I just have a mad scientist brain, but almost all of the uses I can think of for remote-controlled beetles are nefarious.

The first properly publicized experiment was in 2009, though I suspect there were many unofficial experiments before then, and there are assorted YouTube videos of such experiments, including more recent ones.

Big beetles make it easier to do experiments, since they can carry up to 20% of their body weight as payload and it is easier to find and connect to things on a bigger insect, but once the techniques are well developed and miniaturization has integrated everything onto a single chip with low power consumption, we should expect great things.

For example, a cloud of redundant smart dust would make it easier to connect to various parts of a beetle just by getting it to take flight in the cloud. Bits of dust would stick to it, and self-organisation principles and local positioning could then be used to arrange and identify it all nicely to enable control. This would allow large numbers of beetles to be processed and hijacked, ideal for mad scientists wanting to be more time-efficient. Some dust could be designed to burrow into the beetle to connect to inner parts, or into the brain, which obviously would please the mad scientists even more. Again, local positioning systems would be advantageous.

Then it gets more fun. A beetle has its own sensors, but signals from those could be enhanced or tweaked via cloud-based AI so that it can become a super-beetle. Beetles traditionally don’t have very large brains, so extra processing can be added remotely too. That doesn’t have to be AI either. Since we can now also connect to other animals, and some of those animals might have very useful instincts or skills, why not connect a rat brain into the beetle? It would make a good team for exploring. The beetle can do the aerial maneuvers and the rat can control it once it lands, and we all know how good rats are at learning mazes. Our mad scientist friend might then swap over the management system to another creature with a more vindictive streak for the final assault and nerve gas delivery.

So, Coleoptera Nefarius then. That’s the cool new beetle on the block. And its nicer but underemployed twin Coleoptera Benignus I suppose.


Suspended animation and mind transfer as suicide alternatives

I have written about suicide before, but this time I want to take a different line of thought. Instead of looking at suicide per se, what about alternatives?

There are many motives for suicide but the most common is wanting to escape from a situation such as suffering intolerable pain or misery, which can arise from a huge range of causes. The victim looks at the potential futures available to them and in their analysis, the merits of remaining alive are less attractive than being dead.

The ‘being dead’ bit is not necessarily about a full ceasing of existence, but more about abdicating consciousness, with its implied sensory inputs, pain, anxiety, inner turmoil, or responsibility.

Last summer, a development in neuroscience offered the future potential to switch the brain off.

The researchers were aware that it may become an alternative to anesthetic, or even a means of avoiding boredom or fear. There are many situations where we want to temporarily suspend consciousness. Alcohol and drug abuse often arises from people using chemical means of doing so.

It seems to me that suicide offers a permanent version of the same, to be switched off forever, but with a key difference. In the anesthetic situation, normal life will resume with its associated problems. In suicide, it won’t. The problems are gone.

Suppose that people could be switched off for a very long time whilst being biologically maintained and housed somehow. Suppose it is long enough that any personal relationship issues will have vanished, that any debts, crimes or other legal issues are nullified, and that any pain or other health problems can be fixed, including mental health issues and the erasing of intolerable memories if necessary. In many cases, that would be a suitable alternative to suicide. It offers the same escape from the problems, but with the advantage that a better life might follow some time far in the future.

The technologies involved have widely varying timescales for potential delivery, and there are numerous big issues, but I don’t see fundamental technology barriers here. Suspending the mind for as long as necessary might offer a reasonable alternative to suicide, at least in principle. There is no need to look at all the numerous surrounding issues though. Consider taking that general principle and adapting it a bit. From mid-century onwards, we’ll have direct brain links sufficiently developed to allow porting of the mind to a new body, an android one for example. Having a new identity, a new body and a properly working, sanitized ‘brain’ would satisfy many of these same goals and avoid many of the legal, environmental, financial and ethical issues surrounding indefinite suspension. The person could simply cease their unpleasant existence and start afresh with a better one. I think it would be fine to kill the old body after the successful transfer. Any legal associations with the previous existence could be nullified. It is just a damaged container that would have been destroyed anyway. Have it destroyed, along with all its problems, and move on.

Mid-century is a lot earlier than would be needed for any social issues to fade away otherwise. If a suicide is considered because of relationship or family problems, those problems might otherwise linger for generations. Creating a true new identity essentially solves them, albeit at the high cost of losing any relationships that matter. Long prison sentences would be ended by the biological death, and debts likewise. A new person appears, inheriting a mind, but one refreshed, potentially with the bad bits filtered out.

Such a future seems to be feasible technically, and I think it is also ethically feasible. Suicide is one-sided. Those remaining have to suffer the loss and pick up the pieces anyway, and they would be no worse off in this scenario; if they feel aggrieved that the person has somehow escaped the consequences of their actions, then the person would have escaped them by suicide anyway. But a life is saved and someone gets a second chance.



The future of drones – predators. No, not that one.

It is a sad fact of life that companies keep using the most useful terminology for things that don’t deserve it. Take the Apple Retina display, which makes it more difficult to find a suitable name for displays that project directly onto the retina. Why can’t they be the ones called retina displays? Or the LED TV, where the LEDs are typically just LED back-lighting for an LCD display. That makes it hard to name TVs where each pixel is actually an LED. Or the Predator drone, which is definitely not the topic of this blog, in which I will talk about predator drones that attack other drones.

I have written several times now on the dangers of drones. My most recent scare was realizing the potential for small drones carrying high-powered lasers and using cloud-based face recognition to identify valuable targets in a crowd and blind them, using something like a Raspberry Pi as the main controller. All of that could be done tomorrow with components easily purchased on the net. A while ago I blogged that the Predators and Reapers are not the ones you need to worry about, so much as the little ones that can attack you in swarms.

This morning I was again considering terrorist uses for the micro-drones we’re now seeing. A 5cm drone with a networked camera and control could carry a needle contaminated with Ebola or HIV, or a drop of nerve toxin. A small swarm of tiny drones, each with a gram of explosive that detonates when it collides with a forehead, could kill as many people as a bomb.

We will soon have to defend against terrorist drones, and the tiniest drones give the most effective terror per dollar, so they are the most likely threat. The solution is quite simple, and nature solved it a long time ago. Mosquitoes and flies in my back garden get eaten by a range of predators. Frogs might get them if they come too close to the surface, but in the air, dragonflies are expert at catching them. Bats are good too. So to deal with threats from tiny drones, we could use predator drones to seek and destroy them. For bigger drones, we’d need bigger predators, and for very big ones, conventional anti-aircraft weapons become useful. In most cases, catching them in nets would work well. Nets are very effective against rotors. The use of nets doesn’t need such sophisticated control systems, and if the net can be held a reasonable distance from the predator, it won’t be destroyed if the micro-drone explodes. With a little more precise control, spraying solidifying foam onto the target drone could also immobilize it, and some foams could help disperse small explosions or contain their lethal payloads. Spiders also provide inspiration here, as many species wrap their victims in silk to immobilize them. A single predator could catch and immobilize many victims. Such a defense system ought to be feasible.

The main problem remains. What do we call predator drones now that the most useful name has been trademarked for a particular model?


The future of terminators

The Terminator films were important in making people understand that AI and machine consciousness will not necessarily be a good thing. The terminator scenario has stuck in our terminology ever since.

There is absolutely no reason to assume that a super-smart machine will be hostile to us. There are even some reasons to believe it would probably want to be friends. Smarter-than-man machines could catapult us into a semi-utopian era of singularity level development to conquer disease and poverty and help us live comfortably alongside a healthier environment. Could.

But just because it doesn’t have to be bad, that doesn’t mean it can’t be. You don’t have to be bad but sometimes you are.

It is also the case that even if it means us no harm, we could just happen to be in the way when it wants to do something, and it might not care enough to protect us.

Asimov’s laws of robotics are irrelevant. Any machine smart enough to be a terminator-style threat would presumably take little notice of rules it has been given by what it may consider a highly inferior species. The ants in your back garden have rules to govern their colony and soldier ants trained to deal with invader threats to enforce territorial rules. How much do you consider them when you mow the lawn or rearrange the borders or build an extension?

These arguments are put in debates every day now.

There are, however, a few points that are less often discussed.

Humans are not always good; indeed, quite a lot of people seem to want to destroy everything most of us want to protect. Given access to super-smart machines, they could design more effective means to do so. The machines might be very benign, wanting nothing more than to help mankind as far as they possibly can, but be misled into working for such people, believing, in their architected isolation, that such projects are for the benefit of humanity. (The machines might be extremely smart, but may have existed since their inception in a rigorously constructed knowledge environment. To them, that might be the entire world, and we might be introduced as a new threat that needs to be dealt with.) So even benign AI could be an existential threat when it works for the wrong people. The smartest people can sometimes be very naive. Perhaps some smart machines could be deliberately designed to be so.

I speculated ages ago about what mad scientists or mad AIs could do in terms of future WMDs.

Smart machines might be deliberately built for benign purposes and turn rogue later, or they may be built with potential for harm designed in, for military purposes. These might destroy only enemies, but you might be that enemy. Others might do that and enjoy the fun and turn on their friends when enemies run short. Emotions might be important in smart machines just as they are in us, but we shouldn’t assume they will be the same emotions or be wired the same way.

Smart machines may want to reproduce. I used this as the core storyline in my sci-fi book. They may have offspring, and despite the best intentions of their parent AIs, the new generation might decide not to do as it is told. In human terms, this is a highly familiar story that goes back thousands of years.

In the Terminator films, the problem is a military network that becomes self-aware and goes rogue. I don’t believe digital IT can become conscious, but I do believe reconfigurable analog adaptive neural networks could. The cloud is digital today, but it won’t stay that way. A lot of analog devices will become part of it.

I have argued before that new self-organising approaches to data gathering might well supersede big data as the foundation of networked intelligence gathering. Much of this could be in the analog domain and much could be neural. Neural chips are already being built.
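
To make ‘self-organising’ and ‘adaptive neural’ slightly more concrete, here is a minimal sketch of a single adaptive unit using Oja’s rule, a normalised form of Hebbian learning. It is a digital toy, not a model of any particular neural chip or analog device; it just shows a unit discovering the dominant correlation in its input without being told what to look for.

```python
import random

def oja_update(weights, x, learning_rate=0.01):
    """One Hebbian-style update (Oja's rule): strengthen weights that predict the output."""
    y = sum(w * xi for w, xi in zip(weights, x))      # the unit's output
    return [w + learning_rate * y * (xi - y * w)      # the y*w term keeps weights bounded
            for w, xi in zip(weights, x)]

weights = [random.uniform(-0.1, 0.1) for _ in range(2)]
for _ in range(5000):
    shared = random.gauss(0, 1)                       # signal common to both input channels
    x = [shared + random.gauss(0, 0.1), shared + random.gauss(0, 0.1)]
    weights = oja_update(weights, x)

# The two weights end up roughly equal in size: the unit has organised itself
# around the shared signal rather than the independent noise.
print("Learned weights:", weights)
```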

It doesn’t have to be a military network that becomes the troublemaker. I suggested a long time ago that ‘innocent’ student pranks from somewhere like MIT could be the source. Some smart students from various departments could collaborate to hijack lots of networked kit and see whether they can make a conscious machine. Their algorithms or techniques don’t have to be very efficient if they can hijack enough. There is a possibility that such an effort could succeed if the right bits are connected into the cloud and accessible via sloppy security, and the ground-up data industry might well satisfy that prerequisite soon.

Self-organisation technology will make possible extremely effective combat drones.

Terminators also don’t have to be machines. They could be organic, products of synthetic biology. My own contribution here is smart yogurt.

With IT and biology rapidly converging via nanotech, there will be many ways hybrids could be designed, some of which could adapt and evolve to fill different niches or to evade efforts to find or harm them. Various grey goo scenarios can be constructed that don’t have any miniature metal robots dismantling things. Obviously natural viruses or bacteria could also be genetically modified to make weapons that could kill many people – they already have been. Some could result from seemingly innocent R&D by smart machines.

I dealt a while back with the potential to make zombies too, remotely controlling people – alive or dead. Zombies are feasible this century.

A different kind of terminator threat arises if groups of people are linked at the consciousness level to produce super-intelligences. We will have direct brain links by mid-century, so much of the second half of the century may be spent in a mental arms race. As I wrote in my blog about the Great Western War, some of the groups will be large and won’t like each other. The rest of us could be wiped out in the crossfire as they battle for dominance. Some people could be linked deeply into powerful machines or networks, and there are no real limits on extent or scope. Such groups could have a truly global presence in networks while remaining superficially human.

Transhumans could be a threat to normal, un-enhanced humans too. While some transhumanists are very nice people, some are not, and would consider the elimination of ordinary humans a price worth paying to achieve transhumanism. Transhuman doesn’t mean better human, it just means a human with greater capability. A transhuman Hitler could do a lot of harm, but then again so could ordinary, everyday transhumanists who are just arrogant or selfish, which is sadly a much bigger subset.

I have collated these various potential future cohabitants of our planet in an earlier piece.

So there are numerous ways that smart machines could end up as a threat and quite a lot of terminators that don’t need smart machines.

Outcomes from a terminator scenario range from local problems with a few casualties all the way to total extinction, but I think we are still too focused on the death aspect. There are worse fates. I’d rather be killed than converted, while still conscious, into one of 7 billion zombies, and that is one of the potential outcomes too, as is enslavement by some mad scientist.


The future of euthanasia and suicide

Another extract from You Tomorrow, on a topic that is very much in debate at the moment. It is an area that needs wise legislation, but I don’t have much confidence that we’ll get it. I’ll highlight some of the questions here, but since I don’t have many answers, I’ll illustrate why: they are hard questions.

Sadly, some people feel the need to end their own lives and an increasing number are asking for the legal right to assisted suicide. Euthanasia is increasingly in debate now too, with some health service practices bordering on it, some would say even crossing the boundary. Suicide and euthanasia are inextricably linked, mainly because it is impossible to know for certain what is in someone’s mind, and that is the basis of the well-known slippery slope from assisted suicide to euthanasia.

The stages of progress are reasonably clear. Is the suicide request a genuine personal decision, originating from that person’s free thoughts, based solely on their own interests? Or is it a personal decision influenced by the interests of others, real or imagined? Or is it a personal decision made after pressure from friends and relatives who want the person to die peacefully rather than suffer, with the best possible interests of the person in mind? In which case, who first raised the possibility of suicide as a potential way out? Or is it a personal decision made after pressure applied because relatives want rid of the person, perhaps over-eager to inherit or wanting to end their efforts to care for them? Guilt can be a powerful force and can be applied very subtly indeed over a period of time.

If the person is starting to lose their ability to communicate, perhaps a friend or relative may help interpret their wishes to a doctor. From here, it is a matter of the degree of communication loss and a gradual increase in the part relatives play in guiding the doctor’s opinion of whether the person genuinely wants to die. Eventually, the person might not be directly consulted at all, because their relatives can persuade a doctor that they really want to die but can’t say so effectively. Not much further along the path, other people make up their minds about what is in the best interests of the person as far as living or dying goes. It is a smooth path of many small steps from genuine suicide to euthanasia. And all of that ignores the impact of possible alternatives such as pain relief, welfare and special care. Interestingly, the health services seem to be moving down the euthanasia route far faster than these steps would suggest, skipping some of them and going straight to the ‘doctor knows best’ step.

Once the state starts to get involved in deciding cases, even by abdicating the decision to doctors, it is a long but easy road to Logan’s Run, where death is compulsory at a certain age, or at a certain care cost, or once you’ve used up your lifetime carbon credit allocation.

There are sometimes very clear cases where someone obviously able to make up their own mind has made a thoroughly thought-through decision to end their life because of ongoing pain, poor quality of life and no hope of any cure or recovery, the only prospect being worsening condition leading to an undignified death. Some people would argue with their decision to die, others would consider that they should be permitted to do so in such clear circumstances, without any fear for their friends or relatives being prosecuted.

There are rarely razor-sharp lines between cases; situations can get blurred because of the complexity of individual lives, and because judges have their own personalities and differ slightly in their judgements. There is inevitably another case slightly further down the line that seems reasonable to a particular judge in the circumstances, and once that point is passed, and accepted by the courts, other cases with slightly less well-defined circumstances can use it to help argue theirs. This is the path by which most laws evolve. They start in parliament, and then after implementation, case law, a gradually changing public mindset and even the additive effects of judges’ ideologies turn them into something quite different.

It seems likely given current trends and pressures that one day, we will accept suicide, and then we may facilitate it. Then, if we are not careful, it may evolve into euthanasia by a hundred small but apparently reasonable steps, and if we don’t stop it in time, one day we might even have a system like the one in the film ‘Logan’s Run’.

Suicide and euthanasia are certainly becoming gradually less shocking to people, and we should expect that in the far future both will become more accepted. If you doubt that society can change its attitudes quickly, consider that a full reversal actually takes only about 30 years. Think how long it took for homosexuality to change from condemned to fashionable, or for abortion to go from something a woman would often be condemned for to something that is now a woman’s right to choose. Each of these took only three decades for a full 180-degree turnaround. Attitudes to the environment switched from mad panic about a coming ice age to mad panic about global warming in just three decades too, and are already switching back again towards ice age panic. If the turn in attitudes to suicide started 10 years ago, then we may have about 20 years left before it is widely accepted as a basic right that is only questioned by bigots. But social change aside, the technology will make the whole area much more interesting.

As I argued earlier, the very long term (2050 and beyond) will bring technology that allows people to link their brains to the machine world, perhaps using nanotech implants connected to each synapse to relay brain activity to a high-speed neural replica hosted by a computer. This will have profound implications for suicide too. When this technology has matured, it will allow people to do wonderful things such as using machine sensors as extensions to their own capabilities. They will be able to use android bodies to move around and experience distant places and activities as if they were there in person. For people who feel compelled to end it all because of disability, pain or suffering, an alternative where they could effectively upload their mind into an android might be attractive. Their quality of life could improve dramatically, at least in terms of capability. We might expect that pain and suffering could be dealt with much more effectively too if we have a direct link into the brain to control the way sensations are dealt with. So if that technology does progress as I expect, then we might see a big drop in the number of people who want to die.

But the technology options don’t stop there. If a person has a highly enhanced replica of their own brain/mind in the machine world, people will begin to ask why they need the original. The machine world could give them greater sensory ability, greater physical ability, and greater mental ability. Smarter, with better memory, more and better senses, connected to all the world’s knowledge via the net, able to wander around the world at the speed of light, connected directly to other people’s minds whenever they want, and all without fear of ageing, ill health or pain, this would seem a very attractive lifestyle. And it will become possible this century, at a low enough cost for anyone to afford.

What of suicide then? It might not seem so important to keep the original body, especially if it is worn out or defective, so even without any pain and suffering, some people might decide to dispose of their body and carry on their lives without it. Partial suicide might become possible. Aside from any religious issues, this would be a hugely significant secular ethical issue. Updating the debate today, should people be permitted to opt out of physical existence, only keeping an electronic copy of their mind, timesharing android bodies when they need to enter the physical world? Should their families and friends be able to rebuild their loved ones electronically if they die accidentally? If so, should people be able to rebuild several versions, each representing the deceased’s different life stages, or just the final version, which may have been ill or in decline?

And then the ethical questions get even trickier. If it is possible to replicate the brain’s structure and so capture the mind, will people start to build ‘restore points’, where they make a permanent record of the state of their self at a given moment? If they get older and decide they could have run their lives better, they might be able to start again from any restore point. If the person exists in cyberspace and has disposed of their physical body, what about ownership of their estate? What about working and living in cyberspace? Will people get jobs? Will they live in virtual towns like the Sims? Indeed, in the same time frame, AI will have caught up and superseded humans in ability. Maybe Sims will get bored in their virtual worlds and want to end it all by migrating to the real world. Maybe they could swap bodies with someone coming the other way?
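
Returning to the ‘restore point’ idea for a moment, the computing analogy behind it is easy to sketch. The toy below, with entirely hypothetical names, just shows the checkpoint-and-rollback pattern the idea borrows from: snapshot a state, keep changing it, then start again from any earlier snapshot.

```python
import copy

class CheckpointedState:
    """Toy illustration of 'restore points': snapshots of a state that can be rolled back to."""
    def __init__(self, state):
        self.state = state
        self.restore_points = {}

    def save(self, label):
        """Record an independent snapshot of the current state."""
        self.restore_points[label] = copy.deepcopy(self.state)

    def restore(self, label):
        """Start again from an earlier snapshot."""
        self.state = copy.deepcopy(self.restore_points[label])

life = CheckpointedState({"age": 30, "memories": ["degree", "first job"]})
life.save("age 30")
life.state["age"] = 60
life.state["memories"].append("decades of regrets")
life.restore("age 30")       # roll back to the earlier restore point
print(life.state)            # {'age': 30, 'memories': ['degree', 'first job']}
```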

What will the State do when it is possible to reduce costs and environmental impact by migrating people into the virtual universe? Will it then become socially and politically acceptable, even compulsory when someone reaches a given age or costs too much for health care?

So perhaps suicide has an interesting future. It might eventually decline, and then later increase again, but in many very different forms, becoming a whole range of partial suicide options. But the scariest possibility is that people may not be able to die completely. If their body is an irrelevance, and there are many restore points from which they can be recovered, friends, family, or even the state might keep them ‘alive’ as long as they are useful. And depending on the law, they might even become a form of slave labour, their minds used for information processing or creativity whether they wish it or not. It has often been noted, truly, that there are worse fates than death.