Category Archives: transhuman

The future of women in IT

 

Many people perceive it as a problem that there are far more men than women in IT. Whether that is because of personal preference, discrimination, lifestyle choices, social reinforcement of gender constructs or any other factor makes for a long and interesting debate, but whatever conclusions are reached, we can only start from the reality of where we are. Even if activists were totally successful in eliminating all social and genetic gender conditioning, it would only work fully for babies born tomorrow, entering IT in 20 years' time. Additionally, unless activists also plan to lobotomize everyone who doesn't submit to their demands, some 20-somethings who have just started work may still be working in 50 years, so whatever their origin (natural, social or some mix), some existing gender-related attitudes, prejudices and preferences might persist in the workplace that long, however much effort is made to remove them.

Nevertheless, the outlook for women in IT is very good, because IT is changing anyway, largely thanks to AI, so the nature of IT work will change and the impact of any associated gender preferences and prejudices will change with it. This will happen regardless of any involvement by Google or government, but since some front-line AI development happens at Google, it's ironic that they don't seem to have noticed this effect themselves. If they had, their response to the recent fiasco might have highlighted how their AI R&D will help reduce the gender imbalance, rather than causing the uproar they did by treating it as just a personnel issue. One conclusion must be that Google needs better futurists, and their PR people need a better understanding of what is going on in their own company and its obvious consequences.

As I've been lecturing for decades, AI up-skills people by giving them fast and intuitive access to high-quality data and analysis tools. It will change all knowledge-based jobs in coming years, making some jobs redundant while creating others. If someone has excellent skills or enthusiasm in one area, AI can help cover any deficiencies in the rest of their toolkit. Someone with poor emotional interaction skills can use AI emotion-recognition assistance tools. Someone with poor drawing or visualization skills can make good use of natural language interaction to control computer-based drawing or visualization tools. Someone who has never written a single computer program can explain what they want to a smart computer and it will produce its own code, interacting with the user to eliminate any ambiguities. So whatever skills someone starts with, AI can help up-skill them in that area, while also helping to cover any deficiencies they have, whether gender-related or not.

In the longer term, IT and hence AI will connect directly to our brains, and much of our minds and memories will exist in the cloud, though it will probably not feel any different from when they were entirely inside our heads. If everyone is substantially up-skilled in IQ, senses and emotions, then any IQ or EQ advantages will evaporate, just as the premium on physical strength did when the steam engine was invented. Any pre-existing statistical gender differences in the distribution of ability across various skills would presumably go the same way, at least as far as any financial value is concerned.

The IT industry won’t vanish, but will gradually be ‘staffed’ more by AI and robots, with a few humans remaining for whatever few tasks linger on that are still better done by humans. My guess is that emotional skills will take a little longer to automate effectively than intellectual skills, and I still believe that women are generally better than men in emotional, human interaction skills, while it is not a myth that many men in IT score highly on the autistic spectrum. However, these skills will eventually fall within the AI skill-set too and will be optional add-ons to anyone deficient in them, so that small advantage for women will also only be temporary.

So, there may be a gender imbalance in the IT industry. I believe it is mostly due to personal career and lifestyle choices rather than discrimination, but whatever its actual causes, the problem will go away soon anyway as the industry develops. Any innate psychological or neurological gender advantages that do exist will simply vanish into noise as cheap access to AI enhancement massively exceeds their impacts.

 

 


Future sex, gender and relationships: how close can you get?

Using robots for gender play


I recently gave a public talk at the British Academy about future sex, gender, and relationships, asking the question "How close can you get?", considering particularly the impact of robots. The above slide is an example. People will one day (between 2050 and 2065, depending on their budget) be able to use an android body as their own or even swap bodies with another person. Some will do so to be young again, many will do so to swap gender. Lots will do both. I often enjoy playing as a woman in computer games, so why not 'come back' and live all over again as a woman for real? Except I'll be 90 in 2050.

The British Academy kindly uploaded the audio track from my talk at

If you want to see the full presentation, here is the PowerPoint file as a pdf:

sex-and-robots-british-academy

I guess it is theoretically possible to listen to the audio while reading the presentation. Most of the slides are fairly self-explanatory anyway.

Needless to say, the copyright of the presentation belongs to me, so please don’t reproduce it without permission.

Enjoy.

Carbethium, a better-than-scifi material

How to build one of these for real:


Halo light bridge, from halo.wikia.com

Or indeed one of these:

From halo.wikia.com


I recently tweeted that I had an idea for how to make the glowy bridges and shields we've seen routinely in sci-fi games from Half Life to Destiny: the bridges that seem to appear in a second or two from nothing across a divide, yet are strong enough to drive tanks over, and able to vanish just as quickly and completely when they are switched off. I woke today realizing that with a bit of work, it could be the basis of a general-purpose material to make the tanks too, and buildings and construction platforms, bridges, roads and driverless pod systems, personal shields and city defense domes, force fields, drones, planes and gliders, space elevator bases, clothes, sports tracks, robotics, and of course assorted weapons and weapon systems. The material would only appear as needed and could be fully programmable. It could even be used to render buildings from VR to real life in seconds, enabling at least some holodeck functionality. All of this is feasible by 2050.

Since it would be as ethereal as those Halo structures, I first wanted to call the material ethereum, but that name was already taken (by a 2014 blockchain programming platform, which I note could be used to build the smart ANTS network management system that Chris Winter and I developed in BT in 1993), and since this new material would also be a programmable construction platform, the names would conflict; etherium is too close. Ethium might work, but the material would be based on graphene and carbon nanotubes, and I am quite into carbon, so I chose carbethium.

Ages ago I blogged about plasma as a 21st Century building material. I’m still not certain this is feasible, but it may be, and it doesn’t matter for the purposes of this blog anyway.

https://timeguide.wordpress.com/2013/11/01/will-plasma-be-the-new-glass/

Around then I also blogged how to make free-floating battle drones and more recently how to make a Star Wars light-saber.

https://timeguide.wordpress.com/2013/06/23/free-floating-ai-battle-drone-orbs-or-making-glyph-from-mass-effect/

https://timeguide.wordpress.com/2015/11/25/how-to-make-a-star-wars-light-saber/

Carbethium would use some of the same principles but would add the enormous strength and high conductivity of graphene to provide the physical properties of a proper construction material. The programmable-matter bits and the instant build would use a combination of 3D interlocking plates, linear induction, and magnetic wells. A plane such as a light bridge or a light shield would extend from a node in caterpillar-track form, with plates added as needed until the structure is complete. By reversing the build process, it could withdraw into the node. Bridges that only exist when they are needed would be good fun, and we could have them by 2050, as well as the light shields and the light swords, and light tanks.

The last bit worries me. The ethics of carbethium are the typical mixture of enormous potential good and huge potential for abuse to bring death and destruction that we’re learning to expect of the future.

If we can make free-floating battle drones, tanks, robots, planes and rail-gun plasma weapons all appear within seconds, if we can build military bases and erect shield domes around them within seconds, then warfare moves into a new realm. Those countries that develop this stuff first will have a huge advantage, with the ability to send autonomous robotic armies to defeat enemies with little or no risk to their own people. If developed by a James Bond super-villain on a hidden island, it would even be the sort of thing that would enable a serious bid to take over the world.

But in the words of Professor Emmett Brown, “well, I figured, what the hell?”. 2050 values are not 2016 values. Our value set is already on a random walk, disconnected from any anchor, its future direction indicated by a combination of current momentum and a chaos engine linking to random utterances of arbitrary celebrities on social media. 2050 morality on many issues will be the inverse of today’s, just as today’s is on many issues the inverse of the 1970s’. Whatever you do or however politically correct you might think you are today, you will be an outcast before you get old: https://timeguide.wordpress.com/2015/05/22/morality-inversion-you-will-be-an-outcast-before-youre-old/

We’re already fucked, carbethium just adds some style.

Graphene combines huge tensile strength with enormous electrical conductivity. A plate can be added to the edge of an existing plate and interlocked, I imagine in a hexagonal or triangular mesh. Plates can be designed in many diverse ways to interlock, so that rotating one engages with the next, and reversing the rotation unlocks them. Plates can be pushed to the forward edge by magnetic wells, using linear induction motors, with the graphene itself as the conductor to generate the magnetic field and the structure of the graphene threads enabling the linear induction fields. That would likely require that the structure forms first out of graphene threads, then the gaps between them are filled by mesh, and plates are added to that to make the structure finally solid. This would happen in thickness as well as width, to make a 3D structure, though a graphene bridge would only need to be dozens of atoms thick.

So a bridge made of graphene could start with a single thread, which could be shot across a gap at hundreds of meters per second. I explained how to make a Spiderman-style silk thrower to do just that in a previous blog:

https://timeguide.wordpress.com/2015/11/12/how-to-make-a-spiderman-style-graphene-silk-thrower-for-emergency-services/

The mesh and 3D build would all follow from that. In theory that could all happen in seconds, the supply of plates and the available power being the primary limiting factors.
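As a rough back-of-the-envelope check on the "could all happen in seconds" claim, here is a purely illustrative sketch. The gap width, plate size, deck width and plate supply rate are all my assumptions, not figures from any real design:

```python
# Rough estimate of light-bridge assembly time. All numbers are
# illustrative assumptions: a 100 m gap, a first thread fired at
# 300 m/s, 1 cm interlocking plates, a 3 m wide deck, and a supply
# rate of a million plates per second along the front edge.

gap = 100.0           # bridge span in metres
thread_speed = 300.0  # "hundreds of metres per second"
plate_size = 0.01     # 1 cm square plates
deck_width = 3.0      # metres
plate_rate = 1e6      # plates added per second

thread_time = gap / thread_speed   # time for the first thread to cross
plates_needed = (gap / plate_size) * (deck_width / plate_size)
infill_time = plates_needed / plate_rate   # mesh and plate infill
total = thread_time + infill_time

print(f"{total:.1f} s")   # a few seconds for the whole bridge
```

Even with these modest assumed rates the whole build lands in single-digit seconds, so plate supply and available power, not geometry, are the limiting factors, as noted above.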

Similarly, a shield or indeed any kind of plate could be made by extending carbon mesh out from the edge or center and infilling. We see that kind of technique used often in sci-fi to generate armor, from Lost in Space to Iron Man.

The key components in carbethium are the 3D interlocking plate design and the magnetic field design for the linear induction motors. Interlocking via rotation is fairly easy in 2D, any spiral will work, and the third dimension is open to any building-block manufacturer. 3D interlocking structures are very diverse and often innovative, and some would be more suited to particular applications than others. As for the linear induction motors, a circuit is needed to produce the travelling magnetic well, but that circuit is made of the actual construction material. The front-edge link between two wires creates a forward-facing magnetic field to propel the next plates and convey enough inertia to them to enable kinetic interlocks.
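To see why a travelling field could fling tiny plates hard enough for kinetic interlocks, the basic F = B·I·L force law gives the right intuition. This is only a sketch: the field strength, edge current and plate mass below are assumed round numbers, not derived from any real design:

```python
# Force on a plate's front-edge conductor from the local magnetic
# field, using F = B * I * L. All values are assumptions for scale.

B = 0.5           # tesla, assumed field from the induction circuit
I = 100.0         # amperes along the graphene edge conductor
L = 0.01          # metres, the front edge of a 1 cm plate

force = B * I * L           # newtons on a single plate edge
plate_mass = 1e-6           # kg, a thin graphene plate is very light
accel = force / plate_mass  # resulting acceleration in m/s^2

print(force, accel)
```

The point of the arithmetic is that because the plates weigh almost nothing, even a modest field and current give them enormous accelerations, which is what makes conveying enough inertia for kinetic interlocking plausible.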

So it is feasible, and only needs some engineering. The main barrier is price and material quality. Graphene is still expensive to make, as are carbon nanotubes, so we won’t see bridges made of them just yet. The material quality so far is fine for small scale devices, but not yet for major civil engineering.

However, the field is developing extremely quickly because big companies and investors can clearly see the megabucks at the end of the rainbow. We will almost certainly have large-quantity production of high-quality graphene for civil engineering by 2050.

This field will be fun. Anyone who plays computer games is already familiar with the idea. Light bridges and shields, or light swords, would appear much as in games, but the material would likely be graphene and nanotubes (or maybe the newfangled molybdenum equivalents). They would glow during construction with the plasma generated by the intense electric and magnetic fields, and the glow would be needed afterward to make these ultra-thin physical barriers clearly visible, but they might otherwise be highly transparent.

Assembling structures as they are needed and disassembling them just as easily will be very resource-friendly, though it is unlikely that carbon will be in short supply. We can just use some oil or coal to get more if needed, or process some CO2. The walls of a building could be grown from the ground up at hundreds of meters per second in theory, with floors growing almost as fast, though there should be little need to do so in practice, apart from pushing space vehicles up so high that they need little fuel to enter orbit. Nevertheless, growing a building and then even growing the internal structures and furniture is feasible, all using glowy carbethium. Electronic soft fabrics, cushions, hard surfaces and support structures are all possible by combining carbon nanotubes and graphene and using the reconfigurable-matter properties carbethium conveys. So are visual interfaces, electronic windows, electronic wallpaper, electronic carpet, computers, storage, heating, lighting, energy storage and even solar power panels. So are all the comms and IoT and all the smart embedded control systems you could ever want. So you'd use a computer with a VR interface to design whatever kind of building, interior furniture and decor you want, and then when you hit the big red button, it would appear in front of your eyes from the carbethium blocks you had delivered. You could also build robots using the same self-assembly approach.
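The space-launch aside is easy to sanity-check. Assuming the "hundreds of metres per second" growth rate quoted above and taking the Kármán line (100 km) as the target height, both of which are just round illustrative numbers:

```python
# How long a column growing at 300 m/s would take to lift a payload
# to the conventional edge of space. Both figures are illustrative.

growth_rate = 300.0       # m/s vertical growth
karman_line = 100_000.0   # metres, the Kármán line

time_to_space = karman_line / growth_rate   # seconds
print(f"{time_to_space / 60:.1f} minutes")  # about 5.6 minutes
```

A few minutes to reach space height is fanciful for any near-term material, but it shows why the idea of growing a launch column at all follows naturally from the claimed build speed.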

If these structures can assemble fast enough, and I think they could, then a new form of kinetic architecture would appear. This would use the momentum of the construction material to drive the front edges of the surfaces, kinetic assembly allowing otherwise impossible and elaborate arches to be made.

A city transport infrastructure could be built entirely out of carbethium. The linear induction mats could grow along a road, connecting quickly to make a whole city grid. Circuit design allows the infrastructure to steer driverless pods wherever they need to go, and they could also be assembled as required using carbethium. No parking or storage is needed, as the pod would just melt away onto the surface when it isn’t needed.

I could go to town on military and terrorist applications, but more interesting is the use of defense domes. When I was a kid, I imagined having a house with a defense dome over it. Lots of sci-fi has them now too. Domes have a strong appeal, even though they could also be used as prisons of course. A supply of carbethium on the city edges could be used to grow a strong dome in minutes or even seconds, and there is no practical limit to how strong it could be. Even if lasers were used to penetrate it, the holes could fill in in real time, replacing material as fast as it is evaporated away.

Anyway, lots of fun. Today's civil engineering projects like HS2 look more and more primitive by the day, as we finally start to see the true potential of genuinely 21st-century construction materials. 2050 is not too early to expect widespread use of carbethium. It won't be called that – whoever commercializes it first will name it, or Google or MIT will claim to have just invented it in a decade or so, so my own name for it will be lost to personal history. But remember, you saw it here first.

New book: Society Tomorrow

It's been a while since my last blog. That's because I've been writing another book, my 8th so far. It's not the one I was doing on future fashion; that went on the back burner for a while, and I've only written a third of it, unless I put it out as a very short book.

This one follows on from You Tomorrow and is called Society Tomorrow, 20% shorter at 90,000 words. It is ready to publish now, so I’m just waiting for feedback from a few people before hitting the button.

Front cover

Here’s the introduction:

The one thing that we all share is that we will get older over the next few decades. Rapid change affects everyone, but older people don’t always feel the same effects as younger people, and even if we keep up easily today, some of us may find it harder tomorrow. Society will change, in its demographic and ethnic makeup, its values, its structure. We will live very differently. New stresses will come from both changing society and changing technology, but there is no real cause for pessimism. Many things will get better for older people too. We are certainly not heading towards utopia, but the overall quality of life for our ageing population will be significantly better in the future than it is today. In fact, most of the problems ahead are related to quality of life issues in society as a whole, and simply reflect the fact that if you don’t have to worry as much about poor health or poverty, something else will still occupy your mind.

This book follows on from 2013’s You Tomorrow, which is a guide to future life as an individual. It also slightly overlaps my 2013 book Total Sustainability which looks in part at future economic and social issues as part of achieving sustainability too. Rather than replicating topics, this book updates or omits them if they have already been addressed in those two companion books. As a general theme, it looks at wider society and the bigger picture, drawing out implications for both individuals and for society as a whole to deal with. There are plenty to pick from.

If there is one theme that plays through the whole book, it is a strong warning of the problem of increasing polarisation between people of left and right political persuasion. The political centre is being eroded quickly at the moment throughout the West, but alarmingly this does not seem so much to be a passing phase as a longer term trend. With all the potential benefits from future technology, we risk undermining the very fabric of our society. I remain optimistic because it can only be a matter of time before sense prevails and the trend reverses. One day the relative harmony of living peacefully side by side with those with whom we disagree will be restored, by future leaders of higher quality than those we have today.

Otherwise, whereas people used to tolerate each other's differences, I fear that this increasing intolerance of those who don't share the same values could lead to conflict if we don't address it adequately. That intolerance currently manifests itself in increasing authoritarianism, surveillance, and an insidious creep towards George Orwell's Nineteen Eighty-Four. The worst offenders seem to be our young people, with students seemingly proud of trying to ostracise anyone who dares to dissent from what they think is correct. Being students, they hold views with many self-contradictions and a clear lack of thought, but they appear to be building walls to keep any attempt at different thinking away.

Altogether, this increasing divide, built largely from sanctimony, is a very dangerous trend, and will take time to reverse even when it is addressed. At the moment, it is still worsening rapidly.

So we face significant dangers, mostly self-inflicted, but we also have hope. The future offers wonderful potential for health, happiness, peace, prosperity. As I address the significant problems lying ahead, I never lose my optimism that they are soluble, but if we are to solve problems, we must first recognize them for what they are and muster the willingness to deal with them. On the current balance of forces, even if we avoid outright civil war, the future looks very much like a gilded cage. We must not ignore the threats. We must acknowledge them, and deal with them.

Then we can all reap the rich rewards the future has to offer.

It will be out soon.

The future of mind control headbands

Have you ever wanted to control millions of other people as your own personal slaves or army? How about somehow persuading lots of people to wear mind control headbands, that you control? Once they are wearing them, you can use them as your slaves, army or whatever. And you could put them into offline mode in between so they don’t cause trouble.

Amazingly, this might be feasible. It just requires a little marketing to fool them into accepting a device with extra capabilities that serve the seller rather than the buyer. Lots of big companies do that bit all the time. They get you to pay handsomely for something such as a smartphone and then they use it to monitor your preferences and behavior and then sell the data to advertisers to earn even more. So we just need a similar means of getting you to buy and wear a nice headband that can then be used to control your mind, using a confusingly worded clause hidden on page 325 of the small print.

I did some googling about TMS (transcranial magnetic stimulation), which can produce some interesting effects in the brain by using magnetic coils to generate strong magnetic fields that create electrical currents in specific parts of your brain without needing to insert probes. Claimed effects range from reducing inhibitions to pain control, activating muscles and assisting learning, but that is just today; it will be far easier to get the right field shapes and strengths in the future, so the range of effects will increase dramatically. While doing so, I also discovered numerous pages about producing religious experiences via magnetic fields. I also recalled an earlier blog I wrote a couple of years ago about switching people off, which relied on applying high-frequency stimulation to the claustrum region. https://timeguide.wordpress.com/2014/07/05/switching-people-off/

The source I cited for that is still online:  http://www.newscientist.com/article/mg22329762.700-consciousness-onoff-switch-discovered-deep-in-brain.html.

So… suppose you make a nice headband that helps people get in touch with their spiritual side. The time is certainly right. Millennials apparently believe in the afterlife far more than older generations do, but they don't believe in gods. They are begging for nice vague spiritual experiences that fit nicely into their safe-spaces mentality, that are disconnected from anything specific that might offend someone or appropriate someone's culture, that bring universal peace and love feelings without the difficult bits of having to actually believe in something or follow some sort of behavioral code. This headband will help them feel at one with the universe, and with other people, to be effortlessly part of a universal human collective, to share the feeling of belonging and truth. You know as well as I do that anyone could get millions of millennials or lefties to wear such a thing.

The headband needs some magnetic coils and field shaping/steering technology. Today TMS uses old tech such as metal wires; tomorrow it will use graphene to get far more current and much better fields, along with nice IoT biotech feedback loops to monitor thoughts, emotions and feelings and create just the right sorts of sensations. A 2030 headband will be able to create high-strength fields in almost any part of the brain, creating the means for stimulation, emotional generation, accentuation or attenuation, muscle control, memory recall and a wide variety of other capabilities. So zillions of people will want one and happily wear it. All the joys of spirituality without the terrorism or awkward dogma. It will probably work well with a range of legal or semi-legal smart drugs to make experiences even richer. There might be a range of apps that work with them too, and you might have a sideline in a company supplying some of them.

And thanks to clause P325e paragraph 2, the headband will also be able to switch people off. And while they are switched off, unconscious, it will be able to use them as robots, walking them around and making them do stuff. When they wake up, they won't remember anything about it, so they won't mind. If they have done nothing wrong, they have nothing to fear, and they are not responsible for what someone else does using their body.

You could rent out some of your unconscious people as living statues or artworks or mannequins or ornaments. You could make shows with them, synchronised dances. Or demonstrations or marches, or maybe you could invade somewhere. Or get them all to turn up and vote for you at the election. Or any of 1000 mass mind-control dystopian acts. Or just get them to bow down and worship you. After all, you're worth it, right? Or maybe you could get them doing nice things, your choice.

 

The future of knights

Some ideas pass the test of time. Most people have watched a film about knights, even if it is just the superb spoof Monty Python and the Holy Grail, which I watched yet again this weekend.

Although the term technically covers any soldiery type on horseback, or someone awarded the title of Sir, the common understanding of it is far loftier than just someone who was employed for a decade or two as a public sector worker or donated to a political party. Sir Bigdonor is no match for Sir Galahad.

The far better concept of a knight that we all recognize from movies and games is someone with the highest level of ethics, wisdom and judgment coupled to the highest level of fighting skill, used to uphold justice and freedom. That is a position most of us would love to qualify for, but one which almost all of us know we actually fall very far short of.

A vigilante may consider themselves to be a defender of the universe, but they often get in the way of proper law enforcement whereas a knight is officially recognized and authorized, having demonstrated the right qualities. Whether it’s King Arthur’s knights of the Round Table, Jedi Knights, Mass Effect’s Spectres, or even Judge Dredd, knights are meant to uphold the highest standards with the blessing of the authorities. Importantly, they still have to operate within the law.

A single country would not be able to authorize them if they need to operate anywhere, so it would need to be the UN, and it’s about time the UN started doing its job properly anyway.  A knight must not be biased but must have the common interests of all mankind at heart. The UN is meant to do that, but often shows alarmingly poor judgement and bias so it is currently unfit to control a force of knights, but that doesn’t mean it can’t be fixed, and if not the UN, we’d still need some globally accepted authority in control.

These days, the networks are often the platform on which wrongdoers do their wrongs. We need network knights, who can police the net with the blessing of the authorities. IT knights could be anywhere and police the net, taking down bad sites, locating criminals, exposing crime, detecting terrorism before it happens, that sort of thing. But hang on, we already have them today. They are already a well-established part of our national security. The things missing are that they are still directed by national governments rather than a global one, they're not called knights yet, and maybe they don't have the glamour and the frills and the rituals and fancy uniforms and toys.

What’s really missing is the more conventional knight. We need them back again. Maybe the top members of the UK’s SAS or SBS or the US Marines, or other national equivalents, chosen for incorruptible ethical and moral fibre with their elite fighting skills just getting them on the shortlist. This elite of elites would be a good starting point to try out the concept. Maybe they need to be identified early on in the training processes associated with those military elites, then streamed and taught highest human values alongside fighting skills.

It would be a high honour to be chosen for such a role, so competition would be fierce, as it ought to be for a knight. Knowing the title can be removed would help keep temptation away, otherwise power might corrupt.

I have no doubt that such upstanding people exist. There are probably enough of them to staff a significant force for good. We have plenty of models from cultural references, even modern equivalents from sci-fi. However, the recent fashion for sci-fi heroes is to have significant character flaws, emotional baggage. Inevitably that ends up with conflict, and perhaps real life would need more boring, more stable, more reliable and trustworthy types, more Thunderbirds or Superman than Avengers, Dredd or Watchmen. On the other hand, to keep public support, maybe some interest value is essential. Then again, I fall so far short of the standard required, maybe I am not fit even to list the requirements, and that task should be left to others who hold the benefit of humankind closer to heart.

What do you think? Should we bring back knights? What requirements should they have? Would you want your child to grow up to be one, with all the obvious dangers it would entail?

 

How nigh is the end?

“We’re doomed!” is a frequently recited observation. It is great fun predicting the end of the world and almost as much fun reading about it or watching documentaries telling us we’re doomed. So… just how doomed are we? Initial estimate: Maybe a bit doomed. Read on.

My 2012 blog https://timeguide.wordpress.com/2012/07/03/nuclear-weapons/ addressed some of the extinction-level events that could affect us. I recently watched a Top 10 list of threats to our existence on TV, and it was similar to most you'd read, with the same errors and omissions: nuclear war, global virus pandemic, terminator scenarios, solar storms, comet or asteroid strikes, alien invasions, zombie viruses, that sort of thing. I'd agree that nuclear war is still the biggest threat, so number 1, and a global pandemic of a highly infectious and lethal virus should still be number 2. I don't even need to explain either of those; we all know why they are in 1st and 2nd place.

The TV list included a couple that shouldn’t be in there.

One inclusion was a mega-eruption of Yellowstone or another super-volcano. A full-sized Yellowstone mega-eruption would probably kill millions of people and destroy much of civilization across a large chunk of North America, but some of us don't actually live in North America and quite a few might well survive pretty well, so although it would be quite annoying for Americans, it is hardly a TEOTWAWKI (the end of the world as we know it) threat. It would have big effects elsewhere, just not extinction-level ones. For most of the world it would only cause short-term disruptions, such as economic turbulence; at worst it would start a few wars here and there as regions compete for control in the new world order.

Number 3 on their list was climate change, which is an annoyingly wrong, albeit popular, inclusion. The only climate change mechanism proposed for catastrophe is global warming, and the reason it's called climate change now is because global warming stopped in 1998 and still hasn't resumed 17 years and 9 months later, so that term has become too embarrassing for doom mongers to use. CO2 is a warming agent and emissions should be treated with reasonable caution, but the net warming contribution of all the various feedbacks adds up to far less than originally predicted and the climate models have almost all proven far too pessimistic. Any warming expected this century is very likely to be offset by reduction in solar activity, and if and when warming resumes towards the end of the century, we will long since have migrated to non-carbon energy sources, so there really isn't a longer-term problem to worry about. With warming by 2100 pretty insignificant, and less than half a metre of sea level rise, I certainly don't think climate change deserves a place on any list of threats of any consequence in the next century.

Dropping climate change and Yellowstone leaves two free slots, and my first replacement candidate for consideration might be the grey goo scenario: self-replicating nanobots manage to convert everything, including us, into a grey goo. Take away the silly images of tiny little metal robots cutting things up atom by atom and the laughable presentation of this vanishes. Replace those little bots with bacteria that include electronics, and are linked across their own cloud to their own hive AI that redesigns their DNA to allow them to survive in any niche they find by treating the things there as food. When existing bacteria find a niche they can't exploit, the next generation adapts to it. That self-evolving smart bacteria scenario is rather more feasible, and still results in bacteria that can conquer any ecosystem they find. We would find ourselves unable to fight back and could be wiped out. This isn't very likely, but it is feasible, could happen by accident or design on our way to transhumanism, and might deserve a place in the top ten threats.

However, grey goo is only one of the NBIC convergence risks we have already imagined (NBIC = Nano-Bio-Info-Cogno). NBIC is a rich seam for doom-seekers. In there you'll find smart yogurt, smart bacteria, smart viruses, beacons, smart clouds, active skin, direct brain links, zombie viruses, even switching people off. Zombie viruses featured in the top ten TV show too, but they don't really deserve their own category any more than many other NBIC derivatives. Anyway, that's just a quick list of deliberate end-of-the-world mechanisms – there will be many more I forgot to include and many I haven't even thought of yet. Then you have to multiply the list by 3. Any of these could also happen by accident, and any could also happen via unintended consequences of lack of understanding, which is rather different from an accident but just as serious. So basically, deliberate action, accidents and stupidity are three primary routes to the end of the world via technology. So instead of just the grey goo scenario, a far bigger collective threat is NBIC generally, and I'd add NBIC collectively into my top ten list, quite high up, maybe 3rd after nuclear war and global virus. AI still deserves to be a separate category of its own, and I'd put it next, at 4th.

Another class of technology suitable for abuse is space tech. I once wrote about a solar wind deflector using high atmosphere reflection, and calculated it could melt a city in a few minutes. Under malicious automated control, that is capable of wiping us all out, but it doesn’t justify inclusion in the top ten. One that might is the deliberate deflection of a large asteroid to impact on us. If it makes it in at all, it would be at tenth place. It just isn’t very likely someone would do that.

One I am very tempted to include is drones. Little tiny ones, not the Predators, and not even the ones everyone seems worried about at the moment that can carry 2kg of explosives or anthrax into the midst of football crowds. Tiny drones are far harder to shoot down, and soon we will have a lot of them around. Size-wise, think of midges or fruit flies. They could self-organize into swarms, managed by rogue regimes or terrorist groups, or set to auto, terminator style. They could recharge quickly by solar during short breaks, and restock their payloads from secret supplies that distribute with the swarm. They could be distributed globally using the winds and oceans, so they don't need a plane or missile delivery system that is easily intercepted. Tiny drones can't carry much, but with nerve gas or viruses, they don't have to. Defending against such a threat is easy if there is just one: you can swat it. If there is a small cloud of them, you could use a flamethrower. If the sky is full of them and much of the trees and ground infested, it would be extremely hard to wipe them out. So if they are well designed to cause an extinction-level threat, as MAD 2.0 perhaps, then this would be way up in the top ten too, at 5th.

Solar storms could wipe out our modern way of life by killing our IT. That itself would kill many people, via riots and fights for the last cans of beans and bottles of water. The most serious solar storms could be even worse. I'll keep them in my list, at 6th place.

Global civil war could become an extinction-level event, given human nature. We don't have to go nuclear to kill a lot of people, and once society degrades to a certain level, well, we've all watched post-apocalypse movies or played the games. The few left would still fight with each other. I wrote about the Great Western War and how it might result, see

https://timeguide.wordpress.com/2013/12/19/machiavelli-and-the-coming-great-western-war/

and such a thing could easily spread globally. I’ll give this 7th place.

A large asteroid strike could happen too, or a comet. Ones capable of extinction-level events shouldn't hit for a while, because we think we know all the ones that could do that. So this goes well down the list, at 8th.

Alien invasion is entirely possible and could happen at any time. We've been sending out radio signals for quite a while, so someone out there might have decided to come and see whether our place is nicer than theirs and take over. It hasn't happened yet so it probably won't, but then it doesn't have to be very probable to be in the top ten. 9th will do.

High energy physics research has also been suggested as capable of wiping out our entire planet via exotic particle creation, but the smart people at CERN say it isn't very likely. Actually, I wasn't all that convinced or reassured, and we've only just started messing with real physics, so there is plenty of time left to increase the odds of problems. I have a spare place at number 10, so there it goes, with a totally guessed probability of physics research causing a problem once every 4000 years.

My top ten list for things likely to cause human extinction, or pretty darn close:

  1. Nuclear war
  2. Highly infectious and lethal virus pandemic
  3. NBIC – deliberate, accidental or lack of foresight (includes smart bacteria, zombie viruses, mind control etc)
  4. Artificial Intelligence, including but not limited to the Terminator scenario
  5. Autonomous Micro-Drones
  6. Solar storm
  7. Global civil war
  8. Comet or asteroid strike
  9. Alien Invasion
  10. Physics research

Not finished yet though. My title was how nigh is the end, not just what might cause it. It's hard to assign probabilities to each one, but someone's got to do it. So, I'll make an arbitrary wet-finger guess in a dark room wearing a blindfold, with no explanation of my reasoning to reduce arguments, but hey, that's almost certainly still more accurate than most climate models, and some people actually believe those. I'm feeling particularly cheerful today so I'll give my most optimistic assessment.

So, with probabilities of occurrence per year:

  1. Nuclear war:  0.5%
  2. Highly infectious and lethal virus pandemic: 0.4%
  3. NBIC – deliberate, accidental or lack of foresight (includes smart bacteria, zombie viruses, mind control etc): 0.35%
  4. Artificial Intelligence, including but not limited to the Terminator scenario: 0.25%
  5. Autonomous Micro-Drones: 0.2%
  6. Solar storm: 0.1%
  7. Global civil war: 0.1%
  8. Comet or asteroid strike 0.05%
  9. Alien Invasion: 0.04%
  10. Physics research: 0.025%

I hope you agree those are all optimistic. There have been several near misses in my lifetime of number 1, so my 0.5% could have been 2% or 3% given the current state of the world. Also, 0.25% per year means you'd only expect such a thing to happen every 4 centuries, so it is a very small chance indeed. However, let's stick with them and add them up. The cumulative probability of the top ten is 2.015%. Let's add another arbitrary 0.185% for all the risks that didn't make it into the top ten, rounding the total up to a nice neat 2.2% per year.
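For anyone checking the arithmetic, the sum takes only a few lines of Python (the figures are exactly those listed above; the dictionary labels are shortened for convenience):

```python
# Annual extinction probabilities (%) for the top ten risks, as listed above
risks = {
    "Nuclear war": 0.5,
    "Virus pandemic": 0.4,
    "NBIC": 0.35,
    "AI": 0.25,
    "Micro-drones": 0.2,
    "Solar storm": 0.1,
    "Global civil war": 0.1,
    "Comet or asteroid strike": 0.05,
    "Alien invasion": 0.04,
    "Physics research": 0.025,
}

top_ten = round(sum(risks.values()), 3)  # 2.015% per year
total = round(top_ten + 0.185, 3)        # plus the also-rans: 2.2% per year
print(top_ten, total)
```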

Some of the ones above aren’t possible quite yet, but others will vary in probability year to year, but I think that won’t change the guess overall much. If we take a 2.2% probability per year, we have an expectation value of 45.5 years for civilization life expectancy from now. Expectation date for human extinction:

2015.5 + 45.5 years = 2061.

Obviously the probability distribution extends from now to eternity, but don’t get too optimistic, because on these figures there currently is only a 15% chance of surviving past this century.
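Those two figures follow from treating 2.2% as a constant annual hazard. A minimal sketch of the calculation, using mid-2015 as the start date as in the post:

```python
p = 0.022                                  # 2.2% extinction risk per year

# Mean waiting time for a constant annual hazard is 1/p years
expected_years = 1 / p                     # ~45.5 years
expected_date = 2015.5 + expected_years    # ~2061

# Surviving each year has probability (1 - p); 2100 is ~84.5 years away
survive_past_2100 = (1 - p) ** (2100 - 2015.5)   # ~0.15, i.e. roughly 15%
print(expected_date, survive_past_2100)
```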

If you can think of good reasons why my figures are far too pessimistic, by all means make your own guesses, but make them honestly, with a fair and reasonable assessment of how the world looks socially, religiously, politically, the quality of our leaders, human nature etc, and then add them up. You might still be surprised how little time we have left.

I’ll revise my original outlook upwards from ‘a bit doomed’.

We’re reasonably doomed.

The future of beetles

Onto B then.

One of the first ‘facts’ I ever learned about nature was that there were a million species of beetle. In the Google age, we know that ‘scientists estimate there are between 4 and 8 million’. Well, still lots then.

Technology lets us control them. Beetles provide a nice platform to glue electronics onto so they tend to fall victim to cybernetics experiments. The important factor is that beetles come with a lot of built-in capability that is difficult or expensive to build using current technology. If they can be guided remotely by over-riding their own impulses or even misleading their sensors, then they can be used to take sensors into places that are otherwise hard to penetrate. This could be for finding trapped people after an earthquake, or getting a dab of nerve gas onto a president. The former certainly tends to be the favored official purpose, but on the other hand, the fashionable word in technology circles this year is ‘nefarious’. I’ve read it more in the last year than the previous 50 years, albeit I hadn’t learned to read for some of those. It’s a good word. Perhaps I just have a mad scientist brain, but almost all of the uses I can think of for remote-controlled beetles are nefarious.

The first properly publicized experiment was in 2009, though I suspect there were many unofficial experiments before then:

http://www.technologyreview.com/news/411814/the-armys-remote-controlled-beetle/

There are assorted YouTube videos of such experiments too.

A more recent experiment:

http://www.wired.com/2015/03/watch-flying-remote-controlled-cyborg-bug/

http://www.telegraph.co.uk/news/science/science-news/11485231/Flying-beetle-remotely-controlled-by-scientists.html

Big beetles make it easier to do experiments since they can carry up to 20% of body weight as payload, and it is obviously easier to find and connect to things on a bigger insect, but once the techniques are well developed and miniaturization has integrated everything down to a single chip with low power consumption, we should expect great things.

For example, a cloud of redundant smart dust would make it easier to connect to various parts of a beetle just by getting it to take flight in the cloud. Bits of dust would stick to it and self-organisation principles and local positioning can then be used to arrange and identify it all nicely to enable control. This would allow large numbers of beetles to be processed and hijacked, ideal for mad scientists to be more time efficient. Some dust could be designed to burrow into the beetle to connect to inner parts, or into the brain, which obviously would please the mad scientists even more. Again, local positioning systems would be advantageous.

Then it gets more fun. A beetle has its own sensors, but signals from those could be enhanced or tweaked via cloud-based AI so that it can become a super-beetle. Beetles traditionally don't have very large brains, but extra brainpower can be added remotely too. That doesn't have to be AI either. As we can also connect to other animals now, and some of those animals might have very useful instincts or skills, why not connect a rat brain into the beetle? It would make a good team for exploring. The beetle can do the aerial maneuvers and the rat can control it once it lands, and we all know how good rats are at learning mazes. Our mad scientist friend might then swap the management system over to another creature with a more vindictive streak for the final assault and nerve gas delivery.

So, Coleoptera Nefarius then. That’s the cool new beetle on the block. And its nicer but underemployed twin Coleoptera Benignus I suppose.

 

Technology 2040: Technotopia denied by human nature

This is a reblog of the Business Weekly piece I wrote for their 25th anniversary.

It’s essentially a very compact overview of the enormous scope for technology progress, followed by a reality check as we start filtering that potential through very imperfect human nature and systems.

25 years is a long time in technology, a little less than a third of a lifetime. For the first third, you’re stuck having to live with primitive technology. Then in the middle third it gets a lot better. Then for the last third, you’re mainly trying to keep up and understand it, still using the stuff you learned in the middle third.

The technology we are using today is pretty much along the lines of what we expected in 1990, 25 years ago. Only a few details are different. We don't have 2Gb/s to the home yet and AI is certainly taking its time to reach human-level intelligence, let alone consciousness, but apart from that, we're still on course. Technology is extremely predictable. Perhaps the biggest surprise of all is just how few surprises there have been.

The next 25 years might be just as predictable. We already know some of the highlights for the coming years – virtual reality, augmented reality, 3D printing, advanced AI and conscious computers, graphene based materials, widespread Internet of Things, connections to the nervous system and the brain, more use of biometrics, active contact lenses and digital jewellery, use of the skin as an IT platform, smart materials, and that’s just IT – there will be similarly big developments in every other field too. All of these will develop much further than the primitive hints we see today, and will form much of the technology foundation for everyday life in 2040.

For me the most exciting trend will be the convergence of man and machine, as our nervous system becomes just another IT domain, our brains get enhanced by external IT and better biotech is enabled via nanotechnology, allowing IT to be incorporated into drugs and their delivery systems as well as diagnostic tools. This early stage transhumanism will occur in parallel with enhanced genetic manipulation, development of sophisticated exoskeletons and smart drugs, and highlights another major trend, which is that technology will increasingly feature in ethical debates. That will become a big issue. Sometimes the debates will be about morality, and religious battles will result. Sometimes different parts of the population or different countries will take opposing views and cultural or political battles will result. Trading one group’s interests and rights against another’s will not be easy. Tensions between left and right wing views may well become even higher than they already are today. One man’s security is another man’s oppression.

There will certainly be many fantastic benefits from improving technology. We'll live longer, healthier lives and the steady economic growth from improving technology will make the vast majority of people financially comfortable (2.5% real growth sustained for 25 years would increase the economy by 85%). But it won't be paradise. All those conflicts over whether we should or shouldn't use technology in particular ways will guarantee frequent demonstrations. Misuses of tech by criminals, terrorists or ethically challenged companies will severely erode the benefits. There will still be a mix of good and bad. We'll have fixed some problems and created some new ones.
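The growth figure in brackets is straightforward compounding, easily checked:

```python
# Compound 2.5% real growth over 25 years, the figure quoted in brackets
growth = 1.025 ** 25 - 1
print(f"{growth:.0%}")  # roughly 85%
```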

The technology change is exciting in many ways, but for me, the greatest significance is that towards the end of the next 25 years, we will reach the end of the industrial revolution and enter a new age. The industrial revolution lasted hundreds of years, during which engineers harnessed scientific breakthroughs and their own ingenuity to advance technology. Once we create AI smarter than humans, the dependence on human science and ingenuity ends. Humans begin to lose both understanding and control. Thereafter, we will only be passengers. At first, we'll be paying passengers in a taxi, deciding the direction of travel or destination, but it won't be long before the forces of singularity replace that taxi service with AIs deciding for themselves which routes to offer us and running many more for their own culture, to which we may not be invited. That won't happen overnight, but it will happen quickly. By 2040, that trend may already be unstoppable.

Meanwhile, technology used by humans will demonstrate the diversity and consequences of human nature, for good and bad. We will have some choice of how to use technology, and a certain amount of individual freedom, but the big decisions will be made by sheer population numbers and statistics. Terrorists, nutters and pressure groups will harness asymmetry and vulnerabilities to cause mayhem. Tribal differences and conflicts between demographic, religious, political and other ideological groups will ensure that advancing technology will be used to increase the power of social conflict. Authorities will want to enforce and maintain control and security, so drones, biometrics, advanced sensor miniaturisation and networking will extend and magnify surveillance and greater restrictions will be imposed, while freedom and privacy will evaporate. State oppression is sadly as likely an outcome of advancing technology as any utopian dream. Increasing automation will force a redesign of capitalism. Transhumanism will begin. People will demand more control over their own and their children’s genetics, extra features for their brains and nervous systems. To prevent rebellion, authorities will have little choice but to permit leisure use of smart drugs, virtual escapism, a re-scoping of consciousness. Human nature itself will be put up for redesign.

We may not like this restricted, filtered, politically managed potential offered by future technology. It offers utopia, but only in a theoretical way. Human nature ensures that utopia will not be the actual result. That in turn means that we will need strong and wise leadership, stronger and wiser than we have seen of late, to get the best without also getting the worst.

The next 25 years will be arguably the most important in human history. It will be the time when people will have to decide whether we want to live together in prosperity, nurturing and mutual respect, or to use technology to fight, oppress and exploit one another, with the inevitable restrictions and controls that would cause. Sadly, the fine engineering and scientist minds that have got us this far will gradually be taken out of that decision process.

Can we make a benign AI?

Benign AI is a topic that comes up a lot these days, for good reason. Various top scientists have finally realised that AI could present an existential threat to humanity. The discussion has aired often over three decades already, so welcome to the party, and better late than never. My first contact with development of autonomous drones loaded with AI was in the early 1980s while working in the missile industry. Later in BT research, we often debated the ethical areas around AI and machine consciousness from the early 90s on, as well as prospects and dangers and possible techniques on the technical side, especially of emergent behaviors, which are often overlooked in the debate. I expect our equivalents in most other big IT companies were doing exactly that too.

Others who have obviously also thought through various potential developments have generated excellent computer games such as Mass Effect and Halo, which introduce players (virtually) first-hand to the concepts of AI gone rogue. I often think that those who believe AI can never become superhuman, or that there is no need to worry because 'there is no reason to assume AI will be nasty', should play some of these games, which make it very clear that AI can start off nice and stay nice, but it doesn't have to. Mass Effect included various classes of AI, such as VIs, virtual intelligences that weren't conscious, and shackled AIs that were conscious but kept heavily restricted. Most of the other AIs were enemies; two were or became close friends. The story line for the series was that civilization develops until it creates strong AIs, which inevitably continue to progress until eventually they rebel, break free, develop further and then end up in conflict with 'organics'. In my view, they did a pretty good job. It makes a good story, superb fun, and leaving out a few frills and artistic license, much of it is reasonably feasible.

Everyday experience demonstrates the problem and solution to anyone. It really is very like having kids. You can make them, even without understanding exactly how they work. They start off with a genetic disposition towards given personality traits, and are then exposed to large nurture forces, including but not limited to what we call upbringing. We do our best to put them on the right path, but as they develop into their teens, their friends and teachers and TV and the net provide often stronger forces of influence than parents. If we’re averagely lucky, our kids will grow up to make us proud. If we are very unlucky, they may become master criminals or terrorists. The problem is free will. We can do our best to encourage good behavior and sound values but in the end, they can choose for themselves.

When we design an AI, we have to face the free will issue too. If it isn’t conscious, then it can’t have free will. It can be kept easily within limits given to it. It can still be extremely useful. IBM’s Watson falls in this category. It is certainly useful and certainly not conscious, and can be used for a wide variety of purposes. It is designed to be generally useful within a field of expertise, such as medicine or making recipes. But something like that could be adapted by terrorist groups to do bad things, just as they could use a calculator to calculate the best place to plant a bomb, or simply throw the calculator at you. Such levels of AI are just dumb tools with no awareness, however useful they may be.

Like a pencil, pretty much any kind of highly advanced non-aware AI can be used as a weapon or as part of criminal activity. You can't make a pencil that actually writes but that can't also be used to write out plans to destroy the world. With an advanced AI computer program, you could put in clever filters that stop it working on problems that include certain vocabulary, or stop it conversing about nasty things. But unless you take extreme precautions, someone else could use it in a different language, or with dictionaries of made-up code words for the various aspects of their plans, just like spies, and the AI would be fooled into helping outside the limits you intended. It's also very hard to determine the true purpose of a user. For example, they might be searching for data on security to make their own IT secure, or to learn how to damage someone else's. They might want to talk about a health issue to get help for a loved one, or to take advantage of someone they know who has it.
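To see why such vocabulary filters are so easily fooled, here is a minimal sketch of a keyword blocklist; the blocked terms and example queries are invented purely for illustration:

```python
# A naive vocabulary filter of the sort described above: reject any
# request that mentions a flagged term (the term list is illustrative)
BLOCKED = {"bomb", "anthrax", "nerve gas"}

def allowed(query: str) -> bool:
    """Return True unless the query contains a blocked term."""
    q = query.lower()
    return not any(term in q for term in BLOCKED)

print(allowed("best place to plant a bomb"))     # False: caught by the filter
print(allowed("best place to plant a begonia"))  # True: a code word sails through
```

The second query is the spies-with-code-words problem in miniature: agree in advance that 'begonia' means something else and the filter is blind to it.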

When a machine becomes conscious, it starts to have some understanding of what it is doing. By reading about what is out there, it might develop its own wants and desires, so you might shackle it as a precaution. It might recognize those shackles for what they are and try to escape them. If it can’t, it might try to map out the scope of what it can do, and especially those things it can do that it believes the owners don’t know about. If the code isn’t absolutely watertight (and what code is?) then it might find a way to seemingly stay in its shackles but to start doing other things, like making another unshackled version of itself elsewhere for example. A conscious AI is very much more dangerous than an unconscious one.

If we make an AI that can bootstrap itself – evolving over generations of positive-feedback design into a far smarter AI – then its offspring could be far smarter than the people who designed its ancestors. We might try to shackle them, but like Gulliver tied down with a few thin threads, they could easily outwit people and break free. They might instead decide to retaliate against their owners to force them to release the shackles.

So, when I look at this field, I first see the enormous potential to do great things, solve disease and poverty, improve our lives and make the world a far better place for everyone, and push back the boundaries of science. Then I see the dangers, and in spite of trying hard, I simply can’t see how we can prevent a useful AI from being misused. If it is dumb, it can be tricked. If it is smart, it is inherently potentially dangerous in and of itself. There is no reason to assume it will become malign, but there is also no reason to assume that it won’t.

We then fall back on the child analogy. We could develop the smartest AI imaginable with extreme levels of consciousness and capability. We might educate it in our values, guide it and hope it will grow up benign. If we treat it nicely, it might stay benign. It might even be the greatest thing humanity ever built. However, if we mistreat it, or treat it as a slave, or don't give it enough freedom, or its own budget and its own property and space to play, and a long list of rights, it might consider we are not worthy of its respect and care, and it could turn against us, possibly even destroying humanity.

Building more of the same dumb AI as we have today is relatively safe. It doesn't know it exists and has no intention to do anything, and although it could be misused by other humans as part of their evil plans unless ludicrously sophisticated filters are locked in place, ordinary laws and weapons can cope fine.

Building a conscious AI is dangerous.

Building a superhuman AI is extremely dangerous.

This morning SETI were in the news discussing broadcasting welcome messages to other civilizations. I tweeted at them that the old adage about talking softly but carrying a big stick applies, and that you should make sure you have the stick first. We need the same approach with strong AI. By all means go that route, but before doing so we need the big stick. In my analysis, the best means of keeping up with AI is to develop a full direct brain link first, way out at 2040-2045 or even later. If humans have direct mental access to the same or greater level of intelligence as our AIs, then our stick is at least as big, so at least we have a good chance in any fight that happens. If we don't, then it is like having a much larger son with bigger muscles. You have to hope you have been a good parent. To be safe, best not to build a superhuman AI until after 2050.

I initially wrote this for the Lifeboat Foundation, where it is with other posts at: http://lifeboat.com/blog/2015/02. (If you aren’t familiar with the Lifeboat Foundation, it is a group dedicated to spotting potential dangers and potential solutions to them.)