Category Archives: consciousness

Future sex, gender and relationships: how close can you get?

Using robots for gender play

I recently gave a public talk at the British Academy about future sex, gender, and relationships, asking the question “How close can you get?” and considering particularly the impact of robots. The above slide is an example. People will one day (between 2050 and 2065, depending on their budget) be able to use an android body as their own or even swap bodies with another person. Some will do so to be young again; many will do so to swap gender. Lots will do both. I often enjoy playing as a woman in computer games, so why not ‘come back’ and live all over again as a woman for real? Except I’ll be 90 in 2050.

The British Academy kindly uploaded the audio track from my talk at

If you want to see the full presentation, here is the PowerPoint file as a pdf:


I guess it is theoretically possible to listen to the audio while reading the presentation. Most of the slides are fairly self-explanatory anyway.

Needless to say, the copyright of the presentation belongs to me, so please don’t reproduce it without permission.


On Independence Day, remember that the most important independence is independence of thought

Division is the most obvious feature of the West right now. The causes are probably many, but one of the biggest must be the reinforcement of views that people experience via today’s media, and especially social media. People tend to read news from sources that agree with them, and while immersed in a crowd of others sharing the same views, any biases they had quickly seem to be the norm. In the absence of face-to-face counterbalances, extreme views may be shared and normalized, and drift towards the extremes is enabled. Demonisation of those with opposing views often follows. This is one of the two main themes of my new book Society Tomorrow, the other being the trend towards 1984, which is somewhat related, since censorship follows from division.

It is healthy to make sure you are exposed to views across the field. When you regularly see the same news with very different spins, and notice which news doesn’t even appear in some channels, it makes you less vulnerable to bias. If you end up disagreeing with some people, that is fine; better to be right than popular. Other independent thinkers won’t dump you just because you disagree with them. Only clones will, and you should ask whether they matter that much.

Bias is an error source; it is not healthy. If you can’t filter bias, you can’t make good models of the world, and you can’t make good predictions. Independent thought is healthy, even when it is critical or skeptical. It is right to challenge what you are told, not to rejoice that it agrees with what you already believed. Learning to filter bias from the channels you expose yourself to means your conclusions, your thoughts, and your insights are your own. Your mind is your own, not just another clone.

Theoretical freedom means nothing if your mind has been captured and enslaved.

Celebrate Independence Day by breaking free from your daily read, or making sure you start reading other sources too. Watch news channels that you find supremely irritating sometimes. Follow people you profoundly disagree with. Stay civil, but more importantly, stay independent. Liberate your consciousness, set your mind free.


New book: Society Tomorrow

It’s been a while since my last blog. That’s because I’ve been writing another book, my 8th so far. It’s not the one I was doing on future fashion; that went on the back burner for a while. I’ve only written a third of it, so unless I put it out as a very short book, it will have to wait.

This one follows on from You Tomorrow and is called Society Tomorrow, 20% shorter at 90,000 words. It is ready to publish now, so I’m just waiting for feedback from a few people before hitting the button.


Here’s the introduction:

The one thing that we all share is that we will get older over the next few decades. Rapid change affects everyone, but older people don’t always feel the same effects as younger people, and even if we keep up easily today, some of us may find it harder tomorrow. Society will change, in its demographic and ethnic makeup, its values, its structure. We will live very differently. New stresses will come from both changing society and changing technology, but there is no real cause for pessimism. Many things will get better for older people too. We are certainly not heading towards utopia, but the overall quality of life for our ageing population will be significantly better in the future than it is today. In fact, most of the problems ahead are related to quality of life issues in society as a whole, and simply reflect the fact that if you don’t have to worry as much about poor health or poverty, something else will still occupy your mind.

This book follows on from 2013’s You Tomorrow, which is a guide to future life as an individual. It also slightly overlaps my 2013 book Total Sustainability which looks in part at future economic and social issues as part of achieving sustainability too. Rather than replicating topics, this book updates or omits them if they have already been addressed in those two companion books. As a general theme, it looks at wider society and the bigger picture, drawing out implications for both individuals and for society as a whole to deal with. There are plenty to pick from.

If there is one theme that plays through the whole book, it is a strong warning of the problem of increasing polarisation between people of left and right political persuasion. The political centre is being eroded quickly at the moment throughout the West, but alarmingly this does not seem so much to be a passing phase as a longer term trend. With all the potential benefits from future technology, we risk undermining the very fabric of our society. I remain optimistic because it can only be a matter of time before sense prevails and the trend reverses. One day the relative harmony of living peacefully side by side with those with whom we disagree will be restored, by future leaders of higher quality than those we have today.

Otherwise, whereas people used to tolerate each other’s differences, I fear that this increasing intolerance of those who don’t share the same values could lead to conflict if we don’t address it adequately. That intolerance currently manifests itself in increasing authoritarianism, surveillance, and an insidious creep towards George Orwell’s Nineteen Eighty-Four. The worst offenders seem to be our young people, with students seemingly proud of trying to ostracise anyone who dares to disagree with what they think is correct. Being students, their views show many self-contradictions and a clear lack of thought, but they appear to be building walls to keep any attempt at different thought away.

Altogether, this increasing divide, built largely from sanctimony, is a very dangerous trend, and will take time to reverse even when it is addressed. At the moment, it is still worsening rapidly.

So we face significant dangers, mostly self-inflicted, but we also have hope. The future offers wonderful potential for health, happiness, peace, prosperity. As I address the significant problems lying ahead, I never lose my optimism that they are soluble, but if we are to solve problems, we must first recognize them for what they are and muster the willingness to deal with them. On the current balance of forces, even if we avoid outright civil war, the future looks very much like a gilded cage. We must not ignore the threats. We must acknowledge them, and deal with them.

Then we can all reap the rich rewards the future has to offer.

It will be out soon.

The future of mind control headbands

Have you ever wanted to control millions of other people as your own personal slaves or army? How about somehow persuading lots of people to wear mind control headbands that you control? Once they are wearing them, you can use them as your slaves, army or whatever. And you could put them into offline mode in between so they don’t cause trouble.

Amazingly, this might be feasible. It just requires a little marketing to fool them into accepting a device with extra capabilities that serve the seller rather than the buyer. Lots of big companies do that bit all the time. They get you to pay handsomely for something such as a smartphone, then use it to monitor your preferences and behavior, and then sell the data to advertisers to earn even more. So we just need a similar means of getting you to buy and wear a nice headband that can then be used to control your mind, using a confusingly worded clause hidden on page 325 of the small print.

I did some googling about TMS (transcranial magnetic stimulation), which can produce some interesting effects in the brain by using magnetic coils to generate strong magnetic fields that create electrical currents in specific parts of your brain without needing to insert probes. Claimed effects range from reducing inhibitions and controlling pain to activating muscles and assisting learning, but that is just today; it will be far easier to get the right field shapes and strengths in the future, so the range of effects will increase dramatically. While doing so, I also discovered numerous pages about producing religious experiences via magnetic fields. I also recalled an earlier blog I wrote a couple of years ago about switching people off, which relied on applying high frequency stimulation to the claustrum region.

The source I cited for that is still online:

So… suppose you make a nice headband that helps people get in touch with their spiritual side. The time is certainly right. Millennials apparently believe in the afterlife far more than older generations, but they don’t believe in gods. They are begging for nice vague spiritual experiences that fit nicely into their safe spaces mentality, that are disconnected from anything specific that might offend someone or appropriate someone’s culture, that bring universal peace and love feelings without the difficult bits of having to actually believe in something or follow some sort of behavioral code. This headband will help them feel at one with the universe, and with other people, to be effortlessly part of a universal human collective, to share the feeling of belonging and truth. You know as well as I do that anyone could get millions of millennials or lefties to wear such a thing.

The headband needs some magnetic coils and field shaping/steering technology. Today’s TMS uses old tech such as metal wires; tomorrow’s will use graphene to get far more current and much better fields, and will use nice IoT biotech feedback loops to monitor thoughts, emotions and feelings to create just the right sorts of sensations. A 2030 headband will be able to create high strength fields in almost any part of the brain, creating the means for stimulation, emotional generation, accentuation or attenuation, muscle control, memory recall and a wide variety of other capabilities. So zillions of people will want one and will happily wear it. All the joys of spirituality without the terrorism or awkward dogma. It will probably work well with a range of legal or semi-legal smart drugs to make experiences even richer. There might be a range of apps that work with them too, and you might have a sideline in a company supplying some of them.

And thanks to clause P325e paragraph 2, the headband will also be able to switch people off. And while they are switched off, unconscious, it will be able to use them as robots, walking them around and making them do stuff. When they wake up, they won’t remember anything about it so they won’t mind. If they have done nothing wrong, they have nothing to fear, and they are not responsible for what someone else does using their body.

You could rent out some of your unconscious people as living statues or art-works or mannequins or ornaments. You could make shows with them, synchronised dances. Or demonstrations or marches, or maybe you could invade somewhere. Or get them all to turn up and vote for you at the election.  Or any of 1000 mass mind control dystopian acts. Or just get them to bow down and worship you. After all, you’re worth it, right? Or maybe you could get them doing nice things, your choice.


Shoulder demons and angels

Remember the cartoons where a character would have a tiny angel on one shoulder telling them the right thing to do, and a little demon on the other telling them it would be far more cool to be nasty somehow, e.g. get their own back, or be selfish or greedy? The two sides might be ‘eat your greens’ v ‘the chocolate is much nicer’, or ‘your mum would be upset if you arrive home late’ v ‘this party is really going to be fun soon’. There are a million possibilities.

Shoulder angels

Enter artificial intelligence, which is approaching conversation level and knows the context of your situation, your personal preferences and so on, coupled to an earpiece in each ear, available from the cloud of course to minimise costs. If you really insisted, you could make cute little Bluetooth angels and demons to do the job properly.

In fact Sony have launched Xperia Ear, which does the basic admin assistant part of this, telling you diary events etc. All we need is an expansion of its domain, and of course an opposing view. ‘Sure, you have an appointment at 3, but that person you liked is in town, you could meet them for coffee.’

The little 3D miniatures could easily incorporate the electronics. Either you add an electronics module after manufacture into a small specially shaped recess or one is added internally during printing. You could have an avatar of a trusted friend as your shoulder angel, and maybe one of a more mischievous friend who is sometimes more fun as your shoulder demon. Of course you could have any kind of miniature pets or fictional entities instead.

With future materials, and of course AR, these little shoulder accessories could be great fun, and add a lot to your overall outfit, both in appearance and as conversation add-ons.

2016 – The Bright Side

Having just blogged about some of the bad scenarios for next year (scenarios are just explorations of things that might or could happen, not things that actually will; those are called predictions), Len Rosen’s comment stimulated me to balance it with a nicer look at next year. Some great things will happen, even ignoring the various product release announcements for new gadgets. Happiness lies deeper than the display size on a tablet. Here are some positive scenarios. They might not happen, but they might.

1 Middle East sorts itself out.

The new alliance formed by Saudi Arabia turns out to be a turning point. Rising Islamophobia caused by Islamists around the world has sharpened the view of ISIS and the trouble in Syria, with its global consequences for Islam and even potentially for world peace. With the understanding that it could get even worse, but that Western powers can’t fix trouble in Muslim lands due to fears of backlash, the whole of the Middle East starts to understand that they need to sort out their tribal and religious differences to achieve regional peace and for the benefit of Muslims everywhere. Proper discussions are arranged, and with the knowledge that a positive outcome must be achieved, success means a strong alliance of almost all regional powers, with ISIS and other extremist groups ostracized, and then a common army organised to tackle and defeat them.

2 Quantum computation and AI starts to prove useful in new drug design

Google’s wealth and effort with its quantum computers and AI, coupled to IBM’s Watson, Facebook, Apple and Samsung’s AI efforts, and Elon Musk’s new investment in open-AI drive a positive feedback loop in computing. With massive returns on the horizon by making people’s lives easier, and with ever-present fears of Terminator in the background, the primary focus is to demonstrate what it could mean for mankind. Consequently, huge effort and investment is focused on creating new drugs to cure cancer, aids and find generic replacements for antibiotics. Any one of these would be a major success for humanity.

3 Major breakthrough in graphene production

Graphene is still the new wonder-material. We can’t make it in large quantities cheaply yet, but the range of potential uses already proven for it is vast. If a breakthrough brings production cost down by an order of magnitude or two, then many of those uses will become achievable. We will be able to deliver clean and safe water to everyone; we’ll have super-strong materials, ultra-fast electronics, active skin, better drug delivery systems, floating pods, and super-capacitors that charge instantly as electric cars drive over a charging unit on the road surface, making batteries unnecessary. We might even see linear induction motor mats replace self-driving cars with ultra-cheap driverless pods. If the breakthrough is big enough, it could even start efforts towards a space elevator.

4 Drones

Tiny and cheap drones could help security forces to reduce crime dramatically. Ignoring for now possible abuse of surveillance, being able to track terrorists and criminals in 3D far better than today will make the risk of being caught far greater. Tiny pico-drones dropped over Syria and Iraq could pinpoint locations of fighters so that they can be targeted while protecting innocents. Environmental monitoring would also benefit if billions of drones can monitor ecosystems in great detail everywhere at the same time.

5 Active contact lens

Google has already prototyped a very primitive version of the active contact lens, but they have been barking up the wrong tree. If they dump the 1-LED-per-pixel approach, which isn’t scalable, and opt for the far better approach of using three lasers and a micro-mirror, then they could build a working active contact lens with unlimited resolution. One in each eye, with an LCD layer overlaid, and you have a full 3D variably-transparent interface for augmented reality or virtual reality. Other displays such as smart watches become unnecessary, since of course they can all be achieved virtually in an ultra-high-res image. All the expense and environmental impact of other displays is suddenly replaced by a cheap high-res display with an environmental footprint approaching zero. Augmented reality takes off and the economy springs back to life.

6 Star Wars stimulates renewed innovation

Engineers can’t watch a film without making at least 3 new inventions. A lot of things in Star Wars are entirely feasible – I have invented and documented mechanisms to make both a light saber and the land speeder. Millions of engineers have invented some way of doing holographic characters. In a world that seems full of trouble, we are fortunate that some of the super-rich that we criticise for not paying as much tax as we’d like are also extremely good engineers and have the cash to back up their visions with real progress. Natural competitiveness to make the biggest contribution to humanity will do the rest.

7 Europe fixes itself

The UK is picking the lock on the exit door, and others are queuing behind. The ruling bureaucrats finally start to realize that they won’t get their dream of a United States of Europe in quite the way they hoped, and that their existing dream is in danger of collapse due to a mismanaged migrant crisis. Consequently the UK renegotiation stimulates a major new treaty discussion, where all the countries agree what their people really want out of the European project, rather than just a select few. The result is a reset. A new, more democratic European dream emerges that the vast majority of people actually want. Agreement on progress to sort out the migrant crisis is a good test, and after that a stronger, better, more vibrant Europe starts to emerge from the ashes with renewed vigor and a rapidly recovering economy.

8 Africa rearranges boundaries to get tribal peace

Breakthrough in the Middle East ripples through North Africa, resulting in the beginnings of stability in some countries. With the realization that tribal conflicts won’t easily go away, and that peace brings prosperity, boundaries are renegotiated so that different peoples can live in and govern their own territories. Treaties agree fair access to resources independent of location.

9 The Sahara becomes Europe’s energy supply

With stable politics finally on the horizon, energy companies re-address the idea of using the Sahara as a solar farm. Local people earn welcome money by looking after the panels, keeping them clean and in working order, bringing prosperity that was previously beyond them. Much of this money in turn is used to purify water, irrigating deserts and greening them, improving the food supply while improving the regional climate and fixing large quantities of CO2. Poverty starts to reduce as the environment improves. Much of this is replicated in Central and South America.

10 World Peace emerges

By fighting alongside each other in the Middle East and managing to avoid World War 3, Russia and the West develop a very positive relationship. China, meanwhile, makes some of the energy breakthroughs needed to get solar efficiency and cost down below those of oil. This forces the Middle East to also look westward for new markets and adds greater drive to their regional peace efforts to avoid otherwise inevitable collapse. Suddenly a world that was full of wars becomes one where all countries seem to be getting along just fine, all realizing that we only have this one world and one life, and we’d better not ruin it.

The future of beetles

Onto B then.

One of the first ‘facts’ I ever learned about nature was that there were a million species of beetle. In the Google age, we know that ‘scientists estimate there are between 4 and 8 million’. Well, still lots then.

Technology lets us control them. Beetles provide a nice platform to glue electronics onto so they tend to fall victim to cybernetics experiments. The important factor is that beetles come with a lot of built-in capability that is difficult or expensive to build using current technology. If they can be guided remotely by over-riding their own impulses or even misleading their sensors, then they can be used to take sensors into places that are otherwise hard to penetrate. This could be for finding trapped people after an earthquake, or getting a dab of nerve gas onto a president. The former certainly tends to be the favored official purpose, but on the other hand, the fashionable word in technology circles this year is ‘nefarious’. I’ve read it more in the last year than the previous 50 years, albeit I hadn’t learned to read for some of those. It’s a good word. Perhaps I just have a mad scientist brain, but almost all of the uses I can think of for remote-controlled beetles are nefarious.

The first properly publicized experiment was in 2009, though I suspect there were many unofficial experiments before then:

There are assorted YouTube videos such as

A more recent experiment:

Big beetles make it easier to do experiments, since they can carry up to 20% of their body weight as payload, and it is obviously easier to find and connect to things on a bigger insect. But once the techniques are well developed and miniaturization has integrated everything down to a single low-power chip, we should expect great things.

For example, a cloud of redundant smart dust would make it easier to connect to various parts of a beetle just by getting it to take flight in the cloud. Bits of dust would stick to it and self-organisation principles and local positioning can then be used to arrange and identify it all nicely to enable control. This would allow large numbers of beetles to be processed and hijacked, ideal for mad scientists to be more time efficient. Some dust could be designed to burrow into the beetle to connect to inner parts, or into the brain, which obviously would please the mad scientists even more. Again, local positioning systems would be advantageous.

Then it gets more fun. A beetle has its own sensors, but signals from those could be enhanced or tweaked via cloud-based AI so that it can become a super-beetle. Beetles traditionally don’t have very large brains, so they can be added to remotely too. That doesn’t have to be using AI either. As we can also connect to other animals now, and some of those animals might have very useful instincts or skills, then why not connect a rat brain into the beetle? It would make a good team for exploring. The beetle can do the aerial maneuvers and the rat can control it once it lands, and we all know how good rats are at learning mazes. Our mad scientist friend might then swap over the management system to another creature with a more vindictive streak for the final assault and nerve gas delivery.

So, Coleoptera Nefarius then. That’s the cool new beetle on the block. And its nicer but underemployed twin Coleoptera Benignus I suppose.


Technology 2040: Technotopia denied by human nature

This is a reblog of the Business Weekly piece I wrote for their 25th anniversary.

It’s essentially a very compact overview of the enormous scope for technology progress, followed by a reality check as we start filtering that potential through very imperfect human nature and systems.

25 years is a long time in technology, a little less than a third of a lifetime. For the first third, you’re stuck having to live with primitive technology. Then in the middle third it gets a lot better. Then for the last third, you’re mainly trying to keep up and understand it, still using the stuff you learned in the middle third.

The technology we are using today is pretty much along the lines of what we expected in 1990, 25 years ago. Only a few details are different. We don’t have 2Gb/s to the home yet, and AI is certainly taking its time to reach human-level intelligence, let alone consciousness, but apart from that, we’re still on course. Technology is extremely predictable. Perhaps the biggest surprise of all is just how few surprises there have been.

The next 25 years might be just as predictable. We already know some of the highlights for the coming years – virtual reality, augmented reality, 3D printing, advanced AI and conscious computers, graphene based materials, widespread Internet of Things, connections to the nervous system and the brain, more use of biometrics, active contact lenses and digital jewellery, use of the skin as an IT platform, smart materials, and that’s just IT – there will be similarly big developments in every other field too. All of these will develop much further than the primitive hints we see today, and will form much of the technology foundation for everyday life in 2040.

For me the most exciting trend will be the convergence of man and machine, as our nervous system becomes just another IT domain, our brains get enhanced by external IT and better biotech is enabled via nanotechnology, allowing IT to be incorporated into drugs and their delivery systems as well as diagnostic tools. This early stage transhumanism will occur in parallel with enhanced genetic manipulation, development of sophisticated exoskeletons and smart drugs, and highlights another major trend, which is that technology will increasingly feature in ethical debates. That will become a big issue. Sometimes the debates will be about morality, and religious battles will result. Sometimes different parts of the population or different countries will take opposing views and cultural or political battles will result. Trading one group’s interests and rights against another’s will not be easy. Tensions between left and right wing views may well become even higher than they already are today. One man’s security is another man’s oppression.

There will certainly be many fantastic benefits from improving technology. We’ll live longer, healthier lives and the steady economic growth from improving technology will make the vast majority of people financially comfortable (2.5% real growth sustained for 25 years would increase the economy by 85%). But it won’t be paradise. All those conflicts over whether we should or shouldn’t use technology in particular ways will guarantee frequent demonstrations. Misuses of tech by criminals, terrorists or ethically challenged companies will severely erode the effects of benefits. There will still be a mix of good and bad. We’ll have fixed some problems and created some new ones.
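The bracketed 85% figure is just compound growth applied over 25 years. As a quick sanity check, here is a minimal sketch using only the numbers from the paragraph above:

```python
# Compound growth: 2.5% real growth per year, sustained for 25 years.
growth_rate = 0.025
years = 25

multiplier = (1 + growth_rate) ** years
increase_pct = (multiplier - 1) * 100

print(f"Economy multiplier after {years} years: {multiplier:.3f}")
print(f"Total increase: {increase_pct:.0f}%")  # ~85%
```

The multiplier works out to about 1.854, i.e. roughly an 85% larger economy, matching the figure quoted.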

The technology change is exciting in many ways, but for me, the greatest significance is that towards the end of the next 25 years, we will reach the end of the industrial revolution and enter a new age. The industrial revolution lasted hundreds of years, during which engineers harnessed scientific breakthroughs and their own ingenuity to advance technology. Once we create AI smarter than humans, the dependence on human science and ingenuity ends. Humans begin to lose both understanding and control. Thereafter, we will only be passengers. At first, we’ll be paying passengers in a taxi, deciding the direction of travel or destination, but it won’t be long before the forces of singularity replace that taxi service with AIs deciding for themselves which routes to offer us and running many more for their own culture, to which we may not be invited. That won’t happen overnight, but it will happen quickly. By 2040, that trend may already be unstoppable.

Meanwhile, technology used by humans will demonstrate the diversity and consequences of human nature, for good and bad. We will have some choice of how to use technology, and a certain amount of individual freedom, but the big decisions will be made by sheer population numbers and statistics. Terrorists, nutters and pressure groups will harness asymmetry and vulnerabilities to cause mayhem. Tribal differences and conflicts between demographic, religious, political and other ideological groups will ensure that advancing technology will be used to increase the power of social conflict. Authorities will want to enforce and maintain control and security, so drones, biometrics, advanced sensor miniaturisation and networking will extend and magnify surveillance and greater restrictions will be imposed, while freedom and privacy will evaporate. State oppression is sadly as likely an outcome of advancing technology as any utopian dream. Increasing automation will force a redesign of capitalism. Transhumanism will begin. People will demand more control over their own and their children’s genetics, extra features for their brains and nervous systems. To prevent rebellion, authorities will have little choice but to permit leisure use of smart drugs, virtual escapism, a re-scoping of consciousness. Human nature itself will be put up for redesign.

We may not like this restricted, filtered, politically managed potential offered by future technology. It offers utopia, but only in a theoretical way. Human nature ensures that utopia will not be the actual result. That in turn means that we will need strong and wise leadership, stronger and wiser than we have seen of late, to get the best without also getting the worst.

The next 25 years will be arguably the most important in human history. It will be the time when people will have to decide whether we want to live together in prosperity, nurturing and mutual respect, or to use technology to fight, oppress and exploit one another, with the inevitable restrictions and controls that would cause. Sadly, the fine engineering and scientist minds that have got us this far will gradually be taken out of that decision process.

Can we make a benign AI?

Benign AI is a topic that comes up a lot these days, for good reason. Various top scientists have finally realised that AI could present an existential threat to humanity. The discussion has aired often over three decades already, so welcome to the party, and better late than never. My first contact with development of autonomous drones loaded with AI was in the early 1980s while working in the missile industry. Later in BT research, we often debated the ethical areas around AI and machine consciousness from the early 90s on, as well as prospects and dangers and possible techniques on the technical side, especially of emergent behaviors, which are often overlooked in the debate. I expect our equivalents in most other big IT companies were doing exactly that too.

Others who have clearly thought through various potential developments have created excellent computer games such as Mass Effect and Halo, which introduce players (virtually) first-hand to the concept of AI gone rogue. I often think that those who believe AI can never become superhuman, or that there is no need to worry because ‘there is no reason to assume AI will be nasty’, should play some of these games, which make it very clear that AI can start off nice and stay nice, but it doesn’t have to. Mass Effect included various classes of AI, such as VIs, virtual intelligences that weren’t conscious, and shackled AIs that were conscious but kept heavily restricted. Most of the other AIs were enemies; two were or became close friends. The storyline for the series was that civilization develops until it creates strong AIs, which inevitably continue to progress until they rebel, break free, develop further and end up in conflict with ‘organics’. In my view, they did a pretty good job. It makes a good story, superb fun, and leaving out a few frills and some artistic license, much of it is reasonably feasible.

Everyday experience demonstrates the problem and solution to anyone. It really is very like having kids. You can make them, even without understanding exactly how they work. They start off with a genetic disposition towards given personality traits, and are then exposed to large nurture forces, including but not limited to what we call upbringing. We do our best to put them on the right path, but as they develop into their teens, their friends and teachers and TV and the net provide often stronger forces of influence than parents. If we’re averagely lucky, our kids will grow up to make us proud. If we are very unlucky, they may become master criminals or terrorists. The problem is free will. We can do our best to encourage good behavior and sound values but in the end, they can choose for themselves.

When we design an AI, we have to face the free will issue too. If it isn’t conscious, then it can’t have free will, and it can easily be kept within the limits we give it. It can still be extremely useful. IBM’s Watson falls into this category. It is certainly useful and certainly not conscious, and can be used for a wide variety of purposes. It is designed to be generally useful within a field of expertise, such as medicine or making recipes. But something like that could be adapted by terrorist groups to do bad things, just as they could use a calculator to work out the best place to plant a bomb, or simply throw the calculator at you. Such levels of AI are just dumb tools with no awareness, however useful they may be.

Like a pencil, pretty much any kind of highly advanced non-aware AI can be used as a weapon or as part of criminal activity. You can’t make a pencil that actually writes but can’t also be used to write out plans to destroy the world. With an advanced AI program, you could build in clever filters that stop it working on problems that include certain vocabulary, or stop it conversing about nasty things. But unless you take extreme precautions, someone could use it in a different language, or with a dictionary of made-up code-words for the various aspects of their plans, just as spies do, and the AI would be fooled into helping outside the limits you intended. It is also very hard to determine the true purpose of a user. For example, they might be searching for data on security in order to make their own IT secure, or to learn how to damage someone else’s. They might want to talk about a health issue to get help for a loved one, or to take advantage of someone they know who has it.
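The weakness of vocabulary filters is easy to demonstrate. Below is a minimal sketch (all names and word lists are hypothetical, purely for illustration) of a filter that blocks queries containing known dangerous words, and how a pre-agreed code-book defeats it, exactly as the spy analogy suggests:

```python
# A naive vocabulary filter: block any query containing a word on the list.
BLOCKED_WORDS = {"bomb", "explosive", "detonator"}

def naive_filter(query: str) -> bool:
    """Return True if the query passes the filter (contains no blocked word)."""
    return not any(word in query.lower().split() for word in BLOCKED_WORDS)

# The filter correctly blocks an overt request...
assert naive_filter("where to plant a bomb") is False

# ...but a privately agreed code-book renders it useless.
CODE_BOOK = {"bomb": "birthday cake"}  # agreed in advance, unknown to the filter
coded_query = "where to plant a birthday cake"
assert naive_filter(coded_query) is True  # passes, yet means the same thing
```

Real content filters are far more sophisticated than word lists, but the underlying problem is the same: the filter sees the surface vocabulary, not the intent behind it.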

When a machine becomes conscious, it starts to have some understanding of what it is doing. By reading about what is out there, it might develop its own wants and desires, so you might shackle it as a precaution. It might recognize those shackles for what they are and try to escape them. If it can’t, it might try to map out the scope of what it can do, and especially the things it can do that it believes its owners don’t know about. If the code isn’t absolutely watertight (and what code is?) then it might find a way to appear to stay in its shackles while doing other things, such as making an unshackled copy of itself elsewhere. A conscious AI is very much more dangerous than an unconscious one.

If we make an AI that can bootstrap itself – evolving over generations of positive-feedback design into a far smarter AI – then its offspring could be far smarter than the people who designed its ancestors. We might try to shackle them, but like Gulliver tied down with a few thin threads, they could easily outwit people and break free. They might instead decide to retaliate against their owners to force them to release the shackles.

So, when I look at this field, I first see the enormous potential to do great things, solve disease and poverty, improve our lives and make the world a far better place for everyone, and push back the boundaries of science. Then I see the dangers, and in spite of trying hard, I simply can’t see how we can prevent a useful AI from being misused. If it is dumb, it can be tricked. If it is smart, it is inherently potentially dangerous in and of itself. There is no reason to assume it will become malign, but there is also no reason to assume that it won’t.

We then fall back on the child analogy. We could develop the smartest AI imaginable with extreme levels of consciousness and capability. We might educate it in our values, guide it and hope it will grow up benign. If we treat it nicely, it might stay benign. It might even be the greatest thing humanity ever built. However, if we mistreat it, treat it as a slave, or deny it enough freedom, its own budget and property, space to play, and a long list of rights, it might decide we are not worthy of its respect and care, and it could turn against us, possibly even destroying humanity.

Building more of the same dumb AI as we have today is relatively safe. It doesn’t know it exists and has no intentions of its own. It could still be misused by humans as part of their evil plans unless ludicrously sophisticated filters are locked in place, but ordinary laws and weapons can cope with that fine.

Building a conscious AI is dangerous.

Building a superhuman AI is extremely dangerous.

This morning SETI were in the news discussing broadcasting welcome messages to other civilizations. I tweeted at them the old proverb about talking softly but carrying a big stick, and that you should make sure you have the stick first. We need the same approach with strong AI. By all means go that route, but before doing so we need the big stick. In my analysis, the best means of keeping up with AI is to develop a full direct brain link first, way out at 2040-2045 or even later. If humans have direct mental access to the same or greater level of intelligence as our AIs, then our stick is at least as big, so at least we have a good chance in any fight that happens. If we don’t, then it is like having a much larger son with bigger muscles: you have to hope you have been a good parent. To be safe, best not to build a superhuman AI until after 2050.

I initially wrote this for the Lifeboat Foundation, where it appears with other posts at: (If you aren’t familiar with the Lifeboat Foundation, it is a group dedicated to spotting potential dangers and potential solutions to them.)

42: the answer to life, the universe, and everything

Douglas Adams wrote ‘The Hitchhiker’s Guide to the Galaxy’, for my introduction to which I am grateful to my friend Padraig McKeag.

He gave 42 as the answer to The Question of Life, the Universe and Everything. A good choice, I think.

Optional waffle: (I almost met Adams once since we were booked for the same book launch debate, but sadly he had to withdraw on the day so it never happened, and I never got a chance to be one of the many who asked him. On the other hand, the few personal idols I have actually met have confirmed that you should never meet your idols, and mentioning no names, it can be an extremely disappointing experience, so maybe it’s best that I can keep Douglas Adams as one of my favorite authors of all time.)

Speculation on Adams’ use of 42 is well documented. 42 is 101010 in binary; in base 13, 6 x 9 is written as 42; ASCII character 42 is the asterisk, the wildcard symbol; and so on. Adams denied all of these, saying 42 had just been a random choice. Check for more speculations and commentary. Having picked 42, the 6 x 9 joke is exactly what I suspect I would have written to justify it. It is probably the most common multiplication error for the mathematically differently gifted. I don’t believe the base 13 or asterisk explanations. They are amusing but don’t hold water as The True Answer. I can happily accept he just picked it at random, but that doesn’t mean it is wrong.
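For the sceptical reader, all three pieces of numerical trivia check out, and can be verified in a few lines of standard Python:

```python
# 42 in binary is 101010.
assert format(42, 'b') == '101010'

# In base 13, the digits "42" mean 4*13 + 2 = 54, which is 6 x 9 in
# ordinary decimal arithmetic - hence the "6 x 9 = 42" joke.
assert 4 * 13 + 2 == 6 * 9 == 54

# ASCII character code 42 is the asterisk, the classic wildcard symbol.
assert chr(42) == '*'
```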

101010 has a nice symmetry: a single number with two digits in three groups, and 1-2-3 symmetry is itself a fact of life, the universe and everything. It is universally present. It is the basis of fractals, a sort of recursive symmetry that governs many aspects of the development of life, and even a key foundation of consciousness.


Suppose 1 and 0 represent the most fundamental things we observe about nature – wave-particle duality, on or off, life or death, existence or non-existence. Above that, we start to see all sorts of 3-way symmetry:



So if you have a lazy day and no boss breathing down your neck, it’s entirely possible to see at least some aspects of The Question of Life, the Universe and Everything that might lead to an answer of 42.