
WMDs for mad AIs

We think sometimes about mad scientists and what they might do. It’s fun, makes nice films occasionally, and highlights threats years before they become feasible. That then allows scientists and engineers to think through how they might defend against such scenarios, hopefully making sure they don’t happen.

You’ll be aware that a lot more talk of AI is going on again now. Progress finally seems to be picking up. If it succeeds well enough, a lot more future science and engineering will be done by AI than by people. If genuinely conscious, self-aware AI, with proper emotions etc, becomes feasible, as I think it will, then we really ought to think about what happens when it goes wrong. (Sci-fi computer games producers already think that stuff through sometimes – my personal favourite is Mass Effect). We will one day have some insane AIs. In Mass Effect, the concept of shackled AI is embedded in the culture, an attempt to limit the damage it could presumably do. On the other hand, we have had Asimov’s laws of robotics for decades, but they are sometimes ignored when it comes to making autonomous defence systems. That doesn’t bode well. So, assuming that Mass Effect’s writers don’t get to be in charge of the world, and instead we have ideological descendants of our current leaders, what sort of things could an advanced AI do in terms of its chosen weaponry?

Advanced AI

An ultra-powerful AI is a potential threat in itself. There is no reason to expect that an advanced AI will be malign, but there is also no reason to assume it won’t be. High level AI could have at least the range of personality that we associate with people, with a potentially greater range of emotions or motivations, so we’d have the super-helpful smart scientist type of AI but also perhaps the evil super-villain and terrorist ones.

An AI doesn’t have to intend harm to be harmful. If it wants to do something and we are in the way, even if it has no malicious intent, we could still become casualties, like ants on a building site.

I have often blogged about achieving conscious computers using techniques such as gel computing, and how we could end up in the terminator scenario favoured by sci-fi. This could come about through innocent research, military development or a deliberate act of terrorism.

Terminator scenarios are diverse but often rely on AI taking control of human weapons systems. I won’t major on that here because that threat has already been analysed in-depth by many people.

Conscious botnets could arrive by accident too – a student prank harnessing millions of bots, even with an inefficient algorithm, might muster enough power to achieve a high level of AI.

Smart bacteria

Bacterial DNA could be modified so that bacteria can make electronics inside their cells, and power them. By linking to other bacteria, a massive AI could be achieved.

Zombies

Adding the ability to enter a human nervous system or disrupt or capture control of a human brain could enable enslavement, giving us zombies. Having been enslaved, zombies could easily be linked across the net. The zombie films we watch tend to miss this feature. Zombies in films and games tend to move in herds, but not generally under control or in a very coordinated way. We should assume that real ones would be fully networked, liable to remote control, and able to share sensory systems. They’d be rather smarter and more capable than what we’re generally used to. Shooting them in the head might not work as well as people expect either, as their nervous systems don’t really need a local controller, and could just as easily be controlled by a collective intelligence, though blood loss would eventually cause them to die. To stop a herd of real zombies, you’d basically have to dismember them. More Dead Space than Dawn of the Dead.

Zombie viruses could be made other ways too. It isn’t necessary to use smart bacteria. Genetic modification of viruses, or a suspension of nanoparticles are traditional favorites because they could work. Sadly, we are likely to see zombies result from deliberate human acts, likely this century.

From zombies, it is a short hop to a full evolution of the Borg from Star Trek, with characters from computer games emerging to take over the zombified bodies.

Terraforming

If strong external AI gives smart bacteria the collective adaptability to colonise many niches, bacteria-based AI could engage in terraforming. Attacking the many niches that are important to humans or other life would be very destructive. Terraforming a planet you live on is not generally a good idea, but if an organism can inhabit land, sea, air and even space, there is plenty of scope to avoid self-destruction. Fighting bacteria engaged in such a pursuit might be hard. Smart bacteria could spread immunity to toxins or biological threats almost instantly through a population.

Correlated traffic

Information waves and other correlated traffic – network resonance attacks – are another way of using networks to collapse economies, taking advantage of the physical properties of the links and protocols rather than using more traditional viruses or denial of service attacks. AIs using smart dust or bacteria could launch signals in perfect coordination from any points on any networks simultaneously. This could push a network into resonant overloads that would likely crash it, and would certainly deprive other traffic of bandwidth.
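The coordination point is easy to caricature in a few lines: if many sources each send one burst per cycle, random timing smears the load out across the cycle, while perfect synchronisation stacks every burst into the same instant. This is a toy model, not a real network simulation – the function name, slot counts and source numbers are all invented for illustration:

```python
import random

def peak_load(n_sources, slots, synchronized, seed=0):
    """Peak number of simultaneous transmissions across one cycle of time slots."""
    random.seed(seed)
    load = [0] * slots
    for _ in range(n_sources):
        # each source fires one burst per cycle; coordinated sources all pick slot 0
        t = 0 if synchronized else random.randrange(slots)
        load[t] += 1
    return max(load)

random_peak = peak_load(10_000, 100, synchronized=False)
sync_peak = peak_load(10_000, 100, synchronized=True)
print(random_peak, sync_peak)  # random peak is roughly n/slots; synchronized peak is n
```

With random phases the worst slot carries roughly a hundredth of the traffic; with coordination, every single burst lands at once, a hundredfold load spike from exactly the same total traffic.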

Decryption

Conscious botnets could be used as decryption engines to wreck security and finance systems. Imagine the decryption power of a worldwide collection of trillions of AI-harnessed organisms or devices. Invisibly small smart dust and networked bacteria could also pick up most signals well before they are encrypted anyway, since they could be resident on keyboards or on the components and wires within. They could even pick up electrical signals from a person’s scalp and engage in thought recognition, intercepting passwords well before a person’s fingers even move to type them.
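The brute-force side of this is just keyspace partitioning, the same trick any distributed cracker uses: give each bot a disjoint slice of the candidate keys so no work is duplicated. A deliberately toy sketch – `toy_hash`, the three-letter keys and the eight “workers” are stand-ins for illustration, not a real cipher or botnet:

```python
from itertools import product
import string

def search_shard(target_hash, hash_fn, alphabet, length, worker_id, n_workers):
    """Brute-force a toy key, checking only this worker's share of the keyspace."""
    for i, combo in enumerate(product(alphabet, repeat=length)):
        if i % n_workers != worker_id:
            continue  # another worker covers this candidate
        candidate = "".join(combo)
        if hash_fn(candidate) == target_hash:
            return candidate
    return None

# toy "hash" -- real attacks would target real ciphers, whose keyspaces are
# astronomically larger, hence the appeal of trillions of cooperating devices
def toy_hash(s):
    return sum(ord(c) * 31**i for i, c in enumerate(s)) % 100003

target = toy_hash("key")
found = next(f for f in (search_shard(target, toy_hash, string.ascii_lowercase,
                                      3, w, 8) for w in range(8)) if f)
print(found)  # a preimage of the target hash (possibly a collision)
```

Each added worker divides the search time roughly linearly, which is why scale, not cleverness, is the threat here.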

Space guns

Solar wind deflector guns are feasible: ionise some of the ionosphere to make a reflective surface that deflects part of the incoming solar wind to make an even bigger reflector, then repeat, ending up with an ionospheric lens or reflector that could steer perhaps 1% of the solar wind onto a city. That could generate a high enough energy density to ignite and even melt a large area of city within minutes.

This wouldn’t be as easy as using space-based solar farms and directing energy from them. Space solar is being seriously considered, but it presents an extremely attractive target for capture because of its potential as a directed energy weapon. The intended design uses microwave beams directed at rectenna arrays on the ground, but it would take good design to prevent any possibility of takeover.

Drone armies

Drones are becoming common at an alarming rate, and they now range in size from large insects to medium-sized planes. The next generation is likely to include permanently airborne drones and swarms of insect-sized ones. The swarms offer interesting potential for WMDs. They can disperse and come together on command, making them hard to attack most of the time.

Individual insect-sized drones could build up an electrical charge by a wide variety of means, and could collectively attack individuals, electrocuting or disabling them, as well as overload or short-circuit electrical appliances.

Larger drones such as the ones I discussed in

http://carbonweapons.com/2013/06/27/free-floating-combat-drones/ would be capable of much greater damage, and collectively, virtually indestructible since each can be broken to pieces by an attack and automatically reassembled without losing capability using self organisation principles. A mixture of large and small drones, possibly also using bacteria and smart dust, could present an extremely formidable coordinated attack.

I also recently blogged about the storm router

http://carbonweapons.com/2014/03/17/stormrouter-making-wmds-from-hurricanes-or-thunderstorms/ that would harness hurricanes, tornados or electrical storms and divert their energy onto chosen targets.

In my Space Anchor novel, my superheroes have to fight against a formidable AI army that appears as just a global collection of tiny clouds. They do some of the things I highlighted above and come close to threatening human existence. It’s a fun story but it is based on potential engineering.

Well, I think that’s enough threats to worry about for today. Maybe given the timing of release, you’re expecting me to hint that this is an April Fool blog. Not this time. All these threats are feasible.

Drone Delivery: Technical feasibility does not guarantee market success

One of my first ever futurology articles explained why Digital Compact Cassette wouldn’t succeed in the marketplace and I was proved right. It should have been obvious from the outset that it wouldn’t fly well, but it was still designed, manufactured and shipped to a few customers.

Decades on, I had a good laugh yesterday reading about the Amazon drone delivery service. Yes, you can buy drones; yes, they can carry packages; and yes, you can make them gently place a package on someone’s doorstep. No, it won’t work in the marketplace. I was asked by BBC Radio 4 to explain on air, but the BBC is far more worried about audio quality than content quality, and since I could only do the interview from home, they decided not to use me after all (not entirely fair – I didn’t check who they actually used and it might have been someone far better).

Anyway, here’s what I would have said:

The benefits are obvious. Many of the dangers are also obvious, and Amazon isn’t a company I normally associate with stupidity, so they can’t really be planning to go all the way. Therefore, this must be a simple PR stunt, and the media shouldn’t be such easy prey for free advertising.

Very many packages are delivered to homes and offices every day. If even a small percentage were drone-delivered, the skies would be full of drones. Amazon would only control some of them. There would be mid-air collisions: between drones, between drones and kites and balloons, with new wind turbines, with model aeroplanes and helicopters, even with real emergency helicopters. Drones with spinning blades would frequently drop out of the sky, injuring people, damaging houses and gardens, falling onto roads and causing accidents. People would die.

Drones are not silent. A lot of drones would add a lot of ambient noise to an environment where noise pollution is already too high. They are also visible, creating yet another visual nuisance.

Kids are mischievous. Some adults are mischievous, some criminal, some nosey, some terrorists. I can’t help wondering what the life expectancy of a drone would be if it delivered to a housing estate full of kids like the one I grew up on. If I were still a kid, I’d be donning a mask (don’t want Amazon giving my photo to the police) and catching them, making nets to bring them down, stringing wires between buildings on their normal routes, throwing stones at them, shooting them with bows and arrows, Nerf guns, water pistols, flying other toy drones into their paths. I’d be tying all sorts of other things onto them for their ongoing journey. I’d be having a lot of fun on the black market with all the intercepted goods too.

If I were a terrorist, and if drones were becoming common delivery tools, I’d buy some and put Amazon labels on them, or if I’m short of cash, I’d hijack a few, pay kids pocket money to capture them, and after suitable mods, start using them to deliver very nasty packages precisely onto doorsteps or spray lethal concoctions into the air above specific locations.

If I were just criminal, I’d make use of the abundance of drones to make my own less conspicuous, so that I could case homes for burglaries, spy on businesses with cameras and intercept their wireless signals, check that an area is free of police, or get interesting videos for my voyeur websites. Maybe I’d add a blinding laser into them to attack any police coming into the scene of my crime, giving valuable extra time without giving my location away.

There are also social implications: jobs in Amazon, delivery and logistics companies would trade against drone manufacturing and management. Neighbours might fall out if a house frequently gets noisy deliveries from a drone while people are entering and leaving an adjacent door or relaxing in the garden, or their kids are playing innocently in the front garden as a drone lands very close by. Drone delivery would be especially problematic when doorways are close together, as they often are in cities.

Drones are good fun as toys and for hobbies, in low numbers. They are also useful for some utility and emergency service tasks, under supervision. They are really not a good solution for home delivery, even if technically it can be done. Amazon knows that as well as I do, and this whole thing can only be a publicity stunt. And if it is, well, I don’t mind, I had a lot of fun with it anyway.

Bubblewrap terrorism

I can happily spend ages bursting bubblewrap. It has a certain surprise value that never stops giving: you never quite know when the bubble will burst.

I saw some nice photos that Rachel Armstrong (@livingarchitect and Senior TED Fellow) has taken of her chemistry experiments, using bubblewrap cells as mini-reaction chambers to great effect. She is a proper scientist, not a mad one, does some interesting stuff, and is worth following. She isn’t the problem.

My first thought was: I need a chemistry lab. Then I thought about Dexter and Stewie with their labs, realised that I have way too much in common with them, including mental age band, remembered some of my childhood ‘events’ with my chemistry set, and concluded that basically I haven’t moved on, so it wouldn’t be a good idea.

My second thought, a couple of seconds later, was that bubblewrap would make an excellent way to keep nasty chemicals separate and then suddenly mix them. It could easily be wrapped around the body or into a briefcase lining, or used as obviously innocent packing material for a fragile object carried on a plane, with the top few rows of cells kept largely chemical-free to avert suspicion. Now I know I can’t be the first person to think of that, but I remember seeing warnings about all sorts of things I’m not allowed to take on planes or that must be put in the little plastic bag, and I don’t recall any mention of bubblewrap.

And if chemistry, why not biotech – mixing some sort of dispersal activator with a few cells of nasty viruses. Or in fact, why bother with dispersion chemicals at all: bubblewrap gives a nice burst of compressed air when you pop it anyway, so it would be good for dispersing stuff just by popping cells.

Just a thought, but is bubblewrap already a known terrorist threat, or is it just about to become one?

Free-floating AI battle drone orbs (or making Glyph from Mass Effect)

I have spent many hours playing various editions of Mass Effect, from EA Games. It is one of my favourites and has clearly benefited from some highly creative minds. They had to invent a wide range of fictional technology, along with detailed technical explanations for how it is meant to work. Some is just artistic redesign of very common sci-fi ideas, but they have added a huge amount of their own too. Sci-fi and real engineering have always had a strong mutual cross-fertilisation. I have lectured sometimes on science fact v sci-fi, to show that what we eventually achieve is sometimes far better than the sci-fi version (Exhibit A – the rubbish voice synthesisers and storage devices used on Star Trek, TOS).

Glyph

Liara talking to her assistant Glyph. Picture credit: social.bioware.com

In Mass Effect, lots of floating holographic-style orbs float around all over the place for various military or assistant purposes. They aren’t confined to a fixed holographic projection system. Disruptor and battle drones are common, along with a few home/lab/office assistants such as Glyph, who is Liara’s friendly PA, not a battle drone. These aren’t just dumb holograms; they can carry small devices and do stuff. The idea of a floating sphere may have been inspired by Halo’s, but the Mass Effect ones look more holographic and generally nicer. (Think Apple v Microsoft). Battle drones are highly topical now, but current technology uses wings and helicopters. The drones in sci-fi like Mass Effect and Halo are just free-floating ethereal orbs. That’s what I am talking about now. They aren’t in the distant future. They will be here quite soon.

I recently wrote on how to make force field and floating cars or hover-boards.

http://timeguide.wordpress.com/2013/06/21/how-to-actually-make-a-star-wars-landspeeder-or-a-back-to-the-future-hoverboard/

Briefly, they work by creating a thick cushion of magnetically confined plasma under the vehicle that keeps it well off the ground, a bit like a hovercraft without a skirt or fans. Layers of confined plasma could also be used to make relatively weak force fields. A key claim of the idea is that you can coat a firm surface with a packed array of steerable electron pipes to make the plasma, plus a potentially reconfigurable and self-organising circuit to produce the confinement field. No moving parts, and the coating would simply produce a lifting or propulsion force according to its area.
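If lift really does scale with coated area, the implied arithmetic is just pressure times area. As a rough sanity check, assuming a hypothetical one-tonne car with a 4 m² underside (both numbers invented for illustration):

```python
def cushion_pressure(mass_kg, area_m2, g=9.81):
    """Pressure differential the plasma cushion must sustain to hover a load."""
    return mass_kg * g / area_m2  # Pa

p = cushion_pressure(1000, 4)  # hypothetical 1-tonne car, 4 m^2 underside
print(p)  # ~2450 Pa, only about 2.4% of atmospheric pressure
```

A couple of kilopascals is a modest pressure differential by hovercraft standards; whether a confined plasma layer could actually sustain it is exactly the open engineering question.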

This is all very easy to imagine for objects with a relatively flat base like cars and hover-boards, but I later realised that the force field bit could be used to suspend additional components, and if they also have a power source, they can add locally to that field. If each piece can sense its exact relative position and instantaneously adjust its local field to maintain or achieve its desired position, dynamic self-organisation would allow just about any shape and dynamics to be achieved and maintained. So basically, if you break the levitation bit up, each piece could still work fine.

I love self-organisation, and biomimetics generally. I wrote my first paper on hormonal self-organisation over 20 years ago, showing how networks of telephone exchanges could self-organise, and have used the idea in many designs since.

With a few pieces generating external air flow, the objects could wander around. Cunning design using multiple components could therefore be used to make orbs that float and wander around too, even with the moving plates that Mass Effect uses for its drones. They could also be very lightweight and translucent, just like Glyph. Regular readers will not be surprised if I recommend that some of these components should be made of graphene, because it can be used to make wonderful things. It is light, strong, an excellent electrical and thermal conductor, a perfect platform for electronics, can be used to make super-capacitors and so on. Glyph could use a combination of moving physical plates and some holographic projection – to make it look pretty. So, part physical and part hologram then.
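The self-organisation idea can be caricatured in a few lines: give each component an assigned slot in a formation and let it repeatedly correct its own position error, with no central controller. This ignores all the plasma physics and neighbour sensing – it just shows the convergence behaviour, with made-up numbers:

```python
import math
import random

def self_organise(n, radius=1.0, steps=200, gain=0.2, seed=1):
    """Each of n components repeatedly nudges itself toward its slot on a
    circle, using only its own sensed position error -- no central controller.
    Returns the worst remaining position error after the given steps."""
    random.seed(seed)
    targets = [(radius * math.cos(2 * math.pi * k / n),
                radius * math.sin(2 * math.pi * k / n)) for k in range(n)]
    pos = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(n)]
    for _ in range(steps):
        # local rule: move a fraction `gain` of the way toward your own slot
        pos = [(x + gain * (tx - x), y + gain * (ty - y))
               for (x, y), (tx, ty) in zip(pos, targets)]
    return max(math.hypot(x - tx, y - ty)
               for (x, y), (tx, ty) in zip(pos, targets))

err = self_organise(12)
print(err)  # worst-case error shrinks by (1 - gain) per step -- essentially zero
```

The error decays geometrically, so scattered pieces snap into formation fast; the same rule with moving targets gives you a shape that flows and reassembles.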

Plates used in the structure can dynamically attract or repel each other and use tethers, or use confined plasma cushions. They can create air jets in any direction. They would have a small load-bearing capability. Since graphene foam is potentially lighter than helium

http://timeguide.wordpress.com/2013/01/05/could-graphene-foam-be-a-future-helium-substitute/

it could be added into structures to reduce the forces needed. So we’re not looking at orbs that can carry heavy equipment here, but carrying processing, sensing, storage and comms would be easy. Obviously they could therefore include whatever the state of the art in artificial intelligence has got to, either on-board, distributed, or via the cloud. Beyond that, it is hard to imagine a small orb carrying more than a few hundred grammes. Nevertheless, that is enough equipment to make it very useful indeed for very many purposes. These drones could work pretty much anywhere. Space would be tricky, but not that tricky – the drones would just have to carry a little fuel.

But let’s get right to the point. The primary market for this isn’t the home or lab or office, it is the battlefield. Battle drones are being regulated as I type, but that doesn’t mean they won’t be developed. My generation grew up with the nuclear arms race. Millennials will grow up with the drone arms race, and that, if anything, is a lot scarier. The battle drones in Mass Effect are fairly easy to kill. Real ones won’t be.

Mass Effect combat drone. Picture credit: masseffect.wikia.com

If these cute little floating drone things are taken out of the office and converted to military uses, they could do pretty much all the stuff they do in sci-fi. They could have lots of local energy storage using super-caps, so they could easily carry self-organising lightweight lasers or electrical shock weaponry too, or carry steerable mirrors to direct beams from remote lasers, and high definition 3D cameras and other sensing for reconnaissance. The interesting thing here is that self-organisation of potentially redundant components would allow a free-roaming battle drone that would be highly resistant to attack. You could shoot it for ages with laser or bullets and it would keep coming. Disruption of its fields by electrical weapons would make it collapse temporarily, but it would just get up and reassemble as soon as you stop firing. With its intelligence potentially local or cloud-based, you could make a small battalion of these that could only be properly killed by totally frazzling them all. They would be potentially lethal individually but almost irresistible as a team. Super-capacitors could be recharged frequently using companion drones to relay power from the rear line. A mist of spare components could make ready replacements for any that are destroyed. Self-orientation and use of free-space optics for comms make wiring and circuit boards redundant, and sub-millimetre chips 100m away would be quite hard to hit.

Well I’m scared. If you’re not, I didn’t explain it properly.

Killing machines

There is rising concern about machines such as drones and battlefield robots that could soon be given the decision on whether to kill someone. Since I wrote this and first posted it a couple of weeks ago, the UN has put out their thoughts, as the Daily Mail writes today:

http://www.dailymail.co.uk/news/article-2318713/U-N-report-warns-killer-robots-power-destroy-human-life.html 

At the moment, drones and robots are essentially just remote controlled devices and a human makes the important decisions. In the sense that a human uses them to dispense death from a distance, they aren’t all that different from a spear or a rifle, apart from the scale of destruction and the distance from which death can be dealt. Without consciousness, a missile is no different from a spear or bullet, nor is the remote controlled machine it is launched from. It is the act of hitting the fire button that is most significant, but proximity is important too. If an operator is thousands of miles away and isn’t physically threatened, or perhaps has never even met people from the target population, other ethical issues start emerging. But those are ethical issues for the people, not the machine.

Adding artificial intelligence to let a machine decide whether a human is to be killed or not isn’t difficult per se. If you don’t care about killing innocent people, it is pretty easy. It is only made difficult because civilised countries value human lives, and because they distinguish between combatants and civilians.

Personally, I don’t fully understand the distinction between combatants and civilians. In wars, combatants often have no real choice but to fight, or are conscripted, and they are usually told what to do, often by civilian politicians hiding in far-away bunkers, with strong penalties for disobeying. If a country goes to war on the basis of a democratic mandate, then surely everyone in the electorate is guilty, even pacifists, who accept the benefits of living in the host country but would prefer to avoid the costs. Children are the only innocents.

In my analysis, soldiers in a democratic country are public sector employees like any other, just doing a job on behalf of the electorate. But that depends to some degree on them keeping their personal integrity and human judgement. The many military personnel who take pride in following orders could be thought of as dehumanised, reduced to killing machines; many would actually be proud to be thought of that way. A soldier like that, who merely follows orders, deliberately abdicates human responsibility. Having access to the capability for good judgement, but refusing to use it, they reduce themselves to a lower moral level than a drone. At least a drone doesn’t know what it is doing.

On the other hand, disobeying a direct order may soothe issues of conscience but invoke huge personal costs, anything from shaming and peer disapproval to execution. Balancing that is a personal matter, but it is the act of balancing it that is important, not necessarily the outcome. Giving some thought to the matter and wrestling at least a bit with conscience before acting makes all the difference. That is something a drone can’t yet do.

So even at the start, the difference between a drone and at least some soldiers is not always as big as we might want it to be; for other soldiers it is huge. A killing machine is competing against a grey scale of judgement and morality, not a black and white equation. In those circumstances, in a military that highly values following orders, human judgement is already no longer an essential requirement at the front line. In that case, the leaders might set the drones into combat with a defined objective, the human decision already taken by them, the local judgement of who or what to kill assigned to adaptive AI, algorithms and sensor readings. For a military such as that, drones are no different to soldiers who do what they’re told.

However, if the distinction between combatant and civilian is required, then someone has to decide the relative value of different classes of lives. Then they either have to teach it to the machines so they can make the decision locally, or the costs of potential collateral damage from just killing anyone can be put into the equations at head office. Or thirdly, and most likely in practice, a compromise can be found where some judgement is made in advance and some locally. Finally, it is even possible for killing machines to make decisions on some easier cases and refer difficult ones to remote operators.

We live in an electronic age, with face recognition, friend-or-foe electronic ID, web searches, social networks, location and diaries, mobile phone signals and lots of other clues that might give some knowledge of a target and potential casualties. How important is it to kill or protect this particular individual or group, or take that particular objective? How many innocent lives are acceptable cost, and from which groups – how many babies, kids, adults, old people? Should physical attractiveness or the victim’s profession be considered? What about race or religion, or nationality, or sexuality, or anything else that could possibly be found out about the target before killing them? How much should people’s personal value be considered, or should everyone be treated as equal at the point of potential death? These are tough questions, but the means of getting hold of the data are improving fast and we will be forced to answer them. By the time truly intelligent drones are capable of making human-like decisions, they may well know who they are killing.

In some ways this far future, with a smart or even conscious drone or robot making informed decisions before killing people, isn’t as scary as the time between now and then. Terminator and Robocop may be nightmare scenarios, but at least in those there is clarity about which side is the enemy. Machines don’t yet have anywhere near that capability. However, if an objective is considered valuable, military leaders could already set a machine to kill people even when there is little certainty about the role or identity of the victims. They may put in some algorithms and crude AI to improve performance or reduce errors, but the algorithmic uncertainty and the callous, uncaring dispatch of potentially innocent people are very worrying.

Increasing desperation could be expected to lower the barriers to use. So could a lower regard for the value of human life, and in tribal conflicts people often don’t consider the lives of the opposition to have much value. This is especially true in terrorism, where the objective is often to kill innocent people. It might not matter that the drone doesn’t know who it is killing, as long as it might be killing the right target as part of the mix. I think it is reasonable to expect a lot of battlefield use, and certainly terrorist use, of semi-smart robots and drones that kill relatively indiscriminately. Even when truly smart machines arrive, they might be set to malicious goals.

Then there is the possibility of rogue drones and robots. The Terminator/Robocop scenario. If machines are allowed to make their own decisions and then to kill, can we be certain that the safeguards are in place that they can always be safely deactivated? Could they be hacked? Hijacked? Sabotaged by having their fail-safes and shut-offs deactivated? Have their ‘minds’ corrupted? As an engineer, I’d say these are realistic concerns.

All in all, it is a good thing that concern is rising and we are seeing more debate. It is late, but not too late, to make good progress to limit and control the future damage killing machines might do. Not just directly in loss of innocent life, but to our fundamental humanity as armies get increasingly used to delegating responsibility to machines to deal with a remote dehumanised threat. Drones and robots are not the end of warfare technology, there are far scarier things coming later. It is time to get a grip before it is too late.

When people fought with sticks and stones, at least they were personally involved. We must never allow personal involvement to disappear from the act of killing someone.

Towards the singularity

This entry now forms a chapter in my book Total Sustainability, available from Amazon in paper or ebook form.

Could graphene foam be a future Helium substitute?

I just did a back-of-the-envelope calculation to work out what size of sphere containing a vacuum would give the same average density as helium at room temperature, if the sphere is made of graphene, the new one-size-does-everything-you-can-imagine wonder material.

Why? Well, the Yanks have just prototyped a big airship and it uses helium for buoyancy. http://www.dailymail.co.uk/sciencetech/article-2257201/The-astonishing-Aeroscraft–new-type-rigid-airship-thats-set-revolutionise-haulage-tourism–warfare.html

Helium weighs 0.164 kg per cubic metre. Graphene sheet weighs only 0.77 mg per square metre. Mind you, the data source was Wikipedia, so don’t start a business based on this without checking! If you could make a sphere out of a single layer of graphene, and have a vacuum inside (graphene is allegedly impervious to gas), it would become less dense than helium at radii above 0.014 mm. Wow! That’s very small. I expected ping-pong-ball sizes when I started, and knew that would never work because large thin spheres would be likely to collapse. 14-micron spheres are too small to see with the naked eye, not much bigger than skin cells; maybe they would work OK.
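The envelope maths is easy to check: a thin spherical shell of areal density σ has mass 4πr²σ and volume (4/3)πr³, so its average density is 3σ/r, and the break-even radius against helium is r = 3σ/ρ. Using the figures from the post:

```python
RHO_HELIUM = 0.164        # kg/m^3, helium at room temperature (figure from the post)
SIGMA_GRAPHENE = 0.77e-6  # kg/m^2, areal density of single-layer graphene

def shell_density(radius_m, sigma=SIGMA_GRAPHENE):
    """Average density of an evacuated thin graphene sphere: 3*sigma/r."""
    return 3 * sigma / radius_m

# radius at which the evacuated sphere exactly matches helium's density
r_breakeven = 3 * SIGMA_GRAPHENE / RHO_HELIUM
print(r_breakeven * 1e6)  # ~14.1 microns -- the 0.014 mm in the text
```

Note the 1/r scaling: bigger spheres are strictly better for buoyancy, which is why the real limit is mechanical (collapse of a large thin shell under atmospheric pressure), not density.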

Confession time now. I have no idea whether a single layer of graphene is absolutely impervious to gas, it says so on some websites but it says a lot of things on some websites that are total nonsense.

The obvious downside, even if it could work, is that graphene is still very expensive, but everything is when it starts off. Imagine how much you could sell a plastic cup for to an Egyptian Pharaoh.

Helium is an endangered resource. We use it for party balloons and then it goes into the atmosphere and from there leaks into space. It is hard to replace, at least for the next few decades. If we could use common elements like carbon as a substitute that would be good news. Getting the cost of production down is just engineering and people are good at that when there is an incentive.

So in the future, maybe we could fill party balloons and blimps with graphene foam. You could make huge airships happily with it, ones that don't need helium or hydrogen.

Tiny particles that size readily behave as a fluid and can easily be pumped. You could make lighter-than-air building materials for ultra-tall skyscrapers, launch platforms, floating Avatar-style sky islands and so on.

You could also make small clusters of them to carry tiny payloads for espionage or terrorism. Floating invisibly tiny particles of clever electronics around has good and bad uses. You could distribute explosives with floating particles that congeal into whatever shape you want on whatever target you want using self-organisation and liberal use of EM fields. I don’t even have that sort of stuff on Halo. I’d better stop now before I start laughing evilly and muttering about taking over the world.

The future of time travel: cheat

Time travel comes up frequently in science fiction, and some physicists think it might be theoretically possible, to some degree, within major constraints, at vast expense, between times that are in different universes. Frankly, my physics is rusty and I don't have any useful contribution to make on how we might do physical time travel, nor on its potential. However, the intelligence available to us to figure the full physics out will accelerate dramatically thanks to the artificial intelligence positive feedback loop (smarter machines can build even smarter ones even faster), and some time later this century we will definitely work out once and for all whether it is doable in real life and how to do it. And we'll know why we never meet time tourists. If it can be done, and done reasonably economically and safely, then it will just be a matter of time to build it after that.

Well, stuff that! Not interested in waiting! If the laws of physics make it so hard that it may never happen and certainly not till at least towards the end of this century, even if it is possible, then let’s bypass the laws of physics. Engineers do that all the time. If you are faced with an infinitely tall impenetrable barrier so you can’t go over it or through it, then check whether the barrier is also very wide, because there may well be an easy route past the barrier that doesn’t require you to go that way. I can’t walk over tall buildings, but I still haven’t found one I couldn’t walk past on the street. There is usually a way past barriers.

And with time travel, that turns out to be the case. There is an easy route past. Physics only controls the physical world. Although physics certainly governs the technologies we use to create cyberspace, it doesn't really limit what you can do in cyberspace any more than in a dream, a costume drama, or a memory.

Cyberspace takes many forms; it isn't homogeneous or even continuous. It has many dimensions. It can be quite alien. But in some areas, such as websites, archives are kept and you can look at how a site was in the past. Extend that to social networking and a problem immediately appears. How can you communicate or interact with someone if the site you are on is just an historical snapshot and isn't live? How could you go back and actually chat to someone or play a game against them?

The solution to this problem is a tricky technological one, but it is entirely possible, and it won't violate any physics. If you want to go back in time and interact with people as they were, then all you need is an archive of those people. Difficult, but possible. In cyberspace.

Around 2050, we should be starting to do direct brain links, at least in the lab and maybe a bit further. Not just connections to the optic nerve or inner ear, or chips to control wheelchairs, we already have that. And we already have basic thought recognition. By 2050 we will be starting to do full links, that allow thoughts to pass both ways between man and machine, so that the machine world is effectively an extension of your brain.

As people's thoughts, memories and even sensations become more cyberspace based, as they will, the physical body will become less relevant. (Some of my previous blogs have considered the implication of this for immortality.) Once stuff is in the IT world, it can be copied and backed up. That gives us the potential to make recordings of people's entire lives, and to effectively replicate them at will. Today we have web archives that try to do that with web sites, so you can access material on older versions of them. Tomorrow we'll also be able to include people in that. Virtually replicating the buildings and other stuff would be pretty trivial by comparison.

In that world, it will be possible for your mind, which is itself an almost entirely online entity, to interact with historic populations, essentially to time travel. Right back to the date when they first started being backed up, some time after 2050. The people you would be dealing with would be the same actual people that existed then, exactly as they were, perfect copies. They would behave and respond exactly the same. So you could use this technique to time travel back to 2050 at the very best, but no earlier. And for a proper experience it would be much later, say 2100.

And then it starts to get interesting. In an electronic timeline such as that, the interactions you have with those people in the past would have two options. They could be just time tourism, or social research, or other archaeology, which has no lasting effect, and any traces of your trip would vanish when you leave. Or they could be more effectual. The interactions you have when you visit could ripple all the way back through the timeline to your 'present?', or future? or was it the past when you were present in the future? (it is really hard to choose the right tenses when you write about time travel!!). The computers could make it all real, running the entire society through its course at a greatly accelerated speed. The interactions could therefore be quite real, and all the interactions and all the minds and the rippling social effects could all be implemented. But the possibilities branch again, because although that could be true, and the future society could be genuinely changed, it could also be done by entirely replicating the cyberworld, and implementing the effects only in the parallel new cyber-universe. Doing either of these effectual options might prove very expensive, and obviously dangerous. Replicating things can be done, but you need a lot of computer power and storage to do it with everything affected, so it might be severely restricted. And policed.

But importantly, this sort of time travel could be done – you could go back in time to change the present. All the minds of all the people could be changed by someone going back into the past cyberspace records and doing something that would ripple forwards through time to change those same minds. It couldn't be made fully clean, because some people, for example, might choose not to have kids in the revised edition, and although the cyberspace presence of their minds could be changed or deleted, you'd still have to dispose of their physical bodies and tidy up other physical residual effects. But not being clean is one of the things we'd expect of time travel. There would be residues, mess, paradoxes, and presumably this would all limit the things you'd be allowed to mess with. And we will need the time cops and time detectives and licences and time cleaners and administrators and so on. But in our future cyberspace world, TIME TRAVEL WILL BE POSSIBLE. I can't shout that loud enough. And please don't ignore the italics: I am absolutely not suggesting it will be doable in the real world.

Fun! Trouble is, I’m going to be 90 in 2050 so I probably won’t have the energy any more.

Nuclear weapons + ?

I was privileged and honoured in 2005 to be elected one of the Fellows of the World Academy of Art and Science. It is a mark of recognition and distinction that I wear with pride. The WAAS was set up by Einstein, Oppenheimer, Bertrand Russell and a few other great people, as a forum to discuss the big issues that affect the whole of humanity, especially the potential misuse of scientific discoveries, and by extension, technological developments. Not surprisingly therefore, one of their main programs from the outset has been the pursuit of the abolition of nuclear weapons. It's a subject I have never written about before, so maybe now is a good time to start. Most importantly, I think it's now time to add others to the list.

There are good arguments on both sides of this issue.

In favour of nukes, it can be argued from a pragmatic stance that the existence of nuclear capability has contributed to reduction in the ferocity of wars. If you know that the enemy could resort to nuclear weapon use if pushed too far, then it may create some pressure to restrict the devastation levied on the enemy.

But this only works if both sides value the lives of their citizens sufficiently. If a leader thinks he may survive such a war, or doesn't mind risking his life for the cause, then the deterrent ceases to work properly. An all-out global nuclear war could kill billions of people and leave the survivors in a rather unpleasant world. As Einstein observed, he wasn't sure what weapons World War 3 would be fought with, but World War 4 would be fought with sticks and stones. Mutually assured destruction may work to some degree as a deterrent, but it is based on second-guessing a madman. It isn't a moral argument, just a pragmatic one. Wear a big enough bomb, and people might give you a wide berth.

Against nukes, it can be argued on a moral basis that such weapons should never be used in any circumstances, since their capability to cause devastation is beyond the limits that should be tolerated by any civilisation. Furthermore, any resources spent on creating and maintaining them are therefore wasted and could have been put to better, more constructive use.

This argument is appealing, but lacks pragmatism in a world where some people don’t abide by the rules.

Pragmatism and morality often align with the right and left of the political spectrum, but there is a solution that keeps both sides happy, albeit an imperfect one. If all nuclear weapons can be removed, and stay removed, so that no-one has any or can build any, then pragmatically, there could be even more wars, and they may be even more prolonged and nasty, but the damage will be kept short of mutual annihilation. Terrorists and mad rulers wouldn’t be able to destroy us all in a nuclear Armageddon. Morally, we may accept the increased casualties as the cost of keeping the moral high ground and protecting human civilisation. This total disarmament option is the goal of the WAAS. Pragmatic to some degree, and just about morally digestible.

Another argument that is occasionally aired is the 'what if?' WW2 scenario. What if nuclear weapons hadn't been invented? More people would probably have died in a longer WW2. If they had been invented and used earlier by the other side, and the Germans had won, perhaps we would have ended up with a unified Europe with the Germans in the driving seat. Would that be hugely different from the Europe we actually have 65 years later anyway? Are even major wars just fights over the nature of our lives over a few decades? What if the Romans or the Normans or Vikings had been defeated? Would Britain be so different today? 'What if?' debates get you little except interesting debate.

The arguments for and against nuclear weapons haven't really moved on much over the years, but now the scope is changing a bit. They are as big a threat as ever, maybe more so with the increasing possibility of rogue regimes and terrorists getting their hands on them, but we are adding other technologies that are potentially just as destructive, in principle anyway, and they could be weaponised if required.

One path to destruction that has entered a new phase in the last few years is our messing around with the tools of biology. Biotechnology and genetic modification, synthetic biology, and the linking of external technology into our nervous systems are distinct strands of this threat, but each of them is developing quickly. What links them all is the increasing understanding, harnessing and ongoing development of processes similar to those that nature uses to make life. We start with what nature provides, reverse engineer some of the tools, improve on them, adapt and develop them for particular tasks, and then use these to do stuff that improves on or interacts with natural systems.

Alongside nuclear weapons, we have already become used to the bio-weapons threat based on genetically modified viruses or bacteria, and also to weapons using nerve gases that inhibit neural functioning to kill us. But not far away is biotech designed to change the way our brains work, potentially to control or enslave us. It is starting benignly of course, helping people with disabilities or nerve or brain disorders. But some will pervert it.

Traditional war has been based on causing enough pain to the enemy until they surrender and do as you wish. Future warfare could be based on altering their thinking until it complies with what you want, making an enemy into a willing ally, servant or slave. We don’t want to lose the great potential for improving lives, but we shouldn’t be naive about the risks.

The broad convergence of neurotechnology and IT is a particularly dangerous area. Adding artificial intelligence into the mix opens the possibility of smart adapting organisms as well as Terminator-style threats: organisms that can survive in multiple niches, or hybrid nature/cyberspace ones that use external AI to redesign their offspring to colonise others; organisms that penetrate your brain and take control.

Another dangerous offspring from better understanding of biology is that we now have clubs where enthusiasts gather to make genetically modified organisms. At the moment, this is benign novelty stuff, such as transferring a bio-luminescence gene or a fluorescent marker to another organism, just another after-school science club for gifted school-kids and hobbyist adults. But it is I think a dangerous hobby to encourage. With better technology and skill developing all the time, some of those enthusiasts will move on to designing and creating synthetic genes, some won’t like being constrained by safety procedures, and some may have accidents and release modified organisms into the wild that were developed without observing the safety rules. Some will use them to learn genetic design, modification and fabrication techniques and then work in secret or teach terrorist groups. Not all the members can be guaranteed to be fine upstanding members of the community, and it should be assumed that some will be people of ill intent trying to learn how to do the most possible harm.

At least a dozen new types of WMD are possible based on this family of technologies, even before we add in nanotechnology. We should not leave it too late to take this threat seriously. Whereas nuclear weapons are hard to build and require large facilities that are hard to hide, much of this new stuff can be done in garden sheds or ordinary office buildings. They are embryonic and even theoretical today, but that won’t last. I am glad to say that in organisations such as the Lifeboat Foundation (lifeboat.com), in many universities and R&D labs, and doubtless in military ones, some thought has already gone into defence against them and how to police them, but not enough. It is time now to escalate these kinds of threats to the same attention we give to the nuclear one.

With a global nuclear war, much of the life on earth could be destroyed, and that will become possible with the release of well-designed organisms. But I doubt if I am alone in thinking that the possibility of being left alive with my mind controlled by others may well be a fate worse than death.

Blocking Pirate Bay makes little sense

http://www.telegraph.co.uk/technology/news/9236667/Pirate-Bay-must-be-blocked-High-Court-tells-ISPs.html Justice Arnold ruled that ISPs must block their customers from accessing Pirate Bay. Regardless of the morality or legality of Pirate Bay, forcing ISPs to block access to it will cause them inconvenience and costs, but won’t fix the core problem of copyright materials being exchanged without permission from the owners.

I have never looked at the Pirate Bay site, but I am aware of what it offers. It doesn’t host material, but allows its users to download from each other. By blocking access to the Bay, the judge blocks another one of billions of ways to exchange data. Many others exist and it is very easy to set up new ones, so trying to deal with them one by one seems rather pointless. Pirate Bay’s users will simply use alternatives. If they were to block all current file sharing sites, others would spring up to replace them, and if need be, with technological variations that set them outside of any new legislation. At best judges could play a poor catch-up game in an eternal war between global creativity and the law. Because that is what this is.

Pirate Bay can only be blocked because it is possible to identify it and put it in court. It is possible to write software that doesn’t need a central site, or indeed any legally identifiable substance. It could for example be open-source software written and maintained by evolving adaptive AI, hidden behind anonymity, distributed algorithms and encryption walls, roaming freely among web servers and PCs, never stopping anywhere. It could be untraceable. It could use combinations of mobile or fixed phone nets, the internet, direct gadget-gadget comms and even use codes on other platforms such as newspapers. Such a system would be dangerous to build from a number of perspectives, but may be forced by actions to close alternatives. If people feel angered by arrogance and greed, they may be pushed down this development road. The only way to fully stop such a system would be to stop communication.

The simple fact is that technology that we depend on for most aspects of our lives also makes it possible to swap files, and to do so secretly as needed. We could switch it off, but our economy and society would collapse. To pretend otherwise is folly. Companies that feel abused should recognise that the world has moved on and they need to adapt their businesses to survive in the world today, not ask everyone to move back to the world of yesterday so that they can cope. Because we can’t and shouldn’t even waste time trying to. My copyright material gets stolen frequently. So what? I just write more. That model works fine for me. It ain’t broke, and trying to fix it without understanding how stuff works won’t protect anyone and will only make it worse for all of us.