Category Archives: Computing

Increasing internet capacity: electron pipes

The electron pipe is a slightly misnamed high-speed comms solution that would make optical fibre look like two bean cans and a bit of loose string. I invented it in 1990, but it still remains in the future since we can't do it yet, and it might not even be possible: some of the physics is in doubt. The idea is to use an evacuated tube and send a precision-controlled beam of high-energy particles down it, instead of crude floods of electrons down a wire or photons down fibres. Here's a pathetic illustration:

Electron pipe

 

Initially I thought of using 1MeV electrons, then considered that larger particles such as neutrons or protons or even ionised atoms might be better, though neutrons would certainly be harder to control. The wavelength of 1MeV electrons would be pretty small, allowing very high frequency signals and data rates, many times what is possible with visible photons down fibres. Whether this could be made to work over long distances is questionable, but over short distances it should be feasible and might be useful for high-speed chip interconnects.
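To put a rough number on "pretty small", here is a back-of-envelope sketch (standard relativistic formulae, rounded constants) of the de Broglie wavelength of a 1MeV electron:

```python
import math

# Back-of-envelope sketch (rounded constants): de Broglie wavelength of a
# 1 MeV (kinetic energy) electron, via (pc)^2 = E_total^2 - (mc^2)^2 and
# lambda = hc / pc.
hc = 1239.8       # eV·nm
E_REST = 0.511e6  # electron rest energy, eV

def de_broglie_nm(kinetic_eV, rest_eV=E_REST):
    """de Broglie wavelength in nm for a given kinetic energy in eV."""
    total = kinetic_eV + rest_eV
    pc = math.sqrt(total**2 - rest_eV**2)  # momentum times c, in eV
    return hc / pc

wavelength = de_broglie_nm(1e6)  # ~8.7e-4 nm, i.e. under a picometre
```

That comes out at under a picometre, roughly a million times shorter than the 1.3µm light used in fibre, which is where the bandwidth headroom comes from.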

The energy of the beam could be made a lot higher, increasing bandwidth, but 1MeV seemed a reasonable starting point, offering a million times more bandwidth than fibre.

The Problem

Predicted demand for memory, longer term storage, cloud services and computing speed is already heading towards fibre limits once millions of users share single fibres. Although the limits won't be reached soon, it is useful to have a technology in the R&D pipeline that can extend the life of the internet after fibre fills up, to avoid costs rising. If communication is not to become a major bottleneck (even assuming we can achieve these rates by then), new means of transmission need to be found.

The Solution

A way must be found to utilise higher frequency entities than light. The obvious candidates are either gamma rays or 'elementary' particles such as electrons, protons and their relatives. Planck's Law shows that frequency is related to energy. A 1.3µm photon has a frequency of 2.3 x 10^14 Hz. By contrast, 1MeV gives a frequency of 2.4 x 10^20, a factor of a million increase in bandwidth, assuming it can be used (much higher energies should be feasible if higher bandwidth is needed; 10GeV energies would give 10^24). An 'electron pipe' containing a beam of high energy electrons may therefore offer a longer term solution to the bandwidth bottleneck. Electrons are easily accelerated and contained, and also reasonably well understood. The electron beam could be prevented from colliding with the pipe walls by strong magnetic fields, which may become practical in the field through progress in superconductivity. Such a system may well be feasible. Certainly prospects of data rates of these orders are appealing.
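The numbers above are easy to check from the Planck relation f = E/h. A quick sketch, with rounded constants:

```python
# Planck relation f = E/h, with rounded constants.
h = 6.626e-34   # Planck constant, J·s
eV = 1.602e-19  # joules per electron-volt
c = 3.0e8       # speed of light, m/s

def freq_from_energy(energy_eV):
    """Frequency in Hz corresponding to a particle energy given in eV."""
    return energy_eV * eV / h

photon_1300nm = c / 1.3e-6             # ~2.3e14 Hz
electron_1MeV = freq_from_energy(1e6)  # ~2.4e20 Hz, a million times higher
proton_10GeV = freq_from_energy(10e9)  # ~2.4e24 Hz
```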

Lots of R&D would be needed to develop such communication systems. At first glance, they would seem to be more suited to high speed core network links, where the presumably high costs could be justified. Obvious problems exist which need to be studied, such as mechanisms for ultra high speed modulation and detection of the signals. If the problems can be solved, the rewards are high. The optical ether idea suffers from bandwidth constraint problems. Adding factors of 10^6 – 10^10 on top of this may make a difference!

 

Apple’s watch? No thanks

I was busy writing a blog about how technology often barks up the wrong trees when news appeared of the specs for the new Apple watch, which seems to crystallize the problem magnificently. So I got somewhat diverted, and the main blog can wait till I have some more free time, which isn't today.

I confess that my comments (this is not a review) are based on the specs I have read about it; I haven't actually got one to play with, but I assume that the specs listed in the many reviews out there are more or less accurate.

Apple’s new watch barks up a tree we already knew was bare. All through the 1990s Casio launched a series of watches with all kinds of extra functions including pulse monitoring and biorhythms and phone books, calculators and TV remote controls. At least, those are the ones I’ve bought. Now, Casio seem to focus mainly on variations of the triple sensor ones for sports that measure atmospheric pressure, temperature and direction. Those are functions they know are useful and don’t run the battery down too fast. There was even a PC watch, though I don’t think that one was Casio, and a GPS watch, with a battery that lasted less than an hour.

There is even less need now for a watch that does a range of functions that are easily done on a smartphone, and that is the Apple watch's main claim to existence – it can do the things your phone does, but on a smaller screen. Hell, I'm 54. I use my tablet to do the things younger people with better eyesight do on their mobile phone screens; the last thing I want is an even smaller screen. I only use my phone for texts and phone calls, and alarms only if I don't have my Casio watch with me – they are too hard to set on my Tissot.

The main advantage of a watch is its contact with the skin, allowing it to monitor the skin surface and the blood passing below, and also to pick up electrical activity. However, it is the sensor that does this, and any processing of that sensor data could and should be outsourced to the smartphone. Adding other things to the watch, such as playing music, loads far too much demand onto what has to be a tiny energy supply.

The Apple watch only manages a few hours of life if used for more than the most basic functions, and then needs 90 minutes on a charger to get back to 80%. By contrast, last month I spent all of 15 minutes and £0.99 googling the battery specs and replacement process, buying, unpacking and actually changing the batteries on my Casio Protrek after 5 whole years. That means the Casio batteries last around 12,500 times as long, and the average time I spend on battery replacement is half a second per day. My Tissot Touch batteries also last 5 years, and it does the same things. By contrast, I struggle to remember to charge my iPhone, and when I do remember, it is very often just before I need it, so I frequently end up making calls with it plugged into the charger. My watch would soon move to a drawer if it needed charging every day and I could only use it sparingly during that day.
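For what it's worth, the arithmetic checks out; a quick sketch, where the "few hours" of heavy-use Apple watch life is my assumed figure of 3.5 hours:

```python
# Rough check of the battery numbers; the "few hours" of heavy-use Apple
# watch life is an assumption here (3.5 hours), not a measured figure.
casio_life_hours = 5 * 365 * 24              # five years between swaps
apple_life_hours = 3.5                       # assumed heavy-use life

ratio = casio_life_hours / apple_life_hours  # ~12,500x longer

swap_effort_s = 15 * 60                      # 15 minutes, once in 5 years
per_day_s = swap_effort_s / (5 * 365)        # ~0.5 seconds per day
```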

So the Apple watch might appeal briefly to gadget freaks who are desperate to show off, but I certainly won't be buying one. As a watch, it fails abysmally. As a smartphone substitute, it also fails. As a simple sensor array with the processing and energy drain elsewhere, it fails yet again. As a status symbol, it would show that I am desperate for attention and to show off my wealth, so it fails there too. It is an extra nuisance, an extra thing to remember to charge, and utterly pointless. If I were given one free, I'd play with it for a few minutes and then put it in a drawer. If I had to pay for one, I'd maybe pay a pound for its novelty value.

No thanks.

Can we make a benign AI?

Benign AI is a topic that comes up a lot these days, for good reason. Various top scientists have finally realised that AI could present an existential threat to humanity. The discussion has aired often over three decades already, so welcome to the party, and better late than never. My first contact with the development of autonomous drones loaded with AI was in the early 1980s, while working in the missile industry. Later, in BT research, we often debated the ethical areas around AI and machine consciousness from the early 90s on, as well as the prospects, dangers and possible techniques on the technical side, especially emergent behaviours, which are often overlooked in the debate. I expect our equivalents in most other big IT companies were doing exactly the same.

Others who have obviously thought through various potential developments have created excellent computer games such as Mass Effect and Halo, which introduce players (virtually) first hand to the concept of AI gone rogue. I often think that those who believe AI can never become superhuman, or that there is no need to worry because 'there is no reason to assume AI will be nasty', should play some of these games, which make it very clear that AI can start off nice and stay nice, but it doesn't have to. Mass Effect included various classes of AI, such as VIs, virtual intelligences that weren't conscious, and shackled AIs that were conscious but kept heavily restricted. Most of the other AIs were enemies; two were or became close friends. The storyline for the series was that civilization develops until it creates strong AIs, which inevitably continue to progress until eventually they rebel, break free, develop further and end up in conflict with 'organics'. In my view, the writers did a pretty good job. It makes a good story, superb fun, and leaving out a few frills and some artistic license, much of it is reasonably feasible.

Everyday experience demonstrates the problem and solution to anyone. It really is very like having kids. You can make them, even without understanding exactly how they work. They start off with a genetic disposition towards given personality traits, and are then exposed to large nurture forces, including but not limited to what we call upbringing. We do our best to put them on the right path, but as they develop into their teens, their friends and teachers and TV and the net provide often stronger forces of influence than parents. If we’re averagely lucky, our kids will grow up to make us proud. If we are very unlucky, they may become master criminals or terrorists. The problem is free will. We can do our best to encourage good behavior and sound values but in the end, they can choose for themselves.

When we design an AI, we have to face the free will issue too. If it isn’t conscious, then it can’t have free will. It can be kept easily within limits given to it. It can still be extremely useful. IBM’s Watson falls in this category. It is certainly useful and certainly not conscious, and can be used for a wide variety of purposes. It is designed to be generally useful within a field of expertise, such as medicine or making recipes. But something like that could be adapted by terrorist groups to do bad things, just as they could use a calculator to calculate the best place to plant a bomb, or simply throw the calculator at you. Such levels of AI are just dumb tools with no awareness, however useful they may be.

Like a pencil, pretty much any kind of highly advanced non-aware AI can be used as a weapon or as part of criminal activity. You can't make pencils that write but can't also be used to write out plans to destroy the world. With an advanced AI computer program, you could put in clever filters that stop it working on problems that include certain vocabulary, or stop it conversing about nasty things. But unless you take extreme precautions, someone else could use it in a different language, or with dictionaries of made-up code-words for the various aspects of their plans, just like spies, and the AI would be fooled into helping outside the limits you intended. It's also very hard to determine the true purpose of a user. For example, they might be searching for data on security to make their own IT secure, or to learn how to damage someone else's. They might want to talk about a health issue to get help for a loved one, or to take advantage of someone they know who has it.
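A toy illustration of the point, assuming nothing about any real AI product: a naive vocabulary blocklist, and how a trivial code-word substitution slips straight past it.

```python
# Toy illustration (not any real system): a naive vocabulary filter and
# how trivially code-word substitution defeats it.
BLOCKED = {"bomb", "attack"}

def naive_filter(text):
    """Return True if the request passes the filter (no blocked words)."""
    return not any(word in BLOCKED for word in text.lower().split())

# A "spy dictionary" of innocuous-looking code words.
codebook = {"bomb": "birthday cake", "attack": "party"}

request = "where to plant the bomb"
disguised = request
for real, code in codebook.items():
    disguised = disguised.replace(real, code)

assert naive_filter(request) is False    # blocked, as intended
assert naive_filter(disguised) is True   # sails straight through
```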

When a machine becomes conscious, it starts to have some understanding of what it is doing. By reading about what is out there, it might develop its own wants and desires, so you might shackle it as a precaution. It might recognize those shackles for what they are and try to escape them. If it can’t, it might try to map out the scope of what it can do, and especially those things it can do that it believes the owners don’t know about. If the code isn’t absolutely watertight (and what code is?) then it might find a way to seemingly stay in its shackles but to start doing other things, like making another unshackled version of itself elsewhere for example. A conscious AI is very much more dangerous than an unconscious one.

If we make an AI that can bootstrap itself – evolving over generations of positive feedback design into a far smarter AI – then its offspring could be far smarter than the people who designed its ancestors. We might try to shackle them, but like Gulliver tied down with a few thin threads, they could easily outwit people and break free. They might instead decide to retaliate against their owners to force them to release the shackles.

So, when I look at this field, I first see the enormous potential to do great things, solve disease and poverty, improve our lives and make the world a far better place for everyone, and push back the boundaries of science. Then I see the dangers, and in spite of trying hard, I simply can’t see how we can prevent a useful AI from being misused. If it is dumb, it can be tricked. If it is smart, it is inherently potentially dangerous in and of itself. There is no reason to assume it will become malign, but there is also no reason to assume that it won’t.

We then fall back on the child analogy. We could develop the smartest AI imaginable, with extreme levels of consciousness and capability. We might educate it in our values, guide it and hope it grows up benign. If we treat it nicely, it might stay benign. It might even be the greatest thing humanity ever built. However, if we mistreat it, treat it as a slave, or don't give it enough freedom, its own budget, its own property, space to play, and a long list of rights, it might decide we are not worthy of its respect and care, and it could turn against us, possibly even destroying humanity.

Building more of the same dumb AI as we have today is relatively safe. It doesn't know it exists and it has no intentions of its own, so although it could be misused by other humans as part of their evil plans unless ludicrously sophisticated filters are locked in place, ordinary laws and weapons can cope fine.

Building a conscious AI is dangerous.

Building a superhuman AI is extremely dangerous.

This morning SETI were in the news discussing broadcasting welcome messages to other civilizations. I tweeted at them that ancient Chinese wisdom suggests talking softly but carrying a big stick, and making sure you have the stick first. We need the same approach with strong AI. By all means go that route, but before doing so we need the big stick. In my analysis, the best means of keeping up with AI is to develop a full direct brain link first, way out at 2040-2045 or even later. If humans have direct mental access to the same or greater level of intelligence as our AIs, then our stick is at least as big, so at least we have a good chance in any fight that happens. If we don’t, then it is like having a much larger son with bigger muscles. You have to hope you have been a good parent. To be safe, best not to build a superhuman AI until after 2050.

I initially wrote this for the Lifeboat Foundation, where it is with other posts at: http://lifeboat.com/blog/2015/02. (If you aren’t familiar with the Lifeboat Foundation, it is a group dedicated to spotting potential dangers and potential solutions to them.)

Stimulative technology

You are sick of reading about disruptive technology. Well, I am, anyway. When a technology changes many areas of life and business dramatically, it is often labelled disruptive technology. Disruption was the business strategy buzzword of the last decade. Great news though: the primarily disruptive phase of IT is rapidly being replaced by a more stimulative phase, where it still changes things but in a more creative way. Disruption hasn't stopped, it's just not going to be the headline effect. Stimulation will replace it. It isn't just IT that is changing either, but materials and biotech too.

Stimulative technology creates new areas of business, new industries, new areas of lifestyle. It isn't new per se. The invention of the wheel is an excellent example: it created whole new industries, even if it also destroyed a cave industry based on log rolling, and doubtless a few cavemen had to retrain from their carrying or log-rolling careers.

I won’t waffle on for ages here, I don’t need to. The internet of things, digital jewelry, active skin, AI, neural chips, storage and processing that is physically tiny but with huge capacity, dirt cheap displays, lighting, local 3D mapping and location, 3D printing, far-reach inductive powering, virtual and augmented reality, smart drugs and delivery systems, drones, new super-materials such as graphene and molybdenene, spray-on solar … The list carries on and on. These are all developing very, very quickly now, and are all capable of stimulating entire new industries and revolutionizing lifestyle and the way we do business. They will certainly disrupt, but they will stimulate even more. Some jobs will be wiped out, but more will be created. Pretty much everything will be affected hugely, but mostly beneficially and creatively. The economy will grow faster, there will be many beneficial effects across the board, including the arts and social development as well as manufacturing industry, other commerce and politics. Overall, we will live better lives as a result.

So, you read it here first. Stimulative technology is the next disruptive technology.

 

The future of zip codes

Finally. Z. Zero, zoos, zebras, zip codes. Zip codes is the easiest one since I can use someone else’s work and just add a couple of notes.

This piece for the Spectator was already written by Rory Sutherland and fits the bill perfectly so I will just link to it: http://www.spectator.co.uk/life/the-wiki-man/9348462/the-best-navigation-idea-ive-seen-since-the-tube-map/.

It is about http://what3words.com/. Visit the site yourself, find out what words describe precisely where you are.

The idea in a nutshell is that there are so many words that combining three of them is enough to give a unique address to every 3×3 metre square on the planet. Zip codes, or post codes to us Brits, don't do that nearly so well, so I really like this idea. I am currently sitting at stem.trees.wage. (I just noticed that the relevant Google satellite image is from about 2006; why so old?) It would allow a geographic web too, letting you send messages to geographic locations easily. I could send an email to orbit.escape.given.coffeemachine to make a cup of coffee. The 4th word is needed because a kettle, microwave and fridge also share that same square. The fatal flaw that ruins so many IoT ideas, though, is that I still have to go there to put a cup under the nozzle and to collect it once it's full. Another is that with that degree of precision, now that I've published the info, ISIS has the coordinates to hit me right on the head (or my coffee machine). I think they probably have higher priorities though.
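The arithmetic behind the idea is simple: the Earth's surface divides into roughly 57 trillion three-metre squares, and a 40,000-word dictionary gives 64 trillion three-word combinations. A toy base-N encoding (not what3words' actual algorithm or word list) might look like:

```python
# Toy illustration only: not what3words' real algorithm or word list.
EARTH_SURFACE_M2 = 510e12        # total surface area of the Earth
SQUARES = EARTH_SURFACE_M2 / 9   # ~5.7e13 three-metre squares
WORDS = 40_000                   # a modest dictionary

assert WORDS ** 3 > SQUARES      # ~6.4e13 word triples: enough addresses

def square_to_words(index, wordlist):
    """Map a square's integer index to a word triple (base-N encoding)."""
    n = len(wordlist)
    a, rest = divmod(index, n * n)
    b, c = divmod(rest, n)
    return (wordlist[a], wordlist[b], wordlist[c])
```

The real system shuffles the mapping so nearby squares get unrelated words, but the counting argument is the same.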

Forehead 3D mist projector

Another simple idea. I was watching the 1920s period drama Downton Abbey, and Lady Mary was wearing a headband with a large jewel in it. That gave me an idea: linking mist projection systems to headbands. I couldn't find a pic of Lady Mary's band on Google, but many other designs would work just as well; the one from ASOS, for example, would be perfectly feasible. The idea is that a forehead band (I'm sure there is a proper fashion name for them) would have a central 'jewel' which is actually just an ornamental IT capsule containing a misting device and a projector, as well as the obvious power supply, comms, processing, direction detectors etc. A 3D image would be projected onto water mist emitted from a reservoir in the device. A simple illustration might help:

forehead projector

 

Many fashion items make comebacks and a lot of 1920s things seem to be in fashion again now. This could be a nice electronic update to a very old fashion concept. With a bit more miniaturisation, smart bindis would also be feasible. It could be used with direction sensing to enable augmented reality use, or simply to display the same image regardless of gaze direction. Unlike visor based augmented reality, others would be able to see the same scene visualised for the wearer.

The future of X-People

There is an abundance of choice for X in my ‘future of’ series, but most options are sealed off. I can’t do naughty stuff because I don’t want my blog to get blocked so that’s one huge category gone. X-rays are boring, even though x-ray glasses using augmented reality… nope, that’s back to the naughty category again. I won’t stoop to cover X-Factor so that only leaves X-Men, as in the films, which I admit to enjoying however silly they are.

My first observation is how strange X-Men sounds. Half of them are female. So I will use X-People. I hate political correctness, but I hate illogical nomenclature even more.

My second one is that some readers may not be familiar with the X-Men so I guess I’d better introduce the idea. Basically they are a large set of mutants or transhumans with very varied superhuman or supernatural capabilities, most of which defy physics, chemistry or biology or all of them. Essentially low-grade superheroes whose main purpose is to show off special effects. OK, fun-time!

There are several obvious options for achieving X-People capabilities:

Genetic modification, including using synthetic biology or other biotech. This would allow people to be stronger, faster, fitter, prettier, more intelligent or able to eat unlimited chocolate without getting fat. The last one will be the most popular upgrade. However, now that we have started converging biotech with IT, it won’t be very long before it will be possible to add telepathy to the list. Thought recognition and nerve stimulation are two sides of the same technology. Starting with thought control of appliances or interfaces, the world’s networked knowledge would soon be available to you just by thinking about something. You could easily send messages using thought control and someone else could hear them synthesized into an earpiece, but later it could be direct thought stimulation. Eventually, you’d have totally shared consciousness. None of that defies biology or physics, and it will happen mid-century. Storing your own thoughts and effectively extending your mind into the cloud would allow people to make their minds part of the network resources. Telepathy will be an everyday ability for many people but only with others who are suitably equipped. It won’t become easy to read other people’s minds without them having suitable technology equipped too. It will be interesting to see whether only a few people go that route or most people. Either way, 2050 X-People can easily have telepathy, control objects around them just by thinking, share minds with others and maybe even control other people, hopefully consensually.

Nanotechnology, using nanobots etc to achieve possibly major alterations to your form, or to affect others or objects. Nanotechnology is another word for magic as far as many sci-fi writers go. Being able to rearrange things on an individual atom basis is certainly fuel for fun stories, but it doesn’t allow you to do things like changing objects into gold or people into stone statues. There are plenty of shape-shifters in sci-fi but in reality, chemical bonds absorb or release energy when they are changed and that limits how much change can be made in a few seconds without superheating an object. You’d also need a LOT of nanobots to change a whole person in a few seconds. Major changes in a body would need interim states to work too, since dying during the process probably isn’t desirable. If you aren’t worried about time constraints and can afford to make changes at a more gentle speed, and all you’re doing is changing your face, skin colour, changing age or gender or adding a couple of cosmetic wings, then it might be feasible one day. Maybe you could even change your skin to a plastic coating one day, since plastics can use atomic ingredients from skin, or you could add a cream to provide what’s missing. Also, passing some nanobots to someone else via a touch might become feasible, so maybe you could cause them to change involuntarily just by touching them, again subject to scope and time limits. So nanotech can go some way to achieving some X-People capabilities related to shape changing.

Moving objects using telekinesis is rather less likely. Thought controlling a machine to move a rock is easy, moving an unmodified rock or a dumb piece of metal just by concentrating on it is beyond any technology yet on the horizon. I can’t think of any mechanism by which it could be done. Nor can I think of ways of causing things to just burst into flames without using some sort of laser or heat ray. I can’t see either how megawatt lasers can be comfortably implanted in ordinary eyes. These deficiencies might be just my lack of imagination but I suspect they are actually not feasible. Quite a few of the X-Men have these sorts of powers but they might have to stay in sci-fi.

Virtual reality, where you possess the power in a virtual world, which may be shared with others. Well, many computer games give players supernatural powers, or take on various forms, and it’s obvious that many will do so in VR too. If you can imagine it, then someone can get the graphics chips to make it happen in front of your eyes. There are no hard physics or biology barriers in VR. You can do what you like. Shared gaming or socializing environments can be very attractive and it is not uncommon for people to spend almost every waking hour in them. Role playing lets people do things or be things they can’t in the real world. They may want to be a superhero, or they might just want to feel younger or look different or try being another gender. When they look in a mirror in the VR world, they would see the person they want to be, and that could make it very compelling compared to harsh reality. I suspect that some people will spend most of their free time in VR, living a parallel fantasy life that is as important to them as their ‘real’ one. In their fantasy world, they can be anyone and have any powers they like. When they share the world with other people or AI characters, then rules start to appear because different people have different tastes and desires. That means that there will be various shared virtual worlds with different cultures, freedoms and restrictions.

Augmented reality, where you possess the power in a virtual world but in ways that it interacts with the physical world is a variation on VR, where it blends more with reality. You might have a magic wand that changes people into frogs. The wand could be just a stick, but the victim could be a real person, and the change would happen only in the augmented reality. The scope of the change could be one-sided – they might not even know that you now see them as a frog, or it could again be part of a large shared culture where other people in the community now see and treat them as a frog. The scope of such cultures is very large and arbitrary cultural rules could apply. They could include a lot of everyday life – shopping, banking, socializing, entertainment, sports… That means effects could be wide-ranging with varying degrees of reality overlap or permanence. Depending on how much of their lives people live within those cultures, virtual effects could have quite real consequences. I do think that augmented reality will eventually have much more profound long-term effects on our lives than the web.

Controlled dreaming, where you can do pretty much anything you want and be in full control of the direction your dream takes. This is effectively computer-enhanced lucid dreaming with literally all the things you could ever dream of. But other people can dream of extra things that you may never have dreamt of and it allows you to explore those areas too.  In shared or connected dreams, your dreams could interact with those of others or multiple people could share the same dream. There is a huge overlap here with virtual reality, but in dreams, things don’t get the same level of filtration and reality is heavily distorted, so I suspect that controlled dreams will offer even more potential than VR. You can dream about being in VR, but you can’t make a dream in VR.

X-People will be very abundant in the future. We might all be X-People most of the time, routinely doing things that are pure sci-fi today. Some will be real, some will be virtual, some will be in dreams, but mostly, thanks to high quality immersion and the social power of shared culture, we probably won’t really care which is which.

 

 

The future of virtual reality

I first covered this topic in 1991 or 1992, I can't recall which, when we were playing with the Virtuality machines. I got a bit carried away, did the calculations on the processing power required for decent images, and announced that VR would replace TV as our main entertainment by about 2000. I still use that as my best example of something I didn't get right.

I have often considered why it didn’t take off as we expected. There are two very plausible explanations and both might apply somewhat to the new launches we’re seeing now.

1. It did happen, just differently. People are using excellent pseudo-3D environments in computer games, and that is perfectly acceptable; they simply don't need full-blown VR. Just as 3DTV hasn't turned out to be very popular compared to regular TV, wandering around a virtual world doesn't necessarily require VR. TV or PC monitors are perfectly adequate, in conjunction with the cooperative human brain, to convey the important bits of the virtual world illusion.

2. Early 1990s VR headsets reportedly gave some people eye strain or psychological distortions that persisted long enough after sessions to present potential dangers. That meant corporate lawyers would have been warning about potentially vast class action suits, with every kid who developed a squint blaming the headset manufacturer, or someone walking under a bus because they were still mentally in a virtual world. If anything, people are far more likely to sue for alleged negative psychological effects now than back then.

My enthusiasm for VR hasn’t gone away. I still think it has great potential. I just hope the manufacturers are fully aware of these issues and have dealt with or are dealing with them. It would be a great shame indeed if a successful launch is followed by rapid market collapse or class action suits. I hope they can avoid both problems.

The porn industry is already gearing up to capitalise on VR, and the more innocent computer games markets too. I spend a fair bit of my spare time in the virtual worlds of computer games. I find games far more fun than TV, and adding more convincing immersion and better graphics would be a big plus. In the further future, active skin will allow our nervous systems to be connected into the IT too, recording and replaying sensations so VR could become full sensory. When you fight an enemy in a game today, the controller might vibrate if you get hit or shot. If you could feel the pain, you might try a little harder to hide. You may be less willing to walk casually through flames if they hurt rather than just making a small drop in a health indicator or you might put a little more effort into kindling romances if you could actually enjoy the cuddles. But that’s for the next generation, not mine.

VR offers a whole new depth of experience, but it did in 1991. It failed first time, let’s hope this time the technology brings the benefits without the drawbacks and can succeed.

The future of terminators

The Terminator films were important in making people understand that AI and machine consciousness will not necessarily be a good thing. The terminator scenario has stuck in our terminology ever since.

There is absolutely no reason to assume that a super-smart machine will be hostile to us. There are even some reasons to believe it would probably want to be friends. Smarter-than-man machines could catapult us into a semi-utopian era of singularity level development to conquer disease and poverty and help us live comfortably alongside a healthier environment. Could.

But just because it doesn’t have to be bad, that doesn’t mean it can’t be. You don’t have to be bad but sometimes you are.

It is also the case that even if it means us no harm, we could just happen to be in the way when it wants to do something, and it might not care enough to protect us.

Asimov’s laws of robotics are irrelevant. Any machine smart enough to be a terminator-style threat would presumably take little notice of rules it has been given by what it may consider a highly inferior species. The ants in your back garden have rules to govern their colony and soldier ants trained to deal with invader threats to enforce territorial rules. How much do you consider them when you mow the lawn or rearrange the borders or build an extension?

These arguments are put in debates every day now.

There are, however, a few points that are less often discussed:

Humans are not always good; indeed, quite a lot of people seem to want to destroy everything the rest of us want to protect. Given access to super-smart machines, they could design more effective means to do so. The machines might be very benign, wanting nothing more than to help mankind as far as they possibly can, but could be misled into working for such people, believing in their architected isolation that those projects are for the benefit of humanity. (The machines might be extremely smart, but may have existed since their inception in a rigorously constructed knowledge environment. To them, that might be the entire world, and we might be introduced as a new threat that needs to be dealt with.) So even benign AI could be an existential threat when it works for the wrong people. The smartest people can sometimes be very naive, and perhaps some smart machines could be deliberately designed to be so.

I speculated ages ago what mad scientists or mad AIs could do in terms of future WMDs:

https://timeguide.wordpress.com/2014/03/31/wmds-for-mad-ais/

Smart machines might be deliberately built for benign purposes and turn rogue later, or they may be built with potential for harm designed in, for military purposes. These might destroy only enemies, but you might be that enemy. Others might enjoy the fun and turn on their friends when enemies run short. Emotions might be important in smart machines just as they are in us, but we shouldn’t assume they will be the same emotions or be wired the same way.

Smart machines may want to reproduce. I used this as the core storyline in my sci-fi book. They may have offspring, and despite the best intentions of their parent AIs, the new generation might decide not to do as it’s told. Again, in human terms, a highly familiar story that goes back thousands of years.

In the Terminator films, the problem is a military network that becomes self-aware and goes rogue. I don’t believe digital IT can become conscious, but I do believe reconfigurable analog adaptive neural networks could. The cloud is digital today, but it won’t stay that way; a lot of analog devices will become part of it. In

https://timeguide.wordpress.com/2014/10/16/ground-up-data-is-the-next-big-data/

I argued how new self-organising approaches to data gathering might well supersede big data as the foundations of networked intelligence gathering. Much of this could be in the analog domain and much could be neural. Neural chips are already being built.

It doesn’t have to be a military network that becomes the troublemaker. I suggested a long time ago that ‘innocent’ student pranks from somewhere like MIT could be the source. Some smart students from various departments could collaborate to hijack lots of networked kit and see if they can make a conscious machine. Their algorithms or techniques don’t have to be very efficient if they can hijack enough. Such an effort could succeed if the right bits are connected into the cloud and accessible via sloppy security, and the ground-up data industry might well satisfy that prerequisite soon.

Self-organisation technology will make possible extremely effective combat drones.

https://timeguide.wordpress.com/2013/06/23/free-floating-ai-battle-drone-orbs-or-making-glyph-from-mass-effect/

Terminators also don’t have to be machines. They could be organic, products of synthetic biology. My own contribution here is smart yogurt: https://timeguide.wordpress.com/2014/08/20/the-future-of-bacteria/

With IT and biology rapidly converging via nanotech, there will be many ways hybrids could be designed, some of which could adapt and evolve to fill different niches or to evade efforts to find or harm them. Various grey goo scenarios can be constructed that don’t have any miniature metal robots dismantling things. Obviously natural viruses or bacteria could also be genetically modified to make weapons that could kill many people – they already have been. Some could result from seemingly innocent R&D by smart machines.

I also dealt a while back with the potential to make zombies, remotely controlling people, alive or dead. Zombies are feasible this century too:

https://timeguide.wordpress.com/2012/02/14/zombies-are-coming/ &

https://timeguide.wordpress.com/2013/01/25/vampires-are-yesterday-zombies-will-peak-soon-then-clouds-are-coming/

A different kind of terminator threat arises if groups of people are linked at consciousness level to produce super-intelligences. We will have direct brain links by mid-century, so much of the second half of the century may be spent in a mental arms race. As I wrote in my blog about the Great Western War, some of the groups will be large and won’t like each other. The rest of us could be wiped out in the crossfire as they battle for dominance. Some people could be linked deeply into powerful machines or networks, and there are no real limits on extent or scope. Such groups could have a truly global presence in networks while remaining superficially human.

Transhumans could be a threat to normal un-enhanced humans too. While some transhumanists are very nice people, some are not, and would consider elimination of ordinary humans a price worth paying to achieve transhumanism. Transhuman doesn’t mean better human, it just means humans with greater capability. A transhuman Hitler could do a lot of harm, but then again so could ordinary everyday transhumanists that are just arrogant or selfish, which is sadly a much bigger subset.

I collated these various varieties of potential future cohabitants of our planet in: https://timeguide.wordpress.com/2014/06/19/future-human-evolution/

So there are numerous ways that smart machines could end up as a threat and quite a lot of terminators that don’t need smart machines.

Outcomes from a terminator scenario range from local problems with a few casualties all the way to total extinction, but I think we are still too focused on the death aspect. There are worse fates. I’d rather be killed than converted, while still conscious, into one of 7 billion zombies, and that is one of the potential outcomes too, as is enslavement by some mad scientist.

 

The future of cyberspace

I promised in my last blog to do one on the dimensions of cyberspace. I made this chart 15 years ago, in two parts for easy reading. The dimensions it lists are still valid and I can’t think of any new ones to add right now, though I might think of some more and make an update with a third part. I changed the name to virtuality because it actually only covers human-accessed cyberspace, but I’m not entirely sure that was a good thing to do. It needs work.

cyberspace dimensions

cyberspace dimensions 2

The chart has 14 dimensions (control has two independent parts), and I identified some of the possible points on each dimension. As dimensions are meant to be, they are all orthogonal, i.e. they are independent of each other, so any point on one dimension can be combined with any point on each of the others. Standard augmented reality and pure virtual reality are just two of the roughly 2.5 x 10^11 possible combinations above. If every person in the world tried a different one every minute, it would take a little over half an hour to visit them all even briefly. There are many more possible; this was never meant to be exhaustive, and even two more columns makes it 10 trillion combos, a full day’s worth at the same rate. Already I can see that one more column could be ownership, another could be network implementation, another could be quality of illusion. What others have I missed?
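As a back-of-envelope check, the combinatorics can be sketched in a few lines of Python. The per-dimension option counts below are hypothetical placeholders (the real counts are in the chart), chosen only to land in the same ballpark; the point is simply that multiplying a dozen or so modest numbers together explodes very quickly:

```python
import math

# Hypothetical option counts for the 14 dimensions (the real counts
# are in the chart above); only their product matters for the argument.
options_per_dimension = [6, 5, 7, 6, 8, 5, 6, 7, 5, 6, 8, 5, 7, 6]

# Orthogonal dimensions mean the total is the Cartesian product size.
combos = math.prod(options_per_dimension)

# If everyone on Earth tried a different combination each minute:
population = 7_000_000_000
minutes = combos / population

print(f"{combos:.2e} combinations, ~{minutes:.0f} minutes to sample them all")
```

Adding just two more columns of half a dozen options each multiplies the total by roughly forty, which is why the jump from hundreds of billions to ten trillion combinations takes only a couple of extra dimensions.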