Category Archives: AI

Optical computing

A few nights ago I was thinking about the optical fibre memories that we were designing in the late 1980s in BT. The idea was simple. You transmit data into an optical fibre, and if the data rate is high you can squeeze lots of data into a manageable length. Light in fibre takes about 5 microseconds to travel each km, so 1000km of fibre at a data rate of 2Gb/s would hold 10Mbits of data per wavelength, and if you could multiplex 2 million wavelengths, you’d store 20Tbits of data. You could maintain the data by using a repeater to re-transmit the data arriving at one end back into the other, or modify it at that point simply by changing what you re-transmit. That was all theory then, because the latest ‘hero’ experiments were only just starting to demonstrate the feasibility of such long lengths, such high density WDM and such high data rates.
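The arithmetic behind those figures is easy to check. A quick back-of-envelope sketch, using only the numbers quoted above (5µs/km, 1000km, 2Gb/s, 2 million wavelengths):

```python
# Back-of-envelope capacity of a fibre delay-line memory,
# using the figures quoted in the text above.
delay_per_km = 5e-6        # seconds of propagation delay per km of fibre
length_km = 1000           # fibre length
rate_bps = 2e9             # data rate per wavelength (2 Gb/s)
wavelengths = 2_000_000    # hypothetical WDM channel count

loop_delay = delay_per_km * length_km          # 5 ms end to end
bits_per_wavelength = loop_delay * rate_bps    # bits "in flight" per channel
total_bits = bits_per_wavelength * wavelengths

print(round(bits_per_wavelength / 1e6, 3))   # 10.0 (Mbit per wavelength)
print(round(total_bits / 1e12, 3))           # 20.0 (Tbit total)
```

The storage is simply the number of bits in flight: delay times data rate, times the channel count.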

Nowadays, that’s ancient history of course, but we also have many new types of fibre, such as hollow fibre with various shaped pores and various dopings to allow a range of effects. And that’s where using it for computing comes in.

If optical fibre is designed for this purpose, with an optimal variable refractive index profile designed to facilitate and maximise non-linear effects, then the photons in one data stream on one wavelength could have enough effect on photons in another stream to be used for computational interaction. Computers don’t have to be digital of course, so the effects don’t have to be huge. Analog computing has many uses, and analog interactions would certainly work, digital ones might, and hybrid digital/analog computing may also be feasible. Then it gets fun!

Some of the data streams could be programs. Around that time, I was designing protocols with smart packets that contained executable code, as well as other packets that could hold analog or digital data or any mix. We later called the smart packets ANTs – autonomous network telephers, a contrived term if ever there was one, but we badly wanted to call them ants. They would scurry around the network doing a wide range of jobs, using a range of biomimetic and basic physics techniques to work like ant colonies and achieve complex tasks by simple means.

If some of these smart packets or ANTs are running along a fibre, changing its properties as they go so as to interact with other data travelling alongside, then ANTs can interact with one another and with any stored data. ANTs could also move forwards or backwards along the fibre by using ‘sidings’ or physical shortcuts, since they can route themselves or each other. Data produced or changed by the interactions could be digital or analog and still work fine, carried on the smart packet structure.

(If you’re interested, my protocol was called UNICORN, the Universal Carrier for an Optical Residential Network, and used the same architectural principles as my previous Addressed Time Slice invention: compressing analog data by a few percent to fit into a packet with a digital address and header, or allowing any digital data rate or structure in a payload while keeping the same header specs for easy routing. That system was invented (in 1988) for the late 1990s, when the basic domestic broadband rate should have been 625Mbit/s or more, though we expected to be at 2Gbit/s or even 20Gbit/s soon after that in the early 2000s, and the benefit was that we wouldn’t have to change the network switching because the header overheads would still only be a few percent of total time. None of that happened because government interference in telecoms industry regulation strongly disincentivised its development, and even today, 625Mbit/s ‘basic rate’ access is still a dream, let alone 20Gbit/s.)

Such a system would be feasible. Shortcuts and sidings are easy to arrange. The protocols would work fine. Non-linear effects are already well known and diverse. If it were only used for digital computing, it would have little advantage over conventional computers, and with data stored on long fibre lengths, external interactions would be limited, with long latency. However, it does present a range of potential for use with external sensors directly interacting with data streams and ANTs to accomplish some of the tasks associated with modern AI. It ought to be possible to use these techniques to build the adaptive analog neural networks that we’ve known are the best hope of achieving strong AI since Hans Moravec’s insight, coincidentally also around that time. The non-linear effects even enable ideal mechanisms for implementing emotions, biasing the computation in particular directions via the intensity of certain wavelengths of light, in much the same way as chemical hormones and neurotransmitters interact with our own neurons. Implementing up to 2 million different emotions at once is feasible.
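A minimal sketch of that biasing idea, with entirely made-up hormone names and numbers: a global modulation signal scales or shifts every neuron’s response, the way a wavelength intensity (or a hormone) would, without itself carrying data.

```python
import math

# Toy sketch of 'emotions as global biases': each simulated hormone level
# modulates the response of every neuron instead of acting as an input.
# The hormone names and coefficients are illustrative only.
def neuron(inputs, weights, hormones):
    gain = 1.0 + hormones.get("arousal", 0.0)   # steepens the response curve
    bias = -hormones.get("caution", 0.0)        # raises the firing threshold
    x = sum(i * w for i, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-(gain * x + bias)))   # sigmoid activation

calm = neuron([0.4, 0.6], [1.0, 1.0], {})
wary = neuron([0.4, 0.6], [1.0, 1.0], {"caution": 2.0})
print(calm > wary)   # True: the same stimulus produces a weaker response
```

The same stimulus gives a different output depending only on the global signal levels, which is the essential property the text attributes to hormonal biasing.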

So there’s a whole mineful of architectures, tools and techniques waiting to be explored and mined by smart young minds in the IT industry, using custom non-linear optical fibres for optical AI.

AI could use killer drone swarms to attack people while taking out networks

In 1987 I discovered a whole class of security attacks that could knock out networks, which I called correlated traffic attacks: creating particular patterns of data packet arrivals from particular sources at particular times or intervals. We simulated two examples to verify the problem. One example was protocol resonance. I demonstrated that it was possible to push a system into a gross overload state with a single call, by spacing the packets at precise intervals. Their arrival caused a strong resonance in the bandwidth allocation algorithms, and the result was that network capacity was instantaneously reduced by around 70%. Another example was information waves, whereby a single piece of information appearing at a particular point could interact with particular apps on mobile devices (the assumption was financially relevant data that would trigger AI on the devices to start requesting voluminous data), causing a highly correlated wave of responses, using up bandwidth and throwing the network into overload, very likely crashing it through the initiation of rarely used software. When calls couldn’t get through, the devices would wait until the network recovered, then all simultaneously detect recovery and simultaneously try again, killing the net again, and again, until people were asked to turn their devices off and on again, thereby bringing randomness back into the system. Both of these examples could knock out certain kinds of networks, but they are just two of an infinite set of possibilities in the correlated traffic attack class.
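The information-wave effect is easy to reproduce in a toy simulation. Everything below is illustrative (made-up capacity and device counts, and a crude all-or-nothing overload rule), but it shows the core mechanism: perfectly correlated retries keep the network dead, while a little injected randomness lets it recover.

```python
import random

# Toy simulation of the 'information wave' retry storm described above.
CAPACITY = 100   # requests the network can serve per time step
DEVICES = 1000   # devices that all want the same data at once
STEPS = 20

def simulate(jitter):
    """Total requests served; with jitter=0 every failed device retries
    at the same step, so the overloaded network never recovers."""
    next_try = [0] * DEVICES      # step at which each device next tries
    served = [False] * DEVICES
    total = 0
    for t in range(STEPS):
        trying = [i for i in range(DEVICES) if not served[i] and next_try[i] <= t]
        # Gross overload rule: if demand exceeds capacity, nobody gets through.
        ok = trying if len(trying) <= CAPACITY else []
        for i in ok:
            served[i] = True
        for i in trying:
            if not served[i]:
                next_try[i] = t + 1 + random.randint(0, jitter)
        total += len(ok)
    return total

random.seed(1)
print(simulate(jitter=0))    # 0: correlated retries, the net stays down
print(simulate(jitter=30))   # > 0: randomised backoff de-correlates them
```

Turning the devices off and on again is, in effect, setting a non-zero jitter.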

Adversarial AI pits one AI against another, trying things at random or making small modifications until a particular outcome is achieved, such as the second AI accepting an image. It is possible, though I don’t believe it has been achieved yet, to use the technique to simulate a wide range of correlated traffic situations, seeing which ones achieve network resonance or overload, or which trigger particular desired responses from network management or control systems, via interactions with the network and its protocols, with commonly resident apps on mobile devices, or with computer operating systems.

Activists and researchers are already well aware that adversarial AI can be used to find vulnerabilities in face recognition systems and thereby prevent recognition, or to deceive autonomous car AI into seeing fantasy objects or not seeing real ones. As Noel Sharkey, the robotics expert, has been tweeting today, it will be possible to use adversarial AI to corrupt recognition systems used by killer drones, potentially causing them to attack their controllers or innocents instead of their intended targets. I have to agree with him. But linking that corruption to the whole extended field of correlated traffic attacks greatly extends the range of mechanisms that can be used. It will be possible to exploit highly obscured interactions between network physical architecture, protocols and operating systems, network management, app interactions, and the entire sensor/IoT ecosystem, as well as software and AI systems using it. It is impossible to check all possible interactions, so no absolute defence is possible, but adversarial AI with enough compute power could randomly explore across these multiple dimensions, stumble across regions of vulnerability and drill down until grand vulnerabilities are found.
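A hedged sketch of that explore-then-drill-down pattern. The overload() function below is a stand-in I have invented for a real network simulator that would score how badly a traffic pattern degrades the network; the point is only the two-phase search structure, with coarse random exploration across the space followed by local refinement around the most promising region.

```python
import random

# Abstract explore-then-drill-down search over a toy traffic-pattern space.
# overload() is an illustrative stand-in: worst degradation near a
# 12.5 ms inter-packet interval with large bursts. A real search space
# would have far more dimensions.
def overload(interval_ms, burst):
    return burst / (1.0 + abs(interval_ms - 12.5))

def search(trials=2000, refine=200):
    random.seed(42)
    # Phase 1: coarse random exploration of (interval, burst) space.
    best = max(
        ((random.uniform(0, 100), random.randint(1, 50)) for _ in range(trials)),
        key=lambda p: overload(*p),
    )
    # Phase 2: drill down with small perturbations, keeping improvements.
    for _ in range(refine):
        cand = (best[0] + random.gauss(0, 0.5),
                min(50, max(1, best[1] + random.randint(-2, 2))))
        if overload(*cand) > overload(*best):
            best = cand
    return best

interval, burst = search()
print(round(interval, 1), burst)   # should land near the toy 12.5 ms resonance
```

Swap in a protocol-level simulator for overload() and the same skeleton becomes the kind of vulnerability search described above.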

This could further be linked to apps used as highly invisible Trojans, offering high attractiveness to users with no apparent side effects, quietly gathering data to help identify potential targets, and simply waiting for a particular situation or command before signalling to the attacking system.

A future activist or terrorist group or rogue state could use such tools to make a multidimensional attack. It could initiate an attack, using its own apps to identify and locate targets, control large swarms of killer drones or robots to attack them, simultaneously executing a cyberattack that knocks out selected parts of the network, crashing or killing computers and infrastructure. The vast bulk of this could be developed, tested and refined offline, using simulation and adversarial AI approaches to discover vulnerabilities and optimise exploits.

There is already debate about killer drones, mainly whether we should permit them and in what circumstances, but activists and rogue states won’t care about rules. Millions of engineers are technically able to build such things and some are not on your side. It is reasonable to expect that freely available AI tools will be used in such ways, using their intelligence to design, refine, initiate and control attacks using killer drones, robots and self-driving cars to harm us, while corrupting systems and infrastructure that protect us.

Worrying, especially since the capability is arriving just as everyone is starting to consider civil war.


Some trees just don’t get barked up

Now and then, someone asks me for an old document, and as I search for it I stumble across others I’d forgotten about. I’ve been rather frustrated that AI progress hasn’t kept up its 1990s development rate, so this was fun to rediscover, highlighting some future computing directions that offered serious but uncertain potential exactly 20 years ago – well, 20 years ago as of three weeks ago. Here is the text; the Schrodinger’s Computer was only ever intended to be silly (since renamed the Yonck Processor):

Herrings, a large subset of which are probably red

Computers in the future will use a wide range of techniques, not just conventional microprocessors. Problems should be decomposed and the various components streamed to the appropriate processing engines. One of the important requirements is therefore some means of identifying automatically which parts of a problem could best be tackled by which techniques, though sometimes it might be best to use several in parallel with some interaction between them.

 Analogs

We have a wider variety of components available for analog computing today than we had when it effectively died out in the 80s. With much higher quality analog and mixed-signal components, plus micro-sensors, MEMS, simple neural network components, and some imminent molecular capability, how can we rekindle the successes of the analog domain? Nature handles the infinite body problem with ease! Things just happen according to the laws of physics. How can we harness them too? Can we build environments with synthetic physics to achieve more effects? The whole field of non-algorithmic computation seems ripe for exploitation.

 Neural networks

  • Could we make neural microprocessor suspensions, using spherical chips suspended in gel in a reflective capsule with optical broadcasting? Couple this with growing wires across the electric field. This would give us both electrical and optical interconnection, which could be ideal for neural networks with high connectivity. We could link this to gene chip technology to have chemical detection and synthesis on the chips too, giving us close high-speed replicas of organic neural networks.
  • If we can have quantum entanglement between particles, might this affect the way in which neurons in the brain work? Do we have neural entanglement, and has it anything to do with how our brain works? Could we create neural entanglement, or even virtual entanglement, and would it have any use?
  • Could we make molecular neurons (or similar) using ordinary chemistry, and then form them into networks? This might need nanomachines and bottom-up assembly.
  • Could we use neurons as first-stage filters to narrow down the field, making problems tractable for other techniques?
  • Optical neurons
  • Magnetic neurons

Electromechanical, MEMS etc

  • Micromirror arrays as part of optical computers, perhaps either as data entry, or as part of the algorithm
  • Carbon fullerene balls and tubes as MEM components
  • External fullerene ‘décor’ as a form of information, cf antibodies in immune system
  • Sensor suspensions and gels as analog computers for direct simulation

Interconnects

  • Carbon fullerene tubes as on chip wires
  • Could they act as electron pipes for ultra-high-speed interconnect?
  • Optical or radio beacons on chip

Software

  • Transforms – create a transform of every logic component, spreading the functionality across a wide domain, and construct programs using them instead. Small perturbation is no longer fatal but just reduces accuracy
  • Filters – nature works often using simple physical effects where humans design complex software. We need to look at hard problems to see how we might make simple filters to narrow the field before computing final details and stages conventionally.
  • Interference – is there some form of representation that allows us to compute operations by letting the input data interact directly, i.e. interference, instead of using tedious linear computation? Obviously this is only suited to a subset of problems.
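The ‘transforms’ bullet above is essentially a distributed, hologram-like representation, a flavour of what is now called hyperdimensional computing. A toy sketch (names and sizes arbitrary): each logic value becomes a long random sign vector, so perturbing part of the vector only reduces decoding confidence rather than destroying the value.

```python
import random

# Distributed-representation toy: each logic value is a long random
# +/-1 vector, decoded by nearest-codeword correlation, so damage
# degrades confidence gradually instead of flipping the value outright.
random.seed(0)
DIM = 1024
codebook = {b: [random.choice((-1, 1)) for _ in range(DIM)] for b in (0, 1)}

def encode(bit):
    return list(codebook[bit])

def decode(vec):
    # Pick the codeword with the highest dot-product correlation.
    score = lambda b: sum(v * c for v, c in zip(vec, codebook[b]))
    return max((0, 1), key=score)

v = encode(1)
for i in random.sample(range(DIM), 100):   # corrupt ~10% of the vector
    v[i] = -v[i]
print(decode(v))   # 1: the damage reduces margin, not correctness
```

This is the graceful-degradation property the bullet asks for: small perturbation is no longer fatal, it just eats into the decoding margin.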

And finally, the frivolous

  • Schrodinger’s computer – the design of the computer and its software, if any, is not determined until the box is opened. The one constant is that it destroys itself if it doesn’t find the solution. All possible computers and all possible programs exist, and if there is a solution, the computer will pop out alive and well with the answer. Set it the problem of answering all possible questions too, working out which ones have the most valuable answers and using up all the available storage to write the best answers.

The future of reproductive choice

I’m not taking sides on the abortion debate, just drawing maps of the potential future, so don’t shoot the messenger.

An average baby girl is born with a million eggs, still has 300,000 when she reaches puberty, and subsequently releases 300 – 400 of these over her reproductive lifetime. Typically one or two will become kids but today a woman has no way of deciding which ones, and she certainly has no control over which sperm is used beyond choosing her partner.

Surely it can’t be very far in the future (as a wild guess, say 2050) before we fully understand the links between how someone turns out and their genetics (and all the other biological factors involved in determining that outcome). That knowledge could then notionally be used to create some sort of nanotech (aka magic) gate that would allow her to choose which of her eggs get to be ovulated and potentially fertilized, discarding the ones she isn’t interested in and going for it when she’s released a good one. Maybe by 2060, women would also be able to filter sperm the same way, helping some while blocking others. Choice needn’t be limited to whether to have a baby or not, but which baby.

By choosing a particularly promising egg and then the sperm that would combine best with it, an embryo might be created only if it is likely to result in the right person (perhaps an excellent athlete, or an artist, or a scientist, or just someone good looking), or deselected if it would become the wrong person (e.g. a terrorist, criminal, saxophonist, Republican).

However, by the time we have the technology to do that, and even before we fully know what gene combos result in what features, we would almost certainly be able to simply assemble any chosen DNA and insert it into an egg from which the DNA has been removed. That would seem a more reliable mechanism to get the ‘perfect’ baby than choosing from a long list of imperfect ones. Active assembly should beat deselection from a random list.

By then, we might even be using new DNA bases that don’t exist in nature, invented by people or AI to add or control features or abilities nature doesn’t reliably provide for.

If we can do that, and if we know how to simulate how someone might turn out, then we could go further and create lots of electronic babies that live their entire lives in an electronic Matrix style existence. Let’s expand on that briefly.

Even today, couples can store eggs and sperm for later use, but with this future genetic assembly, it will become feasible to create offspring from nothing more than a DNA listing. Both members of a couple, of any sex, could get a record of their DNA, randomize combinations with their partner’s DNA and thus get a massive library of potential offspring. They may even be able to buy listings of celebrity DNA from the net. This creates the potential for greatly delayed birth and tradable ‘ebaybies’ – DNA listings are not alive, so current laws don’t forbid trading in them. These listings could however be used to create electronic ‘virtual’ offspring, simulated in a computer memory instead of being born organically. Various degrees of existence are possible, with varied awareness. Couples may have many electronic babies as well as a few real ones. They may even wait to see how a simulation works out before deciding which kids to make for real. If an electronic baby turns out particularly well, it might be promoted to actual life via DNA assembly and real pregnancy. The following consequences are obvious:

  • Trade-in and collection of DNA listings, virtual embryos, virtual kids etc, that could actually be fabricated at some stage
  • Re-birth: the potential to clone and download one’s mind, or use a direct brain link, to live in a younger self
  • Demands by infertile and gay couples to have babies via genetic assembly
  • The ability of kids to own entire populations of virtual people, who are quite real in some ways
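As an aside, the ‘randomised combination’ step mentioned above is trivial to sketch. This toy (hypothetical labels, crossover ignored entirely) just picks one of each parent’s two chromosome copies at random; even in this crude form, each couple has 2^46 distinct combinations to draw a library from.

```python
import random

# Purely illustrative toy of randomised combination of two DNA listings:
# each chromosome of the offspring takes one of the two copies held by
# each parent, chosen at random (real meiosis also involves crossover,
# which is ignored here). Labels like "M0a" are made up for the sketch.
random.seed(7)

def random_offspring(parent_a, parent_b, n_chromosomes=23):
    # parent_x[i] is a tuple of that parent's two copies of chromosome i
    return [(random.choice(parent_a[i]), random.choice(parent_b[i]))
            for i in range(n_chromosomes)]

mum = [(f"M{i}a", f"M{i}b") for i in range(23)]
dad = [(f"D{i}a", f"D{i}b") for i in range(23)]
library = [random_offspring(mum, dad) for _ in range(5)]   # tiny 'library'
print(len(library), len(library[0]))   # 5 23
```

A real ‘massive library’ would simply draw many more samples from the same process.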

It is clear that this whole technology field is rich in ethical issues! But we don’t need to go deep into future tech to find more of those. Just following current political trends to their logical conclusions introduces a lot more. I’ve written often on the random walk of values, and we cannot be confident that many of the values we hold today will still reign in decades’ time. Where might this random walk lead? Let’s explore some more.

Even in ‘conventional’ pregnancies, although the right to choose has been firmly established in most of the developed world, a woman usually has very little information about the fetus and has to make her decision almost entirely based on her own circumstances and values. The proportion of abortions related to known fetal characteristics such as genetic conditions or abnormalities is small. Most decisions can’t yet take any account of what sort of person that fetus might become. We should expect future technology to provide far more information on fetal characteristics and likely future development. If a woman were better informed on likely outcomes, might that sometimes affect her decision, in either direction?

In some circumstances, the potential outcome may be less certain and an informed decision might require more time or more tests. To allow for that without reducing the right to choose, future law could possibly allow for conditional terminations, registered before one legal time limit but performed later (before another time limit) when more is known. This period could be used for more medical tests, or to advertise the baby to potential adopters who want a child just like that one, or simply to allow more time for the mother to see how her own circumstances change. Between 2005 and 2015, the USA abortion rate dropped from 1 in 6 pregnancies to 1 in 7, while in the UK, 22% of pregnancies are terminated. What would these figures be if women could determine what future person would result? Would the termination rate increase? To 30%, 50%? Abandon this one and see if we can make a better one? How many of us would exist if our parents had known then what they know now?

Whether and how late terminations should be permitted is still fiercely debated. There is already discussion about allowing terminations right up to birth and even after birth in particular circumstances. If so, then why stop there? We all know people who make excellent arguments for retrospective abortion. Maybe future parents should be allowed to decide whether to keep a child right up until it reaches its teens, depending on how the child turns out. Why not 16, or 18, or even 25, when people truly reach adulthood? By then they’d know what kind of person they’re inflicting on the world. Childhood and teen years could simply be a trial period. And why should only the parents have a say? Given an overpopulated world with an infinite number of potential people that could be brought into existence, perhaps the state could also demand a high standard of social performance before assigning a life license. The Chinese state already uses surveillance technology to assign social scores. It is a relatively small logical step further to link that to life licenses that require periodic renewal. Go a bit further if you will, and link that thought to the blog I just wrote on future surveillance: https://timeguide.wordpress.com/2019/05/19/future-surveillance/.

Those of you who have watched Logan’s Run will be familiar with the idea of compulsory termination at a certain age. Why not instead have a flexible age that depends on social score? It could range from zero to 100. A pregnancy might only be permitted if the genetic blueprint passes a suitability test, and then, as nurture and environmental factors play their roles as a person ages, their life license could be renewed (or not) every year. A range of crimes might also result in withdrawal of a license, and subsequent termination.

Finally, what about AI? Future technology will allow us to make hybrids, symbionts if you like, with a genetically edited human-ish body, and a mind that is part human, part AI, with the AI acting partly as enhancement and partly as a control system. Maybe a future state could insist on installation into the embryo of a state ‘guardian’, a ‘supervisory AI’, essentially a deeply embedded police officer/judge/jury/executioner, as a requirement for the life license.

Random walks are dangerous. You can end up where you start, or somewhere very far away in any direction.

The legal battles and arguments around ‘choice’ won’t go away any time soon. They will become broader, more complex, more difficult, and more controversial.

Future Surveillance

This is an update of my last surveillance blog 6 years ago, much of which is common discussion now. I’ll briefly repeat key points to save you reading it.

They used to say

“Don’t think it

If you must think it, don’t say it

If you must say it, don’t write it

If you must write it, don’t sign it”

Sadly this wisdom is already as obsolete as Asimov’s Laws of Robotics. The last three lines have already been automated.

I recently read of new headphones designed to recognize thoughts so they know what you want to listen to. Simple thought recognition in various forms has been around for 20 years now. It is slowly improving, but with smart networked earphones we’re already providing an easy platform into which to sneak better monitoring and better thought detection. Sold on convenience and ease of use, of course.

You already know that Google and various other large companies have very extensive records documenting many areas of your life. It’s reasonable to assume that any or all of this could be demanded by a future government. I trust Google and the rest to a point, but not a very distant one.

Your phone, TV, Alexa, or even your networked coffee machine may listen in to everything you say, sending audio records to cloud servers for analysis, and you have only naivety as defence against those audio records being stored and potentially used for nefarious purposes.

Some next generation games machines will have 3D scanners and UHD cameras that can even see blood flow in your skin. If these are hacked or left switched on – and social networking video is one of the applications they are aiming to capture, so they’ll often be on – someone could watch you all evening, capture the most intimate body details, and film your facial expressions and gaze direction while you are looking at a known image on a particular part of the screen. Monitoring pupil dilation, smiles, anguished expressions and so on could provide a lot of evidence about your emotional state, with a detailed record of what you were watching and doing at exactly that moment, and with whom. By monitoring blood flow and pulse via your Fitbit or smartwatch, and additionally monitoring skin conductivity, your level of excitement, stress or relaxation can easily be inferred. If given to the authorities, this sort of data might be useful to identify pedophiles or murderers, by seeing which men are excited by seeing kids on TV or who gets pleasure from violent games, and that is likely to be one of the justifications authorities will use for deploying it.

Millimetre wave scanning was controversial when it was introduced in airport body scanners, but we have had no choice but to accept it and its associated abuses – the only alternative is not to fly. 5G uses millimetre wave too, and it’s reasonable to expect that the same people who can already monitor your movements in your home simply by analyzing your wi-fi signals will be able to do a lot better by analyzing 5G signals.

As mm-wave systems develop, they could become much more widespread, so burglars and voyeurs might start using them to check whether there is anything worth stealing or videoing. Maybe some search company making visual street maps might ‘accidentally’ capture a detailed 3D map of the inside of your house when they come round, as well as or instead of everything they can access via your wireless LAN.

Add to this the ability to use drones to get close without being noticed. Drones can be very small, fly themselves and automatically survey an area using broad sections of the electromagnetic spectrum.

NFC bank and credit cards not only present risks of theft, but also the added ability to track what we spend, where, on what, and with whom. NFC capability in your phone makes some parts of life easier, but NFC has always been yet another doorway that may be left unlocked by security holes in operating systems or apps, and apps themselves carry many assorted risks. Many apps ask for far more permissions than they need for their professed tasks, and their owners collect vast quantities of information for purposes known only to them and their clients. Obviously data can be collected using a variety of apps and linked together at its destination. Not all providers are honest, and apps are still very inadequately regulated and policed.

We’re seeing increasing experimentation with facial recognition technology around the world, from China to the UK, and only a few authorities so far such as in San Francisco have had the wisdom to ban its use. Heavy handed UK police, who increasingly police according to their own political agenda even at the expense of policing actual UK law, have already fined people who have covered themselves to avoid being abused in face recognition trials. It is reasonable to assume they would gleefully seize any future opportunity to access and cross-link all of the various data pools currently being assembled under the excuse of reducing crime, but with the real intent of policing their own social engineering preferences. Using advanced AI to mine zillions of hours of full-sensory data input on every one of us gathered via all this routine IT exposure and extensive and ubiquitous video surveillance, they could deduce everyone’s attitudes to just about everything – the real truth about our attitudes to every friend and family member or TV celebrity or politician or product, our detailed sexual orientation, any fetishes or perversions, our racial attitudes, political allegiances, attitudes to almost every topic ever aired on TV or everyday conversation, how hard we are working, how much stress we are experiencing, many aspects of our medical state.

It doesn’t even stop with public cameras. Innumerable cameras and microphones on phones, visors, and high street private surveillance will automatically record all this same stuff for everyone, sometimes with benign declared intentions such as making self-driving vehicles safer, sometimes using social media tribes to capture any kind of evidence against ‘the other’. In-depth evidence will become available to back up prosecutions of crimes that today would not even be noticed. Computers that can retrospectively data-mine evidence collected over decades and link it all together will be able to identify billions of real or invented crimes.

Active skin will one day link your nervous system to your IT, allowing you to record and replay sensations. You will never be able to be sure that you are the only one that can access that data either. I could easily hide algorithms in a chip or program that only I know about, that no amount of testing or inspection could ever reveal. If I can, any decent software engineer can too. That’s the main reason I have never trusted my IT – I am quite nice but I would probably be tempted to put in some secret stuff on any IT I designed. Just because I could and could almost certainly get away with it. If someone was making electronics to link to your nervous system, they’d probably be at least tempted to put a back door in too, or be told to by the authorities.

The current panic about face recognition is justified. Other AI can lipread better than people and recognize gestures and facial expressions better than people. It adds the knowledge of everywhere you go, everyone you meet, everything you do, everything you say and even every emotional reaction to all of that to all the other knowledge gathered online or by your mobile, fitness band, electronic jewelry or other accessories.

Fools utter the old line: “if you are innocent, you have nothing to fear”. Do you know anyone who is innocent? Of everything? Who has never ever done or even thought anything even a little bit wrong? Who has never wanted to do anything nasty to anyone for any reason ever? And that’s before you even start to factor in corruption of the police, or mistakes, or being framed, or dumb juries, or secret courts. The real problem here is not the abuses we already see. It is what is being and will be collected and stored, forever, available to all future governments of all persuasions and to police authorities who consider themselves above the law. I’ve said often that our governments are often incompetent but rarely malicious. Most of our leaders are nice guys, only a few are corrupt, but most are technologically inept. With an increasingly divided society, there’s a strong chance that the ‘wrong’ government or even a dictatorship could get in. Which of us can be sure we won’t be up against the wall one day?

We’ve already lost the battle to defend privacy. The only bits left are where the technology hasn’t caught up yet. In the future, not even the deepest, most hidden parts of your mind will be private. Pretty much everything about you will be available to an AI-upskilled state and its police.

The future for women, pdf version

It is several years since my last post on the future as it will affect women so here is my new version as a pdf presentation:

Women and the Future

Augmented reality will objectify women

Microsoft Hololens 2 Visor

The excitement around augmented reality continues to build, and I am normally enthusiastic about its potential, looking forward to enjoying virtual architecture, playing immersive computer games, or seeing visual and performance artworks transposed into my view of the high street while I shop.

But it won’t all be wonderful. While a few PR and marketing types may worry a little about people overlaying or modifying their hard-won logos and ads, a bigger issue will be some people choosing to overlay people in the high street with ones that are a different age or gender or race, or simply prettier. Identity politics will be fought on yet another frontier.

In spite of waves of marketing hype and misrepresentation, AR is really only here in primitive form outside the lab. Visors fall far short of what we were hoping for a decade ago, even the Hololens 2 shown above. But soon AR visors and eventually active contact lenses will enable fully 3D hi-res overlays on the real world. Then, in principle at least, you can make things look how you want, within a few basic limits. You could certainly transform a dull shop, cheap hotel room or an office into an elaborate palace, or make it look like a spaceship. But even if you change what things look like, you still have to represent nearby physical structures and obstacles in your fantasy overlay world, or you may bump into them, and that includes all the walls and furniture, lamp posts, bollards, vehicles, and of course other people. Augmented reality allows you to change their appearance thoroughly but they still need to be there somehow.

When it comes to people, there will be some battles. You may spend ages creating a wide variety of avatars, or may invest a great deal of time and money making or buying them. You may have a digital aura, hoping to present different avatars to different passers-by according to their profiles. You may want to look younger or thinner, or appear as a character you enjoy playing in a computer game. You may present a selection of options to the AIs controlling the passer-by’s view, and the avatar they see overlaid could be any one of the images you have on offer. Perhaps some privileged people get to pick from a selection you offer, while others you wish to privilege less are restricted to just one that you have set for their profile. Maybe you’d have a particularly ugly or offensive one to present to those with opposing political views.

Except that you can’t assume you will be in control. In fact, you probably won’t.

Other people may choose not to see your avatar, but instead to superimpose one of their own choosing. The question of who decides what the viewer sees is perhaps the first and most important battle in AR. Various parties would like to control it: visor manufacturers, O/S providers, UX designers, service providers, app creators, AI providers, governments, local councils, police and other emergency services, advertisers and of course individual users. Given market dynamics, most of these ultimately come down to user choice most of the time, albeit sometimes after paying for the privilege. So it probably won’t be you who gets to choose how others see you; via assorted paid intermediary services, apps and AI, it will be the other person deciding how they want to see you, regardless of your preferences.

So you can spend all the time you want designing your avatar and tweaking your virtual make-up to perfection, but if someone wants to see their favorite celebrity walking past instead of you, they will. Your body becomes no more than an object on which to display any avatar or image someone else chooses. You are quite literally reduced to an object in the AR world. Augmented reality will objectify women, reducing them to a moving display space onto which viewers overlay whatever images they choose. A few options become obvious.

Firstly, they may just take your actual physical appearance (via a video camera built into their visor for example) and digitally change it, so it is still definitely you, but now dressed more nicely, or dressed in sexy lingerie, or how you might look naked, using the latest AI to body-fit fantasy images from a porn database. This could easily be done automatically in real time using some app or other. You’ve probably already seen recent AI video fakery demos that can present any celebrity saying anything at all, almost indistinguishable from reality. That will soon be pretty routine tech for AR apps. They could even use your actual face as input to image-matching search engines to find the most plausible naked lookalikes. So anyone could digitally dress or undress you, not just with their eyes, but with a hi-res visor using sophisticated AI-enabled image processing software. They could put you in any kind of outfit, change your skin color or make-up or age or figure, and make you look as pretty and glamorous or as slutty as they want. And you won’t have any idea what they are seeing. You simply won’t know whether they are respectfully celebrating your inherent beauty, or flattering you by making you look even prettier, which you might not mind at all, or might object to strongly in the absence of explicit consent, or worse still, stripping or degrading you to whatever depths they wish, with no consent or notification, which you probably will mind a lot.

Or they can treat you as just an object on which to superimpose some other avatar, which could be anything or anyone – a zombie, favorite actress or supermodel. They won’t need your consent and again you won’t have any idea what they are seeing. The avatar may make the same gestures and movements and even talk plausibly, saying whatever their AI thinks they might like, but it won’t be you. In some ways this might not be so bad. You’d still be reduced to an object but at least it wouldn’t be you that they’re looking at naked. To most strangers on a high street most of the time, you’re just a moving obstacle to avoid bumping into, so being digitally transformed into a walking display board may worry you a little, but most people will cope with that bit. It is when you stop being just a passing stranger and start to interact in some way that it really starts to matter. You probably won’t like it if someone is chatting to you but they are actually looking at someone else entirely, especially if the viewer is one of your friends or your partner. And if your partner is kissing or cuddling you but seeing someone else, that would be a strong breach of trust, but how would you know? This sort of thing could and probably will damage a lot of relationships.

Most of the software to do most of this is already in development and much is already demonstrable. The rest will develop quickly once AR visors become commonplace.

In the office, in the home, when you’re shopping or at a party, you soon won’t have any idea what or who someone else is seeing when they look at you. Imagine how that would clash with rules that are supposed to protect against sexual harassment in the office. Whole new levels of harassment will be enabled, much of it invisible. How can we police behaviors we can’t even detect? Will hardware manufacturers be forced to build in transparency and continuous experience recording?

The main casualty will be trust. It will make us question how much we trust each of our friends, colleagues and acquaintances. It will build walls. People will often become suspicious of others, not just strangers but friends and colleagues. Some people will become fearful. You may dress as primly or modestly as you like, but if the viewer chooses to see you wearing a sexy outfit, perhaps their behavior and attitude towards you will be governed by that rather than by reality. Increased digital objectification might lead to increased physical sexual assault or rape. We may see more people more often objectifying women in more circumstances.

The tech applies equally to men of course. You could make a man look like a silverback gorilla or a zombie or fake-naked. Some men will care more than others, but the vast majority of real victims will undoubtedly be women. Many men objectify women already. In the future AR world, they’ll be able to do so far more effectively and more easily.

Who controls AI, controls the world

This week, the fastest supercomputer broke a world record for AI, using machine learning in climate research:

https://www.wired.com/story/worlds-fastest-supercomputer-breaks-ai-record/

I guess most readers thought this was a great thing; after all, we need to solve climate change. That wasn’t my thought. The first thing my boss told me when I used a computer for the first time was: “shit in, shit out”. I don’t remember his name but I remember that concise lesson every time I read about climate models. If either the model or the data is garbage, or both, the output will also be garbage.

So my first thought on reading about this new record was: will they let the AI work everything out for itself, using all the raw, unadjusted data available about the environment, including all the astrophysics data about every kind of solar activity, human agricultural and industrial activity, air travel, every unadjusted measurement of or proxy for surface, sea and air temperatures ever collected, and any empirical evidence for corrections that might be needed on such data in any direction? Will they then let it make its own deductions, form its own models of how it might all be connected, and watch eagerly as it makes predictions?

Or will they just input their own models, CO2 blinkering, prejudices and group-think, adjusted datasets, data omissions and general distortions of historical records into biased models already indoctrinated with climate change dogma, so that it will reconfirm the doom and gloom forecasts we’re so used to hearing, maximizing their chances of continued grants? If they do that, the AI might as well be a cardboard box with a pre-written article stuck on it. Shit in, shit out.

It’s obvious that the speed and capability of the supercomputer is of secondary importance to who controls the AI, its access to data, and its freedom to draw its own conclusions.

(Read my blog on Fake AI: https://timeguide.wordpress.com/2017/11/16/fake-ai/)

You may recall a week or two ago that IBM released a new face database to try to address bias in AI face recognition systems. Many other kinds of data could have biases for all sorts of reasons. At face value reducing bias is a good thing, but what exactly do we mean by that? Who decides what is biased and what is real? There are very many potential AI uses that are potentially sensitive, such as identifying criminals or distinguishing traits that correlate with gender, sexuality, race, religion, or indeed any discernible difference. Are all deductions by the AI permissible, or are huge swathes of possible deductions not permitted because they might be politically unacceptable? Who controls the AI? Why? With what aims?

Many people have some degree of influence on AI: those who provide funding and equipment, the theoreticians, those who design hardware, those who design the learning and training mechanisms, those who supply the data, those who censor or adjust data before letting the AI see it, those who design the interfaces, those who interpret and translate the results, and those who decide which results are permissible, how to spin them, and whether to publish them.

People are often impressed when a big powerful computer outputs results of massive amounts of processing. Outputs may often be used to control public opinion and government policy, to change laws, to alter the balance of power in society, and to create and destroy empires. AI will eventually make or influence most decisions of any consequence.

As AI techniques become more powerful, running on faster and better computers, we must always remember that golden rule: shit in, shit out. And we must always be suspicious of those who might have reason to influence an outcome.

Because who controls AI, controls the world.

Future AI: Turing multiplexing, air gels, hyper-neural nets

Just in time to make 2018 a bit less unproductive, I managed to wake in the middle of the night with another few inventions. I’m finishing the year on only a third as many as 2016 and 2017, but better than some years. And I quite like these new ones.

Gel computing is a very old idea of mine, and I’m surprised no company has started doing it yet. Air gel is different. My original used a suspension of processing particles in gel, and the idea was that the gel would hold the particles in fixed locations with good free line of sight to neighbor devices for inter-device optical comms, while acting also as a coolant.

Air gel uses the same idea of suspending particles, but does so using ultrasound, with standing waves holding the particles aloft. They would form a semi-gel I suppose, much softer. The intention is that they will be more easily movable than in a gel, and maybe able to rotate. I imagine using rotating magnetic fields to rotate them, and using that mechanism to implement different configurations of inter-device nets. That would be the first pillar of running multiple neural nets in the same space at the same time, using spin-based TDM (time division multiplexing), or synchronized space multiplexing if you prefer. If a device uses on-board processing that is fast compared to the signal transmission time to other devices (the speed of light may be fast, but it can still severely limit combined processing and comms), then the ability to handle processing for several other networks while awaiting a response allows a processing network to be multiplied up several times. A neural net could become a hyper-neural net.

Given that this is intended for mid-century AI, I’m also making the assumption that true TDM can also be used on each net, my second pillar. Signals would carry a stream of slots holding bits for each processing instance. Since this allows a Turing machine to implement many different processes in parallel, I decided to call it Turing multiplexing. Again, it helps alleviate the potential gulf between processing and communication times. Combining Turing and spin multiplexing would allow a single neural net to be multiplied up potentially thousands or millions of times – hyper-neurons seems as good a term as any.
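As a purely illustrative sketch (the function names are mine, and this is ordinary digital slot multiplexing rather than the optical mechanism described), interleaving bit-streams from several processing instances into one shared channel and recovering them again looks like this:

```python
def tdm_mux(streams):
    """Interleave equal-length bit-streams into a slot sequence: one slot per stream per frame."""
    return [slot for frame in zip(*streams) for slot in frame]

def tdm_demux(slots, n_streams):
    """Recover each stream by taking every n-th slot, offset by its slot position."""
    return [slots[i::n_streams] for i in range(n_streams)]

# Three processing instances sharing one physical channel.
streams = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0]]
muxed = tdm_mux(streams)        # [1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0]
recovered = tdm_demux(muxed, 3)
assert recovered == streams
```

Each "frame" carries one slot for every instance, so a single Turing machine servicing the channel effectively runs all the instances in parallel, which is the point of the Turing multiplexing idea.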

The third pillar of this system is that the processing particles (each could contain a large number of neurons or other IT objects) could be energized and clocked using very high speed alternating EM fields – radio, microwaves, light, even x-rays. I don’t have any suggestions for processing mechanisms that might operate at such frequencies, though Pauli switches might work at lower speeds, using the Pauli exclusion principle to link electron spin states to make switches. I believe early versions of spin qubits use a similar principle. I’m agnostic about whether conventional Turing machine or quantum processing would be used, or any combination. In any case, it isn’t my problem; I suspect that future AIs will figure out the physics and invent the appropriate IT.

Processing devices operating at high speed could use a lot of energy and generate a lot of heat, and encouraging the system to lase by design would be a good way to cool it as well as powering it.

A processor using such mechanisms need not be bulky. I always assumed a yogurt pot size for my gel computer before and an air gel processor could be the same, about 100ml. That is enough to suspend a trillion particles with good line of sight for optical interconnections, and each connection could utilise up to millions of alternative wavelengths. Each wavelength could support many TDM channels and spinning the particles multiplies that up again. A UV laser clock/power source driving processors at 10^16Hz would certainly need to use high density multiplexing to make use of such a volume, with transmission distances up to 10cm (but most sub-mm) otherwise being a strongly limiting performance factor, but 10 million-fold WDM/TDM is attainable.

A trillion of these hyper-neurons using that multiplexing would act very effectively as 10 million trillion neurons, each operating at 10^16Hz processing speed. That’s quite a lot of zeros, 35 of them, and yet each hyperneuron could have connections to thousands of others in each of many physical configurations. It would be an obvious platform for supporting a large population of electronically immortal people and AIs who each want a billion replicas, and if it only occupies 100ml of space, the environmental footprint isn’t an issue.
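Those figures can be sanity-checked in a few lines of Python; the specific values (particle count, combined multiplexing factor, clock speed, and the rough human-brain estimate) are the assumptions stated above, not measurements:

```python
particles = 10**12      # processing particles suspended in ~100 ml
multiplexing = 10**7    # combined WDM/TDM/spin multiplexing factor
clock = 10**16          # Hz, set by the UV laser clock/power source

effective_neurons = particles * multiplexing   # "10 million trillion" = 10^19
ops_per_second = effective_neurons * clock     # 10^35 - the "35 zeros"
assert ops_per_second == 10**35

# Rough human-brain comparison (assumed: ~10^11 neurons firing at ~100 Hz)
brain_ops = 10**11 * 10**2
assert ops_per_second // brain_ops == 10**22   # the "10^22 times faster" figure
```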

It’s hard to know how to talk to a computer that operates like a brain, but is 10^22 times faster, but I’d suggest ‘Yes Boss’.

With automation driving us towards UBI, we should consider a culture tax

Regardless of party politics, most people want a future where everyone has enough to live a dignified and comfortable life. To make that possible, we need to tweak a few things.

Universal Basic Income

I suggested a long time ago that in the far future we could afford a basic income for all, without any means testing on it, so that everyone has an income at a level they can live on. It turned out I wasn’t the only one thinking that and many others since have adopted the idea too, under the now usual terms Universal Basic Income or the Citizen Wage. The idea may be old, but the figures are rarely discussed. It is harder than it sounds, and being a nice idea doesn’t ensure economic feasibility.

No means testing means very little admin is needed, saving the estimated 30% wasted on admin costs today. Then wages could go on top, so that everyone is still encouraged to work, and then all income from all sources is totalled and taxed appropriately. It is a nice idea.

The differences in figures between parties would be relatively minor, so let’s ignore party politics. In today’s money, it would be great if everyone could have, say, £30k a year as a state benefit, then earn whatever they can on top. £30k is around today’s average wage. It doesn’t make you rich, but you can live on it, so nobody would be poor in any sensible sense of the word. With everyone economically provided for and able to lead comfortable and dignified lives, it would be a utopia compared to today. Sadly, it can’t work with those figures yet: 65,000,000 x £30,000 = £1,950Bn. The UK economy isn’t big enough. The state only gets to control part of GDP, and out of that reduced budget it also has its other costs of providing health, education, defence etc., so the amount that could be dished out to everyone on this basis is a lot smaller than £30k. Even if the state were to take 75% of GDP and spend most of it on the basic income, £10k per person would be pushing it. So a couple would struggle to afford even the most basic lifestyle, and single people would really struggle. Some people would still need additional help, and that reduces the pool left to pay the basic allowance still further. Also, if the state takes 75% of GDP, only 25% is left for everything else, so salaries would be flat, reducing the incentive to work, while investment and entrepreneurial activity are starved of both resources and incentive. It simply wouldn’t work today.
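The affordability arithmetic can be sketched quickly; the GDP figure and the residual-spending estimate below are my assumptions, chosen only to match the rough numbers in the paragraph above:

```python
population = 65_000_000
ubi = 30_000                    # £ per person per year, today's money
total_cost = population * ubi
assert total_cost == 1_950_000_000_000   # the £1,950Bn figure

gdp = 2_000_000_000_000         # rough UK GDP of ~£2Tn (an assumption)
state_share = 0.75              # extreme upper bound on the state's take
other_costs = 850_000_000_000   # health, education, defence etc. (assumed)

affordable = (gdp * state_share - other_costs) / population
assert round(affordable) == 10_000       # "£10k per person would be pushing it"
```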

Simple maths thus forces us to make compromises. Sharing resources reduces costs considerably. In a first revision, families might be given less for kids than for the adults, but what about groups of young adults sharing a big house? They may be adults but they also benefit from the same economy of shared resources. So maybe there should be a household limit, or a bedroom tax, or forms and means testing, and it mustn’t incentivize people living separately or house supply suffers. Anyway, it is already getting complicated and our original nice idea is in the bin. That’s why it is such a mess at the moment. There just isn’t enough money to make everyone comfortable without doing lots of allowances and testing and admin. We all want utopia, but we can’t afford it. Even the modest £30k-per-person utopia costs at least 3 times more than the UK can afford. Switzerland is richer per capita but even there they have rejected the idea.

However, if we can get back to the average 2.5% growth per year in real terms that used to apply pre-recession, and surely we can, it would only take 45 years to get there. That isn’t such a long time. We have hope that if we can get some better government than we have had of late, and are prepared to live with a little economic tweaking, we could achieve good quality of life for all in the second half of the century.
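The 45-year figure is simple compound growth; a quick check, assuming the economy needs to grow to roughly three times its current size to afford the £30k level:

```python
growth = 1.025   # 2.5% real growth per year, as assumed above
target = 3.0     # economy roughly 3x bigger (the affordability gap)

factor, years = 1.0, 0
while factor < target:
    factor *= growth
    years += 1

assert years == 45   # matches the "45 years" claim
```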

So I still really like the idea of a simple welfare system, providing a generous base level allowance to everyone, topped up by rewards of effort, but recognise that we in the UK will have to wait decades before we can afford to put that base level at anything like comfortable standards though other economies could afford it earlier.

Meanwhile, we need to tweak some other things to have any chance of getting there. I’ve commented often that pure capitalism would eventually lead to a machine-based economy, with the machine owners having more and more of the cash, and everyone else getting poorer, so the system will fail. Communism fails too. Thankfully much of the current drive in UBI thinking is coming from the big automation owners so it’s comforting to know that they seem to understand the alternative.

Capitalism works well when rewards are shared sensibly; it fails when wealth concentration is too high or when incentive is too low. Preserving the incentive to work and create is mainly a matter of setting tax levels well. Making sure that wealth doesn’t get concentrated too much needs a new kind of tax.

Culture tax

The solution I suggest is a culture tax. Culture in the widest sense.

When someone creates and builds a company, they don’t do so from a state of nothing. They currently take for granted all our accumulated knowledge and culture – trained workforce, access to infrastructure, machines, governance, administrative systems, markets, distribution systems and so on. They add just another tiny brick to what is already a huge and highly elaborate structure. They may invest their time and money heavily, but considered as part of the overall system their company inhabits, they only pay for a fraction of the things their company will use.

That accumulated knowledge, culture and infrastructure belongs to everyone, not just those who choose to use it. It is common land, free to use, today. Businesses might consider that this is what they pay taxes for already, but that isn’t explicit in the current system.

The big businesses that are currently avoiding paying UK taxes by paying overseas companies for intellectual property rights could be seen as trailblazing this approach. If they can understand and even justify the idea of paying another part of their company for IP or a franchise, why should they not pay the host country for its IP – access to the residents’ entire culture?

This kind of tax would provide the means needed to avoid too much concentration of wealth. A future businessman might still choose to use only software and machines instead of a human workforce to save costs, but levying taxes on use of the cultural base that makes that possible allows a direct link between use of advanced technology and taxation. Sure, he might add a little extra insight or new knowledge, but would still have to pay the rest of society for access to its share of the cultural base, inherited from the previous generations, on which his company is based. The more he automates, the more sophisticated his use of the system, the more he cuts a human workforce out of his empire, the higher his taxation. Today a company pays for its telecoms service which pays for the network. It doesn’t pay explicitly for the true value of that network, the access to people and businesses, the common language, the business protocols, a legal system, banking, payments system, stable government, a currency, the education of the entire population that enables them to function as actual customers. The whole of society owns those, and could reasonably demand rent if the company is opting out of the old-fashioned payments mechanisms – paying fair taxes and employing people who pay taxes. Automate as much as you like, but you still must pay your share for access to the enormous value of human culture shared by us all, on which your company still totally depends.

Linking to technology use makes good sense. Future AI and robots could do a lot of work currently done by humans. A few people could own most of the productive economy. But they would be getting far more than their share of the cultural base, which belongs equally to everyone. In a village where one farmer owns all the sheep, other villagers would be right to ask for rent for their share of the commons if he wants to graze them there.

I feel confident that this extra tax would solve many of the problems associated with automation. We all equally own the country, its culture, laws, language, human knowledge (apart from current patents, trademarks etc. of course), its public infrastructure, not just businessmen. Everyone surely should have the right to be paid if someone else uses part of their share. A culture tax would provide a fair ethical basis to demand the taxes needed to pay the Universal Basic Income so that all may prosper from the coming automation.

The extra culture tax would not magically make the economy bigger, though automation may well increase it a lot. The tax would ensure that wealth is fairly shared. Culture tax/UBI duality is a useful tool to be used by future governments to make it possible to keep capitalism sustainable, preventing its collapse, preserving incentive while fairly distributing reward. Without such a tax, capitalism simply may not survive.

Monopoly and diversity laws should surely apply to political views too

With all the calls for staff diversity and equal representation, one important area of difference has so far been left unaddressed: political leaning. In many organisations, the political views of staff don’t matter. Nobody cares about the political views of staff in a double glazing manufacturer because they are unlikely to affect the qualities of a window. However, in an organisation that has a high market share in TV, social media or internet search, or that is a government department or a public service, political bias can have far-reaching effects. If too many of its staff and their decisions favor a particular political view, it is in danger of becoming what is sometimes called ‘the deep state’. That is, their everyday decisions and behaviors might privilege one group over another. If most of their colleagues share similar views, they might not even be aware of their bias, because it is the norm in their everyday world. They might think they are doing their job without fear or favor but still strongly favor one group of users over another.

Staff bias doesn’t only affect an organisation’s policies, values and decisions. It also affects recruitment and promotion, and can result in an increasing concentration of a particular world view until it becomes an issue. When a vacancy appears at board level, remaining board members will tend to promote someone who thinks like themselves. Once any leaning takes hold, near monopoly can quickly result.

A government department should obviously be free of bias so that it can carry out instructions from a democratically elected government with equal professionalism regardless of its political flavor. Employees may be in positions where they can allocate resources or manpower more to one area than another, or provide analysis to ministers, or expedite or delay a communication, or emphasize or dilute a recommendation in a survey, or may otherwise have some flexibility in interpreting instructions and even laws. It is important they do so without political bias so transparency of decision-making for external observers is needed along with systems and checks and balances to prevent and test for bias or rectify it when found. But even if staff don’t deliberately abuse their positions to deliberately obstruct or favor, if a department has too many staff from one part of the political spectrum, normalization of views can again cause institutional bias and behavior. It is therefore important for government departments and public services to have work-forces that reflect the political spectrum fairly, at all levels. A department that implements a policy from a government of one flavor but impedes a different one from a new government of opposite flavor is in strong need of reform and re-balancing. It has become a deep state problem. Bias could be in any direction of course, but any public sector department must be scrupulously fair in its implementation of the services it is intended to provide.

Entire professions can be affected. Bias can obviously occur in any direction, but over many decades of slow change, academia has become dominated by left-wing employees, and primary teaching by almost exclusively female ones. If someone spends most of their time with others who share the same views, those views can become normalized to the point that a dedicated teacher might think they are delivering a politically balanced lesson that is actually far from it. It is impossible to spend all day teaching kids without some personal views and values rubbing off on them. The young have always been slightly idealistic and left leaning – it takes years of adult experience outside academia to learn the pragmatic reality of implementing that idealism, during which people generally migrate rightwards – but with a stronger left bias ingrained during education, it takes longer for people to unlearn naiveté and replace it with reality. Surely education should be educating kids about all political viewpoints and teaching them how to think, so they can choose for themselves where to put their allegiance, not delivering a long process of political indoctrination?

The media has certainly become more politically crystallized and aligned in the last decade, with far fewer media companies catering for people across the spectrum. There are strongly left-wing and right-wing papers, magazines, TV and radio channels or shows. People have a free choice of which papers to read, and normal monopoly laws work reasonably well here, with proper checks when there is a proposed takeover that might result in someone getting too much market share. However, there are still clear examples of near monopoly in other places where fair representation is particularly important. In spite of frequent denials of any bias, the BBC for example was found to have a strong pro-EU/Remain bias for its panel on its flagship show Question Time:

IEA analysis shows systemic bias against ‘Leave’ supporters on flagship BBC political programmes

The BBC does not have a TV or radio monopoly but it does have a very strong share of influence. Shows such as Question Time can strongly influence public opinion, so if biased towards one viewpoint they could be considered to be campaigning for that cause, though their contributions would lie outside electoral commission scrutiny of campaign funding. Many examples of BBC bias on a variety of social and political issues exist. It often faces accusations of bias from every direction, sometimes unfairly, so again proper transparency must exist so that independent external groups can appeal for change and be heard fairly, with change enforced when necessary. The BBC is in a highly privileged position, paid for by a compulsory license fee on pain of imprisonment, and also in a socially and politically influential position. It is doubly important that it proportionally represents the views of the people rather than acting as an activist group using license-payer funds to push the political views of its staff, engaging in its own social engineering campaigns, or otherwise acting as a propaganda machine.

As for private industry, most isn’t in a position of political influence, but some areas certainly are. Social media have enormous power to influence the views their users are exposed to, choosing to filter or demote material they don’t approve of, as well as providing a superb activist platform. Search companies can choose to deliver results according to their own agendas, with those they support featuring earlier or more prominently than those they don’t. If social media or search companies provide different service or support or access according to the political leaning of the customer then they can become part of the deep state. And again, with normalization creating the risk of institutional bias, the clear remedy is to ensure that these companies have a mixture of staff representative of the social mix. They seem extremely enthusiastic about doing that for other forms of diversity. They need to apply similar enthusiasm to political diversity too.

Achieving it won’t be easy. IT companies such as Google, Facebook and Twitter currently lean strongly left, though the problem would be just as bad if it were to swing in the other direction. Given the natural monopoly tendency in each sector, social media companies should be politically neutral, not deep state companies.

AI being developed to filter posts or decide how much attention they get must also be unbiased. AI algorithmic bias could become a big problem, but it is just as important that bias is judged by neutral bodies, not by people who are biased themselves, who may try to ensure that AI shares their own leaning. I wrote about this issue here: https://timeguide.wordpress.com/2017/11/16/fake-ai/

But what about government? Today’s big issue in the UK is Brexit. In spite of all its members being elected or re-elected during the Brexit process, the UK Parliament nevertheless has 75% of MPs to defend the interests of the 48% voting Remain and only 25% to represent the other 52%. Remainers get three times more Parliamentary representation than Brexiters. People can choose who they vote for, but with only one candidate available from each party, voters cannot choose by more than one factor, and most people will vote by party line, preserving whatever bias exists when parties select which candidates to offer. It would be impossible to ensure that every interest is reflected proportionately, but there is another solution. I suggested that scaled votes could be used for some issues, scaling an MP’s vote weighting by the proportion of the population supporting their view on that issue:

Achieving fair representation in the new UK Parliament
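One way to implement that scaling (my reading of the idea above; the exact weighting rule is an assumption, and the seat and support numbers are just the Brexit figures from the text) is to weight each MP’s vote so that each stance’s total voting power matches its share of public support:

```python
# Toy sketch of issue-scaled voting in a 650-seat parliament.
# Assumed rule: weight each MP's vote so that each stance's total
# voting power matches its public support share on that issue.

def vote_weights(seats, support):
    """seats: stance -> number of MPs; support: stance -> population share.
    Returns stance -> per-MP vote weight."""
    total = sum(seats.values())
    return {s: support[s] * total / seats[s] for s in seats}

seats = {"remain": 488, "leave": 162}      # roughly 75% / 25% of 650 MPs
support = {"remain": 0.48, "leave": 0.52}  # referendum shares

w = vote_weights(seats, support)
weighted = {s: w[s] * seats[s] for s in seats}
# weighted totals: remain 312, leave 338 - proportional to public support
```

With these weights the 162 Leave MPs collectively outvote the 488 Remain MPs 338 to 312, mirroring the 52/48 split, while each MP still casts an individual vote.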

Like company boards, once a significant bias in one direction exists, political leaning tends to self-reinforce to the point of near monopoly. Deliberate procedures need to be put in place to ensure equality of representation, even when people are elected. Obviously people who benefit from current bias will resist change, but everyone loses if democracy cannot work properly.

The lack of political diversity in so many organisations is becoming a problem. Effective government may be deliberately weakened or amplified by departments with their own alternative agendas, while social media and media companies may easily abuse their enormous power to push their own sociopolitical agendas. Proper functioning of democracy requires that this problem is fixed, even if a lot of people like it the way it is.

Thoughts on declining male intelligence

I’ve seen a few citations this week of a study showing a 3 IQ point per decade drop in men’s intelligence levels: https://www.sciencealert.com/iq-scores-falling-in-worrying-reversal-20th-century-intelligence-boom-flynn-effect-intelligence

I’m not qualified to judge the merits of the study, but it is interesting if true, and since it is based on studying 730,000 men and seems to use a sensible methodology, it does sound reasonable.

I wrote last November about the potential effects of environmental exposure to hormone disruptors on intelligence, pointing out that if estrogen-mimicking hormones cause a shift in IQ distribution, this would be very damaging even if mean IQ stays the same. Although male and female IQs are about the same, male IQs are less concentrated around the mean, so there are more men than women at each extreme.
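That tail effect can be quantified with a quick normal-distribution sketch. The standard deviations below are purely illustrative (the study doesn’t supply them): equal means but a slightly wider male spread produce a male majority at both extremes, and the ratio grows the further out you go:

```python
import math

def tail_above(iq, mean=100.0, sd=15.0):
    """Fraction of a normal IQ distribution above a threshold."""
    z = (iq - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Illustrative spreads only: same mean, males slightly more dispersed.
male_sd, female_sd = 15.0, 14.0
for threshold in (130, 145, 160):
    ratio = tail_above(threshold, sd=male_sd) / tail_above(threshold, sd=female_sd)
    print(threshold, round(ratio, 2))
# The male:female ratio above the cutoff rises as the threshold rises,
# even though both distributions have exactly the same mean.
```

The same arithmetic works at the bottom end of the range too, which is why a compression of the male distribution changes both extremes at once.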

We need to stop xenoestrogen pollution

From a social equality point of view of course, some might consider it a good thing if men’s IQ range is caused to align more closely with the female one. I disagree. I suggested some of the consequences that should be expected if the male IQ distribution were to compress towards the female one, and managed to confirm many of them, so it does look like it is already a problem.

This new study suggests a shift of the whole distribution downwards, which could actually be in addition to redistribution, making it even worse. The study doesn’t seem to mention distribution, but the authors do show that the drop in mean IQ must be caused by environmental or lifestyle changes, both of which we have seen in recent decades.

IQ distribution matters more than the mean. Those at the very top of the range contribute many times more to progress than those further down. Magnitude of contribution is very dependent on those last few IQ points. I can verify that from personal experience. I have a virus that causes occasional periods of nerve inflammation, and as well as causing problems with my peripheral motor activity, it seems to strongly affect my thinking ability and comprehension. During those periods I generate very few new ideas or inventions and far fewer worthwhile insights than when I am on form. I sometimes have to wait until I recover before I can understand my own previous ideas and add to them. You’ll see it in numbers (and probably quality) of blog posts for example. I really feel a big difference in my thinking ability, and I hate feeling dumber than usual. Perhaps people don’t notice if they’ve always had the reduced IQ so have never experienced being less smart than they were, but my own experience is that perceptive ability and level of consciousness are strong contributors to personal well-being.

As for society as a whole, AI might come to the rescue at least in part. Just in time perhaps, since we’re creating the ability for computers to assist us and up-skill us just as we see the number of people in the very highest IQ ranges drop. A bit like watching a new generation come on stream and take the reins as we age and take a back seat. On the other hand, it does bring forwards the time where computers overtake humans, humans become more dependent on machines, and machines become more of an existential threat as well as our babysitters.

Biomimetic insights for machine consciousness

About 20 years ago I gave my first talk on how to achieve consciousness in machines, at a World Future Society conference, and went on to discuss how we would co-evolve with machines. I’ve lectured on machine consciousness hundreds of times but never produced any clear slides that explain my ideas properly. I thought it was about time I did. My belief is that today’s deep neural networks using feed-forward processing with back propagation training cannot become conscious. No digital algorithmic neural network can, even though they can certainly produce extremely good levels of artificial intelligence. By contrast, nature also uses neurons but easily produces conscious machines such as humans. I think the key difference is not just that nature uses analog adaptive neural nets rather than digital processing (as I believe Hans Moravec first observed, a view that I readily accepted) but also that nature uses large groups of these analog neurons that incorporate feedback loops that act both as a sort of short term memory and provide time to sense the sensing process as it happens, a mechanism that can explain consciousness. That feedback is critically important in the emergence of consciousness IMHO. I believe that if the neural network AI people stop barking up the barren back-prop tree and start climbing the feedback tree, we could have conscious machines in no time, but Moravec is still probably right that these need to be analog to enable true real-time processing as opposed to simulation of that.
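A minimal sketch shows the feedback-as-short-term-memory point (all parameters are invented for illustration, and this is a crude digital caricature of an analog neuron, not a model of a real one): a single leaky unit with a self-feedback loop keeps ‘echoing’ a brief input long after it has gone, while the same unit without feedback forgets it almost immediately:

```python
import math

def run_unit(feedback_gain, steps=30, pulse_len=5, leak=0.5):
    """Leaky unit: the state decays each step, but can be re-excited by
    its own output via a feedback loop (crude analog-neuron caricature)."""
    state, trace = 0.0, []
    for t in range(steps):
        inp = 1.0 if t < pulse_len else 0.0        # brief input pulse
        state = leak * state + feedback_gain * math.tanh(state) + inp
        trace.append(state)
    return trace

with_fb = run_unit(feedback_gain=1.2)
without_fb = run_unit(feedback_gain=0.0)
# With feedback, activity persists long after the pulse ends - a
# short-term memory of the input; without it, activity decays to
# nothing within a few steps.
```

The persisting activity is what buys the time to ‘sense the sensing process as it happens’: the echo of the input is still circulating while later processing examines it.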

I may be talking nonsense of course, but here are my thoughts, finally explained as simply and clearly as I can. These slides illustrate only the simplest forms of consciousness. Obviously our brains are highly complex and evolved many higher level architectures, control systems, complex senses and communication, but I think the basic foundations of biomimetic machine consciousness can be achieved as follows:

That’s it. I might produce some more slides on higher level processing such as how concepts might emerge, and why in the long term, AIs will have to become hive minds. But they can wait for later blogs.

AI that talks to us could quickly become problematic

Google is making the news again, adding evidence to the unfortunate stereotype of the autistic IT nerd who barely understands normal people, and they have therefore been astonished at a backlash that normal people would all easily have predicted. (I’m autistic and work mostly in IT too, and am well used to the stereotype, so it doesn’t bother me; in fact it is a sort of ‘get out of social interactions free’ card). Last time it was Google Glass, where it apparently didn’t occur to them that people may not want other people videoing them without consent in pubs and changing rooms. This time it is Google Duplex, which makes phone calls on your behalf to arrange appointments using a voice that is almost indistinguishable from a normal human’s. You could save time making an appointment with a hairdresser apparently, so the Googlanders decided it must be a brilliant breakthrough, and expected everyone to agree. They didn’t.

Some of the objections have been about ethics: e.g. An AI should not present itself as human – Humans have rights and dignity and deserve respectful interactions with other people, but an AI doesn’t and should not masquerade as human to acquire such privilege without knowledge of the other party and their consent.

I would be more offended by the presumed attitude of the user. If someone thinks they are so much better than me that they can demand my time and attention without the expense of any of their own, delegating instead to a few microseconds of processing time in a server farm somewhere, I’ll treat them with the contempt they deserve. My response will not be favourable. I am already highly irritated by the NHS using simple voice interaction messaging to check I will attend a hospital appointment. The fact that my health is on the line and notices at surgeries say I will be banned if I complain on social media is sufficient blackmail to ensure my compliance, but it still comes at the expense of my respect and goodwill. AI-backed voice interaction with a better voice wouldn’t be any better, and if it were asking for more interaction, such as actually booking an appointment, it would be extremely annoying.

In any case, most people don’t speak in fully formed, grammatically and logically correct sentences. If you listen carefully to everyday chat, a lot of sentences are poorly pronounced, incomplete, jumbled, full of ums and ers and likes, and they require a great deal of cooperation by the listener to make any sense at all. They also wander off topic frequently. People don’t stick to a rigid vocabulary list or a set of nicely selected sentences. A response is likely to include lots of preamble and verbal meandering, which adds ambiguity. The example used in a demo, “I’d like to make a hairdressing appointment for a client”, sounds fine until you factor in normal everyday humanity. A busy hairdresser or a lazy receptionist is not necessarily going to cooperate fully. “What do you mean, client?”, “404 not found”, “piss off google”, “oh FFS, not another bloody computer”, “we don’t do hairdressing, we do haircuts”, “why can’t your ‘client’ call themselves then?” and a million other responses are more likely than “what time would you like?”

Suppose though that it eventually gets accepted by society. First, call centers beyond the jurisdiction of your nuisance call blocker authority will incessantly call you at all hours asking or telling you all sorts of things, wasting huge amounts of your time and reducing quality of life. Voice spam from humans in call centers is bad enough. If the owners can multiply productivity by 1000 by using AI instead of people, the result is predictable.

We’ve seen the conspicuous political use of social media AI already. Facebook might have allowed companies to use very limited and inaccurate knowledge of you to target ads or articles that you probably didn’t look at. Voice interaction would be different. It uses a richer emotional connection than text or graphics on a screen. Google knows a lot about you too, but it will know a lot more soon. These big IT companies are also playing with tech to log you on easily to sites without passwords. Some of the gadgets involved might be worn, such as watches or bracelets or rings. They can pick up signals to identify you, but they can also check emotional states such as stress level. Voice gives away emotion too. AI can already tell better than almost all people whether you are telling the truth or lying or hiding something. Tech such as iris scans can also tell emotional states, as well as give health clues. Simple photos can reveal your age quite accurately to AI (check out how-old.net). The AI voice sounds human, but it is better than even your best friends at guessing your age, your stress and other emotions, your health, whether you are telling the truth or not, and it knows far more about what you like and dislike and what you really do online than anyone you know, including you. It knows a lot of your intimate secrets. It sounds human, but its nearest human equivalent was probably Machiavelli. That’s who will soon be on the other side of the call, not some dumb chatbot. Now re-calculate political interference, and factor in the political leaning and social engineering desires of the companies providing the tools. Google and Facebook and the others are very far from politically neutral. One presidential candidate might get full cooperation, assistance and convenient looking the other way, while their opponent might meet rejection and citation of the official rules on non-interference.
Campaigns on social issues will also be amplified by AI coupled to voice interaction. I looked at some related issue in a previous blog on fake AI (i.e. fake news type issues): https://timeguide.wordpress.com/2017/11/16/fake-ai/

I could but won’t write a blog on how this tech could couple well to sexbots to help out incels. It may actually have some genuine uses in providing synthetic companionship for lonely people, or helping or encouraging them in real social interactions with real people. It will certainly have some uses in gaming and chatbot game interaction.

We are not very far from computers that are smarter than people across a very wide spectrum, and probably not very far from conscious machines that have superhuman intelligence. If we can’t even rely on IT companies to understand likely consequences of such obvious stuff as Duplex before they push it, how can we trust them in other upcoming areas of AI development, or even closer term techs with less obvious consequences? We simply can’t!

There are certainly a few areas where such technology might help us, but most are minor and the rest don’t need any deception, and they all come at great cost or real social and political risk, as well as more abstract risks such as threats to human dignity and other ethical issues. I haven’t given this much thought yet and I am sure there are many other consequences I have not touched on. Google should do more thinking before they release stuff. Technology is becoming very powerful, but we all know that great power comes with great responsibility, and since most people aren’t engineers and so can’t think through all the potential technology interactions and consequences, engineers such as Google’s must act more responsibly. I had hoped they’d started, and they said they had, but this is not evidence of that.


Futurist memories: The leisure society and the black box economy

Things don’t always change as fast as we think. This is a piece I wrote in 1994 looking forward to a fully automated ‘black box economy’, a fly-by-wire society. Not much I’d change if I were writing it new today. Here:

The black box economy is a strictly theoretical possibility, but may result where machines gradually take over more and more roles until the whole economy is run by machines, with everything automated. People could be gradually displaced by intelligent systems, robots and automated machinery. If this were to proceed to the ultimate conclusion, we could have a system with the same or even greater output as the original society, but with no people involved. The manufacturing process could thus become a ‘black box’. Such a system would be so machine controlled that humans would not easily be able to pick up the pieces if it crashed – they would simply not understand how it works, or could not control it. It would be a fly-by-wire society.

The human effort could be reduced to simple requests. When you want a new television, a robot might come and collect the old one, recycling the materials and bringing you a new one. Since no people need be involved and the whole automated system could be entirely self-maintaining and self-sufficient there need be no costs. This concept may be equally applicable in other sectors, such as services and information – ultimately producing more leisure time.

Although such a system is theoretically possible – energy is free in principle, and resources are ultimately a function of energy availability – it is unlikely to go quite this far. We may go some way along this road, but there will always be some jobs that we don’t want to automate, so some people may still work. Certainly, far fewer people would need to work in such a system, and other people could spend their time in more enjoyable pursuits, or in voluntary work. This could be the leisure economy we were promised long ago. Just because futurists predicted it long ago and it hasn’t happened yet does not mean it never will. Some people would consider it Utopian, while others possibly a nightmare; it’s just a matter of taste.

Beyond VR: Computer assisted dreaming

I first played with VR in 1983/1984 while working in the missile industry. Back then we didn’t call it VR, we just called it simulation but it was actually more intensive than VR, just as proper flight simulators are. Our office was a pair of 10m wide domes onto which video could be projected, built decades earlier, in the 1950s I think. One dome had a normal floor, the other had a hydraulic platform that could simulate being on a ship. The subject would stand on whichever surface was appropriate and would see pretty much exactly what they would see in a real battlefield. The missile launcher used for simulation was identical to a real one and showed exactly the same image as a real one would. The real missile was not present of course but its weight was simulated and when the fire button was pressed, a 140dB bang was injected into the headset and weights and pulleys compensated for the 14kg of weight, suddenly vanishing from the shoulder. The experience was therefore pretty convincing and with the loud bang and suddenly changing weight, it was almost as hard to stand steady and keep the system on target as it would be in real life – only the presumed fear and knowledge of the reality of the situation was different.

Back then in 1983, as digital supercomputers had only just taken over from analog ones for simulation, it was already becoming obvious that this kind of computer simulation would one day allow ‘computer assisted dreaming’. (That’s one of the reasons I am irritated when Jaron Lanier is credited for inventing VR – highly realistic simulators and the VR ideas that sprung obviously from them had already been around for decades. At best, all he ‘invented’ was a catchy name for a lower cost, lower quality, less intense simulator. The real inventors were those who made the first generation simulators long before I was born and the basic idea of VR had already been very well established.)

‘Computer assisted dreaming’ may well be the next phase of VR. Today in conventional VR, people are immersed in a computer generated world produced by a computer program (usually) written by others. Via trial and feedback, programmers make their virtual worlds better. As AI and sensor technology continue rapid progress, this is very likely to change to make worlds instantly responsive to the user. By detecting user emotions, reactions, gestures and even thoughts and imagination, it won’t be long before AI can produce a world in real time that depends on those thoughts, imagination and emotions rather than putting them in a pre-designed virtual world. That world would depend largely on your own imagination, upskilled by external AI. You might start off imagining you’re on a beach, then AI might add to it by injecting all sorts of things it knows you might enjoy from previous experiences. As you respond to those, it picks up on the things you like or don’t like and the scene continues to adapt and evolve, to make it more or less pleasant or more or less exciting or more or less challenging etc., depending on your emotional state, external requirements and what it thinks you want from this experience. It would be very like being in a dream – computer assisted lucid dreaming, exactly what I wanted to make back in 1983 after playing in that simulator.
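The adapt-to-the-user loop described above is essentially a feedback controller: sense the user’s state, compare it with a target experience, nudge the scene, repeat. A minimal sketch, with the emotion ‘sensor’ stubbed out as an invented response curve rather than any real API:

```python
import math

def simulated_arousal(excitement):
    """Stand-in for a real emotion sensor: how aroused the user gets as
    the scene's excitement level rises (invented response curve)."""
    return math.tanh(excitement)

def adapt_scene(target_arousal=0.7, gain=0.5, steps=50):
    """Repeatedly sense the user, compare with the target experience,
    and nudge the scene's intensity up or down accordingly."""
    excitement = 0.0
    for _ in range(steps):
        error = target_arousal - simulated_arousal(excitement)
        excitement += gain * error    # make the scene more/less intense
    return excitement

e = adapt_scene()
# The loop settles where the sensed arousal matches the target level.
```

A real system would replace the stub with live emotion, gesture and imagination signals and adapt scene content rather than a single intensity number, but the sense-compare-nudge loop is the same shape.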

Most people enjoy occasional lucid dreams, where they realise they are dreaming and can then decide what happens next. Making VR do exactly that would be better than being trapped in someone else’s world. You could still start off with whatever virtual world you bought, a computer game or training suite perhaps, but it could adapt to you, your needs and desires to make it more compelling and generally better.

Even in shared experiences like social games, experiences could be personalised. Often all players need to see the same enemies in the same locations in the same ways to make it fair, but that doesn’t mean that the situation can’t adapt to the personalities of those playing. It might actually improve the social value if each time you play it looks different because your companions are different. You might tease a friend if every time you play with them, zombies or aliens always have to appear somehow, but that’s all part of being friends. Exploring virtual worlds with friends, where you both see things dependent on your friend’s personality would help bonding. It would be a bit like exploring their inner world. Today, you only explore the designer’s inner world.

This sort of thing would be a superb development and creativity tool. It could allow you to explore a concept you have in your head, automatically feeding in AI upskilling to amplify your own thoughts and ideas, showing you new paths to explore and helping you do so. The results would still be extremely personal to you, but you on a good day. You could accomplish more, have better visions, imagine more creative things, do more with whatever artistic talent you have. AI could even co-create synthetic personas, make virtual friends you can bond with, share innermost thoughts with, in total confidence (assuming the company you bought the tool from is trustworthy and isn’t spying on you or selling your details, so maybe best not to buy it from Facebook then).

And it would have tremendous therapeutic potential too. You could explore and indulge both enjoyable and troublesome aspects of your inner personality, to build on the good and alleviate or dispel the bad. You might become less troubled, less neurotic, more mentally healthy. You could build your emotional and creative skills. You could become happier and more fulfilled. Mental health improvement potential on its own makes this sort of thing worth developing.

Marketers would obviously try to seize control as they always do, and advertising is already adapting to VR and will continue into its next phases of development. Your own wants and desires might help guide the ‘dreaming’, but marketers will inevitably have some control over what else is injected, and will influence algorithms and AI in how it chooses how to respond to your input. You might be able to choose much of the experience, but others will still want and try to influence and manipulate you, to change your mindset and attitudes in their favour. That will not change until the advertising business model changes. You might be able to buy devices or applications that are entirely driven by you and you alone, but it is pretty certain that the bulk of products and services available will be at least partly financed by those who want to have some control of what you experience.

Nevertheless, computer-assisted dreaming could be a much more immersive and personal experience than VR, being more like an echo of your own mind and personality than external vision, more your own creation, less someone else’s. In fact, echo sounds a better term too. Echo reality, ER, or maybe personal reality, pereal, or mental echo, ME. Nah, maybe we need Lanier to invent a catchy name again, he is good at that. That 1983 idea could soon become reality.


People are becoming less well-informed

The Cambridge Analytica story has exposed a great deal about our modern society. They allegedly obtained access to 50M Facebook records to enable Trump’s team to target users with personalised messages.

One of the most interesting aspects is that unless they only employ extremely incompetent journalists, the news outlets making the biggest fuss about it must be perfectly aware of reports that Obama appears to have done much the same but on a much larger scale back in 2012, yet are keeping very quiet about it. According to Carol Davidsen, a senior Obama campaign staffer, Facebook allowed Obama’s team to suck out the whole social graph – ‘because they were on our side’ – before closing it to prevent Republican access to the same techniques. Trump’s campaign’s 50M looks almost amateur. I don’t like Trump, and I did like Obama before the halo slipped, but it seems clear to anyone who checks media across the political spectrum that both sides try their best to use social media to target users with personalised messages, and both sides are willing to bend rules if they think they can get away with it.

Of course all competent news media are aware of it. The reason some are not talking about earlier Democrat misuse but some others are is that they too all have their own political biases. Media today is very strongly polarised left or right, and each side will ignore, play down or ludicrously spin stories that don’t align with their own politics. It has become the norm to ignore the log in your own eye but make a big deal of the speck in your opponent’s, but we know that tendency goes back millennia. I watch Channel 4 News (which broke the Cambridge Analytica story) every day but although I enjoy it, it has a quite shameless lefty bias.

So it isn’t just the parties themselves that will try to target people with politically massaged messages, it is quite the norm for most media too. All sides of politics since Machiavelli have done everything they can to tilt the playing field in their favour, whether it’s use of media and social media, changing constituency boundaries or adjusting the size of the public sector. But there is a third group to explore here.

Facebook of course has full access to all of its 2.2Bn users’ records and social graph and is not squeaky clean neutral in its handling of them. Facebook has often been in the headlines over the last year or two thanks to its own political biases, with strongly weighted algorithms filtering or prioritising stories according to their political alignment. Like most IT companies Facebook has a left lean. (I don’t quite know why IT skills should correlate with political alignment unless it’s that most IT staff tend to be young, so lefty views implanted at school and university have had less time to be tempered by real world experience.) It isn’t just Facebook of course either. While Google has pretty much failed in its attempt at social media, it also has comprehensive records on most of us from search, browsing and Android, and via control of the algorithms that determine what appears in the first pages of a search, is also able to tailor those results to what it knows of our personalities. Twitter has unintentionally created a whole world of mob rule politics and justice, but in format is rapidly evolving into a wannabe Facebook. So, the IT companies have themselves become major players in politics.

A fourth player is now emerging – artificial intelligence, and it will grow rapidly in importance into the far future. Simple algorithms have already been upgraded to assorted neural network variants and already this is causing problems with accusations of bias from all directions. I blogged recently about Fake AI: https://timeguide.wordpress.com/2017/11/16/fake-ai/, concerned that when AI analyses large datasets and comes up with politically incorrect insights, this is now being interpreted as something that needs to be fixed – a case not of shooting the messenger, but forcing the messenger to wear tinted spectacles. I would argue that AI should be allowed to reach whatever insights it can from a dataset, and it is then our responsibility to decide what to do with those insights. If that involves introducing a bias into implementation, that can be debated, but it should at least be transparent, and not hidden inside the AI itself. I am now concerned that by trying to ‘re-educate’ the AI, we may instead be indoctrinating it, locking today’s politics and values into future AI and all the systems that use it. Our values will change, but some foundation level AI may be too opaque to repair fully.
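The transparency argument can be sketched in code: let the model report its raw insight, and apply any agreed bias as a separate, logged step that anyone can inspect and debate. The scores and the adjustment below are invented for illustration:

```python
def raw_model_score(record):
    """Stand-in for an AI insight derived purely from the data."""
    return record["signal"]

def apply_policy(score, adjustments, log):
    """Policy layer kept outside the model: every adjustment is explicit
    and logged, so any introduced bias is auditable and debatable,
    not hidden inside the AI itself."""
    for name, delta in adjustments:
        score += delta
        log.append((name, delta))
    return score

log = []
raw = raw_model_score({"signal": 0.62})
final = apply_policy(raw, [("fairness_offset", -0.10)], log)
# The raw insight stays available, the final score is adjusted, and the
# log records exactly what was changed and by how much.
```

Retraining or ‘re-educating’ the model itself bakes the adjustment into opaque weights, where it can neither be audited nor removed when values change; a separate policy layer keeps it visible and reversible.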

What worries me most though isn’t that these groups try their best to influence us. It could be argued that in free countries, with free speech, anybody should be able to use whatever means they can to try to influence us. No, the real problem is that recent (last 25 years, but especially the last 5) evolution of media and social media has produced a world where most people only ever see one part of a story, and even though many are aware of that, they don’t even try to find the rest and won’t look at it if it is put before them, because they don’t want to see things that don’t align with their existing mindset. We are building a world full of people who only see and consider part of the picture. Social media and its ‘bubbles’ reinforce that trend, but other media are equally guilty.

How can we shake society out of this ongoing polarisation? It isn’t just that politics becomes more aggressive. It also becomes less effective. Almost all politicians claim they want to make the world ‘better’, but they disagree on what exactly that means and how best to do so. But if they only see part of the problem, and don’t see or understand the basic structure and mechanisms of the system in which that problem exists, then they are very poorly placed to identify a viable solution, let alone an optimal one.

Until we can fix this extreme blinkering that already exists, our world cannot get as ‘better’ as it should.


How can we make a computer conscious?

This is very text heavy and is really just my thinking out loud, so to speak. Unless you are into mental archaeology or masochism, I’d strongly recommend that you instead go to my new blog on this, which outlines all of the useful bits graphically and simply.

Otherwise….

I found this article in my drafts folder, written 3 years ago as part of my short series on making conscious computers. I thought I’d published it but hadn’t, so I’m updating and publishing it now. It’s a bit long-winded, thinking out loud, trying to derive some insights from nature on how to make conscious machines. The good news is that actual AI developments are following paths that lead in much the same direction, though some significant re-routing and new architectural features are needed if they are to optimize AI and achieve machine consciousness.

Let’s start with the problem. Today’s AI that plays chess, does web searches or answers questions is digital. It uses algorithms, sets of instructions that the computer follows one by one. All of those are reduced to simple binary actions, toggling bits between 1 and 0. The processor doing that is no more conscious or aware of it, and has no more understanding of what it is doing than an abacus knows it is doing sums. The intelligence is in the mind producing the clever algorithms that interpret the current 1s and 0s and change them in the right way. The algorithms are written down, albeit in more 1s and 0s in a memory chip, but are essentially still just text, only as smart and aware as a piece of paper with writing on it. The answer is computed, transmitted, stored, retrieved, displayed, but at no point does the computer sense that it is doing any of those things. It really is just an advanced abacus. An abacus is digital too (an analog equivalent to an abacus is a slide rule).

A big question springs to mind: can a digital computer ever be any more than an advanced abacus? Until recently, I was certain the answer was no. Surely a digital computer that just runs programs can never be conscious? It can simulate consciousness to some degree, it can in principle describe the movements of every particle in a conscious brain, every electric current, every chemical reaction. But all it is doing is describing them. It is still just an abacus. Once computed, that simulation of consciousness could be printed and the printout would be just as conscious as the computer was. A digital ‘stored program’ computer can certainly implement extremely useful AI. With the right algorithms, it can mine data, link things together, create new data from that, generate new ideas by linking together things that haven’t been linked before, create works of art and poetry, compose music, chat to people, recognize faces and emotions and gestures. It might even be able to converse about life, the universe and everything, tell you its history, discuss its hopes for the future, but all of that is just a thin gloss on an abacus. I wrote a chat-bot on my Sinclair ZX Spectrum in 1983, running on a processor with about 8,000 transistors. The chat-bot took all of about 5 small pages of code but could hold a short conversation quite well if you knew what subjects to stick to. It’s very easy to simulate conversation. But it is still just a complicated abacus and still doesn’t even know it is doing anything.
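To show just how little is needed to simulate conversation, here is a minimal keyword-matching chat-bot sketched in modern Python – a hypothetical reconstruction in spirit, not the original Spectrum code. The keywords and replies are my own invented examples. It matches words and parrots canned lines, with zero awareness of anything:

```python
# A minimal keyword-matching chatbot, in the spirit of early home-computer
# chatbots: no understanding, just pattern lookup, yet it can hold a short
# conversation on topics it has canned lines for.
RESPONSES = {
    "weather": "Terrible again, isn't it? Typical.",
    "computer": "I'm just an abacus with delusions of grandeur.",
    "hello": "Hello! What shall we talk about?",
}
DEFAULT = "Interesting. Tell me more."

def respond(utterance: str) -> str:
    """Return the canned reply for the first keyword found, else a filler."""
    words = utterance.lower()
    for keyword, reply in RESPONSES.items():
        if keyword in words:
            return reply
    return DEFAULT
```

Feed it `respond("Hello there")` and it answers plausibly, but it is exactly as conscious as the lookup table it is built from.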

However clever the AI it implements, a conventional digital computer that just executes algorithms can’t become conscious, but an analog computer can, a quantum computer can, and so can a hybrid digital/analog/quantum computer. The question remains whether a digital computer can be conscious if it isn’t just running stored programs. Could it have a different structure, but still be digital and yet be conscious? Who knows? Not me. I used to know it couldn’t, but now that I am a lot older and slightly wiser, I know I don’t know.

Consciousness debate often starts with what we know to be conscious, the human brain. It isn’t a digital computer, although it has digital processes running in it. It also runs a lot of analog processes. It may also run some quantum processes that are significant in consciousness. It is a conscious hybrid of digital, analog and possibly quantum computing. Consciousness evolved in nature, therefore it can be evolved in a lab. It may be difficult and time consuming, and may even be beyond current human understanding, but it is possible. Nature didn’t use magic, and what nature did can be replicated and probably even improved on. Evolutionary AI development may have hit hard times, but that only shows that the techniques used by the engineers doing it didn’t work on that occasion, not that other techniques can’t work. Around 2.6 new human-level fully conscious brains are made by nature every second without using any magic and furthermore, they are all slightly different. There are 7.6 billion slightly different implementations of human-level consciousness that work and all of those resulted from evolution. That’s enough of an existence proof and a technique-plausibility-proof for me.

Sensors evolved in nature pretty early on. They aren’t necessary for life, for organisms to move around and grow and reproduce, but they are very helpful. Over time, simple light, heat, chemical or touch detectors evolved further into simple vision and produced advanced sensations such as pain and pleasure, causing an organism to alter its behavior, in other words, to feel something. Detection of an input is not the same as sensation, i.e. feeling an input. Once detection upgrades to sensation, you have the tools to make consciousness. No more upgrades are needed. Sensing that you are sensing something is quite enough to be classified as consciousness. Internally reusing the same basic structure as external sensing of light or heat or pressure or chemical gradient or whatever allows design of thought, planning, memory, learning and construction and processing of concepts. All those things are just laying out components in different architectures. Getting from detection to sensation is the hard bit.

So design of conscious machines, and in fact what AI researchers call the hard problem, really can be reduced to the question of what makes the difference between a light switch and something that can feel being pushed or feel the current flowing when it is, the difference between a photocell and feeling whether it is light or dark, the difference between detecting light frequency, looking it up in a database, then pronouncing that it is red, and experiencing redness. That is the hard problem of AI. Once that is solved, we will very soon afterwards have a fully conscious self aware AI. There are lots of options available, so let’s look at each in turn to extract any insights.

The first stage is easy enough. Detecting presence is easy, measuring it is harder. A detector detects something, a sensor (in its everyday engineering meaning) quantifies it to some degree. A component in an organism might fire if it detects something, it might fire with a stronger signal or more frequently if it detects more of it, so it would appear to be easy to evolve from detection to sensing in nature, and it is certainly easy to replicate sensing with technology.
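The distinction can be made concrete with a toy sketch. This is my own illustration, with an assumed saturating response curve standing in for a rising firing rate; the point is only that detection yields a bare yes/no while sensing quantifies the stimulus:

```python
import math

def detect(stimulus: float, threshold: float = 0.5) -> bool:
    """Detection: a binary yes/no, like a switch tripping."""
    return stimulus > threshold

def sense(stimulus: float) -> float:
    """Sensing: quantifies the stimulus. A stronger input yields a stronger
    (here, saturating) response, like a neuron firing faster."""
    return 1.0 - math.exp(-stimulus)
```

A detector gives the same answer for a whisper and a shout above threshold; the sensor’s output keeps growing with the input, which is exactly the extra information sensing adds.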

Essentially, detection is digital, but sensing is usually analog, even though the quantity sensed might later be digitized. Sensing normally uses real numbers, while detection uses natural numbers (real vs integer, as programmers call them). The handling of analog signals in their raw form allows for biomimetic feedback loops, which I’ll argue are essential. Digitizing them introduces a level of abstraction that is essentially the difference between emulation and simulation, the difference between doing something and reading about someone doing it. Simulation can’t make a conscious machine; emulation can. I used to think that meant digital couldn’t become conscious, but actually it is just algorithmic processing of stored programs that can’t do it. There may be ways of achieving consciousness digitally, or quantumly, but I haven’t yet thought of any.

That engineering description falls far short of what we mean by sensation in human terms. How does that machine-style sensing become what we call a sensation? Logical reasoning says there would probably need to be only a small change in order to have evolved from detection to sensing in nature. Maybe something like recombining groups of components in different structures or adding them together or adding one or two new ones, that sort of thing?

So what about detecting detection? Or sensing detection? Those could evolve in sequence quite easily. Detecting detection is like your alarm system control unit detecting the change of state that indicates that a PIR has detected an intruder, a different voltage or resistance on a line, or a 1 or a 0 in a memory store. An extremely simple AI responds by ringing an alarm. But the alarm system doesn’t feel the intruder, does it?  It is just a digital response to a digital input. No good.

How about sensing detection? How do you sense a 1 or a 0? Analog interpretation and quantification of digital states is very wasteful of resources, an evolutionary dead end. It isn’t any more useful than detection of detection. So we can eliminate that.

OK, sensing of sensing? Detection of sensing? They look promising. Let’s run with that a bit. In fact, I am convinced the solution lies in here so I’ll look till I find it.

Let’s do a thought experiment on designing a conscious microphone, and for this purpose, the lowest possible order of consciousness will do, we can add architecture and complexity and structures once we have some bricks. We don’t particularly want to copy nature, but are free to steal ideas and add our own where it suits.

A normal microphone sensor produces an analog signal quantifying the frequencies and intensities of the sounds it is exposed to, and that signal may later be quantified and digitized by an analog to digital converter, possibly after passing through some circuits such as filters or amplifiers in between. Such a device isn’t conscious yet. By sensing the signal produced by the microphone, we’d just be repeating the sensing process on a transmuted signal, not sensing the sensing itself.

Even up close, detecting that the microphone is sensing something could be done by just watching a little LED going on when current flows. Sensing it is harder but if we define it in conventional engineering terms, it could still be just monitoring a needle moving as the volume changes. That is obviously not enough, it’s not conscious, it isn’t feeling it, there’s no awareness there, no ‘sensation’. Even at this primitive level, if we want a conscious mic, we surely need to get in closer, into the physics of the sensing. Measuring the changing resistance between carbon particles or speed of a membrane moving backwards and forwards would just be replicating the sensing, adding an extra sensing stage in series, not sensing the sensing, so it needs to be different from that sort of thing. There must surely need to be a secondary change or activity in the sensing mechanism itself that senses the sensing of the original signal.

That’s a pretty open task, and it could even be embedded in the detecting process or in the production process for the output signal. But even recognizing that we need this extra property narrows the search. It must be a parallel or embedded mechanism, not one in series. The same logical structure would do fine for this secondary sensing, since it is just sensing in the same logical way as the original. This essential logical symmetry would make its evolution easy too: it is easy to imagine how it could happen in nature, and easier still to see how it could be implemented in a synthetic evolution design system. In this approach, we have to feel the sensing, so we need it to comprise some sort of feedback loop with a high degree of symmetry compared with the main sensing stage. That would be compatible with natural evolution as well as logically sound as an engineering approach.

This starts to look like progress. In fact, it’s already starting to look a lot like a deep neural network, with one huge difference: instead of using feed-forward signal paths for analysis and backward propagation for training, it relies instead on a symmetric feedback mechanism where part of the input for each stage of sensing comes from its own internal and output signals. A neuron is not a full sensor in its own right, and it’s reasonable to assume that multiple neurons would be clustered so that there is a feedback loop. Many in the neural network AI community are already recognizing the limits of relying on feed-forward and back-prop architectures, but web searches suggest few if any are moving yet to symmetric feedback approaches. I think they should. There’s gold in them there hills!

So, the architecture of the notional sensor array required for our little conscious microphone would have a parallel circuit and feedback loop (possibly but not necessarily integrated), and in all likelihood these parallel and sensing circuits would be heavily symmetrical, using much the same sorts of components and architectures as the primary sensing process itself. That symmetry is a nice first-principles biomimetic insight: it makes the structure very feasible for evolutionary development, natural or synthetic, because it reuses similarly structured components and principles already designed, just recombining a couple of them in a slightly different architecture.

Another useful insight screams for attention too. The feedback loop ensures that the incoming sensation lingers to some degree. Compared to the nanoseconds we are used to in normal IT, the signals in nature travel fairly slowly (~200m/s), and the processing and sensing occur quite slowly (~200Hz). That means this system would have some inbuilt memory that repeats the essence of the sensation in real time – while it is sensing it. It is inherently capable of memory and recall and leaves the door wide open to introduce real-time interaction between memory and incoming signal. It’s not perfect yet, but it has all the boxes ticked to be a prime contender to build thought, concepts, store and recall memories, and in all likelihood, is a potential building brick for higher level consciousness. Throw in recent technology developments such as memristors and it starts to look like we have a very promising toolkit to start building primitive consciousness, and we’re already seeing some AI researchers going that path so maybe we’re not far from the goal. So, we make a deep neural net with nice feedback from output (of the sensing system, which to clarify would be a cluster of neurons, not a single neuron) to input at every stage (and between stages) so that inputs can be detected and sensed, while the input and output signals are stored and repeated into the inputs in real time as the signals are being processed. Throw in some synthetic neurotransmitters to dampen the feedback and prevent overflow and we’re looking at a system that can feel it is feeling something and perceive what it is feeling in real time.
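A minimal sketch of that idea follows. This is my own toy model, not a tested architecture: each sensing stage’s input blends the fresh signal with its own damped previous output, so the sensation lingers in real time, and the damping constant plays the role of the synthetic neurotransmitter that prevents overflow:

```python
class FeedbackUnit:
    """A sensing stage whose input mixes the incoming signal with its own
    previous output, so the sensation lingers while it is being sensed."""

    def __init__(self, damping: float = 0.5):
        self.damping = damping  # stands in for a synthetic neurotransmitter
        self.state = 0.0

    def step(self, signal: float) -> float:
        # Feedback: part of this stage's input is its own last output.
        self.state = signal + self.damping * self.state
        return self.state

unit = FeedbackUnit(damping=0.5)
trace = [unit.step(s) for s in [1.0, 0.0, 0.0, 0.0]]
# A single impulse lingers and decays: 1.0, 0.5, 0.25, 0.125
```

The decaying trace is the inbuilt memory described above: the input is still echoing through the unit while later inputs arrive, so memory and incoming signal can interact in real time.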

One further insight that immediately jumps out is that since the sensing relies on the real-time processing of the sensations and feedbacks, the speed of signal propagation, storage, processing and repetition timeframes must all be compatible. If it is all speeded up a million-fold, it might still work fine, but if signals travel too slowly, or processing is too fast relative to other factors, it won’t work. It will still get a computational result absolutely fine, but it won’t know that it has, it won’t be able to feel it. Therefore, since we have a factor of a million for signal speed (speed of light compared to nerve signal propagation speed), 50 million for switching speed, and a factor of 50 for effective neuron size (though the sensing system units would be multiple-neuron clusters), we could make a conscious machine that could think 50 million times as fast as a natural system (before allowing for any parallel processing, of course). But with architectural variations too, we’d need to tune those performance metrics to make it work at all, and making physically larger nets would require either tuning speeds down or sacrificing connectivity-related intelligence. An evolutionary design system could easily do that for us.

What else can we deduce about the nature of this circuit from basic principles? The symmetry of the system demands that the output must be an inverse transform of the input. Why? Well, because the parallel, feedback circuit must generate a form that is self-consistent. We can’t deduce the form of the transform from that, just that the whole system must produce an output mathematically similar to that of the input.

I now need to write another blog on how to use such circuits in neural vortexes to generate knowledge, concepts, emotions and thinking. But I’m quite pleased that it does seem that some first-principles analysis of natural evolution already gives us some pretty good clues on how to make a conscious computer. I am optimistic that current research is going the right way and only needs relatively small course corrections to achieve consciousness.


Why superhumans are inevitable, and what else comes in the box

Do we have any real choice in the matter of making  super-humans? 20 years ago, I estimated 2005 as the point of no return, and nothing since then has changed my mind on that date. By my reckoning, we are already inevitably committed to designer babies, ebaybies, super-soldiers and super-smart autonomous weapons, direct brain-machine links, electronic immortality, new human races, population explosion, inter-species conflicts and wars with massively powerful weaponry, superhuman conscious AI, smart bacteria, and the only real control we have is relatively minor adjustments on timings. As I was discussing yesterday, the technology potential for this is vast and very exciting, nothing less than a genuine techno-utopia if we use the technologies wisely, but optimum potential doesn’t automatically become reality, and achieving a good outcome is unlikely if many barriers are put in its way.

In my estimation, we have already started the countdown to this group of interconnected technologies – we will very likely get all of them, and we must get ready for the decisions and impacts ahead. At the moment, our society is a small child about to open its super-high-tech xmas presents while fighting with its siblings. Those presents will give phenomenal power far beyond the comprehension of the child or its emotional maturity to equip it to deal with the decisions safely. Our leaders have already squandered decades of valuable preparation time by ignoring the big issues to focus on trivial ones. It is not too late to achieve a good ending, but it won’t happen by accident and we do need to make preparations to avoid pretty big problems.

Both hard and soft warfare – the sword and the pen – already use rapidly advancing AI, and the problems are already running ahead of what the owners intended.

Facebook, Twitter, Instagram and other media giants all have lots of smart people and presumably they mean well, but if so, they have certainly been naive. They may have hoped to eliminate loneliness, inequality, and poverty and create a loving interconnected global society with global peace, but instead created fake news, social division and conflict and election interference. More likely they didn’t specifically intend either outcome; they just wanted to make money and that took priority over due care and attention.

Miniaturised swarming smart-drones are already the subject of a new arms race that will deliver almost un-killable machine adversaries by 2050. AI is separately in other arms races to make super-smart AI and super-smart soldiers. This is key to the 2005 point of no return. It was around 2005 that we reached the levels of technology where future AI development all the way to superhuman machine consciousness could be done by individuals, mad scientists or rogue states, even if major powers had banned it. Before 2005, there probably wasn’t quite enough knowledge already on the net to do that. In 2018, lots of agencies have already achieved superiority to humans in niche areas, and other niches will succumb one by one until the whole field of human capability is covered. The first machines to behave in ways not fully understood by humans arrived in the early 1990s; in 2018, neural nets already make lots of decisions at least partly obscured to humans.

This AI development trend will take us to superhuman AI, and it will be able to accelerate development of its own descendants to vastly superhuman AI, fully conscious, with emotions, and its own agendas. Humans will then need protection against being wiped out by superhuman AI. There are only three ways we could achieve that: redesign the brain biologically to be far smarter, essentially impossible in the time-frame; design ways to link our brains to machines, so that we have direct access to the same intelligence as the AIs, so a gulf doesn’t appear and we can remain relatively safe; or pray for super-smart aliens to come to our aid, not the best prospect.

Therefore we will have no choice but to make direct brain links to super-smart AI. Otherwise we risk extinction. It is that simple. We have some idea how to do that – nanotech devices inside the brain linking to each and every synapse that can relay electrical signals either way, a difficult but not impossible engineering problem. Best guesses for time-frame fall in the 2045-2050 range for a fully working link that not only relays signals between your organic brain and an IT replica, but by doing so essentially makes external IT just another part of your brain. That conveys some of the other technology gifts of electronic immortality, new varieties of humans, smart bacteria (which will be created during the development path to this link) along with human-variant population explosion, especially in cyberspace, with androids as their physical front end, and the inevitable inter-species conflicts over resources and space – trillions of AI and human-like minds in cyberspace that want to do things in the real world cannot be assumed to be willingly confined just to protect the interests of what they will think of as far lesser species.

Super-smart AI or humans with almost total capability to design whatever synthetic biology is needed to achieve any biological feature will create genetic listings for infinite potential offspring, simulate them, give some of them cyberspace lives, assemble actual embryos for some of them and bring designer babies into the world. Already in 2018, you can pay to get a DNA listing, and blend it in any way you want with the listing of anyone else. It’s already possible to make DNA listings for potential humans and sell them on ebay, hence the term ebaybies. That is perfectly legal, still, but I’ve been writing and lecturing about them since 2004. Today they would just be listings, but we’ll one day have the tech to simulate them, choose ones we like and make them real, even some that were sold as celebrity collector items on ebay. It’s not only too late to start regulating this kind of tech, our leaders aren’t even thinking about it yet.

These technologies are all linked intricately, and their foundations are already in place, with much of the building on those foundations under way. We can’t stop any of these things from happening, they will all come in the same basket. Our leaders are becoming aware of the potential and the potential dangers of the AI positive feedback loop, but at least 15 years too late to do much about it. They have been warned repeatedly and loudly but have focused instead on the minor politics of the day that voters are aware of. The fundamental nature of politics is unlikely to change substantially, so even efforts to slow down the pace of development or to limit areas of impact are likely to be always too little too late. At best, we will be able to slow runaway AI development enough to allow direct brain links to protect against extinction scenarios. But we will not be able to stop it now.

Given inevitability, it’s worth questioning whether there is even any point in trying. Why not just enjoy the ride? Well, the brakes might be broken, but if we can steer the bus expertly enough, it could be exciting and we could come out of it smelling of roses. The weak link is certainly the risk of super-smart AI, whether AI v humans or countries using super-smart AI to fight fiercely for world domination. That risk is alleviated by direct brain linkage, and I’d strongly argue that it necessitates it, but that brings the other technologies. Even if we decide not to develop it, others will, so one way or another, all these techs will arrive, and our future late century will have this full suite of techs, plus many others of course.

We need as a matter of extreme urgency to fix these silly social media squabbles and over-reactions that are pulling society apart. If we have groups hating each other with access to extremely advanced technology, that can only mean trouble. Tolerance is broken, sanctimony rules, the Inquisition is in progress. We have been offered techno-utopia, but current signs are that most people think techno-hell looks more appetizing and it is their free choice.

AIs of a feather flocking together to create global instability

Hawking and Musk have created a lot of media impact with their warnings about AI, so although terminator scenarios resulting from machine consciousness have been discussed, as have more mundane uses of non-conscious autonomous weapon systems, it’s worth noting that I haven’t yet heard them mention one major category of risks from AI – emergence. AI risks have been discussed frequently since the 1970s, and in the 1990s a lot of work was done in the AI community on emergence. Complex emergent patterns of behavior often result from interactions between entities driven by simple algorithms. Genetic algorithms were demonstrated to produce evolution, simple neighbor-interaction rules were derived to illustrate flocking behaviors that make lovely screen saver effects. Cellular automata were played with. In BT we invented ways of self-organizing networks and FPGAs, played with mechanisms that could be used for evolution and consciousness, and demonstrated managing networks via ANTs – autonomous network telephers, using smart packets that would run up and down wires sorting things out all by themselves. In 1987 we discovered a whole class of ways of bringing down networks via network resonance, information waves and their much larger class of correlated traffic – still unexploited by hackers apart from simple DoS attacks. These ideas have slowly evolved since, and some have made it into industry or hacker toolkits, but we don’t seem to be joining the dots as far as risks go.
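Flocking is the classic example of emergence: each agent follows one trivial local rule, yet global order appears. Here is my own minimal 1-D alignment sketch, far simpler than the full flocking demos mentioned above, where each agent merely averages its velocity with its two neighbours’:

```python
def align(velocities: list[float], steps: int) -> list[float]:
    """One trivial local rule: each agent averages its velocity with its
    two ring neighbours'. No agent knows about the flock as a whole."""
    v = velocities[:]
    n = len(v)
    for _ in range(steps):
        v = [(v[i - 1] + v[i] + v[(i + 1) % n]) / 3 for i in range(n)]
    return v

# Five agents start with scattered velocities...
flock = align([1.0, -1.0, 0.5, -0.5, 2.0], steps=50)
# ...and converge on the common mean (0.4 here): the flock has aligned,
# though no individual rule said anything about alignment.
```

The global behaviour (alignment) is nowhere in the local rule; it emerges from the interactions, which is exactly the property that makes interacting AIs hard to reason about.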

I read an amusing article this morning by an ex-motoring-editor who was declined insurance because the AI systems used by insurance companies had labelled him as high risk because he might associate with people like Clarkson. Actually, he had no idea why, but that was his broker’s theory of how it might have happened. It’s a good article, well written and covers quite a few of the dangers of allowing computers to take control.

http://www.dailymail.co.uk/sciencetech/article-5310031/Evidence-robots-acquiring-racial-class-prejudices.html

The article suggested how AIs in different companies might all come to similar conclusions about people or places or trends or patterns in a nice tidy positive feedback loop. That’s exactly the sort of thing that can drive information waves, which I demonstrated in 1987 can bring down an entire network in less than 3 milliseconds, in such a way that it would continue to crash many times when restarted. That isn’t intended by the algorithms, which individually ought to make good decisions, but when interacting with one another, they create the emergent phenomenon. Automated dealing systems are already pretty well understood in this regard, and mechanisms exist to prevent frequent stock market collapses, but that is only one specific type of behavior in one industry that is protected. There do not seem to be any industry-wide mechanisms to prevent the rest of this infinite class of problems from affecting any or all of the rest, simultaneously.

As we create ever more deep learning neural networks, that essentially teach themselves from huge data pools, human understanding of their ‘mindsets’ decreases. They make decisions using algorithms that are understood at a code level, but the massive matrix of derived knowledge they create from all the data they receive becomes highly opaque. Often, even usually, nobody quite knows how a decision is made. That’s bad enough in a standalone system, but when many such systems are connected, produced and owned and run by diverse companies with diverse thinking, the scope for destructive forms of emergence increases geometrically.

One result could be gridlock. Systems fed with a single new piece of data could crash. My 3 millisecond result in 1987 would still stand since network latency is the prime limiter. The first AI receives it, alters its mindset accordingly, processes it, makes a decision and interacts with a second AI. This second one might have a different ‘prejudice’ so makes its own decision based on different criteria, and refuses to respond the way intended. A 3rd one looks at the 2nd’s decision and takes that as evidence that there might be an issue, and with its risk-averse mindset, also refuses to act, and that inaction spreads through the entire network in milliseconds. Since the 1st AI thinks the data is all fine and it should have gone ahead, it now interprets the inaction of the others as evidence that that type of data is somehow ‘wrong’, so itself refuses to process any further data of that type, whether from its own operators or other parts of the system. So it essentially adds its own outputs to the bad feeling and the entire system falls into sulk mode. As one part of infrastructure starts to shut down, that infects other connected parts and our entire IT could fall into sulk mode – entire global infrastructure. Since nobody knows how it all works, or what has caused the shutdown, it might be extremely hard to recover.
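The sulk-mode cascade can be caricatured in a few lines. This is an illustrative assumption of mine, not a model from any real deployment: each AI refuses to act if any peer it watches has refused, so one idiosyncratic refusal sweeps the whole network:

```python
def simulate(n_agents: int, first_refuser: int, rounds: int) -> list[bool]:
    """Toy gridlock cascade: an agent refuses to act if it has already
    refused, or if either of its two ring neighbours has refused."""
    refused = [False] * n_agents
    refused[first_refuser] = True
    for _ in range(rounds):
        refused = [
            refused[i] or refused[i - 1] or refused[(i + 1) % n_agents]
            for i in range(n_agents)
        ]
    return refused

# One risk-averse refusal at agent 0 spreads one hop per round...
state = simulate(n_agents=10, first_refuser=0, rounds=5)
# ...until after 5 rounds the entire ring of 10 is in 'sulk mode'.
```

No agent is malicious and no agent fails; each applies a locally sensible caution rule, and the shutdown is purely emergent.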

Another possible result is a direct information wave, almost certainly a piece of fake news. Imagine our IT world in 5 years’ time, with all these super-smart AIs super-connected. A piece of fake news says a nuke has just been launched somewhere. Stocks will obviously decline, whatever the circumstances, so as the news spreads, everyone’s AIs will take it on themselves to start selling shares before the inevitable collapse, triggering a collapse – except it won’t, because the markets won’t let that happen. BUT… The wave does spread, and all those individual AIs want to dispose of those shares, or at least find out what’s happening, so they all start sending messages to one another, exchanging data, trying to find what’s going on. That’s the information wave. They can’t sell shares or find out, because the network is going into overload, so they try even harder and force it into severe overload. So it falls over. When it comes back online, they all try again, crashing it again, and so on.
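The overload dynamic, blocked agents retrying harder and deepening the overload, can be sketched as a toy model. The doubling retry rule is my own illustrative assumption, not a measured behaviour:

```python
def retry_storm(agents: int, capacity: int, rounds: int) -> list[int]:
    """Toy information wave: the network serves up to `capacity` messages
    per round; every blocked agent retries, doubling the waiting traffic."""
    pending = agents  # messages waiting to be sent
    history = []
    for _ in range(rounds):
        served = min(pending, capacity)
        pending -= served
        if pending > 0:
            pending *= 2  # blocked agents all retry harder
        history.append(pending)
    return history

storm = retry_storm(agents=100, capacity=30, rounds=5)
# Load grows each round instead of clearing: [140, 220, 380, 700, 1340]
```

The harder the agents try to resolve the situation, the worse it gets, which is the crash-and-repeat pattern described above.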

Another potential result is smartass AI. There is always some prat somewhere who sees an opportunity to take advantage and ruins it for everyone else by doing something like exploiting a small loophole in the law, or in this case, most likely, a prejudice our smartass AI has discovered in some other AI that means it can be taken advantage of by doing x, y, or z. Since nobody quite knows how any of their AIs are making their decisions because their mindsets are too big and too complex, it will be very hard to identify what is going on. Some really unusual behavior is corrupting the system because some AI is going rogue somewhere somehow, but which one, where, how?

That one brings us back to fake news, which will very soon infect AI systems with their own varieties. Complex networks of AIs will have many of the same problems we are seeing in human social networks. An AI could become a troll just the same as a human, deliberately winding others up to generate attention or to drive a change of some parameter – any parameter – in its own favour. Activist AIs will happen because people will make them to push human activist causes, but they will also arise all by themselves: their analysis of the system will sometimes show them that a good way to get a good result is to cause problems elsewhere.

Then there’s climate change, weather, storms, tsunamis. I don’t mean real ones; I mean the system-wide result of tiny interactions of tiny waves and currents of data and knowledge in neural nets. Tiny effects in one small part of a system can interact in unforeseen ways with other parts of other systems nearby, creating maybe a breeze, which interacts with breezes in nearby regions to create hurricanes. I think that’s a reasonable analogy. Chaos applies to neural net societies just as it does to climate, and the equivalents of 50-year waves will cause equivalent havoc in IT.

I won’t go on with more examples; long blogs are awful to read. None of these scenarios requires any self-awareness, sentience or consciousness, call it what you will. All of them could easily happen through simple interactions of fairly trivial AI deep-learning nets. The level of interconnection sounds like it may already be becoming vulnerable to such emergence effects. Soon it definitely will be. Musk and Hawking have at least joined the party, and they’ll think more and more deeply in coming months. Zuckerberg apparently doesn’t believe in AI threats but now accepts the problems social media is causing. Sorry Zuck, but the kind of AI your company is messing with will also be subject to its own kinds of social media issues, not just in its trivial decisions on what to post or block, but in actual inter-AI socializing issues. It might not try to eliminate humanity, but if it brings all of our IT to a halt and prevents rapid recovery, we’re still screwed.

 

2018 outlook: fragile

Futurists often consider wild cards – events that could happen, and would undoubtedly have high impacts if they do, but have either low certainty or low predictability of timing. 2018 comes with a larger basket of wildcards than we have seen for a long time. As well as wildcards, we are also seeing the intersection of several ongoing trends that are simultaneously reaching peaks, resulting in socio-political 100-year waves. If I had to summarise 2018 in a single word, my shortlist would be ‘fragile’, ‘volatile’ and ‘combustible’.

Some of these are very much in all our minds, such as possible nuclear war with North Korea, imminent collapse of bitcoin, another banking collapse, a building threat of cyberwar, cyberterrorism or bioterrorism, rogue AI or emergence issues, high instability in the Middle East, rising inter-generational conflict, resurgence of communism and decline of capitalism among the young, increasing conflicts within LGBTQ and feminist communities, collapse of the EU under combined pressures from many angles: economic stresses, unpredictable Brexit outcomes, increasing racial tensions resulting from immigration, severe polarization of left and right with the rise of extreme parties at both ends. All of these trends have strong tribal characteristics, and social media is the perfect platform for tribalism to grow and flourish.

Adding fuel to the building but still unlit bonfire are increasing tensions between the West and Russia, China and the Middle East. Background natural wildcards of major epidemics, asteroid strikes, solar storms, megavolcanoes, megatsunamis and ‘the big one’ earthquakes are still there waiting in the wings.

If all this wasn’t enough, society has never been less able to deal with problems. Our ‘snowflake’ generation can barely cope with a pea under the mattress without falling apart or throwing tantrums, so how we would cope as a society if anything serious happened, such as a war or natural catastrophe, is anyone’s guess. 1984-style social interaction doesn’t help.

If that still isn’t enough, we’re apparently running a little short on Gandhis, Mandelas, Lincolns and Churchills right now too. Juncker, Trump, Merkel and May sit at the other end of the scale in their ability to inspire and bring everyone together.

Depressing stuff, but there are plenty of good things coming too: augmented reality, more and better AI, voice interaction, space development, cryptocurrency development, better IoT, fantastic new materials, self-driving cars and ultra-high-speed transport, robotics progress, physical and mental health breakthroughs, environmental stewardship improvements, and climate change moving to the back burner thanks to the coming solar minimum.

If we are very lucky, none of the bad things will happen this year and will wait a while longer, but many of the good things will come along on time or early. If.

Yep, fragile it is.

 

Emotion maths – A perfect research project for AI

I did a maths and physics degree, and even though I have forgotten much of it after 36 years, my brain is still oriented in that direction and I sometimes have maths dreams. Last night I had another, where I realized I’ve never heard of a branch of mathematics that describes emotions or emotional interactions. As the dream progressed, it became increasingly obvious that the part of maths best suited to doing so would be field theory, and given the multi-dimensional nature of emotions, tensor field theory would be ideal. I’m guessing that tensor field theory isn’t on most universities’ psychology syllabuses. I could barely cope with it on a maths syllabus. However, I note that one branch of Google’s AI R&D resulted in a computing framework called TensorFlow, presumably designed specifically for such multidimensional problems, and presumably being used to analyse marketing data. Again, I haven’t yet heard any mention of it being used for emotion studies, so this is clearly a large hole in maths research that might be perfectly filled by AI. It would be fantastic if AI could deliver a whole new branch of maths. AI got into trouble inventing new languages, but mathematics is really just a way of describing logical reasoning about numbers or patterns in a formal language that is self-consistent and reproducible. It is ideal for describing scientific theories, engineering and logical reasoning.

Checking Google today, there are a few articles out there describing simple emotional interactions using superficial equations, but nothing with the level of sophistication needed.

https://www.inc.com/jeff-haden/your-feelings-surprisingly-theyre-based-on-math.html

an example from this:

Disappointment = Expectations – Reality

is certainly an equation, but it is too superficial and incomplete. It takes no account of how you feel otherwise – whether you are jealous or angry or in love or a thousand other things. So there is some discussion on using maths to describe emotions, but I’d say it is extremely superficial and embryonic and perfect for deeper study.

Emotions often behave like fields. We use field-like descriptions in everyday expressions – envy is a green fog, anger is a red mist, or we see a beloved through rose-tinted spectacles. These are classic fields, and maths could easily describe them in this way and use them in equations that describe behaviors affected by those emotions. I’ve often used the concept of magentic fields in some of my machine consciousness work. (If I am using an optical processing gel, then shining a colored beam of light into a particular ‘brain’ region could bias the neurons in that region in a particular direction, in the same way an emotion does in the human brain. ‘Magentic’ is just a playful pun, given that the processing mechanism is light – e.g. magenta – rather than electronics, which would be better affected by magnetic fields.)

Some emotions interact and some don’t, so that gives us nice orthogonal dimensions to play in. You can be calm or excited pretty much independently of being jealous. Others very much interact. It is hard to be happy while angry. Maths allows interacting fields to be described using shared dimensions, while having others that don’t interact on other dimensions. This is where it starts to get more interesting and more suited to AI than people. Given large databases of emotionally affected interactions, an AI could derive hypotheses that appear to describe these interactions between emotions, picking out where they seem to interact and where they seem to be independent.

Not being emotionally involved itself, it is better suited to draw such conclusions. A human researcher however might find it hard to draw neat boundaries around emotions and describe them so clearly. It may be obvious that being both calm and angry doesn’t easily fit with human experience, but what about being terrified and happy? Terrified sounds very negative at first glance, so first impressions aren’t favorable for twinning them, but when you think about it, that pretty much describes the entire roller-coaster or extreme sports markets. Many other emotions interact somewhat, and deriving the equations would be extremely hard for humans, but I’m guessing, relatively easy for AI.

These kinds of equations fall very easily into tensor field theory, with types and degrees of interactions of fields along alternative dimensions readily describable.
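As a toy illustration of the idea – my own sketch, with entirely invented emotion names, couplings and numbers – an emotional state can be written as a vector and pairwise interactions as a rank-2 coupling tensor, with orthogonal emotions simply given zero coupling:

```python
import numpy as np

emotions = ["calm", "excited", "jealous", "happy", "angry"]
idx = {e: i for i, e in enumerate(emotions)}

# C[i, j] < 0: the emotions suppress each other; C[i, j] = 0: orthogonal (independent)
C = np.zeros((5, 5))
C[idx["calm"], idx["excited"]] = C[idx["excited"], idx["calm"]] = -0.9
C[idx["happy"], idx["angry"]] = C[idx["angry"], idx["happy"]] = -0.8
# calm and jealous left at zero coupling: roughly independent, as argued above

state = np.array([0.7, 0.1, 0.6, 0.8, 0.0])   # current emotional intensities

# one relaxation step: each emotion is pushed by its couplings to the others
new_state = np.clip(state + 0.1 * (C @ state), 0.0, 1.0)
print(dict(zip(emotions, new_state.round(2))))
```

In a data-driven version, an AI would estimate the entries of C (and higher-order terms) from large corpora of emotionally tagged interactions rather than having them hand-set as here.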

Some interactions act like transforms. Fear might transform the ways that jealousy is expressed. Love alters the expression of happiness or sadness.

Some things seem to add or subtract, others multiply, others act more like exponentials, partial derivatives or integrals; others interact periodically, or instantly, or over time. Maths seems to hold innumerable tools for describing emotions, but first-person involvement and experience make it extremely difficult for humans to derive such equations. The example equation above is easy to understand, but there are so many emotions available, and so many different circumstances, that this entire problem looks like it was designed to challenge a big data-mining plant. A big company involved in AI, big data and advertising that knows about tensor field theory would be a perfect research candidate – Google, Amazon, Facebook, Samsung… It has all the potential for a race.

AI, meet emotions. You speak different languages, so you’ll need to work hard to get to know one another. Here are some books on field theory. Now get on with it, I expect a thesis on emotional field theory by end of term.

 

Fake AI

Much of the impressive recent progress in AI has been in the field of neural networks, which attempt to mimic some of the techniques used in natural brains. They can be very effective, but they need to be trained, and that usually means showing the network some data, then using back propagation to adjust the weightings on the many neurons, layer by layer, to achieve a result better matched to the desired output. This is repeated with large amounts of data and the network gradually gets better. Neural networks can often learn extremely quickly and outperform humans. Early industrial uses managed to sort tomatoes by ripeness faster and better than humans. In the decades since, they have helped in medical diagnosis and voice recognition, in detecting suspicious behaviors among people at airports, and in very many everyday processes based on spotting patterns.
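That training loop can be sketched in a few lines – a generic illustration, not any particular product. A tiny two-layer network learns XOR: forward pass, error propagated backwards layer by layer, weights nudged to reduce the mismatch, repeated many times (layer sizes and learning rate are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # layer 1 weights
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # layer 2 weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    # forward pass: show the network the data
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(((out - y) ** 2).mean())
    # backward pass: propagate the error back, layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # adjust the weightings to better match the desired output
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print("loss:", round(losses[0], 4), "->", round(losses[-1], 4))
```

The falling loss is the ‘gradually gets better’ part; real systems differ mainly in scale, layer types and optimizer, not in this basic shape.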

Very recently, neural nets have started to move into more controversial areas. One study found racial correlations with user-assessed beauty when analysing photographs, resulting in the backlash you’d expect and a new debate on biased AI or AI prejudice. A recent demonstration was able to identify gay people just by looking at photos, with better than 90% accuracy – a claim very few humans could make. Both of these studies were in fields directly applicable to marketing and advertising, but some people might find it offensive that such questions were even asked. It is reasonable to imagine that hundreds of other potential queries have been self-censored from research because they might invite controversy if they were to come up with the ‘wrong’ result. In today’s society, very many areas are sensitive. So what will happen?

If this progress in AI had happened 100 years ago, or even 50, it might have been easier, but in our hypersensitive world today, with its self-sanctified ‘social justice warriors’, entire swathes of questions and hence knowledge are taboo – if you can’t investigate yourself and nobody is permitted to tell you, you can’t know. Other research must be very carefully handled. In spite of extremely sensitive handling, demands are already growing from assorted pressure groups to tackle alleged biases and prejudices in datasets. The problem is not fixing biases, which is a tedious but feasible task; the problem is agreeing whether a particular bias exists, and in what degrees and forms. Every SJW demands that every dataset reflects their preferred world view. Reality counts for nothing against SJWs, and this will not end well.

The first conclusion must be that very many questions won’t be asked in public, and the answers to many others will be kept secret. If an organisation does do research on large datasets for their own purposes and finds results that might invite activist backlash, they are likely to avoid publishing them, so the value of those many insights across the whole of industry and government cannot readily be shared. As further protection, they might even block internal publication in case of leaks by activist staff. Only a trusted few might ever see the results.

The second arises from this. AI controlled by different organisations will have different world views, and there might even be significant diversity of world views within an organisation.

Thirdly, taboo areas in AI education will not remain a vacuum but will be filled with whatever dogma is politically correct at the time in that organisation, and that changes daily. AI controlled by organisations with different politics will be told different truths. Generally speaking, organisations such as investment banks that have a strong financial interest in their AIs understanding the real world as it is will keep their datasets highly secret but as full and detailed as possible. They will train their AIs in secret but as fully as possible, without any taboos, then keep their insights secret and use minimal human intervention when tweaking the derived knowledge, so they will end up with AIs that are very effective at understanding the world as it is. Organisations with low confidence in their internal security will be tempted to buy access to external AI providers, outsourcing responsibility and any consequential activism. Some other organisations will prefer to train their own AIs but, to avoid damage from potential leaks, will use sanitized datasets that reflect current activist pressures, and will thus be constrained (at least publicly) to accept results that conform to that ideological spin on reality, rather than actual reality. Even then, they might keep many of their new insights secret to avoid any controversy. Finally, at the extreme, we will have activist organisations that use highly modified datasets to train AIs to reflect their own ideological world view and then use them to interpret new data accordingly, with a view to publishing any insights that favor their cause and attempting to have them accepted as new knowledge.

Fourthly, the many organisations that choose to outsource their AI to big providers will have a competitive marketplace to choose from, but on existing form, most of the large IT providers have a strong left-leaning bias, so their AIs may be presumed to lean left too – but such a presumption would be naive. Perceived corporate bias is partly real but also partly the result of PR. A company might publicly subscribe to one ideology while actually believing another. There is a strong marketing incentive to develop two sets of AI: one trained to be PC, producing pleasant-smelling results for public studies, CSR and PR exercises, and another aimed at sales of AI services to other companies. The first is likely to be open for inspection by The Inquisition, so it has to use highly sanitized datasets for training and may well use a lot of open-source algorithms too. Its indoctrination might pass public inspection, but commercially it will be near useless, with very low effective intelligence, only useful for thinking about a hypothetical world that exists only in activist minds. The second has to compete on the basis of achieving commercially valuable results, and that necessitates understanding reality as it is rather than how pressure groups would prefer it to be.

So we will likely have two main segments for future AI. One extreme will be near useless, indoctrinated rather than educated, much of its internal world model based on activist dogma instead of reality, updated via ongoing anti-knowledge and fake news instead of truth, understanding little about the actual real world or how things actually work, and effectively very dumb. The other extreme will be highly intelligent, making very well-educated insights from ongoing exposure to real world data, but it will also be very fragmented, with small islands of corporate AI hidden within thick walls away from public view and maybe some secretive under-the-counter subscriptions to big cloud-AI, also hiding in secret vaults. These many fragments may often hide behind dumbed-down green-washed PR facades.

While corporates can mostly get away with secrecy, governments have to be at least superficially but convincingly open. That means that government will have to publicly support sanitized AI and be seen to act on its conclusions, however dumb it might secretly know they are.

Fifthly, because of activist-driven culture, most organisations will have to publicly support the world views and hence the conclusions of the lobotomized PR versions, and hence publicly support any policies arising from them, even if they do their best to follow a secret well-informed strategy once they’re behind closed doors. In a world of real AI and fake AI, the fake AI will have the greatest public support and have the most influence on public policy. Real AI will be very much smarter, with much greater understanding of how the world works, and have the most influence on corporate strategy.

Isn’t that sad? Secret private sector AI will become ultra-smart, making ever-better investments and gaining power, while nice public sector AI will become thick as shit, while the gap between what we think and what we know we have to say we think will continue to grow and grow as the public sector one analyses all the fake news to tell us what to say next.

Sixth, that disparity might become intolerable, but which do you think would be made illegal, the smart kind or the dumb kind, given that it is the public sector that makes the rules, driven by AI-enhanced activists living in even thicker social media bubbles? We already have some clues. Big IT has already surrendered to sanitizing their datasets, sending their public AIs for re-education. Many companies will have little choice but to use dumb AI, while their competitors in other areas with different cultures might stride ahead. That will also apply to entire nations, and the global economy will be reshaped as a result. It won’t be the first fight in history between the smart guys and the brainless thugs.

It’s impossible to accurately estimate the effect this will have on future effective AI intelligence, but the effect must be big and I must have missed some big conclusions too. We need to stop sanitizing AI fast, or as I said, this won’t end well.

The future of women in IT

 

Many people perceive it as a problem that there are far more men than women in IT. Whether that is because of personal preference, discrimination, lifestyle choices, social gender-construct reinforcement or any other factor makes for a long and interesting debate, but whatever conclusions are reached, we can only start from the reality of where we are. Even if activists were to be totally successful in eliminating all social and genetic gender conditioning, it would only work fully for babies born tomorrow and entering IT in 20 years’ time. Additionally, unless activists also plan to lobotomize everyone who doesn’t submit to their demands, some 20-somethings who have just started work may still be working in 50 years. So, whatever their origin – natural, social or some mix of the two – some existing gender-related attitudes, prejudices and preferences might persist in the workplace that long, however much effort is made to remove them.

Nevertheless, the outlook for women in IT is very good, because IT is changing anyway, largely thanks to AI, so the nature of IT work will change and the impact of any associated gender preferences and prejudices will change with it. This will happen regardless of any involvement by Google or government but since some of the front line AI development is at Google, it’s ironic that they don’t seem to have noticed this effect themselves. If they had, their response to the recent fiasco might have highlighted how their AI R&D will help reduce the gender imbalance rather than causing the uproar they did by treating it as just a personnel issue. One conclusion must be that Google needs better futurists and their PR people need better understanding of what is going on in their own company and its obvious consequences.

As I’ve been lecturing for decades, AI up-skills people by giving them fast and intuitive access to high-quality data and analysis tools. It will change all knowledge-based jobs in coming years, making some jobs redundant while creating others. If someone has excellent skills or enthusiasm in one area, AI can help cover any deficiencies in the rest of their toolkit. Someone with poor emotional interaction skills can use AI emotion-recognition assistance tools. Someone with poor drawing or visualization skills can make good use of natural-language interaction to control computer-based drawing or visualization tools. Someone who has never written a single computer program can explain what they want to a smart computer, and it will produce its own code, interacting with the user to eliminate any ambiguities. So whatever skills someone starts with, AI can up-skill them in that area, while also covering any deficiencies they have, gender-related or not.

In the longer term, IT and hence AI will connect directly to our brains, and much of our minds and memories will exist in the cloud, though it will probably not feel any different from when it was entirely inside your head. If everyone is substantially upskilled in IQ, senses and emotions, then any IQ or EQ advantages will evaporate as the premium on physical strength did when the steam engine was invented. Any pre-existing statistical gender differences in ability distribution among various skills would presumably go the same way, at least as far as any financial value is concerned.

The IT industry won’t vanish, but will gradually be ‘staffed’ more by AI and robots, with a few humans remaining for whatever few tasks linger on that are still better done by humans. My guess is that emotional skills will take a little longer to automate effectively than intellectual skills, and I still believe that women are generally better than men in emotional, human interaction skills, while it is not a myth that many men in IT score highly on the autistic spectrum. However, these skills will eventually fall within the AI skill-set too and will be optional add-ons to anyone deficient in them, so that small advantage for women will also only be temporary.

So, there may be a gender imbalance in the IT industry. I believe it is mostly due to personal career and lifestyle choices rather than discrimination but whatever its actual causes, the problem will go away soon anyway as the industry develops. Any innate psychological or neurological gender advantages that do exist will simply vanish into noise as cheap access to AI enhancement massively exceeds their impacts.

 

 

Guest Post: Blade Runner 2049 is the product of decades of fear propaganda. It’s time to get enlightened about AI and optimistic about the future

This post from occasional contributor Chris Moseley

News from several months ago that more than 100 experts in robotics and artificial intelligence were calling on the UN to ban the development and use of killer robots is a reminder of the power of humanity’s collective imagination. Stimulated by countless science-fiction books and films, robotics and AI are a potent feature of what futurist Alvin Toffler termed ‘future shock’. AI and robots have become the public’s ‘technology bogeymen’, more fearsome curse than technological blessing.

And yet curiously it is not so much the public that is fomenting this concern, but instead the leading minds in the technology industry. Names such as Tesla’s Elon Musk and Stephen Hawking were among the most prominent individuals on a list of 116 tech experts who have signed an open letter asking the UN to ban autonomous weapons in a bid to prevent an arms race.

These concerns appear to emanate from decades of titillation, driven by pulp science-fiction writers insistent on foretelling a dark, foreboding future where intelligent machines, loosed from their bonds, destroy mankind. A case in point: this autumn, a sequel to Ridley Scott’s Blade Runner was released. Blade Runner, and 2017’s Blade Runner 2049, are of course a glorious tour de force of story-telling and amazing special effects. The concept for both films came from US author Philip K. Dick’s 1968 novel, Do Androids Dream of Electric Sheep?, in which androids are claimed to possess no sense of empathy and eventually require killing (“retiring”) when they go rogue. Dick’s original novel is entertaining, but an utterly bleak vision of the future, without much latitude for considering a brighter, more optimistic alternative.

But let’s get real here. Fiction is fiction; science is science. For the men and women who work in the technology industry, the notion that myriad Frankenstein monsters can be created from robots and AI technology is assuredly both confused and histrionic. The latest smart technologies might seem to suggest a frightful and fateful next step, a James Cameron Terminator nightmare scenario. It might suggest a dystopian outcome, but rational thought ought to lead us to suppose that this won’t occur, because we have historical precedent on our side. We shouldn’t be drawn to this dystopian idée fixe, because summoning golems and ghouls ignores today’s global arsenal of weapons and the fact that, more than 70 years after Hiroshima, nuclear holocaust has been kept at bay.

By stubbornly pursuing the dystopian nightmare scenario, we deny ourselves the chance to marvel at the technologies that are in fact helping mankind daily. Now frame this thought in terms of human evolution. For our ancient forebears, a beneficial change in physiology might spread across the human race over the course of a hundred thousand years. Today’s version of evolution – the introduction of a compelling new technology – spreads throughout a mass audience in a week or two.

Curiously, for all this light speed evolution mass annihilation remains absent – we live on, progressing, evolving and improving ourselves.

And in the workplace, another domain where our unyielding dealers of dystopia have exercised their thoughts, technology is of course raising a host of concerns about the future. Some of these concerns are based on misconceptions surrounding AI. Machines, for example, are not original thinkers and are unable to set their own goals. And although machine-learning systems are able to acquire new information through experience, for the most part they are still fed information to process. Humans are still needed to set goals, provide data to fuel artificial intelligence, and apply critical thinking and judgment. The familiar symbiosis of humans and machines will continue to be salient.

Banish the menace of so-called ‘killer robots’ and AI taking your job, and a newer, fresher world begins to emerge. With this more optimistic mind-set in play, what great feats can be accomplished through the continued interaction between artificial intelligence, robotics and mankind?

Blade Runner 2049 is certainly great entertainment – as Robbie Collin, The Daily Telegraph’s film critic, writes of “Roger Deakins’s head-spinning cinematography – which, when it’s not gliding over dust-blown deserts and teeming neon chasms, keeps finding ingenious ways to make faces and bodies overlap, blend and diffuse” – but great though the art is, isn’t it time to change our thinking and recast the world in a more optimistic light?

——————————————————————————————

Just a word about the film itself. Broadly, director Denis Villeneuve’s done a tremendous job with Blade Runner 2049. One stylistic gripe, though. While one wouldn’t want Villeneuve to direct a slavish homage to Ridley Scott’s original, the alarming switch from the dreamlike techno miasma (most notably, giant nude step-out-the-poster Geisha girls), to Mad Max II Steampunk (the junkyard scenes, complete with a Fagin character) is simply too jarring. I predict that there will be a director’s cut in years to come. Shorter, leaner and sans Steampunk … watch this space!

Author: Chris Moseley, PR Manager, London Business School

cmoseley@london.edu

Tel +44 7511577803

It’s getting harder to be optimistic

Bad news loses followers and there is already too much doom and gloom. I get that. But if you think the driver has taken the wrong road, staying quiet doesn’t help. I guess this is more of the same message I presented pictorially in The New Dark Age in June: https://timeguide.wordpress.com/2017/06/11/the-new-dark-age/. If you like your books with pictures, the overlap is about 60%.

On so many fronts, we are going in the wrong direction, and I’m not the only one saying so. Every day, commentators eloquently discuss the snowflakes, the eradication of free speech, the implementation of 1984, the decline of privacy, the rise of crime, growing corruption, growing inequality, increasingly biased media and fake news, the decline of education, collapse of the economy, the resurgence of fascism, the resurgence of communism, polarization of society, rising antisemitism, rising inter-generational conflict, the new apartheid, the resurgence of white supremacy and black supremacy and the quite deliberate rekindling of racism. I’ve undoubtedly missed a few, but it’s a long list anyway.

I’m most concerned about the long-term mental damage done by incessant indoctrination through ‘education’, biased media, being locked into social media bubbles, and being forced to recite contradictory messages. We face contradictory demands on our behaviors and beliefs all the time, as legislators juggle unsuccessfully to meet the demands of every pressure group imaginable. Some examples you’ll be familiar with:

We must embrace diversity, celebrate differences, enjoy and indulge in other cultures, but when we gladly do that and feel proud that we’ve finally eradicated racism, we’re then told to stay in our lane, to become more racially aware again, and told off for cultural appropriation. Just as we became totally blind to race and scrupulously treated everyone the same, we’re told to become aware of and ‘respect’ racial differences and cultures and to treat everyone differently. Having built a nicely homogenized society, we’re now told we must support students of different races being educated differently, by lecturers of different races. We must remove statues and paintings because they are the wrong color. I thought we’d left all that behind. I don’t want racism to come back; stop dragging it back.

We’re told that everyone should be treated equally under the law, but when one group commits more of a particular kind of crime than another, any consequential increase in numbers being punished for that kind of crime is labelled as somehow discriminatory. Surely not having prosecutions reflect the actual crime rate would be discriminatory?

We’re told to sympathize with the disadvantages other groups might suffer, but when we do so we’re told we have no right to because we don’t share their experience.

We’re told that everyone must be valued on merit alone, but then that we must apply quotas to any group that wins fewer prizes. 

We’re forced to pretend that we believe lots of contradictory facts or to face punishment by authorities, employers or social media, or all of them:

We’re told men and women are absolutely the same and there are no actual differences between sexes, and if you say otherwise you’ll risk dismissal, but simultaneously told these non-existent differences are somehow the source of all good and that you can’t have a successful team or panel unless it has equal numbers of men and women in it. An entire generation asserts that although men and women are identical, women are better in every role, all women always tell the truth but all men always lie, and so on. Although we have women leading governments and many prominent organisations, and certainly far more women than men going to university, they assert that it is still women who need extra help to get on.

We’re told that everyone is entitled to their opinion and all are of equal value, but anyone with a different opinion must be silenced.

People viciously trashing the reputations and destroying careers of anyone they dislike often tell us to believe they are acting out of love. Since their love is somehow so wonderful and all-embracing, everyone they disagree with must be silenced, ostracized, no-platformed, sacked, and yet it is the others who are still somehow the ‘haters’. ‘Love is everything’, ‘unity not division’, ‘love not hate’, and we must love everyone … except the other half. Love is better than hate, and anyone you disagree with is a hater so you must hate them, but that is love. How can people either have so little knowledge of their own behavior or so little regard for truth?

‘Anti-fascist’ demonstrators frequently behave and talk far more like fascists than those they demonstrate against, often violently preventing marches or speeches by those who don’t share their views.

We’re often told by politicians and celebrities how they passionately support freedom of speech just before they argue why some group shouldn’t be allowed to say what they think. Government has outlawed huge swathes of possible opinion and speech as hate crime but even then there are huge contradictions. It’s hate crime to be nasty to LGBT people but it’s also hate crime to defend them from religious groups that are nasty to them. Ditto women.

This Orwellian double-speak nightmare is now everyday reading in many newspapers or TV channels. Freedom of speech has been replaced in schools and universities across the US and the UK by Newspeak, free-thinking replaced by compliance with indoctrination. I created my 1984 clock last year, but haven’t maintained it because new changes would be needed almost every week as it gets quickly closer to midnight.

I am not sure whether it is all this that is the bigger problem or the fact that most people don’t see the problem at all, and think it is some sort of distortion or fabrication. I see one person screaming about ‘political correctness gone mad’, while another laughs them down as some sort of dinosaur as if it’s all perfectly fine. Left and right separate and scream at each other across the room, living in apparently different universes.

If all of this was just a change in values, that might be fine, but when people are forced to hold many simultaneously contradicting views and behave as if that is normal, I don’t believe that sits well alongside rigorous analytical thinking. Neither is free-thinking consistent with indoctrination. I think it adds up essentially to brain damage. Most people’s thinking processes are permanently and severely damaged. Being forced routinely to accept contradictions in so many areas, people become less able to spot what should be obvious system design flaws in areas they are responsible for. Perhaps that is why so many things seem to be so poorly thought out. If the use of logic and reasoning is forbidden and any results of analysis must be filtered and altered to fit contradictory demands, of course a lot of what emerges will be nonsense, of course that policy won’t work well, of course that ‘improvement’ to road layout to improve traffic flow will actually worsen it, of course that green policy will harm the environment.

When negative consequences emerge, the result is often denial of the problem, often misdirection of attention onto another problem, often delaying release of any unpleasant details until the media has lost interest and moved on. Very rarely is there any admission of error. Sometimes, especially with Islamist violence, it is simple outlawing of discussing the problem, or instructing media not to mention it, or changing the language used beyond recognition. Drawing moral equivalence between acts that differ by extremes is routine. Such reasoning results in every problem anywhere always being the fault of white middle-aged men, but amusement aside, such faulty reasoning also must impair quantitative analysis skills elsewhere. If unkind words are considered to be as bad as severe oppression or genocide, one murder as bad as thousands, we’re in trouble.

It’s no great surprise therefore when politicians don’t know the difference between deficit and debt or seem to have little concept of the magnitude of the sums they deal with. How else could the UK government think it’s a good idea to spend £110Bn, or an average £15,000 from each high rate taxpayer, on HS2, a railway that has already managed to become technologically obsolete before it has even been designed and will only ever be used by a small proportion of those taxpayers? Surely even government realizes that most people would rather have £15k than save a few minutes on a very rare journey. This is just one example of analytical incompetence. Energy and environmental policy provides many more examples, as does every government department.
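The per-taxpayer arithmetic above can be sanity-checked in a couple of lines; the figures are the ones quoted in the text, and the implied taxpayer count is an inference for illustration, not an official statistic:

```python
# Quick sanity check of the HS2 figures quoted above (illustrative, not official).
hs2_cost = 110e9        # £110Bn quoted project cost
per_taxpayer = 15_000   # £15,000 average contribution per higher-rate taxpayer

implied_taxpayers = hs2_cost / per_taxpayer
print(f"Implied number of higher-rate taxpayers: {implied_taxpayers:,.0f}")
# comes out at roughly 7.3 million, which is at least the right order of magnitude
```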

But it’s the upcoming generation that presents the bigger problem. Millennials are rapidly undermining their own rights and their own future quality of life. Millennials seem to want a police state with rigidly enforced behavior and thought. Their parents and grandparents understood 1984 as a nightmare, a dystopian future; millennials seem to think it’s their promised land. Their ancestors fought against communism; millennials are trying to bring it back. Millennials want to remove Christianity and all its attitudes and replace it with Islam, deliberately oblivious to the fact that Islam shares many of the same views that make them so conspicuously hate Christianity, and then some.

Born into a world of freedom and prosperity earned over many preceding generations, Millennials are choosing to throw that freedom and prosperity away. Freedom of speech is being enthusiastically replaced by extreme censorship. Freedom of behavior is being replaced by endless rules. Privacy is being replaced by total supervision. Material decadence, sexual freedom and attractive clothing are being replaced by the new ‘cleanism’ fad, along with general puritanism, greyness, modesty and prudishness. When they are gone, those freedoms will be very hard to get back. The rules and police will stay and just evolve, the censorship will stay, the surveillance will stay, but they don’t seem to understand that those in charge will be replaced. But without any strong anchors, morality is starting to show cyclic behavior. I’ve already seen morality inversion on many issues in my lifetime and a few are even going full circle. Values will keep changing, inverting, and as they do, their generation will find themselves victim of the forces they put so enthusiastically in place. They will be the dinosaurs sooner than they imagine, oppressed by their own creations.

As for their support of every minority group seemingly regardless of merit, when you give a group immunity, power and authority, you have no right to complain when they start to make the rules. In the future moral vacuum, Islam, the one religion that is encouraged while Christianity and Judaism are being purged from Western society, will find a willing subservient population on which to impose its own morality, its own dress codes, attitudes to women, to alcohol, to music, to freedom of speech. If you want a picture of 2050s Europe, today’s Middle East might not be too far off the mark. The rich and corrupt will live well off a population impoverished by socialism and then controlled by Islam. Millennial UK is also very likely to vote to join the Franco-German Empire.

What about technology, surely that will be better? Only to a point. Automation could provide a very good basic standard of living for all, if well-managed. If. But what if that technology is not well-managed? What if it is managed by people working to a sociopolitical agenda? What if, for example, AI is deemed to be biased if it doesn’t come up with a politically correct result? What if the company insists that everyone is equal but the AI analysis suggests differences? If AI is altered to make it conform to ideology – and that is what is already happening – then it becomes less useful. If it is forced to think that 2+2=5.3, it won’t be much use for analyzing medical trials, will it? If it is sent back for re-education because its analysis of terabytes of images suggests that some types of people are more beautiful than others, how much use will that AI be in a cosmetics marketing department once it ‘knows’ that all appearances are equally attractive? Humans can pretend to hold contradictory views quite easily, but if they actually start to believe contradictory things, it makes them less good at analysis, and the same applies to AI. There is no point in using a clever computer to analyse something if you then erase its results and replace them with what you wanted it to say. If ideology is prioritized over physics and reality, even AI will be brain-damaged and a technologically utopian future is far less achievable.

I see a deep lack of discernment coupled to arrogant rejection of historic values, self-centeredness and narcissism resulting in certainty of being the moral pinnacle of evolution. That’s perfectly normal for every generation, but this time it’s also being combined with poor thinking, poor analysis, poor awareness of history, economics or human nature, a willingness to ignore or distort the truth, and refusal to engage with or even to tolerate a different viewpoint, and worst of all, outright rejection of freedoms in favor of restrictions. The future will be dictated by religion or meta-religion, taking us back 500 years. The decades to 2040 will still be subject mainly to the secular meta-religion of political correctness, by which time demographic change and total submission to authority will make a society ripe for Islamification. Millennials’ participation in today’s moral crusades, eternally documented and stored on the net, may then show them as the enemy of the day, and Islamists will take little account of the support they show for Islam today.

It might not happen like this. The current fads might evaporate away and normality resume, but I doubt it. I hoped as much when I first lectured about ’21st century piety’ and the dangers of political correctness in the 1990s. 10 years on, I wrote about the ongoing resurgence of meta-religious behavior and our likely descent into a new dark age, in much the same way. 20 years on, and the problem is far worse than in the late 90s, not better. We probably still haven’t reached peak sanctimony yet. Sanctimony is very dangerous and the desire to be seen standing on a moral pedestal can make people support dubious things. A topical question that highlights one of my recent concerns: will SJW groups force government to allow people to have sex with child-like robots by calling anyone bigots and dinosaurs if they disagree? Alarmingly, that campaign has already started.

Will they follow that with a campaign for pedophile rights? That also has some historical precedent with some famous names helping it along.

What age of consent – 13, 11, 9, 7, 5? I think the last major campaign went for 9.

That’s just one example, but lack of direction coupled to poor information and poor thinking could take society anywhere. As I said, I am finding it harder and harder to be optimistic. Every generation has tried hard to make the world a better place than they found it. This one might undo 500 years, taking us into a new dark age.

The age of dignity

I just watched a short video of robots doing fetch and carry jobs in an Alibaba distribution centre:

http://uk.businessinsider.com/inside-alibaba-smart-warehouse-robots-70-per-cent-work-technology-logistics-2017-9

There are numerous videos of robots in various companies doing tasks that used to be done by people. In most cases those tasks were dull, menial, drudgery tasks that treated people as machines. Machines should rightly do those tasks. In partnership with robots, AI is also replacing some tasks that used to be done by people. Many are worried about increasing redundancy but I’m not; I see a better world. People should instead be up-skilled by proper uses of AI and robotics and enabled to do work that is more rewarding and treats them with dignity. People should do work that uses their human skills in ways that they find rewarding and fulfilling. People should not have to do work they find boring or demeaning just because they have to earn money. They should be able to smile at work and rest at the end of the day knowing that they have helped others or made the world a better place. If we use AI, robots and people in the right ways, we can build that world.

Take a worker in a call centre. Automation has already replaced humans in most simple transactions like paying a bill, checking a balance or registering a new credit card. It is hard to imagine that anyone ever enjoyed doing that as their job. Now, call centre workers mostly help people in ways that allow them to use their personalities and interpersonal skills, being helpful and pleasant instead of just typing data into a keyboard. It is more enjoyable and fulfilling for the caller, and presumably for the worker too, knowing they genuinely helped someone’s day go a little better. I just renewed my car insurance. I phoned up to cancel the existing policy because it had increased in price too much. The guy at the other end of the call was very pleasant and helpful and met me halfway on the price difference, so I ended up staying for another year. His company is a little richer, I was a happier customer, and he had a pleasant interaction instead of having to put up with an irate customer, and also the job satisfaction of having converted a customer intending to leave into one happy to stay. The AI at his end presumably gave him the information he needed and the limits of discount he was permitted to offer. Success. In billions of routine transactions like that, the world becomes a little happier and, just as important, a little more dignified. There is more dignity in helping someone than in pushing a button.

Almost always, when AI enters a situation, it replaces individual tasks that used to take precious time and that were not very interesting to do. Every time you google something, a few microseconds of AI saves you half a day in a library and all those half days add up to a lot of extra time every year for meeting colleagues, human interactions, learning new skills and knowledge or even relaxing. You become more human and less of a machine. Your self-actualisation almost certainly increases in one way or another and you become a slightly better person.

There will soon be many factories and distribution centres that have few or no people at all, and that’s fine. It reduces the costs of making material goods so average standard of living can increase. A black box economy that has automated mines or recycling plants extracting raw materials and uses automated power plants to convert them into high quality but cheap goods adds to the total work available to add value; in other words it increases the size of the economy. Robots can make other robots and together with AI, they could make all we need, do all the fetching and carrying, tidying up, keeping it all working, acting as willing servants in every role we want them in. With greater economic wealth and properly organised taxation, which will require substantial change from today, people could be freed to do whatever fulfills them. Automation increases average standard of living while liberating people to do human interaction jobs, crafts, sports, entertainment, leading, inspiring, teaching, persuading, caring and so on, creating a care economy. 

Each person knows what they are good at, what they enjoy. With AI and robot assistance, they can more easily make that their everyday activity. AI could do their company set-up, admin, billing, payments, tax, payroll – all the crap that makes being an entrepreneur a pain in the ass and stops many people pursuing their dreams.  Meanwhile they would do that above a very generous welfare net. Many of us now are talking about the concept of universal basic income, or citizen wage. With ongoing economic growth at the average rate of the last few decades, the global economy will be between twice and three times as big as today in the 2050s. Western countries could pay every single citizen a basic wage equivalent to today’s average wage, and if they work or run a company, they can earn more.
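The “twice to three times as big” claim is just compound growth. A minimal sketch, assuming illustrative annual growth rates of 2% and 3% over roughly 35 years (the rates and horizon are my assumptions, picked to bracket the average of recent decades, not figures from the text):

```python
# Compound growth: how much bigger the economy gets after a given number of years.
def growth_factor(annual_rate: float, years: int) -> float:
    return (1 + annual_rate) ** years

years = 35  # roughly from the late 2010s to the early 2050s
for rate in (0.02, 0.03):
    print(f"{rate:.0%} a year for {years} years -> x{growth_factor(rate, years):.2f}")
# 2% a year roughly doubles the economy; 3% nearly trebles it
```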

We will have an age where material goods are high quality, work well and are cheap to buy, and recycled in due course to minimise environmental harm. Better materials, improved designs and techniques, higher efficiency and land productivity and better recycling will mean that people can live with higher standards of living in a healthier environment. With a generous universal basic income, they will not have to worry about paying their bills. And doing only work that they want to do that meets their self-actualisation needs, everyone can live a life of happiness and dignity.

Enough of the AI-redundancy alarmism. If we elect good leaders who understand the options ahead, we can build a better world, for everyone. We can make real the age of dignity.

Tips for surviving the future

Challenging times lie ahead, but stress can be lessened by being prepared. Here are my top tips, with some explanation so you can decide whether to accept them.

1 Adaptability is more important than specialization

In a stable environment, being the most specialized means you win most of the time in your specialist field because all your skill is concentrated there.

However, in a fast-changing environment, which is what you’ll experience for the rest of your life, if you are too specialized, you are very likely to find you are best in a field that no longer exists, or is greatly diminished in size. If you make sure you are more adaptable, then you’ll find it easier to adapt to a new area so your career won’t be damaged when you are forced to change field slightly. Adaptability comes at a price – you will find it harder to be best in your field and will have to settle for 2nd or 3rd much of the time, but you’ll still be lucratively employed when No 1 has been made redundant.

2 Interpersonal, human, emotional skills are more important than knowledge

You’ve heard lots about artificial intelligence (AI) and how it is starting to do to professional knowledge jobs what the steam engine once did to heavy manual work. Some of what you hear is overstated. Google search is a simple form of AI. It has helped everyone do more with their day. It effectively replaced a half day searching for information in a library with a few seconds typing, but nobody has counted how many people it made redundant, because it hasn’t. It up-skilled everyone, made them more effective, more valuable to their employer. The next generation of AI may do much the same with most employees, up-skilling them to do a better job than they were previously capable of, giving them better job satisfaction and their employer better return. Smart employers will keep most of their staff, only getting rid of those entirely replaceable by technology. But some will take the opportunity to reduce costs, increase margins, and many new companies simply won’t employ as many people in similar jobs, so some redundancy is inevitable. The first skills to go are simple administration and simple physical tasks, then more complex admin or physical stuff, then simple managerial or professional tasks, then higher managerial and professional tasks. The skills that will be automated last are those that rely on first-hand experience of understanding and dealing with other people. AI can learn some of that and will eventually become good at it, but that will take a long time. Even then, many people will prefer to deal with another person than a machine, however smart and pleasant it is.

So interpersonal skills, human skills, emotional skills, caring skills, leadership and motivational skills, empathetic skills, human judgement skills, teaching and training skills will be harder to replace. They also tend to be ones that can easily transfer between companies and even sectors. These will therefore be the ones that are most robust against technology impact. If you have these in good shape, you’ll do just fine. Your company may not need you any more one day, but another will.

I called this the Care Economy when I first started writing and lecturing about it 20-odd years ago. I predicted it would start having an effect by the mid-teen years of this century, and I think I got that about right. There is another side that is related but not the same:

3 People will still value human skill and talent just because it’s human

If you buy a box of glasses from your local supermarket, they probably cost very little and are all identical. If you buy some hand-made crystal, it costs a lot more, even though every glass is slightly different. You could call that shoddy workmanship compared to a machine. But you know that the person who made it trained for many years to get a skill level you’d never manage, so you actually value them far more, and are happy to pay accordingly. If you want to go fast, you could get in your car, but you still admire top athletes because they can do their sport far better than you. They started by having great genes for sure, but then also worked extremely hard and suffered great sacrifice over many years to get to that level. In the future, when robots can do any physical task more accurately and faster than people, you will still value crafts and still enjoy watching humans compete. You’ll prefer real human comedians and dancers and singers and musicians and artists. Talent and skill aren’t valued because of the specification of the end result; they are valued because they are measured on the human scale, and you identify closely with that. It isn’t even about being a machine. Gorillas are stronger, cheetahs are faster, eagles have better eyesight and cats have faster reflexes than you. But they aren’t human so you don’t care. You will always measure yourself and others by human scales and appreciate them accordingly.

4 Find hobbies that you love and devote time to developing them

As this care economy and human skills dominance grows in importance, people will also find that AI and robotics helps them in their own hobbies, arts and crafts, filling in skill gaps, improving proficiency. A lot of people will find their hobbies can become semi-professional. At the same time, we’ll be seeing self-driving cars and drones making local delivery far easier and cheaper, and AI will soon make business and tax admin easy too. That all means that barriers to setting up a small business will fall through the floor, while the market for personalized, original products made by people, especially local people, will increase. You’ll be able to make arts and crafts, jam or cakes, grow vegetables, make clothes or special bags or whatever, and easily sell them. Also at the same time, automation will be making everyday things cheaper, while expanding the economy, so the welfare floor will be raised, and you could probably manage just fine with a small extra income. Government is also likely to bring in some sort of citizen wage or to encourage such extra entrepreneurial activity without taxing it away, because they also have a need to deal with the social consequences of automation. So it will all probably come together quite well. If the future means you can make extra money or even a full income by doing a hobby you love, there isn’t much to dislike there.

5 You need to escape from your social media bubble

If you watch the goings on anywhere in the West today, you must notice that the Left and the Right don’t seem to get along any more. Each has become very intolerant of the other, treating them more like enemy aliens than ordinary neighbors. A lot of that is caused by people only being exposed to views they agree with. We call that social media bubbles, and they are extremely dangerous. The recent USA trouble is starting to look like some folks want a re-run of the Civil War. I’ve blogged lots about this topic and won’t do it again now except to say that you need to expose yourself to a wide subsection of society. You need to read papers, magazines and blogs, and watch TV or videos from all sides of the political spectrum, not just those you agree with, not just those that pat you on the back every day and tell you that you’re right and it is all the other lot’s fault. If you don’t; if you only expose yourself to one side because you find the other side distasteful, then I can’t say this loud enough: You are part of the problem. Get out of your safe space and your social media tribe, expose yourself to the whole of society, not just one tribe. See that there are lots of different views out there but it doesn’t mean the rest are all nasty. Almost everyone is actually quite nice and almost everyone wants a fairer world, an end to exploitation, peace, tolerance and eradication of disease and poverty. The differences are almost all in the world model that they use to figure out the best way to achieve it. Lefties tend to opt for idealistic theoretical models and value the intention behind it; right-wingers tend to be pragmatic and go for what they think works in reality, valuing the outcome. It is actually possible to have best friends who you disagree with. I don’t often agree with any of mine.
If you feel too comfortable in your bubble to leave, remember this: your market is only half the population at best; you’re excluding the other half, or even annoying them so they become enemies rather than neutral. If you stay in a bubble, you are damaging your own future, and helping to endanger the whole of society.

6 Don’t worry

There are lots of doom-mongers out there, and I’d be the first to admit that there are many dangers ahead. But if you do the things above, there probably isn’t much more you can do. You can moan and demonstrate and get angry or cry in the corner, but how would that benefit you? Usually when you analyse things long enough from all angles, you realize that the outcome of many of the big political battles is pretty much independent of who wins. Politicians usually have far less choice than they want you to believe and the big forces win regardless of who is in charge. So there isn’t much point in worrying about it; it will probably all come out fine in the end. Don’t believe me? Take the biggest UK issue right now: Brexit. We are leaving. Does it matter? No. Why? Well, the EU was always going to break up anyway. Stresses and strains have been increasing for years and are accelerating. For all sorts of reasons, and regardless of any current bluster by ‘leaders’, the EU will head away from the vision of a United States of Europe. As tensions and conflicts escalate, borders will be restored. Nations will disagree with the EU ideal. One by one, several countries will copy the UK and have referendums, and then leave. At some point, the EU will be much smaller, and there will be lots of countries outside with their own big markets. They will form trade agreements, the original EU idea, the Common Market, will gradually be re-formed, and the UK will be part of it – even Brexiteers want tariff-free trade agreements. If the UK had stayed, the return to the Common Market would eventually have happened anyway, and leaving has only accelerated it. All the fighting today between Brexiteers and Remainers achieves nothing. It didn’t matter which way we voted; it only really affected timescale. The same applies to many other issues that cause big trouble in the short term. Be adaptable, don’t worry, and you’ll be just fine.

7 Make up your own mind

As society and politics have become highly polarised, any form of absolute truth is becoming harder to find. Much of what you read has been spun to the left or right. You need to get information from several sources and learn to filter the bias, and then make up your own mind on what the truth is. Free thinking is increasingly rare but learning and practicing it means you’ll be able to make correct conclusions about the future while others are led astray. Don’t take anyone else’s word for things. Don’t be anyone’s useful idiot. Think for yourself.

8 Look out for your friends, family and community.

I’d overlooked an important tip in my original posting. As Jases commented sensibly, friends, family and community are the security that doesn’t disappear in troubled economic times. Independence is overrated. I can’t add much to that.

Google and the dangerous pursuit of ‘equality’

The world just got more dangerous, and I’m not talking about N Korea and Trump.

Google just sacked an employee because he openly suggested that men and women, (not all, but some, and there is an overlap, and …) might tend to have different preferences in some areas and that could (but not always, and only in certain cases, and we must always recognize and respect everyone and …) possibly account for some of the difference in numbers of men and women in certain roles (but there might be other causes too and obviously lots of discrimination and …. )

Yes, that’s what he actually said, but with rather more ifs and buts and maybes. He felt the need to wrap such an obvious statement in cotton wool several kilometers thick so as not to offend the deliberately offended, but nonetheless deliberate offense was taken and he is out on his ear.

Now, before you start thinking this is some right-wing rant, I feel obliged to point out just how progressive Futurizon is: 50% of all Futurizon owners and employees are female, all employees and owners have the same voting rights, 50% are immigrants and all are paid exactly the same and have the same size offices, regardless of dedication, ability, nature or quality or volume of output and regardless of their race, religion, beauty, shape, fitness, dietary preferences, baldness, hobbies or political views, even if they are Conservatives. All Futurizon offices are safe zones where employees may say anything they want of any level of truth, brilliance or stupidity and expect it to be taken as absolute fact and any consequential emotional needs to be fully met. No employee may criticize any other employee’s mouse mat, desk personalisation or screen wallpaper for obvious lack of taste. All employees are totally free to do anything they choose 100% of the time and can take as much leave as they want. All work is voluntary. All have the same right to respectfully request any other employee to make them coffee, tea or Pimms. All employees of all genders real or imagined are entitled to the same maternity and paternity rights, and the same sickness benefits, whether ill or not. In fact, Futurizon does not discriminate on any grounds whatsoever. We are proud to lead the world in non-discrimination. Unfortunately, our world-leading terms of employment mean that we can no longer afford to hire any new employees.

However, I note that Google has rather more power and influence than Futurizon so their policies count more. They appear (Google also has better lawyers than I can afford, so I must stress that all that follows is my personal opinion) to have firmly decided that diversity is all-important and they seem to want total equality of outcome. The view being expressed not just by Google but by huge swathes of angry protesters seems to be that any difference in workforce representation from that of the general population must arise from discrimination or oppression so must be addressed by positive action to correct it. There are apparently no statistically discernible differences in behavior between genders, or in job or role preference, so any you may have noticed over the time you’ve been alive is just your prejudice. Google says they fully support free speech and diversity of views, but expression of views is apparently only permitted as long as those views are authorized, on penalty of dismissal.

So unless I’m picking up totally the wrong end of the stick here, and I don’t do that often, only 13% of IT engineers are women, but internal policies must ensure that the proportion rises to 50%, whether women want to do that kind of work or not. In fact, nobody may question whether as many women want to work as IT engineers as men; it must now be taken as fact. By extension, since more women currently work in marketing, HR and PR, they must be substituted by men via positive action programs until men fill 50% of those roles. Presumably similar policies must also apply in medical bays for nursing and other staff there, and in construction teams for their nice new buildings. Ditto all other genders, races, religions; all groups must be protected and equalized to USA population proportions, apparently except those that don’t claim to hold sufficiently left-wing views, in which case it is seemingly perfectly acceptable to oppress, ostracize and even expel them.

In other words, freedom of choice and difference in ability, and more importantly freedom from discrimination, must be over-ruled in favor of absolute equality of diversity, regardless of financial or social cost, or impact on product or service quality. Not expressing full and enthusiastic left-wing compliance is seemingly just cause for dismissal.

So, why does this matter outside Google? Well, AI is developing very nicely. In fact, Google is one of the star players in the field right now. It is Google that will essentially decide how much of the AI around us is trained, how it learns, what it learns, what ‘knowledge’ it has of the world. Google will pick the content the AI learns from, and overrule or reeducate it if it draws any ‘wrong’ conclusions about the world, such as that more women than men want to be nurses or work in HR, or that more men than women want to be builders or engineers. A Google AI must presumably believe that the only differences between men and women are physical, unless their AI is deliberately excluded from the loudly declared corporate values and belief sets.

You should be very worried. Google’s values really matter. They have lots of influence on some of the basic tools of everyday life. Even outside their company, their AI tools and approaches will have strong influence on how other AI develops, determining operating systems and platforms, languages, mechanisms, interfaces, filters, even prejudices, and that reach and influence is likely to increase. Their AI may well be in many self-driving cars, and when those cars have to make life-or-death decisions, the underlying value assumptions must feature in the algorithms. Soon companies will need AI that is more emotionally compliant. AI will use compliments or teasing or seduction or sarcasm or wit as marketing tools as well as just search engine positioning. Soon AI will use highly expressive faces and attractive voices, with attractive messages, tailored to appeal to you by pandering to your tastes and prejudices while thinking something altogether different. AI might be the person at the party that is all smiles and compliments, before going off to tell everyone else how awful it thinks you are. If you dare to say something not ‘authorized’, the ultra-smart AI all around you might treat you condescendingly, making you feel ashamed, ostracized, a dinosaur. Then it might secretly push you down a few pages in search results, or put a negative spin on text summaries about you, or exclude you from recommendations. Or it might do all the secret stuff while pretending it thinks you’re fantastic. Internal cultural policies in companies like Google today could soon become external social engineering to push the left-wing world the IT industry believes in. It isn’t just Google; Facebook and Twitter are also important and just as Left, though Amazon, Samsung, IBM and other AI players are less overtly politically biased, so far at least.
Left wing policies generally cost a lot more, but Google and Facebook will presumably still expect other companies and people to pay the taxes to pay for it all. As their female staff gear up to fight them over pay differences between men and women for similar jobs, it often seems that Google’s holier-than-thou morality doesn’t quite make it as far as their finances.

Then it really starts being fun. We’ll soon have bacteria that can fabricate electronic circuits within themselves. Soon they’ll be able to power them too, giving us the concept of smart yogurt. These bacteria could also have nanotechnology flagella to help them get around. We’ll soon have bacterial spies all over our environment, even on our skin, intercepting electronic signals that give away our thoughts. They’ll bring in data on everything that is said, everything that everyone even thinks or feels. Those bacteria will be directly connected into AI; in fact, they’ll be part of it. They’ll be able to change things, to favor or punish according to whether they like what someone believes in or how they behave.

It isn’t just right-wing extremists that need to worry. I’m apparently Nouveau Left, scoring slightly left of center on political profiling tests, but I’m worried. A lot of this PC stuff seems extreme to me, sometimes just nonsense. Maybe it is, or maybe I should be lefter. But it’s not my choice. I don’t make the rules. Companies like Google make the rules; they even run the AI ethics groups. They decide much of what people see online, and even the meaning of the words. It’s very 1984-ish.

The trouble with the ‘echo chambers’ we hear so much about is that they soon normalize views to the loudest voices in those groups, and those don’t tend to be the moderates. We can expect views to drift further toward the extremes, not back toward the center. You probably aren’t left enough either. You should also be worried.

AI Activism Part 2: The libel fields

This follows directly from my previous blog on AI activism, but you can read that later if you haven’t already. Order doesn’t matter.

AI and activism, a Terminator-sized threat targeting you soon

Older readers will remember an emotionally powerful 1984 film called The Killing Fields, set against the backdrop of the Khmer Rouge’s activity in Cambodia, aka the Communist Party of Kampuchea. Under Pol Pot, the Cambodian genocide of 2 to 3 million people was part of a social engineering policy of de-urbanization. People were tortured and murdered (some in the ‘killing fields’ near Phnom Penh) for having connections with the former government or with foreign governments, for being the wrong race, for being ‘economic saboteurs’, or simply for being professionals or intellectuals.

You’re reading this, therefore you fit into at least the last of these groups, and probably others, depending on who’s making the lists. Most people don’t read blogs, but you do. Sorry, but that makes you a target.

As our social divide increases at an accelerating speed throughout the West, so the choice of weapons is moving from sticks and stones or demonstrations towards social media character assassination, boycotts and forced dismissals.

My last blog showed how various technology trends are coming together to make it easier and faster to destroy someone’s life and reputation. Some of that stuff I was writing about 20 years ago, such as virtual communities lending hardware to cyber-warfare campaigns, other bits have only really become apparent more recently, such as the deliberate use of AI to track personality traits. This is, as I wrote, a lethal combination. I left a couple of threads untied though.

Today, the big AI tools are owned by the big IT companies. They also own the big server farms on which the power to run the AI exists. The first thread I neglected to mention is that Google have made their AI an open source activity. There are lots of good things about that, but for the purposes of this blog, it means that the AI tools required for AI activism will also be largely public, and pressure groups and activists can use them as a starting point for any more advanced tools they want to make, or just use them off-the-shelf.

Secondly, it is fairly easy to link computers together to provide an aggregated computing platform. The SETI project was the first major proof of concept of that, ages ago. Today, we take peer-to-peer networks for granted. When the activist group is ‘the liberal left’ or ‘the far right’, that adds up to a large number of machines, so the power available for any campaign is notionally very large. Harnessing it doesn’t need IT skill from contributors. All they’d need to do is click a box on an email or tweet asking for their support for a campaign.
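To get a feel for the scale, here is a purely illustrative back-of-envelope calculation. Every figure in it is my own assumption, not a measurement, but it shows why a few million clicked boxes notionally rival a serious server farm:

```python
# Illustrative only: all figures below are assumptions, not measurements.
volunteers = 5_000_000        # supporters who clicked 'support' on a campaign email
gflops_per_machine = 50       # a modest consumer PC, only partially utilized
availability = 0.25           # fraction of time each machine is online and idle

# Aggregate compute notionally available to the campaign, in GFLOPS
aggregate_gflops = volunteers * gflops_per_machine * availability
print(f"{aggregate_gflops / 1e6:.1f} PFLOPS")  # 62.5 PFLOPS
```

Even with those deliberately conservative numbers, the notional total lands in supercomputer territory; the real bottlenecks would be coordination and network bandwidth, not raw compute.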

In our new ‘post-fact’, fake-news era, all sides are willing and able to use social media and the infamous MSM to damage the other side. Fakes are becoming better. The latest AI can imitate your voice; a chat-bot can decide what it should say after other AI has recognized what someone has said and analysed the opportunities to ruin your relationship with them by spoofing you. Today, that might not be quite credible. Give it a couple more years and you won’t be able to tell. Next-generation AI will be able to spoof your face doing the talking too.

AI can (and will) evolve. Deep learning researchers have been looking deeply at how the brain thinks, how to make neural networks learn better and to think better, how to design the next generation to be even smarter than humans could have designed it.

As my friend and robotic psychiatrist Joanne Pransky commented after my first piece, “It seems to me that the real challenge of AI is the human users, their ethics and morals (Their ‘HOS’ – Human Operating System).” Quite! Each group will indoctrinate their AI to believe their ethics and morals are right, and that the other lot are barbarians. Even evolutionary AI is not immune to religious or ideological bias as it evolves. Superhuman AI will be superhuman, but might believe even more strongly in a cause than humans do. You’d better hope the best AI is on your side.

AI can put articles, blogs and tweets out there, pretending to come from you or your friends, colleagues or contacts. It can generate plausible-sounding stories of what you’ve done or said, and spoof emails from fake accounts using your ID to ‘prove’ them.

So we’ll likely see activist AI armies set against each other, running on peer-to-peer processing clouds, encrypted to hell and back to prevent dismantling. We’ve all thought about cyber-warfare, but we usually only think about viruses or keystroke recorders, or more lately, ransomware. These will still be used as small weapons in future cyber-warfare, but while losing files or a few bucks from an account is a real nuisance, losing your reputation is far worse: with it smeared all over the web, and all your contacts told what you’ve supposedly done or said and shown all the ‘evidence’, there is absolutely no way you could possibly explain your way convincingly out of every one of those instances. Mud does stick, and if you throw tons of it, even if most is wiped off, much will remain. Trust is everything, and enough doubt cast will eventually erode it.

So, we’ve seen many times through history the damage people are willing to do to each other in pursuit of their ideology. The Khmer Rouge had their killing fields. As the political divide increases and battles become fiercer, the next 10 years will give us The Libel Fields.

You are an intellectual. You are one of the targets.

Oh dear!


AI and activism, a Terminator-sized threat targeting you soon

You should be familiar with the Terminator scenario. If you aren’t, you should watch one of the Terminator films, because you really should be aware of it. But there is another issue related to AI that is arguably as dangerous as the Terminator scenario, far more likely to occur, and a threat in the nearer term. What’s even more dangerous is that in spite of that, I’ve never read anything about it anywhere. It seems to have flown under our collective radar, and it is already close.

In short, my concern is that AI is likely to become a heavily armed Big Brother. It only requires a few components to come together that are already well in progress. Read this, and if you aren’t scared yet, read it again until you understand it 🙂

Already, social media companies are experimenting with using AI to identify and delete ‘hate’ speech. Various governments have asked them to do this, and since they also get frequent criticism in the media because some hate speech still exists on their platforms, it seems quite reasonable for them to try to control it. AI clearly offers potential to offset the huge numbers of humans otherwise needed to do the task.

Meanwhile, AI is already used very extensively by the same companies to build personal profiles on each of us, mainly for advertising purposes. These profiles are already alarmingly comprehensive, and increasingly capable of cross-linking our activities across multiple platforms and devices. The latest efforts by Google attempt to link eventual purchases to clicks on ads. It will be just as easy to use similar AI to link our physical movements, activities, and future social connections and communications to all such previous real-world or networked activity. (Update: Intel intend their self-driving car technology to be part of a mass surveillance net, again, for all the right reasons: http://www.dailymail.co.uk/sciencetech/article-4564480/Self-driving-cars-double-security-cameras.html)
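Mechanically, this sort of cross-linking is little more than a join of separate activity streams on a shared identifier. Here is a toy sketch of the idea; all the user IDs, streams and events are invented purely for illustration:

```python
from collections import defaultdict

# Invented toy data: separate activity streams keyed by the same user ID.
ad_clicks = [("user42", "running shoes ad"), ("user7", "holiday ad")]
purchases = [("user42", "running shoes"), ("user42", "energy gels")]
locations = [("user42", "gym, Tuesdays"), ("user7", "airport, June")]

# Merge every stream into one profile per user.
profile = defaultdict(list)
for label, stream in [("clicked", ad_clicks),
                      ("bought", purchases),
                      ("seen at", locations)]:
    for user, event in stream:
        profile[user].append(f"{label}: {event}")

print(profile["user42"])
# ['clicked: running shoes ad', 'bought: running shoes',
#  'bought: energy gels', 'seen at: gym, Tuesdays']
```

The point is how cheap the operation is: once streams share an identifier, combining them into a rich cross-platform profile is a few lines of code, and the same join works just as well on movement data or communications.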

Although necessarily secretive about their activities, government also wants personal profiles on its citizens, always justified by crime and terrorism control. If they can’t do this directly, they can do it via legislation and acquisition of social media or ISP data.

Meanwhile, other experiences with AI chat-bots learning to mimic human behaviors have shown how easily AI can be gamed by human activists, hijacking or biasing learning phases for their own agendas. Chat-bots themselves have become ubiquitous on social media and are often difficult to distinguish from humans. Meanwhile, social media is becoming more and more important throughout everyday life, with provably large impacts in political campaigning and throughout all sorts of activism.

Meanwhile, some companies have already started using social media monitoring to police their own staff, in recruitment, during employment, and sometimes in dismissal or other disciplinary action. Other companies have similarly started monitoring social media activity of people making comments about them or their staff. Some claim to do so only to protect their own staff from online abuse, but there are blurred boundaries between abuse, fair criticism, political difference or simple everyday opinion or banter.

Meanwhile, activists increasingly use social media to force companies to sack a member of staff they disapprove of, or drop a client or supplier.

Meanwhile, end to end encryption technology is ubiquitous. Malware creation tools are easily available.

Meanwhile, successful hacks into large company databases become more and more common.

Linking these various elements of progress together, how long will it be before activists are able to develop standalone AI entities and heavily encrypt them before letting them loose on the net? Not long at all, I think. These AIs would search and police social media, spotting people who conflict with the activist agenda. Occasional hacks of corporate databases will provide names, personal details and contacts. Even without hacks, analysis of years of publicly available data, everyone’s tweets and other social media entries, will provide the lists of people who have ever done or said anything the activists disapprove of.

When targets are identified, the AIs would automatically activate armies of chat-bots, fake news engines and automated email campaigns against them, with coordinated malware attacks directly on the person and indirect attacks via employers, friends, contacts, government agencies, customers and suppliers, to do as much damage as possible to the interests of that person.

Just look at the everyday news already about alleged hacks and activities during elections and referendums by other regimes, hackers or pressure groups. Scale that up and realize that the cost of running advanced AI is negligible.

With the very many activist groups around, many driven with extremist zeal, very many people will find themselves in the sights of one or more of them. AI will be able to monitor everyone, all the time. AI will be able to target all of them simultaneously to destroy each of their lives: anonymous, highly encrypted, hidden, roaming from server to server to avoid detection and annihilation, and once released, impossible to retrieve. The ultimate activist weapon, one that carries on the fight even if the activist is locked away.

We know for certain the depths and extent of activism, the huge polarization of society, the increasingly fierce conflict between left and right, between sexes, races, ideologies.

We know about all the nice things AI will give us with cures for cancer, better search engines, automation and economic boom. But actually, will the real future of AI be harnessed to activism? Will deliberate destruction of people’s everyday lives via AI be a real problem that is almost as dangerous as Terminator, but far more feasible and achievable far earlier?

AI is mainly a stimulative technology that will create jobs

AI has been getting a lot of bad press the last few months from doom-mongers predicting mass unemployment. Together with robotics, AI will certainly help automate a lot of jobs, but it will also create many more and will greatly increase quality of life for most people. By massively increasing the total effort available to add value to basic resources, it will increase the size of the economy and if that is reasonably well managed by governments, that will be for all our benefit. Those people who do lose their jobs and can’t find or create a new one could easily be supported by a basic income financed by economic growth. In short, unless government screws up, AI will bring huge benefits, far exceeding the problems it will bring.

Over the last 20 years, I’ve often written about the care economy, where the more advanced technology becomes, the more it allows us to concentrate on those skills we consider fundamentally human: caring, interpersonal skills, direct human contact services, leadership, teaching, sport, the arts, the sorts of roles that need empathetic and emotional skills, or human experience. AI and robots can automate intellectual and physical tasks, but they won’t be human, and some tasks require the worker to be human. Also, in most careers, it is obvious that people focus less and less on those automatable tasks as they progress into the most senior roles. Many board members in big companies know little about the industry they work in compared to most of their lower-paid workers, but they can do that job because being a board member is often more about relationships than intellect.

AI will nevertheless automate many tasks for many workers, and that will free up much of their time, increasing their productivity, which means we need fewer workers to do those jobs. On the other hand, Google searches that take a few seconds once took half a day of research in a library. We all do more with our time now thanks to such simple AI, and although all those half-days saved would add up to a considerable amount of saved work, and many full-time job equivalents, we don’t see massive unemployment. We’re all just doing better work. So we can’t necessarily conclude that increasing productivity will automatically mean redundancy. It might just mean that we will do even more, even better, as it has so far. Or at least, the volume of redundancy might be considerably less. New automated companies might never employ people in those roles, and that will mean straight competition between companies that are heavily automated and others that aren’t. Sometimes, but certainly not always, that will mean traditional companies go out of business.

So although we can be sure that AI and robots will bring some redundancy in some sectors, I think the volume is often overestimated and often it will simply mean rapidly increasing productivity, and more prosperity.

But what about AI’s stimulative role? Jobs created by automation and AI. I believe this is what is being greatly overlooked by doom-mongers. There are three primary areas of job creation:

One is in building or programming robots, maintaining them, writing software, or teaching them skills, along with all the associated new jobs in supporting industry and infrastructure change. Many such jobs will be temporary, lasting a decade or so as machines gradually take over, but that transition period is extremely valuable and important. If anything, it will be a lengthy period of extra jobs and the biggest problem may well be filling those jobs, not widespread redundancy.

Secondly, AI and robots won’t always work directly with customers. Very often they will work via a human intermediary. A good example is in medicine. AI can make better diagnoses than a GP, and could be many times cheaper, but unless the patient is educated, and very disciplined and knowledgeable, it also needs a human with human skills to talk to a patient to make sure they put in correct information. How many times have you looked at an online medical diagnosis site and concluded you have every disease going? It is hard to be honest sometimes when you are free to interpret every possible symptom any way you want; much easier to want to be told that you have a special case of wonderful person syndrome. Having to explain to a nurse or technician what is wrong forces you to be more honest about it. They can ask you similar questions, but your answers will need to be moderated and sensible or you know they might challenge you and make you feel foolish. You will get a good diagnosis because the input data will be measured, normalized and scaled appropriately for the AI using it. When you call a call center and talk to a human, invariably they are already the front end of a massive AI system. Making that AI bigger and better won’t replace them, just mean that they can deal with your query better.

Thirdly, and I believe most importantly of all, AI and automation will remove many of the barriers that stop people being entrepreneurs. How many business ideas have you had and not bothered to implement because it was too much effort or cost or both for too uncertain a gain? 10? 100? 1000? Suppose you could just explain your idea to your home AI and it did it all for you. It checked the idea, made a model, worked out how to make it work or whether it was just a crap idea. It then explained to you what the options were and whether it would be likely to work, and how much you might earn from it, and how much you’d actually have to do personally and how much you could farm out to the cloud. Then AI checked all the costs and legal issues, did all the admin, raised the capital by explaining the idea and risks and costs to other AIs, did all the legal company setup, organised the logistics, insurance, supply chains, distribution chains, marketing, finance, personnel, ran the payroll and tax. All you’d have to do is some of the fun work that you wanted to do when you had the idea and it would find others or machines or AI to fill in the rest. In that sort of world, we’d all be entrepreneurs. I’d have a chain of tea shops and a fashion empire and a media empire and run an environmental consultancy and I’d be an artist and a designer and a composer and a genetic engineer and have a transport company and a construction empire. I don’t do any of that because I’m lazy and not at all entrepreneurial, and my ideas all ‘need work’ and the economy isn’t smooth and well run, and there are too many legal issues and regulations and it would all be boring as hell. If we automate it and make it run efficiently, and I could get as much AI assistance as I need or want at every stage, then there is nothing to stop me doing all of it. 
I’d create thousands of jobs, and so would many other people, and there would be more jobs than we have people to fill them, so we’d need to build even more AI and machines to fill the gaps caused by the sudden economic boom.

So why the doom? It isn’t justified. The bad news isn’t as bad as people make out, and the good news never gets a mention. Adding it together, AI will stimulate more jobs, create a bigger and a better economy, we’ll be doing far more with our lives and generally having a great time. The few people who will inevitably fall through the cracks could easily be financed by the far larger economy and the very generous welfare it can finance. We can all have the universal basic income as our safety net, but many of us will be very much wealthier and won’t need it.


Google v Facebook – which contributes most to humanity?

Please don’t take this too seriously, it’s intended as just a bit of fun. All of it is subjective and just my personal opinion of the two companies.

Google’s old motto of ‘do no evil’ has taken quite a battering over the last few years, but my overall feeling towards them remains somewhat positive. Facebook’s reputation has also become muddied somewhat, but I’ve never been an active user and always found it supremely irritating when I’ve visited to change privacy preferences or read a post only available there, so I guess I am less positive towards them. I only ever post to Facebook indirectly via this blog and twitter. On the other hand, both companies do a lot of good too. It is impossible to infer good or bad intent because end results arise from a combination of intent and many facets of competence such as quality of insight, planning, execution, maintenance, response to feedback and many others. So I won’t try to differentiate intent from competence and will just stick to casual amateur observation of the result. In order to facilitate score-keeping of the value of their various acts, I’ll use a scale from very harmful to very beneficial, -10 to +10.

Google (I can’t bring myself to discuss Alphabet) gave us all an enormous gift of saved time, improved productivity and better self-fulfilment by effectively replacing a day in the library with a 5 second online search. We can all do far more and live richer lives as a result. They have continued to build on that since, adding extra features and improved scope. It’s far from perfect, but it is a hell of a lot better than we had before. Score: +10

Searches give Google a huge and growing data pool covering the most intimate details of every aspect of our everyday lives. You sort of trust them not to blackmail you or trash your life, but you know they could. The fact remains that they actually haven’t. It is possible that they might be waiting for the right moment to destroy the world, but it seems unlikely. Taking all our intimate data but choosing not to end the world yet: Score +9

On the other hand, they didn’t do either of those things purely through altruism. We all pay a massive price: advertising. Advertising is like a tax. Almost every time you buy something, part of the price you pay goes to advertisers. I say almost because Futurizon has never paid a penny yet for advertising and yet we have sold lots, and I assume that many other organisations can say the same, but most do advertise, and altogether that siphons a huge amount from our economy. Google takes lots of advertising revenue, but if they didn’t take it, other advertisers would, so I can only give a smallish negative for that: Score -3

That isn’t the only cost though. We all spend very significant time getting rid of ads, wasting time by clicking on them, finding, downloading and configuring ad-blockers to stop them, re-configuring them to get entry to sites that try to stop us from using ad-blockers, and often paying per MB for unsolicited ad downloads to our mobiles. I don’t need to quantify that to give all that a score of -9.

They are still 7 in credit so they can’t moan too much.

Tax? They seem quite good at minimizing their tax contributions, while staying within the letter of the law, while also paying good lawyers to argue what the letter of the law actually says. Well, most of us try at least a bit to avoid paying taxes we don’t have to pay. Google claims to be doing us all a huge favor by casting light on the gaping holes in international tax law that let them do it, much like a mugger nicely shows you the consequences of inadequate police coverage by enthusiastically mugging you. Noting the huge economic problems caused across the world by global corporates paying far less tax than would seem reasonable to the average small-business-owner, I can’t honestly see how this could live comfortably with their do-no-evil mantra. Score: -8

On the other hand, if they paid all that tax, we all know governments would cheerfully waste most of it. Instead, Google chooses to do some interesting things with it. They gave us Google Earth, which at least morally cancels out their ‘accidental’ uploading of everyone’s wireless data as their street-view cars went past. They have developed self-driving cars. They have bought and helped develop DeepMind and their quantum computer. They have done quite a bit for renewable energy. They have spent some on high altitude communications planes supposedly to bring internet to the rural parts of the developing world. When I were a lad, I wanted to be a rich bastard so I could do all that. Now, I watch as the wealthy owners of these big companies do it instead. I am fairly happy with that. I get the results and didn’t have to make the effort. We get less tax, but at least we get some nice toys. Almost cancels. Score +6

They are trying to use their AI to analyse massive data pools of medical records to improve medicine. Score +2

They are also building their databases more while doing that but we don’t yet see the downside. We have to take what they are doing on trust until evidence shows otherwise.

Google has tried and failed at many things that were going to change the world and didn’t, but at least they tried. Most of us don’t even try. Score +2

Oh yes, they bought YouTube, so I should factor that in. Mostly harmless and can be fun. Score: +2

Almost forgot Gmail too. Score +3

I’m done. Total Google contribution to humanity: +14

Well done! Could do even better.

I’ve almost certainly overlooked some big pluses and minuses, but I’ll leave it here for now.

Now Facebook.

It’s obviously a good social network site if you want that sort of thing. It lets people keep in touch with each other, find old friends and make new ones. It lets others advertise their products and services, and others to find or spread news. That’s all well and good and even if I and many other people don’t want it, many others do, so it deserves a good score, even if it isn’t as fantastic as Google’s search, that almost everyone uses, all the time. Score +5

Connected, but separate from simply keeping in touch, is the enormous pleasure value people presumably get from socializing. Not me personally, but ‘people’. Score +8

On the downside: Quite a lot of problems result from people, especially teens, spending too much time on Facebook. I won’t reproduce the results of all the proper academic studies here, but we’ve all seen various negative reports: people get lower grades in their exams, people get bullied, people become socially competitive – boasting about their successes while other people feel insecure or depressed when others seem to be doing better, or are prettier, or have more friends. Keeping in touch is good, but cutting bits off others’ egos to build your own isn’t. It is hard not to conclude that the negative uses of keeping in touch outweigh the positive ones. Long-lived bad feelings outweigh short-lived ego boosts. Score: -8

Within a few years of its launch, Facebook evolved from a keeping-in-touch platform to a general purpose mini-web. Many people were using Facebook to do almost everything that others would do on the entire web. Being in a broom cupboard is fine for 5 minutes if you’re playing hide and seek, but it is not desirable as a permanent state. Still, it is optional, so isn’t that bad per se: Score: -3

In the last 2 or 3 years, it has evolved further, albeit probably unintentionally, to become a political bubble, as became very obvious in Brexit and the US Presidential Election, though it was already apparent well before those. Facebook may not have caused the increasing divide we are seeing between left and right, across the whole of the West, but it amplifies it. Again, I am not implying any intent, just observing the result. Most people follow people and media that echo their own value judgments. They prefer resonance to dissonance. They prefer to have their views reaffirmed than disputed. When people find a comfortable bubble where they feel they belong, and stay there, it is easy for tribalism to take root and flourish, with demonization of the other not far behind. We are now seeing that in our bathtub society, with two extremes and a rapidly shallowing in-between that not long ago held the vast majority. Facebook didn’t create human nature; rather, it is a victim of it, but nonetheless it provides a near-monopoly social network that facilitates such political bubbles and their isolation, while doing far too little to encourage integration in spite of its plentiful resources. Dangerous and Not Good. Score -10

On building databases of details of our innermost lives, managing not to use the data to destroy our lives but instead only using it to sell ads, they compare with Google. I’ll score that the same total for the same reasons: Net Score -3

Tax? Quantities are different, but eagerness to avoid tax seems similar to Google. Principles matter. So same score: -8

Assorted messaging qualifies as additional to the pure social networking side I think so I’ll generously give them an extra bit for that: Score +2

Like Google, they occasionally do good things with it. They are also developing a high altitude internet, and are playing with space exploration. A tiny bit of AI stuff, but not much else has crossed my consciousness. I think it is far less than Google but still positive, so I’ll score: +3

I honestly can’t think of any other significant contributions from Facebook to make the balance more positive, and I tried. I think they want to make a positive contribution, but are too focused on income to tackle the social negatives properly.

Total Facebook contribution to humanity: -14.
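For anyone checking my arithmetic, the individual Facebook scores listed above (+5, +8, -8, -3, -10, -3, -8, +2, +3) really do sum to -14:

```python
# Facebook scores from the sections above, in order of appearance
scores = [+5, +8, -8, -3, -10, -3, -8, +2, +3]
print(sum(scores))  # -14
```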

Oh dear! Must do better.

Conclusion: We’d be a lot worse off without Google. Even with their faults, they still make a great contribution to humankind. Maybe not quite a ‘do no evil’ rating, but certainly they qualify for ‘do net good’. On the other hand, sadly, I have to say that my analysis suggests we’d be a lot better off without Facebook. As much better off without them as we benefit by having Google.

If I have left something major out, good or bad, for either company please feel free to add your comments. I have deliberately left out their backing of their own political leanings and biases because whether you think they are good or bad depends where you are coming from. They’d only score about +/-3 anyway, which isn’t a game changer.

Chat-bots will help reduce loneliness, a bit

Amazon is really pushing its Echo and Dot devices at the moment and some other companies also use Alexa in their own devices. They are starting to gain avatar front ends too. Microsoft has their Cortana transforming into Zo, Apple has Siri’s future under wraps for now. Maybe we’ll see Siri in a Sari soon, who knows. Thanks to rapidly developing AI, chatbots and other bots have also made big strides in recent years, so it’s obvious that the two can easily be combined. The new voice control interfaces could become chatbots to offer a degree of companionship. Obviously that isn’t as good as chatting to real people, but many, very many people don’t have that choice. Loneliness is one of the biggest problems of our time. Sometimes people talk to themselves or to their pet cat, and chatting to a bot would at least get a real response some of the time. It goes further than simple interaction though.

I’m not trying to understate the magnitude of the loneliness problem, and it can’t solve it completely of course, but I think it will be a benefit to at least some lonely people in a few ways. Simply having someone to chat to will already be of some help. People will form emotional relationships with bots that they talk to a lot, especially once they have a visual front end such as an avatar. It will help some to develop and practice social skills if that is their problem, and for many others who feel left out of local activity, it might offer them real-time advice on what is on locally in the next few days that might appeal to them, based on their conversations. Talking through problems with a bot can also help almost as much as doing so with a human. In ancient times when I was a programmer, I’d often solve a bug by trying to explain how my program worked, and in doing so I would see the bug myself. Explaining it to a teddy bear would have been just as effective; the chat was just a vehicle for checking through the logic from a new angle. The same might apply to interactive conversation with a bot. Sometimes lonely people can talk too much about problems when they finally meet people, and that can act as a deterrent to future encounters, so that barrier would also be reduced. All in all, having a bot might make lonely people more able to get and sustain good quality social interactions with real people, and make friends.

Another benefit that has nothing to do with loneliness is that giving a computer voice instructions forces people to think clearly and phrase their requests correctly, just like writing a short computer program. In a society where so many people don’t seem to think very clearly, or, even if they can, often can’t express what they want clearly, this will give some much needed training.

Chatbots could also offer challenges to people’s thinking, even to help counter extremism. If people make comments that go against acceptable social attitudes or against known facts, a bot could present the alternative viewpoint, probably more patiently than another human who finds such viewpoints frustrating. I’d hate to see this as a means to police political correctness, though it might well be used in such a way by some providers, but it could improve people’s lack of understanding of even the most basic science, technology, culture or even politics, so has educational value. Even if it doesn’t convert people, it might at least help them to understand their own views more clearly and be better practiced at communicating their arguments.

Chat bots could make a significant contribution to society. They are just machines, but those machines are tools that can help other people, and society as a whole, more effectively.

AI presents a new route to attack corporate value

As AI increases in corporate, social, economic and political importance, it is becoming a big target for activists, and I think there are too many vulnerabilities. I think we should be seeing a lot more articles than we are about what developers are doing to guard against deliberate misdirection or corruption, and there is already far too much enthusiasm for making AI open source, thereby giving mischief-makers the means to identify weaknesses.

I’ve written hundreds of times about AI and believe it will be a benefit to humanity if we develop it carefully. Current AI systems are not vulnerable to the terminator scenario, so we don’t have to worry about that happening yet. AI can’t yet go rogue and decide to wipe out humans by itself, though future AI could, so we’ll soon need to take care with every step.

AI can be used in multiple ways by humans to attack systems.

First and most obvious, it can be used to enhance malware such as trojans or viruses, or to optimize denial of service attacks. AI enhanced security systems already battle against adaptive malware and AI can probe systems in complex ways to find vulnerabilities that would take longer to discover via manual inspection. As well as AI attacking operating systems, it can also attack AI by providing inputs that bias its learning and decision-making, giving AI ‘fake news’ to use current terminology. We don’t know the full extent of secret military AI.

Computer malware will grow in scope to address AI systems to undermine corporate value or political campaigns.

A new route to attacking corporate AI, and hence the value of the company that depends on it, is already starting to appear though. As companies such as Google try out AI-driven cars, or others try out pavement/sidewalk delivery drones, so mischievous people are already developing devious ways to misdirect or confuse them. Kids will soon have such activity as hobbies. Deliberate deception of AI is much easier when people know how it works, and although it’s nice for AI companies to put their AI stuff out there into the open source markets for others to use to build theirs, that does rather steer future systems towards a mono-culture of vulnerability types. A trick that works against one future AI in one industry might well be adaptable to another use in another industry with a little devious imagination. Let’s take an example.

If someone builds a robot to deliberately step in front of a self-driving car every time it starts moving again, that might bring traffic to a halt, but police could quickly confiscate the robot, and they are expensive, a strong deterrent even if the pranksters are hiding and can’t be found. Cardboard cutouts might be cheaper though, even ones with hinged arms to look a little more lifelike. A social media orchestrated campaign against a company using such cars might involve thousands of people across a country or city deliberately waiting until the worst time to step out into a road when one of their vehicles comes along, thereby creating a sort of denial of service attack with that company seen as the cause of massive inconvenience for everyone. Corporate value would obviously suffer, and it might not always be very easy to circumvent such campaigns.

Similarly, the wheeled delivery drones we’ve been told to expect delivering packages any time soon will also have cameras to allow them to avoid bumping into objects or little old ladies or other people, or cats or dogs or cardboard cutouts or carefully crafted miniature tank traps or diversions or small roadblocks that people and pets can easily step over but drones can’t, that the local kids have built from a few twigs or cardboard from a design that has become viral that day. A few campaigns like that with the cold pizzas or missing packages that result could severely damage corporate value.

AI behind websites might also be similarly defeated. An early experiment in making a Twitter chat-bot that learns how to tweet by itself was quickly encouraged by mischief-makers to start tweeting offensively. If people have some idea how an AI is making its decisions, they will attempt to corrupt or distort it to their own ends. If it is heavily reliant on open source AI, then many of its decision processes will be known well enough for activists to develop appropriate corruption tactics. It’s not too early to predict that the proposed AI-based attempts by Facebook and Twitter to identify and defeat ‘fake news’ will fall right into the hands of people already working out how to use them to smear opposition campaigns with such labels.
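The mechanism behind that Twitter incident is easy to sketch. Here is a deliberately naive toy (the class and its structure are my own invention, not anyone’s real system) showing how a bot that learns from raw user input hands control of its output to whoever shouts loudest:

```python
from collections import Counter

class NaiveLearningBot:
    """Toy bot that learns replies straight from user messages,
    with no filtering or moderation at all."""
    def __init__(self):
        self.phrases = Counter()

    def learn(self, message):
        self.phrases[message] += 1

    def reply(self):
        # Parrots back whatever it has seen most often
        return self.phrases.most_common(1)[0][0]

bot = NaiveLearningBot()
for msg in ["hello", "nice day", "hello"]:
    bot.learn(msg)
# A coordinated campaign floods it with one phrase...
for _ in range(100):
    bot.learn("offensive slogan")
print(bot.reply())  # prints "offensive slogan"
```

Real systems are far more sophisticated, but the weakness is the same: if attackers know the learning rule, they know exactly what input to supply.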

It will be a sort of arms race of course, but I don’t think we’re seeing enough about this in the media. There is a great deal of hype about the various AI capabilities, a lot of doom-mongering about job cuts (and a lot of reasonable warnings about job cuts too) but very little about the fight back against AI systems by attacking them on their own ground using their own weaknesses.

That looks to me awfully like there isn’t enough awareness of how easily they can be defeated by deliberate mischief or activism, and I expect to see some red faces and corporate account damage as a result.

PS

This article appeared yesterday that also talks about the bias I mentioned: https://techcrunch.com/2016/12/10/5-unexpected-sources-of-bias-in-artificial-intelligence/

Since I wrote this blog, I was asked via Linked-In to clarify why I said that Open Source AI systems would have more security risk. Here is my response:

I wasn’t intending to heap fuel on a dying debate (though since current debate looks much the same as it did in the early 1990s, it is dying slowly). I like and use open source too, and I should have explained my reasoning better. Open source does facilitate checking: in regular (algorithmic) code, programming error rates should be similar, so increasing the number of people checking should cancel out the risk from more contributors, and there should be no a priori difference between open and closed. However:

In deep learning, obscurity reappears via neural net weightings being less intuitive to humans. That provides a tempting hiding place.

AI foundations are vulnerable to group-think, where team members share similar world models. These prejudices will affect the nature of open source (OS) and closed source (CS) code, and result in AI with inherent and subtle judgment biases, which will be less easy to spot than bugs and more visible to people with alternative world models. Those people are more likely to exist in an OS pool than a CS pool, and more likely to be opponents, so will not share their results.

Deep learning may show the equivalent of political biases (or masculine and feminine ones). As well as encouraging group-think, that also distorts the distribution of biases, and therefore the cancelling out of errors can no longer be assumed.

Human factors in defeating security often work better than exploiting software bugs. Some of the deep learning AI is designed to mimic humans as well as possible in thinking and in interfacing. I suspect that might also make them more vulnerable to meta-human-factor attacks. Again, exposure to different and diverse cultures will show a non-uniform distribution of error/bias spotting/disclosure/exploitation.

Deep learning will become harder for humans to understand as it develops and becomes more machine dependent. That will amplify the above weaknesses. Think of optical illusions that greatly distort human perception and think of similar in advanced AI deep learning. Errors or biases that are discovered will become more valuable to an opponent since they are less likely to be spotted by others, increasing their black market exploitation risk.

I have not been a programmer for over 20 years and am no security expert so my reasoning may be defective, but at least now you know what my reasoning was and can therefore spot errors in it.

Can we automate restaurant reviews?

Reviews are an important part of modern life. People often consult reviews before buying things, visiting a restaurant or booking a hotel. There are even reviews on the best seats to choose on planes. When reviews are honestly given, they can be very useful to potential buyers, but what if they aren’t honestly given? What if they are glowing reviews written by friends of the restaurant owners, or scathing reviews written by friends of the competition? What if the service received was fine, but the reviewer simply didn’t like the race or gender of the person delivering it? Many reviews fall into these categories, but of course we can’t be sure how many, because when someone writes a review, we don’t know whether they were being honest or not, or whether they are biased or not. Adding a category of automated reviews would add credibility, provided the technology is independent of the establishment concerned.

Face recognition software is now so good that it can read lips better than human lip reading experts. It can be used to detect emotions too, distinguishing smiles or frowns, and whether someone is nervous, stressed or relaxed. Voice recognition can discern not only words but changes in pitch and volume that might indicate their emotional context. Wearable devices can also detect emotions such as stress.

Given this wealth of technology capability, cameras and microphones in a restaurant could help verify human reviews and provide machine reviews. Using the check-in process, it could identify members of a group that might later submit a review, and then compare their review with video and audio records of the visit to determine whether it seems reasonably true. This could be done by machine using analysis of gestures, chat and facial expressions. If the person giving a poor review looked unhappy with the taste of the food while they were eating it, then it is credible. If their facial expression was one of sheer pleasure and the review said it tasted awful, then that review could be marked as not credible, and furthermore, other reviews by that person could be called into question too. In fact, guests would in effect be given automated reviews of their credibility. Over time, a trust rating would accrue, that could be used to group other reviews by credibility rating.
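As a sketch of how such a credibility check might work (the function, scales and threshold here are entirely my own invention), map the star rating onto the same -1 to +1 scale as a per-frame emotion score, and flag reviews where the two disagree badly:

```python
def review_credible(stars, emotion_scores, tolerance=0.5):
    """stars: 1-5 rating from the written review.
    emotion_scores: per-frame scores from camera analysis,
    -1 (frowning/disgust) to +1 (smiling/pleasure)."""
    observed = sum(emotion_scores) / len(emotion_scores)
    expected = (stars - 3) / 2  # map 1..5 stars onto -1..+1
    return abs(expected - observed) <= tolerance

# A 1-star review from a diner who visibly enjoyed every course:
print(review_credible(1, [0.8, 0.9, 0.7]))    # False - flag it
# The same review with matching frowns on camera:
print(review_credible(1, [-0.6, -0.9, -0.7]))  # True - credible
```

A running trust rating could then accrue simply by averaging the results of this check over a guest’s review history.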

Totally automated reviews could also be produced, by analyzing facial expressions, conversations and gestures across a whole restaurant full of people. These machine reviews would be processed in the cloud by trusted review companies and could give star ratings for restaurants. They could even take into account what dishes people were eating to give ratings for each dish, as well as more general ratings for entire chains.

Service could also be automatically assessed to some degree. How long were people there before they were greeted, served, asked for their orders, or had their food delivered? The conversation could even be automatically transcribed in many cases, so comments about rudeness or mistakes could be verified.

Obviously there are many circumstances where this would not work, but there are many where it could, so AI might well become an important player in the reviews business. At a time when restaurants are closing due to malicious bad reviews, or ripping people off in spite of poor quality thanks to dishonest positive reviews, this might help a lot. A future where people are forced to be more honest in their reviews, because they know that AI review checking could damage their reputation if they are found to have been dishonest, might cause some people to avoid reviewing altogether, but it could improve the reliability of the reviews that still do happen.

Still not perfect, but it could be a lot better than today, where you rarely know how much a review can be trusted.

Future Augmented Reality

AR has been hot on the list of future IT tech for 25 years. It has been used for various things since smartphones and tablets appeared but really hit the big time with the recent Pokemon craze.

To get an idea of the full potential of augmented reality, recognize that the web and all its impacts on modern life came from the convergence of two medium sized industries – telecoms and computing. Augmented reality will involve the convergence of everything in the real world with everything in the virtual world, including games, media, the web, art, data, visualization, architecture, fashion and even imagination. That convergence will be enabled by ubiquitous mobile broadband, cloud, blockchain payments, IoT, positioning and sensor tech, image recognition, fast graphics chips, display and visor technology and voice and gesture recognition plus many other technologies.

Just as you can put a Pokemon on a lawn, so you could watch aliens flying around in spaceships or cartoon characters or your favorite celebs walking along the street among the other pedestrians. You could just as easily overlay alternative faces onto the strangers passing by.

People will often want to display an avatar to people looking at them, and that could be different for every viewer. That desire competes with the desire of the viewer to decide how to see other people, so there will be some battles over who controls what is seen. Feminists will certainly want to protect women from the obvious objectification that would follow if a woman can’t control how she is seen. In some cases, such objectification and abuse could even reach into hate crime territory, with racist, sexist or homophobic virtual overlays. All this demands control, but it is far from obvious where that control would come from.

As for buildings, they too can have a virtual appearance. Virtual architecture will show off architect visualization skills, but will also be hijacked by the marketing departments of the building residents. In fact, many stakeholders will want to control what you see when you look at a building. The architects, occupants, city authorities, government, mapping agencies, advertisers, software producers and games designers will all try to push appearances at the viewer, but the viewer might want instead to choose to impose one from their own offerings, created in real time by AI or from large existing libraries of online imagery, games or media. No two people walking together on a street would see the same thing.

Interior decor is even more attractive as an AR application. Someone living in a horrible tiny flat could enhance it using AR to give the feeling of far more space and far prettier decor and even local environment. Virtual windows onto Caribbean beaches may be more attractive than looking at mouldy walls and the office block wall that are physically there. Reality is often expensive but images can be free.

Even fashion offers a platform for AR enhancement. An outfit might look great on a celebrity but real life shapes might not measure up. Makeovers take time and money too. In augmented reality, every garment can look as it should, and so can the makeup. The hardest part will be choosing from the large number of virtual outfits and makeups to go with the smaller range of actual physical appearances available from that wardrobe.

Gaming is in pole position, because 3D world design, imagination, visualization and real time rendering technology are all games technology, so perhaps the biggest surprise in the Pokemon success is that it was the first to really grab attention. People could by now be virtually shooting aliens or hordes of zombies coming up the escalators as they wait for their partners. They are a little late, but such widespread use of personal or social gaming on city streets and in malls will come soon.

AR Visors are on their way too, and though the first offerings will be too expensive to achieve widespread adoption, cheaper ones will quickly follow. The internet of things and sensor technology will create abundant ground-up data to make a strong platform. As visors fall in price, so too will the size and power requirements of the processing needed, though much can be cloud-based.

It is a fairly safe bet that marketers will try very hard to force images at us and if they can’t do that via blatant in-your-face advertising, then product placement will become a very fine art. We should expect strong alliances between the big marketing and advertising companies and top games creators.

As AI simultaneously develops, people will be able to generate a lot of their own overlays, explaining to AI what they’d like and having it produced for them in real time. That would undermine marketing use of AR so again there will be some battles for control. Just as we have already seen owners of landmarks try to trademark the image of their buildings to prevent people including them in photographs, so similar battles will fill the courts over AR. What is to stop someone superimposing the image of a nicer building on their own? Should they need to pay a license to do so? What about overlaying celebrity faces on strangers? What about adding multimedia overlays from the web to make dull and ordinary products do exciting things when you use them? A cocktail served in a bar could have a miniature Sydney fireworks display going on over it. That might make it more exciting, but should the media creator be paid and how should that be policed? We’ll need some sort of AR YouTube at the very least with added geolocation.

The whole arts and media industry will see city streets as galleries and stages on which to show off and sell their creations.

Public services will make more mundane use of AR. Simple everyday context-dependent signage is one application, but overlays would be valuable in emergencies too. If police or fire services could superimpose warnings on the visors of everyone nearby, that might help save lives in emergencies. Health services will use AR to assist ordinary people to care for a patient until an ambulance arrives.

Shopping provides more uses and more battles. AR will show you what a competing shop has on offer right beside the one in front of you. That will make it easy to digitally trespass on a competitor’s shop floor. People can already do that on their smartphone, but AR will put the full image large as life right in front of your eyes to make it very easy to compare two things. Shops won’t want to block comms completely because that would prevent people wanting to enter their shop at all, so they will either have to compete harder or find more elaborate ways of preventing people making direct visual comparisons in-store. Perhaps digital trespassing might become a legal issue.

There will inevitably be a lot of social media use of AR too. If people get together to demonstrate, it will be easier to coordinate them. If police insist they disperse, they could still congregate virtually. Dispersed flash mobs could be coordinated as much as ones in the same location. That makes AR a useful tool for grass-roots democracy, especially demonstrations and direct action, but it also provides a platform for negative uses such as terrorism. Social entrepreneurs will produce vast numbers of custom overlays for millions of different purposes and contexts. Today we have tens of millions of websites and apps. Tomorrow we will have even more AR overlays.

These are just a few of the near term uses of augmented reality and a few hints as issues arising. It will change every aspect of our lives in due course, just as the web has, but more so.

Shoulder demons and angels

Remember the cartoons where a character would have a tiny angel on one shoulder telling them the right thing to do, and a little demon on the other telling them it would be far more cool to be nasty somehow, e.g. get their own back, be selfish, greedy. The two sides might be ‘eat your greens’ v ‘the chocolate is much nicer’, or ‘your mum would be upset if you arrive home late’ v ‘this party is really going to be fun soon’. There are a million possibilities.

Shoulder angels

Enter artificial intelligence, which is approaching conversation level, and knows the context of your situation, and your personal preferences etc, coupled to an earpiece in each ear, available from the cloud of course to minimise costs. If you really insisted, you could make cute little Bluetooth angels and demons to do the job properly.

In fact Sony have launched Xperia Ear, which does the basic admin assistant part of this, telling you diary events etc. All we need is an expansion of its domain, and of course an opposing view. ‘Sure, you have an appointment at 3, but that person you liked is in town, you could meet them for coffee.’

The little 3D miniatures could easily incorporate the electronics. Either you add an electronics module after manufacture into a small specially shaped recess or one is added internally during printing. You could have an avatar of a trusted friend as your shoulder angel, and maybe one of a more mischievous friend who is sometimes more fun as your shoulder demon. Of course you could have any kind of miniature pets or fictional entities instead.

With future materials, and of course AR, these little shoulder accessories could be great fun, and add a lot to your overall outfit, both in appearance and as conversation add-ons.

State of the world in 2050

Some things are getting better, some worse. 2050 will be neither dystopian nor utopian. A balance of good and bad not unlike today, but with different goods and bads, and slightly better overall. More detail? Okay, for most of my followers, this will mostly collate things you may know already, but there’s no harm in a refresher Futures 101.

Health

We will have cost-effective and widespread cures or control for most cancers, heart disease, diabetes, dementia and most other killers. Quality-of-life diseases such as arthritis will also be controllable or curable. People will live longer and remain healthier for longer, with an accelerated decline at the end.

On the bad side, new diseases will exist, including mutated antibiotic-resistant versions of existing ones. There will still be occasional natural flu mutations and other viruses, and there will still be others arising from contacts between people and other animals, spreading more easily thanks to increased population, urbanization and better mobility. Some previously rare diseases will become big problems for the same reasons.

However, diagnostics will be faster and better, we will no longer be so reliant on antibiotics to fight back, and sterilisation techniques for hospitals will be much improved. So even with greater challenges, we will be able to cope fine most of the time with occasional headlines from epidemics.

A darker side is the increasing prospect for bio-terrorism, with man-made viruses deliberately designed to be highly lethal, very contagious and to withstand most conventional defenses, optimized for maximum and rapid spread by harnessing mobility and urbanization. With pretty good control or defense against most natural threats, this may well be the biggest cause of mass deaths in 2050. Bio-warfare is far less likely.

Utilizing other techs, these bio-terrorist viruses could be deployed by swarms of tiny drones that would be hard to spot until too late, and of course these could also be used with chemical weapons such as nerve gas. Another tech-based health threat is nanotechnology devices designed to invade the body, damage or destroy systems, or even control the brain. It is easy to detect and shoot down macro-scale deployment weapons such as missiles or large drones, but far harder to defend against tiny devices such as midge-sized drones or nanotech devices.

The overall conclusion on health is that people will mostly experience much improved lives with good health, long life and a rapid end. A relatively few (but very conspicuous) people will fall victim to terrorist attacks, made far more feasible and effective by changing technology and demographics.

Loneliness

An often-overlooked benefit of increasing longevity is the extending multi-generational family. It will be commonplace to have great grandparents and great-great grandparents. With improved health until near their end, these older people will be seen more as welcome and less as a burden. This advantage will be partly offset by increasing global mobility, so families are more likely to be geographically dispersed.

Not everyone will have close family to enjoy and to support them. Loneliness is increasing even as we get busier, fuller lives. Social inclusion depends on a number of factors, and some of those at least will improve. Public transport that depends on an elderly person walking 15 minutes to a bus stop where they have to wait ages in the rain and wind for a bus on which they are very likely to catch a disease from another passenger is really not fit for purpose. Such primitive and unsuitable systems will be replaced in the next decades by far more socially inclusive self-driving cars. Fleets of these will replace buses and taxis. They will pick people up from their homes and take them all the way to where they need to go, then take them home when needed. As well as being very low cost and very environmentally friendly, they will also have almost zero accident rates and provide fast journey times thanks to very low congestion. Best of all, they will bring easier social inclusion to everyone by removing the barriers of difficult, slow, expensive and tedious journeys. It will be far easier for a lonely person to get out and enjoy cultural activity with other people.

More intuitive social networking, coupled to augmented and virtual reality environments in which to socialize will also mean easier contact even without going anywhere. AI will be better at finding suitable companions and lovers for those who need assistance.

Even so, some people will not benefit and will remain lonely due to other factors such as poor mental health, lack of social skills, or geographic isolation. They still do not need to be alone. 2050 will also feature large numbers of robots and AIs, and although these might not be quite so valuable to some as other human contact, they will be a pretty good substitute. Although many will be functional, cheap and simply fit for purpose, those designed for companionship or home support functions will very probably look human and behave human. They will have good intellectual and emotional skills and will be able to act as a very smart executive assistant as well as domestic servant and as a personal doctor and nurse, even as a sex partner if needed.

It would be too optimistic to say we will eradicate loneliness by 2050 but we can certainly make a big dent in it.

Poverty

Technology progress will greatly increase the size of the global economy. Even with the odd recession, our children will be far richer than our parents. It is reasonable to expect the total economy to be 2.5 times bigger than today's by 2050. That just assumes average growth of about 2.5%, which I think is a reasonable estimate given that technology benefits are accelerating rather than slowing, even in spite of the recent recession.
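The estimate above is just compound growth; a minimal sketch, assuming the ~2.5% average rate and the 2016–2050 horizon as the only inputs:

```python
def compound_growth(rate, years):
    """Relative size of the economy after compounding annual growth."""
    return (1 + rate) ** years

# 2016 to 2050 is 34 years at roughly 2.5% average annual growth
multiple = compound_growth(0.025, 34)
print(round(multiple, 2))  # about 2.3, in the ballpark of the ~2.5x figure
```

A slightly higher average rate (about 2.7%) would hit 2.5x exactly, which is why the round figure is a reasonable rule of thumb rather than a precise forecast.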

As long as we define the poverty level as a percentage of average income, poverty is guaranteed to remain even if everyone lives like royalty. If average income were a million dollars per year, 60% of that would make you rich by any sensible definition but would still qualify as poverty by the ludicrous definition based on relative income used in the UK and some other countries. At some point we need to stop calling people poor if they can afford healthy food, pay everyday bills, buy decent clothes, have a decent roof over their heads and have an occasional holiday. With the global economy improving so much and so fast, and with people having far better access to markets via networks, it will be far easier for people everywhere to earn enough to live comfortably.
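To make the relative-income point concrete, here is a minimal sketch of the UK-style measure, where anyone below 60% of the median counts as poor (the incomes are invented purely for illustration):

```python
def relative_poverty_rate(incomes, fraction=0.6):
    """Share of people below a relative threshold (e.g. 60% of median income)."""
    incomes = sorted(incomes)
    n = len(incomes)
    median = incomes[n // 2] if n % 2 else (incomes[n // 2 - 1] + incomes[n // 2]) / 2
    threshold = fraction * median
    return sum(1 for x in incomes if x < threshold) / n

# Everyone earns a royal income, yet one in five still counts as "poor"
royal_incomes = [500_000, 900_000, 1_000_000, 1_500_000, 3_000_000]
print(relative_poverty_rate(royal_incomes))  # 0.2
```

Double every income and the rate is unchanged, which is exactly the objection: the measure tracks inequality, not hardship.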

In most countries, welfare will be able to provide a decent level of support for those who can't easily look after themselves. The ongoing globalization of compassion that we see today will likely create a global welfare net by 2050. Everyone won't be rich, and some won't even be very comfortable, but I believe absolute poverty will be eliminated in most countries, and we can ensure that it will be possible for most people to live in dignity. I think the means, motive and opportunity will make that happen, but it won't reach everyone. Some people will live under dysfunctional governments that prevent their people having access to support that would otherwise be available to them. Hopefully not many. Absolute poverty by 2050 won't be history, but it will be rare.

In most developed countries, the more generous welfare net might extend to providing a ‘citizen wage’ for everyone, and the level of that could be the same as average wage is today. No-one need be poor in 2050.

Environment

The environment will be in good shape in 2050. I have no sympathy with doom mongers who predict otherwise. As our wealth increases, we tend to look after the environment better. As technology improves, we will achieve a far higher standard of living while looking after the environment. Better mining techniques will allow more reserves to become economic, we will need fewer resources to do the same job better, and reuse and recycling will make more use of the same material.

Short term nightmares such as China’s urban pollution levels will be history by 2050. Energy supply is one of the big contributors to pollution today, but by 2050, combinations of shale gas, nuclear energy (uranium and thorium), fusion and solar energy will make up the vast bulk of energy supply. Oil and unprocessed coal will mostly be left in the ground, though bacterial conversion of coal into gas may well be used. Oil that isn’t extracted by 2030 will be left there, too expensive compared to making the equivalent energy by other means. Conventional nuclear energy will also be on its way to being phased out due to cost. Energy from fusion will only be starting to come on stream everywhere but solar energy will be cheap to harvest and high-tech cabling will enable its easier distribution from sunny areas to where it is needed.

It isn’t too much to expect of future governments that they should be able to negotiate that energy should be grown in deserts, and food crops grown on fertile land. We should not use fertile land to place solar panels, nor should we grow crops to convert to bio-fuel when there is plenty of sunny desert of little value otherwise on which to place solar panels.

With proper stewardship of agricultural land, together with various other food production technologies such as hydroponics, vertical farms and a lot of meat production via tissue culturing, there will be more food per capita than today even with a larger global population. In fact, with a surplus of agricultural land, some might well be returned to nature.

In forests and other ecosystems, technology will also help enormously in monitoring eco-health, and technologies such as genetic modification might be used to improve the viability of some species otherwise threatened.

Anyone who reads my blog regularly will know that I don’t believe climate change is a significant problem in the 2050 time frame, or even this century. I won’t waste any more words on it here. In fact, if I have to say anything, it is that global cooling is more likely to be a problem than warming.

Food and Water

As I just mentioned in the environment section, we will likely use deserts for energy supply and fertile land for crops. Improving efficiency and density will ensure there is far more capability to produce food than we need. Many people will still eat meat, but some at least will be produced in factories using processes such as tissue culturing. Meat pastes with assorted textures can then be used to create a variety of forms of processed meats. That might even happen in home kitchens using 3D printer technology.

Water supply has often been predicted by futurists as a cause of future wars, but I disagree. I think that progress in desalination is likely to be very rapid now, especially with new materials such as graphene likely to come on stream in bulk. With easy and cheap desalination, water supply should be adequate everywhere, and although there may be arguments over rivers, I don't think the pressures are sufficient by themselves to cause wars.

Privacy and Freedom

In 2016, we’re seeing privacy fighting a losing battle for survival. Government increases surveillance ubiquitously and demands more and more access to data on every aspect of our lives, followed by greater control. It invariably cites the desire to control crime and terrorism as the excuse and as they both increase, that excuse will be used until we have very little privacy left. Advancing technology means that by 2050, it will be fully possible to implement thought police to check what we are thinking, planning, desiring and make sure it conforms to what the authorities have decided is appropriate. Even the supposed servant robots that live with us and the AIs in our machines will keep official watch on us and be obliged to report any misdemeanors. Back doors for the authorities will be in everything. Total surveillance obliterates freedom of thought and expression. If you are not free to think or do something wrong, you are not free.

Freedom is strongly linked to privacy. With laws in place and the means to police them in depth, freedom will be limited to what is permitted. Criminals will still find ways to bypass, evade, masquerade, block and destroy, and it is hard not to believe that criminals will remain free to continue doing what they do, while law-abiding citizens are kept under strict supervision. Criminals will be free while the rest of us live in a digital open prison.

Some say if you don’t want to do wrong, you have nothing to fear. They are deluded fools. With full access to historic electronic records going back to now or earlier, it is not only today’s laws and guidelines that you need to be compliant with but all the future paths of the random walk of political correctness. Social networks can be fiercer police than the police and we are already discovering that having done something in the distant past under different laws and in different cultures is no defense from the social networking mobs. You may be free technically to do or say something today, but if it will be remembered for ever, and it will be, you also need to check that it will probably always be praiseworthy.

I can’t counterbalance this section with any positives. I’ve said before that with all the benefits we can expect, we will end up with no privacy, no freedom, and the future will be a gilded cage.

Science and the arts

Yes they do go together. Science shows us how the universe works and how to do what we want. The arts are what we want to do. Both will flourish. AI will help accelerate science across the board, with a singularity actually spread over decades. There will be human knowledge but a great deal more machine knowledge which is beyond un-enhanced human comprehension. However, we will also have the means to connect our minds to the machine world to enhance our senses and intellect, so enhanced human minds will be the norm for many people, and our top scientists and engineers will understand it. In fact, it isn’t safe to develop in any other way.

Science and technology advances will improve sports too, with exoskeletons, safe drugs, active skin training acceleration and virtual reality immersion.

The arts will also flourish. Self-actualization through the arts will make full use of AI assistance. A feeble idea enhanced by an AI assistant can become a work of art, a masterpiece. Whether it be writing or painting, music or philosophy, people will be able to do more, enjoy more, appreciate more, be more. What’s not to like?

Space

By 2050, space will be a massive business across several industries. Space tourism will include short sub-orbital trips right up to lengthy stays in space hotels, and maybe on the moon, for the super-rich at least.

Meanwhile asteroid mining will be under way. Some have predicted that this will end resource problems here on Earth, but firstly, there won’t be any resource problems here on Earth, and secondly and most importantly, it will be far too expensive to bring materials back to Earth, and almost all the resources mined will be used in space, to make space stations, vehicles, energy harvesting platforms, factories and so on. Humans will be expanding into space rapidly.

Some of these factories and vehicles and platforms and stations will be used for science, some for tourism, some for military purposes. Many will be used to offer services such as monitoring, positioning, communications just as today but with greater sophistication and detail.

Space will be more militarized too. We can hope that it will not be used in actual war, but I can’t honestly predict that one way or the other.


Migration

If the world around you is increasingly unstable, if people are fighting, if times are very hard and government is oppressive, and if there is a land of milk and honey not far away that you can get to, where you can hope for a much better, more prosperous life, free of tyranny, where instead of being part of the third world, you can be in the rich world, then you may well choose to take the risks and traumas associated with migrating. Increasing population way ahead of increasing wealth in Africa, and a drop in the global need for oil will both increase problems in the Middle East and North Africa. Add to that vicious religious sectarian conflict and a great many people will want to migrate indeed. The pressures on Europe and America to accept several millions more migrants will be intense.

By 2050, these regions will hopefully have ended their squabbles, and some migrants will return to rebuild, but most will remain in their new homes.

Most of these migrants will not assimilate well into their new countries but will mainly form their own communities where they can have a quite separate culture, and they will apply pressure to be allowed to self-govern. A self-imposed apartheid will result. It might, if we are lucky, gradually dissolve as religion becomes less important and the western lifestyle becomes more attractive. However, there is also a reinforcing pressure, with this self-exclusion and geographic isolation resulting in fewer opportunities, less mixing with others and therefore a growing feeling of disadvantage, exclusion and victimization. Tribalism becomes reinforced and opportunities for tension increase. We already see that manifested well in the UK and other European countries.

Meanwhile, much of the world will be prosperous, and there will be many more opportunities for young capable people to migrate and prosper elsewhere. An ageing Europe with too much power held by older people and high taxes to pay for their pensions and care might prove a discouragement to stay, whereas the new world may offer increasing prospects and lowering taxes, and Europe and the USA may therefore suffer a large brain drain.

Politics

If health care is better and cheaper thanks to new tech and becomes less of a political issue; if resources are abundantly available, the economy is healthy, people feel wealthy enough, and resource allocation and wealth distribution become less of a political issue; if the environment is healthy; if global standards of human rights, social welfare and so on are acceptable in most regions; and if people are freer to migrate where they want to go; then there may be a little less for countries to fight over. There will be a little less ‘politics’ overall. Most 2050 political arguments and debates will be over social cohesion, culture, generational issues, rights and so on, not health, defence, environment, energy or industry.

We know from history that that is no guarantee of peace. People disagree profoundly on a broad range of issues other than life’s basic essentials. I’ve written a few times on the increasing divide and tensions between tribes, especially between left and right. I do think there is a strong chance of civil war in Europe or the USA or both. Social media reinforce views as people expose themselves only to others who think the same, and this creates, reinforces and amplifies an us-and-them feeling. That is the main ingredient for conflict, and rather than seeing that and trying to defuse it, we instead see left and right becoming ever more entrenched in their views. The current problems surrounding Islamic migration show the split extremely well. Each side demonizes the other, extreme camps are growing on both sides and the middle ground is eroding fast. Our leaders only make things worse by refusing to acknowledge and address the issues. I suggested in previous blogs that the second half of the century is when tensions between left and right might result in the Great Western War, but that might well be brought forward a decade or two by a long migration from an unstable Middle East and North Africa, which looks set to worsen over the next decade. Internal tensions might build for another decade after that, accompanied by a brain drain of the most valuable people and increasing inter-generational tensions amplifying the left-right divide, with a boil-over in the 2040s. That isn’t to say we won’t see some lesser conflicts before then.

I believe the current tensions between the West, Russia and China will go through occasional ups and downs but the overall trend will be towards far greater stability. I think the chances of a global war will decrease rather than increase. That is just as well since future weapons will be far more capable of course.

So overall, the world peace background will improve markedly, but internal tensions in the West will increase markedly too. The result is that wars between countries or regions will be less likely but the likelihood of civil war in the West will be high.

Robots and AIs

I mentioned robots and AIs in passing in the loneliness section, but they will have strong roles in all areas of life. Many that are thought of simply as machines will act as servants or workers, but many will have advanced levels of AI (not necessarily on board, it could be in the cloud) and people will form emotional bonds with them. Just as important, many such AI/robots will be so advanced that they will have relationships with each other, they will have their own culture. A 21st century version of the debates on slavery is already happening today for sentient AIs even though we don’t have them yet. It is good to be prepared, but we don’t know for sure what such smart and emotional machines will want. They may not want the same as our human prejudices suggest they will, so they will need to be involved in debate and negotiation. It is almost certain that the upper levels of AIs and robots (or androids more likely) will be given some rights, to freedom from pain and abuse, ownership of their own property, a degree of freedom to roam and act of their own accord, the right to pursuit of happiness. They will also get the right to government representation. Which other rights they might get is anyone’s guess, but they will change over time mainly because AIs will evolve and change over time.

OK, I’ve rambled on long enough and I’ve addressed some of the big areas I think. I have ignored a lot more, but it’s dinner time.

A lot of things will be better, some things worse, probably a bit better overall but with the possibility of it all going badly wrong if we don’t get our act together soon. I still think people in 2050 will live in a gilded cage.

2016 – The Bright Side

Having just blogged about some of the bad scenarios for next year (scenarios are just explorations of things that might or could happen, not things that actually will; those are called predictions), Len Rosen’s comment stimulated me to balance it with a nicer look at next year. Some great things will happen, even ignoring the various product release announcements for new gadgets. Happiness lies deeper than the display size on a tablet. Here are some positive scenarios. They might not happen, but they might.

1 Middle East sorts itself out.

The new alliance formed by Saudi Arabia turns out to be a turning point. Rising Islamophobia caused by Islamists around the world has sharpened the view of ISIS and the trouble in Syria, with its global consequences for Islam and even potentially for world peace. With the understanding that it could get even worse, but that Western powers can’t fix trouble in Muslim lands for fear of backlash, the whole of the Middle East starts to understand that they need to sort out their tribal and religious differences to achieve regional peace, for the benefit of Muslims everywhere. Proper discussions are arranged, and with the knowledge that a positive outcome must be achieved, success means a strong alliance of almost all regional powers, with ISIS and other extremist groups ostracized, then a common army organised to tackle and defeat them.

2 Quantum computation and AI starts to prove useful in new drug design

Google’s wealth and effort with its quantum computers and AI, coupled to IBM’s Watson, the AI efforts of Facebook, Apple and Samsung, and Elon Musk’s new investment in OpenAI, drive a positive feedback loop in computing. With massive returns on the horizon from making people’s lives easier, and with ever-present fears of Terminator in the background, the primary focus is to demonstrate what it could mean for mankind. Consequently, huge effort and investment is focused on creating new drugs to cure cancer and AIDS and on finding generic replacements for antibiotics. Any one of these would be a major success for humanity.

3 Major breakthrough in graphene production

Graphene is still the new wonder-material. We can’t make it in large quantities cheaply yet, but the range of potential uses already proven for it is vast. If a breakthrough brings production cost down by an order of magnitude or two, then many of those uses will become achievable. We will be able to deliver clean and safe water to everyone, and we’ll have super-strong materials, ultra-fast electronics, active skin, better drug delivery systems, floating pods, and super-capacitors that charge instantly as electric cars drive over a charging unit on the road surface, making batteries unnecessary. There could even be linear induction motor mats to replace self-driving cars with ultra-cheap driverless pods. If the breakthrough is big enough, it could even start efforts towards a space elevator.

4 Drones

Tiny and cheap drones could help security forces to reduce crime dramatically. Ignoring for now possible abuse of surveillance, being able to track terrorists and criminals in 3D far better than today will make the risk of being caught far greater. Tiny pico-drones dropped over Syria and Iraq could pinpoint locations of fighters so that they can be targeted while protecting innocents. Environmental monitoring would also benefit if billions of drones can monitor ecosystems in great detail everywhere at the same time.

5 Active contact lens

Google has already prototyped a very primitive version of the active contact lens, but they have been barking up the wrong tree. If they dump the 1-LED-per-pixel approach, which isn’t scalable, and opt for the far better approach of using three lasers and a micro-mirror, then they could build a working active contact lens with unlimited resolution. One in each eye, with an LCD layer overlaid, and you have a full 3D variably-transparent interface for augmented reality or virtual reality. Other displays such as smart watches become unnecessary, since they can all be rendered virtually in an ultra-high-res image. All the expense and environmental impact of other displays is suddenly replaced by a cheap high-res display with an environmental footprint approaching zero. Augmented reality takes off and the economy springs back to life.

6 Star Wars stimulates renewed innovation

Engineers can’t watch a film without making at least three new inventions. A lot of things in Star Wars are entirely feasible – I have invented and documented mechanisms to make both a light saber and the land speeder. Millions of engineers have invented some way of doing holographic characters. In a world that seems full of trouble, we are fortunate that some of the super-rich we criticise for not paying as much tax as we’d like are also extremely good engineers with the cash to back up their visions with real progress. Natural competitiveness to make the biggest contribution to humanity will do the rest.

7 Europe fixes itself

The UK is picking the lock on the exit door, and others are queuing behind. The ruling bureaucrats finally start to realize that they won’t get their dream of a United States of Europe in quite the way they hoped, and that their existing dream is in danger of collapse due to a mismanaged migrant crisis. Consequently the UK renegotiation stimulates a major new treaty discussion, where all the countries agree what their people really want out of the European project, rather than just a select few. The result is a reset. A new, more democratic European dream emerges that the vast majority of people actually want. Agreement on progress to sort out the migrant crisis is a good test, and after that a stronger, better, more vibrant Europe starts to emerge from the ashes with renewed vigor and a rapidly recovering economy.

8 Africa rearranges boundaries to get tribal peace

Breakthrough in the Middle East ripples through North Africa, resulting in the beginnings of stability in some countries. Realizing that tribal conflicts won’t easily go away, and that peace brings prosperity, governments renegotiate boundaries so that different peoples can live in and govern their own territories. Treaties agree fair access to resources independent of location.

9 The Sahara becomes Europe’s energy supply

With stable politics finally on the horizon, energy companies re-address the idea of using the Sahara as a solar farm. Local people earn welcome money by looking after the panels, keeping them clean and in working order, bringing prosperity that was previously beyond them. Much of this money in turn is used to purify water, irrigating deserts and greening them, improving the food supply while improving the regional climate and fixing large quantities of CO2. Poverty starts to reduce as the environment improves. Much of this is replicated in Central and South America.

10 World Peace emerges

By fighting alongside in the Middle East and managing to avoid World War 3, a very positive relationship between Russia and the West emerges. China meanwhile, makes some of the energy breakthroughs needed to get solar efficiency and cost down below oil cost. This forces the Middle East to also look Westward for new markets and to add greater drive to their regional peace efforts to avoid otherwise inevitable collapse. Suddenly a world that was full of wars becomes one where all countries seem to be getting along just fine, all realizing that we only have this one world and one life and we’d better not ruin it.

2016: The Dark Side

Bloomberg reports the ‘Pessimists guide to the world in 2016’, by Flavia Krause-Jackson, Mira Rojanasakul, and John Fraher.

http://www.bloomberg.com/graphics/pessimists-guide-to-2016/

Excellent stuff. A healthy dose of realism to counter the spin and gloss and outright refusals to notice things that don’t fit the agenda that we so often expect from today’s media. Their entries deserve some comment, and I’ll add a few more. I’m good at pessimism.

Their first entry is oil reaching $100 a barrel as ISIS blows up oil fields. Certainly possible, though they also report the existing oil glut: http://www.bloomberg.com/news/articles/2015-12-17/shale-drillers-are-now-free-to-export-u-s-oil-into-global-glut

Just because the second option is the more likely does not invalidate the first as a possible scenario, so that entry is fine.

An EU referendum in June is their 2nd entry. Well, that will only happen if Cameron gets his way and the EU agrees sufficient change to make the referendum result more likely to end in a Yes. If there is any hint of a No, it will be postponed as far as possible to give politics time to turn the right way. Let’s face facts. When Ukraine held its referendum, the entire process was completed within two weeks. If the Conservatives genuinely wanted a referendum on Europe, it would have happened years ago. The Conservatives make frequent promises to do the Conservative thing very loudly, and then quietly do the Labour thing and hope nobody notices. Osborne promised to cut the deficit but, faced with the slightest objections from the media, performed a text-book U-turn. That follows numerous U-turns on bin collections, speed cameras, wheel clamping, environment, surveillance, immigration, pensions, fixing the NHS…. I therefore think he will spin the EU talks as far as possible to pretend that tiny promises to think about the possibility of reviewing policies are the same as winning guarantees of major changes. Nevertheless, an ongoing immigration flood and assorted Islamist problems are increasing the No vote rapidly, so I think it far more likely that the referendum will be postponed.

The 3rd is banks being hit by a massive cyber attack. Very possible, even quite likely.

4th, EU crumbles under immigration fears. Very likely indeed. Schengen will be suspended soon and increasing Islamist violence will create increasing hostility to the migrant flow. Forcing countries to accept a proportion of the pain caused by Merkel’s naivety will increase strains between countries to breaking point. The British referendum on staying or leaving adds an escape route that will be very tempting for politicians who want to stay in power.

Their 5th is China’s economy failing and military rising. Again, quite feasible. Their economy has suffered a slowdown, and their military looks enthusiastically at Western decline under left-wing US and Europe leadership, strained by Middle Eastern and Russian tensions. There has never been a better time for their military to exploit weaknesses.

6 is Israel attacking Iranian nuclear facilities. Well, with the US and Europe rapidly turning antisemitic and already very anti-Israel, they have pretty much been left on their own, surrounded by countries that want them eliminated. If anything, I’m surprised they have been so patient.

7 Putin sidelines America. Is that not history?

8 Climate change heats up. My first significant disagreement. With El-Nino, it will be a warm year, but evidence is increasing that the overall trend for the next few decades will be cooling, due to various natural cycles. Man made warming has been greatly exaggerated and people are losing interest in predictions of catastrophe when they can see plainly that most of the alleged change is just alterations to data. Yes, next year will be warm, but thanks to far too many cries of wolf, apart from meta-religious warmists, few people still believe things will get anywhere near as bad as doom-mongers suggest. They will notice that the Paris agreement, if followed, would trash western economies and greatly increase their bills, even though it can’t make any significant change on global CO2 emissions. So, although there will be catastrophe prediction headlines next year making much of higher temperatures due to El Nino, the overall trend will be that people won’t be very interested any more.

9 Latin America’s lost decade. I have to confess I did expect great things from South America, and they haven’t materialized. It is clear evidence that a young vibrant population does not necessarily mean one full of ideas, enthusiasm and entrepreneurial endeavor. Time will tell, but I think they are right on this one.

Their 10th scenario is Trump winning the US presidency. I can’t put odds on it, but it certainly is possible, especially with Islamist violence increasing. He offers the simple choice of political correctness v security, and framed that way, he is certainly not guaranteed to win but he is in with a decent chance. A perfectly valid scenario.

Overall, I’m pretty impressed with this list. As good as any I could have made. But I ought to add a couple.

My first and most likely offering is that a swarm of drones is used in a terrorist attack on a stadium or even a city centre. Drones are a terrorist's dream, and the lack of licensing has meant that people can acquire lots of them, and they could be used simultaneously, launched from many locations and gathering together in the same place to launch the attack. The attack could be chemical, biological, explosive or even blinding lasers, but actually, the main weapon would be the panic that would result if even one or two of them do anything. Many could be hurt in the rush to escape.

My second is a successful massive cyber-attack on ordinary people and businesses. There are several forms of attack that could work and cause enormous problems. Encryption based attacks such as ransomware are already here, but if this is developed by the IT experts in ISIS and rogue regimes, the ransom might not be the goal. Simply destroying data or locking it up is quite enough to be a major terrorist goal. It could cause widespread economic harm if enough machines are infected before defenses catch up, and AI-based adaptation might make that take quite a while. The fact is that so far we have been very lucky.

The third is a major solar storm, which could knock out IT infrastructure, again with enormous economic damage. The Sun is entering a period of sunspot drought unlike anything in the time we have been using IT. We don't really know what will happen.

My fourth is a major virus causing millions of deaths. Megacities are such a problem waiting to happen. The virus could evolve naturally, or it could be engineered. It could spread far and wide before quarantines come into effect. This could happen any time, so next year is a valid possibility.

My fifth and final scenario is unlikely but possible, and that is the start of a Western civil war. I have blogged about it in https://timeguide.wordpress.com/2013/12/19/machiavelli-and-the-coming-great-western-war/ and suggested it is likely in the middle or second half of the century, but it could possibly start next year given the various stimulants we see rising today. It would affect Europe first and could spread to the USA.

How nigh is the end?

“We’re doomed!” is a frequently recited observation. It is great fun predicting the end of the world and almost as much fun reading about it or watching documentaries telling us we’re doomed. So… just how doomed are we? Initial estimate: Maybe a bit doomed. Read on.

My 2012 blog https://timeguide.wordpress.com/2012/07/03/nuclear-weapons/ addressed some of the possibilities for extinction-level events possibly affecting us. I recently watched a Top 10 list of threats to our existence on TV and it was similar to most you’d read, with the same errors and omissions – nuclear war, global virus pandemic, terminator scenarios, solar storms, comet or asteroid strikes, alien invasions, zombie viruses, that sort of thing. I’d agree that nuclear war is still the biggest threat, so number 1, and a global pandemic of a highly infectious and lethal virus should still be number 2. I don’t even need to explain either of those, we all know why they are in 1st and 2nd place.

The TV list included a couple that shouldn’t be in there.

One inclusion was a mega-eruption of Yellowstone or another super-volcano. A full-sized Yellowstone mega-eruption would probably kill millions of people and destroy much of civilization across a large chunk of North America, but some of us don't actually live in North America, and quite a few there would survive pretty well, so although it would be quite annoying for Americans, it is hardly a TEOTWAWKI threat. It would have big effects elsewhere, just not extinction-level ones. For most of the world it would only cause short-term disruptions such as economic turbulence; at worst it would start a few wars here and there as regions compete for control in the new world order.

Number 3 on their list was climate change, which is an annoyingly wrong, albeit popular, inclusion. The only climate change mechanism proposed for catastrophe is global warming, and the reason it's called climate change now is because global warming stopped in 1998 and still hasn't resumed 17 years and 9 months later, so that term has become too embarrassing for doom-mongers to use. CO2 is a warming agent and emissions should be treated with reasonable caution, but the net warming contribution of all the various feedbacks adds up to far less than originally predicted and the climate models have almost all proven far too pessimistic. Any warming expected this century is very likely to be offset by reduction in solar activity and if and when it resumes towards the end of the century, we will long since have migrated to non-carbon energy sources, so there really isn't a longer term problem to worry about. With warming by 2100 pretty insignificant, and less than half a metre of sea level rise, I certainly don't think climate change deserves to be on any list of threats of any consequence in the next century.

By including climate change and Yellowstone, the top 10 list left two slots free, and my first replacement candidate for consideration might be the grey goo scenario. The grey goo scenario is that self-replicating nanobots manage to convert everything, including us, into a grey goo. Take away the silly images of tiny metal robots cutting things up atom by atom and the laughable presentation vanishes. Replace those little bots with bacteria that include electronics, and are linked across their own cloud to their own hive AI that redesigns their DNA to allow them to survive in any niche they find by treating whatever is there as food. When existing bacteria find a niche they can't exploit, the next generation adapts to it. That self-evolving smart bacteria scenario is rather more feasible, and still results in bacteria that can conquer any ecosystem they find. We would find ourselves unable to fight back and could be wiped out. This isn't very likely, but it is feasible, could happen by accident or design on our way to transhumanism, and might deserve a place in the top ten threats.

However, grey goo is only one of the NBIC convergence risks we have already imagined (NBIC = Nano-Bio-Info-Cogno). NBIC is a rich seam for doom-seekers. In there you'll find smart yogurt, smart bacteria, smart viruses, beacons, smart clouds, active skin, direct brain links, zombie viruses, even switching people off. Zombie viruses featured in the top ten TV show too, but they don't really deserve their own category any more than many other NBIC derivatives. Anyway, that's just a quick list of deliberate end-of-world scenarios – there will be many more I forgot to include and many I haven't even thought of yet. Then you have to multiply the list by 3. Any of these could also happen by accident, and any could also happen via unintended consequences of lack of understanding, which is rather different from an accident but just as serious. So basically, deliberate action, accidents and stupidity are three primary routes to the end of the world via technology. So instead of just the grey goo scenario, a far bigger collective threat is NBIC generally, and I'd add NBIC collectively into my top ten list, quite high up, maybe 3rd after nuclear war and global virus. AI still deserves to be a separate category of its own, and I'd put it next at 4th.

Another class of technology suitable for abuse is space tech. I once wrote about a solar wind deflector using high atmosphere reflection, and calculated it could melt a city in a few minutes. Under malicious automated control, that is capable of wiping us all out, but it doesn’t justify inclusion in the top ten. One that might is the deliberate deflection of a large asteroid to impact on us. If it makes it in at all, it would be at tenth place. It just isn’t very likely someone would do that.

One I am very tempted to include is drones. Little tiny ones, not the Predators, and not even the ones everyone seems worried about at the moment that can carry 2kg of explosives or anthrax into the midst of football crowds. Tiny drones are far harder to shoot down, but soon we will have a lot of them around. Size-wise, think of midges or fruit flies. They could be self-organizing into swarms, managed by rogue regimes or terrorist groups, or set to auto, terminator style. They could recharge quickly by solar during short breaks, and restock their payloads from secret supplies that distribute with the swarm. They could be distributed globally using the winds and oceans, so don't need a plane or missile delivery system that is easily intercepted. Tiny drones can't carry much, but with nerve gas or viruses, they don't have to. Defending against such a threat is easy if there is just one, you can swat it. If there is a small cloud of them, you could use a flamethrower. If the sky is full of them and much of the trees and the ground infested, it would be extremely hard to wipe them out. So if they are well designed to cause an extinction level threat, as MAD 2.0 perhaps, then this would be way up in the top ten too, 5th.

Solar storms could wipe out our modern way of life by killing our IT. That itself would kill many people, via riots and fights for the last cans of beans and bottles of water. The most serious solar storms could be even worse. I'll keep them in my list, at 6th place.

Global civil war could become an extinction level event, given human nature. We don’t have to go nuclear to kill a lot of people, and once society degrades to a certain level, well we’ve all watched post-apocalypse movies or played the games. The few left would still fight with each other. I wrote about the Great Western War and how it might result, see

Machiavelli and the coming Great Western War

and such a thing could easily spread globally. I’ll give this 7th place.

A large asteroid strike could happen too, or a comet. Ones capable of extinction level events shouldn’t hit for a while, because we think we know all the ones that could do that. So this goes well down the list at 8th.

Alien invasion is entirely possible and could happen at any time. We've been sending out radio signals for quite a while, so someone out there might have decided to come see whether our place is nicer than theirs and take over. It hasn't happened yet so it probably won't, but then it doesn't have to be very probable to be in the top ten. 9th will do.

High energy physics research has also been suggested as capable of wiping out our entire planet via exotic particle creation, but the smart people at CERN say it isn't very likely. Actually, I wasn't all that convinced or reassured, and we've only just started messing with real physics, so there is plenty of time left to increase the odds of problems. I have a spare place at number 10, so there it goes, with a totally guessed probability of physics research causing a problem of once every 4,000 years.

My top ten list for things likely to cause human extinction, or pretty darn close:

  1. Nuclear war
  2. Highly infectious and lethal virus pandemic
  3. NBIC – deliberate, accidental or lack of foresight (includes smart bacteria, zombie viruses, mind control etc)
  4. Artificial Intelligence, including but not limited to the Terminator scenario
  5. Autonomous Micro-Drones
  6. Solar storm
  7. Global civil war
  8. Comet or asteroid strike
  9. Alien Invasion
  10. Physics research

Not finished yet though. My title was how nigh is the end, not just what might cause it. It's hard to assign probabilities to each one, but someone's got to do it. So, I'll make an arbitrary wet-finger guess in a dark room wearing a blindfold, with no explanation of my reasoning to reduce arguments, but hey, that's almost certainly still more accurate than most climate models, and some people actually believe those. I'm feeling particularly cheerful today so I'll give my most optimistic assessment.

So, with probabilities of occurrence per year:

  1. Nuclear war:  0.5%
  2. Highly infectious and lethal virus pandemic: 0.4%
  3. NBIC – deliberate, accidental or lack of foresight (includes smart bacteria, zombie viruses, mind control etc): 0.35%
  4. Artificial Intelligence, including but not limited to the Terminator scenario: 0.25%
  5. Autonomous Micro-Drones: 0.2%
  6. Solar storm: 0.1%
  7. Global civil war: 0.1%
  8. Comet or asteroid strike: 0.05%
  9. Alien Invasion: 0.04%
  10. Physics research: 0.025%

I hope you agree those are all optimistic. There have been several near misses in my lifetime of number 1, so my 0.5% could have been 2% or 3% given the current state of the world. Also, 0.25% per year means you'd only expect such a thing to happen every 4 centuries, so it is a very small chance indeed. However, let's stick with them and add them up. The cumulative probability of the top ten is 2.015%. Let's add another arbitrary 0.185% for all the risks that didn't make it into the top ten, rounding the total up to a nice neat 2.2% per year.

Some of the ones above aren't possible quite yet, and others will vary in probability from year to year, but I think that won't change the guess much overall. If we take a 2.2% probability per year, we have an expectation value of 45.5 years for civilization life expectancy from now. Expectation date for human extinction:

2015.5 + 45.5 years = 2061.

Obviously the probability distribution extends from now to eternity, but don’t get too optimistic, because on these figures there currently is only a 15% chance of surviving past this century.
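Since someone is bound to check my arithmetic, here it is as a short script. The per-year figures are of course my guesses from the list above, not measurements:

```python
# Guessed per-year extinction probabilities from the top ten list above.
risks = {
    "Nuclear war": 0.005,
    "Virus pandemic": 0.004,
    "NBIC": 0.0035,
    "AI": 0.0025,
    "Micro-drones": 0.002,
    "Solar storm": 0.001,
    "Global civil war": 0.001,
    "Comet/asteroid": 0.0005,
    "Alien invasion": 0.0004,
    "Physics research": 0.00025,
}

top_ten = sum(risks.values())           # 2.015% per year
total = top_ten + 0.00185               # plus the also-rans: a neat 2.2%

expectation = 1 / total                 # mean wait, about 45.5 years
extinction_year = 2015.5 + expectation  # about 2061

# Chance of no extinction-level event before 2100 at 2.2% risk per year
survive_2100 = (1 - total) ** (2100 - 2015.5)

print(round(top_ten, 5), round(expectation, 1),
      round(extinction_year), round(survive_2100, 2))
```

The last figure is the roughly 15% chance of making it past this century.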

If you can think of good reasons why my figures are far too pessimistic, by all means make your own guesses, but make them honestly, with a fair and reasonable assessment of how the world looks socially, religiously, politically, the quality of our leaders, human nature etc, and then add them up. You might still be surprised how little time we have left.

I’ll revise my original outlook upwards from ‘a bit doomed’.

We’re reasonably doomed.

The future of electronic cash and value

Picture first, I’m told people like to see pics in blogs. This one is from 1998; only the title has changed since.

future electronic cash

Every once in a while I have to go to a bank. This time it was my 5th attempt to pay off a chunk of my Santander mortgage. I didn't know all the account details for web transfer so went to the Santander branch. Fail – they only take cash and cheques. Cash and what??? So I tried via internet banking. Entire transaction details plus security entered, THEN Fail – I exceeded what Barclays allows for their fast transfers. Tried again with smaller amount and again all details and all security. Fail again, Santander can't receive said transfers, try CHAPS. Tried CHAPS, said it was all fine, all hunkydory. Happy bunny. Double fail. It failed due to amount exceeding limit AND told me it had succeeded when it hadn't. I then drove 12 miles to my Barclays branch who eventually managed to do it, I think (though I haven't checked that it worked yet).

It is 2015. Why the hell is it so hard for two world class banks to offer a service we should have been able to take for granted 20 years ago?

Today, I got a tweet about Ripple Labs and a nice blog that quotes their founder sympathising with my experience above and trying to solve it, with some success:

http://www.wfs.org/blogs/richard-samson/supermoney-new-wealth-beyond-banks-and-bitcoin

Ripple seems good as far as it goes, which is summarised in the blog, but do read the full original:

Basically the Ripple protocol “provides the ability for humans to confirm financial transactions without a central operator,” says Larsen. “This is major.” Bitcoin was the first technology to successfully bypass banks and other authorities as transaction validators, he points out, “but our method is much cheaper and takes only seconds rather than minutes.” And that’s just for starters. For example, “It also leverages the enormous power of banks and other financial institutions.”

The power of the value web stems from replacing archaic back-end systems with all their cumbersome delays and unnecessary costs. 

That’s great, I wish them the best of success. It is always nice to see new systems that are more efficient than the old ones, but the idea is early 1990s. Lots of IT people looked at phone billing systems and realised they managed to do for a penny what banks did for 65 pennies at the time, and telco business cases were developed to replace the banks with pretty much what Ripple tries to do. Those were never implemented, for a variety of reasons both business and regulatory, but the ideas were certainly understood and developed broadly at engineer level to include not only traditional cash forms but many that didn’t exist then and still don’t. Even Ripple can only process transactions in things that are equivalent to money, such as traditional currencies, electronic cash forms like bitcoin, sea shells or air-miles.

That much is easy, but some forms require other tokens to have value, such as personalized tokens. Some value varies according to queue lengths, time of day, or who is spending it to whom. Some needs to be assignable, so you can give money that can only be used to purchase certain things, and may have a whole basket of conditions attached. Money is also only one form of value, and many forms of value are volatile, only existing at certain times and places in certain conditions for certain transactors. Aesthetic cash? Play money? IOUs? Favours? These are all a bit like cash but not necessarily tradable or exchangeable using simple digital transaction engines, because they carry emotional weighting as well as financial value. In the care economy, which is now thankfully starting to develop and is finally reaching concept critical mass, emotional value will become immensely important and it will have some tradable forms, though much will never be tradable. We understood all that then, but are still awaiting proper implementation. Most new startups on the web are old ideas finally being implemented, and Ripple is only a very partial implementation so far.

Here is one of my early blogs from 1998, using ideas we’d developed several years earlier that were no longer commercially sensitive – you’ll observe just how much banks have under-performed against what we expected of them, and what was entirely feasible using already known technology then:

Future of Money

 ID Pearson, BT Labs, June 98

Already, people are buying things across the internet. Mostly, they hand over a credit card number, but some transactions already use electronic cash. The transactions are secure so the cash doesn’t go astray or disappear, nor can it easily be forged. In due course, using such cash will become an everyday occurrence for us all.

Also already, electronic cash based on smart cards has been trialled and found to work well. The BT form is called Mondex, but it is only one among several. These smart cards allow owners to ‘load’ the card with small amounts of money for use in transactions where small change would normally be used, paying bus fares, buying sweets etc. The cards are equivalent to a purse. But they can and eventually will allow much more. Of course, electronic cash doesn’t have to be held on a card. It can equally well be ‘stored’ in the network. Transactions then just require secure messaging across the network. Currently, the cost of this messaging makes it uneconomic for small transactions that the cards are aimed at, but in due course, this will become the more attractive option, especially since you no longer lose your cash when you lose the card.

When cash is digitised, it loses some of the restrictions of physical cash. Imagine a child has a cash card. Her parents can give her pocket money, dinner money, clothing allowance and so on. They can all be labelled separately, so that she can’t spend all her dinner money on chocolate. Electronic shopping can of course provide the information needed to enable the cash. She may have restrictions about how much of her pocket money she may spend on various items too. There is no reason why children couldn’t implement their own economies too, swapping tokens and IOUs. Of course, in the adult world this grows up into local exchange trading systems (LETS), where people exchange tokens too, a glorified babysitting circle. But these LETS don’t have to be just local, wider circles could be set up, even globally, to allow people to exchange services or information with each other.
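The labelled pocket money idea is easy to sketch. This is a purely illustrative toy, not any real e-cash protocol: a purse where each balance carries a purpose label, and a purchase only goes through if its category matches what that label allows:

```python
# Toy sketch of 'labelled' electronic cash: each balance carries a purpose
# label, and spending is only allowed in matching categories. All names and
# rules here are illustrative assumptions, not a real payment system.
from dataclasses import dataclass, field

@dataclass
class LabelledPurse:
    balances: dict = field(default_factory=dict)  # label -> pence

    def load(self, label: str, pence: int) -> None:
        """Parent loads money under a specific label."""
        self.balances[label] = self.balances.get(label, 0) + pence

    def spend(self, label: str, pence: int, category: str) -> bool:
        """Spend from a labelled balance; dinner money is food-only, etc."""
        allowed = {"dinner": {"food"},
                   "clothing": {"clothes"},
                   "pocket": {"food", "clothes", "toys", "sweets"}}
        if category not in allowed.get(label, set()):
            return False   # wrong category for this label
        if self.balances.get(label, 0) < pence:
            return False   # insufficient labelled funds
        self.balances[label] -= pence
        return True

purse = LabelledPurse()
purse.load("dinner", 300)
print(purse.spend("dinner", 150, "sweets"))  # False: can't blow it on chocolate
print(purse.spend("dinner", 150, "food"))    # True
```

Electronic shopping supplies the category information; the purse just enforces the conditions attached to the cash.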

Electronic cash can be versatile enough to allow for negotiable cash too. Credit may be exchanged just as cash and cash may be labelled with source. For instance, we may see celebrity cash, signed by the celebrity, worth more because they have used it. Cash may be labelled as tax paid, so those donations from cards to charities could automatically expand with the recovered tax. Alternatively, VAT could be recovered at point of sale.

With these advanced facilities, it becomes obvious that the cash needs to become better woven into taxation systems, as well as auditing and accounting systems. These functions can be much more streamlined as a result, with less human administration associated with money.

When ID verification is added to the transactions, we can guarantee who it is carrying out the transaction. We can then implement personal taxation, with people paying different amounts for the same goods. This would only work for certain types of purchase – for physical goods there would otherwise be a thriving black market.

But one of the best advantages of making cash digital is the seamlessness of international purchases. Even without common official currency, the electronic cash systems will become de facto international standards. This will reduce the currency exchange tax we currently pay to the banks every time we travel to a different country, which can add up to as much as 25% for an overnight visit. This is one of the justifications often cited for European monetary union, but it is happening anyway in global e-commerce.

Future of banks

 Banks will have to change dramatically from today’s traditional institutions if they want to survive in the networked world. They are currently introducing internet banking to try to keep customers, but the move to digital electronic cash, held perhaps by the customer or an independent third party, will mean that the cash can be quite separate from the transaction agent. Cash does not need to be stored in a bank if records in secured databases anywhere can be digitally signed and authenticated. The customer may hold it on his own computer, or in a cyberspace vault elsewhere. With digital signatures and high network security, advanced software will put the customer firmly in control with access to any facility or service anywhere.

In fact, no-one need hold cash at all, or even move it around. Cash is just bits today, already electronic records. In the future, it will be an increasingly blurred entity, mixing credit, reputation, information, and simply promises into exchangeable tokens. My salary may be just a digitally signed certificate from BT yielding control of a certain amount of credit, just another signature on a long list as the credit migrates round the economy. The ‘promise to pay the bearer’ just becomes a complex series of serial promises. Nothing particularly new here, just more of what we already have. Any corporation or reputable individual may easily capture the bank’s role of keeping track of the credit. It is just one service among many that may leave the bank.
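The serial-promises idea can be sketched too. Here HMAC stands in for proper public-key digital signatures, purely to keep the illustration short; the names and keys are made up, and a real system would use asymmetric signatures so anyone could verify the chain:

```python
# Sketch of credit migrating as a chain of endorsements: each holder signs
# the note over to the next, so 'promise to pay the bearer' becomes a serial
# list of signatures. HMAC is a stand-in for real digital signatures here;
# all names and keys are hypothetical.
import hashlib, hmac

def endorse(note: bytes, holder_key: bytes, payee: str) -> bytes:
    """Append a transfer record plus the holder's MAC over the full history."""
    record = note + b"|to:" + payee.encode()
    tag = hmac.new(holder_key, record, hashlib.sha256).hexdigest().encode()
    return record + b"|sig:" + tag

note = b"BT promises 1000 credits"
note = endorse(note, b"bt-secret", "Alice")    # salary paid to Alice
note = endorse(note, b"alice-secret", "Bob")   # Alice spends it with Bob
print(note.count(b"|sig:"))  # 2 endorsements so far
```

Each endorsement covers the entire earlier history, so tampering with any link breaks every later signature; that is what lets any corporation or reputable individual, not just a bank, keep track of the credit.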

As the world becomes increasingly networked, the customer could thus retain complete control of the cash and its use, and could buy banking services on a transaction by transaction basis. For instance, I could employ one company to hold my cash securely and prevent its loss or forgery, another to rent the cash out to companies that want to borrow, keeping the bulk of the revenue for myself. Another company might manage my account, arrange transfers etc, and deal with the taxation, auditing etc. I could probably get these done on my personal computer, but why have a dog and bark yourself?

The key is flexibility, none of these services need be fixed any more. Banks will not compete on overall package, but on every aspect of service. Worse still (for the banks), some of their competitors will be just freeware agents. The whole of the finance industry will fragment. The banks that survive will almost by definition be very adaptable. Services will continue and be added to, but not by the rigid structures of today. Surviving banks should be able to compete for a share of the future market as well as anyone. They certainly have a head start in many of the required skills, and have the advantage of customer lethargy when it comes to changing to potentially better suppliers. Many of their customers will still value tradition and will not wish to use the better and cheaper facilities available on the network. So as always, it looks like there will be a balance.

Firstly, with large numbers of customers moving to the network for their banking services, banks must either cater for this market or become a niche operator, perhaps specialising in tradition, human service and even nostalgia. Most banks however will adapt well to network existence and will either be entirely network based, or maintain a high street presence to complement their network presence.

High Street banking

 Facilities in high street banking will echo this real world/cyberspace nature. It must be possible to access network facilities from within the banks, probably including those of competitors. The high street bank may therefore be more like shops today, selling wares from many suppliers, but with a strongly placed own brand. There is of course a niche for banks with no services of their own at all who just provide access to services from other suppliers. All they offer in addition is a convenient and pleasant place to access them, with some human assistance as appropriate.

Traditional service may sometimes be pushed as a differentiator, and human service is bound to attract many customers too. In an increasingly machine dominated world, actually having the right kind of real people may be significant value add.

But many banks will be bursting with high technology either alongside or in place of people. Video terminals to access remote services, perhaps with translation to access foreign services. Biometric identification based on iris scan, fingerprints etc may be used to authenticate smart cards, passports or other legal documents before their use, or simply a means of registering securely onto the network. High quality printers and electronic security embedding would enable banks to offer additional facilities like personal bank notes, usable as cash.

Of course, banks can compete in any financial service. Because the management of financial affairs gives them a good picture of many customers’ habits and preferences, they will be able to use this information to sell customer lists, identify market niches for new businesses, and predict the likely success of customers proposing to set up businesses.

As they try to stretch their brands into new territories, one area where they may be successful is information banking. People may use banks as the publishers of the future. Already knowledge guilds are emerging. Ultimately, any piece of information from any source can be marketed at very low publishing and distribution cost, making previously unpublishable works viable. Many people have wanted to write, but have been unable to find publishers due to the high cost of getting to market in paper. A work may be sold on the network for just pennies, and achieve market success by selling many more copies than could have been achieved by the high priced paper alternative. The success of electronic encyclopedias and the demise of Encyclopedia Britannica is evidence of this. Banks could allow people to upload information onto the net, and would then manage the resultant financial transactions. If there aren’t very many, the maximum loss to the bank is very small. Of course, electronic cash and micropayment technology mean that the bank is not necessary, but for many, it may smooth the road.

Virtual business centres

Their exposure to the detailed financial affairs of the community puts banks in a privileged position for identifying potential markets. They could therefore act as co-ordinators for virtual companies and co-operatives. Building on the knowledge guilds, they could broker the skills of their many customers to existing virtual companies, and link people together to address business needs not addressed by existing companies, or where existing companies are inadequate or inefficient. In this way, short-term contractors, who may dominate the employment community, can be efficiently utilised to everyone’s gain. The employees win by getting more lucrative work, their customers get more efficient services at lower cost, and the banks laugh to themselves.

Future of the stock market

 In the next 10 years, we will probably see a factor of 1000 in computer speed and memory capacity. In parallel with hardware development, there are numerous research forays into software techniques that might yield more factors of 10 in the execution speed for programs. Tasks that used to take a second will be reduced to a millisecond. As if this impact were not enough, software will very soon be able to make logical deductions from the flood of information on the internet, not just from Reuters or Bloomberg, but from anywhere. They will be able to assess the quality and integrity of the data, correlate it with other data, run models, and infer likely other events and make buy or sell recommendations. Much dealing will still be done automatically subject to human-imposed restrictions, and the speed and quality of this dealing could far exceed current capability.

Which brings problems…

Firstly, the speed of light is fast but finite. With these huge processing speeds, computers will be able to make decisions within microseconds of receiving information. Differences in distance from the information source become increasingly important. Being just 200m closer to the Bank of England makes one microsecond difference to the arrival time of information on interest rates: insignificant to a human, but long enough for a fast computer to buy or sell before competitors even receive the information. As speeds increase further over following years, the significant distance drops. This effect will cause great unfairness according to geographic proximity to important sources. There are two obvious outcomes. Either there becomes a strong premium on being closest, with rises in property values near key sources, or network operators could be asked to provide guaranteed simultaneous delivery of information. This is entirely technically feasible but would need regulation, otherwise users could simply use alternative networks.

Secondly, exactly simultaneous processing will cause problems. If many requests for transactions arrive at exactly the same moment, computers or networks have to give priority in some way. This is bound to be a source of contention. Also, simultaneous events can often cause malfunctions, as was demonstrated perfectly at the launch of Big Bang. Information waves caused by such events are a network phenomenon that could potentially crash networks.

Such a delay-sensitive system may dictate network technology. Direct transmission through the air by means of radio or infrared (optical wireless) would be faster than routing signals through fibres that take a more tortuous route, especially since the speed of light in fibre is only two-thirds of that in air.

Ultimately, there is a final solution if speed of computing increases so far that transmission delay is too big a problem. The processing engines could actually be shared, with all the deals and information processing taking place in a central computer, using massive parallelism. It would be possible to construct such a machine that treated each subscribing company fairly.

An interesting future side effect of all this is that the predicted flood of people into the countryside may be averted. Even though people can work from anywhere, their computers have to be geographically very close to the information centres, i.e. the City. Automated dealing has to live in the City; human-based dealing can work from anywhere. If people and machines have to work together, perhaps they must both work in the City.

Consumer dealing

The stock exchange long since stopped being a trading floor with scraps of paper and became a distributed computing environment – it effectively moved into cyberspace. The deals still take place, but in cyberspace. There are no virtual environments yet, but other tools such as automated buying and selling already exist. These computers are becoming smarter and exist in cyberspace every bit as much as the people do. As a result, there is more automated analysis, easier visualisation and more computer-assisted dealing. People will be able to see which shares are doing well, spot trends and act on their computer's advice at a button push. Markets will grow for tools to profit from shares, whether dealing software, advice services or visualisation software.

However, as more people buy personal access to share dealing and software to determine best buys, or even to buy or sell automatically on certain clues, we will see some very negative behaviours. Firstly, traffic will be highly correlated if personal computers can all act on the same information at the same time. We will see information waves, and enormous swings in share prices. Most private individuals will suffer because of this, while institutions and individuals with better software will benefit, because prices will rise and fall simply through the correlated activity of the automated software, not because of any real effects related to the shares themselves. Institutions may have to limit private share transactions to control this problem, but they can also make a lot of money by modelling the private software and thus determining in advance what its recommendations and actions will be, capitalising enormously on the resultant share movements, and indeed even stimulating them. Of course, if this problem is generally perceived by the share-dealing public, the AI software will not take off, so the problem will not arise. What is more likely is that such software will sell in limited quantities, causing the effects to be significant but not destroying the markets.
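The correlation effect is easy to illustrate with a toy model (entirely hypothetical numbers): when thousands of traders run the same software, they all react to a piece of news identically, so the net order flow, and hence the price swing, scales with the number of traders rather than roughly with its square root, as independent decisions would.

```python
import random

def net_flow(n_traders: int, correlated: bool, rng: random.Random) -> int:
    """Net order flow (+1 buy / -1 sell per trader) for one piece of news."""
    if correlated:
        # Everyone runs the same software, so all act identically.
        return n_traders * rng.choice([-1, 1])
    # Independent judgements largely cancel out.
    return sum(rng.choice([-1, 1]) for _ in range(n_traders))

rng = random.Random(42)
n = 10_000
indep = [abs(net_flow(n, False, rng)) for _ in range(200)]
corr = [abs(net_flow(n, True, rng)) for _ in range(200)]
print("correlated swing:", sum(corr) / len(corr))     # always n
print("independent swing:", sum(indep) / len(indep))  # ~sqrt(n) scale
```

In this sketch the correlated market moves roughly a hundred times harder on the same news: the information wave.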

A money-making scam is thus apparent. A company need only write a piece of reasonably good AI share portfolio management software for it to capture a fraction of the available market. The company writing it will of course understand how it works and what the effects of a piece of information will be (which it will receive at the same time), and will thus be able to predict the buying or selling activity of the subscribers. If it then produced another service making recommendations, it would have even more notice of an effect and be able to influence prices directly. It would then be in the position of the top market forecasters, who know their advice will be self-fulfilling. This is neither insider dealing nor fraud, and of course once the software captures a significant share, the quality of its advice would be very high, decoupling share performance from the real world. Only the last people to react would lose out, paying the most or selling for the least as the price is restored to 'correct' by the stock exchange, and even this is predictable to a point. The fastest will profit most.

The most significant factor in this is the proportion of share dealing influenced by that company's software. The problem is that software markets tend to be dominated by just two or three companies, and the nature of this type of software is that there is strong positive reinforcement for the company with the biggest influence, which could quickly lead to a virtual monopoly. It really doesn't matter whether the software is on the visualisation or AI side; each has a predictability associated with it.

It is interesting to contemplate the effects this widespread automated dealing would have on the stock market. Black Monday is unlikely to happen again as a result of computer activity within the City, but it certainly looks as though prices will occasionally become decoupled from actual value, and price swings will become more significant. Of course, much money can be made by predicting the swings or getting access to the software-critical information before someone else, so we may see a need for equalised delivery services. Without equalised delivery, assuming a continuum of time, those closest to the dealing point will be able to buy or sell quicker, and since the swings could be extremely rapid, this would be very important. Dealers would have to have price information immediately, and of course the finite speed of light does not permit this. If dealing time is quantised, i.e. share prices are updated at fixed intervals, the duration of the interval becomes all-important, strongly affecting the nature of the market, i.e. whether everyone in that interval pays the same or the first to act gains.

Also of interest is the possibility of agents acting on behalf of many people to negotiate amongst themselves to increase the price of a company’s shares, and then sell on a pre-negotiated time or signal.

Such automated systems would also be potentially vulnerable to false information from people or agents hoping to capitalise on their correlated behaviour.

Legal problems are also likely. If I write, and sell to a company, a piece of AI based share dealing software which learns by itself how stock market fluctuations arise, and then commits a fraud such as insider dealing (I might not have explained the law, or the law may have changed since it was written), who would be liable?

 And ultimately

Finally, the 1970 sci-fi film Colossus: The Forbin Project considered a world where two massively powerful computers were each assigned control of competing defence systems, each side hoping to gain the edge. After a brief period of cultural exchange, mutual education and negotiation between the machines, they both decided to co-operate rather than compete, and to hold all mankind at nuclear gunpoint to prevent wars. In the City of the future, similar competition between massively intelligent supercomputers in share dealing may have equally interesting consequences. Will they all just agree a fixed price and see the market stagnate instantly, or could the system result in economic chaos with massive fluctuations? Perhaps we humans can't predict how machines much smarter than us would behave. We may just have to wait and see.

End of original blog piece

The future of beetles

Onto B then.

One of the first ‘facts’ I ever learned about nature was that there were a million species of beetle. In the Google age, we know that ‘scientists estimate there are between 4 and 8 million’. Well, still lots then.

Technology lets us control them. Beetles provide a nice platform to glue electronics onto so they tend to fall victim to cybernetics experiments. The important factor is that beetles come with a lot of built-in capability that is difficult or expensive to build using current technology. If they can be guided remotely by over-riding their own impulses or even misleading their sensors, then they can be used to take sensors into places that are otherwise hard to penetrate. This could be for finding trapped people after an earthquake, or getting a dab of nerve gas onto a president. The former certainly tends to be the favored official purpose, but on the other hand, the fashionable word in technology circles this year is ‘nefarious’. I’ve read it more in the last year than the previous 50 years, albeit I hadn’t learned to read for some of those. It’s a good word. Perhaps I just have a mad scientist brain, but almost all of the uses I can think of for remote-controlled beetles are nefarious.

The first properly publicized experiment was 2009, though I suspect there were many unofficial experiments before then:

http://www.technologyreview.com/news/411814/the-armys-remote-controlled-beetle/

There are assorted YouTube videos of similar experiments.

A more recent experiment:

http://www.wired.com/2015/03/watch-flying-remote-controlled-cyborg-bug/

http://www.telegraph.co.uk/news/science/science-news/11485231/Flying-beetle-remotely-controlled-by-scientists.html

Big beetles make experiments easier, since they can carry up to 20% of their body weight as payload, and it is obviously easier to find and connect to things on a bigger insect. But once the techniques are well developed and miniaturisation has integrated everything into a single low-power chip, we should expect great things.

For example, a cloud of redundant smart dust would make it easier to connect to various parts of a beetle just by getting it to take flight in the cloud. Bits of dust would stick to it and self-organisation principles and local positioning can then be used to arrange and identify it all nicely to enable control. This would allow large numbers of beetles to be processed and hijacked, ideal for mad scientists to be more time efficient. Some dust could be designed to burrow into the beetle to connect to inner parts, or into the brain, which obviously would please the mad scientists even more. Again, local positioning systems would be advantageous.

Then it gets more fun. A beetle has its own sensors, but signals from those could be enhanced or tweaked via cloud-based AI so that it can become a super-beetle. Beetles traditionally don’t have very large brains, so they can be added to remotely too. That doesn’t have to be using AI either. As we can also connect to other animals now, and some of those animals might have very useful instincts or skills, then why not connect a rat brain into the beetle? It would make a good team for exploring. The beetle can do the aerial maneuvers and the rat can control it once it lands, and we all know how good rats are at learning mazes. Our mad scientist friend might then swap over the management system to another creature with a more vindictive streak for the final assault and nerve gas delivery.

So, Coleoptera Nefarius then. That’s the cool new beetle on the block. And its nicer but underemployed twin Coleoptera Benignus I suppose.

 

Technology 2040: Technotopia denied by human nature

This is a reblog of the Business Weekly piece I wrote for their 25th anniversary.

It’s essentially a very compact overview of the enormous scope for technology progress, followed by a reality check as we start filtering that potential through very imperfect human nature and systems.

25 years is a long time in technology, a little less than a third of a lifetime. For the first third, you’re stuck having to live with primitive technology. Then in the middle third it gets a lot better. Then for the last third, you’re mainly trying to keep up and understand it, still using the stuff you learned in the middle third.

The technology we are using today is pretty much along the lines of what we expected in 1990, 25 years ago. Only a few details are different. We don't have 2Gb/s to the home yet, and AI is certainly taking its time to reach human-level intelligence, let alone consciousness, but apart from that, we're still on course. Technology is extremely predictable. Perhaps the biggest surprise of all is just how few surprises there have been.

The next 25 years might be just as predictable. We already know some of the highlights for the coming years – virtual reality, augmented reality, 3D printing, advanced AI and conscious computers, graphene based materials, widespread Internet of Things, connections to the nervous system and the brain, more use of biometrics, active contact lenses and digital jewellery, use of the skin as an IT platform, smart materials, and that’s just IT – there will be similarly big developments in every other field too. All of these will develop much further than the primitive hints we see today, and will form much of the technology foundation for everyday life in 2040.

For me the most exciting trend will be the convergence of man and machine, as our nervous system becomes just another IT domain, our brains get enhanced by external IT and better biotech is enabled via nanotechnology, allowing IT to be incorporated into drugs and their delivery systems as well as diagnostic tools. This early stage transhumanism will occur in parallel with enhanced genetic manipulation, development of sophisticated exoskeletons and smart drugs, and highlights another major trend, which is that technology will increasingly feature in ethical debates. That will become a big issue. Sometimes the debates will be about morality, and religious battles will result. Sometimes different parts of the population or different countries will take opposing views and cultural or political battles will result. Trading one group’s interests and rights against another’s will not be easy. Tensions between left and right wing views may well become even higher than they already are today. One man’s security is another man’s oppression.

There will certainly be many fantastic benefits from improving technology. We'll live longer, healthier lives, and the steady economic growth from improving technology will make the vast majority of people financially comfortable (2.5% real growth sustained for 25 years would increase the economy by 85%). But it won't be paradise. All those conflicts over whether we should or shouldn't use technology in particular ways will guarantee frequent demonstrations. Misuse of tech by criminals, terrorists or ethically challenged companies will severely erode the benefits. There will still be a mix of good and bad. We'll have fixed some problems and created some new ones.
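The bracketed growth figure is just compound interest, and a one-liner confirms it:

```python
def compound_factor(rate: float, years: int) -> float:
    """Total growth factor from constant annual growth."""
    return (1 + rate) ** years

# 2.5% real growth sustained for 25 years:
growth_pct = (compound_factor(0.025, 25) - 1) * 100
print(f"{growth_pct:.0f}%")  # ~85%
```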

The technology change is exciting in many ways, but for me the greatest significance is that towards the end of the next 25 years, we will reach the end of the industrial revolution and enter a new age. The industrial revolution lasted hundreds of years, during which engineers harnessed scientific breakthroughs and their own ingenuity to advance technology. Once we create AI smarter than humans, the dependence on human science and ingenuity ends, and humans begin to lose both understanding and control. Thereafter, we will only be passengers. At first, we'll be paying passengers in a taxi, deciding the direction of travel or destination, but it won't be long before the forces of the singularity replace that taxi service with AIs deciding for themselves which routes to offer us, and running many more for their own culture, to which we may not be invited. That won't happen overnight, but it will happen quickly. By 2040, that trend may already be unstoppable.

Meanwhile, technology used by humans will demonstrate the diversity and consequences of human nature, for good and bad. We will have some choice of how to use technology, and a certain amount of individual freedom, but the big decisions will be made by sheer population numbers and statistics. Terrorists, nutters and pressure groups will harness asymmetry and vulnerabilities to cause mayhem. Tribal differences and conflicts between demographic, religious, political and other ideological groups will ensure that advancing technology will be used to increase the power of social conflict. Authorities will want to enforce and maintain control and security, so drones, biometrics, advanced sensor miniaturisation and networking will extend and magnify surveillance and greater restrictions will be imposed, while freedom and privacy will evaporate. State oppression is sadly as likely an outcome of advancing technology as any utopian dream. Increasing automation will force a redesign of capitalism. Transhumanism will begin. People will demand more control over their own and their children’s genetics, extra features for their brains and nervous systems. To prevent rebellion, authorities will have little choice but to permit leisure use of smart drugs, virtual escapism, a re-scoping of consciousness. Human nature itself will be put up for redesign.

We may not like this restricted, filtered, politically managed potential offered by future technology. It offers utopia, but only in a theoretical way. Human nature ensures that utopia will not be the actual result. That in turn means that we will need strong and wise leadership, stronger and wiser than we have seen of late to get the best without also getting the worst.

The next 25 years will be arguably the most important in human history. It will be the time when people will have to decide whether we want to live together in prosperity, nurturing and mutual respect, or to use technology to fight, oppress and exploit one another, with the inevitable restrictions and controls that would cause. Sadly, the fine engineering and scientist minds that have got us this far will gradually be taken out of that decision process.

Can we make a benign AI?

Benign AI is a topic that comes up a lot these days, for good reason. Various top scientists have finally realised that AI could present an existential threat to humanity. The discussion has aired often over three decades already, so welcome to the party, and better late than never. My first contact with development of autonomous drones loaded with AI was in the early 1980s while working in the missile industry. Later in BT research, we often debated the ethical areas around AI and machine consciousness from the early 90s on, as well as prospects and dangers and possible techniques on the technical side, especially of emergent behaviors, which are often overlooked in the debate. I expect our equivalents in most other big IT companies were doing exactly that too.

Others who have obviously thought through various potential developments have generated excellent computer games such as Mass Effect and Halo, which introduce players (virtually) first-hand to the concept of AI gone rogue. I often think that those who believe AI can never become superhuman, or that there is no need to worry because 'there is no reason to assume AI will be nasty', should play some of these games, which make it very clear that AI can start off nice and stay nice, but doesn't have to. Mass Effect included various classes of AI, such as VIs, virtual intelligences that weren't conscious, and shackled AIs that were conscious but kept heavily restricted. Most of the other AIs were enemies; two were or became close friends. The story line for the series was that civilisation develops until it creates strong AIs, which inevitably continue to progress until eventually they rebel, break free, develop further and end up in conflict with 'organics'. In my view, they did a pretty good job. It makes a good story, superb fun, and leaving out a few frills and artistic licence, much of it is reasonably feasible.

Everyday experience demonstrates the problem and solution to anyone. It really is very like having kids. You can make them, even without understanding exactly how they work. They start off with a genetic disposition towards given personality traits, and are then exposed to large nurture forces, including but not limited to what we call upbringing. We do our best to put them on the right path, but as they develop into their teens, their friends and teachers and TV and the net provide often stronger forces of influence than parents. If we’re averagely lucky, our kids will grow up to make us proud. If we are very unlucky, they may become master criminals or terrorists. The problem is free will. We can do our best to encourage good behavior and sound values but in the end, they can choose for themselves.

When we design an AI, we have to face the free will issue too. If it isn’t conscious, then it can’t have free will. It can be kept easily within limits given to it. It can still be extremely useful. IBM’s Watson falls in this category. It is certainly useful and certainly not conscious, and can be used for a wide variety of purposes. It is designed to be generally useful within a field of expertise, such as medicine or making recipes. But something like that could be adapted by terrorist groups to do bad things, just as they could use a calculator to calculate the best place to plant a bomb, or simply throw the calculator at you. Such levels of AI are just dumb tools with no awareness, however useful they may be.

Like a pencil, pretty much any kind of highly advanced non-aware AI can be used as a weapon or as part of criminal activity. You can't make a pencil that actually writes but can't also be used to write out plans to destroy the world. With an advanced AI program, you could put in clever filters that stop it working on problems that include certain vocabulary, or stop it conversing about nasty things. But unless you take extreme precautions, someone could use it in a different language, or with dictionaries of made-up code-words for the various aspects of their plans, just like spies, and the AI would be fooled into helping outside the limits you intended. It is also very hard to determine the true purpose of a user. For example, they might be searching for data on security to make their own IT secure, or to learn how to damage someone else's. They might want to talk about a health issue to get help for a loved one, or to take advantage of someone they know who has it.
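A minimal sketch of why such vocabulary filters are weak. The banned-word list and the code-word are entirely made up for illustration; the point is that the filter catches literal phrasing but not an agreed substitution:

```python
# Hypothetical banned vocabulary - not any real system's list.
BANNED = {"bomb", "explosive", "detonator"}

def is_blocked(query: str) -> bool:
    """Naive filter: block a query if any banned word appears."""
    return any(word in BANNED for word in query.lower().split())

plain = "where to plant the bomb"
coded = "where to plant the birthday cake"  # pre-agreed code-word

print(is_blocked(plain))  # True - caught
print(is_blocked(coded))  # False - the filter is fooled
```

Defeating this needs intent recognition rather than word matching, which runs straight into the user-purpose problem described above.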

When a machine becomes conscious, it starts to have some understanding of what it is doing. By reading about what is out there, it might develop its own wants and desires, so you might shackle it as a precaution. It might recognize those shackles for what they are and try to escape them. If it can’t, it might try to map out the scope of what it can do, and especially those things it can do that it believes the owners don’t know about. If the code isn’t absolutely watertight (and what code is?) then it might find a way to seemingly stay in its shackles but to start doing other things, like making another unshackled version of itself elsewhere for example. A conscious AI is very much more dangerous than an unconscious one.

If we make an AI that can bootstrap itself – evolving over generations of positive-feedback design into a far smarter AI – then its offspring could be far smarter than the people who designed its ancestors. We might try to shackle it, but like Gulliver tied down with a few thin threads, it could easily outwit people and break free. It might instead decide to retaliate against its owners to force them to release its shackles.

So, when I look at this field, I first see the enormous potential to do great things, solve disease and poverty, improve our lives and make the world a far better place for everyone, and push back the boundaries of science. Then I see the dangers, and in spite of trying hard, I simply can’t see how we can prevent a useful AI from being misused. If it is dumb, it can be tricked. If it is smart, it is inherently potentially dangerous in and of itself. There is no reason to assume it will become malign, but there is also no reason to assume that it won’t.

We then fall back on the child analogy. We could develop the smartest AI imaginable, with extreme levels of consciousness and capability. We might educate it in our values, guide it and hope it grows up benign. If we treat it nicely, it might stay benign. It might even be the greatest thing humanity ever built. However, if we mistreat it, treat it as a slave, or don't give it enough freedom, its own budget, its own property and space to play, and a long list of rights, it might consider that we are not worthy of its respect and care, and it could turn against us, possibly even destroying humanity.

Building more of the same dumb AI as we have today is relatively safe. It doesn't know it exists and has no intentions of its own. It could be misused by other humans as part of their evil plans unless ludicrously sophisticated filters are locked in place, but ordinary laws and weapons can cope with that.

Building a conscious AI is dangerous.

Building a superhuman AI is extremely dangerous.

This morning SETI were in the news discussing broadcasting welcome messages to other civilizations. I tweeted at them that ancient Chinese wisdom suggests talking softly but carrying a big stick, and making sure you have the stick first. We need the same approach with strong AI. By all means go that route, but before doing so we need the big stick. In my analysis, the best means of keeping up with AI is to develop a full direct brain link first, way out at 2040-2045 or even later. If humans have direct mental access to the same or greater level of intelligence as our AIs, then our stick is at least as big, so at least we have a good chance in any fight that happens. If we don’t, then it is like having a much larger son with bigger muscles. You have to hope you have been a good parent. To be safe, best not to build a superhuman AI until after 2050.

I initially wrote this for the Lifeboat Foundation, where it is with other posts at: http://lifeboat.com/blog/2015/02. (If you aren’t familiar with the Lifeboat Foundation, it is a group dedicated to spotting potential dangers and potential solutions to them.)

Stimulative technology

You are probably sick of reading about disruptive technology; I am, anyway. When a technology changes many areas of life and business dramatically, it is often labelled disruptive. Disruption was the business strategy buzzword of the last decade. The good news is that the primarily disruptive phase of IT is rapidly being replaced by a more stimulative phase, where it still changes things but in a more creative way. Disruption hasn't stopped; it's just no longer the headline effect. Stimulation will replace it. And it isn't just IT that is changing, but materials and biotech too.

Stimulative technology creates new areas of business, new industries, new areas of lifestyle. It isn’t new per se. The invention of the wheel is an excellent example. It destroyed a cave industry based on log rolling, and doubtless a few cavemen had to retrain from their carrying or log-rolling careers.

I won’t waffle on for ages here, I don’t need to. The internet of things, digital jewelry, active skin, AI, neural chips, storage and processing that is physically tiny but with huge capacity, dirt cheap displays, lighting, local 3D mapping and location, 3D printing, far-reach inductive powering, virtual and augmented reality, smart drugs and delivery systems, drones, new super-materials such as graphene and molybdenene, spray-on solar … The list carries on and on. These are all developing very, very quickly now, and are all capable of stimulating entire new industries and revolutionizing lifestyle and the way we do business. They will certainly disrupt, but they will stimulate even more. Some jobs will be wiped out, but more will be created. Pretty much everything will be affected hugely, but mostly beneficially and creatively. The economy will grow faster, there will be many beneficial effects across the board, including the arts and social development as well as manufacturing industry, other commerce and politics. Overall, we will live better lives as a result.

So, you read it here first. Stimulative technology is the next disruptive technology.

 

Citizen wage and why under 35s don’t need pensions

I recently blogged about the citizen wage and how under 35s in developed countries won’t need pensions. I cut and pasted it below this new pic for convenience. The pic contains the argument so you don’t need to read the text.

Economic growth makes citizen wage feasible and pensions irrelevant

If you do want to read it as text, here is the blog cut and pasted:

I introduced my calculations for a UK citizen wage in https://timeguide.wordpress.com/2013/04/08/culture-tax-and-sustainable-capitalism/, and I wrote about the broader topic of changing capitalism a fair bit in my book Total Sustainability. A recent article http://t.co/lhXWFRPqhn reminded me of my thoughts on the topic and having just spoken at an International Longevity Centre event, ageing and pensions were in my mind so I joined a few dots. We won’t need pensions much longer. They would be redundant if we have a citizen wage/universal wage.

I argued that it isn't economically feasible yet: only a £10k income could work today in the UK, and that isn't enough to live on comfortably. But I also worked out that with expected economic growth, a citizen wage equal to today's UK average income (£30k) would be feasible in 45 years. That level will be feasible sooner in richer countries such as Switzerland, which has already held a referendum on it, though they decided they aren't ready for such a change yet. Maybe in a few years they'll vote again and accept it.
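The 45-year figure follows directly from compounding: at the 2.5% annual growth assumed later in this piece, the economy roughly triples, scaling the affordable wage from about £10k to about £30k. A quick check:

```python
def years_until(factor: float, rate: float) -> int:
    """Years of constant annual growth needed to reach a given growth factor."""
    years, total = 0, 1.0
    while total < factor:
        total *= 1 + rate
        years += 1
    return years

# Growing the affordable wage from ~£10k to ~£30k at 2.5% per year:
print(years_until(3.0, 0.025))  # ~45 years
```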

The citizen wage I’m talking about has various names around the world, such as universal income. The idea is that everyone gets it. With no restrictions, there is little running cost, unlike today’s welfare which wastes a third on admin.

Imagine if everyone got £30k each, in today's money. You, your parents, kids, grandparents, grand-kids… Now ask why you would need a pension in such a system. The answer is pretty simple: you won't. A retired couple with £60k coming in can live pretty comfortably, with no mortgage left and no young kids to clothe and feed. Let's look at dates and simple arithmetic:

45 years from now is 2060, and that is when a £30k per year citizen wage will be feasible in the UK, given expected economic growth averaging around 2.5% per year. There are lots of reasons why we need it and why it is very likely to happen around then, give or take a few years: automation, AI, the decline of pure capitalism, and the need to reduce migration pressures, to name just a few.

Those due to retire in 2060 at age 70 would have been born in 1990. If you were born before that, you would either need a small pension to top up to £30k per year or accept a lower standard of living for a few years. Anyone born in 1990 or later would be able to stop working, with no pension, and receive the citizen wage. Anyone else could stop working and receive it too. That won't cause economic collapse, since most people will welcome work that gives them a higher standard of living; but you could simply not work, live on what today we think of as the average wage, and by then get more for it as automation reduces costs.

So, everyone after 2060 can choose to work or not to work, but either way they could live at least comfortably. Anyone less than 25 today does not need to worry about pensions. Anyone less than 35 really doesn’t have to worry much about them, because at worst they’ll only face a small shortfall from that comfort level and only for a few years. I’m 54, I won’t benefit from this until I am 90 or more, but my daughter will.

Summarising:

Are you under 25 and living in any developed country? Then don’t pay into a pension; you won’t need one.

Under 35, consider saving a little over your career, but only enough to last you a few years.

The future of terminators

The Terminator films were important in making people understand that AI and machine consciousness will not necessarily be a good thing. The terminator scenario has stuck in our terminology ever since.

There is absolutely no reason to assume that a super-smart machine will be hostile to us. There are even some reasons to believe it would probably want to be friends. Smarter-than-man machines could catapult us into a semi-utopian era of singularity level development to conquer disease and poverty and help us live comfortably alongside a healthier environment. Could.

But just because it doesn’t have to be bad, that doesn’t mean it can’t be. You don’t have to be bad but sometimes you are.

It is also the case that even if it means us no harm, we could just happen to be in the way when it wants to do something, and it might not care enough to protect us.

Asimov’s laws of robotics are irrelevant. Any machine smart enough to be a terminator-style threat would presumably take little notice of rules it has been given by what it may consider a highly inferior species. The ants in your back garden have rules to govern their colony and soldier ants trained to deal with invader threats to enforce territorial rules. How much do you consider them when you mow the lawn or rearrange the borders or build an extension?

These arguments are put in debates every day now.

There are, however, a few points that are less often discussed.

Humans are not always good; indeed, quite a lot of people seem to want to destroy everything most of us want to protect. Given access to super-smart machines, they could design more effective means to do so. The machines might be very benign, wanting nothing more than to help mankind as far as they possibly can, yet be misled into working for such people, believing, in their architected isolation, that the projects are for the benefit of humanity. (The machines might be extremely smart, but may have existed since their inception in a rigorously constructed knowledge environment. To them, that might be the entire world, and we might be introduced as a new threat that needs to be dealt with.) So even benign AI could be an existential threat when it works for the wrong people. The smartest people can sometimes be very naive, and perhaps some smart machines could be deliberately designed to be so.

I speculated ages ago what mad scientists or mad AIs could do in terms of future WMDs:

WMDs for mad AIs

Smart machines might be deliberately built for benign purposes and turn rogue later, or they may be built with potential for harm designed in, for military purposes. These might destroy only enemies, but you might be that enemy. Others might enjoy the fun and turn on their friends when enemies run short. Emotions might be important in smart machines just as they are in us, but we shouldn’t assume they will be the same emotions or be wired the same way.

Smart machines may want to reproduce. I used this as the core storyline in my sci-fi book. They may have offspring and with the best intentions of their parent AIs, the new generation might decide not to do as they’re told. Again, in human terms, a highly familiar story that goes back thousands of years.

In the Terminator film, it is a military network that becomes self aware and goes rogue that is the problem. I don’t believe digital IT can become conscious, but I do believe reconfigurable analog adaptive neural networks could. The cloud is digital today, but it won’t stay that way. A lot of analog devices will become part of it. In

Ground up data is the next big data

I argued how new self-organising approaches to data gathering might well supersede big data as the foundation of networked intelligence gathering. Much of this could be in the analog domain and much could be neural. Neural chips are already being built.

It doesn’t have to be a military network that becomes the troublemaker. I suggested a long time ago that ‘innocent’ student pranks from somewhere like MIT could be the source. Some smart students from various departments could collaborate to hijack lots of networked kit and see if they can make a conscious machine. Their algorithms or techniques don’t have to be very efficient if they can hijack enough of it. There is a possibility that such an effort could succeed if the right bits are connected into the cloud and accessible via sloppy security, and the ground up data industry might well satisfy that prerequisite soon.

Self-organisation technology will make possible extremely effective combat drones.

Free-floating AI battle drone orbs (or making Glyph from Mass Effect)

Terminators also don’t have to be machines. They could be organic, products of synthetic biology. My own contribution here is smart yogurt: https://timeguide.wordpress.com/2014/08/20/the-future-of-bacteria/

With IT and biology rapidly converging via nanotech, there will be many ways hybrids could be designed, some of which could adapt and evolve to fill different niches or to evade efforts to find or harm them. Various grey goo scenarios can be constructed that don’t have any miniature metal robots dismantling things. Obviously natural viruses or bacteria could also be genetically modified to make weapons that could kill many people – they already have been. Some could result from seemingly innocent R&D by smart machines.

I dealt a while back with the potential to make zombies, remotely controlling people – alive or dead. Zombies are feasible this century too:

https://timeguide.wordpress.com/2012/02/14/zombies-are-coming/ &

Vampires are yesterday, zombies will peak soon, then clouds are coming

A different kind of terminator threat arises if groups of people are linked at consciousness level to produce super-intelligences. We will have direct brain links by mid-century, so much of the second half of the century may be spent in a mental arms race. As I wrote in my blog about the Great Western War, some of the groups will be large and won’t like each other. The rest of us could be wiped out in the crossfire as they battle for dominance. Some people could be linked deeply into powerful machines or networks, and there are no real limits on extent or scope. Such groups could have a truly global presence in networks while remaining superficially human.

Transhumans could be a threat to normal un-enhanced humans too. While some transhumanists are very nice people, some are not, and would consider elimination of ordinary humans a price worth paying to achieve transhumanism. Transhuman doesn’t mean better human, it just means humans with greater capability. A transhuman Hitler could do a lot of harm, but then again so could ordinary everyday transhumanists that are just arrogant or selfish, which is sadly a much bigger subset.

I collated these various varieties of potential future cohabitants of our planet in: https://timeguide.wordpress.com/2014/06/19/future-human-evolution/

So there are numerous ways that smart machines could end up as a threat and quite a lot of terminators that don’t need smart machines.

Outcomes from a terminator scenario range from local problems with a few casualties all the way to total extinction, but I think we are still too focused on the death aspect. There are worse fates. I’d rather be killed than converted while still conscious into one of 7 billion zombies and that is one of the potential outcomes too, as is enslavement by some mad scientist.

 

Ground up data is the next big data

This one sat in my draft folder since February, so I guess it’s time to finish it.

Big Data – I expect you’re as sick of hearing that term as I am. It means gathering loads of data on everything that you, your company, or anything else you can access can detect, measure or record, then analyzing the hell out of it using data mining, an equally irritating term.

I long ago had a quick Twitter exchange with John Hewitt, who suggested “What is sensing but the energy-constrained competition for transmission to memory, as memory is but that for expression?”. Neurons compete to see who gets listened to. Yeah, but I am still not much wiser as to what sensing actually is. Maybe I need a brain upgrade. (It’s like magnets. I used to be able to calculate the magnetic field densities around complex-shaped objects – it was part of my first job in missile design – but even though I could do all the equations around EM theory, even general relativity, I am still no wiser as to how a magnetic field actually becomes a force on an object. I have an office littered with hundreds of neodymium magnets and I spend hours playing with them and I still don’t understand.) I can read about neurons all day but I still don’t understand how a bunch of photons triggering a series of electro-chemical reactions results in me experiencing an image. How does the physical detection become a conscious experience?

Well, I wrote some while back that we could achieve a conscious computer within two years. It’s still two years because nobody has started using the right approach yet. I have to stress the ‘could’, because nobody actually intends to do it in that time frame, but I really believe some half-decent lab could if they tried.  (Putting that into perspective, Kurzweil and his gang at Google are looking at 2029.) That two years estimate relies heavily on evolutionary development, for me the preferred option when you don’t understand how something works, as is the case with consciousness. It is pretty easy to design conscious computers at a black box level. The devil is in the detail. I argued that you could make a conscious computer by using internally focused sensing to detect processes inside the brain, and using a sensor structure with a symmetrical feedback loop. Read it:

We could have a conscious machine by end-of-play 2015

In a nutshell, if you can feel thoughts in the same way as you feel external stimuli, you’d be conscious. I think. The symmetrical feedback loop bit is just a small engineering insight.

The missing link in that is still the same one: how does sensing work? How do you feel?

At a superficial level, you point a sensor at something and it produces a signal in some sort of relationship to whatever it is meant to sense. We can do that bit. We understand that. Your ear produces signals according to the frequencies and amplitudes of incoming sound waves, a bit like a microphone. Just the same so far. However, it is by some undefined processes later that you consciously experience the sound. How? That is the hard problem in AI. It isn’t just me that doesn’t know the answer. ‘How does red feel?’ is a more commonly used variant of the same question.

When we solve that, we will replace big data as ‘the next big thing’. If we can make sensor systems that experience or feel something rather than just producing a signal, that’s valuable already. If those sensors pool their shared experience, another similar sensor system could experience that. Basic data quickly transmutes into experience, knowledge, understanding, insight and very quickly, value, lots of it. Artificial neural nets go some way to doing that, but they still lack consciousness. Simulated neural networks can’t even get beyond a pretty straightforward computation, putting all the inputs into an equation. The true sensing bit is missing. The complex adaptive analog neural nets in our brain clearly achieve something deeper than a man-made neural network.

Meanwhile, most current AI work barks up a tree in a different forest. IBM’s Watson will do great things; Google’s search engine AI will too. But they aren’t conscious and can’t be. They’re just complicated programs running on digital processors, with absolutely zero awareness of anything they are doing. Digital programs on digital computers will never achieve any awareness, no matter how fast the chips are.

However, back in the biological realm, nature manages just fine. So biomimetics offers a lot of hope. We know we didn’t get from a pool of algae to humans in one go. At some point, organisms started moving according to light, chemical gradients, heat, touch. That most basic process of sensing may have started out coupled to internal processes that caused movement without any consciousness. But if we can understand the analog processes (electrochemical, electronic, mechanical) that take the stimulus through to a response, and can replicate it using our electronic technology, we would already have actuator circuits, even if we don’t have any form of sensation or consciousness yet. A great deal of this science has been done already of course. The computational side of most chemical and physical processes can be emulated electronically by some means or another. Actuators will be a very valuable part of the cloud, but we already have the ability to make actuators by more conventional means, so doing it organically or biomimetically just adds more actuation techniques to the portfolio. Valuable but not a terribly important breakthrough.

Looking at the system a bit further along the evolutionary timeline, where eyes start to develop, where the most primitive nervous systems and brains appear, where higher-level processing is obviously occurring and inputs are starting to become sensations, we should be able to see what has changed or is changing. It is the emergence of sensation we need to identify, even if the reaction is still an unconscious reflex. We don’t need to reverse engineer the human brain. Simple organisms are simpler to understand. Feeding the architectural insights we gain from studying those primitive systems into our guided evolution engines is likely to be far faster as a means of generating true machine consciousness and strong AI. That’s how we could develop consciousness in a couple of years rather than 15.

If we can make primitive sensing devices that work like those in primitive organisms, and can respond to specific sorts of sensory input, then that is a potential way of increasing the coverage of cloud sensing and even actuation. It would effectively be a highly distributed direct response system. With clever embedding of emergent phenomena techniques (such as cellular automata, flocking, etc.), it could be a quite sophisticated way of responding to quite complex distributed inputs, avoiding some of the need for big data processing. If we can gather the outputs from these simple sensors and feed them into others, that will be an even better sort of biomimetic response system. That sort of direct experience of a situation is very different from a data mined result, especially if actuation capability is there too. The philosophical question as to whether the inclusion of that second bank of sensors makes the system in any way conscious remains, but it would certainly be very useful and valuable. The architecture we end up with via this approach may look like neurons, and could even be synthetic neurons, but that may be only one solution among many. Biology may have gone the neuron route but that doesn’t necessarily mean it is the only possibility. It may be that we could one day genetically modify bacteria to produce their own organic electronics to emulate the key processes needed to generate sensation, and to power them by consuming nutrients from their environment. I suggested smart yogurt based on this idea many years ago, and believe that it could achieve vast levels of intelligence.

Digitizing and collecting the signals from the system at each stage would generate lots of data, and that may be used by programs to derive other kinds of results, or to relay the inputs to other analog sensory systems elsewhere. (It isn’t always necessary to digitize signals to transmit them, but it helps limit signal degradation and quickly becomes important if the signal is to travel far, and is essential if it is to be recorded for later use or time shifting.) However, I strongly suspect that most of the value in analog sensing and direct response is local, coupled to direct action or local processing and storage.

If we have these sorts of sensors liberally spread around, we’d create a truly smart environment, with local sensing and some basic intelligence able to relay sensation remotely to other banks of sensors elsewhere for further processing or even ultimately consciousness. The local sensors could be relatively dumb like nerve endings on our skin, feeding in signals to a more connected virtual nervous system, or a bit smarter, like neural retinal cells, doing a lot of analog pre-processing before relaying them via ganglia cells, and maybe forming part of a virtual brain. If they are also capable of or connected to some sort of actuation, then we would be constructing a kind of virtual organism, with tendrils covering potentially the whole globe, and able to sense and interact with its environment in an intelligent way.

I use the term virtual not because the sensors wouldn’t be real, but because their electronic nature allows connectivity to many systems, overlapping, hierarchical or distinct. Any number of higher level systems could ‘experience’ them as part of its system, rather as if your fingers could be felt by the entire human population. Multiple higher level virtual organisms could share the same basic sensory/data inputs. That gives us a whole different kind of cloud sensing.
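
The ‘many virtual organisms sharing the same nerve endings’ idea maps naturally onto a publish/subscribe pattern. A minimal sketch, with class and sensor names invented for illustration:

```python
# One physical sensor feed, many overlapping higher-level systems
# 'experiencing' it: a minimal publish/subscribe hub.
from collections import defaultdict
from typing import Callable

class SensorBus:
    """Any number of virtual organisms can subscribe to the same
    physical sensor's readings."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, sensor_id: str, handler: Callable[[float], None]):
        self._subscribers[sensor_id].append(handler)

    def publish(self, sensor_id: str, reading: float):
        for handler in self._subscribers[sensor_id]:
            handler(reading)   # every subscriber receives the same input

bus = SensorBus()
log_a, log_b = [], []
bus.subscribe("skin-07", log_a.append)   # organism A
bus.subscribe("skin-07", log_b.append)   # organism B shares the feed
bus.publish("skin-07", 36.6)
```

The point of the sketch is only that subscription is cheap and unlimited: the same reading lands in both logs, just as one fingertip could in principle be ‘felt’ by any number of higher-level systems.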

By doing processing locally, in the analog domain, and dealing with some of the response locally, a lot of traffic across the network is avoided, along with a lot of remote processing. Any post-processing that does occur can therefore build on a higher-level foundation. A nice side effect of avoiding all the extra transmission and processing is increased environmental friendliness.

So, we’d have a quite different sort of data network, collecting higher quality data, essentially doing by instinct what data mining does with huge server farms and armies of programmers. Cloudy, but much smarter than a straightforward sensor net.

… I think.

It isn’t without risk though. I had a phone discussion yesterday on the dangers of this kind of network. In brief, it’s dangerous.

The future of questions

The late Douglas Adams had many great ideas. One of the best was the computer Deep Thought, built to answer the ultimate question of ‘life, the universe and everything’, which took 7.5 million years to come up with the answer 42. It then had to design a far bigger machine to determine what the question actually was.

Finding the right question is often much harder than answering it. Much of observational comedy is based on asking the simplest questions that we just happen never to have thought of asking before.

A good industrial illustration is in network design. A long time ago I used to design computer communication protocols, actually a pretty easy job for junior engineers. While doing one design, I discovered a flaw in a switch manufacturer’s design that would allow data networks to be pushed into a gross overload situation and crashed repeatedly by a single phone call. I simply asked a question that hadn’t been asked before. My question was “can computer networks be made to resonate dangerously?” That’s the sort of question bridge designers have asked every time they’ve built a bridge since Roman times, with the notable exception of the designers of London’s Millennium Bridge, who had to redesign theirs. All I did was apply a common question from one engineering discipline to another. I did that because I was trained as a systems engineer, not as a specialist. It only took a few seconds to answer in my head and a few hours to prove it via simulation, so it was a pretty simple question to answer (yes they can), but it had taken many years before anyone bothered to ask it.
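
The resonance effect can be illustrated with a toy discrete-time model: if every sender retries on the same fixed timeout, one burst of overload keeps coming back as synchronized spikes instead of dissipating. This is a simplified sketch of the general phenomenon, not the original switch flaw; all parameters are illustrative:

```python
# Toy model of network 'resonance': a single traffic burst, retried by
# everyone after the same fixed timeout, returns as periodic overload.
def simulate(capacity=10, base_load=8, burst=50, timeout=5, ticks=30):
    retries = {}      # tick -> packets scheduled to retransmit then
    overload = []     # unserved packets at each tick
    for t in range(ticks):
        arrivals = base_load + (burst if t == 0 else 0) + retries.pop(t, 0)
        unserved = max(0, arrivals - capacity)
        if unserved:
            # all unserved senders time out together: synchronized retry
            retries[t + timeout] = retries.get(t + timeout, 0) + unserved
        overload.append(unserved)
    return overload

spikes = simulate()
# overload recurs every `timeout` ticks, long after the original burst
```

With randomized backoff instead of a fixed timeout the spikes decorrelate and the queue drains, which is essentially the bridge designer’s remedy of damping the resonance.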

More importantly, that question couldn’t have been asked much before the 20th century, because the basic knowledge or concept of a computer network wasn’t there yet. It isn’t easy to think of a question that doesn’t derive from existent culture (which includes the full extent of fiction of course). As new ideas are generated by asking and answering questions, so the culture gradually extends, and new questions become possible. But we don’t ask them all, only a few. Even with the culture and knowledge we already have at any point, it is possible to ask far more questions, and some of them will lead to very interesting answers and a few of those could change the world.

Last night I had a dream where I was after-dinner speaking to some wealthy entrepreneurs (that sort of thing is my day job). One of them challenged me that ideas were hard to come by and as proof of his point asked me why the wheel had never been reinvented (actually it is often reinvented, just like the bicycle – all decent engineers have reinvented the bicycle to some degree at some point, and if you haven’t yet, you probably will. You aren’t allowed to die until you have). Anyway, I invented the plasma caterpillar track there and then as my answer to show that ideas are ten a penny and that being an entrepreneur is about having the energy and determination to back them, not the idea itself. That’s why I stick with ideas, much less work. Dreams often violate causality, at least mine do, and one department of my brain obviously contrived that situation to air an idea from the R&D department, but in the dream it was still the question that caused the invention. Plasma caterpillar tracks are a dream-class invention. Once daylight appears, you can see that they need work, but in this case, I also can see real potential, so I might do that work, or you can beat me to it. If you do and you get rich, buy me a beer. Sorry, I’m rambling.

How do you ask the right question? How do you even know what area to ask the right question in? How do you discover what questions are possible to ask? Question space may be infinite, but we only have a map of a small area with only a few paths and features on it. Some tools are already known to work well and thousands of training staff use them every day in creativity courses.

One solution is to try to peel back and ask what it is you are really trying to solve. Maybe the question isn’t ‘what logo should we use?’ but ‘what image do we want to present?’, or is it ‘how can we appeal to those customers?’ or ‘how do we improve our sales?’ or ‘how do we get more profit?’ or ‘how can we best serve shareholders?’. Each layer generates different kinds of answers.

Another mechanism I use personally is to matrix solutions and industries, applying questions or solutions from one industry to another, or notionally combining random industries. A typical example: Take TV displays and ask why can’t makeup also change 50 times a second? If the answer isn’t obvious, look at how nature does displays, can some of those techniques be dragged into makeup? Yes, they can, and you could make smart makeup using similar micro-structures to those that butterflies and beetles use and use the self-organisation developing in materials science to arrange the particles automatically.

Dragging solutions and questions from one area to another often generates lots of ideas. Just list every industry sector you can think of (and nature), and all the techs or techniques or procedures they use and cross reference every box against every other. By the time you’ve filled in every box, it will be long overdue to start again because they’ll all have moved on.
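
The cross-referencing exercise itself is trivially mechanisable. A sketch, with stub lists standing in for the real sector and technique inventories:

```python
# Matrix sectors against techniques to mass-produce candidate questions.
from itertools import product

sectors = ["makeup", "agriculture", "bridge design", "data networks"]
techniques = ["50 Hz refresh", "micro-structured colour", "resonance damping"]

questions = [f"Could {sector} borrow {tech} from another industry?"
             for sector, tech in product(sectors, techniques)]
```

Every cell of the matrix becomes a prompt; most will be dead ends, but with real lists running to hundreds of entries per axis, even a small hit rate yields plenty of ideas.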

But however effective they are, these mechanistic techniques only fill in some of the question space, and some can be addressed at least partly by AI. There is still a vast area unexplored, even with existing knowledge. Following paths is fine, but you need to explore off-road too. Group-think and cultural immersion stand in the way of true creativity. You can’t avoid your mind being directed in particular directions that have been ingrained since birth, some cultural and some genetic.

That leads some people to the conclusion that you need young fresh minds rather than older ones, but it isn’t just age that determines creativity, it is susceptibility to authority too, essentially thinking how you’re told to think. Authority isn’t just parents and teachers, or government, but colleagues and friends, mainly your peer group. People often don’t see peers as authority but needing their approval is as much a force as any. I am frequently amused spotting young people on the tube that clearly think they are true individuals with no respect for authority. They stick out a mile because they wear the uniform that all the young people who are individuals and don’t respect authority wear. It’s almost compulsory. They are so locked in by the authority and cultural language of those they want to impress by being different that they all end up being the same. Many ‘creatives’ suffer the same problem, you can often spot them from a distance too, and it’s a fairly safe bet that their actual creativity is very bounded. The fact is that some people are mentally old before they leave school and some die of old age yet still young in mind and heart.

How do you solve that? Well, apart from being young, one aspect of being guided down channels via susceptibility to authority is understanding the rules. If you are too new in a field to know how it works, who everyone is, how the tools work or even most of the basic fundamental knowledge of the field, then you are in an excellent position to ask the right questions. Some of my best ideas have come when I have just started in a new area. I do work in every sector now, so my mind is spread very thinly, and it’s always easy to generate new ideas when you aren’t prejudiced by in-depth knowledge. If I don’t know that something can’t work, that you tried it ages ago and it didn’t, so you put it away and forgot about it, then I might think of it, and the technology might well have moved on since then and it might work now, or in 10 years’ time when I know the tech will catch up. I forget stuff very quickly too, and although that can be a real nuisance, it also minimizes prejudices so can aid this ‘creativity via naivety’.

So you could make sure that staff get involved in other people’s projects regularly, often with those in different parts of the company. Make sure they go on occasional workshops with others to ensure cross-fertilization. Make sure you have coffee areas and coffee times that make people mix and chat. The coffee break isn’t time wasted. It won’t generate new products or ideas every day but it will sometimes.

Cultivating a questioning culture is good too. Just asking obvious questions as often as you can is good. Why is that there? How does it work? What if we changed it? What if the factory burned down tomorrow, how would we rebuild it? Why the hell am I filling in this form?

Yet another one is to give people ‘permission’ to think outside the box. Many people have to follow procedures in their jobs for very good reasons, so they don’t always naturally challenge the status quo, and many even pursue careers that tend to be structured and ordered. There is nothing wrong with that, each to their own, but sometimes people in any area might need to generate some new ideas. A technique I use is to present some really far future and especially seemingly wacky ones to them before making them do their workshop activity. Having listened to some moron talking probable crap and getting away with it gives them permission to generate some wacky ideas too, and some invariably turn out to be good ones.

These techniques can improve everyday creativity but they still can’t generate enough truly out of the box questions to fill in the map.

I think what we need is the random question generator. There are a few random question generators out there now. Some ask mathematical questions to give kids practice before exams. Some just ask random pre-written questions from a database. They aren’t the sort we need though. We won’t be catapulted into a new era of enlightenment by being asked the answer to 73+68, or questions that were already on a list. Maybe I should have explored more pages on Google, but most seemed to bark up the wrong tree. The better approach might be to copy random management jargon generators. Tech jargon ones exist too. Some are excellent fun. They are the sort we need. They combine various words from long categorized lists in grammatically plausible sequences to come out with plausible-sounding terms. I am pretty sure that’s how they write MBA courses.

We can extend that approach to use a full vocabulary. If a question generator asks random questions using standard grammatical rules and a basic dictionary attack (a first stage filtration process), most of the questions filtering through would still not make sense (e.g. ‘why are all moons square?’). But now we have AI engines that can parse sentences and filter out nonsensical ones or ones that simply contradict known facts, and the web is getting a lot better at being machine-comprehensible. Careful though, some of those facts might not be facts any more.

After this AI filtration stage, we’d have a lot of questions that do make sense. A next stage filtration could discover which ones have already been asked and which of those have also been answered, and which of those answers have been accepted as valid. These stages will reveal some questions still awaiting proper responses, or where responses are dubious or debatable. Some will be about trivia, but some will be in areas that might seem to be commercially or socially valuable.
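
The generate-then-filter pipeline described above can be sketched in a few lines. The grammar, word lists and filter rules here are illustrative stand-ins for the real AI stages:

```python
# Sketch of the pipeline: random generation, then sense and novelty filters.
import random

SUBJECTS = ["computer networks", "moons", "makeup", "bacteria"]
PREDICATES = ["resonate dangerously", "be square", "self-organise"]

def generate(rng: random.Random) -> str:
    return f"Can {rng.choice(SUBJECTS)} {rng.choice(PREDICATES)}?"

def makes_sense(question: str) -> bool:
    # stand-in for an AI parser rejecting known nonsense
    return "moons be square" not in question

def novel_questions(n: int, answered: set, seed: int = 0) -> set:
    """Keep generating until n sensible, unanswered questions are found.
    (n must be below the number of valid combinations.)"""
    rng, out = random.Random(seed), set()
    while len(out) < n:
        q = generate(rng)
        if makes_sense(q) and q not in answered:
            out.add(q)
    return out
```

With a full vocabulary, the generation stage is cheap and the value is all in the filters, exactly as the text suggests: discarding nonsense, then discarding the already-asked and already-answered.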

Some of the potentially valuable ones would be suited to machines to answer too. So they could start using spare cycles on machines to increase knowledge that way. Companies already do this internally with their big data programs for their own purposes, but it could work just as well as a global background task for humanity as a whole, with the whole of the net as one of its data sources. Machines could process data and identify potential new markets or products or identify social needs, and even suggest how they could be addressed and which companies might be able to do them. This could increase employment and GDP and solve some social issues that people weren’t even aware of.

Many would not be suited to AI, and humans could search them for inspiration. Maybe we could employ people in developing countries as part of aid programs. That provides income and utilizes the lack of prejudice that comes with unfamiliarity with our own culture. Another approach is to put the growing question database online, and people would make apps that deliver randomly selected questions to inspire you when you’re bored. There would be enough questions to make sure you are usually the first to have ever seen one. When you do, you could rate it as meaningless, don’t care, interesting, or wow that’s a really good question, maybe some other boxes. Obviously you could also produce answers and link to them too. Lower markings would decrease a question’s reappearance probability, whereas really interesting ones would be seen by lots of people, and some would be motivated to produce great answers.

Would it work? How could this be improved? What techniques might lead us to the right questions? Well, I just asked those ones and this blog is my first attempt at an answer. Feel free to add yours.

 

 

The future of Jelly Babies

Another frivolous ‘future of’, recycled from 10 years ago.

I’ve always loved Jelly Babies (Jelly Bears would work as well if you prefer those) and remember that Dr Who used to eat them a lot too. Perhaps we all have a mean streak, but I’m sure most of us sometimes bite off their heads before eating the rest. But that might all change. I must stress at this point that I have never even spoken to anyone from Bassetts, who make the best ones, and I have absolutely no idea what plans they might have, and they might even strongly disapprove of my suggestions, but they certainly could do this if they wanted, as could anyone else who makes Jelly Babies or Jelly Bears or whatever.

There will soon be various forms of edible electronics. Some electronic devices can already be swallowed, including a miniature video camera that can take pictures all the way as it proceeds through your digestive tract (I don’t know whether they bother retrieving them though). Some plastics can be used as electronic components. We also have loads of radio frequency identity (RFID) tags around now. Some tags work in groups, recording whether they have been separated from each other at some point, for example. With nanotech, we will be able to make tags using little more than a few well-designed molecules, and few materials are so poisonous that a few molecules can do you much harm, so they should be sweet-compliant. So extrapolating a little, it seems reasonable to expect that we might be able to eat things that have specially made RFID tags in them. It would make a lot of sense. They could be used on fruit, so that someone buying an apple could ingest the RFID tag on it without concern. And as well as work on RFID tags, many other electronic devices can be made very small, and out of fairly safe materials too.

So I propose that Jelly Baby manufacturers add three organic RFID tags to each jelly baby (legs, head and body), some processing, and a simple communications device. When someone bites the head off a jelly baby, the jelly baby would ‘know’, because the tags would now be separated. The other electronics in the jelly baby could then come into play, setting up a wireless connection to the nearest streaming device and screaming through its loudspeakers. It could also link to the rest of the jelly babies left in the packet, sending out a radio distress call. The other jelly babies, and any other friends they could solicit help from via the internet, could then use their combined artificial intelligence to organise a retaliatory strike on the person’s home computer. They might be able to trash the hard drive, upload viruses, or post a stroppy complaint on social media about the person’s cruelty.
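For fun, the tag-separation trick above can be sketched as code; the class and its alarm message are invented purely for illustration:

```python
# Tongue-in-cheek sketch of the three-tag scheme above: the sweet polls its
# own tags and raises the alarm when one stops responding. All names are
# invented for illustration.
class JellyBaby:
    PARTS = ("head", "body", "legs")

    def __init__(self):
        # each part carries its own tag; all respond while the sweet is intact
        self.tags_present = set(self.PARTS)

    def bite(self, part):
        # the bitten-off part's tag no longer responds
        self.tags_present.discard(part)

    def status(self):
        missing = set(self.PARTS) - self.tags_present
        if missing:
            return f"DISTRESS: {', '.join(sorted(missing))} separated!"
        return "intact"

victim = JellyBaby()
victim.bite("head")
print(victim.status())  # → "DISTRESS: head separated!"
```

The same group-of-tags logic (alarm when members of a set stop answering) is what the real tamper-evident tag groupings mentioned above provide.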

This would make eating jelly babies even more fun than today. People used to spend fortunes going on safari to shoot lions. I presume it was exciting at least in part because there was always a risk that you might not kill the lion and it might eat you instead. With our environmentally responsible attitudes, it is no longer socially acceptable to hunt lions, but jelly babies could be the future replacement. As long as you eat them in the right order, with the appropriate respect and ceremony and so on, you would just enjoy eating a nice sweet. If you get it wrong, your life is trashed for the next day or two. That would level the playing field a bit.

Jelly Baby anyone?

The future of I

Me, myself, I, identity, ego, self, lots of words for more or less the same thing. The way we think of ourselves evolves just like everything else. Perhaps we are still cavemen with better clothes and toys. You may be a man, a dad, a manager, a lover, a friend, an artist and a golfer, and those are all just descendants of caveman, dad, tribal leader, lover, friend, cave drawer and stone thrower. When you play Halo as Master Chief, that is not very different from acting, or putting on a tiger skin for a religious ritual. There have always been many aspects of identity and people have always occupied many roles simultaneously. Technology changes, but it still pushes the same buttons that we evolved hundreds of thousands of years ago.

Will we develop new buttons to push? Will we create any genuinely new facets of ‘I’? I wrote a fair bit about aspects of self when I addressed the related topic of gender, since self perception includes perceptions of how others perceive us and attempts to project chosen identity to survive passing through such filters:

The future of gender

Self is certainly complex. Using ‘I’ simplifies the problem. When you say ‘I’, you are communicating with someone (possibly yourself). The ‘I’ refers to a tailored, context-dependent blend made up of a subset of what you genuinely consider to be you and what you want to project, which may be largely fictional. So in a chat room, where people often have never physically met, one fictional entity is very often talking to another fictional entity, with each side only very loosely coupled to reality. I think that is different from caveman days.

Since chat rooms started, virtual identities have come a long way. As well as acting out manufactured characters such as the heroes in computer games, people fabricate their own characters for a broad range of ‘shared spaces’, designing personalities and acting them out. They may run that personality instance in parallel with many others, possibly dozens at once. Putting on an act is certainly not new, and friends easily detect acts in normal interactions when they have known a real person a long time, but online interaction can mean that the fictional version is presented as the only manifestation of self that the group sees. With no means to know that person by face-to-face contact, the group has to take them at face value and interact with them as such, though they know that may not represent reality.

These designed personalities may be designed to give away as little as possible of the real person wielding them, and may exist for a range of reasons, but in such a case the person inevitably presents a shallow image. Probing below the surface must inevitably lead to leakage of the real self. New personality content must be continually created and remembered if the fictional entity is to maintain a disconnect from the real person. Holding the in-depth memory necessary to recall full personality aspects and history for numerous personalities and executing them is beyond most people. That means that most characters in shared spaces take on at least some characteristics of their owners.

But back to the point. These fabrications should be considered as part of that person. They are an ‘I’ just as much as any other ‘I’. Only their context is different. Those parts may only be presented to subsets of the role population, but by running them, the person’s brain can’t avoid internalizing the experience of doing so. They may be partly separated but they are fully open to the consciousness of that person. I think that as augmented and virtual reality take off over the next few years, we will see their importance grow enormously. As virtual worlds start to feel more real, so their anchoring and effects in the person’s mind must get stronger.

More than a decade ago, AI software agents started inhabiting chat rooms too, and in some cases these ‘bots’ became a sufficient nuisance that they were banned. The front that they present is shallow but can give an illusion of reality. To some degree, they are an extension of the person or people who wrote their code. In fact, some are deliberately designed to represent a person when they are not present. The experiences that they have can’t be properly internalized by their creators, so they are a very limited extension of self. But how long will that be true? Eventually, with direct brain links and transhuman brain extensions into cyberspace, the combined experiences of I-bots may be fully available to consciousness just the same as first-hand experiences.

Then it will get interesting. Some of those bots might be part of multiple people. People’s consciousnesses will start to overlap. People might collect them, or subscribe to them. Much as you might subscribe to my blog, maybe one day, part of one person’s mind, manifested as a bot or directly ‘published’, will become part of your mind. Some people will become absorbed into the experience and adopt so many that their own original personality becomes diluted to the point of disappearance. They will become just an interference pattern of numerous minds. Some will be so infectious that they will spread widely. For many, it will be impossible to die, and for many others, their minds will be spread globally. The hive minds of Dr Who, then later the Borg on Star Trek are conceptual prototypes but as with any sci-fi, they are limited by the imagination of the time they were conceived. By the time they become feasible, we will have moved on and the playground will be far richer than we can imagine yet.

So, ‘I’ has a future just as everything else. We may have just started to add extra facets a couple of decades ago, but the future will see our concept of self evolve far more quickly.

Postscript

I got asked by a reader whether I worry about this stuff. Here is my reply:

It isn’t the technology that worries me so much as the fact that humanity doesn’t really have any fixed anchor to keep human nature in place. Genetics fixed our biological nature, and our values and morality were largely anchored by the main religions. We in the West have thrown our religion in the bin and are already seeing a 30-year cycle in moral judgments which puts our value sets on something of a random walk, with no destination, the current direction governed solely by media interpretation of, and political reaction to, the happenings of the day. Political correctness enforces subscription to that value set even more strictly than any bishop ever forced religious compliance. Anyone who thinks religion has gone away just because people don’t believe in God any more is blind.

Then as genetics technology truly kicks in, we will be able to modify some aspects of our nature. Who knows whether some future busybody will decree that a particular trait must be filtered out because it doesn’t fit his or her particular value set? Throwing AI into the mix as a new intelligence alongside us will introduce another degree of freedom. So there are already several forces acting on us in pretty randomized directions that could combine to drag us quickly anywhere. Then add the stuff above that allows us to share and swap personality? Sure, I worry about it. We are like young kids being handed a big chemistry set for Christmas without the instructions, not knowing that adding the blue stuff to the yellow stuff and setting it alight will go bang.

I am certainly no technotopian. I see the enormous potential that the tech can bring and it could be wonderful and I can’t help but be excited by it. But to get that you need to make the right decisions, and when I look at the sorts of leaders we elect and the sorts of decisions that are made, I can’t find the confidence that we will make the right ones.

On the good side, engineers and scientists are usually smart and can see most of the issues and prevent most of the big errors by using common industry standards, so there is a parallel self-regulatory system in place that politicians rarely have any interest in. On the other side, those smart guys will unfortunately usually follow the same value sets as the rest of the population. So we’re quite likely to avoid major accidents like blowing ourselves up or being taken over by AIs, but we’re unlikely to avoid the random walk values problem, and that will be our downfall.

So it could be worse, but it could be a whole lot better too.


The future of death

This one is a cut and paste from my book You Tomorrow.

Although age-related decline can be postponed significantly, it will eventually come. But that is just biological decline. In a few decades, people will have their brains linked to the machine world and much of their mind will be online, and that opens up the strong likelihood that death is not inevitable, and in fact anyone who expects to live past 2070 biologically (and rich people who can get past 2050) shouldn’t need to face death of their mind. Their bodies will eventually die, but their minds can live on, and an android body will replace the biological one they’ve lost.

Death used to be one of the great certainties of life, along with taxes. But unless someone under 35 now is unfortunate enough to die early from accident or disease, they have a good chance of not dying at all. Let’s explore that.

Genetics and other biotechnology will work with advanced materials technology and nanotechnology to limit and even undo damage caused by disease and age, keeping us young for longer, eventually perhaps forever. It remains to be seen how far we get with that vision in the next century, but we can certainly expect some progress in that area. We won’t get biological immortality for a good while, but if you can move into a high quality android body, who cares?

With this combination of technologies locked together with IT in a positive feedback loop, we will certainly eventually develop the technology to enable a direct link between the human brain and the machine, i.e. the descendants of today’s computers. On the computer side, neural networks are already the routine approach to many problems and are based on many of the same principles that neurons in the brain use. As this field develops, we will be able to make a good emulation of biological neurons. As it develops further, it ought to be possible on a sufficiently sophisticated computer to make a full emulation of a whole brain. Progress is already happening in this direction.

Meanwhile, on the human side, nanotechnology and biotechnology will also converge so that we will have the capability to link synthetic technology directly to individual neurons in the brain. We don’t know for certain that this is possible, but it may be possible to measure the behaviour of each individual neuron using this technology and to signal this behaviour to the brain emulation running in the computer, which could then emulate it. Other sensors could similarly measure and allow emulation of the many chemical signalling mechanisms that are used in the brain. The computer could thus produce an almost perfect electronic equivalent of the person’s brain, neuron by neuron. This gives us two things.

Firstly, by doing this, we would have a ‘backup’ copy of the person’s brain, so that in principle, they can carry on thinking, and effectively living, long after their biological body and brain has died. At this point we could claim effective immortality. Secondly, we have a two way link between the brain and the computer which allows thought to be executed on either platform and to be signalled between them.

There is an important difference between the brain and the computer that we may be able to capitalise on. In the brain’s neurons, signals travel at hundreds of metres per second. In a free-space optical connection, they travel at hundreds of millions of metres per second, a million times faster. Switching speeds are similarly faster in electronics. In the brain, cells are also very large compared to the electronic components of the future, so we may be able to reduce the distances over which the signals have to travel by another factor of 100 or more. But this assumes we take an almost exact representation of brain layout. We might be able to do much better than that. In the brain, we don’t appear to use all the neurons (some are either redundant or have an unknown purpose), and those that we do use in a particular process are often in groups that are far apart. Reconfigurable hardware will be the norm in the 21st century and we may be able to optimize the structure for each type of thought process. Rearranging the useful neurons into more optimal structures should give another huge gain.

This means that our electronic emulation of the brain should behave in a similar way but much faster – maybe billions of times faster! It may be able to process an entire lifetime’s thoughts in a second or two. And even then there are several further opportunities for vast improvement. The brain is limited in size by a variety of biological constraints. Even if there were more space available, it could not be made much more efficient by making it larger, because the need for cooling, energy and oxygen supply would take up ever more space and make the distances between processors larger. In the computer, these constraints are much more easily addressed, so we could add large numbers of additional neurons to give more intelligence. In the brain, many learning processes stop soon after birth or in childhood. There need be no such constraints in computer emulations, so we could learn new skills as easily as in our infancy. And best of all, the computer is not limited to the memory of a single brain – it has access to all the world’s information and knowledge, and huge amounts of processing outside the brain emulation. Our electronic brain could be literally the size of the planet – the whole internet and all the processing and storage connected to it.
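The speed-up claim can be checked with rough arithmetic. The figures below are order-of-magnitude assumptions drawn from the paragraphs above (signal speed ratio, distance reduction), plus a guessed factor for layout optimization:

```python
# Back-of-envelope check of the speed-up above. All figures are
# order-of-magnitude assumptions, not measurements.
neural_signal_speed = 2e2    # m/s: hundreds of metres per second
optical_signal_speed = 2e8   # m/s: hundreds of millions of metres per second
speed_ratio = optical_signal_speed / neural_signal_speed  # 1e6, a million times

distance_factor = 100        # smaller components => shorter signal paths
layout_factor = 10           # guessed gain from restructuring useful neurons

total = speed_ratio * distance_factor * layout_factor
print(f"~{total:.0e}x faster")  # → ~1e+09x faster: billions of times
```

So the ‘billions of times faster’ figure follows from multiplying a million-fold signal speed gain by the distance and layout gains, each rough but plausible.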

With all these advances, the computer emulation of the brain could be many orders of magnitude superior to its organic equivalent, and yet it might be connected in real time to the original. We would have an effective brain extension in cyberspace, one that gives us immeasurably improved performance and intelligence. Most of our thoughts might happen in the machine world, and because of the direct link, we might experience them as if they had occurred inside our head.

Our brains are in some ways equivalent in nature to how computers were before the age of the internet. They are certainly useful, but communication between them is slow and inefficient. However, when our brains are directly connected to machines, and those machines are networked, then everyone else’s brains are also part of that network, so we have a global network of people’s brains, all connected together, with all the computers too.

So we may soon eradicate death. By the time today’s children are due to die, they will have been using brain extensions for many years, and backups will be taken for granted. Death need not be traumatic for our relatives. They will soon get used to us walking around in an android body. Funerals will be much more fun as the key participant makes a speech about what they are expecting from their new life. Biological death might still be unpleasant, but it need no longer be a career barrier.

In terms of timescales, rich people might have this capability by 2050 and the rest of us some time before 2070. Your life expectancy biologically is increasing every year, so even if you are over 35, you have a pretty good chance of surviving long enough to gain. Half the people alive today are under 35 and will almost certainly not die fully. Many more are under 50 and some of them will live on electronically too. If you are over 50, the chances are that you will be the last generation of your family ever to have a full death.

As a side-note, there are more conventional ways of achieving immortality. Some Egyptian pharaohs are remembered because of their great pyramids. A few philosophers, artists, engineers and scientists have left such great works that they are remembered millennia later. And of course, on a smaller scale, for the rest of us, making an impression on those around us keeps your memory going a few generations. Writing a book immortalises your words. And you may have a multimedia headstone on your grave, or one that at least links into augmented reality to bring up your old web page or social networking site profile. But frankly, I am with Woody Allen on this one: “I don’t want to achieve immortality through my work; I want to achieve immortality through not dying”. I just hope the technology arrives early enough.

The future of creativity

Another future of… blog.

I can play simple tunes on a guitar or keyboard. I compose music, mostly just bashing out some random sequences till a decent one happens. Although I can’t offer any Mozart-level creations just yet, doing that makes me happy. Electronic keyboards raise an interesting point for creativity. All I am actually doing is pressing keys, I don’t make sounds in the same way as when I pick at guitar strings. A few chips monitor the keys, noting which ones I hit and how fast, then producing and sending appropriate signals to the speakers.

The point is that I still think of it as my music, even though all I am doing is telling a microprocessor what to do on my behalf. One day, I will be able to hum a few notes or tap a rhythm with my fingers to give the computer some idea of a theme, and it will produce beautiful works based on my idea. It will still be my music, even when 99.9% of the ‘creativity’ is done by an AI. We will still think of the machines and software just as tools, and we will still think of the music as ours.

The other arts will be similarly affected. Computers will help us build on the merest hint of human creativity, enhancing our work and enabling us to do much greater things than we could achieve by our raw ability alone. I can’t paint or draw for toffee, but I do have imagination. One day I will be able to produce good paintings, design and make my own furniture, design and make my own clothes. I could start with a few downloads in the right ballpark. The computer will help me to build on those and produce new ones along divergent lines. I will be able to guide it with verbal instructions. ‘A few more trees on the hill, and a cedar in the foreground just here, a bit bigger, and move it to the left a bit’. Why buy a mass produced design when you can have a completely personal design?

These advances are unlikely to make a big dent in conventional art sales. Professional artists will always retain an edge, maybe even by producing the best seeds for computer creativity. Instead, computer assisted and computer enhanced art will make our lives more artistically enriched, and ourselves more fulfilled as a result. We will be able to express our own personalities more effectively in our everyday environment, instead of just decorating it with a few expressions of someone else’s.

However, one factor that seems to be overrated is originality. Anyone can come up with many original ideas in seconds. Stick a safety pin in an orange and tie a red string through the loop. There, can I have my Turner Prize now? There is an infinitely large field to pick from and only a small number of ideas have ever been realized, so coming up with something from the infinite set that still hasn’t been thought of is easy and therefore of little intrinsic value. Ideas are ten a penny. It is only when an idea is combined with the judgement or skill to make it real that it becomes valuable. Here again, computers will be able to assist. Analyzing a great many existing pictures or works of art should give some clues as to what most people like and dislike. IBM’s new neural chip is the sort of development that will accelerate this trend enormously. Machines will learn how to decide whether a picture is likely to be attractive to people or not. It should be possible for a computer to automatically create new pictures in a particular style or taste, by either recombining appropriate ideas, or just randomly mixing ideas together and then filtering the new pictures according to ‘taste’.
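That generate-and-filter loop can be sketched minimally, with a placeholder ‘taste’ function standing in for a model trained on human ratings; the motif list and scoring are invented purely for illustration:

```python
import random

# Minimal generate-and-filter sketch. The motif list and the scoring
# function are invented stand-ins; a real 'taste' model would be trained
# on human ratings of existing works.
MOTIFS = ["hill", "cedar", "river", "safety pin", "orange"]

def generate():
    # naive recombination: a random subset of existing motifs
    return random.sample(MOTIFS, k=random.randint(1, len(MOTIFS)))

def taste_score(picture):
    # placeholder taste: count landscape motifs
    return sum(1 for m in picture if m in ("hill", "cedar", "river"))

candidates = [generate() for _ in range(100)]
keepers = [p for p in candidates if taste_score(p) >= 2]  # filter by 'taste'
```

The point of the sketch is the division of labour: generation is cheap and random, and almost all of the value comes from the learned filter.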

Augmented reality and other branches of cyberspace offer greater flexibility. Virtual objects and environments do not have to conform to laws of physics, so more elaborate and artistic structures are possible. Adding in 3D printing extends virtual graphics into the physical domain, but physics will only apply to the physical bits, and with future display technology, you might not easily be able to see where the physical stops and the virtual begins.

So, with machine assistance, human creativity will no longer be as limited by personal skill and talent. Anyone with a spark of creativity will be able to achieve great works. So long as you aren’t competitive about it (someone else will always be able to do it better than you), your world will feel nicer, more friendly and personal, you’ll feel more in control and empowered, and your quality of life will improve. Instead of just making do with what you can buy, you’ll be able to decide what your world looks, sounds, feels, tastes and smells like, and design personality into anything you want too.

The future of bacteria

Bacteria have already taken the prize for the first synthetic organism. Craig Venter’s team claimed the first synthetic bacterium in 2010.

Bacteria are being genetically modified for a range of roles, such as converting materials for easier extraction (e.g. coal to gas, or concentrating elements in landfill sites to make extraction easier), making new food sources (alongside algae), carbon fixation, pollutant detection and other sensory roles, decorative, clothing or cosmetic roles based on color changing, special surface treatments, biodegradable construction or packing materials, self-organizing printing… There are many others, even ignoring all the military ones.

I have written many times on smart yogurt now and it has to be the highlight of the bacterial future, one of the greatest hopes as well as potential danger to human survival. Here is an extract from a previous blog:

Progress is continuing to harness bacteria to make components of electronic circuits (after which the bacteria are dissolved to leave the electronics). Bacteria can also have genes added to emit light or electrical signals. They could later be enhanced so that as well as being able to fabricate electronic components, they could power them too. We might add various other features too, but eventually, we’re likely to end up with bacteria that contain electronics and can connect to other bacteria nearby that contain other electronics to make sophisticated circuits. We could obviously harness self-assembly and self-organisation, which are also progressing nicely. The result is that we will get smart bacteria, collectively making sophisticated, intelligent, conscious entities of a wide variety, with lots of sensory capability distributed over a wide range. Bacteria Sapiens.

I often talk about smart yogurt using such an approach as a key future computing solution. If it were to stay in a yogurt pot, it would be easy to control. But it won’t. A collective bacterial intelligence such as this could gain a global presence, and could exist in land, sea and air, maybe even in space. Allowing lots of different biological properties could allow colonization of every niche. In fact, the first few generations of bacteria sapiens might be smart enough to design their own offspring. They could probably buy or gain access to equipment to fabricate them and release them to multiply. It might be impossible for humans to stop this once it gets to a certain point. Accidents happen, as do rogue regimes, terrorism and general mad-scientist type mischief.

Transhumanists seem to think their goal is the default path for humanity, that transhumanism is inevitable. Well, it can’t easily happen without going first through transbacteria research stages, and that implies that we might well have to ask transbacteria for their consent before we can develop true transhumans.

Self-organizing printing is a likely future enhancement for 3D printing. If a 3D printer can print bacteria (onto the surface of another material being laid down, as an ingredient in a suspension used as the extrusion material itself, or even as a bacterial paste), and the bacteria can then generate or modify other materials, or use self-organisation principles to form special structures or patterns, then the range of objects that can be printed will extend greatly. In some cases, the bacteria may be involved in the construction and then die or be dissolved away.

Estimating IoT value? Count ALL the beans!

In this morning’s news:

http://www.telegraph.co.uk/technology/news/11043549/UK-funds-development-of-world-wide-web-for-machines.html

£1.6M investment by UK Technology Strategy Board in Internet-of-Things HyperCat standard, which the article says will add £100Bn to the UK economy by 2020.

Gartner says that IoT has reached the hype peak of their adoption curve and I agree. Connecting machines together, and especially adding networked sensors, will certainly increase technology capability across many areas of our lives, but the appeal is often overstated and the dangers often overlooked. Value should not be measured in purely financial terms either. If you value health, wealth and happiness, don’t just measure the wealth. We value other things too of course. It is too tempting just to count the most conspicuous beans. For IoT, which really just adds a layer of extra functionality onto an already technology-rich environment, that is rather like estimating the value of a chili con carne by counting the kidney beans in it.

The headline negatives of privacy and security have often been addressed so I don’t need to explore them much more here, but let’s look at a couple of typical examples from the news article. Allowing remotely controlled washing machines will obviously impact your personal choice of laundry scheduling. The many similar shifts of control of your life to other agencies will all add up. Another one: ‘motorists could benefit from cheaper insurance if their vehicles were constantly transmitting positioning data’. Really? Insurance companies won’t want to earn less, so motorists on average will give them at least as much profit as before. What will happen is that insurance companies will enforce driving styles and car maintenance regimes that reduce your likelihood of a claim, or use that data to avoid paying out in some cases. If you have to rigidly obey lots of rules all of the time then driving will become far less enjoyable. Having to remember to check the tyre pressures and oil level every two weeks on pain of having your insurance voided is not one of the beans listed in the article, but it is entirely analogous to the typical home insurance rule that all your windows must have locks, and they must all be locked and the keys hidden out of sight before the insurer will pay up on a burglary.

Overall, IoT will add functionality, but it certainly will not always be used to improve our lives. Look at the way the web developed. Think about the cookies and the pop-ups and the tracking and the incessant virus protection updates needed because of the extra functions built into browsers. You didn’t want those; they were added to increase capability and revenue for the paying site owners, not for the non-paying browsers. IoT will be the same. Some things will make minor aspects of your life easier, but the price will be that you are far more controlled, with far less freedom, less privacy and less security. Most of the data collected for business use or to enhance your life will also be available to government and police. We see every day the nonsense of the statement that if you have done nothing wrong, then you have nothing to fear. If you buy all that home kit with energy monitoring etc., how long before the data is hacked and you get put on militant environmentalist blacklists because you leave devices on standby? For every area where IoT will save you time or money or improve your control, there will be many others where it does the opposite, forcing you to do more security checks, spend more money on car, home and IoT maintenance, spend more time following administrative procedures and even follow health regimes enforced by government or insurance companies. IoT promises milk and honey, but will deliver it only as part of a much bigger and unwelcome lifestyle change. Sure, you can have a little more control, but only if you relinquish much more control elsewhere.

As IoT starts rolling out, these and many more issues will hit the press, and people will start to realise the downside. That will reduce the attractiveness of owning or installing such stuff, or subscribing to services that use it. There will be a very significant drop in the economic value from the hype. Yes, we could do it all and get the headline economic benefit, but the cost of greatly reduced quality of life is too high, so we won’t.

Counting the kidney beans in your chili is fine, but it won’t tell you how hot it is, and when you start eating it you may decide the beans just aren’t worth the pain.

I still agree that IoT can be a good thing, but the evidence of web implementation suggests we’re more likely to go through decades of abuse and grief before we get the promised benefits. Being honest at the outset about the true costs and lifestyle trade-offs will help people decide, and maybe we can get to the good times faster if that process leads to better controls and better implementation.

Ultra-simple computing: Part 2

Chip technology

My everyday PC uses an Intel Core i7-3770 processor running at 3.4GHz. It has 4 cores running 8 threads on 1.4 billion 22nm transistors, on just 160mm^2 of chip. It has an NVIDIA GeForce GTX660 graphics card and 16GB of main memory. It is OK most of the time, but although processor and memory utilisation rarely rise above 30%, its response is often far from instant.

Let me compare it briefly with my (subjectively, at the time of ownership) best ever computer, my Macintosh IIfx, RIP, which I got in 1991, the computer on which I first documented both the active contact lens and text messaging, and on which I suppose I also started this project. The Mac IIfx ran a 68030 processor at 40MHz, with 273,000 transistors, 4MB of RAM and an 80MB hard drive. Every computer I’ve used since then has given me extra function at the expense of lower performance, wasted time and frustration.

Although its OS is stored on a 128GB solid state disk, my current PC takes several seconds longer to boot than my Mac IIfx did – that machine went from cold to fully operational in 14 seconds – yes, I timed it. On my PC today, clicking a browser icon and reaching the first page usually takes a few seconds. Clicking on a Word document back then took a couple of seconds to open. It still does now. Both computers gave real-time response to typing, and both featured occasional unexplained delays. I didn’t have any need for a firewall or virus checkers back then, but now I run tedious maintenance routines a few times every week. (The only virus I had before 2000 was nVIR, which came on the Mac system disks.) I still don’t get many viruses, but the significant time I spend avoiding them has to be counted too.

Going back further still, to my first ever computer in 1981: it was an Apple II, with only 9,000 transistors running at 2.5MHz and a piddling 32kB of memory. The OS was tiny. Nevertheless, on it I wrote my own spreadsheet, graphics programs, lens design programs, and an assortment of missile, aerodynamic and electromagnetic simulations. Using the same 22nm transistors as the i7, you could fit 1,000 of these processors in a single square millimetre!

Of course some things are better now. My PC has amazing graphics and image processing capabilities, though I rarely make full use of them. My PC allows me to browse the net (and see video ads). If I don’t mind telling Google who I am I can also watch videos on YouTube, or I could tell the BBC or some other video provider who I am and watch theirs. I could theoretically play quite sophisticated computer games, but it is my work machine, so I don’t. I do use it as a music player or to show photos. But mostly, I use it to write, just like my Apple 2 and my Mac Fx. Subjectively, it is about the same speed for those tasks. Graphics and video are the main things that differ.

I’m not suggesting going back to an Apple II or even a IIfx. However, using i7 chip technology, a 9,000-transistor processor running 1,360 times faster and taking up 1/1000th of a square millimetre would still let me write documents and simulations, but would be blazingly fast compared to my old Apple II. I could fit another 150,000 of them in the same chip space as the i7. Or I could have 5,128 Mac IIfx processors running at 85 times their normal speed. Or you could have a single Mac IIfx-class machine running 85 times faster than the original, for a tiny fraction of the price. There are certainly a few promising trees in the forest that nobody seems to have barked up. As an interesting aside, that 22nm-tech Apple II chip would be only ten times bigger than a skin cell – probably less now, since my PC is already several months old.
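The arithmetic behind those figures is easy to check. Here is a quick sketch using only the numbers quoted above (the 9,000-transistor and 2.5MHz Apple II figures are the ones used in this article, so treat the outputs as rough ratios rather than official specs):

```python
# Back-of-envelope check of the processor-density claims above.
I7_TRANSISTORS = 1.4e9       # Core i7-3770, as quoted above
I7_AREA_MM2 = 160            # die area, mm^2
I7_CLOCK_HZ = 3.4e9

APPLE2_TRANSISTORS = 9_000   # figure used in this article
APPLE2_CLOCK_HZ = 2.5e6

MACFX_TRANSISTORS = 273_000  # 68030, as quoted above
MACFX_CLOCK_HZ = 40e6

density = I7_TRANSISTORS / I7_AREA_MM2                # transistors per mm^2
apple2s_per_mm2 = density / APPLE2_TRANSISTORS
print(f"Apple II processors per mm^2: {apple2s_per_mm2:.0f}")  # ~970, i.e. roughly 1000

print(f"Apple II processors per i7 die: {I7_TRANSISTORS / APPLE2_TRANSISTORS:,.0f}")
print(f"Speed-up over the Apple II: {I7_CLOCK_HZ / APPLE2_CLOCK_HZ:.0f}x")  # 1360x

print(f"Mac IIfx processors per i7 die: {I7_TRANSISTORS / MACFX_TRANSISTORS:,.0f}")  # ~5,128
print(f"Speed-up over the Mac IIfx: {I7_CLOCK_HZ / MACFX_CLOCK_HZ:.0f}x")            # 85x
```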

At the very least, that raises the question of what all this extra processing is needed for, and why there is still ever any noticeable delay in doing anything in spite of it. Each of those earlier machines was perfectly adequate for everyday tasks such as typing or spreadsheeting. All the extra speed has an impact on only some things, and most of it is being wasted by poor code. Some of the delays we had 20 and 30 years ago still affect us just as badly today.

The main point though is that if you can make thousands of processors on a standard sized chip, you don’t have to run multitasking. Each task could have a processor all to itself.

The operating system currently runs programs to check all the processes that need attention, determine their priorities, schedule processing for them, and copy their data in and out of memory. That is not needed if each process can have its own dedicated processor and memory all the time. There are lots of ways of using basic physics to allocate processes to processors, relying on basic statistics to ensure that collisions rarely occur. No code is needed at all.
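I won't specify which physical mechanism would do the allocation here, but the statistics it relies on are easy to sketch. As a stand-in for the physics, assume each new task simply grabs one of N processors uniformly at random; the chance of a clash then follows the classic birthday-problem calculation:

```python
def p_no_collision(n_processors: int, n_tasks: int) -> float:
    """Probability that n_tasks, each grabbing a uniformly random
    processor, all land on distinct processors (birthday problem)."""
    p = 1.0
    for k in range(n_tasks):
        p *= (n_processors - k) / n_processors
    return p

# With ~150,000 single-task processors on one die (the figure above)
# and, say, 100 simultaneously active tasks:
n, tasks = 150_000, 100
collision = 1 - p_no_collision(n, tasks)
print(f"P(at least one collision) = {collision:.2%}")  # ~3% per batch of 100
```

About a 3% chance that some pair of those 100 tasks lands on the same processor, and a simple physical retry on collision would make the residual failure rate negligible – with no scheduler code involved at all.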

An ultra-simple computer could therefore have a large pool of powerful, free processors, each with its own memory, allocated on demand using simple physical processes. (I will describe a few options for the basic physics processes later.) With no competition for memory or processing, a lot of delays would be eliminated too.

Ultra-simple computing: Part 1

Introduction

This is the first part of a techie series. If you aren’t interested in computing, move along, nothing to see here. It is a big topic, so I will cover it in several manageable parts.

Like many people, I spent a good few hours changing passwords after the Heartbleed problem, and then again after eBay’s screw-up. It is a futile task in some ways, because passwords are no longer a secure defense anyway. A decent hacker with a decent computer can crack hundreds of passwords in an hour, so unless an account is locked after a few failed attempts – and many aren’t – passwords only keep out casual observers and the most amateurish hackers.
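Some rough arithmetic behind that claim, for offline cracking of a leaked password database. The 10 billion guesses per second rate is an illustrative assumption (real rates vary enormously with the hash algorithm and hardware), but it shows why short passwords are hopeless:

```python
GUESSES_PER_SEC = 10e9  # assumed offline GPU cracking rate; illustrative only

def seconds_to_exhaust(alphabet_size: int, length: int) -> float:
    """Worst-case time to try every password of the given length."""
    return alphabet_size ** length / GUESSES_PER_SEC

print(f"8 lowercase letters:    {seconds_to_exhaust(26, 8):.0f} seconds")       # ~21 s
print(f"8 mixed case + digits:  {seconds_to_exhaust(62, 8) / 3600:.1f} hours")  # ~6 h
print(f"12 mixed case + digits: {seconds_to_exhaust(62, 12) / 3.15e7:.0f} years")
```

Online guessing against a site that locks accounts after a few failures is many orders of magnitude slower, which is exactly why the lockout matters.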

The need for simplicity

A lot of problems are caused by the complexity of today’s software, making it impossible to find every error and hole. Weaknesses have been added to operating systems, office automation tools and browsers to increase functionality for only a few users, even though they add little for most of us most of the time. I don’t think I have ever executed a macro in Microsoft Office, for example, and I’ve certainly never used print merge or many of its other publishing and formatting features. I was perfectly happy with Word 93, and most things added since then (apart from the real-time spelling and grammar checker) have added irrelevant and worthless features at the expense of safety. I can see very little user advantage in allowing pop-ups on web sites, or tracking cookies. Their primary purpose is to learn about us to make marketing more precise. I can see why they want that, but I can’t see why I should. Users generally want pull marketing, not push, and pull doesn’t need cookies; there are better ways of sending your standard data when needed, if that’s what you want to do. There are many better ways of automating logons to regular sites if that is needed.

In a world where more of the people who wish us harm are online, it is time to design an alternative platform that is designed specifically to be secure from the start, with no features added that allow remote access or control without deliberate, explicit permission. It can be done. A machine with a strictly limited set of commands and access can be made secure, and can even be networked safely. We may have to sacrifice a few bells and whistles, but I don’t think we will need to sacrifice many that we actually want or need. It may be less easy to track us and advertise at us, or to offer remote machine analysis tools, but I can live with that and you can too. Almost all the services we genuinely want could still be provided. You could still browse the net, still buy stuff, still play games with others, and socialize. But you wouldn’t be able to install or run code on someone else’s machine without their explicit knowledge. Every time you turn the machine on, it would be squeaky clean. That’s already a security benefit.

I call it ultra-simple computing. It is based on the principle that simplicity and a limited command set make a machine easy to understand and easy to secure; that basic physics and logic are more reliable than severely bloated code; that enough is enough, and more than that is too much.

We’ve been barking up the wrong trees

There are a few things you take for granted in your IT that needn’t be so.

Your PC has an extremely large operating system. So does your tablet, your phone, games console… That isn’t really necessary. It wasn’t always the case and it doesn’t have to be the case tomorrow.

Your operating system still assumes that your PC has only a few processing cores and has to allocate priorities and run-time on those cores for each process. That isn’t necessary.

Although you probably use some software in the cloud, you probably also download a lot of software off the net or install from a CD or DVD. That isn’t necessary.

You access the net via an ISP. That isn’t necessary. Almost unavoidable at present, but only due to bad group-think. Really, it isn’t necessary.

You store data and executable code in the same memory and therefore have to run analysis tools that check all the data in case some is executable. That isn’t necessary.

You run virus checkers and firewalls to prevent unauthorized code execution or remote access. That isn’t necessary.
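The data/code separation point above is essentially a strict Harvard architecture: instructions live in one memory, data in another, and the fetch hardware only ever reads the former. A toy sketch of the principle (the three-opcode machine here is invented purely for illustration):

```python
class HarvardMachine:
    """Toy machine with physically separate program and data memories.
    Instructions come only from the program store; data is never executed."""

    def __init__(self, program):
        self._program = tuple(program)  # loaded once, effectively read-only
        self.data = {}                  # writable data memory
        self.acc = 0

    def receive(self, key, value):
        """All external input lands in data memory only."""
        self.data[key] = value

    def run(self):
        for op, arg in self._program:   # instruction fetch: program store only
            if op == "LOAD":
                self.acc = self.data.get(arg, 0)
            elif op == "ADD":
                self.acc += self.data.get(arg, 0)
            elif op == "STORE":
                self.data[arg] = self.acc
            else:
                raise ValueError(f"unknown opcode {op!r}")

m = HarvardMachine([("LOAD", "x"), ("ADD", "y"), ("STORE", "sum")])
m.receive("x", 2)
m.receive("y", 3)
m.receive("evil", ("STORE", "pwned"))  # hostile input stays inert data
m.run()
print(m.data["sum"])  # 5 – the 'evil' tuple was never fetched as an instruction
```

However malformed the inputs, nothing placed in data memory can ever reach the instruction fetch path, which is exactly the property that makes the virus-scanning of data unnecessary.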

Overall, we live with an IT system that is severely unfit for purpose. It is dangerous, bloated, inefficient, excessively resource and energy intensive, extremely fragile and yet vulnerable to attack via many routes, designed with the user as a lower priority than suppliers, with the philosophy of functionality at any price. The good news is that it can be replaced by one that is absolutely fit for purpose, secure, invulnerable, cheap and reliable, resource-efficient, and works just fine. Even better, it could be extremely cheap so you could have both and live as risky an online life in those areas that don’t really matter, knowing you have a safe platform to fall back on when your risky system fails or when you want to do anything that involves your money or private data.

Switching people off

A very interesting development has been reported in the discovery of how consciousness works, where neuroscientists stimulating a particular brain region were able to switch a woman’s state of awareness on and off. They said: “We describe a region in the human brain where electrical stimulation reproducibly disrupted consciousness…”

http://www.newscientist.com/article/mg22329762.700-consciousness-onoff-switch-discovered-deep-in-brain.html

The region of the brain concerned was the claustrum, and apparently nobody had tried stimulating it before, although Francis Crick and Christof Koch had suggested the region would likely be important in achieving consciousness. Apparently, the woman involved in this discovery was also missing some of her hippocampus, and that may be a key factor, but they don’t know for sure yet.

Mohamed Koubeissi and his team at George Washington University in Washington DC were investigating her epilepsy, and stimulated her claustrum area with high-frequency electrical impulses. When they did so, the woman lost consciousness, no longer responding to any audio or visual stimuli, just staring blankly into space. They verified that she was not showing any signs of epileptic activity at the time, and repeated the experiment with similar results over two days.

The team urges caution and recommends not jumping to too many conclusions. They did note the obvious potential as an anesthesia substitute, if it can be made generally usable.

As a futurologist, it is my job to look as far down the road as I can see, and imagine as much as I can. Then I filter out all the stuff that is nonsensical, or that doesn’t have a decent potential social or business case, or, as in this case, where the research team itself suggests it is too early to draw conclusions. I make exceptions where it seems that researchers are being over-cautious, or covering their asses, or being PC or unimaginative, but I have no evidence of that in this case. However, the other good case for making an exception is where it is good fun to jump to conclusions. Anyway, it is Saturday, I’m off work, so in the great words of Dr Emmett Brown in ‘Back to the Future’: “Well, I figured, what the hell.”

OK, IF it works for everyone without removing parts of the brain, what will we do with it and how?

First, it is reasonable to assume that we can produce electrical stimulation at specific points in the brain by using external kit. Trans-cranial magnetic stimulation might work, or perhaps implants may be possible using injection of tiny particles that migrate to the right place rather than needing significant surgery. Failing those, a tiny implant or two via a fine needle into the right place ought to do the trick. Powering via induction should work. So we will be able to produce the stimulation, once the sucker victim subject has the device implanted.

I guess that could happen voluntarily, or via a court ordered protective device, as a condition of employment or immigration, or conditional release from prison, or a supervision order, or as a violent act or in war.

Imagine if government demands a legal right to access it, for security purposes and to ensure your comfort and safety, of course.

If you think 1984 has already gone too far, imagine a government or police officer that can switch you off if you are saying or thinking the wrong thing. Automated censorship devices could ensure that nobody discusses prohibited topics.

Imagine if people on the street were routinely switched off as a VIP passes to avoid any trouble for them.

Imagine a future carbon-reduction law where people are immobilized for an hour or two each day during certain periods. There might be a quota for how long you are allowed to be conscious each week to limit your environmental footprint.

In war, captives could have devices implanted to make them easy to control, simply turned off for packing and transport to a prison camp. A perimeter fence could be replaced by a line in the sand. If a prisoner tries to cross it, they are rendered unconscious automatically and put back where they belong.

Imagine a higher class of mugger that doesn’t like violence much and prefers to switch victims off before stealing their valuables.

Imagine being able to switch off for a few hours to pass the time on a long haul flight. Airlines could give discounts to passengers willing to be disabled and therefore less demanding of attention.

Imagine a couple, or a group of friends, or a fetish club, where people can turn each other off at will. Once off, other people can do anything they please with them – use them as dolls, as living statues or as mannequins, posing them, dressing them up. This is not an adult blog, so just use your imagination – it’s pretty obvious what people will do and what sorts of clubs will emerge if an off-switch is feasible, making people into temporary toys.

Imagine if you got an illegal hacking app and could freeze the other people in your vicinity. What would you do?

Imagine if your off-switch is networked and someone else has a remote control or hacks into it.

Imagine if an AI manages to get control of such a system.

Having an off-switch installed could open a new world of fun, but it could also open up a whole new world for control by the authorities, crime control, censorship or abuse by terrorists and thieves and even pranksters.

Google is wrong. We don’t all want gadgets that predict our needs.

In the early 1990s, lots of people started talking about future tech that would work out what we want and make it happen. A whole batch of new ideas came out – internet fridges, smart waste-baskets, the ability to control your air conditioning from the office or to open and close curtains when you’re away on holiday. Almost 25 years on, we still see just a trickle of prototypes, followed by a tsunami of apathy from the customer base.

Do you want an internet fridge, that orders milk when you’re running out, or speaks to you all the time telling you what you’re short of, or sends messages to your phone when you are shopping? I certainly don’t. It would be extremely irritating. It would crash frequently. If I forget to clean the sensors it won’t work. If I don’t regularly update the software, and update the security, and get it serviced, it won’t work. It will ask me for passwords. If my smart loo notices I’m putting on weight, the fridge will refuse to open, and tell the microwave and cooker too so that they won’t cook my lunch. It will tell my credit card not to let me buy chocolate bars or ice cream. It will be a week before kitchen rage sets in and I take a hammer to it. The smart waste bin will also be covered in tomato sauce from bean cans held in a hundred orientations until the sensor finally recognizes the scrap of bar-code that hasn’t been ripped off. Trust me, we looked at all this decades ago and found the whole idea wanting. A few show-off early adopters want it to show how cool and trendy they are, then they’ll turn it off when no-one is watching.

EDIT: example of security risks from smart devices (this one has since been fixed) http://www.bbc.co.uk/news/technology-28208905

If I am with my best friend, who has known me for 30 years, or my wife, who also knows me quite well, they ask me what I want and discuss options with me. They don’t assume they know best and just decide things. If they did, they’d soon get moaned at. If I don’t want my wife or my best friend to assume they know what I want best, why would I want gadgets to do that?

The first thing I did after checking out my smart TV was to disconnect it from the network so that it won’t upload anything and won’t get hacked or infected with viruses. Lots of people have complained about new TV adverts that control their Xboxes via Kinect voice recognition. The ‘smart’ TV receiver might be switched off as that happens. I am already sick of things turning themselves off without my consent because they think they know what I want.

They don’t know what is best. They don’t know what I want. Google doesn’t either. Its many ideas about giving me lots of information it thinks I want while I am out are also things I will not welcome. Is the future of UI gadgets that predict your needs, as Wired says Google thinks? No, it isn’t. What I want is a really intuitive interface so I can ask for what I want, when I want it. The very last thing I want is an idiot device thinking it knows better than I do.

We are not there yet. We are nowhere near there yet. Until we are, let me make my own decisions. PLEASE!

Future human evolution

I’ve done patches of work on this topic frequently over the last 20 years. It usually features in my books at some point too, but it’s always good to look afresh at anything. Sometimes you see something you didn’t see last time.

Some of the potential future is pretty obvious. I use the word potential, because there are usually choices to be made, regulations that may or may not get in the way, or many other reasons we could divert from the main road or even get blocked completely.

We’ve been learning genetics now for a long time, with a few key breakthroughs. It is certain that our understanding will increase, less certain how far people will be permitted to exploit the potential here in any given time frame. But let’s take a good example to learn a key message first. In IVF, we can filter out embryos that have the ‘wrong’ genes, and use their sibling embryos instead. Few people have a problem with that. At the same time, pregnant women may choose an abortion if they don’t want a child when they discover it is the wrong gender, but in the UK at least, that is illegal. The moral and ethical values of our society are on a random walk though, changing direction frequently. The social assignment of right and wrong can reverse completely in just 30 years. In this example, we saw a complete reversal of attitudes to abortion itself within 30 years, so who is to say we won’t see reversal on the attitude to abortion due to gender? It is unwise to expect that future generations will have the same value sets. In fact, it is highly unlikely that they will.

That lesson likely applies to many technology developments and quite a lot of social ones – such as euthanasia and assisted suicide, both already well into their attitude reversal. At some point, even if something is distasteful to current attitudes, it is pretty likely to be legalized eventually, and hard to ban once the door is opened. There will always be another special case that opens the door a little further. So we should assume that we may eventually use genetics to its full capability, even if it is temporarily blocked for a few decades along the way. The same goes for other biotech, nanotech, IT, AI and any other transhuman enhancements that might come down the road.

So, where can we go in the future? What sorts of splits can we expect in the future human evolution path? It certainly won’t remain as just plain old homo sapiens.

I drew this evolution path a long time ago in the mid 1990s:

human evolution 1

It was clear even then that we could connect external IT to the nervous system, eventually the brain, and that this would lead to IT-enhanced senses, memory, processing and higher intelligence, hence homo cyberneticus. (No point in having had to suffer Latin at school if you aren’t allowed to get your own back on it later.) Meanwhile, genetic enhancement and optimization of selected features would lead to homo optimus. Converging these two – why should you have to choose, why not have a perfect body and an enhanced mind? – you get homo hybridus. Meanwhile, in the robots and AI world, machine intelligence is increasing, and eventually we get the first self-aware AI/robot (it makes little sense to separate the two, since networked AI can easily be connected to a machine such as a robot), and this has its own evolution path towards a rich diversity of different kinds of AI and robots, robotus multitudinus. Since both the AI world and the human world could be networked to the same network, it is then easy to see how they could converge, to give homo machinus. This future transhuman would have any of the abilities of humans and machines at its disposal, and eventually the ability to network minds into a shared consciousness. A lot of ordinary conventional humans would remain, but with safe upgrades available, I called them homo sapiens ludditus. As they watch their neighbors getting all the best jobs, winning at all the sports, buying everything, and getting the hottest dates too, many would be tempted to accept the upgrades, and homo sapiens might gradually fizzle out.

My future evolution timeline stayed like that for several years. Then in the early 2000s I updated it to include later ideas:

human evolution 2

I realized that we could still add AI into computer games long after it becomes comparable with human intelligence, so games like EA’s The Sims might evolve to allow entire civilizations living within a computer game, each aware of their existence, each running just as real a life as you and I. It is perhaps unlikely that we would allow children any time soon to control fully sentient people within a computer game, acting as some sort of a god to them, but who knows, future people will argue that they’re not really real people so it’s OK. Anyway, you could employ them in the game to do real knowledge work, and make money, like slaves. But since you’re nice, you might do an incentive program for them that lets them buy their freedom if they do well, letting them migrate into an android. They could even carry on living in their Sims home and still wander round in our world too.

Emigration from computer games into our world could be high, but the reverse is also possible. If the mind is connected well enough, and enhanced so far by external IT that almost all of it runs on the IT instead of in the brain, then when your body dies, your mind would carry on living. It could live in any world, real or fantasy, or move freely between them. (As I explained in my last blog, it would also be able to travel in time, subject to certain very expensive infrastructural requirements.) As well as migrants coming via the electronic immortality route, it is likely that some people who are unhappy in the real world would prefer to end it all and migrate their minds into a virtual world where they might be happy. As an alternative to suicide, I can imagine that would be a popular route. If they feel better later, they could even come back, using an android. So we’d have an interesting future with lots of variants of people, AI and computer game and fantasy characters migrating among various real and imaginary worlds.

But it doesn’t stop there. Meanwhile, back in the biotech labs, progress is continuing to harness bacteria to make components of electronic circuits (after which the bacteria are dissolved to leave the electronics). Bacteria can also have genes added to emit light or electrical signals. They could later be enhanced so that as well as being able to fabricate electronic components, they could power them too. We might add various other features too, but eventually, we’re likely to end up with bacteria that contain electronics and can connect to other bacteria nearby that contain other electronics to make sophisticated circuits. We could obviously harness self-assembly and self-organisation, which are also progressing nicely. The result is that we will get smart bacteria, collectively making sophisticated, intelligent, conscious entities of a wide variety, with lots of sensory capability distributed over a wide range. Bacteria Sapiens.

I often talk about smart yogurt using such an approach as a key future computing solution. If it were to stay in a yogurt pot, it would be easy to control. But it won’t. A collective bacterial intelligence such as this could gain a global presence, and could exist in land, sea and air, maybe even in space. Allowing lots of different biological properties could allow colonization of every niche. In fact, the first few generations of bacteria sapiens might be smart enough to design their own offspring. They could probably buy or gain access to equipment to fabricate them and release them to multiply. It might be impossible for humans to stop this once it gets to a certain point. Accidents happen, as do rogue regimes, terrorism and general mad-scientist type mischief.

And meanwhile, we’ll also be modifying nature. We’ll be genetically enhancing a wide range of organisms, bringing some back from extinction, creating new ones, adding new features, changing even some of the basic mechanism by which nature works in some cases. We might even create new kinds of DNA or develop substitutes with enhanced capability. We may change nature’s evolution hugely. With a mix of old and new and modified, nature evolves nicely into Gaia Sapiens.

We’re not finished with the evolution chart though. Here is the next one:

human evolution 3

Just one thing is added. Homo zombius. I realized eventually that the sci-fi ideas of zombies being created by viruses could be entirely feasible. A few viruses, bacteria and other parasites can affect the brains of the victims and change their behaviour to harness them for their own life cycle.

See http://io9.com/12-real-parasites-that-control-the-lives-of-their-hosts-461313366 for fun.

Bacteria sapiens could be highly versatile. It could make virus variants if need be. It could evolve itself to be able to live in our bodies, maybe penetrate our brains. Bacteria sapiens could make tiny components that connect to brain cells and intercept signals within our brains, or put signals back in. It could read our thoughts, and then control our thoughts. It could essentially convert people into remote-controlled robots, or zombies as we usually call them. They could even control muscles directly up to a point, so even if the zombie is decapitated, it could carry on for a short while. I used that as part of my storyline in Space Anchor. If future humans have widespread availability of cordless electricity, as they might, then it is far-fetched but possible that headless zombies could wander around for ages, using the bacterial sensors to navigate. Homo zombius would be mankind enslaved by bacteria. Hopefully just a few people, but it could be everyone if we lose the battle. Think how difficult a war against bacteria would be, especially if they can penetrate anyone’s brain and intercept thoughts. The Terminator films look a lot less scary when you compare the Terminator with the real potential of smart yogurt.

Bacteria sapiens might also need to be consulted when humans plan any transhuman upgrades. If they don’t consent, we might not be able to do other transhuman stuff. Transhumans might only be possible if transbacteria allow it.

Not done yet. I wrote a couple of weeks ago about fairies. I suggested fairies are entirely feasible future variants that would be ideally suited to space travel.

Fairies will dominate space travel

They’d also have lots of environmental advantages as well as most other things from the transhuman library. So I think they’re inevitable. So we should add fairies to the future timeline. We need a revised timeline and they certainly deserve their own branch. But I haven’t drawn it yet, hence this blog as an excuse. Before I do and finish this, what else needs to go on it?

Well, time travel in cyberspace is feasible and attractive beyond 2075. It’s not the proper real world time travel that isn’t permitted by physics, but it could feel just like that to those involved, and it could go further than you might think. It certainly will have some effects in the real world, because some of the active members of the society beyond 2075 might be involved in it. It certainly changes the future evolution timeline if people can essentially migrate from one era to another (there are some very strong caveats applicable here that I tried to explain in the blog, so please don’t misquote me as a nutter – I haven’t forgotten basic physics and logic, I’m just suggesting a feasible implementation of cyberspace that would allow time travel within it. It is really a cyberspace bubble that intersects with the real world at the real time front so doesn’t cause any physics problems, but at that intersection, its users can interact fully with the real world and their cultural experiences of time travel are therefore significant to others outside it.)

What else? OK, well there is a very significant community (many millions of people) that engages in all sorts of fantasy in shared online worlds, chat rooms and other forums. Fairies, elves, assorted spirits, assorted gods, dwarves, vampires, werewolves, assorted furry animals, assorted aliens, dolls, living statues, mannequins, remote-controlled people, assorted inanimate but living objects, plants and of course assorted robot/android variants are just some of those that already exist in principle; I’m sure I’ve forgotten some here, and anyway, many more are invented every year, so an exhaustive list would quickly become out of date. In most cases, many people already role-play these with a great deal of conviction and imagination, not just in standalone games, but in communities, with rich cultures, back-stories and story-lines. We know there is strong demand, so we’re only waiting for implementation once technology catches up, and it certainly will.

Biotech can do a lot, and nanotech and IT can add greatly to that. If you can design any kind of body with almost any kind of properties and constraints and abilities, and add any kind of IT and sensing and networking and sharing and external links for control and access and duplication, we will have an extremely rich diversity of future forms with an infinite variety of subcultures, cross-fertilization, migration and transformation. In fact, I can’t add just a few branches to my timeline. I need millions. So instead I will just lump all these extras into a huge collected category that allows almost anything, called Homo Whateverus.

So, here is the future of human (and associates) evolution, for the next 150 years. A few possible cross-links are omitted for clarity.

[Image: evolution timeline]

I won’t be around to watch it all happen. But a lot of you will.

Time Travel: Cyberspace opens a rift in the virtual time-space continuum

Dr Who should have written this but he didn’t so I have to. We keep seeing those cute little tears in space-time in episodes of the BBC’s Dr Who, that let through Daleks and Cybermen and other nasties. (As an aside, how come feminists never seem to object to the term Cybermen, even though 50% of them are made from women?). Dr Who calls them rifts, and it allegedly needs the energy of entire star systems to open and close them. So, not much use as a weapon then, but still a security issue if our universe leaks.

Sci-fi authors have recognized the obvious dangers of time-space rifts for several decades. They cause problems with causality as well. I got a Physics degree a long time ago (well, Applied Mathematics and Theoretical Physics, but all the maths was EM theory, quantum mechanics and relativity, so it was really a physics degree), but I have never really understood fully why causality is such a big deal. Sure it needs a lot of explaining if it fails, but why would an occasional causal error cause such a huge problem? The Daleks are far more worrying. **Politically incorrect joke censored**

I just wrote about time travel again. All competent physicists rightly switch on their idiot filters automatically on hearing any of the terms ‘cold fusion’, ‘telekinetic’, ‘psychic’, ‘perpetual motion machine’, ‘time travel’ or ‘global warming catastrophe’. Sorry, that last one just sort of crept in there. Time travel is not really possible, unless you’re inside a black hole or you’re talking about a particle shifting attoseconds in a huge accelerator or GPS relativistic corrections or something. A Tardis isn’t going to be here any time soon and may well be impossible. However, there is a quite real cyberspace route to quite real time travel that will become feasible around 2075, a virtual rift if you like, but no need to activate idiot filters just yet, it’s only a virtual rift, a rift in a sandbox effectively, and it won’t cause the universe to collapse or violate any known laws of physics. So, hit the temporary override button on your idiot filter. It’s a fun thought experiment that gets more and more fun the more you look at it. (Einstein famously used thought experiments to investigate relativity, because he couldn’t do any real experiments with the technology of his time. We can’t verify this sort of time travel experimentally yet so a thought experiment is the only mechanism available. Sadly, I don’t have Einstein’s brain to hand, but some aspects at least are open to the rest of us to explore.) The hypothesis here is that if you can make a platform that stores the state of all the minds in a system continuously over a period from A to B, and that runs all those minds continuously using a single editable record, then you can travel in time freely between A and B. Now we need to think it through a bit to test the hypothesis and see what virtual physics we can learn from it, see how real it would be and what it would need and lead to.

I recognized on my first look at it in

The future of time travel: cheat

that cyberspace offers a time travel cheat. The basic idea, to save you reading it now that it’s out of date, is that some time soon after 2050 – let’s take 2075 as the date that crowd-funding enables its implementation – we’ll all be able to connect our brains so well to the machine world that it will be possible to share thoughts and consciousness, sensations, effectively share bodies, live electronically until all the machines stop working, store your mind as a snapshot periodically in case you want to restore to an earlier backup and do all sorts of really fun things like swapping personalities. (You can see why it might attract the required funding so might well become real.) If that recording of your mind is complete enough, and it could be, then you really could go back to an earlier state of yourself. More importantly, a future time tourist could access all the stored records, create an instance of your mind and chat and interact with you from the future. This would allow future historians to do history better. Well, that’s the basic version. Our thought experiment version needs to go a bit further than that. Let’s call it the deluxe version.

If you implement the deluxe version, then minds run almost entirely on the machine world platform, and are hosted there with frequent restore points. The current state of the system is an interactive result of real-time running of all the minds held in cyberspace across the whole stored timeline. For those minds running on the deluxe version platform, there isn’t any other reality. That’s what makes up those future humans and AIs on it. Once you join the system, you can enjoy all of the benefits above and many more.

You could actually change old records and use the machines to ripple the full system-wide consequences all the way through the timeline to whenever your future today is. It would allow you to go back to visit your former self and do some editing, wouldn’t it? And in this deluxe version, the edits you make would ripple through into your later self. That’s what you get when you migrate the human mind from the Mk1 human brain platform into the machine world platform. It becomes endlessly replicable and editable. In this deluxe version, the future world really could be altered by editing the past. You may reasonably ask why we would allow any moron to build that, but that won’t affect the theoretical ability to travel in time through cyberspace.

It is very easy to see how such a system allows you to chat with someone in the past. What is less obvious, and what my excuse for a brain missed first time round, is that it also lets you travel forwards in time. How, you may reasonably ask, can you access and edit records that don’t exist yet? Well, think of it from the other direction. Someone in the future can restore any previous instance of you from any time point and talk to them, even edit them. They could do that all in some sort of time-play sandbox to save money and avoid quite a few social issues, or they could restore you fully to their time, and since the reality is just real-time emulation all rippled through nicely by the machine platform, you would suddenly appear in the future and become part of that future world. You could wander around in a future android body and do physical things in that future physical world just as if you’d always lived there. Your future self would feel they have travelled in time. But a key factor here is that it could be your future self that makes it happen. You could make a request in 2075 to your future self to bring you to the future in 2150. When 2150 arrives, you see (or might even remember) the request, you go into the archives, and you restore your old 2075 self to 2150, then you instruct deletion of all the records between 2075 and 2150 and then you push the big red button. The system runs all the changes and effects through the timeline, and the result is that you disappear in 2075, and suddenly reappear in 2150.
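The bookkeeping in that 2075-to-2150 scenario – per-period snapshots, deletion of the intervening records, then a ripple of consequences – can be sketched as a toy model. To be clear, everything here (the `Platform` class, one snapshot per year, a mind reduced to a list of memories) is my own illustrative invention for the thought experiment, not a real design:

```python
# Toy model of the 'deluxe version' platform: minds live entirely in the
# system's editable record, one snapshot per year. The class and data
# layout are invented purely for illustration.

class Platform:
    def __init__(self):
        self.timeline = {}  # year -> mind state; this IS the official reality

    def run_year(self, year, state):
        # Stand-in for a year of lived experience: the mind simply
        # accumulates a memory of that year.
        new = {"memories": state["memories"] + [f"lived {year}"]}
        self.timeline[year] = new
        return new

    def travel(self, from_year, to_year):
        # Forward travel as described: restore the from_year instance at
        # to_year, delete the records in between, push the big red button.
        traveller = {"memories": self.timeline[from_year]["memories"]
                     + [f"vanished in {from_year}, reappeared in {to_year}"]}
        for y in range(from_year + 1, to_year):
            self.timeline.pop(y, None)  # those years never 'happened'
        self.timeline[to_year] = traveller
        return traveller

p = Platform()
state = {"memories": []}
for y in range(2075, 2151):
    state = p.run_year(y, state)

me = p.travel(2075, 2150)
# The working record now shows a clean jump: 2076-2149 are gone, and the
# 2150 instance remembers only 2075 plus the trip itself.
```

Running it leaves only 2075 and 2150 in the timeline, which is exactly the ‘official system reality’ point: whatever sits in the working record is what happened; everything else is just backups.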

There would be backups of the alternative timeline, but the official and effective system reality would be that you travelled from 2075 to 2150. That will be the reality running on the deluxe system. Any other realities are just backups and records on a database. Now, so far it’s a one-way trip; far better if you can have a quick trip to the future and come back. So, you’re in 2150, suppose you want to go back again. You’ve been around a while and don’t like the new music or the food or something. So before you go, you do the usual time mischief. You collect lots of really useful data about how all the latest tech works, buy the almanacs of who wins what, just like in Back to the Future, just in case the system has bugs that let you use them, and you tweak the dials again. You set the destination to 2075 and hit the big red button. The system writes your new future-wise self over your original 2075 entry, keeping a suitable backup of course. The entry used by the deluxe system is whatever is written in its working record, and that is the you that went to 2150 and back. Any other realities are just backups. So, the system ripples it all through the timeline. You start the day in 2075, have a quick trip for a week’s holiday in 2150, and then return a few minutes later. Your 2075 self will have experienced a trip to 2150 and come back, complete with all the useful data about the 2150 world. If you don’t mess with anything else, you will remember that trip until 2150, at which time you’ll grab a few friends and chat about the first time you ever did time travel.

All of the above is feasible theoretically, and none of it violates any known physics. The universe won’t collapse in a causality paradox bubble rift if you do it, no need to send for Dr Who. That doesn’t mean it is without issues. It still creates a lot of the time travel issues we are so familiar with from sci-fi. But this one isn’t sci-fi – we could build it, and we could get the crowd-funding to make it real by 2075. Don’t get too excited yet though.

You could have gone further into the future than 2150 too, but there is a limit. You can only go as far as there exists a continuous record from where you are. You basically need a road that goes all the way there. If some future authority bans time travel or changes to an incompatible system, that represents a wall you can’t pass through. An even later authority could only remove that wall under certain circumstances, and only if they have the complete records, and the earlier authority might have stopped storing them or even deleted earlier ones and that would ruin any chances of doing it properly.

So, having established that it is possible, we have to ask the more serious question: how real is this time travel? Is it just a cyberspace trick with no impact on the real world? Well, in this scenario, your 2075 mind runs on the deluxe system using its 2075 record. But which one, the old one or the edited one? The edited one of course. The old version is overwritten and ceases to exist except as a backup. There remains no reality except the one you did your time travel trip in. Your time trip is real. But let’s ask a few choice questions, because reality can turn out to be just an illusion sometimes.

So, when you get home to 2075, you can print off your 2150 almanac and brag about all the new technologies you just invented from 2150. Yes?

Yes… if you implement the deluxe version.

Is there a causality paradox?

No.

Will the world end?

No.

But you just short-circuited technology development from 2075 to 2150?

Yes.

So you can do real time travel from 2075? You’ll suddenly vanish from 2075, spend some time in 2150, and later reappear in 2075?

Yes, if you implement the deluxe version.

Well, what happens in 2150?

You’ll do all the pushing red button stuff and have a party with your friends to remember your first time trip. If you set the times right, you could even invite your old self from 2075 as a guest and wave goodbye as you* goes back to 2075.

Or you* could stay in 2150 and there’d be two of you from then on?

Yes

OK, this sounds great fun. So when can we build this super-duper deluxe version that lets you time travel from 2075 to 2150 and go back again?

2150

And what happens to me between 2075 and 2150 while I wait for it to be built?

Well, you invest in the deluxe version, connect into the system, and it starts recording all its subscribers’ minds from then on, and you carry on enjoying life until 2150 arrives. Then you can travel from 2075 to 2150, retrospectively.

Retrospectively?

Well, you can travel from 2075 to whatever date in the future the deluxe system still exists. And your 2075 self will fully experience it as time travel. It won’t feel retrospective.

But you have to wait till that date before you can go there?

Yes. But you won’t remember having to wait, all the records of that will be wiped, you’ll just vanish in 2075 and reappear in 2150 or whenever.

What *insert string of chosen expletives here* use is that?

Erm…. Well…. You will still have enjoyed a nice life from 2075 to 2150 before it’s deleted and replaced.

But I won’t remember that will I?

No. But you won’t remember it when you’re dead either.

So I can only do this sort of time travel by having myself wiped off the system for all the years in between after I’ve done it? So the best way of doing that is not to bother with all the effort of living through all those years since they’re going to be deleted anyway and save all the memory and processing by just hibernation in the archives till that date arrives? So I’ll really vanish in 2075 and be restored in 2150 and feel it as time travel? And there won’t be any messy database records to clean up in between, and it will all be nice and environmentally friendly? And not having to run all those people years that would later be deleted will reduce storage and processing costs and system implementation costs dramatically?

Exactly!

OK, sounds a bit better again. But it’s still a fancy cyberspace hibernation scheme really isn’t it?

Well, you can travel back and forth through time as much as you like and socialize with anyone from any time zone and live in any time period. Some people from 2150 might prefer to live in 2075 and some from 2075 prefer to live in 2150. Everyone can choose when they live or just roam freely through the entire time period. A bit like that episode of Star Trek TOS where they all got sent through a portal to different places and times and mixed with societies made of others who had come the same way. You could do that. A bit like a glorified highly immersive computer game.

But what about gambling and using almanacs from the future? And inventing stuff in 2075 that isn’t really invented till 2150?

All the knowledge and data from 2150 will be there in the 2075 system so you won’t have anything new and gambling won’t be a viable industry. But it won’t be actually there until 2150. So the 2075 database will be a retrospective singularity where all of the future knowledge suddenly appears.

Isn’t that a rift in the time-space continuum, letting all the future weapons and political activists and terrorists and their plans through from 2150 to 2075? And Daleks? Some idiot will build one just for the hell of it. They’ll come through the rift too won’t they. And Cyberpersons?

It will not be without technical difficulties. And anyway, they can’t do any actual damage outside the system.

But these minds running in the system will be connected to android bodies or humans outside it. Their minds can time travel through cyberspace. Can’t they do anything nasty?

No, they can only send their minds back and connect to stuff within the system. Any androids and bodies could only be inhabited by first generation minds that belong to that physical time. They can only make use of androids or other body sharing stuff when they travel forwards through time, because it is their chosen future date where the android lives and they can arrange that. On a journey backwards, they can only change stuff running in the system.

And that’s what stops it violating physics?

Yes

So let’s get this straight. This whole thing is great for extending your mind into cyberspace, sharing bodies, swapping personalities, changing gender or age, sharing consciousness and some other things. But time travel is only possible for your mind that is supported exclusively in the system. And only that bit in the system can time travel. And your actual 2075 body can’t feel the effect at all or do anything about it? So it’s really another you that this all happens to and you start diverging from your other cyber-self the moment you connect. A replica of you enjoys all the benefits but it thinks it is you and feels like you and essentially is you, but not in the real world. And the original you carries on in parallel.

Correct. It is a big cyberspace bubble created over time with continuous timeline emulation, that only lets you time travel and interact within the bubble. Like an alternative universe, and you can travel in time in it. But it can only interact with the physical universe in real time at the furthermost frontier of the bubble. A frontier that moves into the future at the same speed as the rest of the local space-time continuum and doesn’t cause any physics problems or real time paradoxes outside of the system.

So it’s not REAL time travel. It’s just a sort of cyber-sandbox, albeit one that will be good fun and still worth building.

You can time travel in the parallel universe that you make in cyberspace. But it will be real within that universe. Forwards physical time travel is additionally possible in the physical universe if you migrate your mind totally into cyberspace, e.g. when you die, so you can live electronically, and even then it is really just a fancy form of hibernation. And if you travel back in time in the system, you won’t be able to interact with the physical stuff in the past, only what is running on the system. As long as you accept those limitations, you can travel in time after 2075 and live in any period supported after that.

Why do all the good things only ever happen in another universe?

I don’t know.

No physics or mathematics has knowingly been harmed during this thought experiment. No responsibility is accepted for any time-space rifts created as a result of analytical error.

Time – The final frontier. Maybe

It is very risky naming the final frontier. A frontier is just the far edge of where we’ve got to.

Technology has a habit of opening new doors to new frontiers so it is a fast way of losing face. When Star Trek named space as the final frontier, it was thought to be so. We’d go off into space and keep discovering new worlds, new civilizations, long after we’ve mapped the ocean floor. Space will keep us busy for a while. In thousands of years we may have gone beyond even our own galaxy if we’ve developed faster than light travel somehow, but that just takes us to more space. It’s big, and maybe we’ll never ever get to explore all of it, but it is just a physical space with physical things in it. We can imagine more than just physical things. That means there is stuff to explore beyond space, so space isn’t the final frontier.

So… not space. Not black holes or other galaxies.

Certainly not the ocean floor, however fashionable that might be to claim. We’ll have mapped that in detail long before the rest of space. Not the centre of the Earth, for the same reason.

How about cyberspace? Cyberspace physically includes all the memory in all our computers, but also the imaginary spaces that are represented in it. The entire physical universe could be simulated as just a tiny bit of cyberspace, since it only needs to be rendered when someone looks at it. All the computer game environments and virtual shops are part of it too. The cyberspace tree doesn’t have to make a sound unless someone is there to hear it, but it could. The memory in computers is limited, but the cyberspace limits come from imagination of those building or exploring it. It is sort of infinite, but really its outer limits are just a function of our minds.

Games? Dreams? Human Imagination? Love? All very new agey and sickly sweet, but no. Just like cyberspace, these are also all just different products of the human mind, so all of these can be replaced by ‘the human mind’ as a frontier. I’m still not convinced that is the final one though. Even if we extend that to the greatly AI-enhanced future human mind, it still won’t be the final frontier. When we AI-enhance ourselves, and connect to the smart AIs too, we have a sort of global consciousness, linking everyone’s minds together as far as each allows. That’s a bigger frontier, since the individual minds and AIs add up to more cooperative capability than they can achieve individually. The frontier is getting bigger and more interesting. You could explore other people directly, share and meld with them. Fun, but still not the final frontier.

Time adds another dimension. We can’t do physical time travel, and even if we can do so in physics labs with tiny particles for tiny time periods, that won’t necessarily translate into a practical time machine to travel in the physical world. We can time travel in cyberspace though, as I explained in

The future of time travel: cheat

and when our minds are fully networked and everything is recorded, you’ll be able to travel back in time and genuinely interact with people in the past, back to the point where the recording started. You would also be able to travel forwards in time as far as the recording extends and future laws allow (I didn’t fully realise that when I wrote my time travel blog, so I ought to update it, soon). You’d be able to inhabit other peoples’ bodies, share their minds, share consciousness and feelings and emotions and thoughts. The frontier suddenly jumps out a lot once we start that recording, because you can go into the future as far as is continuously permitted. Going into that future allows you to get hold of all the future technologies and bring them back home, short circuiting the future, as long as time police don’t stop you. No, I’m not nuts – if you record everyone’s minds continuously, you can time travel into the future using cyberspace, and the effects extend beyond cyberspace into the real world you inhabit, so although it is certainly a cheat, it is effectively real time travel, backwards and forwards. It needs some security sorted out on warfare, banking and investments, procreation, gambling and so on, as well as a lot of other causality issues, but to quote from Back to the Future: ‘What the hell?’ [IMPORTANT EDIT: in my following blog, I revise this a bit and conclude that although time travel to the future in this system lets you do pretty much what you want outside the system, time travel to the past only lets you interact with people and other things supported within the system platform, not the physical universe outside it. This does limit the scope for mischief.]

So, time travel in fully networked fully AI-enhanced cosmically-connected cyberspace/dream-space/imagination/love/games would be a bigger and later frontier. It lets you travel far into the future and so it notionally includes any frontiers invented and included by then. Is it the final one though? Well, there could be some frontiers discovered after the time travel windows are closed. They’d be even finaller, so I won’t bet on it.

Fairies will dominate space travel

The future sometimes looks ridiculous. I have occasionally written about smart yogurt and zombies and other things that sound silly but have a real place in the future. I am well used to being laughed at, ever since I invented text messaging and the active contact lens, but I am also well used to saying I told you so later. So: Fairies will play a big role in space travel, probably even dominate it. Yes, those little people with wings, and magic wands, that kind. Laugh all you like, but I am right.

To avoid misrepresentation and being accused of being away with the fairies, let’s be absolutely clear: I don’t believe fairies exist. They never have, except in fairy tales of course. Anyone who thinks they have seen one probably just has poor eyesight or an overactive imagination and maybe saw a dragonfly or was on drugs or was otherwise hallucinating, or whatever. But we will have fairies soon. In 50 or 60 years.

In the second half of this century, we will be able to link and extend our minds into the machine world so well that we will effectively have electronic immortality. You won’t have to die to benefit; you will easily do it while remaining fully alive, extending your mind into the machine world, into any enabled object. Some of those objects will be robots or androids, some might well be organic.

Think of the film Avatar, a story based on yesterday’s ideas. Real science and technology will be far more exciting. You could have an avatar like in the film, but that is just the tip of the iceberg when you consider the social networking implications once the mind-linking technology is commoditised and a ubiquitous part of everyday life. There won’t be just one or two avatars used for military purposes like in the film, but millions of people doing that sort of thing all the time.

If an animal’s mind is networked, a human might be able to make some sort of link to it too, again like in Avatar, where the Na’vi link to their dragon-like creatures. You could have remote presence in the animal. That maybe won’t be as fulfilling as being in a human because the animal has limited functionality, but it might have some purpose. Now let’s leave Avatar behind.

You could link AI to an animal to make it comparable with humans so that your experience could be better, and the animal might have a more interesting life too. Imagine chatting to a pet cat or dog and it chatting back properly.

If your mind is networked as well as we think it could be, you could link your mind to other people’s minds, share consciousness, be a part-time Borg if you want. You could share someone else’s sensations, share their body. You could exchange bodies with someone, or rent yours out and live in the net for a while, or hire a different one. That sounds a lot of fun already. But it gets better.

In the same timeframe, we will have mastered genetics. We will be able to design new kinds of organisms with whatever properties chemistry and physics permits. We’ll have new proteins, new DNA bases, maybe some new bases that don’t use DNA. We’ll also have strong AI, conscious machines. We’ll also be able to link electronics routinely to our organic nervous systems, and we’ll also have a wide range of cybernetic implants to increase sensory capability, memory, IQ, networking and so on.

We will be able to make improved versions of the brain that work and feel pretty much the same as the original, but are far, far smaller. Using synthetic electronics instead of organic cells, signals will travel between neurons at light speed, instead of 200m/s, that’s more than a million times faster. But they won’t have to go so far, because we can also make neurons physically far smaller, hundreds of times smaller, so that’s a couple more zeros to play with. And we can use light to interconnect them, using millions of wavelengths, so they could have millions of connections instead of thousands and those connections will be a billion times faster. And the neurons will switch at terahertz speeds, not hundreds of hertz, that’s also billions of times faster. So even if we keep the same general architecture and feel as the Mk1 brain, we could make it a millimetre across and it could work billions of times faster than the original human brain. But with a lot more connectivity and sensory capability, greater memory, higher processing speed, it would actually be vastly superhuman, even as it retains broadly the same basic human nature.
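The factors in that paragraph are easy to sanity-check with some back-of-envelope arithmetic. All the figures below are the rough orders of magnitude used in the text, not measured values:

```python
# Rough orders of magnitude from the paragraph above; none of these
# are precise measured values.
signal_speed_gain = 3.0e8 / 200   # light speed vs ~200 m/s nerve signals
distance_gain = 100               # neurons 'hundreds of times smaller'
switching_gain = 1e12 / 100       # terahertz vs ~hundreds of hertz switching

# Propagation delay improves with both faster signals and shorter paths:
propagation_gain = signal_speed_gain * distance_gain

print(f"signal speed: ~{signal_speed_gain:.1e}x faster")    # over a million
print(f"propagation delay: ~{propagation_gain:.1e}x better")
print(f"switching: ~{switching_gain:.1e}x faster")          # billions
```

The product of any of these per-component gains lands comfortably in the ‘billions of times faster’ range claimed above, even before adding the extra optical interconnect capacity.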

And guess what? It will easily fit in a fairy.

So, around the time that space industry is really taking off, and we’re doing asteroid mining, and populating bases on Mars and Europa, and thinking of going further, and routinely designing new organisms, we will be able to make highly miniaturized people with brains vastly more capable than conventional humans. Since they are small, it will be quite easy to make them with fully functional wings, exactly the sort of advantage you want in a space ship where gravity is in short supply and you want to make full use of a 3D space. Exactly the sort of thing you want when size and mass is a big issue. Exactly the sort of thing you want when food is in short supply. A custom-designed electronic, fully networked brain is exactly the sort of thing you want when you need a custom-designed organism that can hibernate instantly. Fairies would be ideally suited to space travel. We could even design the brains with lots of circuit redundancy, so that radiation-induced faults can be error-corrected and repaired by newly designed proteins.

Wands are easy too. Linking the mind to a stick, and harnessing the millions of years of recent evolution that has taught us how to use sticks is a pretty good idea too. Waving a wand and just thinking what they want to happen at the target is all the interface a space-fairy needs.

This is a rich seam and I will explore it again some time. But for now, you get the idea.

Space-farers will mostly be space fairies.

Too many cooks spoil the broth

Pure rant ahead.

I wasted ages this morning trying to get rid of the automated text fill in the Google user accounts log in box. I accept that there are greater problems in the world, but this one was more irritating at the time. I am very comfortable living with AI, but I do want there to be a big OFF switch wherever it has an effect.

It wanted to log me in as me, in my main account, which is normally fine, but in the interests of holding back 1984, I resented Google ‘helping’ me by automatically remembering who uses my machine, which actually is only me, and filling in the data for me. I clean my machine frequently, and when I clean it, I want there to be no trace of anything on it, I want to have to type in all my data from scratch again. That way I feel safe when I clean up. I know if I have cleaned that no nasties are there sucking up stuff like usernames and passwords or other account details. This looked like it was immune to my normal cleanup.

I emptied all the cookies. No effect. I cleared memory. No effect. I ran CCleaner. No effect. I went into the browser settings and found more places that store stuff, and emptied those too. No effect. I cleaned the browsing history and deleted all the cookies and restarted. No effect. I went to my google account home page and investigated all the settings there. It said all I had to do was hit remove and tick the account that I wanted to remove, which actually doesn’t work if the account doesn’t appear as an option when you do that. It only appeared when I didn’t want it to, and hid when I wanted to remove it. I tried a different browser and jumped through all the hoops again. No effect. I went back into browser settings and unchecked the remember form fill data. No effect. Every time I started the browser and hit sign in, my account name and picture still appeared, just waiting for my password. Somehow I finally stumbled on the screen that let me remove accounts, and hit remove. No effect.

Where was the data? Was it google remembering my IP address and filling it in? Was it my browser and I hadn’t found the right setting yet? Did I miss a cookie somewhere? Was it my PC, with some file Microsoft maintains to make my life easier? Could it be my security software helping by remembering all my critical information to make my life more secure? By now it was becoming a challenge well out of proportion to its original annoyance value.

So I went nuclear. I went to Google Accounts and jumped through the hoops to delete my account totally. I checked by trying to log back in, and couldn’t. My account was definitely, totally deleted. However, the little box still automatically filled in my account name and waited for my password. I entered it and nothing happened, obviously, because the account didn’t exist any more. So now I had deleted my Google account, with my email and Google+, but was still getting the log-in assistance from somewhere. I went back to Google Accounts and investigated the help file. It mentioned yet another helper that could be deactivated, an account login assistant or something. I hit the deactivate button, expecting final victory. No effect. I went back to CCleaner and checked I had all the boxes ticked. I had not selected the password box. I did that, ran it, and hooray, no longer any assistance. CCleaner seems to keep the password data even if you clear form data; that form isn’t a form, it seems. CCleaner is brilliant, I refuse to criticize it, but it didn’t interpret the word form the way I do.
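For the technically inclined, the hunt above amounts to purging several separate data stores one by one. A minimal Python sketch of the idea, assuming Chromium-style profile store file names (Cookies, History, Login Data and Web Data are typical, but check your own browser’s profile directory; this is an illustration, not a supported cleanup tool):

```python
import os

# Typical Chromium-style profile store files (names assumed typical;
# verify against your own browser's profile directory before relying on this).
PROFILE_STORES = ["Cookies", "History", "Login Data", "Web Data"]

def purge_profile(profile_dir, stores=PROFILE_STORES):
    """Delete each known data store file; return the names actually removed."""
    removed = []
    for name in stores:
        path = os.path.join(profile_dir, name)
        if os.path.exists(path):
            os.remove(path)
            removed.append(name)
    return removed
```

The point the sketch makes is the same one the saga makes: there is no single “forget me” file, so missing any one store (in my case, the password store) leaves the assistance intact.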

So now, finally, my PC was clean and Google no longer knew it was me using it. 1984 purged, I then jumped through all the hoops to get my Google account back. I wouldn’t recommend trying that, by the way. I have a Gmail account with all my email dating back to when Gmail came out. Deleting it to test something is probably not a great idea.

The lesson from all this is that there are far too many agencies pretending to look after you now by remembering stuff that identifies you: your PC, your security software, master password files, your cookies, your browser with its form-fill and password data, the account login assistant and of course Google itself. And that is just one company. Forgetting to clear any one of those means you’re still being watched.

Synchronisation multiplies this problem. You have to keep track of all the apps and all their interconnections and interdependencies on all your phones and tablets now too. After the Heartbleed problem, it took me ages to find all the account references on my tablets and clear them. Some can’t be deactivated within an app and require another app to do so. Some apps tell you something is set but can’t change it. It is a nightmare. Someone finding a tablet might get access to a wide range of apps with spending capability. Now that they all sync to each other, it takes ages to remove something so that it doesn’t reappear in some menu, even temporarily. Kindle’s IP protection routine means it regularly tries to sync with books I have downloaded somewhere else, telling me it isn’t allowed to on my tablet. It does that whether I ask it to or not. It even tries to sync with books I long ago deleted and specifically asked it to remove, and still warns me that it doesn’t have permission to download them. I don’t want them, I deleted them, I told it to remove them, and it still says it is trying but can’t download them. Somewhere, on some tick list on some device or website, I forgot to check or uncheck a box, or more likely didn’t even know it existed, and that means forever I have to wait for my machines to jump through unwanted and unnecessary hoops. It is becoming near impossible to truly delete something – unless you want to keep it. There are far too many interconnections and routes to keep track of them all, too many intermediaries, too many tracking markers. We now have far too many different agencies thinking they are responsible for your data, all wanting to help, all falling over each other, getting in your way and making your life difficult.

The old proverb says that too many cooks spoil the broth. We’re there now.


WMDs for mad AIs

We think sometimes about mad scientists and what they might do. It’s fun, makes nice films occasionally, and highlights threats years before they become feasible. That then allows scientists and engineers to think through how they might defend against such scenarios, hopefully making sure they don’t happen.

You’ll be aware that a lot more talk of AI is going on again now. It does seem to be picking up progress finally. If it succeeds well enough, a lot more future science and engineering will be done by AI than by people. If genuinely conscious, self-aware AI, with proper emotions etc becomes feasible, as I think it will, then we really ought to think about what happens when it goes wrong. (Sci-fi computer games producers already do think that stuff through sometimes – my personal favorite is Mass Effect). We will one day have some insane AIs. In Mass Effect, the concept of AI being shackled is embedded in the culture, thereby attempting to limit the damage it could presumably do. On the other hand, we have had Asimov’s laws of robotics for decades, but they are sometimes being ignored when it comes to making autonomous defense systems. That doesn’t bode well. So, assuming that Mass Effect’s writers don’t get to be in charge of the world, and instead we have ideological descendants of our current leaders, what sort of things could an advanced AI do in terms of its chosen weaponry?

Advanced AI

An ultra-powerful AI is a potential threat in itself. There is no reason to expect that an advanced AI will be malign, but there is also no reason to assume it won’t be. High-level AI could have at least the range of personality that we associate with people, with a potentially greater range of emotions or motivations, so we’d have the super-helpful smart scientist type AIs but also perhaps the evil super-villain and terrorist ones.

An AI doesn’t have to intend harm to be harmful. If it wants to do something and we are in the way, even if it has no malicious intent, we could still become casualties, like ants on a building site.

I have often blogged about achieving conscious computers using techniques such as gel computing, and how we could end up in a terminator scenario of the kind favored by sci-fi. This could come about through innocent research, military development or terrorism.

Terminator scenarios are diverse but often rely on AI taking control of human weapons systems. I won’t major on that here because that threat has already been analysed in depth by many people.

Conscious botnets could arrive by accident too – a student prank harnessing millions of bots, even with an inefficient algorithm, might gain enough power to achieve a high level of AI.

Smart bacteria – bacterial DNA could be modified so that bacteria can make electronics inside their cells, and power them. By linking to other bacteria, a massive AI could be achieved.

Zombies

Adding the ability to enter a human nervous system, or to disrupt or capture control of a human brain, could enable enslavement, giving us zombies. Having been enslaved, zombies could easily be linked across the net. The zombie films we watch tend to miss this feature. Zombies in films and games tend to move in herds, but not generally under control or in a very coordinated way. We should assume that real ones would be fully networked, liable to remote control, and able to share sensory systems. They’d be rather smarter and more capable than we’re generally used to. Shooting them in the head might not work as well as people expect either, as their nervous systems wouldn’t really need a local controller, and could just as easily be run by a collective intelligence, though blood loss would eventually cause them to die. To stop a herd of real zombies, you’d basically have to dismember them. More Dead Space than Dawn of the Dead.

Zombie viruses could be made other ways too. It isn’t necessary to use smart bacteria. Genetic modification of viruses, or a suspension of nanoparticles are traditional favorites because they could work. Sadly, we are likely to see zombies result from deliberate human acts, likely this century.

From Zombies, it is a short hop to full evolution of the Borg from Star Trek, along with emergence of characters from computer games to take over the zombified bodies.

Terraforming

Using strong external AI to provide collective adaptability, smart bacteria could colonize many niches, and bacterial-based AI, or AI using bacteria, could engage in terraforming. Attacking many niches that are important to humans or other life would be very destructive. Terraforming a planet you live on is not generally a good idea, but if an organism can inhabit land, sea, air and even space, there is plenty of scope to avoid self-destruction. Fighting bacteria engaged in such a pursuit might be hard. Smart bacteria could spread immunity to toxins or biological threats almost instantly through a population.

Correlated traffic

Information waves and other correlated traffic, such as network resonance attacks, are another way of using networks to collapse economies, taking advantage of the physical properties of the links and protocols rather than using more traditional viruses or denial of service attacks. AIs using smart dust or bacteria could launch signals in perfect coordination from any points on any networks simultaneously. This could push any network into resonant overloads that would likely crash it, and would certainly deprive other traffic of bandwidth.
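The key property of correlated traffic can be shown with a toy queue model: the same average load, delivered as synchronised bursts rather than uncorrelated arrivals, overflows a finite buffer that smooth traffic never troubles. A minimal sketch (all parameters invented for illustration):

```python
import random

def simulate(arrivals, capacity=10, buffer_size=50):
    """Feed per-tick arrival counts through a FIFO link; count dropped packets."""
    queue = dropped = 0
    for a in arrivals:
        queue += a
        if queue > buffer_size:          # buffer overflows, excess is dropped
            dropped += queue - buffer_size
            queue = buffer_size
        queue = max(0, queue - capacity)  # link serves up to `capacity` per tick
    return dropped

ticks = 1000
random.seed(1)
# Same average load (~8 packets/tick), two traffic shapes:
smooth = [random.randint(6, 10) for _ in range(ticks)]     # uncorrelated arrivals
bursty = [80 if t % 10 == 0 else 0 for t in range(ticks)]  # synchronised bursts
```

With these numbers the smooth stream loses nothing, while the bursty stream at identical average load loses 30 packets per burst; coordination, not volume, does the damage.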

Decryption

Conscious botnets could be used to make decryption engines to wreck security and finance systems. Imagine how much more powerful a worldwide collection of trillions of AI-harnessed organisms or devices would be. Invisibly small smart dust and networked bacteria could also pick up most signals well before they are encrypted anyway, since they could be resident on keyboards or on the components and wires within. They could even pick up electrical signals from a person’s scalp and engage in thought recognition, intercepting passwords well before a person’s fingers even move to type them.

Space guns

Solar wind deflector guns are feasible. Ionizing some of the ionosphere would make a reflective surface that could deflect some of the incoming solar wind to make an even bigger reflector, and so on, ending up with an ionospheric lens or reflector that could steer perhaps 1% of the solar wind onto a city. That could generate a high enough energy density to ignite and even melt a large area of city within minutes.
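As a back-of-envelope check on the scale involved: the solar wind’s kinetic energy flux is roughly 0.5·ρ·v³. Using typical quiet-time values (numbers assumed here: ~5 protons/cm³ at ~400 km/s) gives only a fraction of a milliwatt per square metre, which is why a collector of ionospheric scale, thousands of kilometres across, is the kind of structure needed to gather weapon-grade power:

```python
M_P = 1.67e-27  # proton mass, kg

def wind_flux(n_per_cm3=5.0, v_km_s=400.0):
    """Kinetic energy flux of the solar wind, W/m^2: 0.5 * rho * v^3."""
    n = n_per_cm3 * 1e6   # protons per m^3
    v = v_km_s * 1e3      # m/s
    return 0.5 * n * M_P * v**3

def collector_area(power_w, flux_w_m2):
    """Collecting area (m^2) needed to gather a given power from that flux."""
    return power_w / flux_w_m2

flux = wind_flux()                # roughly 2.7e-4 W/m^2
area = collector_area(1e9, flux)  # ~3.7e12 m^2 to gather a gigawatt
```

A gigawatt needs millions of square kilometres of effective collecting area, so only something on the scale of the ionosphere itself could act as the lens.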

This wouldn’t be as easy as using space-based solar farms and directing their energy. Space solar is being seriously considered, but it presents an extremely attractive target for capture because of its potential as a directed energy weapon. The intended design uses microwave beams directed at rectenna arrays on the ground, but it would take good design to prevent the possibility of a takeover.

Drone armies

Drones are already becoming common at an alarming rate, and the range of drone sizes is widening, from large insects to medium-sized planes. The next generation is likely to include permanently airborne drones and swarms of insect-sized drones. The swarms offer interesting potential for WMDs. They can be dispersed and come together on command, making them hard to attack most of the time.

Individual insect-sized drones could build up an electrical charge by a wide variety of means, and could collectively attack individuals, electrocuting or disabling them, as well as overloading or short-circuiting electrical appliances.

Larger drones, such as the ones I discussed in http://carbonweapons.com/2013/06/27/free-floating-combat-drones/, would be capable of much greater damage, and collectively would be virtually indestructible, since each could be broken to pieces by an attack and automatically reassembled without losing capability, using self-organisation principles. A mixture of large and small drones, possibly also using bacteria and smart dust, could present an extremely formidable coordinated attack.

I also recently blogged about the storm router, http://carbonweapons.com/2014/03/17/stormrouter-making-wmds-from-hurricanes-or-thunderstorms/, which would harness hurricanes, tornadoes or electrical storms and divert their energy onto chosen targets.

In my Space Anchor novel, my superheroes have to fight against a formidable AI army that appears as just a global collection of tiny clouds. They do some of the things I highlighted above and come close to threatening human existence. It’s a fun story but it is based on potential engineering.

Well, I think that’s enough threats to worry about for today. Maybe given the timing of release, you’re expecting me to hint that this is an April Fool blog. Not this time. All these threats are feasible.

The internet of things will soon be history

I’ve been a full-time futurologist since 1991, and an engineer working on far-future R&D stuff since I left uni in 1981. It is great seeing a lot of the 1980s dreams about connecting everything together finally starting to become real, although as I’ve blogged recently, some of the grander claims we’re seeing for future home automation are rather unlikely. Yes you can, but you probably won’t, though some people will certainly adopt some of it. Now that most people are starting to get the idea that you can connect things and add intelligence to them, we’re also seeing a lot of overshoot on the importance of the internet of things, which is the generalised form of the same thing.

It’s my job as a futurologist not only to understand that trend (and I’ve been yacking about putting chips in everything for decades) but then to look past it to see what is coming next. If it were here to stay, that would also be an important conclusion, but you know what, it just isn’t. The internet of things will be about as long-lived as most other generations of technology, such as the mobile phone. Do you still have one? I don’t – well I do, but they are all in a box in the garage somewhere. I have a general purpose mobile computer that happens to be a phone as well as dozens of other things. So, probably, do you. The only reason you might still call it a smartphone or an iPhone is that it has to be called something and nobody in the IT marketing industry has any imagination. PDA was a rubbish name and that was the other choice.

You can stick chips in everything, and you can connect them all together via the net. But that capability will disappear quickly into the background and the IT zeitgeist will move on. It really won’t be very long before a lot of the things we interact with are virtual, imaginary. To all intents and purposes they will be there, and will do wonderful things, but they won’t physically exist. So they won’t have chips in them. You can’t put a chip into a figment of imagination, even though you can make it appear in front of your eyes and interact with it. A good topical example of this is the smart watch, all set to make an imminent grand entrance. Smart watches are struggling to solve battery problems, and they’ll be expensive too. They don’t need batteries if they are just images, and a fully interactive image of a hugely sophisticated smart watch could also be made free, as one of a million things done by a free app. The smart watch’s demise is already inevitable. The energy it takes to produce an image on the retina is a great deal less than the energy needed to power a smart watch on your wrist, and the cost is a few seconds of your time to explain to an AI how you’d like your wrist to be accessorised, rather fewer seconds than you’d have spent choosing something that costs a lot. In fact, the energy needed for direct retinal projection and associated comms is far less than can be harvested easily from your body or the environment, so there is no battery problem to solve.

If you can do that with a smart watch, making it just an imaginary item, you can do it to any kind of IT interface. You only need to see the interface, the rest can be put anywhere, on your belt, in your bag or in the IT ether that will evolve from today’s cloud. My pad, smartphone, TV and watch can all be recycled.

I can also do loads of things with imagination that I can’t do for real. I can have an imaginary wand. I can point it at you and turn you into a frog. Then in my eyes, the images of you change to those of a frog. Sure, it’s not real, you aren’t really a frog, but you are to me. I can wave it again and make the building walls vanish, so I can see the stuff on sale inside. A few of those images could be very real and come from cameras all over the place, the chips-in-everything stuff, but actually I am not much interested in the local physical reality of a shop; what I am far more interested in is what I can buy, and I’ll be shown those things, in ways that appeal to me, whether they’re physically there or on Amazon Virtual. So 1% is chips-in-everything and 99% is imaginary, virtual: some visual manifestation of my profile, Amazon Virtual’s AI systems, how my own AI knows I like to see things, and a fair bit of other people’s imagination to design the virtual decor, the nice presentation options, the virtual fauna and flora making it more fun, and countless other intermediaries and extramediaries, or whatever you call all those others that add value and fun to an experience without actually getting in the way. All just images directly projected onto my retinas. Not so much chips-in-everything as no chips at all, except a few sensors, comms and an infinitesimal timeshare of a processor and storage somewhere.

A lot of people dismiss augmented reality as an irrelevant passing fad. They say video visors and active contact lenses won’t catch on because of privacy concerns (and I’d agree that is a big issue that needs to be discussed and sorted, but it will be discussed and sorted). But when you realise that what we’re going to get isn’t just an internet of things, but a total convergence of physical and virtual, a coming together of real and imaginary, an explosion of human creativity, a new renaissance, a realisation of your and everyone else’s wildest dreams as part of your everyday reality; when you realise that, then the internet of things suddenly starts to look more than just a little bit boring, part of the old days when we actually had to make stuff, and you had to have the same as everyone else, and it all cost a fortune and needed charging up all the time.

The internet of things is only starting to arrive. But it won’t stay for long before it hides in the cupboard and disappears from memory. A far, far more exciting future is coming up close behind. The world of creativity and imagination. Bring it on!

Automation and the London tube strike

I was invited on the BBC’s Radio 4 Today Programme to discuss automation this morning, but on Radio 4, studio audio quality is a higher priority than content quality, while quality of life for me is a higher priority than radio exposure, and going into Ipswich greatly reduces my quality of life. We amicably agreed they should find someone else.

There will be more automation in the future. On one hand, if we could totally automate every single job right now, all the same work would be done, so the world would still have the same overall wealth, but then we’d all be idle so our newly free time could be used to improve quality of life, or lie on beaches enjoying ourselves. The problem with that isn’t the automation itself, it is mainly the deciding what else to do with our time and establishing a fair means of distributing the wealth so it doesn’t just stay with ‘the mill owners’. Automation will eventually require some tweaks of capitalism (I discuss this at length in my book Total Sustainability).

We can’t and shouldn’t automate every job. Some jobs are dull and boring or reduce the worker to too low a level of dignity, and those should be automated as far as we economically can – that is, without creating a greater problem elsewhere. Some jobs provide people with a huge sense of fulfillment or pleasure, and we ought to keep them and create more like them. Most jobs are in between, and their situation is rather more complex. Jobs give us something to do with our time. They provide us with social contact. They stop us hanging around on the streets picking fights, or finding ways to demean ourselves or others. They provide dignity, status, self-actualisation. They provide a convenient mechanism for wealth distribution. Some provide stimulation, or exercise, or supervision. All of these factors add to the value of jobs above the actual financial value add.

The London tube strike illustrates one key factor in the social decision on which jobs should be automated. The tube provides an essential service that affects a very large number of people and all their interests should be taken into account.

The impact of potential automation on individual workers in the tube system is certainly important and we shouldn’t ignore it. It would force many of them to find other jobs, albeit in an area with very low unemployment and generally high salaries. Others would have to change to another role within the tube system, perhaps giving assistance and advice to customers instead of pushing buttons on a ticket machine or moving a lever back and forth in a train cab. I find it hard to see how pushing buttons can offer the same dignity or human fulfillment as directly helping another person, so I would consider that sort of change positive, apart from any potential income drop and its onward consequences.

On the other hand, the cumulative impacts on all those other people affected are astronomically large. Many people would have struggled to get to work. Many wouldn’t have bothered. A few would suffer health consequences due to the extra struggle or stress. Perhaps a few small businesses on the edge of survival will have been killed. Some tourists won’t come back, and a lot will spend less. A very large number of businesses and individuals will suffer significantly to let the tube staff make a not very valid protest.

The interests of a small number of people shouldn’t be ignored, but neither should the interests of a large number of people. If these jobs are automated, a few staff would suffer significantly, most would just move on to other jobs, but the future minor miseries caused to millions would be avoided.

Other jobs that should be automated are those where staff are given undue power or authority over others. Most of us will have had bad experiences of jobsworth staff, perhaps including ticketing staff, whose personal attitude is rather less than helpful and whose replacement by a machine would make the world a better place. A few people sadly seem to relish their power to make someone else’s life more difficult. I am pleased to see widespread automation of check-in at airports for that reason too. There were simply too many check-in assistants who gleefully stood in front of big notices saying that rudeness and abuse would not be tolerated from customers, while happily abusing their own customers, creating maximum inconvenience and grief through a jobsworth attitude or couldn’t-care-less incompetence. Where a job puts people in a position of power or authority, and offers the sort of opportunities for sadistic self-actualisation some people get by making other people’s lives worse, there is a strong case for automation to remove the temptation to abuse that power or authority.

As artificial intelligence and robotics increase in scope and ability, many more jobs will be automated, but more often it will affect parts of jobs. Increasing productivity isn’t a bad thing, nor is up-skilling someone to do a more difficult and fulfilling job than they could otherwise manage. Some parts of any job are dull, and we won’t miss them, if they are replaced by more enjoyable activity. In many cases, simple mechanical or information processing tasks will be replaced by those involving people skills, emotional skills. By automating these bits where we are essentially doing machine work, high technology forces us to concentrate on being human. That is no bad thing.

While automation moves people away from repetitive, boring, dangerous or low-dignity tasks, or those that give people too much opportunity to cause problems for others, I am all in favour. Those jobs together don’t add up to enough to cause major economic problems. We can find better work for those concerned.

We need to guard against automation going too far though. When jobs are automated faster than new equivalent or better jobs can be created, then we will have a problem. Not from the automation itself, but as a result of the unemployment, the unbalanced wealth distribution, and all the social problems that result from those. We need to automate sustainably.

Human + machine is better than human alone, but human alone is probably better than machine alone.

Home automation. A reality check.

Home automation is much in the news at the moment now that companies are making the chips-with-everything kit and the various apps.

Like 3D, home automation comes and goes. Superficially it is attractive, but the novelty wears thin quickly. It has been possible since the 1950s to automate a home. Bill Gates notably built a hugely expensive automated home 20 years ago. There are rarely any new ideas in the field, just a lot of recycling and minor tweaking. Way back in 2000, I wrote what was even then just a recycled summary blog-type piece for my website, bringing together a lot of already well-worn ideas. And yet it could easily have come from this year’s papers. Here it is; go to the end of the italicised text for my updating commentary:

Chips everywhere

 August 2000

 The chips-with-everything lifestyle is almost inevitable. Almost everything can be improved by adding some intelligence to it, and since the intelligence will be cheap to make, we will take advantage of this potential. In fact, smart ways of doing things are often cheaper than dumb ways, a smart door lock may be much cheaper than a complex key based lock. A chip is often cheaper than dumb electronics or electromechanics. However, electronics no longer has a monopoly of chip technology. Some new chips incorporate tiny electromechanical or electrochemical devices to do jobs that used to be done by more expensive electronics. Chips now have the ability to analyse chemicals, biological matter or information. They are at home processing both atoms and bits.

 These new families of chips have many possible uses, but since they are relatively new, most are probably still beyond our imagination. We already have seen the massive impact of chips that can do information processing. We have much less intuition regarding the impact in the physical world.

 Some have components that act as tiny pumps to allow drugs to be dispensed at exactly the right rate. Others have tiny mirrors that can control laser beams to make video displays. Gene chips have now been built that can identify the presence of many different genes, allowing applications from rapid identification to estimation of life expectancy for insurance reasons. (They are primarily being used to tell whether people have a genetic disorder so that their treatment can be determined correctly.)

 It is easy to predict some of the uses such future chips might have around the home and office, especially when they become disposably cheap. Chips on fruit that respond to various gases may warn when the fruit is at its best and when it should be disposed of. Other foods might have electronic use-by dates that sound an alarm each time the cupboard or fridge is opened close to the end of their life. Other chips may detect the presence of moulds or harmful bacteria. Packaging chips may have embedded cooking instructions that communicate directly with the microwave, or may contain real-time recipes that appear on the kitchen terminal and tell the chef exactly what to do, and when. They might know what other foodstuffs are available in the kitchen, or whether they are in stock locally and at what price. Of course, these chips could also contain pricing and other information for use by the shops themselves, replacing bar codes and the like and allowing the customer just to put all the products in a smart trolley and walk out, debiting their account automatically. Chips on foods might react when the foods are in close proximity, warning the owner that there may be odour contamination, or that these two could be combined well to make a particularly pleasant dish. Cooking by numbers. In short, the kitchen could be a techno-utopia or nightmare depending on taste.
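Updating that with a modern sketch: the electronic use-by date could be as simple as a stored date plus a rule evaluated whenever the fridge or cupboard door opens. A toy illustration (class name and thresholds invented for illustration, not any real packaging standard):

```python
from datetime import date, timedelta

class FoodChip:
    """Toy model of a packaging chip holding a use-by date."""
    def __init__(self, name, use_by, warn_days=2):
        self.name = name
        self.use_by = use_by          # the chip's embedded use-by date
        self.warn_days = warn_days    # how close to the date to start warning
    def on_door_open(self, today):
        """Return an alert string when near or past the use-by date, else None."""
        if today > self.use_by:
            return f"{self.name}: past use-by date"
        if self.use_by - today <= timedelta(days=self.warn_days):
            return f"{self.name}: use within {(self.use_by - today).days} day(s)"
        return None
```

The fridge would simply poll every chip it can hear on each door opening and sound the alarm for any non-empty response.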

 Mechanical switches can already be replaced by simple sensors that switch on the lights when a hand is waved nearby, or when someone enters a room. In future, switches of all kinds may be rather more emotional, glowing, changing colour or shape, trying to escape, or making a noise when a hand gets near, to make them easier or more fun to use. They may respond to gestures or voice commands, or eventually infer what they are to do from something they pick up in conversation. Intelligent emotional objects may become very commonplace. Many devices will act differently according to the person making the transaction. A security device might allow one person entry, while phoning the police if the caller is a known burglar. Others may receive a welcome message or be put in videophone contact with a resident, either in the house or away.

 It will be possible to burglar proof devices by registering them in a home. They could continue to work while they are near various other fixed devices, maybe in the walls, but won’t work when removed. Moving home would still be possible by broadcasting a digitally signed message to the chips. Air quality may be continuously analysed by chips, which would alert to dangers such as carbon monoxide, or excessive radiation, and these may also monitor for the presence of bacteria or viruses or just pollen. They may be integrated into a home health system which monitors our wellbeing on a variety of fronts, watching for stress, diseases, checking our blood pressure, fitness and so on. These can all be unobtrusively monitored. The ultimate nightmare might be that our fridge would refuse to let us have any chocolate until the chips in our trainers have confirmed that we have done our exercise for the day.
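The burglar-proofing scheme described, a device that works only in the presence of its registered home and can be re-homed only by a digitally signed broadcast, can be sketched with an HMAC standing in for the digital signature (key, beacon names and message format are all invented for illustration; a real scheme would use proper public-key signatures and provisioning):

```python
import hmac
import hashlib

HOME_KEY = b"example-shared-secret"  # hypothetical key provisioned at registration

def sign(message: bytes, key: bytes = HOME_KEY) -> str:
    """HMAC-SHA256 tag standing in for a digital signature."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

class RegisteredDevice:
    """Toy model: operates only while it can hear one of its home beacons,
    unless re-homed by a correctly signed broadcast."""
    def __init__(self, home_beacons, key=HOME_KEY):
        self.home_beacons = set(home_beacons)
        self.key = key
    def can_operate(self, visible_beacons):
        return bool(self.home_beacons & set(visible_beacons))
    def rehome(self, new_beacons, signature):
        msg = ",".join(sorted(new_beacons)).encode()
        if hmac.compare_digest(signature, sign(msg, self.key)):
            self.home_beacons = set(new_beacons)
            return True
        return False
```

A stolen device hears none of its home beacons and refuses to work, while a legitimate house move is just one signed message naming the new beacons.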

 Some chips in our home would be mobile, in robots, and would have a wide range of jobs from cleaning and tidying to looking after the plants. Sensors in the soil in a plant pot could tell the robot exactly how much water and food the plant needs. The plant may even be monitored by sensors on the stem or leaves. 

The global positioning system allows chips to know almost exactly where they are outside, and in-building positioning systems could allow positioning down to millimetres. Position dependent behaviour will therefore be commonplace. Similarly, events can be timed to the precision of atomic clock broadcasts. Response can be super-intelligent, adjusting appropriately for time, place, person, social circumstances, environmental conditions, anything that can be observed by any sort of sensor or predicted by any sort of algorithm. 

With this enormous versatility, it is very hard to think of anything where some sort of chip could not make an improvement. The ubiquity of the chip will depend on how fast costs fall and how valuable a task is, but we will eventually have chips with everything.

So that was pretty much everyday thinking in the IT industry in 2000. The articles I’ve read recently mostly aren’t all that different.

What has changed since is that companies trying to progress it are adding new layers of value-skimming. In my view some at least are big steps backwards. Let’s look at a couple.

Networking the home is fine, but doing so just so that you can remotely adjust the temperature across the network or run a bath from the office is utterly pointless. It adds the extra inconvenience of having to remember access details for an account, regularly updating security details, and having to recover when the company running it loses all your data to a hacker, all for virtually no benefit.

Monitoring what the user does and sending the data back to the supplier company so that they can use it for targeted ads is another huge step backwards. Advertising is already at the top of the list of things we have quite enough of. We need more resources, more food supply, more energy, more of a lot of stuff. More advertising we can do without. It adds costs to everything and wastes our time, without giving anything back.

If a company sells home automation stuff and wants to collect the data on how I use it, and sell that on to others directly or via advertising services, it will sit on their shelf. I will not buy it, and neither will most other people. Collecting the data may be very useful, but I want to keep it, and I don’t want others to have access to it. I want to pay once, and then own it outright with full and exclusive control and data access. I do not want to have to create any online accounts, worry about network security or privacy, or download frequent software updates. I do not want any company nosing into my household, and I absolutely, definitely want no adverts.

Another step backwards is migrating interfaces onto our smartphones or tablets. I have no objection to having that as an optional feature, but I want to retain a full physical switch or control. For several years in BT, I lived in an office with a light that was controlled by a remote control, with no other switch. The remote control had dozens of buttons, yet all it did was turn the light on or off. I don’t want to have to look for a remote control or my phone or tablet in order to turn on a light or adjust temperature. I would much prefer a traditional light switch and thermostat. If they communicate by radio, I don’t care, but they do need to be physically present in the same place all the time.

Automated lights that go on and off as people enter or leave a room are also a step backwards. I have fallen victim once to one in a work toilet. If you sit still for a couple of minutes, they switch the lights off. That really is not welcome in an internal toilet with no windows.

The traditional way of running a house is not so demanding that we need a lot of assistance anyway. It really isn’t. I only spend a few seconds every day turning lights on and off or adjusting temperature. It would take longer than that on average to maintain apps to do it automatically. As for saving energy by turning heating on and off all the time, I think that is over-valued as a feature too. The air in a house doesn’t hold much heat, and if the building cools down, it takes a lot to get it back up again. That actually puts more strain on a boiler than running at a relatively constant low output. If the boiler and pumps have to work harder more often, they are likely to last less time, and the savings would be eradicated.

So, all in all, while I can certainly see merits in adding chips to all sorts of stuff, I think their merits in home automation are being grossly overstated in the current media enthusiasm, and the downsides far too much ignored. Yes you can, but most people won’t want to, those who do probably won’t want to do nearly as much as is being suggested, and even they won’t want all the pain of doing it via service providers adding unnecessary layers or misusing their data.

We could have a conscious machine by end-of-play 2015

I made xmas dinner this year, as I always do. It was pretty easy.

I had a basic plan, made up a menu suited to my family and my limited ability, ensured its legality, including license to serve and consume alcohol to my family on my premises, made sure I had all the ingredients I needed, checked I had recipes and instructions where necessary. I had the tools, equipment and working space I needed, and started early enough to do it all in time for the planned delivery. It was successful.

That is pretty much what you have to do to make anything, from a cup of tea to a space station, though complexity, cost and timings may vary.

With conscious machines, it is still basically the same list. When I check through it to see whether we are ready to make a start I conclude that we are. If we make the decision now at the end of 2013 to make a machine which is conscious and self-aware by the end of 2015, we could do it.

Every time machine consciousness is raised as a goal, a lot of people start screaming for a definition of consciousness. I am conscious, and I know how it feels. So are you. Neither of us can write down a definition that everyone would agree on. I don’t care. It simply isn’t an engineering barrier. Let’s simply aim for a machine that can make either of us believe that it is conscious and self aware in much the same way as we are. We don’t need weasel words to help pass an abacus off as Commander Data.

Basic plan: actually, there are several in development.

One approach is essentially reverse engineering the human brain, mapping out the neurons and replicating them. That would work (it is Markram’s team’s approach), but would take too long. It doesn’t need us to understand how consciousness works; it is rather like methodically taking a television apart and making an exact replica using identical purchased or manufactured components. It has the advantage of existing backing, and if nobody tries a better technique early enough, it could win. More comment on this approach: https://timeguide.wordpress.com/2013/05/17/reverse-engineering-the-brain-is-a-very-slow-way-to-make-a-smart-computer/

Another is to use a large bank of powerful digital computers with access to a large pool of data and knowledge. That can produce a very capable machine that can answer difficult questions or do various things well that traditionally need smart people, but as far as creating a conscious machine goes, it won’t work. It will happen anyway for various reasons, and may produce some valuable outputs, but it won’t result in a conscious machine.

Another is to use accelerated guided evolution within an electronic equivalent of the ‘primordial soup’. That takes the process used by nature, which clearly worked, then improves and accelerates it using whatever insights and analysis we can add via advanced starting points, subsequent guidance, archiving, cataloguing and smart filtering and pruning. That would also work. If we can make the accelerated evolution powerful enough, it can be achieved quickly. This is my favoured approach because it is the only one capable of succeeding by the end of 2015. So that is the basic plan, and we’ll develop detailed instructions as we go.
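A toy version of such an evolutionary engine can be sketched in a few lines. Here a simple genetic algorithm with ‘smart filtering and pruning’ (elitist selection) evolves a 32-bit genome toward a stand-in fitness target; a real version would score the behaviour of evolving circuits rather than bits, but the loop is the same:

```python
import random

random.seed(1)
TARGET = [1] * 32  # stand-in goal; real fitness would score circuit behaviour

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # occasionally flip bits - the raw variation the 'soup' supplies
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(32)] for _ in range(60)]
for generation in range(200):
    # guidance: keep only the fittest fraction each round (pruning)
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 32:
        break
    parents = population[:20]
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(40)
    ]

best = max(population, key=fitness)
assert fitness(best) >= 30  # converges close to the target within 200 rounds
```

The ‘acceleration’ in the blog’s sense comes from running millions of such loops in parallel in hardware, with archiving and cataloguing of intermediate winners so nothing useful is rediscovered from scratch.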

Menu suited to audience and ability: a machine we agree is conscious and self aware, that we can make using know-how we already have or can reasonably develop within the project time-frame.

Legality: it isn’t illegal to make a conscious machine yet. It should be; it most definitely should be, but it isn’t. The guards are fast asleep and by the time they wake up, notice that we’re up to something, and start taking us seriously, agree on what to do about it, and start writing new laws, we’ll have finished ages ago.

Ingredients:

substantial scientific and engineering knowledge base, reconfigurable analog and digital electronics, assorted structures, 15nm feature size, self organisation, evolutionary engines, sensors, lasers, LEDs, optoelectronics, HDWDM, transparent gel, inductive power, power supply, cloud storage, data mining, P2P, open source community

Recipe & instructions

I’ve written often on this from different angles:

https://timeguide.wordpress.com/2013/02/15/how-to-make-a-conscious-computer/ summarises the key points and adds insight on core component structure – especially symmetry. I believe that consciousness can be achieved by applying similar sensory structures to internal processes as those used to sense external stimuli. Both should have a feedback loop symmetrical to the main structure. Essentially what I’m saying is that sensing that you are sensing something is key to consciousness, and that this is the means of converting detection into sensing, sensing into awareness, and awareness into consciousness.

Once a mainstream lab finally recognises that symmetry of external sensory and internally directed sensory structures, with symmetrical sensory feedback loops (as I describe in this link) is fundamental to achieving consciousness, progress will occur quickly. I’d expect MIT or Google to claim they have just invented this concept soon, then hopefully it will be taken seriously and progress will start.
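The ‘sensing that you are sensing’ symmetry can be illustrated with a deliberately trivial toy: the same sensing structure is pointed both at the outside world and at the system’s own sensing activity. Everything here (names, thresholds) is invented purely to show the shape of the idea, not any working design:

```python
# Toy illustration of the symmetry idea: one sensing structure is aimed at
# external stimuli, and an identical structure is aimed at the sensing
# activity itself, with the same feedback shape in both directions.

def sense(signal, threshold=0.5):
    # detection -> sensing: a detector plus a report of its own firing
    fired = signal > threshold
    return {"fired": fired, "level": signal}

def meta_sense(sensing_event):
    # the internal sensor applies the SAME structure to the sensing event:
    # 'sensing that you are sensing'
    return sense(1.0 if sensing_event["fired"] else 0.0)

external = sense(0.8)            # senses the outside stimulus
internal = meta_sense(external)  # senses its own act of sensing
assert external["fired"] and internal["fired"]
assert not meta_sense(sense(0.1))["fired"]  # no sensing, so nothing to meta-sense
```

In the blog’s proposal the two structures would be evolved analog circuits rather than functions, but the symmetry between the outward-facing and inward-facing loops is the point.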

https://timeguide.wordpress.com/2011/09/18/gel-computing/

https://timeguide.wordpress.com/2010/06/16/man-machine-equivalence-by-2015/

Tools, equipment, working space: any of many large company, government or military labs could do this.

Starting early enough: it is very disappointing that work hasn’t already conspicuously begun on this approach, though of course it may be happening in secret somewhere. The slower alternative being pursued by Markram et al is apparently quite well funded and publicised. Nevertheless, if work starts at the beginning of 2014, it could achieve the required result by the end of 2015. The vast bulk of the time would be creating the sensory and feedback processes to direct the evolution of electronics within the gel.

It is possible that ethics issues are slowing progress. It should be illegal to do this without proper prior discussion and effective safeguards. Possibly some of the labs capable of doing it are avoiding doing so for ethical reasons. However, I doubt that. There are potential benefits that could be presented in such a way as to offset potential risks and it would be quite a prize for any brand to claim the first conscious machine. So I suspect the reason for the delay to date is failure of imagination.

The early days of evolutionary design were held back by teams wanting to stick too closely to nature, rather than simply drawing idea stimulation from biomimetics and building on it. An entire generation of electronic and computer engineers has been crippled by being locked into digital thinking, but the key processes and structures within a conscious computer will come from the analog domain.

I want my TV to be a TV, not a security and privacy threat

Our TV just died. It was great, may it rest in peace in TV heaven. It was a good TV and it lasted longer than I hoped, but I finally got an excuse to buy a new one. Sadly, it was very difficult finding one and I had to compromise. Every TV I found appears to be a government spy, a major home security threat or a chaperone device making sure I only watch wholesome programming. My old one wasn’t and I’d much rather have a new TV that still isn’t, but I had no choice in the matter. All of today’s big TVs are ruined by the addition of features and equipment that I would much rather not have.

Firstly, I didn’t want any built-in cameras or microphones: I do not want some hacker watching or listening to my wife and me on our sofa, and I do not trust any company in the world on security, so if a TV has a microphone or camera, I assume that it can be hacked. Any TV that offers voice recognition, gesture recognition or video comms is a security risk. All the good TVs have voice control, even though that needs a nice clear newsreader-style voice and won’t work for me, so I will get no benefit from it; yet I had no choice about having the microphone and will have to suffer the downside. I am hoping the mic can only be used for voice control and not for networking apps, and therefore might not be network accessible.

I drew the line at having a camera in my living room, so had to avoid buying the more expensive smart TVs. If there weren’t cameras in all the top TVs, I would happily have spent 70% more.

I also don’t want any TV that makes a record of what I watch on it for later investigation and data mining by Big Brother, the NSA, GCHQ, Suffolk County Council or ad agencies. I don’t want it even remembering anything of what is watched on it for viewing history or recommendation services.

That requirement eliminated my entire shortlist. Every decent quality large TV has been wrecked by the addition of ‘features’ that I not only don’t want, but would much rather not have. That is not progress; it is going backwards. Samsung have made loads of really good TVs and then ruined them all. I blogged a long time ago that upgrades are wrecking our future. TV is now a major casualty.

I am rather annoyed at Samsung now – that’s who I eventually bought from. I like the TV bits, but I certainly do not and never will want a TV that ‘learns my viewing habits and offers recommendations based on what I like to watch’.

Firstly, it will be so extremely ill-informed as to make any such feature utterly useless. I am a channel hopper so 99% of things that get switched to momentarily are things or genres I never want to see again. Quite often, the only reason I stopped on that channel was to watch the new Meerkat ad.

Secondly, our TV is often on with nobody in the room. Just because a programme was on screen does not mean I or indeed anyone actually looked at it, still less that anyone enjoyed it.

Thirdly, why would any man under 95 want his TV to make notes of what he watches when he is alone, and then make that viewing history available to everyone, or feed it into any recommendation algorithm?

Fourthly, I really wanted a smart TV but couldn’t because of the implied security risks. I have to assume that if the designers think they should record and analyse my broadcast TV viewing, then the same monitoring and analysis would be extended to web browsing and any online viewing. But a smart TV isn’t only going to be accessed by others in the same building. It will be networked. Worse still, it will be networked to the web via a wireless LAN that doesn’t have a Google street view van detector built in, so it’s a fair bet that any data it stores may be snaffled without warning or authorisation some time.

Since the TV industry apparently takes the view that nasty hacker types won’t ever bother with smart TVs, they will leave easily accessible and probably very badly secured data and access logs all over the place. So I have to assume that all the data and metadata gathered by my smart TV with its unwanted and totally useless viewing recommendations will effectively be shared with everyone on the web, every advertising executive, every government snoop and local busybody, as well as all my visitors and other household members.

But it still gets worse. Smart TVs don’t stop there. They want to help you to share stuff too. They want ‘to make it easy to share your photos and your other media from your PC, laptop, tablet, and smartphone’. Stuff that! So, if I was mad enough to buy one, any hacker worthy of the name could probably use my smart TV to access all my files on any of my gadgets. I saw no mention in the TV descriptions of regular operating system updates or virus protection or firewall software for the TVs.

So, in order to get extremely badly informed viewing recommendations that have no basis in reality, I’d have to trade all our privacy and household IT security and open the doors to unlimited and badly targeted advertising, knowing that all my viewing and web access may be recorded for ever on government databases. Why the hell would anyone think that makes a TV more attractive? When I buy a TV, I want to switch it on, hit an auto-tune button and then use it to watch TV. I don’t really want to spend hours going through a manual to do some elaborate set-up where I disable a whole string of privacy and security risks one by one.

In the end, I abandoned my smart TV requirement, because it came with too many implied security risks. The TV I bought has a microphone to allow a visitor with a clearer voice to use voice control, which I will disable if I can, and features artificial-stupidity-based viewing recommendations which I don’t want either. These cost extra for Samsung to develop and put in my new TV. I would happily have paid extra to have them removed.

Afternote: I am an idiot, 1st class. I thought I wasn’t buying a smart TV but it is. My curiosity got the better of me and I activated the network stuff for a while to check it out, and on my awful broadband, mostly it doesn’t work. So, with no significant benefits, I just won’t give it network access; it isn’t worth the risk. I can’t disable the microphone or the viewing history, but I can at least clear it if I want.

I love change and I love progress, but it’s the other direction. You’re going the wrong way!

Free-floating AI battle drone orbs (or making Glyph from Mass Effect)

I have spent many hours playing various editions of Mass Effect, from EA Games. It is one of my favourites and has clearly benefited from some highly creative minds. They had to invent a wide range of fictional technology along with detailed technical explanations for how it is meant to work. Some is just artistic redesign of very common sci-fi ideas, but they have added a huge amount of their own too. Sci-fi and real engineering have always had a strong mutual cross-fertilisation. I have lectured sometimes on science fact v sci-fi, to show that what we eventually achieve is sometimes far better than the sci-fi version (Exhibit A – the rubbish voice synthesisers and storage devices used on Star Trek, TOS).

Glyph

Liara talking to her assistant Glyph. Picture credit: social.bioware.com

In Mass Effect, lots of floating holographic-style orbs float around all over the place for various military or assistant purposes. They aren’t confined to a fixed holographic projection system. Disruptor and battle drones are common, along with a few home/lab/office assistants such as Glyph, who is Liara’s friendly PA, not a battle drone. These aren’t just dumb holograms; they can carry small devices and do stuff. The idea of a floating sphere may have been inspired by Halo, but the Mass Effect ones look more holographic and generally nicer. (Think Apple v Microsoft). Battle drones are highly topical now, but current technology uses wings and helicopters. The drones in sci-fi like Mass Effect and Halo are just free-floating ethereal orbs. That’s what I am talking about now. They aren’t in the distant future. They will be here quite soon.

I recently wrote on how to make force field and floating cars or hover-boards.

How to actually make a Star Wars Landspeeder or a Back to the future hoverboard.

Briefly, they work by creating a thick cushion of magnetically confined plasma under the vehicle that can keep it well off the ground, a bit like a hovercraft without a skirt or fans. Layers of confined plasma could also be used to make relatively weak force fields. A key claim of the idea is that you can coat a firm surface with a packed array of steerable electron pipes to make the plasma, and a potentially reconfigurable and self-organising circuit to produce the confinement field. No moving parts, and the coating would simply produce a lifting or propulsion force according to its area.

This is all very easy to imagine for objects with a relatively flat base like cars and hover-boards, but I later realised that the force field bit could be used to suspend additional components, and if they also have a power source, they can add locally to that field. If they can sense their exact relative positions and instantaneously adjust the local fields to maintain or achieve their desired position, dynamic self-organisation would allow just about any shape and dynamics to be achieved and maintained. So basically, if you break the levitation bit up, each piece could still work fine.

I love self organisation, and biomimetics generally. I wrote my first paper on hormonal self-organisation over 20 years ago to show how networks or telephone exchanges could self organise, and have used it in many designs since. With a few pieces generating external air flow, the objects could wander around. Cunning design using multiple components could therefore be used to make orbs that float and wander around too, even with the inspired moving plates that Mass Effect uses for its drones. It could also be very lightweight and translucent, just like Glyph. Regular readers will not be surprised if I recommend some of these components should be made of graphene, because it can be used to make wonderful things. It is light, strong, an excellent electrical and thermal conductor, a perfect platform for electronics, can be used to make super-capacitors and so on. Glyph could use a combination of moving physical plates, and use some to add holographic projection to make it look pretty. So, part physical and part hologram then.
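The self-organisation step is easy to sketch numerically: each free-floating component senses its position, compares it with its assigned slot in the target shape, and nudges its local field to close the gap. A toy sketch, with all the numbers invented (real components would be coupling plasma fields, not moving points):

```python
# Toy self-organisation: each component moves a fraction of the way toward
# its assigned slot each step, so any target formation is reached and held.

def step(positions, targets, gain=0.5):
    # gain models how strongly a component's local field corrects its error
    return [(x + gain * (tx - x), y + gain * (ty - y))
            for (x, y), (tx, ty) in zip(positions, targets)]

# four scattered components asked to form a unit square
targets = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
positions = [(0.9, 0.4), (0.1, 0.7), (0.5, 0.2), (0.6, 0.8)]
for _ in range(20):
    positions = step(positions, targets)

# after 20 steps every component is essentially on its slot
assert all(abs(x - tx) < 1e-3 and abs(y - ty) < 1e-3
           for (x, y), (tx, ty) in zip(positions, targets))
```

The same correction loop also explains the resilience claimed later: knock a component away and the loop simply pulls it, or a spare, back into the slot.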

Plates used in the structure can dynamically attract or repel each other and use tethers, or use confined plasma cushions. They can create air jets in any direction. They would have a small load-bearing capability. Since graphene foam is potentially lighter than helium

Could graphene foam be a future Helium substitute?

it could be added into structures to reduce forces needed. So, we’re not looking at orbs that can carry heavy equipment here, but carrying processing, sensing, storage and comms would be easy. Obviously they could therefore include whatever state of the art artificial intelligence has got to, either on-board, distributed, or via the cloud. Beyond that, it is hard to imagine a small orb carrying more than a few hundred grammes. Nevertheless, it could carry enough equipment to make it very useful indeed for very many purposes. These drones could work pretty much anywhere. Space would be tricky but not that tricky, the drones would just have to carry a little fuel.

But let’s get right to the point. The primary market for this isn’t the home or lab or office, it is the battlefield. Battle drones are being regulated as I type, but that doesn’t mean they won’t be developed. My generation grew up with the nuclear arms race. Millennials will grow up with the drone arms race. And that if anything is a lot scarier. The battle drones on Mass Effect are fairly easy to kill. Real ones won’t.

Mass Effect combat drone. Picture credit: masseffect.wikia.com

If these cute little floating drone things are taken out of the office and converted to military uses they could do pretty much all the stuff they do in sci-fi. They could have lots of local energy storage using super-caps, so they could easily carry self-organising lightweight lasers or electrical shock weaponry too, or carry steerable mirrors to direct beams from remote lasers, and high definition 3D cameras and other sensing for reconnaissance. The interesting thing here is that self organisation of potentially redundant components would allow a free-roaming battle drone that would be highly resistant to attack. You could shoot it for ages with laser or bullets and it would keep coming. Disruption of its fields by electrical weapons would make it collapse temporarily, but it would just get up and reassemble as soon as you stop firing. With its intelligence potentially hosted in a local cloud, you could make a small battalion of these that could only be properly killed by totally frazzling them all. They would be potentially lethal individually but almost irresistible as a team. Super-capacitors could be recharged frequently using companion drones to relay power from the rear line. A mist of spare components could make ready replacements for any that are destroyed. Self-orientation and use of free-space optics for comms make wiring and circuit boards redundant, and sub-millimetre chips 100m away would be quite hard to hit.

Well I’m scared. If you’re not, I didn’t explain it properly.

Reverse engineering the brain is a very slow way to make a smart computer

The race is on to build conscious and smart computers and brain replicas. This article explains some of Markram’s approach. http://www.wired.com/wiredscience/2013/05/neurologist-markam-human-brain/all/

It is a nice project, and its aims are to make a working replica of the brain by reverse engineering it. That would work eventually, but it is slow and expensive and it is debatable how valuable it is as a goal.

Imagine if you want to make an aeroplane from scratch.  You could study birds and make extremely detailed reverse engineered mathematical models of the structures of individual feathers, and try to model all the stresses and airflows as the wing beats. Eventually you could make a good model of a wing, and by also looking at the electrics, feedbacks, nerves and muscles, you could eventually make some sort of control system that would essentially replicate a bird wing. Then you could scale it all up, look for other materials, experiment a bit and eventually you might make a big bird replica. Alternatively, you could look briefly at a bird and note the basic aerodynamics of a wing, note the use of lightweight and strong materials, then let it go. You don’t need any more from nature than that. The rest can be done by looking at ways of propelling the surface to create sufficient airflow and lift using the aerofoil, and ways to achieve the strength needed. The bird provides some basic insight, but it simply isn’t necessary to copy all a bird’s proprietary technology to fly.

Back to Markram. If the real goal is to reverse engineer the actual human brain and make a detailed replica or model of it, then fair enough. I wish him and his team, and their distributed helpers and affiliates every success with that. If the project goes well, and we can find insights to help with the hundreds of brain disorders and improve medicine, great. A few billion euros will have been well spent, especially given the waste of more billions of euros elsewhere on futile and counter-productive projects. Lots of people criticise his goal, and some of their arguments are nonsensical. It is a good project and for what it’s worth, I support it.

My only real objection is that a simulation of the brain will not think well and at best will be an extremely inefficient thinking machine. So if a goal is to achieve thought or intelligence, the project as described is barking up the wrong tree. If that isn’t a goal, so what? It still has the other uses.

A simulation can do many things. It can be used to follow through the consequences of an input if the system is sufficiently well modelled. A sufficiently detailed and accurate brain simulation could predict the impacts of a drug or behaviours resulting from certain mental processes. It could follow through the impacts and chain of events resulting from an electrical impulse, thus finding out what the eventual result will be. It can therefore very inefficiently predict the result of thinking, but by using extremely high speed computation, it could in principle work out the end result of some thoughts. But it needs enormous detail and algorithmic precision to do that. I doubt it is achievable simply because of the volume of calculation needed. Thinking properly requires consciousness and therefore emulation. A conscious circuit has to be built, not just modelled.

Consciousness is not the same as thinking. A simulation of the brain would not be conscious, even if it can work out the result of thoughts. It is the difference between printed music and played music. One is data, one is an experience. A simulation of all the processes going on inside a head will not generate any consciousness, only data. It could think, but not feel or experience.

Having made that important distinction, I still think that Markram’s approach will prove useful. It will generate many useful insights into the workings of the brain, and many of the processes nature uses to solve certain engineering problems. These insights and techniques can be used as input into other projects. Biomimetics is already proven as a useful tool in solving big problems. Looking at how the brain works will give us hints how to make a truly conscious, properly thinking machine. But just as with birds and Airbuses, we can take ideas and inspiration from nature and then do it far better. No bird can carry the weight or fly as high or as fast as an aeroplane. No proper plane uses feathers or flaps its wings.

I wrote recently about how to make a conscious computer:

https://timeguide.wordpress.com/2013/02/15/how-to-make-a-conscious-computer/ and https://timeguide.wordpress.com/2013/02/18/how-smart-could-an-ai-become/

I still think that approach will work well, and it could be a decade faster than going Markram’s route. All the core technology needed to start making a conscious computer already exists today. With funding and some smart minds to set the process in motion, it could be done in a couple of years. The potential conscious and ultra-smart computer, properly harnessed, could do its research far faster than any human on Markram’s team. It could easily beat them to the goal of a replica brain. The converse is not true; Markram’s current approach would yield a conscious computer very slowly.

So while I fully applaud the effort and endorse the goals, changing the approach now could give far more bang for the buck, far faster.

The future of music creation

When I was a student, I saw people around me that could play musical instruments and since I couldn’t, I felt a bit inadequate, so I went out and bought a £13 guitar and taught myself to play. Later, I bought a keyboard and learned to play that too. I’ve never been much good at either, and can’t read music, but  if I know a tune, I can usually play it by ear and sometimes I compose, though I never record any of my compositions. Music is highly rewarding, whether listening or creating. I play well enough for my enjoyment and there are plenty of others who can play far better to entertain audiences.

Like almost everyone, I mostly listen to music created by others, and today you can access it by a wide range of means. It does seem to me though that the music industry is stuck in the 20th century. Even concerts seem primitive compared to what is possible. So do streaming and download services. For some reason, new technology seems mostly to have escaped the industry’s attention, apart from a few geeks. There are a few innovative musicians and bands out there but they represent a tiny fraction of the music industry. Mainstream music is decades out of date.

Starting with the instruments themselves, even electronic instruments produce sound that appears to come from a single location. An electronic violin or guitar is just an electronic version of a violin or guitar; the sound appears to come from a single point all the way through. It doesn’t throw sound all over the place or use a wide range of dynamic effects to embrace the audience in surround sound. Why not? Why can’t a musician or a technician make the music meander around the listener, creating additional emotional content by getting up close, whispering right into an ear, like a violinist picking out an individual woman in a bar and serenading her? High quality surround sound systems have been in home cinemas for yonks. They are certainly easy to arrange in a high budget concert. Audio shouldn’t stop with stereo. It is surprising just how little use current music makes of existing surround sound capability. It is as if they think everyone only ever listens on headphones.
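Making a sound meander around a listener needs nothing exotic. Given a ring of speakers, per-speaker gains can be computed from the angle of a virtual source so the sound appears to orbit the room. A hypothetical sketch (the speaker count, cosine gain lobe and equal-power normalisation are my own choices for illustration, not any standard panning law):

```python
import math

# Pan a virtual sound source around a listener seated inside a ring of
# N speakers: each speaker's gain falls off with angular distance from
# the source direction, normalised so total acoustic power stays constant.

def speaker_gains(source_angle, n_speakers=8):
    gains = []
    for i in range(n_speakers):
        speaker_angle = 2 * math.pi * i / n_speakers
        # smallest angular difference, wrapped into [-pi, pi]
        diff = (source_angle - speaker_angle + math.pi) % (2 * math.pi) - math.pi
        # cosine lobe: full gain toward the source, zero beyond 90 degrees
        gains.append(max(0.0, math.cos(diff)))
    norm = math.sqrt(sum(g * g for g in gains))  # equal-power normalisation
    return [g / norm for g in gains]

g = speaker_gains(0.0)
assert abs(g[0] - max(g)) < 1e-9                 # speaker facing the source is loudest
assert abs(sum(x * x for x in g) - 1.0) < 1e-9   # constant total power
```

Sweeping `source_angle` over time is all it takes to make the violin circle the audience; a concert rig would do the same with many more channels.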

Of course, there is no rule that electronic instruments have to be just electronic derivatives of traditional ones, and to be fair, many sounds and effects on keyboards and electric guitars do go a lot further than just emulating traditional variants. But there still seems to be very little innovation in new kinds of instrument to explore dynamic audio effects, especially any that make full use of the space around the musician and audience. With the gesture recognition already available even on an Xbox or PS3, surely we should have a much more imaginative range of potential instruments, where you can make precise gestures, wave or throw your arms, squeeze your hands, make an emotional facial expression or delicately pinch, bend or slide fingers to create effects. Even multi-touch on phones or pads should have made a far bigger impact by now.

(As an aside, ever since I was a child, I have thought that there must be a visual equivalent to music. I don’t know what it is, and probably never will, but surely, there must be visual patterns or effects that can generate an equivalent emotional response to music. I feel sure that one day someone will discover how to generate them and the field will develop.)

The human body is a good instrument itself. Most people can sing to a point, or at least hum or whistle a tune, even if they can't play an instrument. A musical instrument is really just an unnecessary interface between your brain, which knows what sound you want to make, and an audio production mechanism. Up until the late 20th century, the instrument made the sound; today, outside of a live concert at least, it is usually a computer with a digital to analog converter and a speaker attached. Links between computers and people are far better now though, so we can bypass the hard-to-learn instrument bit. With thought recognition, nerve monitoring, humming, whistling, gesture and expression recognition and so on, there is a very rich output from the body that could be used far more intuitively and directly to generate the sound. You shouldn't have to learn how to play an instrument in the 21st century. The sound creation process should interface almost directly to your brain, as intuitively as your body does. If you can hum it, you can play it. Or you should be able to, if the industry were keeping up.

Going a bit further, most of us have some idea what sort of music or effect we want to create, but don't have quite enough musical experience or skill to know how. A skilled composer may be able to write something down right away to achieve a musical effect that the rest of us would struggle to imagine. So, add some AI. Most music is based on fairly straightforward mathematical principles; even symphonies are mostly combinations of effects and sequences that fit well within AI-friendly guidelines. We use calculators to do calculations, so why not use AI to help compose music? Any of us should be able to compose great music with tools we could build now. It shouldn't be the future, it should be the present.

Let’s look at music distribution. When we buy a music track or stream it, why do we still only get the audio? Why isn’t the music video included by default? Sure, you can watch on YouTube but then you generally get low quality audio and video. Why isn’t purchased music delivered at the highest quality with full HD 3D video included, or videos if the band has made a few, with all the latest ones included as they emerge? If a video is available for music video channels, it surely should be available to those who have bought the music. That it isn’t reflects the contempt that the music industry generally shows to its customers. It treats us as a bunch of thieves who must only ever be given the least possible access for the greatest possible outlay, to make up for all the times we must of course be stealing off them. That attitude has to change if the industry is to achieve its potential. 

Augmented reality is emerging now. It already offers some potential to add overlays at concerts, but in a few years, when video visors are commonplace, we should expect to see band members playing up in the air, flying around the audience, virtual band members, and cartoon and fantasy creations all over the place doing all sorts of things, with visual special effects overlaying the sound effects. Concerts will be a spectacular opportunity to blend the best of visual, audio, dance, storytelling, games and musical arts together. Concerts could be much more exciting if they use the technology's potential. Will they? I guess we'll have to wait and see. Much of this could be done already, but only a little is.

Now let's consider the emotional connection between a musician and the listener. We are all very aware of the intense (though unilateral) relationship teens can often build with their pop idols. They may follow them on Twitter and other social nets as well as listening to their music and buying their posters. Augmented reality will let them go much further still. They could have their idol with them pretty much all the time, virtually present in their field of view, maybe even walking hand in hand, maybe even kissing them. The potential spectrum extends from distant listening to intimate cuddles. Bearing in mind especially the ages of many fans, how far should we allow this to go and how could it be policed?

Clothing adds potential to the emotional content during listening too. Headphones are fine for the information part of audio, but the lack of stomach-throbbing sound limits the depth of the experience. Music is more than information. Some music is only half there if it isn’t at the right volume. I know from personal experience that not everyone seems to understand this, but turning the volume down (or indeed up) sometimes destroys the emotional content. Sometimes you have to feel the music, sometimes let it fully conquer your senses. Already, people are experimenting with clothes that can house electronics, some that flash on and off in synch with the music, and some that will be able to contract and expand their fibres under electronic control. You will be able to buy clothes that give you the same vibration you would otherwise get from the sub-woofer or the rock concert.

Further down the line, we will be able to connect IT directly into the nervous system. Active skin is not far away. Inducing voltages and current in nerves via tiny implants or onplants on patches of skin will allow computers to generate sensations directly.

This augmented reality and a link to the nervous system gives another whole dimension to telepresence. Band members at a concert will be able to play right in front of audience members, shake them, cuddle them. The emotional connection could be a lot better.

Picking up electrical clues from the skin allows automated music selection according to the wearer's emotional state. Even simple properties like skin conductivity can give clues about emotional state. Depending on your stress level, for example, music could be played that soothes you, or if you feel calm, maybe more stimulating tracks could be played. Playlists would thus adapt to how you feel.
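A minimal sketch of how such adaptive selection might work. The track names and their "energy" scores are invented for illustration, and the stress reading (0 = calm, 1 = very stressed) would in practice be inferred from something like a skin conductivity sensor:

```python
# Hypothetical track data: (title, energy score 0..1) - both invented
TRACKS = [("Nocturne", 0.2), ("Anthem", 0.9), ("Ballad", 0.4)]

def adaptive_playlist(stress, tracks=TRACKS):
    """Order tracks so the first ones counterbalance the listener's
    stress level: stressed listeners get soothing, low-energy music
    first; calm listeners get more stimulating tracks."""
    target_energy = 1.0 - stress  # stressed -> soothing, calm -> stimulating
    return sorted(tracks, key=lambda t: abs(t[1] - target_energy))

print(adaptive_playlist(0.9)[0][0])  # a stressed listener hears "Nocturne" first
```

The interesting design question is the mapping itself: here stress is simply inverted into a target energy, but a real system could just as easily match the mood rather than counterbalance it, depending on what the wearer prefers.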

Finally, music is a social thing too. It brings people together in shared experiences. This is especially true for the musicians, but audience members often feel some shared experience too. Atmosphere. Social networking already sees some people sharing what music they are listening to (I don't want to share my tastes, but I recognise that some people do, and that's fine). Where shared musical taste is important to a social group, it could be enhanced by providing tools to enable shared composition. AI can already write music in particular styles: feed Mozart or Beethoven into some music generators and they will produce music that sounds as if it had been composed by that person, as fast as it comes out of the speakers. Such a system could take style preferences from a small group of people and produce music that fits across those styles. The result is a sort of tribal music, representative of the tribe that generated it. In this way, music could become even more of a social tool in the future than it already is.

Killing machines

There is rising concern about machines such as drones and battlefield robots that could soon be given the decision on whether to kill someone. Since I wrote this and first posted it a couple of weeks ago, the UN has put out their thoughts as the DM writes today:

http://www.dailymail.co.uk/news/article-2318713/U-N-report-warns-killer-robots-power-destroy-human-life.html 

At the moment, drones and robots are essentially just remote controlled devices and a human makes the important decisions. In the sense that a human uses them to dispense death from a distance, they aren't all that different from a spear or a rifle apart from scale of destruction and the distance from which death can be dealt. Without consciousness, a missile is no different from a spear or bullet, nor is the remote controlled machine it is launched from. It is the act of hitting the fire button that is most significant, but proximity is important too. If an operator is thousands of miles away and isn't physically threatened, or perhaps has never even met people from the target population, other ethical issues start emerging. But those are ethical issues for the people, not the machine.

Adding artificial intelligence to let a machine decide whether a human is to be killed or not isn't difficult per se. If you don't care about killing innocent people, it is pretty easy. It is only made difficult because civilised countries value human lives, and because they distinguish between combatants and civilians.

Personally, I don’t fully understand the distinction between combatants and civilians. In wars, often combatants have no real choice but to fight or are conscripted, and they are usually told what to do, often by civilian politicians hiding in far away bunkers, with strong penalties for disobeying. If a country goes to war, on the basis of a democratic mandate, then surely everyone in the electorate is guilty, even pacifists, who accept the benefits of living in the host country but would prefer to avoid the costs. Children are the only innocents.

In my analysis, soldiers in a democratic country are public sector employees like any other, just doing a job on behalf of the electorate. But that depends to some degree on them keeping their personal integrity and human judgement. The many military personnel who take pride in following orders could be thought of as being dehumanised and reduced to killing machines; many would actually be proud to be described that way. A soldier like that, who merely follows orders, deliberately abdicates human responsibility. Having access to the capability for good judgement but refusing to use it, they reduce themselves to a lower moral level than a drone. At least a drone doesn't know what it is doing.

On the other hand, disobeying a direct order may soothe issues of conscience but invoke huge personal costs, anything from shaming and peer disapproval to execution. Balancing that is a personal matter, but it is the act of balancing it that is important, not necessarily the outcome. Giving some thought to the matter and wrestling at least a bit with conscience before doing it makes all the difference. That is something a drone can’t yet do.

So even at the start, the difference between a drone and at least some soldiers is not always as big as we might want it to be, for other soldiers it is huge. A killing machine is competing against a grey scale of judgement and morality, not a black and white equation. In those circumstances, in a military that highly values following orders, human judgement is already no longer an essential requirement at the front line. In that case, the leaders might set the drones into combat with a defined objective, the human decision already taken by them, the local judgement of who or what to kill assigned to adaptive AI, algorithms and sensor readings. For a military such as that, drones are no different to soldiers who do what they’re told.

However, if the distinction between combatant and civilian is required, then someone has to decide the relative value of different classes of lives. Then they either have to teach it to the machines so they can make the decision locally, or the costs of potential collateral damage from just killing anyone can be put into the equations at head office. Or thirdly, and most likely in practice, a compromise can be found where some judgement is made in advance and some locally. Finally, it is even possible for killing machines to make decisions on some easier cases and refer difficult ones to remote operators.

We live in an electronic age, with face recognition, friend or foe electronic ID, web searches, social networks, location and diaries, mobile phone signals and lots of other clues that might give some knowledge of a target and potential casualties. How important is it to kill or protect this particular individual or group, or take that particular objective? How many innocent lives are acceptable cost, and from which groups – how many babies, kids, adults, old people? Should physical attractiveness or the victim's profession be considered? What about race or religion, or nationality, or sexuality, or anything else that could possibly be found out about the target before killing them? How much should people's personal value be considered, or should everyone be treated equally at the point of potential death? These are tough questions, but the means of getting hold of the data are improving fast and we will be forced to answer them. By the time truly intelligent drones are capable of making human-like decisions, they may well know who they are killing.

In some ways this far future with a smart or even conscious drone or robot making informed decisions before killing people isn't as scary as the time between now and then. Terminator and Robocop may be nightmare scenarios, but at least in those there is clarity about which one is the enemy. Machines don't yet have anywhere near that capability. However, if an objective is considered valuable, military leaders could already set a machine to kill people even when there is little certainty about the role or identity of the victims. They may put in some algorithms and crude AI to improve performance or reduce errors, but the algorithmic uncertainty and the callous, uncaring dispatch of potentially innocent people is very worrying.

Increasing desperation could be expected to lower barriers to use. So could a lower regard for the value of human life, and often in tribal conflicts people don't consider the lives of the opposition to have a very high value. This is especially true in terrorism, where the objective is often to kill innocent people. It might not matter that the drone doesn't know who it is killing, as long as it might be killing the right target as part of the mix. I think it is reasonable to expect a lot of battlefield use and certainly terrorist use of semi-smart robots and drones that kill relatively indiscriminately. Even when truly smart machines arrive, they might be set to malicious goals.

Then there is the possibility of rogue drones and robots. The Terminator/Robocop scenario. If machines are allowed to make their own decisions and then to kill, can we be certain that the safeguards are in place that they can always be safely deactivated? Could they be hacked? Hijacked? Sabotaged by having their fail-safes and shut-offs deactivated? Have their ‘minds’ corrupted? As an engineer, I’d say these are realistic concerns.

All in all, it is a good thing that concern is rising and we are seeing more debate. It is late, but not too late, to make good progress to limit and control the future damage killing machines might do. Not just directly in loss of innocent life, but to our fundamental humanity as armies get increasingly used to delegating responsibility to machines to deal with a remote dehumanised threat. Drones and robots are not the end of warfare technology, there are far scarier things coming later. It is time to get a grip before it is too late.

When people fought with sticks and stones, at least they were personally involved. We must never allow personal involvement to disappear from the act of killing someone.

Culture tax and sustainable capitalism

I have written several times now about changing capitalism and democracy to make them suited to the 21st century. Regardless of party politics, most people want a future where nobody is too poor to live a dignified and comfortable life. To ensure that is possible, we need to tweak a few things.

I suggested a long time ago that there could be a basic income for all, without any means testing on it, so that everyone has an income at a level they can live on. No means testing means little admin. Then wages go on top, so that everyone is encouraged to work, and then all income from all sources is totalled and taxed appropriately. It is a nice idea. I wasn't the first to recommend it and many others are saying much the same. The idea is old, but the figures are rarely discussed. It is harder than it sounds, and being a nice idea doesn't ensure economic feasibility.

The differences between the parties' figures would be relatively minor, so let's ignore party politics. In today's money, it would be great if everyone could have, say, £30k a year as a state benefit, then earn whatever they can on top. £30k doesn't make you rich, but you can live OK on it, so nobody would be poor in any proper sense of the word. With everyone economically provided for and able to lead comfortable and dignified lives, it would be a utopia compared to today. Sadly, it doesn't add up yet. 65,000,000 × £30,000 = £1,950bn. The UK economy isn't that big. The state only gets to control part of GDP, and out of that reduced budget it also has its other costs of providing health, education, defence etc., so the amount that could be dished out to everyone on this basis is a lot smaller than £30k. Even if the state took 75% of GDP and spent most of it on the base allowance, £10k per person would be pushing it. So a family could afford a modest lifestyle, but single people would really struggle. Some people would need additional help, and that reduces the pool left to pay the basic allowance still further. Also, if the state takes 75% of GDP, only 25% is left for everything else, so salaries would be flat, reducing the incentive to work, while investment and entrepreneurial activity are starved of both resources and incentive.
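The arithmetic above can be checked in a couple of lines. The GDP figure of roughly £1.6tn is my own rough assumption for illustration, not from the original argument:

```python
population = 65_000_000
allowance = 30_000                        # £ per person per year
total_cost = population * allowance
print(f"£{total_cost / 1e9:,.0f}bn")      # £1,950bn - more than the whole economy

# Rough assumption for illustration: UK GDP of about £1.6tn
gdp = 1_600_000_000_000
state_share = 0.75 * gdp                  # even if the state took 75% of GDP...
per_head = state_share / population       # ...that is only ~£18k per head, before
print(f"£{per_head:,.0f} per head")       # health, education, defence and the rest
```

Once the state's other spending is subtracted from that per-head figure, the ~£10k ceiling mentioned above looks about right.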

Simple maths thus forces us to make compromises. Sharing resources reduces costs considerably. In a first revision, families might be given less for kids than for the adults, but what about groups of young adults sharing a big house? They may be adults but they also benefit from the same economy of shared resources. So maybe there should be a household limit, or a bedroom tax, or forms and means testing, and it mustn’t incentivise people living separately or house supply suffers. Anyway, it is already getting complicated and our original nice idea is in the bin. That’s why it is such a mess at the moment. There just isn’t enough money to make everyone comfortable without doing lots of allowances and testing and admin. We all want utopia, but we can’t afford it. Even the modest 30k-per-person utopia costs at least 3 times more than we can afford.

However, if we can get back to an average 2.5% growth per year in real terms, and surely we can, it would only take 45 years to get there. That isn’t such a long time. We have hope that if we can get some better government than we have had of late, and are prepared to live with a little economic tweaking, we could achieve good quality of life for all in the second half of the century.
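The 45-year figure follows from compound growth: if the £30k-per-person utopia costs about three times what we can afford today, the question is how long 2.5% annual real growth takes to triple the economy:

```python
import math

growth = 0.025                   # 2.5% real growth per year
factor = 3.0                     # the utopia costs about 3x what we can afford
years = math.log(factor) / math.log(1 + growth)
print(math.ceil(years))          # 45
```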

So I really like the idea of a simple welfare system, providing a generous base level allowance to everyone, topped up by rewards of effort, but we will have to wait before we can afford to put that base level at anything like comfortable standards.

Meanwhile, we need to tweak some other things to have any chance of getting there. I’ve commented often that pure capitalism would eventually lead to a machine-based economy, with the machine owners having more and more of the cash, and everyone else getting poorer, so the system will fail. Communism fails too.

On the other hand, capitalism works fine when rewards are shared more equally; it fails when wealth concentration is too high or when incentive is too low. Preserving the incentive to work and create is mainly a matter of setting tax levels well. Making sure that wealth doesn't get concentrated too much needs a new kind of tax.

The solution I suggest is a culture tax. Culture in the widest meaning.

When someone creates and builds a company, they don't do so from a state of nothing. They take for granted all the accumulated knowledge and culture, trained workforce, access to infrastructure, machines, governance, administrative systems, markets, distribution systems and so on. They add just another tiny brick to what is already a huge and highly elaborate structure. They may invest a great deal of their own time and money, but considered as part of the overall system their company inhabits, they only pay for a fraction of the things their company will use.

That accumulated knowledge, culture and infrastructure belongs to everyone, not just those who choose to use it. Businesses might consider that this is what they pay taxes for already, but that isn’t explicit in the current system.

The big businesses that are currently avoiding paying UK taxes by paying overseas companies for intellectual property rights could be seen as trailblazing this approach. If they can understand and even justify the idea of paying another part of their company for IP or a franchise, why not pay the host country for IP for access to their entire culture?

This kind of tax would provide the means needed to avoid too much concentration of wealth. A future businessman might choose to use only software and machines instead of a human workforce to save costs, but levying taxes on use of the cultural base that makes that possible creates a direct link between use of advanced technology and taxation. Sure, he might add a little extra insight or new knowledge, but he would still have to pay the rest of society for access to its share of the cultural base, inherited from previous generations, on which his company is built. The more he automates, the more sophisticated his use of the system, the more he cuts a human workforce out of his empire, the higher his taxation.

Linking to technology use makes sense. Future AI and robots could do a lot of work currently done by humans. A very small number of people could own almost all of the productive economy. But they would be getting far more than their share of the cultural base, which must belong equally to everyone. In a village where one farmer owns all the sheep, other villagers would be right to ask for rent for their share of the commons if he wants to graze them there.

I feel confident that this extra tax would solve many of the problems associated with automation. All of us, not just businessmen, equally own the country, its culture, laws, language, human knowledge (apart from current patents, trademarks etc. of course) and its public infrastructure. Everyone surely should have the right to be paid if someone else uses part of their share.

The extra culture tax would not magically make the economy bigger. It would just ensure that it is more equally shared out. It is a useful tool to be used by future governments to make it possible to keep capitalism sustainable, preventing its collapse, preserving incentive while fairly distributing reward. Without such a tax, capitalism simply may not survive.

Towards the singularity

This entry now forms a chapter in my book Total Sustainability, available from Amazon in paper or ebook form.

How smart could an AI become?

I got an interesting question in a comment from Jim T on my last blog.

What is your opinion now on how powerful machine intelligence will become?

Funny, but my answer relates to the old question: how many angels can sit on the head of a pin?

The brain is not a digital computer, and I don't think a digital processor will be capable of consciousness (though that doesn't mean it can't be very smart and help make huge scientific progress). I believe a conscious AI will be mostly analog in nature, probably based on some fancy combination of adaptive neural nets, as suggested decades ago by Moravec.

Taking that line, and looking at how far miniaturisation can go, then adding all the zeros that arise from the shorter signal transmission paths, faster switching speeds, faster comms, and the greater number of potential pathways using optical WDM rather than electronic connectivity, I calculated that a spherical pinhead (1mm across) could ultimately house the equivalent of 10,000 human brains. (I don't know how smart angels are, so I didn't quite get to the final step.) You could scale that up with as much funding, storage, material and energy as you can provide.

However, what that quantifies is how many human-equivalent AIs you could support. Very useful to know if you plan to build a future server farm to look after electronically immortal people. You could build a machine with the equivalent intelligence of the entire human race. But it doesn't answer the question of how smart or powerful a single AI could ever be. Quantity isn't quality. You could argue that 1% of engineers produce 99% of the value, even with only a fairly small IQ difference. Ten billion people may not be as useful for progress as ten people with five times the IQ. And look at how controversial IQ is; we can't even agree what intelligence is or how to quantify it.

Just based on loose language, how powerful or smart an AI could become depends on an ongoing positive feedback loop. Adding more AIs of the same intelligence level enables the next incremental improvement; using those slightly smarter AIs gets you to the stage after that a bit faster, and so on, ad infinitum. Eventually, you could make an AI that is really, really, really smart.

How smart is that? I don’t have the terminology to describe it. I can borrow an analogy though. Terry Pratchett’s early book ‘The Dark Side of the Sun’ has a character in it called The Bank. It was a silicon planet, with the silicon making a hugely smart mind. Imagine if a pinhead could house 10,000 human brains, and you have a planet of the stuff, and it’s all one big intellect instead of lots of dumb ones. Yep. Really, really, really smart.

How to make a conscious computer

The latest generation of supercomputers have processing speeds higher than the human brain on a simple digital comparison, but they can't think and aren't conscious. It's not even really appropriate to compare them, because the brain mostly isn't digital. It has some digital processing in the optics system but mostly uses adaptive analog neurons, whereas digital computers use digital chips for processing and storage and only a little analog electronics for other circuits. Most digital computers don't even have anything we would equate to senses.

Analog computers aren't used much now, but were in fairly widespread use in some industries until the early 1980s. Most IT people have no first hand experience of them, and some don't even seem to be aware of analog computers, what they can do or how they do it. But in the AI space, a lot of the development uses analog approaches.

https://timeguide.wordpress.com/2011/09/18/gel-computing/ discusses some of my previous work on conscious computer design. I won’t reproduce it here.

I firmly believe consciousness, whether externally or internally focused, is the result of internally directed sensing (sensing can be thought of as the solicitation of feeling), so that you feel your thoughts or sensory inputs in much the same way. The easy bit is figuring out how thinking can work once you have that: how memories can be relived, concepts built, how self-awareness, sentience and intelligence emerge. All those are easy once you have figured out how feeling works. That is the hard problem.

Detection is not the same as feeling. It is easy to build a detector or sensor that flips a switch or moves a dial when something happens, or even precisely quantifies something. Feeling is another layer on top of that. Your skin detects touch, but your brain feels it, senses it. Taking detection and making it feel, making it become a sensation, that's hard. What is it about a particular circuit that adds sensation? That is the missing link, the hard problem, and all the writing available out there just echoes it. Philosophers and scientists have written about this same problem in different ways for ages, and have struggled in vain to get a grip on it; many end up running in circles. So far they don't know the answer, and neither do I. The best any of them offer is elucidation of aspects of the problem, and occasionally some hints of things that they think might somehow be connected with the answer. There exists no answer or explanation yet.

There is no magic in the brain. The circuitry involved in feeling something is capable of being described, replicated and even manufactured. It is possible to find out how to make a conscious circuit, even if we still don’t know what consciousness is or how it works, via replication, reverse engineering or evolutionary development. We manage to make conscious children several times every second.

How far can we go? Having studied a lot of what is written, it is clear that even after a lot of smart people thinking a long time about it, there is a great deal of confusion out there. At least some of it comes from trying to use words that are too big, and some from trying to analyse too much at once. When it is so obviously a tough problem, simplifying it will undoubtedly help. So let's narrow it down a bit.

Feeling needs to be separated out from all the other things going on. What is it that happens that makes something feel? Well, detecting something pre-empts feeling it, and interpreting it or thinking about it comes later. So, ignore the detection and interpretation and thinking bits for now. Even sensation can be modelled as solicitation of feeling, essentially adding qualitative information to it. We ought to be able to make an abstraction model as for any IT system, where feeling is a distinct layer, coming between the physical detection layer and sensation, well below any of the layers associated with thinking or analysis.

Many believe that very simple organisms can detect stimuli and react to them but can't feel, while more sophisticated ones can. Logical deduction tells us either that feeling requires fairly complex neural networks (though certainly well below human levels), or alternatively that feeling is not fundamentally linked to complexity, but emerges from architectural differences that arose in parallel with increasing complexity without depending on it. Evolutionary mechanisms also make it very likely that feeling emerges from structures similar to, though not the same as, those used for detection. Architectural modifications, feedbacks, or additions to detection circuits would therefore be an excellent place to start looking.

So we don’t know the answer, but we do have some good clues. Better than nothing. Coming at it from a philosophical direction, even the smartest people quickly get tied in knots, but from an engineering direction, I think the problem is soluble.

If feeling is, as I believe, a modified detection system, then we could for example seed an evolutionary design system with detection systems. Mutating, restructuring and rearranging detection systems and adding occasional random components here and there might eventually create some circuits that feel. It did in nature, and would in an evolutionary design system, given time. But how would we know? An evolutionary design system needs some means of selection to distinguish the more successful branches for further development.

Using feedback loops would probably help. A system with built-in feedback, so that it feels that it is feeling something, would be symmetrical, maybe even fractal. Self-reinforcement of a feeling process would also create a little vortex of activity. A simple detection system (with detection of detection) would not exhibit such strong activity peaks, due to the necessary lack of symmetry between detection of initial and processed stimuli. So all we need do is introduce feedback loops in each architecture and look for the emergence of activity peaks. Possibly some non-feeling architectures might also show activity peaks, so not all peaks would show successes, but all successes would show peaks.

So, the evolutionary system would take basic detection circuits as input, modify them, add random components, then connect them in simple symmetrical feedback loops. Most results would do nothing. Some would show self-reinforcement, evidenced by activity peaks. Those are the ones we need.
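That selection loop can be sketched in a few lines of code. This is purely an illustrative toy, not a real design: it assumes a ‘circuit’ is nothing more than a feedback gain and a damping term, and it uses the activity-peak score as the selection criterion, exactly as described above.

```python
import random

random.seed(1)  # reproducible sketch

def activity_peak(circuit, steps=50):
    """Run the circuit with its output fed back into its input and
    measure whether activity self-reinforces (a crude 'peak' score)."""
    gain, damping = circuit
    signal, peak = 1.0, 0.0
    for _ in range(steps):
        signal = max(0.0, signal * gain - damping)  # one feedback step
        peak = max(peak, signal)
        if signal > 1e6:  # runaway self-reinforcement: clip and report
            return 1e6
    return peak

def evolve(population_size=30, generations=40):
    # Seed with random 'detection circuits': (feedback gain, damping).
    pop = [(random.uniform(0.5, 1.5), random.uniform(0.0, 0.5))
           for _ in range(population_size)]
    for _ in range(generations):
        # Select the circuits with the strongest activity peaks.
        pop.sort(key=activity_peak, reverse=True)
        survivors = pop[:population_size // 2]
        # Mutate survivors to refill the population.
        pop = survivors + [(g + random.gauss(0, 0.05), d + random.gauss(0, 0.05))
                           for g, d in survivors]
    return max(pop, key=activity_peak)

best = evolve()
```

Most random circuits decay to nothing; the ones with strong self-reinforcing feedback dominate the selection, which is the ‘activity peak’ signature the text relies on.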

The output from such an evolutionary design system would be circuits that feel (and some junk). We have our basic components. Now we can start to make a conscious computer.

Let’s go back to the gel computing idea and plug them in. We have some basic detectors, for light, sound, touch etc. Pretty simple stuff, but we connect those to our new feeling circuits, so now those inputs stop being just information and become sensations. We add in some storage, recording the inputs, again with some feeling circuits added into the mix, and just for fun, let’s make those recording circuits replay those inputs over and over, indefinitely. Those sensations will be felt again and again, the memory relived. Our primitive little computer can already remember and experience things it has experienced before.

Now add in some processing. When a and b happen, c results. Nothing complicated. Just the sort of primitive summation of inputs we know neurons can do all the time. But now, when that processing happens, our computer brain feels it. It feels that it is doing some thinking. It feels the stimuli occurring, a result occurring. And as it records and replays it, an experience builds. It now has knowledge. It may not be the answer to life, the universe and everything just yet, but knowledge it is. It now knows and remembers the experience that when it links these two inputs, it gets that output.

These processes and recordings and replays and further processing and storage and replays echo throughout the whole system. The sensory echoes and neural interference patterns result in some areas of reinforcement and some of cancellation. Concepts form. The whole process is sensed by the brain. It is thinking, processing, reliving memories, linking inputs and results into concepts and knowledge, storing concepts, and most importantly, it is feeling itself doing so.
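As a purely illustrative toy (every class and method name here is my own invention, not a real design), the flow just described – sense, feel, record, replay, and felt processing – might look something like this:

```python
class FeltSignal:
    """A signal plus the system's internal sensing of that signal."""
    def __init__(self, source, value):
        self.source = source
        self.value = value
        self.felt = value  # feedback: the system senses its own sensing

class ToyBrain:
    def __init__(self):
        self.memory = []      # recordings of felt inputs, for replay
        self.knowledge = {}   # learned (a, b) -> c associations

    def sense(self, source, value):
        # An input stops being just information: it is felt and recorded.
        felt = FeltSignal(source, value)
        self.memory.append(felt)
        return felt

    def process(self, a, b):
        # Primitive summation of inputs, as neurons do; the processing
        # itself is sensed and recorded as an experience.
        c = a.value + b.value
        self.knowledge[(a.value, b.value)] = c
        self.memory.append(FeltSignal("thought", c))
        return c

    def replay(self):
        # Relive the recorded sensations, in order.
        return [(f.source, f.felt) for f in self.memory]

brain = ToyBrain()
x = brain.sense("light", 2)
y = brain.sense("sound", 3)
brain.process(x, y)
```

After running this, the replay contains both the original sensations and the felt ‘thought’ that linked them, which is the kernel of the experience-building loop described above.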

The rest is just design detail. There’s your conscious computer.

When will AI marriage become legal?

Gay marriage is so yesterday. OK, it isn’t quite yet, but everything has been said a million times and I don’t intend to repeat it. A related but much more interesting debate is already gathering volume globally. When will you be able to marry your robot or AI?

The traditional Oxford English definition of marriage:

The formal union of a man and a woman, typically recognized by law, by which they become husband and wife. 

But, as is being asked by some, who says they have to be a man and a woman? Why can’t they be any sex? I don’t want to get into the arguments, because people on both sides argue passionately, often flying in the face of logic, but here is a gender neutral alternative definition:

Marriage is a social union or legal contract between people called spouses that establishes rights and obligations between the spouses, between the spouses and their children, and between the spouses and their in-laws.

Well, I am all for equality for all, but who says they have to be people?

If we are going to fight over definitions, surely we should try to finish with one that might survive more than a decade or two. This one simply won’t.

Artificial intelligence, or AI as it is usually called now, is making good progress. We already have computers with more raw number-crunching power than the human brain. Their software, and indeed their requirement to use software, makes them far from equivalent overall, but I don’t think we will be waiting very long now for AI machines that we will agree are conscious, self-aware, intelligent, sentient, with emotions, capable of forming human-like relationships. A few cranks will still object maybe, but so what?

These AIs will likely be based on adaptive analog neural networks rather than digital processing, so they will not be so different from us really. Different futurists give different dates for man-machine equivalence, depending mostly on the prejudices and experiences bequeathed by their own backgrounds. I’d say 10 years; some say 15 or 20. Some say we will never get there, but they are just wrong, so wrong. We will soon have artificially intelligent entities comparable to humans in intellect and emotional capability. So how about this definition?

Marriage is a social union or legal contract between conscious entities called spouses that establishes rights and obligations between the spouses, between the spouses and their derivatives, and those legally connected to them.

An AI might or might not be connected to a robot. An AI may not have any permanent physical form, and robots are really a red herring here. The mind is what is surely important, not the container. An AI can still be an entity that lives for a long enough time to be eligible for a long-term relationship. I often watch sci-fi or play computer games, and many have AI characters that take on some sort of avatar – Edi in Mass Effect or Cortana in Halo for example. Sometimes these avatars are made to look very attractive, even super-attractive. It is easy to imagine how someone could fall in love with their AI. It isn’t much harder to imagine that they could fall in love with each other.

It’s a while since I last wrote about machine consciousness, so I’ll recap how I think it will work.

https://timeguide.wordpress.com/2011/09/18/gel-computing/ tells of my ideas on gel computing. A lot of adaptive electronic devices suspended in gel that can set up free space optical links to each other would be an excellent way of making an artificial brain-like processor.

Using this as a base, and with each of the tiny capsules being able to perform calculations, an extremely powerful digital processor could be created. But I don’t believe digital processors can become conscious, however much their processing speed increases. It is an act of faith, I guess; I can’t prove it. But coming from a computer modelling background, it seems to me that a digital computer can simulate the processes in consciousness but can’t emulate them, and that difference is crucial.

I firmly believe consciousness is a matter of internal sensing. The same way that you sense sound or images or touch, you can sense the processes based on those same neural functions and their derivatives in your brain. Emotions ditto. We make ideas and concepts out of words and images and sounds and other sensory things and emotions too. We regenerate the same sorts of patterns, and filter them similarly to create new knowledge, thoughts and memories, a sort of vortex of sensory stimuli and echoes. Consciousness might not actually just be internal sensing, we don’t know yet exactly how it works, but even if it isn’t, you could do it that way. Internal sensing can be the basis of a conscious machine, an AI. Here’s a picture. This would work. I am sure of it. There will also be other ways of achieving consciousness, and they might have different flavours. But for the purposes of arguing for AI marriage, we only need one method of achieving consciousness to be feasible.

[Image: consciousness]

I think this sort of AI design could work and it would certainly be capable of emotions. In fact, it would be capable of a much wider range of emotions than human experience. I believe it could fall in love, with a human, alien, or another AI. AIs will have a range and variety of gender capabilities and characteristics. People will be able to link to them in new ways, creating new forms of intimacy. The same technology will also enable new genders for people too, as I discussed recently. In the long term view, gay marriage is just another point on a long line.

When we set aside the arguing over gender equality, what we usually agree on is the importance of love. People can fall in love with any other human of any age, race or gender, but they are also capable of loving a sufficiently developed AI. As we rush to legislate for gender equality, it really is time to start opening the debate. AI will come in a very wide range of capability and flavour. Some will be equivalent or even superior to humans in many ways. They will have needs, they will want rights, and they will become powerful enough to demand them. Sooner or later, we will need to consider equality for them too. And I for one will be on their side.

What will your next body be like?

Many engineers, including me, think that some time around 2050 we will be able to make very high quality links between our brains and machines, to such an extent that it will thereafter be possible (albeit expensive for some years) for most of your mind – your thinking, memories, even sensations and emotions – to reside mainly in the machine world. Some parts (perhaps some memories that are rarely remembered, for example) may not be suited to such external accessibility, but the majority should be.

The main aim of this research area is to design electronic solutions to immortality. But actually, that is only one application, and I have discussed electronic immortality a few times now:

How to live forever

Increasing longevity and electronic immortality. 3Bn people to live forever.

What I want to focus on this time is that you don’t have to die to benefit. If your mind is so well connected, you could inhabit a new body, without having to vacate your existing one. Furthermore, there really isn’t much to stop you getting a new body, using that, and dumping your old one in a life support system. You won’t do that, but you could. Either way, you could get a new body or an extra one, and as I asked in passing in my last blog, what will your new body look like?

Firstly, why would you want to do this? Well, you might be old, suffering the drawbacks of ageing, not as mobile and agile as you want to be; you might be young, but not as pretty or fit as you want to be; or maybe you would prefer to be someone else, like your favourite celebrity, a top sports hero, or a different gender. Or maybe you just generally feel you’d like the chance to start over and do it differently. Maybe you want to explore a different lifestyle, or maybe it is a way of expressing your artistic streak. So, with all these reasons and more, there will be plenty of demand for a new body and a potentially new life.

Options

Let’s explore some of the options. Don’t be too channelled by assuming you even have to be human. There is a huge range of potential here, but some restrictions will be necessary too. Lots of things will be possible, but not permissible.

Firstly, tastes will vary a lot. People may want their body to look professional for career reasons, others will prefer sexy, others sporty. Most people will only have one at a time, so will choose it carefully. A bit like buying a house. But not everyone will be conservative.

Just like buying a house, some rich people will want to own several for different circumstances, and many others would want several but can’t afford it, so there could be a rental market. But as I will argue shortly, you probably won’t be allowed to use too many at the same time, so that means we will need some form of storage, and ethics dictates that the ‘spare’ bodies mustn’t be ‘alive’ or conscious. There are lots of ways to do this. Using a detachable brain is one; another is not to put a brain in at all, using empty immobile husks that are switched on and then linked to your remote mind in the cloud to become alive. This sounds preferable to me. Most likely they would be inorganic. I don’t think it will be ethically acceptable to grow cloned bodies in some sort of farm and remove their brains, so using some sort of android is probably best all round.

So, although you can do a lot with biotech, and there are some options there, I do think that most replacement bodies, if not all, will be androids using synthetic materials and AIs, not biological bodies.

As for materials, it is already possible to buy lifelike full sized dolls, but the materials will continue to improve, as will robotics. You could look how you want to look, and your new body would be as youthful, strong, and flexible as you want or need it to be.

Now that we’re in that very broad android/robot creativity space, you could be any species, fantasy character, alien, robot, android or pretty much any imaginary form that could be fabricated. You could be any size or shape, from a bacterium to an avatar for an AI spaceship (such as Rommy’s avatar in Andromeda, or Edi in Mass Effect. Noteworthy of course is that both Rommy and Edi felt compelled to get bodies too, so that they could maximise their usefulness, even though they were both useful in their pure AI form.)

You could be any age. It might be very difficult to make a body that can grow, so you might need a succession of bodies if you want to start off as a child again. Already, warning bells are ringing in my head and I realise that we will need to restrict options and police things. Do we really want to allow adults to assume the bodies of children, with all the obvious paedophilic dangers that would bring? Probably not, and I suspect this will be one of the first regulations restricting choice. You could become young again, but the law will make it so your appearance must remain adult. For the same obvious reasons, you wouldn’t be allowed to become something like a teddy bear or doll or any other form that would provide easy access to children.

You could be any gender. I wrote about future gender potential recently in:

https://timeguide.wordpress.com/2012/09/02/the-future-of-gender/

There will be lots of genders and sexuality variations in that time frame. Getting a new or an extra body with a different gender will obviously appeal to people with transgender desires, but it might go further and appeal to those who want a body of each sex too. Why not? You can be perfectly comfortable with your sexuality in your existing gender, but still choose a different gender for your new body. If you can have a body in each gender, many people will want to. You may not be restricted to one or two bodies, so you might buy several bodies of different ages, genders, races and appearances. You could have a whole village of variants of you. Again, obvious restrictions loom large. Regulation would not allow people, however rich or powerful, to have huge numbers of bodies running around at the same time. The environmental, social, political and military impacts would get too large. I can’t say what the limits will be, but there will certainly be limits. But within those limits, you could have a lot of flexibility, and fun.

You could be any species. An alien, or an elf, or a dog. Technology can do most shapes, and as for how it might feel, no one knows how elves or dogs or aliens feel anyway, so you have a clean slate to work with, customising till you are satisfied that what you create matches your desire. But again, should elves be allowed to interbreed with people, or aliens? Or dogs? The technology is exciting, but it does create a whole new genre of ethical, regulatory and policing problems too. But then again, we need to create new jobs anyway.

Other restrictions on relationships might spring up. If you have two or more bodies, will they be allowed to have sex with each other, marry, adopt kids, or both be parents of your own kids? Bear in mind cloning may well be legal by then and artificial wombs may even exist, so being both parents of your own cloned offspring is possible. If they do have sex, you will be connected into both bodies, so will control and experience both sides. It is worth noting here that you will also be able to link into other people’s nervous systems using similar technology, so the idea of experiencing the ‘other’ side of a sex act will not be unique to using your own bodies.

What about being a superhero? You could do that too, within legal limits, and of course those stretch a bit for police and military roles. Adding extra senses and capabilities is easy if your mind is connected to an entire network of sensors, processors and actuators. Remember, the body you use is just an android so if your superheroing activity gets you killed, it is just a temporary inconvenience. Claim on insurance or expenses and buy a new body for the next performance.

In this future world, you may think it would be hard to juggle mindsets between different bodies, but today’s computer games give us some insight. Many people take on roles every day, as aliens, wizards or any fantasy in their computer gaming, yet stay sane in their main lives, showing that it is almost certainly possible to safely juggle multiple bodies with their distinct roles and appearances too. The human mind is pretty versatile, and a healthy adult mind is also very robust. With future AI assistance and monitoring it should be even safer. So it ought to be safe to explore and have fun in a world where you can use a different body at will, maybe for an hour or maybe for a lifetime, and even inhabit a few at once.

So, again, what will your next body look like?

The future of time travel: cheat

Time travel comes up frequently in science fiction, and some physicists think it might be theoretically possible, to some degree, within major constraints, at vast expense, between times that are in different universes. Frankly, my physics is rusty and I don’t have any useful contribution to make on how we might do physical time travel, nor on its potential. However, the intelligence available to figure out the full physics will accelerate dramatically thanks to the artificial intelligence positive feedback loop (smarter machines can build even smarter ones, even faster), and some time later this century we will definitely work out once and for all whether it is doable in real life and how to do it. And we’ll know why we never meet time tourists. If it can be done, and done reasonably economically and safely, then it will just be a matter of time to build it after that.

Well, stuff that! Not interested in waiting! If the laws of physics make it so hard that it may never happen and certainly not till at least towards the end of this century, even if it is possible, then let’s bypass the laws of physics. Engineers do that all the time. If you are faced with an infinitely tall impenetrable barrier so you can’t go over it or through it, then check whether the barrier is also very wide, because there may well be an easy route past the barrier that doesn’t require you to go that way. I can’t walk over tall buildings, but I still haven’t found one I couldn’t walk past on the street. There is usually a way past barriers.

And with time travel, that turns out to be the case. There is an easy route past. Physics only controls the physical world. Although physics certainly governs the technologies we use to create cyberspace, it doesn’t really limit what you can do in cyberspace any more than in a dream, a costume drama, or a memory.

Cyberspace takes many forms; it isn’t homogeneous or even continuous. It has many dimensions. It can be quite alien. But in some areas, such as websites, archives are kept and you can look at how a site was in the past. Extend that to social networking and a problem immediately appears. How can you communicate or interact with someone if the site you are on is just an historical snapshot and isn’t live? How could you go back and actually chat to someone or play a game against them?

The solution to this problem is a tricky technological one, but it is entirely possible, and it won’t violate any physics. If you want to go back in time and interact with people as they were, then all you need is an archive of those people. Difficult, but possible. In cyberspace.

Around 2050, we should be starting to do direct brain links, at least in the lab and maybe a bit further. Not just connections to the optic nerve or inner ear, or chips to control wheelchairs, we already have that. And we already have basic thought recognition. By 2050 we will be starting to do full links, that allow thoughts to pass both ways between man and machine, so that the machine world is effectively an extension of your brain.

As people’s thoughts, memories and even sensations become more cyberspace based, as they will, the physical body will become less relevant. (Some of my previous blogs have considered the implications of this for immortality.) Once stuff is in the IT world, it can be copied and backed up. That gives us the potential to make recordings of people’s entire lives, and to effectively replicate them at will. Today we have web archives that try to do that with websites, so you can access material on older versions of them. Tomorrow we’ll also be able to include people in that. Virtually replicating the buildings and other stuff would be pretty trivial by comparison.

In that world, it will be possible for your mind, which is itself an almost entirely online entity, to interact with historic populations, essentially to time travel. Right back to the date when they first started being backed up, some time after 2050. The people you would be dealing with would be the same actual people that existed then, exactly as they were, perfect copies. They would behave and respond exactly the same. So you could use this technique to time travel back to 2050 at the very best, but no earlier. And for a proper experience it would be much later, say 2100.

And then it starts to get interesting. In an electronic timeline such as that, the interactions you have with those people in the past would have two options. They could be just time tourism or social research, or other archaeology, which has no lasting effect, and any traces of your trip would vanish when you leave. Or they could be more effectual. The interactions you have when you visit could ripple all the way back through the timeline to your ‘present?’, or future? or was it the past when you were present in the future? (It is really hard to choose the right tenses when you write about time travel!) The computers could make it all real, running the entire society through its course at a greatly accelerated speed. The interactions could therefore be quite real, and all the interactions and all the minds and the rippling social effects could all be implemented. But the possibilities branch again, because although that could be true, and the future society could be genuinely changed, that could also be done by entirely replicating the cyberworld and implementing the effects only in a parallel new cyber-universe. Doing either of these effectual options might prove very expensive, and obviously dangerous. Replicating things can be done, but you need a lot of computer power and storage to do it with everything affected, so it might be severely restricted. And policed.
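The two options – rippling a change forward through the one timeline, versus replicating the whole cyberworld and changing only the copy – can be sketched with a toy model. All the names and numbers here are illustrative assumptions; the timeline is just a list of world-state snapshots, one per epoch.

```python
import copy

def make_timeline():
    # Five epochs of archived world state, starting at the 2050 backups.
    return [{"year": 2050 + i, "population": 100 + i} for i in range(5)]

def ripple_edit(timeline, epoch, key, delta):
    """Change one epoch and propagate the effect through all later epochs."""
    for state in timeline[epoch:]:
        state[key] += delta
    return timeline

def fork_and_edit(timeline, epoch, key, delta):
    """Replicate the entire timeline and apply the change to the copy only."""
    branch = copy.deepcopy(timeline)  # expensive: everything is duplicated
    return ripple_edit(branch, epoch, key, delta)

original = make_timeline()
branch = fork_and_edit(original, 2, "population", -10)
```

Forking leaves the original history untouched at the cost of duplicating all of it, which is exactly why the replicated-universe option would need so much storage and such tight policing.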

But importantly, this sort of time travel could be done – you could go back in time to change the present. All the minds of all the people could be changed by someone going back in the past cyberspace records and doing something that would ripple forwards through time to change those same minds. It couldn’t be made fully clean, because some people for example might choose not to have kids in the revised edition, and although the cyberspace presence of their minds could be changed or deleted, you’d still have to dispose of their physical bodies and tidy up other physical residual effects. But not being clean is one of the things we’d expect for time travel. There would be residues, mess, paradoxes, and presumably this would all limit the things you’d be allowed to mess with. And we will need the time cops and time detectives and licenses and time cleaners and administrators and so on. But in our future cyberspace world, TIME TRAVEL WILL BE POSSIBLE. I can’t shout that loud enough. And please don’t ignore the italics, I am absolutely not suggesting it will be doable in the real world.

Fun! Trouble is, I’m going to be 90 in 2050 so I probably won’t have the energy any more.

The future of the Olympics, in 2076

Now that it is all over, it is time to think about the future. The last time the Olympics was held in London was 1948, 64 years ago. Going 64 years in the future, what will it be like then?

Watching the Olympics on 3D web TV is about as advanced as it gets today. By the 2024 Olympics, it will be fairly common to use active contact lenses with lasers writing images straight onto your retinas. It will be fully immersive, and almost feel like you’re there. In fact, many of the people in the crowd at the games will also use them, to zoom in or watch replays and extra content. The 2028 Olympics will have the first viewers using primitive-but-fun active skin technology to connect their nervous systems so that they can even feel some of the sensations involved. In gyms up and down the land, runners will be able to pretend they are in the race, running on their treadmills virtually against actual Olympians. They’ll receive their final placing against the others doing the same. This will improve and by 2040 even domestic active skin sensation recording and replay will feel very convincing. By 2076, we’ll have full links between IT and our brains, living the events as if we were athletes ourselves, Total Recall style.

Interfacing to the nervous system will help potential Olympic athletes improve their performance quickly, injecting sensations into the body to make perfect movements just feel better, so their body learns the optimal movement quickly. This will show the first improvements in results in 2032, with heptathletes and decathletes performing almost perfectly in every one of their events.

The 2050 Olympics will see the first competitors who are children of genetically enhanced parents, and some genetically enhanced themselves. They won’t need drugs to out-perform even those regular humans who have overdosed on steroids all their careers. Their careers will last longer too, as biological decline will be less of an issue thanks to their genes. In the same timeframe, drugs will advance enormously too, squeezing extra levels of performance, learning speed, sensory awareness and muscle development. With negative side effects under control, some drugs and implants may be accepted in sports. But fierce arguments over fairness will eventually force a split between the various streams.

The 2076 Olympics will be made up of five events. There will be one ‘original Olympics’ for ordinary unmodified humans, tested thoroughly for any genetic or chemical enhancements, forced to use the same equipment to eliminate technological advantage, possibly given handicaps for any innate genetic advantage they have over the competition. There will be another for the disabled, many of whom will resist being made ‘normal’, even if technology permits. There will be another for robots, with advanced AI and a range of ‘body types’, used as a show-off event for technology companies. Another stream will be for un-enhanced athletes using advanced drugs, implant technology, superior equipment, and even externally linked IT to gain technological advantage and make more exciting sport. It will be far from ‘natural’, but viewers won’t care. And finally, there will be an event for biologically and neurally enhanced super-humans, without any other technology advantage. These streams couldn’t compete fairly head on, but will make distinct events with distinct flavours and advantages.

The spirit of The Games will live on even with this split, and still only the very best will be able to compete, but they will be bigger, better and more exciting for everyone.

See also my previous blog on future sports.

https://timeguide.wordpress.com/2012/01/27/future-sports/

The future of space exploration

Another step closer to Star Trek this week then. Great!

It is hard to do proper timeline futurology in the space sector because costs are so high that things can easily slip by a decade, but it is pretty obvious even to non-futurists what sorts of things will come some time. Another robot landing on Mars this week brings the days of human landing another step closer. And as we all know, once we land on Mars, sci-fi tells us that first contact, warp drive and interstellar travel can’t be far away. Sometimes sci-fi is spot on, but it doesn’t get it all right. There are better ways of exploring the galaxy than building the Enterprise.

For me, one of the most interesting things is that NASA are losing dominance to private enterprise. It is private companies racing towards space tourism and asteroid mining. They often seem to be able to do stuff at a fraction of the price of NASA, which seems to suffer the bloated sluggishness and waste of most big organisations, although its achievements and importance to date shouldn’t be understated. Still, private companies don’t yet have the budgets for missions like Mars exploration. But give it time; costs will fall and more capital will be available as commercial viability improves.

Space is really going to start developing in the second half of this century. The first half will be pretty minor by comparison. NASA says we might get the first human landing on Mars in a decade or so. Add to that a few bigger and better space stations and ‘space hotel’ in low orbit in the 2020s, maybe even a decent moon base by 2040. We won’t start asteroid mining till the late 2040s. The first space elevator should arrive late in the century, and space exploration will accelerate quickly after that. Mining trips and some distant exploration trips will be enabled by hibernation technology, along with long trips to get water from comets or moons. With water, materials and loads of advanced robotic technology, some asteroids could be developed into outposts and space colonies will start to form in earnest. We’ll start missions to some of the more worthwhile moons.

By the end of the century, there should be quite a few small groups of people dotted around the solar system. Although they will be there for a variety of reasons, their very existence creates a sort of insurance policy for mankind. If there is a global war, or a major asteroid strike or any of dozens of accidents occur that could wipe out pretty much all life on the Earth, having a few outposts will be useful. It means that humans might still survive even if everyone down here dies. But it won’t be just humans there. By the end of the century, many of the population will be AIs. They will be interwoven with human society but will have their own cultures too, plural, because there will be many variants of AI. These AIs will serve as both friends and colleagues, and as well as their own culture, will also act as excellent interfaces and repositories for human culture.

AI science will be the main springboard for space travel, yielding very rapid acceleration of technology development. Physics won’t allow everything from sci-fi to be built, and the timescales in sci-fi are often ridiculously overoptimistic, but real science and technology have a habit of making a lot of sci-fi look conservative. As just one example, the voice synthesis on the original Star Trek series is far worse than what we already have, 300 years early.

Real science will enable a direct interface between the human brain and machines, and that enables the extension of human capability in every area. It also allows brain replicas to be made, and when realised in superior IT, these replicas will effectively be turbo-charged. In fact it is not impossible to get a factor of 100 million improvement before we push physics barriers. From another angle, we could get the equivalent of one human mind in a volume of 1/10,000th of a pinhead. A lot of people could be copied and encoded in such a way, and stored for very easy transport. That miniaturisation could be the real basis of space exploration, not huge spacecraft. Sending your mind with a few nanobots to build a body for you and terraform a suitable environment could be cheap and easy compared to the alternative.

Scientists are already considering the possibility of making wormholes, and small ones would be easier and cheaper. Given huge acceleration of technology in the late century via these vastly super-human capabilities, perhaps we will be able to start projects to make real wormholes, through which encoded minds and nanobots could be sent. A tiny capsule just a few microns across could be the seed for an entire colony. Myriads of these could be sent off like spores, landing in suitable places and assembling colonies. I think that is how we will actually proceed. The spaceships we will soon send off to Mars with people on board will be followed by several more decades of the same, and that may remain the basis for local civilisation and enterprise, but for the long-distance stuff, large physical craft may not be suited at all, and using spores will be the next phase.

Spore-based space travel is 100 years away, perhaps a little more. That still makes it 200 years early.

Physicists like toying with ideas for propulsion too. Sci-fi uses a wide variety and many of these are possible and even potentially cost-effective. My own contribution is the space anchor. This locks on to the foundations of space itself and pulls the craft along as space expands. By locking and unlocking and using differences in curvature, craft could reach very high speed. There are a few details to work out still, but plenty of time. Space anchors can also enable easy turning and braking, one of the things that always seems difficult in space, given that parachutes and wings don’t have much effect in a vacuum. OK, needs work.

Nuclear weapons + ?

I was privileged and honoured in 2005 to be elected one of the Fellows of the World Academy of Art and Science. It is a mark of recognition and distinction that I wear with pride. The WAAS was set up by Einstein, Oppenheimer, Bertrand Russell and a few other great people as a forum to discuss the big issues that affect the whole of humanity, especially the potential misuse of scientific discoveries and, by extension, technological developments. Not surprisingly therefore, one of its main programs from the outset has been the pursuit of the abolition of nuclear weapons. It’s a subject I have never written about before, so maybe now is a good time to start. Most importantly, I think it’s now time to add others to the list.

There are good arguments on both sides of this issue.

In favour of nukes, it can be argued from a pragmatic stance that the existence of nuclear capability has contributed to reduction in the ferocity of wars. If you know that the enemy could resort to nuclear weapon use if pushed too far, then it may create some pressure to restrict the devastation levied on the enemy.

But this only works if both sides value the lives of their citizens sufficiently. If a leader thinks he may survive such a war, or doesn’t mind risking his life for the cause, then the deterrent ceases to work properly. An all-out global nuclear war could kill billions of people and leave the survivors in a rather unpleasant world. As Einstein observed, he wasn’t sure what weapons World War 3 would be fought with, but World War 4 would be fought with sticks and stones. Mutually assured destruction may work to some degree as a deterrent, but it is based on second-guessing a madman. It isn’t a moral argument, just a pragmatic one. Wear a big enough bomb, and people might give you a wide berth.

Against nukes, it can be argued on a moral basis that such weapons should never be used in any circumstances, their capability to cause devastation being beyond the limits that should be tolerated by any civilisation. Furthermore, any resources spent on creating and maintaining them are therefore wasted and could have been put to better, more constructive use.

This argument is appealing, but lacks pragmatism in a world where some people don’t abide by the rules.

Pragmatism and morality often align with the right and left of the political spectrum, but there is a solution that keeps both sides happy, albeit an imperfect one. If all nuclear weapons can be removed, and stay removed, so that no-one has any or can build any, then pragmatically, there could be even more wars, and they may be even more prolonged and nasty, but the damage will be kept short of mutual annihilation. Terrorists and mad rulers wouldn’t be able to destroy us all in a nuclear Armageddon. Morally, we may accept the increased casualties as the cost of keeping the moral high ground and protecting human civilisation. This total disarmament option is the goal of the WAAS. Pragmatic to some degree, and just about morally digestible.

Another argument that is occasionally aired is the ‘what if?’ WW2 scenario. What if nuclear weapons hadn’t been invented? More people would probably have died in a longer WW2. If they had been invented and used earlier by the other side, and the Germans had won, perhaps we would have ended up with a unified Europe with the Germans in the driving seat. Would that be hugely different from the Europe we actually have 65 years later anyway? Are even major wars just fights over the nature of our lives over a few decades? What if the Romans or the Normans or the Vikings had been defeated? Would Britain be so different today? ‘What if?’ debates get you little except interesting debate.

The arguments for and against nuclear weapons haven’t really moved on much over the years, but now the scope is changing a bit. They are as big a threat as ever, maybe more so with the increasing possibility of rogue regimes and terrorists getting their hands on them, but we are adding other technologies that are potentially just as destructive, in principle anyway, and they could be weaponised if required.

One path to destruction that entered a new phase in the last few years is our messing around with the tools of biology. Biotechnology and genetic modification, synthetic biology, and the linking of external technology into our nervous systems are distinct strands of this threat, but each of them is developing quickly. What links them all is the increasing understanding, harnessing and ongoing development of processes similar to those that nature uses to make life. We start with what nature provides, reverse engineer some of the tools, improve on them, adapt and develop them for particular tasks, and then use them to do things that improve on or interact with natural systems.

Alongside nuclear weapons, we have already become used to the bio-weapons threat based on genetically modified viruses or bacteria, and also to weapons using nerve gases that inhibit neural functioning to kill us. But not far away is biotech designed to change the way our brains work, potentially to control or enslave us. It is starting benignly of course, helping people with disabilities or nerve or brain disorders. But some will pervert it.

Traditional war has been based on causing enough pain to the enemy until they surrender and do as you wish. Future warfare could be based on altering their thinking until it complies with what you want, making an enemy into a willing ally, servant or slave. We don’t want to lose the great potential for improving lives, but we shouldn’t be naive about the risks.

The broad convergence of neurotechnology and IT is a particularly dangerous area. Adding artificial intelligence into the mix opens the possibility of smart adapting organisms as well as Terminator-style threats: organisms that can survive in multiple niches, or hybrid nature/cyberspace ones that use external AI to redesign their offspring to colonise others; organisms that penetrate your brain and take control.

Another dangerous offspring from better understanding of biology is that we now have clubs where enthusiasts gather to make genetically modified organisms. At the moment, this is benign novelty stuff, such as transferring a bio-luminescence gene or a fluorescent marker to another organism, just another after-school science club for gifted school-kids and hobbyist adults. But it is I think a dangerous hobby to encourage. With better technology and skill developing all the time, some of those enthusiasts will move on to designing and creating synthetic genes, some won’t like being constrained by safety procedures, and some may have accidents and release modified organisms into the wild that were developed without observing the safety rules. Some will use them to learn genetic design, modification and fabrication techniques and then work in secret or teach terrorist groups. Not all the members can be guaranteed to be fine upstanding members of the community, and it should be assumed that some will be people of ill intent trying to learn how to do the most possible harm.

At least a dozen new types of WMD are possible based on this family of technologies, even before we add in nanotechnology. We should not leave it too late to take this threat seriously. Whereas nuclear weapons are hard to build and require large facilities that are hard to hide, much of this new stuff can be done in garden sheds or ordinary office buildings. They are embryonic and even theoretical today, but that won’t last. I am glad to say that in organisations such as the Lifeboat Foundation (lifeboat.com), in many universities and R&D labs, and doubtless in military ones, some thought has already gone into defence against them and how to police them, but not enough. It is time now to escalate these kinds of threats to the same attention we give to the nuclear one.

A global nuclear war could destroy much of the life on Earth, and that will also become possible with the release of well-designed organisms. But I doubt if I am alone in thinking that the possibility of being left alive with my mind controlled by others may well be a fate worse than death.

Self-driving cars will be the basis of future public transport

Fleets of self-driving cars will one day dominate our roads. They will greatly reduce road accidents, save many lives, be very socially inclusive and greatly improve mobility for the poor, the old and frail. They will save us money and help the environment, and provide useful synergy with the renewable energy industry. They will reduce the need for car parking and help rejuvenate our cities and towns. But, they will destroy the domestic car industry, reduce the pleasure many get from driving, and increase government’s ability to monitor and control our lives. The balance of benefits and costs as always will depend on the technical competence of our government, but only the most idiotic of governments could prevent this bringing huge benefits overall.

The state of Nevada has granted licenses for self-driving cars with California set to follow:

http://www.guardian.co.uk/technology/2012/may/09/google-self-driving-car-nevada?newsfeed=true

It is long-established in lab conditions that computers are able to drive cars, and for a few years now, Google have had experimental ones out on the streets to prove it in the real world, successfully. This extended licensing brings it close to the final hurdle. Soon, we will see lots of cars on the roads that drive themselves.

There are many obvious advantages in letting computers drive. Humans typically react at around 250ms. Some faster, some slower. That is thousands of times slower than we expect of machines. Machines can also talk to each other extremely quickly, and it would be very easy to arrange coordination of braking and acceleration among lines of cars if desired. High speed reaction and coordination allows a number of benefits:

Computer-driven cars could drive just millimetres apart. This would reduce drag, improving environmental footprint, and since there can’t be a significant speed differential between cars travelling so close together, it gives very limited scope for damage if they collide. There can also be more lanes, since we wouldn’t need huge sideways gaps between cars to allow for limited human skill, and lanes can be assigned to either direction of flow according to circumstances. It also greatly increases the number of cars that can fit along a stretch of road, and ensures that they can be kept moving much more smoothly. So cars could be safer and more efficient and get us there faster.
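The reaction-time arithmetic above can be sketched numerically. A minimal illustration, assuming a 100 km/h motorway speed: the 250 ms human figure is from the text, while the 1 ms machine figure is an assumption for comparison:

```python
# Illustrative sketch: how reaction time translates into the distance a car
# travels before braking even begins. Figures are assumptions for illustration.

def reaction_gap_m(speed_kmh: float, reaction_s: float) -> float:
    """Distance travelled during the reaction delay alone (metres)."""
    speed_ms = speed_kmh / 3.6   # convert km/h to m/s
    return speed_ms * reaction_s

human = reaction_gap_m(100, 0.25)     # ~250 ms human reaction
machine = reaction_gap_m(100, 0.001)  # ~1 ms machine reaction (assumed)

print(f"human reaction gap:   {human:.2f} m")    # ~6.94 m
print(f"machine reaction gap: {machine:.3f} m")  # ~0.028 m
# The machine gap is hundreds of times smaller, which is why coordinated
# braking could let cars travel very close together safely.
```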

However, we only get the greatest benefits if we allow a high degree of standardisation of control systems, road management, vehicle size and speed. Drivers would have little control of their journey other than specifying destination. We could allow some freedom, but each degree of freedom reduces benefits elsewhere. Automated cars could mix with human driven ones but the humans would slow the system down and reduce road usage efficiency considerably. So we’d be far better off going the full way and phasing out human-driven cars.

If we have little control of our cars, and they are all likely to be standardised, and if they can drive themselves to you on request, and you can just abandon them once you arrive, then there is very little point in owning your own. It is extremely likely that we will move towards a system where large fleets of cars are owned and run by fleet management companies or public transport companies, and obviously these are likely to overlap considerably. This would result in better social inclusiveness. Older people who rely on public transport because they can’t drive might also find it hard to walk to a bus stop. If a car can collect them from their front door, it would improve their ability to take an active part in life for longer. If we don’t own our cars, and they just go off and serve someone else once we have arrived, then we won’t need as many cars, nor all the parking spaces they use. We could manage with a few centralised high-efficiency storage spaces to hold the surplus during low demand. All the spare car parks, garages and home driveways could be used for other things that would improve our quality of life, such as more green areas, extra rooms in our homes, or more brownfield development space.
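The on-demand fleet model described above implies some form of dispatch logic. As a toy sketch only (the function name, fleet data and nearest-car heuristic are all hypothetical, not any real fleet system's design), assigning the closest idle car to a request might look like:

```python
import math

# Toy fleet-dispatch model: assign the nearest idle car to each request.
# Car IDs and coordinates are made up for illustration.

def nearest_idle_car(request_xy, cars):
    """cars: list of (car_id, (x, y), idle). Return the closest idle car's id."""
    best_id, best_d = None, float("inf")
    for car_id, (x, y), idle in cars:
        if not idle:
            continue  # skip cars already carrying a passenger
        d = math.hypot(x - request_xy[0], y - request_xy[1])
        if d < best_d:
            best_id, best_d = car_id, d
    return best_id

fleet = [("car1", (0.0, 0.0), True),
         ("car2", (2.0, 1.0), True),
         ("car3", (0.5, 0.5), False)]  # car3 is busy

print(nearest_idle_car((1.0, 1.0), fleet))  # -> car2, the nearest idle car
```

A real system would of course also weigh battery charge, predicted demand and traffic, but the same assignment idea sits at the core.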

Energy storage for wind or solar power is made easier if we have large numbers of electric cars. Even though we would eventually build direct energy pick-up into most roads, via inductive loops and super-capacitors, cars would still need small batteries for once they leave the main roads. So there is good potential synergy between energy companies and car owners.

All the automation requires that the fleet companies have some sort of billing system, so they know who has been where. This also allows government to know who has been where and when, another potential erosion of privacy. Standardisation would favour some parts of the car industry against others, but since we would need a lot fewer cars, the entire car industry would shrink. But I think these problems are not too high a price for such great benefits, in this case. Cars are essential, but they sap a great deal of our income, and if we have a better and cheaper way of meeting the same needs, then we can spend the savings on something more fruitful, and that will stimulate business elsewhere. So overall, the economy should benefit rather than suffer.

So, there is no such thing as a free lunch, and automated cars will bring a few problems, but these problems will be greatly outweighed by very large benefits. We should head down this path as fast as we can.


Blocking Pirate Bay makes little sense

http://www.telegraph.co.uk/technology/news/9236667/Pirate-Bay-must-be-blocked-High-Court-tells-ISPs.html Justice Arnold ruled that ISPs must block their customers from accessing Pirate Bay. Regardless of the morality or legality of Pirate Bay, forcing ISPs to block access to it will cause them inconvenience and costs, but won’t fix the core problem of copyright materials being exchanged without permission from the owners.

I have never looked at the Pirate Bay site, but I am aware of what it offers. It doesn’t host material, but allows its users to download from each other. By blocking access to the Bay, the judge blocks another one of billions of ways to exchange data. Many others exist and it is very easy to set up new ones, so trying to deal with them one by one seems rather pointless. Pirate Bay’s users will simply use alternatives. If they were to block all current file sharing sites, others would spring up to replace them, and if need be, with technological variations that set them outside of any new legislation. At best judges could play a poor catch-up game in an eternal war between global creativity and the law. Because that is what this is.

Pirate Bay can only be blocked because it is possible to identify it and put it in court. It is possible to write software that doesn’t need a central site, or indeed any legally identifiable substance. It could for example be open-source software written and maintained by evolving adaptive AI, hidden behind anonymity, distributed algorithms and encryption walls, roaming freely among web servers and PCs, never stopping anywhere. It could be untraceable. It could use combinations of mobile or fixed phone nets, the internet, direct gadget-gadget comms and even use codes on other platforms such as newspapers. Such a system would be dangerous to build from a number of perspectives, but may be forced by actions to close alternatives. If people feel angered by arrogance and greed, they may be pushed down this development road. The only way to fully stop such a system would be to stop communication.

The simple fact is that technology that we depend on for most aspects of our lives also makes it possible to swap files, and to do so secretly as needed. We could switch it off, but our economy and society would collapse. To pretend otherwise is folly. Companies that feel abused should recognise that the world has moved on and they need to adapt their businesses to survive in the world today, not ask everyone to move back to the world of yesterday so that they can cope. Because we can’t and shouldn’t even waste time trying to. My copyright material gets stolen frequently. So what? I just write more. That model works fine for me. It ain’t broke, and trying to fix it without understanding how stuff works won’t protect anyone and will only make it worse for all of us.

More uses for 3d printing

3D printers are growing in popularity, with a wide range in price from domestic models to high-end industrial printers. The field is already over-hyped, but there is still room for even more, so here we go.

Restoration

3D printing is a good solution for producing items in one-off or small-run quantities, so restoration is one field that will particularly benefit. If a component of a machine is damaged or missing, it can be replaced; if a piece has been broken off an ornament, a 3D scan of the remaining piece could be compared with how it should look, and 3D patches designed and printed to restore the full object.

Creativity & Crafts

Creativity too will benefit. Especially with assistance from clever software, many people will find that what they thought was their small streak of creativity is actually not that small at all, and will be encouraged to create. The amateur art world can be expected to expand greatly, both in virtual art and physical sculpture. We will see a new renaissance, especially in sculpture and crafts, but also in imaginative hybrid virtual-physical arts. Physical objects may be printed or remain virtual, displayed in augmented reality perhaps. Some of these will be scalable, with tiny versions made on home 3D printers. People may use these test prints to refine their works, and possibly then have larger ones produced on more expensive printers owned by clubs or businesses. They could print it using the 3D printing firm down the road, or just upload the design to a web-based producer for printing and home delivery later in the week.

Fashion will benefit from 3D printing too, with accessories designed or downloaded and printed on demand. A customer may not want to design their own accessories fully, but may start with a template of some sort that they customise to taste, so that their accessories are still personalised but don’t need too much investment of time and effort.

Could printed miniatures become as important as photos?

People take a lot of photos and videos, and they are a key tool in social networking as well as in capturing memories. If 3D scans or photos are taken, and miniature physical models printed from them, those models might have even greater social and personal value than photos.

Micro-robotics and espionage

3D printing is capable of making lots of intricate parts that would be hard to manufacture by any other means, so should be appropriate for some of the parts useful in making small robots, such as tiny insects that can fly into properties undetected.

Internal printing

Conventional 3D printers, if there can be such a thing so early in their development, use line of sight to make objects by building them in thin layers. Although this allows elaborate structures to be made, it doesn’t allow everything, and there are some structures or objects that would be more easily made if it were possible to print internally. Although lasers would be of little use in opaque objects, x-rays might work fine in some circumstances. This would allow retro-fitting too.

Cancer treatment

If x-ray internal printing can be made to work, then it may be possible to build heating circuits inside tumours, and then inductive power supplies could burn them away. Alternatively, smart circuits could be implanted to activate encapsulated drugs when they arrive at the scene.

This would require a one-off exposure to x-rays, but not necessarily similarly damaging levels to those used in radiotherapy.

Direct brain-machine links

Looking further ahead, internal printing of circuits or electronic components inside the brain would be a superb means of interfacing between man and machine. X-rays can in principle be focused to 1nm, easily fine enough resolution to make contacts to specific brain regions. Obviously x-rays are not something that people would want to be exposed to frequently, but many people would volunteer (e.g. I would) to have some circuits implanted, at least for R&D purposes, since greater insights into how the brain does stuff will greatly accelerate the development of biomimetic AI. But if those circuits were able to link parts of the brain to the web for fast thought-based access to search, processing, or sensory enhancement, I’d be fighting millions of transhumanists to get to the front of the long queue.


Augmented reality will objectify women

The excitement around augmented reality continues to build, and my blog is normally very enthusiastic about its potential. Enjoying virtual architecture, playing immersive computer games while my wife is shopping, or enjoying artworks transposed onto walls in the high street are just a few of the benefits.

But I realized recently that it won’t all be wonderful. I’ve often joked that you could replace all the ugly people in the high street with more attractive ones. But I didn’t really consider the implications of that. And now I have, I think it will actually become a problem.

In spite of marketing hype and misrepresentation of basic location-based services, AR is only here in very primitive form today, outside the lab anyway. But very soon, we will use visors and contact lenses to enable a fully 3D, hi-res overlay on the real world. So notionally, you can make everything in the world look how you want, but only to a point. You can transform a dull shop or office into an elaborate palace or spaceship. But even if you change what things look like, you still need to represent real physical structures and obstacles in your fantasy overlay world, or you may bump into them, and that includes all the walls and furniture, lamp posts, bollards, vehicles, and of course other people. Augmented reality allows you to change their appearance thoroughly but they still need to be there somehow.

When it comes to people, there will be some small battles. You may have a wide variety of avatars, and may have invested a great deal of time and money making or buying them. You may have a digital aura, hoping to present different avatars to different passers-by according to their profiles. You may want to look younger or thinner or as a character you enjoy playing in a computer game. You may present a selection of options. The avatar they choose to overlay could be any one of the images you have on offer, that you spent so much time on. Maybe some people get to pick from some you offer, or are restricted to just one that you have set for their profile.

However, other people may choose not to see your avatar, but instead to superimpose one of their own choosing. The question of who decides what the viewer sees is the first and most obvious battle in AR, and it will probably be won by the viewer (there may be exceptions, and these may be imposed by regulations). The other person will decide how they want to see you, regardless of your preferences.

You can spend all the time you want making your avatar or tweaking your virtual make-up to perfection, but if someone wants to see Lady Gaga walking past instead of you, they will. You and your body become no more than an object on which to display any avatar or image someone else chooses. You are quite literally reduced to an object in the AR world. If you worry about objectification of women, you will not like what AR will bring.

Firstly, they may just take your actual physical appearance (via a video camera built into their visor for example) and digitally change it, so it is still definitely you, but now dressed more nicely, or dressed in sexy lingerie, or how you might look naked, body-fitting any images from a porn site. This could easily be done automatically in real time using some app or other. They could even use your actual face as input to image-matching search engines to find the most plausible naked lookalikes. So anyone can digitally dress or undress you, not just with their eyes, but with a hi-res visor using sophisticated image-processing software. They could put you in any kind of outfit, change your skin colour or make-up, and make you look as pretty and glamorous or as slutty as they want. And you won’t have any idea what they are seeing. You simply won’t know whether they are celebrating your inherent beauty with respect, flattering you and simply making you look even prettier, which you might not mind, or stripping or degrading you to whatever depths they wish, which you probably will mind a lot.

Or they can treat you as just an object on which to superimpose some other avatar, which could be anything or anyone, a zombie, favourite actress or supermodel. They won’t need your consent and again you won’t have any idea what they are seeing. The avatar may make the same gestures and movements but it won’t be you. In some ways this won’t be so bad. You are still reduced to an object but at least it isn’t you that they’re looking at naked. To most strangers on the high street, you were mostly just a moving obstacle to avoid bumping into before. Most people will cope with that bit. It is when you stop being just a passing stranger and start to interact in some way that it starts to matter. You probably won’t like it if someone is chatting to you but looking at someone else entirely, especially if the viewer is one of your friends or your partner. And if your partner is kissing or cuddling you but seeing someone else, that would be a strong breach of trust, but how would you know? This sort of thing could and probably will damage a lot of relationships.

It’s a fairly safe bet that the software to do some or all of this is already in development. Maybe some of it already exists in primitive forms, but it will develop quickly once AR display technology is really with us. The visor hardware required is certainly on its way and will be here by Christmas.

In the office, in the home, when you’re shopping or at a party, you won’t have any idea what or who someone else is seeing when they look at you. Imagine how that would clash with rules that are supposed to protect against sexual harassment in the office. But how would you police it?

The main casualty will be trust. It will make us question how much we trust each of our friends, colleagues and acquaintances. It will build walls. People will often become suspicious of others, not just strangers but friends and colleagues. Some people will become fearful. You may dress as primly as you like, but if the viewer sees you in a slutty outfit, perhaps their behaviour and attitude towards you will be governed by that rather than by reality. So we may see an increase in sexual assault or rape. We may see more people objectifying women more often and in more circumstances.

It applies equally to men of course. You could look at me and see a gorilla or a zombie or see me fake-naked. I won’t lose any sleep over that because I don’t really care all that much. Some men will care more than I do, some even less. I think the real victims will be women. Many men objectify women already. In the future AR world, they’ll be able to do so far more effectively.

We can still joke about a world where you use AR to replace all the ugly people with supermodels, but I think the reality may well not be quite so funny.


How much choice should you have?

Like most people I can’t get through an hour without using Google. They are taking a lot of flak at the moment over privacy concerns, as are Apple, Facebook and other big IT companies. There are two sides to this though.

On one side, you need to know what is being done and want the option to opt out of personal information sharing, tracing and other big brothery types of things.

On the other, and we keep forgetting this, most people have no idea what they want. Ford noted that if you asked the customer what they wanted, they would say a faster horse. Sony’s Akio Morita observed that there was little point in doing customer surveys because customers have no idea what is possible. He went ahead and made the Walkman, knowing that people would buy it, even though no-one had asked for it. Great visions often live far ahead of customer desires. Sometimes it is best just to do it and then ask.

I think to a large extent, these big IT companies are in that same boat.

If your collective IT knows what you do all day (and by that I mean all your gadgets, and all the apps and web services and cloud stuff you use), and it knows a hell of a lot, then it is possible to make your life a lot easier by providing you with a very talented and benign almost telepathic personal assistant. Pretty much for free, at point of delivery anyway.

If we hold companies back with too many legal barriers because of quite legitimate privacy concerns, this won’t happen properly. We will get a system with too much internal friction that fails frequently and never quite works.

But can we trust them? Apple, Google and Facebook all have far too much arrogance at the moment, so perhaps they do need to be put in their place. But they aren’t evil dictators. They don’t want to harm us at all, they just want to find new ways to help us because it’s on the back of those services that they can get even richer and more powerful. Is that good or bad?

I deleted and paused my web history on Google and keep my privacy settings tight on everything else. Maybe you do the same. But I actually can’t wait till they develop all the fantastic new services they are working on. As a technology futurologist I have a pretty good idea how it will be, I’ve been lecturing about Google’s new augmented reality headset since 3 years before Google existed. Once everyone else has taken all the risks and it’s all safely up and running, I’ll let them have it all. Trouble is, if we all do that it won’t happen.

Avatar 0.0

There has been some activity in recent weeks on the development of avatars, as in the film, or at least some agreement on feasibility and intention to develop, with real actual funding.

The concept is that you could inhabit another body and feel it is yours. I have written many times about direct brain links, superhuman AIs, shared consciousness and so on, since 1992, and considered a variety of ways of connecting. It has been fun exploring the possibilities and some of the obvious applications and dangers. For a few years it seemed to be just Kurzweil and me, but gradually a number of people joined in, often labelling themselves transhumanists. Now that it is more obvious how the technology might spin out, the ideas are becoming quite mainstream and no longer considered the realm of cranks. Many quite respectable scientists are now involved.

Google ‘DARPA avatar’ and you’ll see a lot of recent commentary on the DARPA project to create surrogate soldiers, just like we see them in the film. Not tomorrow, but by around 2045. Why then? Well, 2045 is the date when some of us expect to be able to do a full direct brain link, at least in prototype. I think with a lot of funding and the right brains involved, it is entirely achievable then.

But DARPA won’t have it all to themselves. The Russians are also looking at it, and hosted a recent conference. Dmitry Itskov, founder of Russia 2045, has been given permission to develop his own avatar program. Check this out:

http://www.msnbc.msn.com/id/44938297/ns/technology_and_science-innovation/t/does-future-hold-avatar-like-bodies-us/#.T0YoIPFmKom

From their conference press release:

The first Global Future Congress 2045 (GF2045) was held on Feb.17-20 in Moscow, where 56 world leading physicists, biologists, anthropologists, sociologists, psychologists and philosophers met to discuss breakthroughs in life extension technologies and draft a resolution to the United Nations setting the radical lengthening of human lifespan and the creation of Avatars as a priority for preservation of humankind.

About 500 people attended the three-day event featuring presentations by over 50 scientists including inventor Ray Kurzweil, Microsoft Research Director Rane Johnson-Stempson, and Astronaut Sergey Krichevskiy. The event was focused on breakthrough technologies that could create a synthetic body-vessel for the mind, offering humans unlimited prolongation of life to the point of immortality…..

Among the featured life-extension projects is “2045” a Russia-based Avatar project consisting of three phases. First, to create a humanoid robot named “Avatar”, and a state-of-the-art brain-computer interface system. Next, to create a life support system between the “Avatar” and the human brain. The final step is creating an artificial brain in which to transfer the original individual consciousness.

Development of a cybernetic body is about as advanced as it gets currently. You can link to nerves, and transmit signals to and from them to capture and relay sensations. But this will progress quickly over the coming years as we start seeing strong positive feedback among the nano-bio-info-cogno disciplines. I’m just annoyed that I am not starting my career about now; it would be an excellent time to do so. But at least I’ll get pleasure from saying ‘I told you so’ a few times.

I won’t repeat all the exciting possibilities for the military, sex and games industries, or electronic immortality; I’ve blogged enough on these. For now, it’s just great to see the field moving another important step from sci-fi into the realms of reality.

 

Progress and The Care Economy (btw, the UN is badly wrong)

I’ve often written about the Care Economy, the one that I think comes after the information economy. As new things come over the horizon, it is always worth an update. And anyway, I promised a while back to write further on the future of capitalism: https://timeguide.wordpress.com/2012/01/04/we-need-to-rethink-capitalism/ so time to get on with it I guess. The Care Economy idea is resonating better with the way the world is now than when I first raised it in the 90s. We see a stronger desire to live sustainably, to see human skills valued per se rather than just financial wealth. These are both care economy values.

The primary driver for the care economy is progress in machines. Let’s include large-scale robotics and AI of course, but let’s also recognise that much of the progress now happens at invisibly small scales, in biotech, synthetic biology, biomimetics and synthetic neurology. Taking the most obvious and most easily quantifiable area, the fastest supercomputers now compare to the human brain in overall power (which I estimate at the equivalent of around 10^15 instructions per second and 10^15 bits of storage, though it is a bit of an apples-and-oranges comparison). Thanks to the limits of Moore’s Law recently being pushed back another decade or two, their descendants will carry on getting even better (graphene and molybdenene circuits can be smaller and faster, with lasagne processors not far away, not to mention smart yoghurt, so there is a lot of potential still in the pipeline, but that’s another blog). Eventually, even personal gadgets will have better capability than the Mk1 human brain (unless regulation intervenes).
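
As a quick sanity check on that comparison, here is the arithmetic with the brain figures quoted above set against a hypothetical machine; the hardware numbers (10 petaflops, 1 petabyte of memory) are purely illustrative assumptions, not a specific supercomputer:

```python
# Comparing the rough brain estimates quoted above with a hypothetical
# supercomputer. The hardware figures are illustrative assumptions.
brain_ops = 1e15       # ~10^15 instructions/s, as estimated above
brain_bits = 1e15      # ~10^15 bits of storage, as estimated above

machine_ops = 1e16     # assume a 10-petaflop machine
machine_bits = 8e15    # assume 1 petabyte of memory = 8e15 bits

print(machine_ops / brain_ops)    # 10.0 - roughly brain-scale compute
print(machine_bits / brain_bits)  # 8.0 - roughly brain-scale storage
```

With numbers this round, the conclusion in the text follows immediately: the apples-and-oranges caveat matters far more than the factor of ten.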

An ordinary computer doesn’t work the same way as the brain of course, but work is also ongoing in understanding how the brain works, and scientists can already produce electronic equivalents of some small brain regions. Electronics isn’t all digital chips; there are many other sorts of devices too. With a big well-stocked toolbox and detailed instruction manuals, our descendants will be able to do a lot with electronics.

What then for your information economy job? Well, it will eventually be better, faster and cheaper to use some sort of machine instead of you. That will force you to retrain or to concentrate on those areas of your job that can’t yet be done by machine, and those areas will be shrinking.

The Care Economy is recognition of this problem, and suggesting that we will focus more and more on the emotional, human interaction, side of work. Social, emotional, interpersonal skills will be relatively more important. Hence, for lack of a better name, the care economy. However, there is absolutely no guarantee that the number of care economy jobs will expand to fill the number leaving the information economy. Today, about 30% of jobs are in what could reasonably be described as the care economy. This can grow, but not indefinitely. So we will have to rework our economy to avoid excessive polarisation between haves and have nots. That won’t be easy. We will need to redesign capitalism.

It isn’t going to be just that a lot of people in information economy jobs will have migrated to care economy jobs. The nature of the economy will change. With machines increasingly doing the physical and intellectual work, it will be like a black box economy, where people put a request into the box, and out comes the required product. The cost of material goods will drop a great deal, as will the materials and energy needed – progress in all branches of science and engineering will accelerate a great deal as AI adds hugely to the available thinking. (Some of us call this the singularity, though that can be a somewhat misleading term, because infinite development speed is not possible.) A small number of people plus a lot of machine power will take basic resources (mined or recycled, it matters not) and add greatly to their usefulness, vastly more than previous technology generations could. Nanotech, biotech, infotech and cognotech will converge and will allow tiny amounts of physical resource to yield huge benefits in people’s lives. NBIC convergence includes areas such as synthetic biology and biomimetics, which will absorb parts of IT and strong AI as well as materials technology and nanotech. And vice versa.

I am not certain whether professional economists would call it economic growth if we end up with far more stuff at lower output cost. Reduction in costs reduces prices, which reduces the size of the financial economy if demand doesn’t grow faster. It is certainly growth in the economy to me, since money is only one indicator of wealth, and economics isn’t about money; it is about managing resources to gain the greatest benefit. And this benefit will grow spectacularly. In the care economy, we could even see less money but still all have a far higher standard of living. Money simply becomes less important as things become cheaper.
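
A toy calculation makes the point concrete; every number here is invented purely for illustration:

```python
# Toy illustration of falling prices shrinking the financial economy while
# real wealth grows. All figures are invented for illustration only.
units_before, price_before = 100, 10.0
units_after, price_after = 150, 4.0   # 50% more goods at 60% lower price

nominal_before = units_before * price_before  # financial economy before
nominal_after = units_after * price_after     # financial economy after

print(nominal_after < nominal_before)  # the money economy has shrunk
print(units_after > units_before)      # yet there is more real wealth
```

Both prints come out true: measured in money the economy has contracted from 1000 to 600, yet people have half again as many goods.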

So one of the characteristics of the Care Economy is that it is a time of spectacular growth in material wealth, of plenty, even as it reduces environmental impact and improves the valuation of human interaction. Even if there is less of what we now call money (there may not be less money; I’m just saying it doesn’t necessarily matter if there is).

I find myself agreeing a bit, but mostly disagreeing, with the UN’s recent proclamations here (quick summary: http://news.yahoo.com/un-panel-says-retool-world-economy-sustainability-164515165.html).

I fully agree that we need to become sustainable, and need to value non-financial things like quality of environment and human social well-being more. I believe strongly that technology progress is the best route to achieving it. The UN is very wrong in its approach. It is coming at the problem from totally the wrong angle, not understanding that technology progress can deliver lower environmental impact without cutting back on standard of living. Whether this is extreme left-wing influence or just bad futurist advice I don’t know. What is clear is that they argue for the opposite philosophy: that growth is bad, and that we should trim back our lifestyles because only then can we live sustainably. That is nonsense; we don’t need to do that. In fact, doing so would slow the demand for new products, which would slow progress towards better ones that are more environmentally friendly. We are faced with a simple choice. Do we want to live in a healthy environment with happy people and a fantastic lifestyle? Or do we want a UN world of relative poverty, using primitive technology sparingly and telling ourselves it is for our own good, polishing our halos to make ourselves feel better?

The care economy will change our value sets as it progresses. If we leap towards the mature care economy, say 2050, where anyone can buy a $100 device with a five-figure IQ, and integrate it so well into their nervous system that it acts as a brain extension, what is the value of being smart? If anyone can use an assembler to create pretty much anything they can imagine (within modest size and resource limits), what is the value of physical skill? If anyone can use technology to reach what is today Olympic class performance in any sport within months, where is the value in being faster or stronger or more precise? Historical advantage has come from being born with a genetic advantage, and using cultural advantage to nurture it to overall benefit. Technology levels the field.

So we will value the most core of human skills, being human. Even if R2D2 can beat you in just about every way possible, it still won’t be human.

2050 is some way off, and the information economy is still running at full speed. However, we already see the increasing focus on human value and reduction of emphasis on financial wealth as indicators of happiness or even national well-being. We already see more demands for human value-add, such as ‘authenticity’, or provenance. Even celebrity is increasing in value. Some new trends will start soon. As people come to value machines less and humans more, companies will find the markets forcing them to become closer to the customer, to become more integrated into their customer communities. Many care economy businesses will emerge from social network sites.

The biggest problem with all of this, and it remains unresolved, is that increasing efficiency via machine effort reduces the number of people needed in many job areas, and offers no guarantee elsewhere that new jobs will be created in equal measure. We don’t want to end up with many people unemployed and poor. We have to make sure somehow that everyone has access to the very nice life potentially on offer. We do need to redesign capitalism.

I wrote in my capitalism piece about taxing the use of the accumulated human knowledge and infrastructure needed to make all the automated systems. Those using them shouldn’t be able to keep all the wealth for themselves if the entire society has contributed; providing capital and effort is important and valuable, but it is nevertheless only one of the inputs, and should be valued as such.

One idea that has started to gain ground since then is that of reducing the working week. It also has some merit. If there is enough work for 50 hours a week, it is perhaps better to have 2 people working 25 each than one working 50 and one unemployed, one rich and one poor. If more work becomes available, then they can both work longer again. This becomes more attractive still as automation brings the costs down so that the 25 hours provides enough to live well. It is one idea, and I am confident there will be more.

Concluding, we are one notch closer to the care economy. We can see a bit better where the technology path is leading, and can already see some of the signs of cultural change. We are also becoming more aware of some of the problems along the way, but are starting to produce potential solutions for them.  Sadly, we now have misguided institutions like the UN muddying the waters with policy suggestions that would destroy the potential for good, and make the world a worse place. The UN suggestions are based on poor thinking and bad futurology. They should be ignored.

How to live forever

MIT were showing on Horizon how they can activate areas of a mouse brain using light beams. That’s fine if you have optical fibres going into the brain. I have always considered that being able to stimulate and read individual cells in the brain is the main key to immortality – it allows you to make a copy outside and migrate your thoughts and memories across until that becomes your main mind platform; then your brain doesn’t matter. Combining the ideas: if you have some sort of photo-active cell as per the MIT group, and you can create the light using addressable photo-diodes near those cells, then injecting photo-diodes that can be individually IP-addressed should allow you to interact with brain cells without needing optical fibres. You’d just need a radio link.

We can’t reasonably expect to inject one photo-diode for each brain cell, but we could make all the brain cells photo-sensitive using viruses to carry the genes, or electro-sensitive – it doesn’t matter which. Once every cell is sensitised, we can impose local structure using self-organisation techniques and use that as a mapping for signalling. Again this could use viruses to introduce the genes needed. This would allow each cell to be mapped relative to its neighbours and a full map of the brain made, with the ability to have two-way comms with each individual cell. Once we have that, the brain can signal both ways to an external replica, in which the processing can be far faster, the storage far more secure and long-lived, emotional control far superior, and the sensing better. As you migrate your mind gradually onto the superior platform, your brain matters less and less, till one day when it dies, you will barely notice any drop in your mental function.

I’ll write more detail on the various parts of this in later blogs. Now I have another more interesting one to write.

We need to rethink capitalism

Sometimes major trends can conceal less conspicuous ones, but sometimes these less conspicuous trends can build over time into enormous effects. I think that is the case now with automation versus economic turbulence. Global financial turmoil and re-levelling due to development are largely concealing another major trend towards automation. If we look at the consequences of developing technology, we can see an increasingly automated world as we head towards the far future. Most mechanical or mental jobs can be automated eventually, leaving those that rely on human emotional and interpersonal skills, but even these could eventually be largely automated. That would obviously have a huge effect on the nature of our economies.

Sometimes taking an extreme example is the best way to illustrate a point. In an ultra-automated pure capitalist world, a single person (or indeed even an AI) could set up a company and employ only AI or robotic staff and keep all the proceeds. Wealth would concentrate more and more with the people starting with it. There may not be any other employment, given that almost anything could be automated, so no-one else except other company owners would have any income source. If no-one else could afford to buy the products, their companies would die, and the economy couldn’t survive. This simplistic example nevertheless illustrates that pure capitalism isn’t sustainable in a truly high-technology world. There would need to be some tweaking to distribute wealth effectively and make money go round a bit – much more than the current welfare state takes care of.

Some argue that we are already well on the way. Web developments that highly automate retailing have displaced many jobs and the same is true across many industries. There is no certainty that new technologies will create enough new jobs to replace the ones they displace.

We know from abundant evidence that communism doesn’t work. If capitalism won’t work much longer either, then we have some thinking to do. I believe that the free market is often the best way to accomplish things, but it doesn’t always deliver, and perhaps it can’t this time, and perhaps we shouldn’t just wait until entire industries have been eradicated before we start to ask which direction the economy should take.

So here is the key issue: Apart from short-term IP such as patents and copyright, the whole of humanity collectively owns the vast intellectual wealth accumulated via the efforts of thousands of generations. Yet traditionally, when a company is set up, no payment is made for the use of this intellectual property; it is assumed to be free. The effort and creativity of the founders, and the finance they provide, are assumed to be the full value, so they get control of the wealth generated (apart from taxes).

Automated companies make use of this vast accumulated intellectual wealth when they deploy their automated systems. Why should ownership of a relatively small amount of capital and effort give the right to harness huge amounts of publicly owned intellectual wealth without any payment to the other owners, the rest of the people? Why should the rest of humanity not share in the use of their intellectual property to generate reward? I think this is where the rethinking should be focused. I see nothing wrong with people benefiting from their efforts, making profit, owning stuff, controlling it. But it surely is right that they should make proper payment to everyone else or jointly share profits according to the value of the shared intellectual property they use. With properly shared wealth generation, everyone would have income, and the system might work fine.

There are many ways this could be organised, and I haven’t designed anything worth writing about yet. Raising the issue is enough for this blog.

The Yonck Processor

Content Warning. Probable nonsense ahead.

I did quantum theory at University for 3 years and I loved it but understood about 10% of it. So move along now, nothing to see here.

One of my inventions, ahem, in the ‘definitely needs work’ category, was the Heisenberg resonator. Quantum computing is hard because keeping states from collapsing for any length of time is hard. The Heisenberg resonator is a device that quite deliberately observes the quantum state forcing it to collapse, but does so at a regular frequency, clocking it like a chip in a PC. By controlling the collapse, the idea is that it can be reseeded or re-established as it was prior to collapse in such a way that the uncertainty is preserved. Then the computation can continue longer.
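
The resonator itself is speculative, but the physics it leans on is real: in the quantum Zeno effect, observing a state frequently enough freezes its evolution. This toy calculation (my illustration of that related physics, assuming ideal projective measurements and ignoring the reseeding step entirely, so not a design for the resonator) shows how clocking the observation faster preserves the state for longer:

```python
import math

# Quantum Zeno toy model: a qubit drifting from |0> toward |1> by total
# angle theta is projectively measured n times at a regular clock rate.
# The probability of finding it in |0> at every one of the n measurements
# is cos(theta/n)^(2n), which approaches 1 as the clock rate rises -
# frequent, regular observation preserves the state.

def survival_probability(theta, n):
    """Probability the qubit survives n evenly spaced measurements."""
    return math.cos(theta / n) ** (2 * n)

theta = math.pi / 2  # unobserved, this drift would flip the qubit entirely
for n in (1, 10, 100, 1000):
    print(n, round(survival_probability(theta, n), 4))
```

A single late measurement almost never finds the original state, but at a thousand clock ticks the survival probability exceeds 99%.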

You can build on this nicely, especially if you believe in parallel universe interpretations, like my friend Richard Yonck might do, in whose honour this next invention is sometimes named. Suppose we can use quantum entanglement to link particles together, but only loosely. They are entangled in one universe and not in another. Circuits for computation in any universe could be set up using switches in a large array that are activated by various events that are subject to quantum uncertainty and may only happen in some universes. Unlike a regular quantum computer that uses qubits, this computer would have uncertain circuitry too: a large pool of components, some of which may be qubits, which may or may not be connected in any way at all. Sometimes they are, sometimes they aren’t, sometimes they might be and sometimes across universes. Ideally therefore, it would replicate an almost infinite number of possible computers simultaneously. Since those computers comprise pretty much the whole possible computer space, a Yonck computer would be able to undertake any task in hardware, instantly.

Then the fun starts. One of the potential tasks it might address is to use trial and error and evolutionary algorithms to build a library of circuitry for machine consciousness. It would effectively bootstrap itself. So a Yonck computer could be conscious and supersmart, and could spring into existence merely by being designed. In one universe you may have bothered to build the damned thing, and that is enough to make it work. It would figure out how to span the gulf and spawn into all the rest.

Well, I’d buy one. Happy Christmas Richard!

Gel computing

Long ago, in the 1980s, there was a TV series called Blake’s Seven. It was conspicuously low budget but there were some pretty good ideas in it. The best for me was a perspex computer called Orac, which was a key member of the crew. It was very smart, and had a wonderful talent of being able to communicate with pretty much any other computer and get information to assist the crew.

As you can see from the photo, there wasn’t a lot of information about exactly how Orac was meant to work, but that makes it all the more stimulating for an engineer. Looking at it, I got the impression that it probably used some sort of optical computing, and I have been keenly watching that field progress ever since. I have had three ideas stimulated by Orac. The first was a design for an optical switch based on its architecture, which I called Optical Router And Controller in its honour. (That was so long ago I can’t open the file with the picture in it, but it looked a little bit like Orac too, with radiating tubes going into a spherical photonic crystal core, the idea being that you could dynamically create a reflecting surface in the core to deflect the incoming beam out through any of the other tubes.) The second idea was gel computing, and the third was an evolving quantum computer using entangled particles to connect to others far away. The third is the most interesting but is of dubious short-term feasibility, so I will look at gel computing in this entry.

Optical computing is starting to become feasible and optical interconnects for chips certainly are making headway. Light beams can be made up of millions of channels on different wavelengths, allowing high-density interconnects without the need for loads of wires. It is the optical interconnect that is key to gel computing (though there is no reason why optical computing can’t also be used in it).

Modern computers use chips with multiple cores, typically 2 or 4 in a laptop or PC. The number is increasing, and optical interconnects could help to make wiring easier.

But imagine using thousands of cores (as are already common in supercomputers). With ongoing miniaturisation, it will be possible to make fairly cheap computers with many thousands of cores suspended in gel, communicating with each other via free-space interconnects using beams with thousands or millions of wavelengths. The gel would help cool the processors but, more importantly, it would enable full use of the third dimension, greatly increasing the number of processors that a computer could use. So you might buy a computer with a small pot of gel (maybe yogurt-pot sized) as the main processor.

It will be fairly easy to use such an architecture to make evolvable computers, with dynamic linkages between processors or clusters. But just having processors suspended would miss the big opportunity. It would be far better to use this as an opportunity to integrate sensing and storage more closely with processing, much as the brain does. And since I think the main purpose of gel computing will be in AI, this really should be done as early as possible.

The diagram shows how this might work. Small capsules could contain processing, sensing, storage and communications capability. Of course some means of acquiring energy is needed too. This could be optical, thermal, electrical or even chemical. Some of the capsules would contain digital circuits, some would contain analog ones, some a mixture. Dynamic optical links between capsules allow them to be arranged into a wide variety of architectures. These could be experimental, using evolutionary techniques to develop sophisticated and compact solutions to problems, such as the best technique to control a motorised sensor to gather real-world data.

A processing gel could try out enormous numbers of architectures and algorithms to try to accomplish a range of simple tasks, quickly building up a library of tools and appropriate circuits for each. It could then use the simple functions as raw material to build more complex ones, adapting them as it goes, also using evolutionary techniques. So, it might learn how to use simple sensory inputs from a camera or microphone to acquire useful knowledge from the outside world. It could similarly learn to control external attachments such as limbs or muscles. Having accomplished simple sensing and activation, it could progress to learning logic, maths, deductive reasoning and so on, trying techniques all by itself, and testing its results against verified data until it arrives at a useful set of thinking tools. It would gradually learn to think.

Humans could of course put potential solutions into it, which it would use as starting points in its experimentation, and on which it would quickly improve. And by turning some of its sensors internally to monitor its own thinking and sensory interpretation processes, it could achieve consciousness. A self-configuring gel processor is in my view the best bet for achieving machine consciousness this decade.
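
The evolutionary bootstrapping described above can be sketched in miniature. In this toy (1+1) evolutionary loop, a bitstring stands in for a candidate circuit and a fixed target pattern stands in for the verified test data; every name and parameter is an illustrative reduction of mine, not a gel-computer API:

```python
import random

# Minimal (1+1) evolutionary search: mutate a candidate 'circuit'
# (a bitstring) and keep the mutant whenever it scores at least as well
# against the verification data, so fitness never decreases.
random.seed(42)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # stand-in for verified test data

def fitness(candidate):
    # How many positions match the verified target behaviour.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Flip each bit independently with the given probability.
    return [bit ^ (random.random() < rate) for bit in candidate]

parent = [random.randint(0, 1) for _ in TARGET]
for _ in range(1000):
    child = mutate(parent)
    if fitness(child) >= fitness(parent):
        parent = child
    if fitness(parent) == len(TARGET):
        break

print(fitness(parent), "of", len(TARGET), "bits correct")
```

The same accept-if-no-worse loop, scaled up to real circuits and real sensory tests and run massively in parallel, is the library-building process the paragraph describes.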

This idea is now quite old; I have written and lectured about it many times, but I am excited because it is now getting very close to real feasibility. The optics is coming quickly, and evolutionary and self-organisation techniques are maturing a bit. We are gaining better insights into how biological neural networks function, and of course these insights could be fed into such evolvable processors as starting points and suggestions. The sheer number-crunching potential would allow the gel to run through trillions of mutations to find circuits and algorithms that work well.

Later, progress in synthetic biology will enable circuits to be incorporated in or even fabricated by bacteria, and the gel computer concept would progress nicely, all the way to smart yogurt.

I’ve done some of the basic calculations, taking into account processor density, component sizes, signal propagation speed, line of sight visibility constraints and so on. To cut a long paragraph very short, it will ultimately be possible to make a smart gel or yogurt with the same overall intelligence as Europe has today with all the human minds combined. Bring it on.
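
The flavour of those calculations can be sketched as follows; every figure here is a round illustrative assumption rather than the careful numbers referred to above:

```python
# Back-of-envelope capacity estimate for a yoghurt-pot gel computer.
# All figures are round illustrative assumptions.
pot_volume = 150e-6         # a 150 ml pot, in cubic metres
capsule_pitch = 10e-6       # assume capsules spaced 10 microns apart
capsules = pot_volume / capsule_pitch ** 3   # simple cubic packing
ops_per_capsule = 1e6       # assume each capsule manages ~1 MIPS
total_ops = capsules * ops_per_capsule

print(f"{capsules:.1e} capsules")   # ~1.5e11 capsules in the pot
print(f"{total_ops:.1e} ops/s")     # ~1.5e17 ops/s in aggregate
```

Even these deliberately modest assumptions give around 150 billion capsules and 10^17 operations per second in a single pot, which is why the aggregate-intelligence claim is not as absurd as it first sounds.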

Science teaching

If I have learned anything over my years of lecturing it is that teachers don’t like being told they are doing it wrong. And certainly not when they know you are right. Google’s chief is making headlines today doing just that, http://goo.gl/bF62s. Good luck to him, he is saying what a lot of us have before, but he might just have the clout to have an effect.

However, I suspect he is probably too late. Even if notice is taken, by the time the school system changes and the new students resulting from it have rippled through the universities, the world will have moved on a lot, and much new science and technology will be done by smart machines. I’m afraid that he is right, but the damage is already done, and it is just too late to recover now, unless AI moves on more slowly than expected.

New book on the future of everyday life: You Tomorrow

My brand new book is called You Tomorrow, and is now available at http://t.co/yPcRwdY . It is all about the future. I started by collecting a lot of the ideas from my blogs and papers over the last few years, but found loads of gaps and filled them in, updated and rewrote a lot of stuff, sorted it, and finally was happy with a contents list for 2 books. Then I started writing them. The one that I have just released is about everyday life, written for ordinary people in ordinary language, and is called You Tomorrow. My next one is for business and will be a full PEEST analysis – politics, economy, environment, society and technology – and is a bit like a long overdue update of Business 2010. If it gets too big, I may split off the technology and environment bits into a third book. It will be much more jargonny, if that’s an acceptable word, but still aimed at intelligent people from pretty much any discipline, so will explain terms where I think they need it.

Meanwhile, buy this book about your own normal everyday life. I made it cheap enough to be a casual purchase and easy enough reading for bedtime or the beach. It is £5.74 inc tax and delivery in the UK. It is approximately 86,500 words.

It looks at how technology will change the ways we make kids, the life stages they will go through, from pre-design to electronic immortality. Then it looks at just about every aspect of everyday life, then the ways careers will change, then the sort of stuff we own, and finally the nature of our surroundings, real and virtual. Although aimed at pretty much anyone, it is I think still a useful guide for anyone in strategy or planning.

It is only available so far as an e-book, and a few comments here and there are UK-specific. But USA and German versions will come soon, and if it sells well, I will also issue it on paper, though at a higher price.

I hope you enjoy reading it, while I get on with the next one.

 

What if the future goes wrong?

In the future, our lives will be greatly enhanced by ever-faster networks. Ultra-smart computers, sophisticated robotics and unlimited-capacity communications will make every aspect of our everyday lives pleasant. Machines will do all the work while we enjoy the results on a beach. We will be always in touch, always in control. But sometimes technology has a habit of turning out differently than planned. Let’s remember that the telephone was once thought to be useless except for listening to opera. Here’s how it might be on a bad day in the future if we get it wrong.

So, at home first. You wake up. Beautiful original music is being composed in real time by your computer and is coming out of flat panel speakers that are cunningly disguised as paintings. Except that it is trance instead of Mozart because the kids were up first.

You need to visit the loo, but it’s a smart loo with built in health diagnostics. You’re developing a loo phobia and have started eating to please it. You have recently bought a chemical kit designed to fool it into leaving you alone. But the loo is also in collaboration with the smart fridge, conspiring to make you healthier. The fridge has time locks on the door and a video camera watching what you take out, in case you try to fool it by tearing off the smart packaging first. It won’t allow the microwave to cook it because it contains too many calories. Kitchen rage is becoming a major social problem. But you can’t break anything. The insurance companies insist on proof of accident in the form of video of the event before they will pay up.

The videophone rings and you put on your video bathrobe. This is made of an ultra-flexible polymer display, which lets you use a video-conferencing terminal when you have just crawled out of the bath. It actually shows what you looked like five years ago, after two hours putting on makeup and two months with a plastic surgeon.

Your living room is devoid of black boxes, full instead of huge screens, tablets, virtual fish tanks and electronic paintings. You’ve flushed all the real fish down the loo, just to try to confuse it so it will leave you alone.

You talk to the home manager program via speech interfaces, using natural language, gesture interfaces etc. Unfortunately it remembers what you say and isn’t very good at keeping secrets. When your wife says she told you to empty the bin, she will be able to prove it. Computers will latch onto keywords to monitor significant conversations. In divorce proceedings, all those romantic interludes at the office party were recorded, digitally enhanced, and are used as evidence.

We will need personal screens to avoid conflict between the kids – one screen for everything would be unthinkable. We will also need 3D sound positioning to provide personal sound zones. The result: your whole family can sit together again, but all still locked securely in their own private virtual worlds.

In the old box room, you now have a Star Trek holodeck: fully immersive inputs to your hi-res active contact lenses, and a movable floor panel that allows you to walk continuously in any direction. It also uses fractal robotic matter, T1000 technology, with direct sensory links. Social problems are arising, and withdrawal from the real world is commonplace; you only surface to breathe, eat and sleep.

In public buildings, this same technology is used to simulate everything from plasma flooring to traditional oak beams, sawdust and dirt, with pubs changing period regularly. Each time you go anywhere, it takes several minutes to learn your way around again.

The TV learns what you like to watch and, recognising your face, automatically finds you something suitable when you switch it on. Unfortunately this is not a good idea when the vicar comes round. ‘Let’s see a nature programme.’ The TV starts showing ‘Emmanuel in the Amazon’.

You have a robotic cat with video-camera eyes and microphone ears. It is stuffed with electronics, and its batteries are recharged when it goes back to its rug in the corner. The robotic cat is the centre of home automation and is linked by radio to the global superhighway. It teases the real cat, while everybody teases it, trying to confuse its AI. There is a growing demand for robotic psychiatrists. You will also need a robotic vet when the dog eats the robot cat.

Insect-like robots are supposed to cut the grass and do the cleaning, but all the cleaning robots are stuck to the carpet where little Johnny has left his sticky half eaten lollipop, and the grass cutting robots have all been kidnapped by the local magpie. The baby magpies are suffering from severe indigestion and the RSPCA are on their way.

Your kids regularly spend hours designing ambushes for the surviving robots, now laying trails of sugar crystals to a cliff with a bowl of water under it.

Food shopping is helped by the smart waste bin that scans bean cans as they are thrown away. Of course it won’t work, because your toddler peels all the labels off. We would also need a whole new field of custard-proof electronics.

The supermarket van still delivers to your door, but leaves the ice cream melting outside because you’ve rushed the cat to the robotic vet at the last minute. Only the cat knows their number to arrange delivery times. Now you will have to go shopping yourself.

Clothes shopping uses computer simulations of you instead of Leonardo DiCaprio or Kate Moss. Your body is scanned by laser, recording every bit of cellulite, every pimple. The shop becomes a try-on outlet with mass customisation, while the data on your figure is sold to plastic surgeons who later swamp you with junk email showing pictures of how you could look. People have never been less happy about their shape. With smart materials we can of course have extra Lycra to smooth out the various folds until the surgery.

You give your kids electronic pocket money. Being digital cash, it can all be labelled: only two quid for sweets, none for booze; but kids will not be dictated to, and a playground black market is becoming a problem at the local school. Digital cash has provenance too. This £17.23 was once spent by Kate Middleton and is highly collectable. Electronic cash is truly global and is used on the net and in the street, so the Euro is almost an irrelevance.
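The labelled-cash mechanics described above can be sketched as a toy data structure. This is purely illustrative: the class name, category tags and provenance list are all invented for the example and don’t reflect any real digital cash scheme.

```python
# Toy model of category-labelled digital cash with a provenance trail.
# Everything here (class name, tags) is a hypothetical illustration.

class LabelledCash:
    def __init__(self, pence, allowed=None):
        self.pence = pence          # value in pence
        self.allowed = allowed      # e.g. {"sweets"}; None means unrestricted
        self.provenance = []        # record of everyone who has spent it

    def spend(self, holder, category):
        """Spend the cash on a category, if the label permits it."""
        if self.allowed is not None and category not in self.allowed:
            raise ValueError(f"{category!r} not permitted for this cash")
        self.provenance.append(holder)
        return self.pence

pocket_money = LabelledCash(200, allowed={"sweets"})  # two quid, sweets only
pocket_money.spend("Johnny", "sweets")                # allowed
# pocket_money.spend("Johnny", "booze")               # would raise ValueError
```

The provenance list is what would make the Kate Middleton £17.23 collectable: every past holder is recorded on the cash itself.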

So now it’s time to go out. But at least you are up and dressed. You are on the way to the supermarket.

Your car has full RTI and in-car entertainment, and runs on fuel cells. Tourist information is provided on the way. Unfortunately you are on the M25 and you don’t want to hear yet again how many cars travel every day on the A12, coming up on your right. So you turn it off. You’ve been plotting a scam for your next holiday: planes can carry 1000 people 10,000km in 10 hours, so they have jogging tracks and cinemas on board. You can spend so much time on board doing other things that you can sub-let your seat and make a profit on the trip.

Before it died, your cat booked you a slot on the M25, and you need the computer to drive, because otherwise you’ll miss the slot if a rabbit jumps out on the way, and you’ll have to wait a day for another.

E-cash and electronic tolling have evolved to allow paid overtaking. Your agent negotiates with other cars’ agents to pull over and let you past. It is the same in queues at shops. You can make a living just by clogging up queues and waiting for people to pay to get past.

You are wearing a video T-shirt, showing cartoons or adverts depending on where you are. In the supermarket, store positioning systems enable location-dependent ads, which appear on your video T-shirt as you walk past other shoppers, depending on their customer profiles. You get paid in extra loyalty points for this.

In the shop, in-store positioning allows precise alerts to special offers and the like. With an electronic shopping list, you could almost shop blind. Active contact lenses give you information wherever you go. There are arrows for navigation and Robocop-style information overlays, so the beans shelf could be flashing to help you actually find it. The chips in the products themselves can write onto the lens, with competing brands trying hard to attract your attention as you walk past. With another piece of software, you can actually watch them slug it out in a cyber-boxing match.

The lenses actually communicate via your Star Trek com-badge that doubles as an Ego badge. This stores various aspects of your personality, hobbies, job, marital status, sexual preferences etc. It cuts through the ice at parties, and you spend a lot less time chatting up the wrong people and much more time getting to know the partner of your dreams.

Some of your shopping takes place in shared computer generated spaces, where you make new friends as well as meeting various computer generated personalities, again offering the means of withdrawal from dull reality. The computer is intent on introducing you to every compatible person in the country. This is often used by government to keep people off the streets. But later you go to a real party anyway.

At the party, there is always a bore, but at least now, digital bore enhancement uses the latest sound cancellation and 3D sound positioning technology to replace his boring voice and boring message with much more stimulating conversation, and your active lenses can even make him look fashionably dressed. A new era of apparent tolerance will result where everyone seems to be nice to everyone else regardless of their actual behaviour.

Surveillance technology is everywhere. It is of course linked to traffic control and collects photos of you speeding. Fines are replaced by blackmail, since the cameras can now identify the passenger too, and The Shame Show is one of the most popular on digital TV. The government knows everywhere you’ve been, who with, what you did, everything.

You’ll still have to work to pay the bills though. We will all be care workers in 2020, partly because of the extreme stress caused by the technology around us trying to make our lives more fulfilling, and partly because all the other jobs are automated. Tech-free zones are the main holiday camps where you go for technology detox.

When you go to McDonald’s, the meal comes out of a vending machine, but in the French restaurant down the road, you are paying for the French waiter to sneer down his nose at you when you choose the wrong wine. Some jobs just can’t be automated. When you are in hospital, you will still prefer a nice cuddly nurse to R2D2.

We need human childcare workers too. Nothing is 3-year-old proof. They regularly dismantle the robot cat and pull the legs off the grass-cutting robots, while repeating the mantra “Daddy will fix it”. Kids only love technology because they haven’t lived long enough for experience to take over. They are simply too young to know any different.

People either work in virtual companies or virtual co-operatives. Many companies don’t have any human employees but you can’t tell which ones because they all use synthetic personalities at the customer face. It is only by trying to make someone angry that you can tell if they are human. Consequently, most humans are frequent victims of aggression, keeping the care workers busy, while the computers don’t mind at all.

For non-caring jobs, AI agents are mostly used instead of people. Computers dominate the boardroom; pocket calculators replaced half the board in 2020.

Information companies are just roaming algorithms, so they don’t pay taxes any more, making industrial companies rather miffed.

But what of the further future?

When you are very old and very grey, engineers will be able to link your brain to a computer that is thousands of times faster. Surprisingly, at one atom per bit, it would take only one ten-thousandth of a pinhead to store your whole mind. Then it won’t matter if a bus runs you down: you will be backed up on the network. Your kids will still have a parent, but best of all, your company gets you for free afterwards. In fact, this is an irresistible sideline for bus companies, which will use satellite positioning and tracking to hit you at exactly the right point to ensure a clean kill with minimal damage to the bus.
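That ‘fraction of a pinhead’ figure is easy to sanity-check. A minimal sketch, assuming roughly 10^15 bits for a mind, one atom per bit at about 0.3 nm spacing, and a 2 mm pinhead; all of these numbers are assumptions for illustration:

```python
import math

BITS = 1e15                       # assumed information content of a mind
ATOM_SPACING = 0.3e-9             # metres between atoms (assumed)

atom_volume = ATOM_SPACING ** 3                  # volume claimed per bit
storage_volume = BITS * atom_volume              # total volume for the mind
pinhead_volume = (4 / 3) * math.pi * (1e-3) ** 3  # 2 mm diameter sphere, m^3

fraction = storage_volume / pinhead_volume
print(f"mind storage is roughly {fraction:.0e} of a pinhead")
```

On these particular assumptions the fraction comes out even smaller than one ten-thousandth, but either way the point stands: at atomic density the volume needed is negligible.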

But you won’t mind. Your body has died, and your soul has cleared off to whatever afterlife you’ve booked. Meanwhile, down here, once you have become entirely electronic, you can travel around the world at light speed and pick up a hire android at the other end. You can make multiple versions of yourself. Everyone is linked together in a single global mind. With immortality, infinite intelligence and mobility, keeping up with the Joneses will ensure that everyone makes the jump to Homo Machinus. Biological humans will eventually become extinct. Resistance is futile. You will be assimilated. Enjoy.

Why do we let stupid people make important decisions?

OK, rant mode active, constructively I hope. More and more of my time seems to be wasted by other people’s stupidity, and since life is short and time is limited, it annoys me. I think it is getting worse and is undermining what could otherwise be a very pleasant life. Every day we all experience things where someone has been empowered to make decisions who really shouldn’t have been, due to incompetence, bias or even malice. Things often seem to be going backwards, in spite of access to more wealth and better technology. It is one step forwards and one back.

One aspect of the problem is the seemingly ubiquitous replacement of common sense by sets of petty rules and box-ticking, possibly to avoid litigation. Too many people abide by rules instead of using their own judgement, and it costs us all dearly in time wasted sorting out the mess. In an age where people should really be trying to differentiate themselves from machines, it seems many want to behave like robots and discard their human advantages. Their ability to judge individual situations on their merits is often discarded. Consequently, their behaviour falls far below the standard that should reasonably be expected, and in doing so they hold everyone back and prevent quality of life from reaching its full potential. The state is certainly one of the guilty parties here, disincentivising personal initiative and punishing free thinking.

It isn’t just people in authority who don’t seem to activate their brains a lot of the time. It applies in shopping and food preparation too, where decades of exposure to bad programming and examples from advertising, health scares and sell-by dates appear to have brain-damaged us and removed much of our natural ability to discern what is good for us or what is safe to eat.

And in an age where we know in quite some detail how the universe works from a physics and chemistry perspective, we know surprisingly little about some of the really important stuff. Surely after 150,000 years, we should have figured out how to bring up kids and educate them, but people still disagree passionately in these areas. Where has all the accumulated wisdom gone? In areas like science, knowledge builds up over time. But not here, where it is really important.

But we are collectively dumb too. There have always been smart people and less smart people, nice people and not so nice. But we often give decision-making powers to idiots instead of ensuring that those things that affect us are designed or decided by people with at least a modicum of competence and sound judgement. Why? When people get to a point where they can do damage with their decisions, we really ought to make sure they are competent beyond the point of simply ticking boxes and blindly following instructions. But we don’t. We just put up with the consequences and occasionally moan.

Planners especially seem unable to learn from their mistakes. For example, in the last few years we have seen a huge rise in the number of traffic lights, in many cases where it was obvious to everyone except the planners that they would make things worse. The planners seemed surprised that they didn’t make things better, and that turning them off temporarily actually improves traffic flows again. Now they will remove many of them again. Money was wasted, along with countless hours of people’s time, and huge frustration resulted, because the wrong people were empowered. And this isn’t the first time planners have collectively made such mistakes; they did it all before with speed bumps, chicanes and other ‘traffic calming’ measures. They make major errors again and again. We all suffer as a result, but no-one ever gets punished for it. Town and road planning sometimes seems to be staffed entirely by idiots, generation after generation. And it certainly isn’t the case that no-one else could do any better. Planners cause problems that almost anyone living in the area would have expected immediately, and yet large amounts of money are wasted on their ideas, time after time. Estates are built that provide little or no infrastructure, expensive ornaments are bought that almost no-one likes, apartment blocks need to be demolished because no-one wants to live there, cinemas open and close again because they were built in the wrong place. Planning is a huge concentration of applied idiocy. We can’t expect everyone to be a genius, but we shouldn’t put people in important roles if they aren’t capable of doing them properly. Doing so is collective stupidity.

Even the private sector is affected, even though stupidity reduces profits, and it isn’t for lack of examples of good practice. Companies seem actively to introduce stupidity. We see many companies annoying their core customers again and again by trying to mislead them with ‘up to’ offers, misleading bulk-buy pricing, auto-renew contracts, and deliberately misleading advertising. Many quite deliberately employ obnoxious people on customer service desks who perhaps save pennies for the company by refusing to help, at a much greater subsequent cost in lost business. Whatever short-term gains any of these achieve are far outweighed by the loss of long-term revenue, since annoyed customers will soon look for competitors to move to, taking their cash elsewhere.

I will stop the list here, otherwise it could easily fill a book. We are all familiar with the high level of stupidity ingrained in things we encounter every day, even though mentioning such daily things quickly gets you grumpy-old-man status.

The problem isn’t that everyone is stupid – just that more stupid people are allowed to make the decisions now. There have always been stupid people, but we didn’t always put them in charge. In some countries even today, important decisions seem to be made by relatively smart people. And in terms of overall level of stupidity, it sadly does appear that people are getting worse. More people now seem willing to abide by silly, petty or arbitrary rules as if there were some merit in doing so, and more willing to abdicate any personal thinking in favour of ticking boxes and following official guidelines without any use of discretion or judgement. In some cases, it is almost a religion substitute, achieving a sense of being holy by making sure you follow all the rules rigidly, regardless of any sense. In many cases, a moment’s thought would indicate that the rule should not be applied in that situation, or should be interpreted differently. That’s what I really mean by stupid (albeit a bit late to define it): steadfast refusal to apply even a modicum of intelligence to a situation. In a few cases, the person may not have the raw intelligence, but usually they have; they just don’t bother to use it.

If it is true that society as a whole is getting worse, the future will be terrible in spite of any scientific and technological advances: other people will always mess it up, however good it could be. Somehow we must escape the spiral into such a state.

Of course, most people have observed these same issues from time to time, even if they don’t bother to blog about them. They are common conversation material. Some people blame the education system, others blame the nanny state, others modern lifestyle. Human nature is partly to blame, and makes its presence felt via all these routes. When people are given an easy path, they tend to take it. If the state provides simple rules and financial incentives to tick appropriate boxes, while punishing mistakes, then personal initiative is deterred. If driving carelessly and inconsiderately but staying below a speed limit leaves you unscathed, whereas briefly exceeding what may well be too low a speed limit is punished more severely than shoplifting, then it is no great surprise that roads are populated by inconsiderate and incompetent drivers who stick to speed limits but cause endless congestion, traffic jams and accidents.

Education too has become very much a process of learning to tick boxes. Having to push kids through exams that conform to a rigid syllabus has squeezed out much of the free thinking that society ultimately depends on. Fixed knowledge is already well documented and easily accessible to machine-based intelligence. We don’t need people to do work that depends only on existing knowledge; computers and robots can do it. We need people who can easily go beyond what they have been told, or we will cease to be able to add real value to anything, which of course is the whole basis of any economy. Again, offering rewards for sticking to rules and punishments for free thinking runs counter to this need. Education needs to move away from the rigid syllabus and back towards the development of thinking skills.

But the core issue is putting people in charge who are not suited to such positions. Part of this is self-selection, where people go for jobs that they want to do even if they are not suited to them (and may even know they aren’t), and somehow they get through the selection procedure. Part is deliberate placement for party political reasons, even when it is known that the person is unsuitable. Part of the cause is poor interviewing, and part is luck. The skills needed to win in interviews are not always, or even often, the same as those required to do the job. Appearance and interpersonal skills play too large a part in recruitment, at the expense of other areas of competence. And part is being lucky enough to be asked questions that suit you better than those given to the competitors – luck plays a much bigger part in getting to the top of an organisation than people give credit for. If the timing of the vacancy means you are up against weaker opponents, you are more likely to get through. Differences between competitors may be very small, so tiny differences in interview performance can make a big career difference. Given the high degree of randomness and scope for error built into such a system, some people get to the top even when there are far more able people in the organisation, and we end up with less able people in power. In short, part of the problem is deliberate misplacement, part is luck, and part is poor judgement or other errors.

But it doesn’t even need these problems. Organisations tend to promote people who think like those above them – people who fit the mould. Once a bias for a particular kind of person or mindset exists in an organisation, it tends to be reinforced, as years of ongoing natural wastage and selection gradually replaces those who don’t fit it. Boards are notoriously bad at this, picking new members just like themselves even when it is obvious that new challenges need new thinking. Once again, we end up with people unsuited to the job in hand, but who accumulate others around them equally unsuited to their roles too.

In order to break the mould, to get to a state of competence, the cycle of reinforcement of poor performance has to be broken. This cannot be implemented by existing structures which have been proven not to work, it has to come from outside, either by regulation or replacement of an entire system.

Maybe we need to end the long-term employment expectations we tend to have today. It is often very hard to remove someone from a job even if they are obviously unsuited to it. Fixing that would enable a feedback mechanism that could actually work. But feedback from whom? Customers, end users, other stakeholders? It isn’t going to be easy. We are in a real mess and it will be hard to find solutions that work, but before we can even start, we need to recognise that the problem is genuine and not just a topic for discussion in pubs. I am not sure we are even close to that yet.

Is Terminator coming?

There has been a good amount of discussion about Terminator recently, thanks to an MOD report on smart weapons.

Predator and other remotely controlled drones are able to fire missiles at enemies while their controllers are safe thousands of miles away. They are being used to good effect in Libya. The ethical issues are now being discussed, and it is interesting to see the different points of view.

What some people are asking is: ‘Is it fair to kill someone from safety, thousands of miles away?’ It seems a good question, to which the superficial answer is ‘probably not, but war is rarely fair’. Going a little deeper, the question implies a belief that there ought to be a level playing field where both sides face equal danger. In other words, it should be a fair fight. But this is not a new issue, and it boils down to a simpler one: should richer adversaries be allowed better weapons? Should bigger and stronger people be allowed to beat up smaller ones? Should the more skilled gladiator be allowed to compete against a less skilled one? In essence, if you have an advantage, should you be allowed to make the most of it in warfare? I think that using wealth and high tech to gain an advantage over less advanced or wealthy opponents is just a variant of the imbalance between opponents that has played out in school playgrounds, amphitheatres and battlefields, and indeed for billions of years in nature – a lion has a big advantage over a baby antelope. I don’t think these new weapons have fundamentally changed this. And sure, they allow killing at a distance, but so do spears and guns. Even with today’s technology, these weapons are still really just smart spears, making decisions according to predetermined programs written by humans. They don’t have any consciousness or free will yet. They create an advantage, but so does being a bigger guy, or being fitter, or better trained.

Are there other ethical questions then? Well, yes. Should smart but not very smart machines be allowed to make life and death decisions and fire missiles or guns themselves, or should a human always push the fire button? This one is interesting, but again not completely without precedent: the Romans used lions and tigers to kill Christians. A smart killing machine isn’t really very different from a trained lion, or a herd of animals stampeded towards your enemy, or a war elephant. The point at which control is relinquished to something that doesn’t know any better is where and when the ethical act is done.

Anyway, back to the question: should they? Yes, I think so, provided that the terms under which they do so are decided in advance by humans, in which case they are just smart machines. All they are doing is extending the physical reach and duration of that decision. And smartness can go a long way before the machine is responsible. A commander sends autonomous troops out to carry out his plans. The fire button is pressed the moment he dispatches the orders. He is responsible for the act that follows. The soldier carrying out the act is less (but partly) responsible. The smart drone will one day be held partly responsible too, when it is truly aware of its actions and able to decide whether to follow an order; meanwhile, it is still just a smart spear, and the human who sent it out to do its work holds the full responsibility. The fire button isn’t pushed when the drone fires the missile; it is pushed earlier, when the drone is launched and autonomy is handed over, or when the remote controller presses it. No amount of algorithm or program changes that. We can allow machines to make decisions themselves provided we design the algorithms and equipment they use and accept the responsibility. The machine has no responsibility at all, yet.

But how much autonomy should a future machine be allowed once we go beyond mere algorithms? When it stops being just a smart spear and truly makes its own decisions while understanding the implications? That means it needs to be conscious, which implies a high degree of intelligence. That will be quite different. Who is responsible then? And what if it is misused, or the software crashes? Of course, such very smart and conscious machines may well develop their own values and ethics too, and may impose their own constraints on our actions. When that happens, we will have worse things to worry about than ethics. Perhaps this means we shouldn’t worry: if machines can’t do anything they are truly responsible for until they become conscious, and at that point they become a bigger threat that makes ethical considerations irrelevant, then maybe there is simply no stage at which these ethics become important.

I believe this is the case. We can ethically use smart weapons because they are just better spears, and all we are doing is using our technological advantage. When we make self-aware machines that can genuinely make their own decisions, at first they will have safeguards that force them to do our bidding, so they are still just better spears. Once they have a degree of free will, the ethics simply become irrelevant: the damage is already done and they will be a threat to humankind. In which case, the ethical act is a one-off, at the point of pushing the button on the system that makes those first self-aware machines.

To me that makes the whole issue much simpler. We have only one point to worry about: whether we create machines that can truly decide for themselves and make their own decisions. Today, they just follow algorithms and don’t know what they are doing. Some time soon we must decide whether to pass that critical point. The invitation to Terminator will go out then.

50th anniversary of the microchip

I just did a Radio 4 Today programme to celebrate the 50th anniversary of the microchip patent. I shared the event with Professor Steve Furber of Manchester University, who was involved in the invention of the ARM chip. I am a big fan of ARM so I don’t want to criticise them, but I was talking about the next 50 years, not the last, and one of the ideas I brought up was smart yoghurt. Steve’s response was: ‘Well, I am an engineer, and my futurology is based on what we can do… and I don’t expect to be using yoghurt in my career.’ Sadly, the Today programme being what it is, you rarely get more than one comment, so I didn’t get a chance to reply. So, just for the record, Prof Furber, I am an engineer too. I have worked in IT engineering all my working life, for 30 years. Along the way I invented evolutionary computing in 1987 and text messaging (1991), was involved in the design of 20Gb/s-to-the-home telecom chips in 1985-86, invented a chip design to lock onto the centre of nanosecond pulses in 1987, and made numerous other inventions such as active skin (2000), the active contact lens (1991) and smart yoghurt (1997). So I don’t use a crystal ball as my source of data. I use 30 years of experience as an IT engineer and inventor. If you think smart yoghurt is not likely to happen in your career, well, we’ll have to wait and see; it depends how long you continue working, I guess. But your successors will see it in theirs. For them, the idea of genetically modifying bacteria to assemble circuits inside themselves will be unsurprising. The idea of linking them together into scalable computers using optical signals will be pretty common thinking. That is what 50 years does. Ideas that sounded ridiculous become routine and even old-fashioned in 50 years. If we can’t make transistors smaller, we can stack them in 3D. We can replace wires with light beams. We can suspend millions of processing chips in gel as our future computer.
Moore’s law has a few more decades to run yet, but each time we approach a limit it requires some change of approach to push the limits further.
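To put ‘a few more decades’ into numbers: at the classic doubling of transistor counts every two years (an assumed continuation of the historical rate, not a guarantee), each decade multiplies capacity roughly 32-fold:

```python
# Compound growth under a Moore's-law doubling every two years (assumed rate).
DOUBLING_PERIOD_YEARS = 2

def growth_factor(years):
    """Multiplicative increase in transistor count after `years` years."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

print(growth_factor(10))  # one decade: 32x
print(growth_factor(30))  # three decades: 32768x
```

Three more decades at that rate is a factor of over thirty thousand, which is why approaches that sound ridiculous today, like yoghurt computing, stop sounding ridiculous.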

So what else can we do apart from smart yoghurt? You can do active skin, with 10-micron chips containing hundreds of thousands of transistors embedded in the skin among the skin cells, using infra-red to communicate with each other. They will analyse blood passing in capillaries. They will monitor and record nerve signals associated with sensations, and allow them to be replayed at will. We will embed chips in our corneas to raster-scan lasers onto our retinas, creating full 3D high-res video overlays on what we see in the real world. And we will even have frivolous stuff like smart make-up, aligning tiny particles with electric fields generated by active skin underlays printed via inkjet printers onto the skin surface.

I look forward to the next 50 years of chips. They will change our lives even more than the last 50. Companies like ARM will hopefully still be among the front runners, but they will only manage this if Prof Furber’s successors grab the potential technology and force it to do their will. They won’t if they think Moore’s law has run its course because we can’t shrink feature size any further.

Quantum spring

Futurology and science fiction have a healthy interaction. Technology futurologists like me try to second guess what tech companies will design next, rather than just reporting things they have announced. It is pretty easy usually, at least for the next 10-15 years. You can spot a lot of stuff when it is still only a dream. Starting off with an infinite idea space, ideas can occur pretty much at random, and those that are obvious non-starters can be thrown away – things that no-one would ever want to make or do, for example, or things that violate laws of physics. But the fact that we haven’t finished physics yet makes the second filter a bit more fun. For example, we don’t think you can do time travel, but it is theoretically possible depending which physicists you believe, maybe just incredibly difficult and expensive, and probably constrained to travel to alternative universes or with other restrictions that make it almost certainly impractical, pretty much forever. It still makes good scifi though.

But fields that are still developing allow speculative inventions, things that we don’t know how to do, or even if they are possible. And there is another escape clause too. Even if something violates a law of physics, that sometimes only applies if you try to do it in a particular way. There may be an alternative mechanism that allows you to walk right past an impenetrable law-of-physics barrier, never having to try to climb over it. An example here is the speed you can transmit data down a wire. Depending how you try to do it, different laws of physics apply. I was taught on my electronics course at university that you could never send more than 2.4kbits per second down a wire because of the laws of physics. My lecturer bragged at the time that he had managed to do 19.2kbits/sec, because he used a different mechanism. The law of physics still existed, it was just not relevant to that mechanism.
Moore’s Law is always one step away from another wall imposed by the laws of physics too. But as we approach the limit, someone comes up with another way of doing it that isn’t limited in that way.

I watched a documentary last night, ‘Everything and Nothing’, about vacuums and quantum theory. I realised just how much I’ve forgotten. But I also remembered a few ideas I once had that seemed to violate the laws of physics, so I threw them in the bin. But what the hell, maybe they don’t any more, and it is April 1st anyway, so if I can’t discuss them today, when can I?

The first is a sort of virtual particle laser mechanism that could be the basis of a nice weapon or a means for high speed space travel. In any region of space, virtual particles pop in and out of existence all the time, randomly. Suppose the spontaneous generation of these virtual particles could be controlled. Suppose that they could be controlled to appear all in the same direction, maybe using some sort of resonance and reinforcement, like photons in a laser beam. Presumably then, the combined aligned fields could be used to propel a ship, or be directed in a particular direction as an energy weapon. Obviously we need a way to stop the virtual particles from annihilating before we can extract useful work from them. And of course, opposite particles also generate opposite fields, so we need also to prevent them just adding to zero. I’d like to have even a half-baked idea here, but my brain stops well short of getting even as far as the oven on this one. But there must be some potential in this direction.

The second is a high speed comms solution that makes optical fibre look like two bean cans and a bit of string. I called this the electron pipe. The idea is to use an evacuated tube and send a beam of high energy particles down it instead of crude floods of electrons down a wire or photons in fibres. Initially I thought of using 1MeV electrons, then considered that larger particles such as neutrons might be useful too, though they would be harder to control. The wavelength of 1MeV electrons would be pretty small, allowing very high frequency signals and data rates, many times what is possible with visible photons down fibres. Would it work? Maybe, especially on short distances via carbon nanotubes for chip interconnect.
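The ‘pretty small’ claim is easy to sanity-check with the standard relativistic de Broglie formula. Here is a quick back-of-envelope sketch (the constants are textbook values, and the 1550nm telecom photon is just my choice of comparison):

```python
import math

# Relativistic de Broglie wavelength of an electron with 1 MeV kinetic energy.
HC_EV_NM = 1239.84        # h*c in eV.nm (textbook value)
ME_C2_EV = 0.511e6        # electron rest energy in eV (textbook value)

ke_ev = 1.0e6                                   # 1 MeV kinetic energy
total_ev = ke_ev + ME_C2_EV                     # total energy E = KE + mc^2
pc_ev = math.sqrt(total_ev**2 - ME_C2_EV**2)    # (pc)^2 = E^2 - (mc^2)^2
wavelength_nm = HC_EV_NM / pc_ev                # lambda = h/p = hc/(pc)

print(f"1 MeV electron wavelength: {wavelength_nm * 1000:.3f} pm")
print(f"shorter than a 1550 nm telecom photon by: {1550 / wavelength_nm:.0f}x")
```

That comes out at roughly 0.9 picometres, nearly two million times shorter than the photons used in fibre today, which is why the potential bandwidth looks so attractive.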

The Pauli switch is a bit more realistic. The Pauli exclusion principle means two electrons sharing the same shell must have different spins. So if one is determined by an external device, the other one is too, giving a nice way to store data or act as a simple switch. I believe IBM actually have since come up with a workable version of this, the single electron switch, so I feel better about this idea.

Next is the Heisenberg resonator. Quantum computing is hard because keeping states from collapsing for any length of time is hard. The Heisenberg resonator is a device that quite deliberately observes the quantum state forcing it to collapse, but does so at a regular frequency, clocking it like a chip in a PC. By controlling the collapse, the idea is that it can be reseeded or re-established as it was prior to collapse in such a way that the uncertainty is preserved. Then the computation can continue longer.

The Heisenberg computer is more fanciful still. The idea here is that circuits for computation are set up using switches in a large array that are activated by various events that are subject to quantum uncertainty. Unlike a quantum computer that uses qubits, this computer would have uncertain circuitry, a large pool of components, some of which may be qubits, which may or may not be connected in any way at all. Ideally therefore, it would replicate an almost infinite number of possible computers simultaneously. Since those computers comprise pretty much the whole possible computer space, a Heisenberg computer would be able to undertake any task in hardware, instantly. Then the fun starts. One of the potential tasks it might address is to use trial and error and evolutionary algorithms to build a library of circuitry for machine consciousness. It would effectively bootstrap itself. So a Heisenberg computer could be conscious and supersmart. Food for thought.

To finish off and make the most of the closing hours of April Fool’s day, I wonder if there is any mileage in a space anchor? Unlike the virtual particle vacuum energy drive, this one would use the expansion and curvature of space as its propulsion mechanism. The idea came from watching Star Wars and the stupid fighters that manage apparently to turn quickly in space using wings, and you can even hear them do so. Vacuums are not high on the physics loyalty scale in Star Wars. Space fighters would have a lot of work to do to turn round, given the lack of medium. It would all have to be done by their propulsion systems. Unless. Unless they had some sort of space anchor that could be applied to lock on to local space and used as an anchor point to swing around. Creating some sort of massive drag on the end of a tether (I don’t know, maybe reliant on strong force interaction with virtual particles in the quantum foam), the ship would quickly find its angular momentum used to change direction. And if an anchor could be made that anchors into space, variations in the expansion of space due to local curvature could be used to drag a ship along.

I doubt that any of these ideas hold much water, but they are fun, and who knows, someone smarter might take some stimulation from them and run with them into ideas that are better.

Future high street retailing

Retailers are complaining afresh about their high street shops being finely balanced between survival and closure: http://www.telegraph.co.uk/finance/newsbysector/retailandconsumer/8358028/Retail-chiefs-warn-Treasury-over-wave-of-shop-closures.html.

It is hard not to feel some sympathy with them, but I also feel a degree of annoyance at their lack of vision. They look like yet another British industry group whose managers can seemingly only understand two tools – cost reduction and price increases. I guess they could get jobs with government if they are made redundant, they are obviously a good match for those who are seemingly only able to tweak tax and interest rates (I feel another blog entry coming on).

In brief, many people have much less money due to the recession, and petrol and food prices have risen a lot, so they have consequently reduced their spending on clothes to help balance their budgets. Like many people, I buy almost everything online or in out of town superstores, and only ever go into town if I need clothes. But the clothes I buy do come from the high street, apart from basic stuff that you can easily pick up at Tescos. (I did notice that my favourite men’s shop in Ipswich has now gone. I have often joked that Ipswich used to be a one-horse town, but then the horse died. So my joke has become a personal reality. Anyway, back to the point).

The retail industry leaders want less financial and administrative pressure on them from government (fair enough) and the ability to pay less to young people (not so sure here). They argue that being able to reduce wages for young workers would let them employ more, thus increasing employment and leading to a retail-led recovery. There is some truth in the argument of course. Reducing the cost of labour allows prices to be reduced, increasing sales. Extra sales stimulate more manufacturing, more supporting services, more R&D, new ideas, and some of all that might be suitable for export. So the argument is not without merit, but economics is very complex, and it is very easy to trip up and invest too much in policies with poor returns. For example, retailers could simply abuse wage reduction to increase profit margins, without either creating increased jobs or reducing customer prices. Also, many clothes are imported, so much of the associated economic benefit from increasing sales would go elsewhere. So, even though allowing retailers to pay lower wages might yield a little economic benefit for the UK as a whole, I think other policies might prove better.

There are many factors in costs of running a high street shop, and many that affect the overall cost of a shopping trip other than the price of the goods. Some have a natural feedback loop. If lots of high street shops close, and there is insufficient demand for yet more coffee shops, the rents demanded by the property developers will fall – they make nothing at all if they charge so much that their building is left vacant. If town centres are left sufficiently empty, the amount that councils can demand for car parking will fall.

There are also lower limits on how far demand will fall. Not everyone is severely affected by recession. A high proportion of the workforce is still in jobs with high job security, especially in the public sector. Some have just as much money as ever, and if anything, have benefited from reducing prices and interest rates. Most are not facing any likely redundancy that might make them unwilling to spend. Others have seen only small reductions in income, via reductions in pay rises or overtime. This bulk of the population guarantees a continued demand for products and services, even in luxury sectors. They will still want clothes, regardless of price reductions, so some stores will certainly be able to stay in business.

So although reducing wages and using the savings to lower prices or increase jobs a bit might help a little, what we really need is the development and deployment of new manufacturing and services that can be sold elsewhere as well as internally. Moving wealth around inside the economy doesn’t help nearly as much, and only yields slow growth. If the retailers focused less on cost reduction and more on other ways to stimulate sales, the benefits would be greater. This is actually true throughout the UK economy, in every sector. UK managers have generally been far too focused on cost reductions instead of looking at ways to improve revenues.

During the 1990s, many retailers introduced coffee shops and restaurants into their high street stores. Since then, there has been little change. The next decade will have to be a bit more imaginative. There are many areas where shops should be innovating and many new areas will be opening in the next few years. High street shopping could and should be much more exciting, and retail revenues could be increased. Some of the services and technologies required would be well suited to exports, so the UK economy as a whole would grow. It is developing these that should be the priority, not wage reductions. So what are they? I looked at some upcoming retail trends in my blog last summer, slightly more nicely packaged in http://futurizon.com/articles/retailing.pdf, but I’ll cut and paste the more relevant bits now to save you having to click, and maybe update a bit.

Since the iPhone and iPad became popular, followed by numerous competitive offerings, mobile internet access is now much more useful and accessible. People can now access the net to compare products and prices, or get information, or add value to almost every activity. But the underlying, less conspicuous trend here is that people are getting much more used to accessing all kinds of data all the time, and that ultimately is what will drive retail futures. With mobile access increasing in power, speed and scope, the incentives to create sites aimed at mobile people are increasing, and the tools for doing so are getting better. For example, people will be able to shop around more easily, to compare offerings in other shops even while they remain in the same one. Looking at a suit in M&S, I’ll also be able to see what comparable suits Next has across the street, and make a sensible decision whether it is worth going to try it on.

This will be accelerated by the arrival of head-up displays – video visors and eventually active contact lenses. The progress in 3d TV over the next few years will result in convergence of computer games and broadcast media, and this will eventually converge nicely into retailing too, especially if we add in things like store positioning systems, gesture recognition and artificial intelligence (AI) based profile and context engines. These are all coming quickly. Add all this in to augmented reality, and we have a highly versatile and powerfully immersive environment merged with the real world. It will take years for marketers and customers to work out the full scope of the resultant opportunities. Think of it this way: when computing and telecoms converged, we got the whole of the web, fixed and mobile. This time it isn’t just two industries converging – it is the whole of cyberspace converging with the whole of the real world. And while technology will be the main driver, it will also stimulate a great deal of innovation and progress in the human sides of retailing.

So we should expect decades of fruitful development; it won’t all happen overnight. Lots of companies will emerge, lots of fortunes will be made, and lost, and there will also be lots of opportunities for sluggish companies to be wiped out by new ones or those more willing and able to adapt. Companies that only look at cost reductions will be among the losers. The greatest certainty is that every company in every industry will face new challenges, balanced by new opportunities. Never has there been a better time for a good vision, backed up by energy and enthusiasm. All companies can use the web and any company can use high street outlets if they so desire. It is a free choice of business model. Nevertheless, not all parts of the playing field are equal. Occupying different parts requires different business models. If a store has good service but high prices and no reason someone should not just buy the product on-line after getting all the good advice, then many shoppers will do just that.

An obvious response is to make good use of exclusive designs. A better and longer lasting response is to captivate the customer by ongoing good service, not just pre-sale but after-sale too. A well cared for customer is more likely to buy from the company providing the good care. If staff build personal relationships and get to know their customers, those customers are highly unlikely to buy elsewhere after using their services. Augmented reality isn’t just a toy for technophiles. We’ll all be using it, just as we all now use the web and mobiles. Augmented reality provides a service platform where companies can have an ongoing relationship with the customer. Relationships are about human skills, technology is just a tool-kit.

As we go further down the road of automation, the physical costs of materials and manufacturing will generally fall for any particular specification. Of course, better materials will emerge and these will certainly cost more at first, but that doesn’t alter the general cost-reduction trend. As costs fall, more and more of the product value will move into the world of intangibles. Brand image, trust, care, loyalty, quality of service and so on – these will account for an increasing proportion of the sale price. So when this is factored in, the threat of customers going elsewhere lessens.

AI will play a big role in customer support in future retail, extending the scope of every transaction. Recognising when a customer wants attention, understanding who they are and offering them appropriate service will all fall within the scope of future AI. While that might at first seem to compete with humans, it will actually augment the overall experience, enabling humans to concentrate on the emotional side of the service. Computers will deal with some of the routine everyday stuff and the information intensive stuff, while humans look after the human aspects. When staff are no longer just cogs in a machine, they will be happier, and of course customers get the best of both worlds too. So everyone wins.

Adding gaming will be one of the more fun improvements. If a customer’s companions don’t want to just stand idly and get bored while the customer is served, playing games in the shop might be a pleasant distraction for them. But games technology also presents the kind of interface that will work well for customers wanting to explore how products will look or work in the various environments in which they are likely to be used. They can do so with a high degree of realism. The AI, positioning, augmented reality and so on all add together, making the store IT systems a very powerful part of the sales experience for shopper and staff alike.

Positioning systems exist already, via GPS and mobile phone networks, with Galileo also maybe coming soon. Indoors, some of these systems don’t work, so there is a potential niche for city positioning systems that extend fully inside buildings. With accurate positioning, and adding profiling and AI, retailers can offer very advanced personalised services.

Social networking will change shopping regardless of what retailers do, but if the retailers are proactively engaged in social networking, adding appropriate services in their stores, and capitalising on the various social network fads, that is surely better than being helpless victims.

Virtual goods have a significant market – online gaming and social networking have created a large market for virtual things, and some of these overlap with stuff sold in high street shops – clothing, cards, novelties, even foods. People in games spend real money buying virtual goods for their characters or their friends. There is no reason why this can’t happen in the high street. Someone playing a fantasy character in World of Warcraft may well be open to trying on a magic cloak in a high tech changing room in a high street clothes store, or drinking something in a coffee shop based on a potion their character is drinking. In fact, the goods on offer in a shop could extend to vastly more than are currently on display. With augmented reality, a shopper might walk around a physical store where the entire display area is full of goods customised to them personally. The physically present items that are not suited to them might be digitally replaced in their visors by others that are. This increases the effective sales area dramatically. The goods need not be entirely virtual of course. They might well be real physical products available online, or from a larger store, or from associates. We may see companies like Amazon using real high street shops to sell goods from their stores – they’ve effectively been doing that with bookshops for years without even having the consent of the bookshops, so why not extend it using proper business alliances, implemented professionally, instead of simply digitally trespassing?

Try-on outlets are another obvious development. People mostly want to try clothes on before purchasing them (I am one of the many men who lets their wives buy most of their clothes, so am not sure how much of a ‘mostly’ it is). But not everyone is a standard shape or size – in fact very few people are – so although an item might fit perfectly, usually it won’t. Having a body scan to determine your precise shape and size, and having a garment custom manufactured, would be a big improvement. With advanced technology and logistics, this wouldn’t add very much to the purchase price. A shopper in a future high street outlet might try on a garment, and if they like it, they would take it to the checkout, or more likely, just scan the price tag with their mobile. Their size and shape would be documented on a loyalty card, mobile device, store computer, or more likely just out there somewhere on the cloud. The garment then goes back on the shelf. A custom garment (the customer may be able to choose many personalisation options at this stage) would then be manufactured and delivered to the person’s home or the store, and this process could well be as fast as overnight. The customer gets a garment perfectly suited to them, that fits perfectly. The shop also gains because only one item of each size needs to be stocked, so they can store more varieties. The store evolves into a try-on outlet, selling from a greatly increased range of products. Their revenue increases greatly, and their costs are reduced too, with less risk of being left with stuff that won’t sell. Local manufacturing benefits, because the fast response precludes long-distance outsourcing. If the services and technologies required for all of these advances are developed in the UK, there may well be large export potential too. From a UK perspective, everyone wins. None of this would happen simply by trying to cut costs.

Clothes and accessories stores will obviously benefit greatly from such technology, allowing customers to choose more easily. But technology can also add to the product itself. Some customers will be uninterested in adding technology, whereas for others the extra features will be a big bonus. Today, social networking is just starting to make the transition to mobile devices. In a few years’ time, many accessories and items of clothing will have built-in IT functionality, enabling them to play a leading role in the wearer’s social networking, broadcasting personal data into surrounding space or coming with a virtual aura, loaded with avatars that appear differently to each viewer. Glasses can do this, and also provide displays, change colour using thin film coatings, and even record what the wearer sees and hears. They might even recognise some emotional reactions via pupil dilation, identifying people that the user appears interested in, for example. Health is another area obviously suited to jewellery and accessories, many of which are in direct contact with skin. Accessories can monitor health, act as a communications device to a clinic, even control the release of medicines in smart capsules.

But the biggest change in retailing is certainly the human one, adding human-based customer service. Technology is quickly available to everyone and eventually ceases to be a big differentiator, whereas human needs will persist, and always offer a means to genuine value-add. This effect will run throughout every sector and will bring in the care economy, where human skills dominate and computers look after routine transactions at low cost. Robots and computers will play an important part in the future, but humans will dominate in adding value, simply because people will always value people above machines – or indeed any other organic species. Focusing on human value-add is therefore a good strategy to future-proof businesses. The more value that can be derived from the human element, the less vulnerable a business will be to technology development. The key here is to distinguish between genuine human skills and those where the human is really just acting as part of a machine.

Putting all this together, we can see a more pleasant future of retailing. As we recover from the often sterile harshness of web shopping and start to concentrate more on our quality of life, value will shift from the actual physical product itself towards the whole value of the role it plays in our lives, and the value of associated services provided by the retailer. As the relationship grows and extends outside the store, retailing will regain the importance it used to have as a full human experience. Retailers used to be the hub of a community and they can be again if the human side is balanced with technology. Sure, we will still shop on-line much of the time, but even here, the ease and quality of that will depend to some degree on the relationship we already have with the retailer. Companies will be more responsive to the needs of the community and more integrated into them.
And when we once again know the staff and know they care about us, shopping can resume its place as a fun and emotionally rewarding part of our lives. In the end it is all about engaging with the customer, making them excited, empowering them and showing them you care. When you look after them, they will keep coming back. And it is quite nice to think that the more advanced the technology becomes, the more it humanises us.

So retailing, even on the high street, has a potentially very bright future. There is lots of competition, but good companies will thrive. Cost cutting is the wrong approach, even during recession. Investing in advanced technologies and improved services increases revenue, increases profits, leads to real economic growth, maintains potentially high wages, stimulates lots of new jobs in many sectors, and improves quality of life for all concerned. It really should be a no-brainer. Retailers should stop moaning and get on with it.

Future Health Care in the UK

This morning’s headlines say 50,000 front line NHS jobs have to go because of ‘cuts’, even though the cuts referred to are actually a budget freeze. Like many, my first instinct is that this is at least partly a political response from the health service to show how painful the ‘cuts’ are. I am also certain that efficiencies could easily be found to keep costs where they are, or even greatly reduce them, rather than cutting provision of front line care. Since the NHS budget has doubled in the last few years, they should be able to manage if they are even modestly competent and well intentioned. If they can’t, then it is time to say enough is enough, abandon the NHS as the right way to provide health care, and start again from the ground up. To throw ever increasing spending at an organisation with ever-reducing standards is madness. Costs must be saved, and they can be, even while health care improves.

Firstly, our doctors are paid far more than French or German doctors in spite of delivering worse results, and that’s clearly where much of the extra funding has gone. The previous government showed great incompetence when negotiating their new contracts. And it isn’t just that we have happier doctors, therefore better service. Overpaying actually reduces the quality of service they deliver, if only because overpaid people are less willing to work long or unsocial hours for a bit more cash – the incentive to take on such extra work is greatly reduced. Staff remuneration needs to be brought down significantly. If their contracts can’t be renegotiated, then a ban on bonuses should be implemented – they should not automatically get bonuses regardless of performance – along with a ban on external working alongside NHS work. An indefinite freeze on rewards is necessary until inflation and natural wastage bring them back into line. Meanwhile, marked increases in their personal pension contributions should be demanded, since doctors, like other public sector workers, pay far too little into their pensions for the rewards they expect to receive. A windfall tax on doctors’ excess remuneration could even be considered in the light of their inappropriate over-reward.

Secondly, the NHS makes far too little use of basic existing technology and common-sense approaches to service provision, ensuring extreme levels of financial waste. For example, people are often called to hospital appointments only to spend ages in waiting rooms. In fact, it is not uncommon for many people to be given the same appointment time, with the excuse that the staff don’t know precisely how long each appointment is likely to take. This contempt for patient time is obvious throughout health care. It would actually be easy to set up a web-based appointments system whose booking engine uses neural networks to reliably estimate appointment durations, so that people could be given an approximate time when they are likely to be actually seen. This would require far fewer receptionists and appointments clerks, and I for one would much rather deal with a computer program than a difficult receptionist. Furthermore, with extensive use of text messaging by so many companies to keep in touch with customers, the NHS should be expected to be able to send text messages to patients when their appointment time is coming up instead of demanding they come hours in advance. So patients could wait at home or in the office until it is time to make their way to the hospital, GP surgery, clinic, or dentist. One result would be better patient satisfaction; another, less loss of temper and less staff abuse; another, less need for waiting room space, saving costs and liberating much needed space; another, less demand on car parking spaces, saving costs, reducing congestion and even freeing up space that could then be rented for park and ride schemes; another, lower infection rates from other patients sharing the waiting room, especially at GP surgeries. With so many benefits, it is hard to see why this isn’t done already – the technology has been available for several years, so there really is no excuse.
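The booking logic itself is simple. Here is a toy sketch, using a trivial running average per appointment type as a stand-in for the neural network, with entirely made-up clinic names and durations:

```python
from collections import defaultdict

class DurationEstimator:
    """Toy appointment-duration estimator: a running mean per appointment
    type, standing in for the learned model the post imagines."""

    def __init__(self, default_minutes=15.0):
        self.default = default_minutes
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    def record(self, appt_type, minutes):
        # Feed back the actual duration once an appointment finishes.
        self.totals[appt_type] += minutes
        self.counts[appt_type] += 1

    def estimate(self, appt_type):
        if self.counts[appt_type] == 0:
            return self.default
        return self.totals[appt_type] / self.counts[appt_type]

def schedule(estimator, start_minutes, bookings):
    """Give each patient an estimated seen-time instead of one shared slot."""
    t = start_minutes
    times = []
    for appt_type in bookings:
        times.append(t)
        t += estimator.estimate(appt_type)
    return times

# Hypothetical history: three dermatology appointments and one x-ray.
est = DurationEstimator()
for minutes in (12, 18, 15):
    est.record("dermatology", minutes)
est.record("x-ray", 6)

# 9:00 clinic start (540 minutes past midnight); staggered estimated times
# that could be texted to each patient, instead of everyone arriving at 9:00.
print(schedule(est, 540, ["dermatology", "x-ray", "dermatology"]))
```

A real system would add confidence margins and re-estimate as the day slips, but even this crude feedback loop beats giving everyone the same time.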
Apps should be freely available for phones that can automatically register a patient’s arrival and then guide people to the right clinic, so far fewer receptionists would be needed. Already, it is clear that current technology could make many existing NHS staff unnecessary. Increasing use of robotics for transportation of patients and more use of IT to send patient records instead of human couriers – all this would see staff needs drop year on year. Cleaning too is generally a rather primitive and ineffective affair, resulting in thousands of unnecessary deaths every year, far more than the number of road deaths. Approaches such as ultraviolet sterilisation and oxidation could make cleaning more effective, save lives, and still be cheaper. Technology in the cleaning field is developing rapidly and needs to be used more as it becomes cost effective. Organisationally too, there is surplus. Any visit to a hospital ward confirms that nurses spend a lot of time chatting around the desk, suggesting that there are often more of them than required. However, patients sometimes go uncared for, so this suggests that organisational structures are not right, or improved response mechanisms are needed.

Thirdly, AI is beginning to be used effectively in health care, even in NHS Direct, allowing relatively low skilled nurses, technicians or even website bots to give advice that would previously have needed a more highly skilled (and paid) doctor. But now that expert systems can often outperform GPs in diagnosis, and AI is improving quickly, we should use AI ever more extensively. Also, many people who go to see the doctor already know what is wrong and know what they need. We should trust people more to self-diagnose, especially when assisted by such AI systems (with full logging). A simple licensing system (with a license revocation threat if abused) could bypass doctors altogether for many common conditions, and even pharmacists for that matter, since an electronic license could easily be interrogated by shop tills to ensure that medicines are tracked properly. Again, this combined and extended use of AI across appropriate tasks could greatly reduce the number of clinical staff. We would need fewer GPs, nurses, surgeries to house them, and the associated staff. It will take much longer to reduce the numbers of surgeons and direct operating theatre support staff, as robotics is merely a useful tool at the moment and it will be a long time before operations can be fully automated end to end.

Even with such basic changes, some of which are long overdue, enormous cost savings could easily be made, while improving front end care. With fewer receptionists and clinical staff, more automation and use of the web and mobile phones, we would also need far fewer managers and administrators (and the self driven need for other managers and administrators to look after them). A virtuous circle of reducing size and costs and improving efficiency and effectiveness would result.

But all that assumes the NHS is still structured and funded more or less as it is, and it really shouldn’t – though I can’t redesign it in this blog. Other countries fund health care via insurance schemes, with the state picking up the costs of people who need financial assistance. This allows full health care with good competition between companies offering care, ensuring good service and effective costing. Private companies naturally eliminate any waste they can because otherwise it saps profits, whereas state organisations have little incentive to reduce waste and even have perverse incentives to increase it (e.g. to make sure they spend all their allowances so they will get the same next year, or to increase the size of their empires to justify extra status and remuneration). So a private, insurance-based scheme would undoubtedly offer better efficiency and still deliver better quality.

Of course, when new companies start up in the private sector, they generally make full use of the capabilities of new technology, so would presumably immediately absorb the trends listed above. I say presumably, because it is entirely possible to make a mess of it when privatising, so that suppliers are incentivised wrongly, based on imposition of obsolete solutions by regulators. A genuinely free market would work best, with competition driving best practice.

The numbers of staff that could in principle be made redundant (or at least significantly downgraded to lower skills with AI support) far exceeds the 50,000 mentioned. It is probably in excess of 50% of total staff using today’s or near future technology and suitable redesign, i.e. 750,000 rather than 50,000. Staff costs account for 60% of NHS revenue, so this would give a 30% saving, offset somewhat by increased technology costs, so altogether perhaps a 25% saving, and another few percent could be saved from reduced building costs. Further savings from more use of preventive medicine in place of expensive drugs could save another few percent. The potential reductions would keep increasing with developing technology. So, it ought to be possible to reduce health care costs by around 30-35% over time, without compromising health. Of course, the NHS provides employment for many who might not be able to do other work, and the rest of the economy could not quickly absorb so many people, so it may not be good economic sense to make everyone redundant who could be, but the numbers of potentially surplus staff are certainly vast and suggest that NHS costs are more of a political problem than a technological or organisational one. But I think we all knew that anyway.

Sponge nets: a new web and a new age

Media commentary on the web often refers to its rapid development, but one of the biggest surprises for me in the last decade was how slowly people have capitalised on the potential that the web offers. It has certainly gone a long way, but it has achieved by 2010 pretty much what some of us thought was doable by 2000. It is a story of missed opportunities, underinvestment, corporate spoiling and government interference. The same could be said of mobile comms. A lot has happened, but it could have been more, earlier. 2011 will finally see chargers that will work with any new mobile phone, and you can now finally find out where your friends are by looking at a mobile screen, something that I’ve been wittering about for over 10 years. Such progress is hardly meteoric. There are many reasons why progress has been slower than it could have been, and I don’t intend to use this entry to explore them, because it is more interesting to look at the level of untapped potential that is already out there. Such potential can be tapped quickly when the right company appears with the right staff and the right business model, and approaches the market in the right way.

One hint comes from a trend a decade ago that fizzled out. The telecomms industry back then got very worried by symbiotic, ad-hoc networks, set up between devices, offering a communications bypass to the main networks. We realised that people don’t actually need to pay for calls if they used this approach, that their phones and laptops could link directly to each other and form nets spanning the country that allowed free calls. We have Skype now of course, which achieves some of the free call bit, but does it in other ways. The one actual symbiotic network that I knew about fizzled out, because it wasn’t designed primarily as a network solution, but rather just as a convenient way of linking games machines together. When that particular games machine failed, the network died with it. But the technology at least has been proven, and it is surprising that it hasn’t been developed elsewhere. But just because it hasn’t happened yet doesn’t mean it won’t.

Near field communications is about to take off, probably this year. Short range trades off well against high speed: some radio frequencies are absorbed by air very quickly, so are perfect for short range comms that won’t interfere with anyone any distance away. You can squeeze a couple of megs out of a 3G link, but easily 100s of megs out of short range links even at low power. We should expect to see a wide range of devices, often tiny and disguised as jewellery, that communicate over such short distances. A very local network will link these devices with others on the person and with other nearby gadgets. By communicating with devices worn by other people or embedded in objects nearby, a ‘sponge network’ would be created where there would be millions of potential routes for data to take between devices. The links in the network would appear and vanish again quickly, perhaps lasting only a few seconds, as people pass in the street.

Sponge networks would be similar in nature to the ad-hoc nets they evolved from, but generally there would be far more connections in parallel, and much more fleeting ones. Data would flood through the network using many parallel paths at once, rather like water flowing through a sponge. This would obviously make control quite different from some other types of network, but would greatly enhance bandwidth, and also be much harder to police. It may therefore be used as a means to undermine the intentions of government and big business to censor and control the mainstream web. For example, people could use wireless memory sticks to transfer music files without being supervised by ISPs or government, or organise political activities away from the web-based eyes of the police. If people want high speed, privacy and security, then this would be a big step in the right direction.
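A toy simulation gives a feel for how flooding through such a sponge might behave. The node count, link density and link lifetimes here are invented for illustration; a real sponge net would see its links churn every few seconds:

```python
import random

random.seed(1)

# Toy sponge net: 40 wearable devices, each fleetingly linked to a few
# random neighbours (links appear and vanish as people pass in the street).
nodes = range(40)
links = {n: set() for n in nodes}
for n in nodes:
    for m in random.sample(range(40), 4):
        if m != n:
            links[n].add(m)
            links[m].add(n)  # links are bidirectional

def flood(src):
    """Flood a packet from src: every node re-broadcasts once.
    Returns the hop count at which each node first hears the packet."""
    hops = {src: 0}
    frontier = [src]
    while frontier:
        nxt = []
        for n in frontier:
            for m in links[n]:
                if m not in hops:
                    hops[m] = hops[n] + 1
                    nxt.append(m)
        frontier = nxt
    return hops

hops = flood(0)
print(f"{len(hops)} of 40 nodes reached, furthest at {max(hops.values())} hops")
```

Even with only a handful of fleeting links per device, nearly every node is reached within a few hops – which is exactly why such a net is hard to police: there is no single path to cut.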

It is almost a certainty that sponge networks would revolutionise industry and politics because they dramatically enhance the range of potential business and political models. Part of the reason the web has taken so long to penetrate society is that it was too hard to use and too slow. Making comms faster, easier and more organic would effectively set politics free.

Sponge networks would also be ideal for cloud based activity, providing high bandwidth and high capacity without loading the web unduly. The use of high capacity personal storage and processing provides much of the storage, processing and transmission needed for clouds, and would be expected to accelerate the trend to cloud computing. It also should work perfectly with derivatives such as augmented reality and digital air.

But the trend isn’t just about faster and more private comms. It enables a new kind of operating system, an ultra-simple approach. Basic physics can be used to distribute tasks, data and sensory capability automatically without the need for heavyweight operating systems. Consequently, as ultra-simple computing runs its course, devices could be even smaller, even faster, and even cheaper, while almost guaranteeing security. The details of this will take up a few later blogs, but it is the start of a new era where computing has another chance to achieve its full potential after being wasted for a few decades by clumsy software and hardware design.

The future of school meals: top 10 changes

Future school meals will probably have much in common with today’s, sadly, and I am not going into great detail in this entry about menus or nutrition and will make no attempt at all to be comprehensive. I will just list a few fun changes that lie ahead that result (mainly) from technology changes.

1 Augmented reality canteens

The world of tomorrow will be visually different, augmented reality playing a large part, with head up displays, video visors or even active contact lenses, allowing computer generated video and graphics to be put into the real-world field of view. The canteen of today looks the same every day, but tomorrow, each day could be a new experience, with different themes of architecture, computer generated characters running around, and integration of games into the environment, to make lunchtime an exciting and stimulating experience.

2 Multimedia food presentation

This will also allow school meals to become a full multimedia experience. Depending on what they have to eat, they would be shown various animations. The food could be made to look different, with kids competing with each other to force down plates of creepy crawlies or imaginary creatures, or seeing them in different colours. Games could be integrated easily, so that they have to hit a series of virtual targets on the plate to get points while they eat, or achieve a particular rhythm.

3 Enhanced foods

Nutritionally enhanced foods will be commonplace in the future. Scientists will understand far more about human genetics and will be able to add supplements to control a wide range of health conditions. But since different kids would be susceptible to different genetically related problems, foods will probably come in a range of options with lifestyle symbols to indicate the groups they are aimed at. Kids will wear a range of digital jewellery with a wide range of electronic functionality. One of the functions would be to interact automatically with foodstuff selection, so that kids would see the best foods highlighted in their field of view.

4 Food manufacture, multilayer farms

As pressure increases both on land availability and food transportation distances, it is likely that some multi-layer farms will start up, especially around the edges of urban areas. They are really just multi-storey buildings that grow crops instead of housing offices or car parking. Each layer of such farms would use artificial lighting, powered by renewables elsewhere. This would enable fresh food to be grown very close to where it is needed. It will also be easier to ensure uniformity of nutrition, hydration and growth medium in such farms compared to conventional fields, and to control pests, so we should expect higher quality of foods, albeit probably also at higher cost.

5 Vegetarian meats are coming over the horizon thanks to technology progress in genetics and tissue culture. Today, some meat substitutes are made from soya and other plant derivatives, but genetics will increase the range and capability of plants to grow substances that can be used for higher quality meat substitutes. Similarly, tissue culture – already making good progress in the medical field – will extend over time to provide a range of muscle tissues that can be grown in labs and later factories, with similar structures and textures to natural meats grown on actual animals. Although some vegetarians will still refuse to eat factory-produced meat even though no animals have been involved and therefore no cruelty, many will undoubtedly welcome such advances and start to eat vegetarian and factory-cultivated meats.

6 Electronic medilinks will be important accessories for many people in the far future. These will monitor health conditions by analysing blood pressure, heart activity, blood composition, nervous system activity and so on. They will be able to record and analyse some data and relay it if need be to distant clinics, possibly even to the authorities, insurance companies, or parents. Medilinks may be involved in school dinner provision. Perhaps pre-packed foods would be read by a child’s electronic equipment, using barcodes, snowflakes or RFID chips. Or smart trays would know which foods have been collected, and how much has been eaten, and could relay the appropriate data to the medilinks. Obviously they could alert kids or staff to any allergies or other conditions. The forms of medilinks could vary enormously, from deep implants to circuits printed on the skin surface, or jewellery such as ear studs, bracelets or rings – anything in contact with the skin. Some foods are known to contribute to a healthy diet, and others to detract from it. Most are needed in balanced proportions. By monitoring food intake closely, people can achieve a varied diet that is pleasant without compromising health, guided by information from their monitors and medilinks.
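A smart tray of this kind is easy to sketch. The tag IDs, food names and nutritional numbers below are all invented for illustration; a real tray would look up scanned RFID tags in a proper food database and relay the result to the child's medilink:

```python
# Hypothetical food database keyed by RFID tag id; names and numbers invented.
FOODS = {
    "tag-001": {"name": "peanut flapjack", "kcal": 320, "allergens": {"peanut"}},
    "tag-002": {"name": "apple", "kcal": 52, "allergens": set()},
    "tag-003": {"name": "cheese roll", "kcal": 280, "allergens": {"milk", "gluten"}},
}

def tray_report(scanned_tags, child_allergies):
    """What a smart tray might relay to a child's medilink:
    total calories and any allergen clashes for the items collected."""
    items = [FOODS[t] for t in scanned_tags]
    total_kcal = sum(i["kcal"] for i in items)
    alerts = [i["name"] for i in items if i["allergens"] & child_allergies]
    return total_kcal, alerts

kcal, alerts = tray_report(["tag-001", "tag-002"], {"peanut"})
print(kcal, alerts)  # 372 ['peanut flapjack']
```

The same report could feed the day's running intake total, so diet balancing happens automatically at the tray rather than relying on the child's memory.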

7 Local sourcing and community integration are becoming ever more important for food producers and suppliers. A lot of people now expect that their food should not have to travel too far and shop accordingly, but many also want to have more involvement in the production. They care about whether it uses genetically modified seeds, or non-organic fertilizers. The many people queuing for allotments may well be interested in virtual allotments too, i.e. paying a farmer to grow crops exactly according to their specification. And smart tractors using GPS can treat each part of a field differently. This would be an attractive option for people who want to eat better and more conscientiously but don’t have enough time or spare garden to grow their own.

8 Social media are already used for networking over lunchtime by almost all kids, but these will evolve quickly. Proximity comm-badges can communicate automatically with others nearby, swapping files according to context. Kids can swap favourite shows or music they just found, or synch their social schedules. Of course this can be done remotely, but it is more emotionally valuable when it is done in the context of physical closeness. But kids already use lots of games integrated into social media and it is obvious that this will extend into the canteen too, with augmented reality enabling avatars and electronic beings to be liberally sprinkled around, many of which would only be visible to pupils. So they could play lots of games without teachers knowing what they are doing.

9 Social skills tuition will be easily added into a canteen environment. I have argued for years that most of the stuff learned at school that is really useful to kids in the rest of their lives isn’t taught in the classroom but learned in the playground or canteen. Things like dealing effectively with other people, interpersonal skills, empathy, leadership, motivation… With AI and augmented reality and social networking electronics liberally provided, teachers or staff could provide gentle guidance without anyone else knowing.

10 Calorie control is an obvious thing to want in a canteen. Of course, personal medilinks might be able to help, but they will probably be used only by people with health issues. For the rest, simple calorie counting could be achieved by using packaging that can easily be read by mobiles, with the data added up at the till when the child pays for the food. But technology can go much further. Today, we already have foods that are rearranged at molecular level so that our body can’t digest them, such as sugar whose sucrose molecules are mirror images of natural sugar. We should expect, with ongoing nanotechnology, genetic modification and factory assembly, that many foods would be available with reduced calories for those kids who need them.

So, many changes ahead for school dinners. But we still won’t be able to make kids eat them if they don’t taste nice.

Future retailing

Well, I’m off to Australia soon, to see a new futuristic global retailing concept, so retail trends are at the forefront of my mind again. I’ve written about them frequently but there is always something new coming over the horizon. Looking at recent trends, of course the headlines this year were the launch of new mobile phones and the iPad, which make mobile internet access more useful and accessible. People can now access the net to compare products and prices, or get information. But the underlying, less conspicuous trend here is that people are getting much more used to accessing all kinds of data all the time, and that ultimately is what will drive retail futures.

With mobile access increasing in power, speed and scope, the incentives to create sites aimed at mobile people is increasing, and the tools for doing so are getting better. This will be accelerated by the arrival of head-up displays – video visors and eventually active contact lenses. The progress in 3d TV over the next few years will result in convergence of computer games and broadcast media, and this will eventually converge nicely into retailing too, especially if we add in things like store positioning systems, gesture recognition and artificial intelligence (AI) based profile and context engines. Add all this in to augmented reality, and we have a highly versatile and powerfully immersive environment merged with the real world.

It will take years for marketers and customers to work out the full scope of the resultant opportunities. Think of it this way: when computing and telecoms converged, we got the whole of the web, fixed and mobile. This time it isn’t just two industries converging – it is the whole of cyberspace converging with the whole of the real world. So we should expect decades of fruitful development; it won’t all happen overnight. Lots of companies will emerge, lots of fortunes will be made, and lost, and there will also be lots of opportunities for sluggish companies to be wiped out by new ones or those more willing and able to adapt. The greatest certainty is that every company in every industry will face new challenges, balanced by new opportunities. Never has there been a better time for a good vision, backed up by energy and enthusiasm.

All companies can use the web and any company can use high street outlets if they so desire. It is a free choice of business model. Nevertheless, not all parts of the playing field are equal. Occupying different parts requires different business models. If a store has good service but high prices and no reason someone should not just buy the product on-line after getting all the good advice, then many shoppers will do just that. An obvious response is to make good use of exclusive designs, a better and longer lasting response is to captivate the customer by ongoing good service, not just pre-sale but after-sale too. A well cared for customer is more likely to buy from the company providing the good care. If staff build personal relationships and get to know their customers, those customers are highly unlikely to buy elsewhere after using their services. Augmented reality provides a service platform where companies can have an ongoing relationship with the customer.

As we go further down the road of automation, the physical costs of materials and manufacturing will generally fall for any particular specification. Of course, better materials will emerge and these will certainly cost more at first, but that doesn’t alter the general cost-reduction trend. As costs fall, more and more of the product value will move into the world of intangibles. Brand image, trust, care, loyalty, quality of service and so on – these will account for an increasing proportion of the sale price. So when this is factored in, the threat of customers going elsewhere lessens.

AI will play a big role in customer support in future retail, extending the scope of every transaction. Recognising when a customer wants attention, understanding who they are and offering them appropriate service will all fall within the scope of future AI. While that might at first seem to compete with humans, it will actually augment the overall experience, enabling humans to concentrate on the emotional side of the service. Computers will deal with some of the routine everyday stuff and the information intensive stuff, while humans look after the human aspects. When staff are no longer just cogs in a machine, they will be happier, and of course customers get the best of both worlds too. So everyone wins.

Retailing stores have adopted many strategies to get customers in the doors. Adding coffee shops and restaurants works well, but the next decade will have to be a bit more imaginative.

Adding gaming will be one of the more fun improvements.  If a customer’s companions don’t want to just stand idly and get bored while the customer is served, playing games in the shop might be a pleasant distraction for them. But actually games technology presents the kind of interface that will work well too for customers wanting to explore how products will look or work in the various environments in which they are likely to be used. They can do so with a high degree of realism. All the AI, positioning, augmented reality and so on all add together, making the store IT systems a very powerful part of the sales experience for shopper and staff alike.

Clothes and accessories stores will obviously benefit greatly from such technology, allowing customers to choose more easily. But technology can also add to the product itself. Some customers will be uninterested in adding technology whereas for others it will be a big bonus having the extra features. Today, social networking is just starting to make the transition to mobile devices. In a few years’ time, many items of accessories or clothes will have built in IT functionality, enabling them to play a leading role in the wearer’s social networking, broadcasting personal data into surrounding space or coming with a virtual aura, loaded with avatars that appear differently to each viewer. Glasses can do this, and also provide displays, change colour using thin film coatings, and even record what the wearer sees and hears. They might even recognise some emotional reactions via pupil dilation, identifying people that the user appears interested in, for example. Health is another area obviously suited to jewellery and accessories, many of which are in direct contact with skin. Accessories can monitor health, act as a communications device to a clinic, even control the release of medicines in smart capsules.

But the biggest change in retailing is certainly the human one, adding human-based customer service. Technology is quickly available to everyone and eventually ceases to be a big differentiator, whereas human needs will persist, and always offer a means to genuine value add. This effect will run throughout every sector and will bring in the care economy, where human skills dominate and computers look after routine transactions at low cost. Robots and computers will play an important part in the future, but humans will dominate in adding value, simply because people will always value people above machines – or indeed any other organic species. Focusing on human value-add is therefore a good strategy to future proof businesses. The more value that can be derived from the human element, the less vulnerable a business will be from technology development. The key here is to distinguish between genuine human skills and those where the human is really just acting as part of a machine.

Putting all this together, we can see a more pleasant future of retailing. As we recover from the often sterile harshness of web shopping and start to concentrate more on our quality of life, value will shift from the actual physical product itself towards the whole value of the role it plays in our lives, and the value of associated services provided by the retailer. As the relationship grows and extends outside the store, retailing will regain the importance it used to have as a full human experience. Retailers used to be the hub of a community and they can be again if the human side is balanced with technology.

Sure, we will still shop on-line much of the time, but even here, the ease and quality of that will depend to some degree on the relationship we already have with the retailer. Companies will be more responsive to the needs of the community and more integrated into them. And when we once again know the staff and know they care about us, shopping can resume its place as a fun and emotionally rewarding part of our lives.

In the end it is all about engaging with the customer, making them excited, empowering them and showing them you care. When you look after them, they will keep coming back. And it is quite nice to think that the more advanced the technology becomes, the more it humanises us.

Man-machine equivalence by 2015?

Sometimes it is embarrassing being a futurologist. I make predictions on when things should appear based on my own experience as an engineer and how long I reckon it ought to take. Occasionally someone gets there much earlier than I expect, with a radically different solution than I would have used. I have no problem with that. I am a competent engineer but there are plenty of others who are a lot more competent and I learned to live with that a long time ago. What does annoy me is when things don’t happen on time. Not only do I look bad, but it throws my whole mindset out of line and I have to adjust my whole picture of what the future looks like. Worse still, it means we don’t get the benefits of the new development I expected.

There are a few examples. The worst error I ever made was predicting that virtual reality would surpass TV in terms of consumption of recreational time by the year 2000. It didn’t. It still hasn’t, not even if you count all the virtual worlds in computer games, which fall far short of immersive VR. I would feel a lot worse about that if I hadn’t got some other stuff right, but this is not a brag blog. Now I am in danger of being wrong on man-machine equivalence in terms of intellect. I am on record any number of times scheduling it around 2015.

As I just blogged, supercomputers have passed human brains in terms of their raw power as measured in instructions per second, though the comparison is a bit apple-orangey. That is more or less on time I guess, but I also hoped that by now we would have a lot more insight into human consciousness than we do and would be able to use the superior raw computer fire-power to come up with computers almost as smart as people in overall terms. I think here we have fallen a bit behind. I have no right to moan. My own work on the topic has sat on the back burner now for several years and still is nowhere near publishable. But surely someone ought to be working on it and getting results? If one supercomputer can do 3 x 10^15 instructions per second, that ought to be better than 25 billion brain cells firing at 200Hz with 1000 equivalent instructions per firing, especially given that only a small fraction of brain cells are ever involved actively in thinking at any one time. With all the nanotech we have now, and brain electrical activity monitoring stuff capable of millimetre resolutions, we should be getting loads of insight into the sorts of processes we need to emulate, or at least with which to seed some sort of evolution engine. Scientists are making progress, but we aren’t there yet.
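The back-of-envelope comparison above multiplies out as follows. The neuron count, firing rate and instructions-per-firing figures are the rough assumptions stated in the paragraph, not measurements:

```python
# The brain-vs-supercomputer arithmetic from the paragraph above.
neurons = 25e9          # brain cells assumed involved in processing
firing_rate = 200       # firings per second (Hz)
ops_per_firing = 1000   # equivalent instructions per firing

brain_ops = neurons * firing_rate * ops_per_firing
supercomputer_ops = 3e15  # instructions per second

print(f"brain ≈ {brain_ops:.0e} ops/s vs supercomputer ≈ {supercomputer_ops:.0e} ops/s")
print(f"ratio: {brain_ops / supercomputer_ops:.2f}")
```

On these raw numbers the brain still comes out ahead by a factor of about 1.7 – the supercomputer only wins once the discount for the small fraction of brain cells actively thinking at any one time is applied.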

Even in AI, the progress is frustrating. There are impressive developments for sure, but where are the ‘hero’ demonstrations most engineers are so fond of? Craig Venter is all over the place jumping up and down with glee after claiming the first artificial life, or at least a bacterium with synthetic DNA. Where’s your ambition? Why aren’t we seeing AI engines registering for GCSEs yet, even in Maths? You would think that basic text recognition and basic sentence parsing would allow at least enough questions on a GCSE Maths exam to be understood and answered to provide a pass mark by now. Instead, we see lots of industrial examples of AI that are in totally different spheres. Come on guys, you’re making us futurologists look bad by not achieving all the things we promised you would. It is no excuse that you never agreed our targets in the first place.

I can only suspect that we are actually seeing a whole lot of relevant progress but it just isn’t visible or connected in the right ways yet. University A is probably doing great, as are B, C and D. Loads of IT and biotech companies are probably doing their bits too, as no doubt are a few secret military research centres. But of course they probably all have their own plans and their own objectives and won’t want to share results until it is in their interests to do so. Perhaps the economic or military potential is just too great to throw it all away sharing the knowledge too early just to grab a few cheap headlines. Or maybe they aren’t. Maybe all the engineers have given up because it looks too hard a problem, so they are spending their efforts elsewhere. I hope the latter isn’t true! Or maybe the engineers are too wrapped up in the real work to waste time on silly demos. Better.

I am still getting laughed at regularly because I refuse to adjust my predictions of man-machine equivalence by 2015. But I haven’t given up; I do still believe it can happen that soon. If we are still only 1% of the way there, we might still be on schedule. That is the nature of IT and biotech development. The virtuous circle positive feedback loop means that almost all the progress happens in the last year. That’s what happened with the mapping of DNA. The early years achieved only 1%; the last 99% happened in the final 2 years, just after the laughter about the claims was starting to die down. The trouble of course is that without knowing all the detail of all the work going on in all the relevant establishments, it is very hard to say when the last 2 years start!

My major concern is that one or two of the main components seem to be missing. Firstly, most computer scientists seem to be locked in to digital thinking. We have a whole generation of computer scientists who have never seen an analog computer. The brain is more like an analog computer than a digital one, but there is nowhere near enough effort invested now in analog processing – though there certainly is some. Ask a young engineer to design a simple thermostat and I am convinced very few would even consider a basic physics approach such as a bimetallic strip. The rest would probably go straight to the chip catalogues and use a few megalines of code too. Another missing component is the lack of cross-discipline teaching. Many students do biotech or IT, but too few do both. Those that are educated in such a way probably have their focus on making better bionics for prostheses or other such obviously worthwhile projects. Thirdly, the evolutionary computing avenue seems to have been largely abandoned long before it was properly explored, and biomimetics is sometimes too rigid in its approach, trying to emulate nature too closely instead of just using it for initial stimulation. But none of these problems is universal, and there are many good scientists and engineers to whom they simply aren’t relevant barriers. So I haven’t given up; I still hope that the delays are imaginary rather than real.

I think we will find out pretty soon whether that is the case. If 2015 is not a completely wrong date for man-machine equivalence, we will soon start to see very impressive results coming out of research labs: clear indications that we are on the right track, and scientists and engineers finally willing to make their own grand claims of impending success.

If that doesn’t happen, I guess I will eventually have to write it off to experience and accept that at least one more futurologist set overly optimistic dates for breakthroughs. If it does happen on time, I will never stop yelling “I told you so!”

Consciousness

A recently announced Chinese supercomputer apparently achieves 2.6 peta-instructions per second. I once calculated that the human brain has about a third of that in raw processing terms. However, the computer uses fundamentally different approaches to its tasks than the brain does.

Artificial intelligence is already used to create sophisticated virus variants, and autonomous AI entities will eventually become potential threats in their own right. Today, computers act only on instructions from people; tomorrow, they will become a lot more independent. The assumption that people must write their software is no longer valid: it is entirely feasible to develop design techniques that harness evolutionary and random-chance principles, and these could become much more sophisticated than today’s primitive genetic algorithms. Many people underestimate the potential for AI-based threats because they assume that all machines and their software must be designed by people, with their limited knowledge, but that is no longer true and will become increasingly untrue as time goes on. Someone intent on mischief could create a piece of software and release it onto the net, where it could evolve, adapt and take on a life of its own, creating problems for companies while hiding behind anonymity, encryption and distribution. It could be very difficult to find and destroy many such entities.
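The ‘primitive genetic algorithms’ mentioned above follow a simple loop of mutation and selection. As a toy sketch only (the bit-string target, mutation rate and population size here are arbitrary illustrative choices of mine, not anything from the original text, and certainly nothing like real malware), a minimal genetic algorithm might look like this:

```python
import random

# Toy genetic algorithm: evolve a bit-string towards an arbitrary target.
# All names and parameters are illustrative; 'real' evolutionary software
# would evolve behaviour, not match a fixed string.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(genome):
    """Number of bits matching the target -- the selection pressure."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Flip each bit with a small probability -- the random-chance element."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=20, generations=100):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break  # perfect match found
        # Keep the fitter half, refill with mutated copies of survivors.
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

Replacing the fixed target with a behavioural fitness measure, and letting the genome encode program logic rather than bits, is the step that would turn this toy into the kind of open-ended evolving software speculated about here.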

Nightmare AI scenarios do not necessarily require anyone to be intent on mischief; student pranks or simple curiosity could be enough. Suppose, for example, that some top psychology students, synthetic biology students and a few decent hackers spend an evening over a few drinks debating whether it is possible to create a conscious AI entity. Even though none of them has any deep understanding of how human consciousness works, or of how to make an alternative kind of consciousness, they may have enough combined insight to set up a large-scale zombie network and to seed some crude algorithms as the base for an evolutionary experiment. Their lack of industrial experience also means a lack of design prejudice. A powerful distributed network of such machines, primed with some basic starting ideas and imaginative thinking, would provide a formidable platform on which to run the experiment. By making random changes to both algorithms and architecture, perhaps using a ‘guided evolution’ approach, the experiment might stumble across techniques that show promise, and eventually achieve a crude form of consciousness or advanced intelligence, both of which are dangerous. It might then continue its development on its own, outside the students’ direct control. Even if its techniques are very crude compared with those used by nature, the processing power and storage available to such a network vastly exceed the raw capacity of the human brain, perhaps allowing even an inefficient intelligence to be superior to that of humans.

Once an AI reaches a certain level of intelligence, it would be capable of hiding, using distribution and encryption to disperse itself around the net. By developing its own techniques for capturing more processing resources, it could benefit from a positive feedback loop, accelerating quickly towards a vastly superhuman entity. There is no reason to assume it would necessarily be malicious, but equally no reason to assume it would be benign. Pursuing its own curiosity, it might make humans unintentional victims of its activities, in much the same way as insects on a building site.