Category Archives: AI

WMDs for mad AIs

We think sometimes about mad scientists and what they might do. It’s fun, makes nice films occasionally, and highlights threats years before they become feasible. That then allows scientists and engineers to think through how they might defend against such scenarios, hopefully making sure they don’t happen.

You’ll be aware that there is a lot more talk of AI going on again now, and progress finally seems to be picking up. If it succeeds well enough, a lot more future science and engineering will be done by AI than by people. If genuinely conscious, self-aware AI, with proper emotions and so on, becomes feasible, as I think it will, then we really ought to think about what happens when it goes wrong. (Producers of sci-fi computer games already think that stuff through sometimes – my personal favorite is Mass Effect.) We will one day have some insane AIs. In Mass Effect, the concept of shackling AI is embedded in the culture, an attempt to limit the damage it could presumably do. On the other hand, we have had Asimov’s laws of robotics for decades, yet they are sometimes ignored when it comes to making autonomous defense systems. That doesn’t bode well. So, assuming that Mass Effect’s writers don’t get to be in charge of the world, and instead we have ideological descendants of our current leaders, what sort of things could an advanced AI do in terms of its chosen weaponry?

Advanced AI

An ultra-powerful AI is a potential threat in itself. There is no reason to expect that an advanced AI will be malign, but there is also no reason to assume it won’t be. High-level AI could have at least the range of personality that we associate with people, with a potentially greater range of emotions or motivations, so we’d have the super-helpful smart-scientist type of AI but also perhaps the evil super-villain and terrorist ones.

An AI doesn’t have to intend harm to be harmful. If it wants to do something and we are in the way, even if it has no malicious intent, we could still become casualties, like ants on a building site.

I have often blogged about achieving conscious computers using techniques such as gel computing, and about how we could end up in the Terminator scenario favored by sci-fi. It could come about through innocent research, military development or a deliberate act of terrorism.

Terminator scenarios are diverse but often rely on AI taking control of human weapons systems. I won’t major on that here because that threat has already been analysed in-depth by many people.

Conscious botnets could arrive by accident too – a student prank harnessing millions of bots, even with an inefficient algorithm, might muster enough power to achieve a high level of AI.

Smart bacteria

Bacterial DNA could be modified so that bacteria can make electronics inside their cells, and power them. Linked to other bacteria, massive AI could be achieved.

Zombies

Adding the ability to enter a human nervous system, or to disrupt or capture control of a human brain, could enable enslavement, giving us zombies. Having been enslaved, zombies could easily be linked across the net. The zombie films we watch tend to miss this feature. Zombies in films and games tend to move in herds, but not generally under control or in a very coordinated way. We should assume that real ones would be fully networked, liable to remote control, and able to share sensory systems. They’d be rather smarter and more capable than the ones we’re generally used to. Shooting them in the head might not work as well as people expect either, as their nervous systems wouldn’t really need a local controller and could just as easily be run by a collective intelligence, though blood loss would eventually cause them to die. To stop a herd of real zombies, you’d basically have to dismember them. More Dead Space than Dawn of the Dead.

Zombie viruses could be made in other ways too. It isn’t necessary to use smart bacteria. Genetic modification of viruses, or a suspension of nanoparticles, are traditional favorites because they could work. Sadly, we are likely to see zombies result from deliberate human acts, probably this century.

From zombies, it is a short hop to full evolution of the Borg from Star Trek, along with the emergence of characters from computer games to take over the zombified bodies.

Terraforming

With strong external AI providing collective adaptability so that smart bacteria can colonize many niches, bacteria-based AI, or AI using bacteria, could engage in terraforming. Attacking many niches that are important to humans or other life would be very destructive. Terraforming a planet you live on is not generally a good idea, but if an organism can inhabit land, sea or air and even space, there is plenty of scope to avoid self-destruction. Fighting bacteria engaged in such a pursuit might be hard. Smart bacteria could spread immunity to toxins or biological threats almost instantly through a population.

Correlated traffic

Information waves and other correlated traffic, such as network resonance attacks, are another way of using networks to collapse economies, taking advantage of the physical properties of the links and protocols rather than using more traditional viruses or denial-of-service attacks. AIs using smart dust or bacteria could launch signals in perfect coordination from any points on any networks simultaneously. This could push networks into resonant overloads that would likely crash them, and would certainly deprive other traffic of bandwidth.
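
To see why the correlation is the dangerous part, here is a deliberately tiny toy model comparing the same traffic volume arriving at a single link spread randomly over time versus arriving in one coordinated burst. The queue model and every number in it are illustrative assumptions of mine, not a description of any real network or protocol:

```python
import random

def simulate(num_senders=10_000, slots=100, buffer_size=2_000,
             service_rate=500, coordinated=False):
    """Toy single-link model: each sender emits one packet per cycle.
    Coordinated senders all fire in the same slot; uncoordinated ones pick
    random slots. The link drains service_rate packets per slot and drops
    anything beyond buffer_size waiting in its queue."""
    arrivals = [0] * slots
    for _ in range(num_senders):
        slot = 0 if coordinated else random.randrange(slots)
        arrivals[slot] += 1
    queue, dropped = 0, 0
    for a in arrivals:
        queue += a
        if queue > buffer_size:
            dropped += queue - buffer_size
            queue = buffer_size
        queue = max(0, queue - service_rate)
    return dropped

print("packets dropped (random senders):     ", simulate(coordinated=False))
print("packets dropped (coordinated senders):", simulate(coordinated=True))
```

The same total load that a link absorbs comfortably when it is uncorrelated becomes an outage the moment it is synchronised, which is the whole point of the attack.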

Decryption

Conscious botnets could be used to make decryption engines to wreck security and finance systems. Imagine how much more powerful a worldwide collection of trillions of AI-harnessed organisms or devices would be. Invisibly small smart dust and networked bacteria could also pick up most signals well before they are encrypted anyway, since they could be resident on keyboards or on the components and wires within. They could even pick up electrical signals from a person’s scalp and engage in thought recognition, intercepting passwords well before a person’s fingers even move to type them.

Space guns

Solar wind deflector guns are feasible: ionize part of the ionosphere to make a reflective surface, use it to deflect some of the incoming solar wind to make an even bigger reflector, then repeat, ending up with an ionospheric lens or reflector that can steer perhaps 1% of the incoming solar wind onto a city. That could generate an energy density high enough to ignite and even melt a large area of city within minutes.

This wouldn’t be as easy as using space-based solar farms and directing energy from them. Space solar is being seriously considered, but it presents an extremely attractive target for capture because of its potential as a directed energy weapon. The intended use is to direct microwave beams at rectenna arrays on the ground, but it would take good design to prevent a takeover possibility.

Drone armies

Drones are already becoming common at an alarming rate, and the range of drone sizes is widening, from large insects to medium-sized planes. The next generation is likely to include permanently airborne drones and swarms of insect-sized drones. The swarms offer interesting potential for WMDs. They can be dispersed and come together on command, making them hard to attack most of the time.

Individual insect-sized drones could build up an electrical charge by a wide variety of means, and could collectively attack individuals, electrocuting or disabling them, as well as overloading or short-circuiting electrical appliances.

Larger drones such as the ones I discussed in

http://carbonweapons.com/2013/06/27/free-floating-combat-drones/ would be capable of much greater damage, and collectively would be virtually indestructible, since each could be broken to pieces by an attack and automatically reassembled without losing capability, using self-organisation principles. A mixture of large and small drones, possibly also using bacteria and smart dust, could mount an extremely formidable coordinated attack.

I also recently blogged about the storm router

http://carbonweapons.com/2014/03/17/stormrouter-making-wmds-from-hurricanes-or-thunderstorms/ that would harness hurricanes, tornados or electrical storms and divert their energy onto chosen targets.

In my Space Anchor novel, my superheroes have to fight against a formidable AI army that appears as just a global collection of tiny clouds. They do some of the things I highlighted above and come close to threatening human existence. It’s a fun story but it is based on potential engineering.

Well, I think that’s enough threats to worry about for today. Maybe given the timing of release, you’re expecting me to hint that this is an April Fool blog. Not this time. All these threats are feasible.

The internet of things will soon be history

I’ve been a full-time futurologist since 1991, and an engineer working on far-future R&D stuff since I left uni in 1981. It is great seeing a lot of the 1980s dreams about connecting everything together finally starting to become real, although, as I’ve blogged a bit recently, some of the grander claims we’re seeing for future home automation are rather unlikely. Yes you can, but you probably won’t, though some people will certainly adopt some stuff. Now that most people are starting to get the idea that you can connect things and add intelligence to them, we’re also seeing a lot of overshoot on the importance of the internet of things, which is the generalised form of the same thing.

It’s my job as a futurologist not only to understand that trend (and I’ve been yacking about putting chips in everything for decades) but to look past it and see what is coming next. If it were here to stay, that would be an important conclusion too, but you know what, it just isn’t. The internet of things will be about as long-lived as most other generations of technology, such as the mobile phone. Do you still have one? I don’t; well, I do, but they are all in a box in the garage somewhere. I have a general-purpose mobile computer that happens to be a phone as well as dozens of other things. So do you, probably. The only reason you might still call it a smartphone or an iPhone is that it has to be called something and nobody in the IT marketing industry has any imagination. PDA was a rubbish name and that was the choice.

You can stick chips in everything, and you can connect them all together via the net. But that capability will disappear quickly into the background and the IT zeitgeist will move on. It really won’t be very long before a lot of the things we interact with are virtual, imaginary. To all intents and purposes they will be there, and will do wonderful things, but they won’t physically exist. So they won’t have chips in them. You can’t put a chip into a figment of imagination, even though you can make it appear in front of your eyes and interact with it. A good topical example of this is the smart watch, all set to make an imminent grand entrance. Smart watches are struggling to solve battery problems, and they’ll be expensive too. They don’t need batteries if they are just images, and a fully interactive image of a hugely sophisticated smart watch could also be made free, as one of a million things done by a free app. The smart watch’s demise is already inevitable. The energy it takes to produce an image on the retina is a great deal less than the energy needed to power a smart watch on your wrist, and the cost is a few seconds of your time explaining to an AI how you’d like your wrist to be accessorised – rather fewer seconds than you’d have spent choosing something that costs a lot. In fact, the energy needed for direct retinal projection and the associated comms is far less than can easily be harvested from your body or the environment, so there is no battery problem to solve.

If you can do that with a smart watch, making it just an imaginary item, you can do it to any kind of IT interface. You only need to see the interface, the rest can be put anywhere, on your belt, in your bag or in the IT ether that will evolve from today’s cloud. My pad, smartphone, TV and watch can all be recycled.

I can also do loads of things with imagination that I can’t do for real. I can have an imaginary wand. I can point it at you and turn you into a frog. Then in my eyes, the images of you change to those of a frog. Sure, it’s not real, you aren’t really a frog, but you are to me. I can wave it again and make the building walls vanish, so I can see the stuff on sale inside. A few of those images could be very real and come from cameras all over the place, the chips-in-everything stuff, but actually I am not much interested in the local physical reality of a shop; what I am far more interested in is what I can buy, and I’ll be shown those things, in ways that appeal to me, whether they’re physically there or on Amazon Virtual. So 1% is chips-in-everything, 99% is imaginary, virtual: some sort of visual manifestation of my profile, Amazon Virtual’s AI systems, how my own AI knows I like to see things, and a fair bit of other people’s imagination to design the virtual decor, the nice presentation options, the virtual fauna and flora making it more fun, and countless other intermediaries and extramediaries, or whatever you call all those others that add value and fun to an experience without actually getting in the way. All just images projected directly onto my retinas. Not so much chips-in-everything as no chips at all, except a few sensors, comms and an infinitesimal timeshare of a processor and storage somewhere.

A lot of people dismiss augmented reality as an irrelevant passing fad. They say video visors and active contact lenses won’t catch on because of privacy concerns (and I’d agree that is a big issue that needs to be discussed and sorted, but it will be discussed and sorted). But when you realise that what we’re going to get isn’t just an internet of things, but a total convergence of physical and virtual, a coming together of real and imaginary, an explosion of human creativity, a new renaissance, a realisation of your and everyone else’s wildest dreams as part of your everyday reality; when you realise that, then the internet of things suddenly starts to look more than just a little bit boring, part of the old days when we actually had to make stuff and you had to have the same as everyone else and it all cost a fortune and needed charging up all the time.

The internet of things is only starting to arrive. But it won’t stay for long before it hides in the cupboard and disappears from memory. A far, far more exciting future is coming up close behind. The world of creativity and imagination. Bring it on!

Automation and the London tube strike

I was invited on the BBC’s Radio 4 Today Programme to discuss automation this morning, but on Radio 4, studio audio quality is a higher priority than content quality, while quality of life for me is a higher priority than radio exposure, and going into Ipswich greatly reduces my quality of life. We amicably agreed they should find someone else.

There will be more automation in the future. On one hand, if we could totally automate every single job right now, all the same work would be done, so the world would still have the same overall wealth, but then we’d all be idle, so our newly free time could be used to improve quality of life, or to lie on beaches enjoying ourselves. The problem with that isn’t the automation itself; it is mainly deciding what else to do with our time and establishing a fair means of distributing the wealth so it doesn’t just stay with ‘the mill owners’. Automation will eventually require some tweaks to capitalism (I discuss this at length in my book Total Sustainability).

We can’t and shouldn’t automate every job. Some jobs are dull and boring or reduce the worker to too low a level of  dignity, and they should be automated as far as we can economically – that is, without creating a greater problem elsewhere. Some jobs provide people with a huge sense of fulfillment or pleasure, and we ought to keep them and create more like them. Most jobs are in between and their situation is rather more complex. Jobs give us something to do with our time. They provide us with social contact. They stop us hanging around on the streets picking fights, or finding ways to demean ourselves or others. They provide dignity, status, self-actualisation. They provide a convenient mechanism for wealth distribution. Some provide stimulation, or exercise, or supervision. All of these factors add to the value of jobs above the actual financial value add.

The London tube strike illustrates one key factor in the social decision on which jobs should be automated. The tube provides an essential service that affects a very large number of people and all their interests should be taken into account.

The impact of potential automation on individual workers in the tube system is certainly important and we shouldn’t ignore it. It would force many of them to find other jobs, albeit in an area with very low unemployment and generally high salaries. Others would have to change to another role within the tube system, perhaps giving assistance and advice to customers instead of pushing buttons on a ticket machine or moving a lever back and forth in a train cab. I find it hard to see how pushing buttons can offer the same dignity or human fulfillment as directly helping another person, so I would consider that sort of change positive, apart from any potential income drop and its onward consequences.

On the other hand, the cumulative impacts on all those other people affected are astronomically large. Many people will have struggled to get to work. Many won’t have bothered. A few will suffer health consequences due to the extra struggle or stress. Perhaps a few small businesses on the edge of survival will have been killed. Some tourists won’t come back, and a lot will spend less. A very large number of businesses and individuals will have suffered significantly to let the tube staff make a not very valid protest.

The interests of a small number of people shouldn’t be ignored, but neither should the interests of a large number of people. If these jobs are automated, a few staff would suffer significantly, most would just move on to other jobs, but the future minor miseries caused to millions would be avoided.

Other jobs that should be automated are those where staff are given undue power or authority over others. Most of us will have had bad experiences of jobsworth staff, perhaps including ticketing staff, whose personal attitude is rather less than helpful and whose replacement by a machine would make the world a better place. A few people sadly seem to relish their power to make someone else’s life more difficult. I am pleased to see widespread automation of check-in at airports for that reason too. There were simply too many check-in assistants who gleefully stood in front of big notices saying that rudeness and abuse from customers would not be tolerated, while happily abusing those customers, creating maximum inconvenience and grief through a jobsworth attitude or couldn’t-care-less incompetence. Where people are in a position of power or authority, where a job offers the sort of opportunities for sadistic self-actualisation some people get by making other people’s lives worse, there is a strong case for automation to remove the temptation to abuse that power or authority.

As artificial intelligence and robotics increase in scope and ability, many more jobs will be automated, but more often it will affect parts of jobs. Increasing productivity isn’t a bad thing, nor is up-skilling someone to do a more difficult and fulfilling job than they could otherwise manage. Some parts of any job are dull, and we won’t miss them if they are replaced by more enjoyable activity. In many cases, simple mechanical or information processing tasks will be replaced by those involving people skills and emotional skills. By automating the bits where we are essentially doing machine work, high technology forces us to concentrate on being human. That is no bad thing.

So long as automation moves people away from repetitive, boring, dangerous or low-dignity tasks, or those that give people too much opportunity to cause problems for others, I am all in favour. Those jobs together don’t add up to enough to cause major economic problems. We can find better work for those concerned.

We need to guard against automation going too far though. When jobs are automated faster than new equivalent or better jobs can be created, then we will have a problem. Not from the automation itself, but as a result of the unemployment, the unbalanced wealth distribution, and all the social problems that result from those. We need to automate sustainably.

Human + machine is better than human alone, but human alone is probably better than machine alone.

Home automation. A reality check.

Home automation is much in the news at the moment now that companies are making the chips-with-everything kit and the various apps.

Like 3D, home automation comes and goes. Superficially it is attractive, but the novelty wears thin quickly. It has been possible since the 1950s to automate a home. Bill Gates notably built a hugely expensive automated home 20 years ago. There are rarely any new ideas in the field, just a lot of recycling and minor tweaking. Way back in 2000, I wrote what was even then just a recycling summary blog-type piece for my website, bringing together a lot of already well-worn ideas. And yet it could easily have come from this year’s papers. Here it is; go to the end of the italicised text for my updating commentary:

Chips everywhere

 August 2000

 The chips-with-everything lifestyle is almost inevitable. Almost everything can be improved by adding some intelligence to it, and since the intelligence will be cheap to make, we will take advantage of this potential. In fact, smart ways of doing things are often cheaper than dumb ways, a smart door lock may be much cheaper than a complex key based lock. A chip is often cheaper than dumb electronics or electromechanics. However, electronics no longer has a monopoly of chip technology. Some new chips incorporate tiny electromechanical or electrochemical devices to do jobs that used to be done by more expensive electronics. Chips now have the ability to analyse chemicals, biological matter or information. They are at home processing both atoms and bits.

 These new families of chips have many possible uses, but since they are relatively new, most are probably still beyond our imagination. We already have seen the massive impact of chips that can do information processing. We have much less intuition regarding the impact in the physical world.

 Some have components that act as tiny pumps to allow drugs to be dispensed at exactly the right rate. Others have tiny mirrors that can control laser beams to make video displays. Gene chips have now been built that can identify the presence of many different genes, allowing applications from rapid identification to estimation of life expectancy for insurance reasons. (They are primarily being used to tell whether people have a genetic disorder so that their treatment can be determined correctly).

 It is easy to predict some of the uses such future chips might have around the home and office, especially when they become disposably cheap. Chips on fruit that respond to various gases may warn when the fruit is at its best and when it should be disposed of. Other foods might have electronic use-by dates that sound an alarm each time the cupboard or fridge is opened close to the end of their life. Other chips may detect the presence of moulds or harmful bacteria. Packaging chips may have embedded cooking instructions that communicate directly with the microwave, or may contain real-time recipes that appear on the kitchen terminal and tell the chef exactly what to do, and when. They might know what other foodstuffs are available in the kitchen, or whether they are in stock locally and at what price. Of course, these chips could also contain pricing and other information for use by the shops themselves, replacing bar codes and the like and allowing the customer just to put all the products in a smart trolley and walk out, debiting their account automatically. Chips on foods might react when the foods are in close proximity, warning the owner that there may be odour contamination, or that these two could be combined well to make a particularly pleasant dish. Cooking by numbers. In short, the kitchen could be a techno-utopia or nightmare depending on taste.

 Mechanical switches can already be replaced by simple sensors that switch on the lights when a hand is waved nearby, or when someone enters a room. In future, switches of all kinds may be rather more emotional, glowing, changing colour or shape, trying to escape, or making a noise when a hand gets near to make them easier or more fun to use. They may respond to gestures or voice commands, or eventually infer what they are to do from something they pick up in conversation. Intelligent emotional objects may become very commonplace. Many devices will act differently according to the person making the transaction. A security device will allow one person entry, while phoning the police when someone else calls if they are a known burglar. Others may receive a welcome message or be put in videophone contact with a resident, either in the house or away.

 It will be possible to burglar proof devices by registering them in a home. They could continue to work while they are near various other fixed devices, maybe in the walls, but won’t work when removed. Moving home would still be possible by broadcasting a digitally signed message to the chips. Air quality may be continuously analysed by chips, which would alert to dangers such as carbon monoxide, or excessive radiation, and these may also monitor for the presence of bacteria or viruses or just pollen. They may be integrated into a home health system which monitors our wellbeing on a variety of fronts, watching for stress, diseases, checking our blood pressure, fitness and so on. These can all be unobtrusively monitored. The ultimate nightmare might be that our fridge would refuse to let us have any chocolate until the chips in our trainers have confirmed that we have done our exercise for the day.

 Some chips in our home would be mobile, in robots, and would have a wide range of jobs from cleaning and tidying to looking after the plants. Sensors in the soil in a plant pot could tell the robot exactly how much water and food the plant needs. The plant may even be monitored by sensors on the stem or leaves. 

The global positioning system allows chips to know almost exactly where they are outside, and in-building positioning systems could allow positioning down to millimetres. Position dependent behaviour will therefore be commonplace. Similarly, events can be timed to the precision of atomic clock broadcasts. Response can be super-intelligent, adjusting appropriately for time, place, person, social circumstances, environmental conditions, anything that can be observed by any sort of sensor or predicted by any sort of algorithm. 

With this enormous versatility, it is very hard to think of anything where some sort of chip could not make an improvement. The ubiquity of the chip will depend on how fast costs fall and how valuable a task is, but we will eventually have chips with everything.

So that was pretty much everyday thinking in the IT industry in 2000. The articles I’ve read recently mostly aren’t all that different.

What has changed since is that companies trying to progress it are adding new layers of value-skimming. In my view some at least are big steps backwards. Let’s look at a couple.

Networking the home is fine, but doing so just so that you can remotely adjust the temperature across the network or run a bath from the office is utterly pointless. It adds the extra inconvenience of having to remember access details for an account, regularly updating security details, and having to recover when the company running it loses all your data to a hacker, all for virtually no benefit.

Monitoring what the user does and sending the data back to the supplier company so that they can use it for targeted ads is another huge step backwards. Advertising is at the top of the list of things we already have quite enough of. We need more resources, more food supply, more energy, more of a lot of stuff. More advertising we can do without. It adds costs to everything and wastes our time, without giving anything back.

If a company sells home automation stuff and wants to collect the data on how I use it, and sell that on to others directly or via advertising services, it will sit on their shelf. I will not buy it, and neither will most other people. Collecting the data may be very useful, but I want to keep it, and I don’t want others to have access to it. I want to pay once, and then own it outright with full and exclusive control and data access. I do not want to have to create any online accounts, nor worry about network security or privacy, nor download frequent software updates, nor have any company nosing into my household, and absolutely definitely no adverts.

Another backward step is migrating the interfaces for things onto our smartphones or tablets. I have no objection to having that as an optional feature, but I want to retain a full physical switch or control. For several years at BT, I lived in an office with a light that was controlled by a remote control, with no other switch. The remote control had dozens of buttons, yet all it did was turn the light on or off. I don’t want to have to look for a remote control or my phone or tablet in order to turn on a light or adjust temperature. I would much prefer a traditional light switch and thermostat. If they communicate by radio, I don’t care, but they do need to be physically present in the same place all the time.

Automated lights that go on and off as people enter or leave a room are also a step backwards. I once fell victim to one in a work toilet. If you sit still for a couple of minutes, they switch the lights off. That really is not welcome in an internal toilet with no windows.

The traditional way of running a house is not so demanding that we need a lot of assistance anyway. It really isn’t. I only spend a few seconds every day turning lights on and off or adjusting temperature. It would take longer than that on average to maintain apps to do it automatically. As for saving energy by turning the heating on and off all the time, I think that is over-valued as a feature too. The air in a house doesn’t hold much heat, and if the building cools down, it takes a lot to get it back up to temperature again. That actually puts more strain on a boiler than running at a relatively constant low output. If the boiler and pumps have to work harder more often, they are likely to last less time, and the savings would be eradicated.

So, all in all, while I can certainly see merits in adding chips to all sorts of stuff, I think their merits in home automation are being grossly overstated in the current media enthusiasm, and the downsides far too much ignored. Yes you can, but most people won’t want to, and those who do probably won’t want to do nearly as much as is being suggested, and even they won’t want all the pain of doing it via service providers that add unnecessary layers or misuse their data.

We could have a conscious machine by end-of-play 2015

I made xmas dinner this year, as I always do. It was pretty easy.

I had a basic plan, made up a menu suited to my family and my limited ability, ensured its legality, including license to serve and consume alcohol to my family on my premises, made sure I had all the ingredients I needed, checked I had recipes and instructions where necessary. I had the tools, equipment and working space I needed, and started early enough to do it all in time for the planned delivery. It was successful.

That is pretty much what you have to do to make anything, from a cup of tea to a space station, though complexity, cost and timings may vary.

With conscious machines, it is still basically the same list. When I check through it to see whether we are ready to make a start I conclude that we are. If we make the decision now at the end of 2013 to make a machine which is conscious and self-aware by the end of 2015, we could do it.

Every time machine consciousness is raised as a goal, a lot of people start screaming for a definition of consciousness. I am conscious, and I know how it feels. So are you. Neither of us can write down a definition that everyone would agree on. I don’t care. It simply isn’t an engineering barrier. Let’s simply aim for a machine that can make either of us believe that it is conscious and self aware in much the same way as we are. We don’t need weasel words to help pass an abacus off as Commander Data.

Basic plan: actually, there are several in development.

One approach is essentially reverse engineering the human brain, mapping out the neurons and replicating them. That would work (it is what Markram’s team is doing), but it would take too long. It doesn’t need us to understand how consciousness works; it is rather like methodically taking a television apart and making an exact replica using identical purchased or manufactured components. It has the advantage of existing backing, and if nobody tries a better technique early enough, it could win. More comment on this approach: http://timeguide.wordpress.com/2013/05/17/reverse-engineering-the-brain-is-a-very-slow-way-to-make-a-smart-computer/

Another is to use a large bank of powerful digital computers with access to a large pool of data and knowledge. That can produce a very capable machine that can answer difficult questions or do well at various things that traditionally need smart people, but as far as creating a conscious machine goes, it won’t work. It will happen anyway for various reasons, and may produce some valuable outputs, but it won’t result in a conscious machine.

Another is to use accelerated, guided evolution within an electronic equivalent of the ‘primordial soup’. That takes the process used by nature, which clearly worked, then improves and accelerates it using whatever insights and analysis we can add via advanced starting points, subsequent guidance, archiving, cataloguing and smart filtering and pruning. That also would work. If we can make the accelerated evolution powerful enough, it can be achieved quickly. This is my favoured approach because it is the only one capable of succeeding by the end of 2015. So that is the basic plan, and we’ll develop detailed instructions as we go.
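
For readers who want something more concrete than prose, here is a deliberately tiny sketch of what guided, accelerated evolution means in code: an informed starting population, a fitness function acting as the guidance, and aggressive filtering and pruning each generation. The bitstring genomes, the numbers and the target behaviour are all illustrative assumptions of mine, not a design for the gel-based hardware:

```python
import random

GENOME_LEN = 64
TARGET = [1] * GENOME_LEN  # stand-in for whatever behaviour we can measure

def fitness(genome):
    # Guidance: score each candidate against the desired behaviour.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

# 'Advanced starting point': seed with partially good genomes, not pure noise.
population = [[random.choice([0, 1, 1]) for _ in range(GENOME_LEN)]
              for _ in range(200)]

for generation in range(100):
    # Smart filtering and pruning: keep only the best quarter each round.
    population.sort(key=fitness, reverse=True)
    survivors = population[:50]
    if fitness(survivors[0]) == GENOME_LEN:
        break
    population = survivors + [mutate(crossover(random.choice(survivors),
                                               random.choice(survivors)))
                              for _ in range(150)]

print("best fitness:", fitness(max(population, key=fitness)),
      "after", generation + 1, "generations")
```

The real project would replace the bitstrings with reconfigurable analog circuits in the gel, and the fitness function with the sensory and feedback processes described below, but the shape of the loop is the same.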

Menu suited to audience and ability: a machine we agree is conscious and self aware, that we can make using know-how we already have or can reasonably develop within the project time-frame.

Legality: it isn’t illegal to make a conscious machine yet. It should be; it most definitely should be, but it isn’t. The guards are fast asleep and by the time they wake up, notice that we’re up to something, and start taking us seriously, agree on what to do about it, and start writing new laws, we’ll have finished ages ago.

Ingredients:

substantial scientific and engineering knowledge base, reconfigurable analog and digital electronics, assorted structures, 15nm feature size, self organisation, evolutionary engines, sensors, lasers, LEDs, optoelectronics, HDWDM, transparent gel, inductive power, power supply, cloud storage, data mining, P2P, open source community

Recipe & instructions

I’ve written often on this from different angles:

http://timeguide.wordpress.com/2013/02/15/how-to-make-a-conscious-computer/ summarises the key points and adds insight on core component structure – especially symmetry. I believe that consciousness can be achieved by applying similar sensory structures to internal processes as those used to sense external stimuli. Both should have a feedback loop symmetrical to the main structure. Essentially, what I’m saying is that sensing that you are sensing something is key to consciousness, and that is the means of converting detection into sensing, sensing into awareness, and awareness into consciousness.

Once a mainstream lab finally recognises that this symmetry of external sensory and internally directed sensory structures, with symmetrical sensory feedback loops (as I describe in that link), is fundamental to achieving consciousness, progress will occur quickly. I’d expect MIT or Google to claim they have just invented this concept soon; then hopefully it will be taken seriously and progress will start.

http://timeguide.wordpress.com/2011/09/18/gel-computing/

http://timeguide.wordpress.com/2010/06/16/man-machine-equivalence-by-2015/
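
As a purely illustrative aid to the symmetry idea above, the toy below points one sensing loop at the outside world and an identical one at the first loop’s own activity, each with its own feedback path. It is a thought aid under my own naming assumptions, a dozen lines of Python, not a blueprint for the machine itself:

```python
class SensingLoop:
    """A detector plus a feedback path that modulates its own sensitivity."""
    def __init__(self, name):
        self.name = name
        self.gain = 1.0
        self.activity = 0.0

    def sense(self, signal):
        self.activity = self.gain * signal
        # Feedback loop: activity crudely adjusts future sensitivity.
        self.gain = 0.9 * self.gain + 0.1 * (1.0 + self.activity)
        return self.activity

external = SensingLoop("external")   # senses stimuli from the world
internal = SensingLoop("internal")   # identical structure senses the sensing

for stimulus in [0.2, 0.8, 0.1, 0.9]:
    felt = external.sense(stimulus)    # detection -> sensing
    aware = internal.sense(felt)       # sensing that sensing is happening
    print(f"stimulus={stimulus:.1f}  sensed={felt:.2f}  aware-of-sensing={aware:.2f}")
```

The symmetry is the point: the internal loop is structurally identical to the external one, it just takes the external loop’s own activity as its input.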

Tools, equipment, working space: any of many large company, government or military labs could do this.

Starting early enough: it is very disappointing that work hasn’t already conspicuously begun on this approach, though of course it may be happening in secret somewhere. The slower alternative being pursued by Markram et al is apparently quite well funded and publicised. Nevertheless, if work starts at the beginning of 2014, it could achieve the required result by the end of 2015. The vast bulk of the time would be spent creating the sensory and feedback processes to direct the evolution of electronics within the gel.

It is possible that ethics issues are slowing progress. It should be illegal to do this without proper prior discussion and effective safeguards. Possibly some of the labs capable of doing it are avoiding doing so for ethical reasons. However, I doubt that. There are potential benefits that could be presented in such a way as to offset potential risks and it would be quite a prize for any brand to claim the first conscious machine. So I suspect the reason for the delay to date is failure of imagination.

The early days of evolutionary design were held back by teams wanting to stick too closely to nature, rather than simply drawing stimulation from biomimetics and building on it. An entire generation of electronic and computer engineers has been crippled by being locked into digital thinking, but the key processes and structures within a conscious computer will come from the analog domain.

I want my TV to be a TV, not a security and privacy threat

Our TV just died. It was great, may it rest in peace in TV heaven. It was a good TV and it lasted longer than I hoped, but I finally got an excuse to buy a new one. Sadly, it was very difficult finding one and I had to compromise. Every TV I found appears to be a government spy, a major home security threat or a chaperone device making sure I only watch wholesome programming. My old one wasn’t, and I’d much rather have a new TV that still isn’t, but I had no choice in the matter. All of today’s big TVs are ruined by the addition of features and equipment that I would much rather not have.

Firstly, I didn’t want any built-in cameras or microphones: I do not want some hacker watching or listening to my wife and me on our sofa, and I do not trust any company in the world on security, so if a TV has a microphone or camera, I assume that it can be hacked. Any TV that has any features offering voice recognition, gesture recognition or video comms is a security risk. All the good TVs have voice control, even though that needs a nice clear newsreader-style voice and won’t work for me, so I will get no benefit from it, but I had no choice about having the microphone and will have to suffer the downside. I am hoping the mic can only be used for voice control and not for networking apps, and therefore might not be network accessible.

I drew the line at having a camera in my living room, so I had to avoid buying the more expensive smart TVs. If there weren’t cameras in all the top TVs, I would happily have spent 70% more.

I also don’t want any TV that makes a record of what I watch on it for later investigation and data mining by Big Brother, the NSA, GCHQ, Suffolk County Council or ad agencies. I don’t want it even remembering anything of what is watched on it for viewing history or recommendation services.

That requirement eliminated my entire shortlist. Every decent-quality large TV has been wrecked by the addition of ‘features’ that I not only don’t want, but would much rather not have. That is not progress, it is going backwards. Samsung have made loads of really good TVs and then ruined them all. I blogged a long time ago that upgrades are wrecking our future. TV is now a major casualty.

I am rather annoyed at Samsung now – that’s who I eventually bought from. I like the TV bits, but I certainly do not and never will want a TV that ‘learns my viewing habits and offers recommendations based on what I like to watch’.

Firstly, it will be so extremely ill-informed as to make any such feature utterly useless. I am a channel hopper so 99% of things that get switched to momentarily are things or genres I never want to see again. Quite often, the only reason I stopped on that channel was to watch the new Meerkat ad.

Secondly, our TV is often on with nobody in the room. Just because a programme was on screen does not mean I or indeed anyone actually looked at it, still less that anyone enjoyed it.

Thirdly, why would any man under 95 want their TV to make notes of what they watch when they are alone, and then make that viewing history available to everyone, or use it as any part of a recommendation algorithm?

Fourthly, I really wanted a smart TV but couldn’t buy one because of the implied security risks. I have to assume that if the designers think they should record and analyse my broadcast TV viewing, then the same monitoring and analysis would be extended to web browsing and any online viewing. But a smart TV isn’t only going to be accessed by others in the same building. It will be networked. Worse still, it will be networked to the web via a wireless LAN that doesn’t have a Google Street View van detector built in, so it’s a fair bet that any data it stores may be snaffled without warning or authorisation some time.

Since the TV industry apparently takes the view that nasty hacker types won’t ever bother with smart TVs, they will leave easily accessible and probably very badly secured data and access logs all over the place. So I have to assume that all the data and metadata gathered by my smart TV with its unwanted and totally useless viewing recommendations will effectively be shared with everyone on the web, every advertising executive, every government snoop and local busybody, as well as all my visitors and other household members.

But it still gets worse. Smart TVs don’t stop there. They want to help you to share stuff too. They want ‘to make it easy to share your photos and your other media from your PC, laptop, tablet, and smartphone’. Stuff that! So, if I was mad enough to buy one, any hacker worthy of the name could probably use my smart TV to access all my files on any of my gadgets. I saw no mention in the TV descriptions of regular operating system updates, virus protection or firewall software for the TVs.

So, in order to get extremely badly informed viewing recommendations that have no basis in reality, I’d have to trade away all our privacy and household IT security and open the doors to unlimited and badly targeted advertising, knowing that all my viewing and web access may be recorded for ever on government databases. Why the hell would anyone think that makes a TV more attractive? When I buy a TV, I want to switch it on, hit an auto-tune button and then use it to watch TV. I don’t really want to spend hours going through a manual to do some elaborate set-up where I disable a whole string of privacy and security risks one by one.

In the end, I abandoned my smart TV requirement, because it came with too many implied security risks. The TV I bought has a microphone to allow a visitor with a clearer voice to use voice control, which I will disable if I can, and features artificial-stupidity-based viewing recommendations which I don’t want either. These cost extra for Samsung to develop and put in my new TV. I would happily have paid extra to have them removed.

Afternote: I am an idiot, 1st class. I thought I wasn’t buying a smart TV, but it is one. My curiosity got the better of me and I activated the network stuff for a while to check it out, and on my awful broadband it mostly doesn’t work, so with no significant benefits, I just won’t give it network access; it isn’t worth the risk. I can’t disable the microphone or the viewing history, but I can at least clear the history if I want.

I love change and I love progress, but it’s the other direction. You’re going the wrong way!

And another new book: You Tomorrow, 2nd Edition

I wrote You Tomorrow two years ago. It was my first ebook, and pulled together a lot of material I’d written on the general future of life, with some gaps then filled in. I was quite happy with it as a book, but I could see I’d allowed quite a few typos to get into the final work, and a few other errors too.

However, two years is a long time, and I’ve thought about a lot of new areas in that time. So I decided a few months ago to do a second edition. I deleted a bit, rearranged it, and then added quite a lot. I also wrote the partner book, Total Sustainability. It includes a lot of my ideas on future business and capitalism, politics and society that don’t really belong in You Tomorrow.

So, now it’s out on sale on Amazon

http://www.amazon.co.uk/You-Tomorrow-humanity-belongings-surroundings/dp/1491278269/ in paper, at £9.00 and

http://www.amazon.co.uk/You-Tomorrow-Ian-Pearson-ebook/dp/B00G8DLB24 in ebook form at £3.81 (guessing the right price to get a round number after VAT is added is beyond me. Did you know that paper books don’t have VAT added but ebooks do?)

And here’s a pretty picture:

[Image: You Tomorrow cover, Kindle edition]

Free-floating AI battle drone orbs (or making Glyph from Mass Effect)

I have spent many hours playing various editions of Mass Effect, from EA Games. It is one of my favourites and has clearly benefited from some highly creative minds. They had to invent a wide range of fictional technology, along with detailed technical explanations of how it is meant to work. Some is just artistic redesign of very common sci-fi ideas, but they have added a huge amount of their own too. Sci-fi and real engineering have always had a strong mutual cross-fertilisation. I have sometimes lectured on science fact v sci-fi, to show that what we eventually achieve is sometimes far better than the sci-fi version (Exhibit A – the rubbish voice synthesisers and storage devices used on Star Trek, TOS).

Glyph

Liara talking to her assistant Glyph. Picture credit: social.bioware.com

In Mass Effect, lots of floating holographic-style orbs float around all over the place for various military or assistant purposes. They aren’t confined to a fixed holographic projection system. Disruptor and battle drones are common, along with a few home/lab/office assistants such as Glyph, who is Liara’s friendly PA, not a battle drone. These aren’t just dumb holograms; they can carry small devices and do stuff. The idea of a floating sphere may have been inspired by Halo’s, but the Mass Effect ones look more holographic and generally nicer. (Think Apple v Microsoft.) Battle drones are highly topical now, but current technology uses wings and helicopters. The drones in sci-fi like Mass Effect and Halo are just free-floating ethereal orbs. That’s what I am talking about now. They aren’t in the distant future. They will be here quite soon.

I recently wrote about how to make force fields and floating cars or hover-boards.

http://timeguide.wordpress.com/2013/06/21/how-to-actually-make-a-star-wars-landspeeder-or-a-back-to-the-future-hoverboard/

Briefly, they work by creating a thick cushion of magnetically confined plasma under the vehicle that can be used to keep it well off the ground, a bit like a hovercraft without a skirt or fans. Layers of confined plasma could also be used to make relatively weak force fields. A key claim of the idea is that you can coat a firm surface with a packed array of steerable electron pipes to make the plasma, and a potentially reconfigurable and self-organising circuit to produce the confinement field. No moving parts, and the coating would simply produce a lifting or propulsion force according to its area.

This is all very easy to imagine for objects with a relatively flat base like cars and hover-boards, but I later realised that the force field bit could be used to suspend additional components, and if they also have a power source, they can add locally to that field. The ability to sense their exact relative positions and instantaneously adjust the local fields to maintain or achieve their desired position means that dynamic self-organisation would allow just about any shape and dynamics to be achieved and maintained. So basically, if you break the levitation bit up, each piece could still work fine. I love self-organisation, and biomimetics generally. I wrote my first paper on hormonal self-organisation over 20 years ago, to show how networks or telephone exchanges could self-organise, and have used it in many designs since. With a few pieces generating external air flow, the objects could wander around. Cunning design using multiple components could therefore be used to make orbs that float and wander around too, even with the inspired moving plates that Mass Effect uses for its drones. It could also be very lightweight and translucent, just like Glyph. Regular readers will not be surprised if I recommend that some of these components should be made of graphene, because it can be used to make wonderful things. It is light, strong, an excellent electrical and thermal conductor, a perfect platform for electronics, can be used to make super-capacitors and so on. Glyph could use a combination of moving physical plates, and use some to add holographic projection – to make it look pretty. So, part physical and part hologram then.
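
To illustrate what that kind of positional self-organisation involves, here is a minimal sketch in which each component knows only the swarm’s own centroid and its assigned offset within a target shape, and repeatedly nudges itself toward that slot. The circle formation, the numbers and the physics-free ‘nudge’ are all illustrative assumptions, nothing to do with real plasma confinement:

```python
import math
import random

N = 12
# Target formation: evenly spaced points on a unit circle.
targets = [(math.cos(2 * math.pi * i / N), math.sin(2 * math.pi * i / N))
           for i in range(N)]
# Components start scattered at random.
positions = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(N)]

def step(positions, gain=0.2):
    """Each component moves a fraction of the way toward its slot, measured
    relative to the swarm centroid, so no external anchor point is needed."""
    cx = sum(p[0] for p in positions) / N
    cy = sum(p[1] for p in positions) / N
    return [(x + gain * ((cx + tx) - x), y + gain * ((cy + ty) - y))
            for (x, y), (tx, ty) in zip(positions, targets)]

for _ in range(50):
    positions = step(positions)

cx = sum(p[0] for p in positions) / N
cy = sum(p[1] for p in positions) / N
worst = max(math.hypot(x - (cx + tx), y - (cy + ty))
            for (x, y), (tx, ty) in zip(positions, targets))
print(f"max distance from assigned slot after 50 steps: {worst:.4f}")
```

Knock a few components out of place, or delete them and respawn replacements, and the same rule pulls the formation back together, which is why self-organised orbs would be so hard to break permanently.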

Plates used in the structure can dynamically attract or repel each other and use tethers, or use confined plasma cushions. They can create air jets in any direction. They would have a small load-bearing capability. Since graphene foam is potentially lighter than helium

http://timeguide.wordpress.com/2013/01/05/could-graphene-foam-be-a-future-helium-substitute/

it could be added into structures to reduce the forces needed. So, we’re not looking at orbs that can carry heavy equipment here, but carrying processing, sensing, storage and comms would be easy. Obviously they could therefore include artificial intelligence at whatever level the state of the art has reached, either on-board, distributed, or via the cloud. Beyond that, it is hard to imagine a small orb carrying more than a few hundred grammes. Nevertheless, it could carry enough equipment to make it very useful indeed for very many purposes. These drones could work pretty much anywhere. Space would be tricky but not that tricky; the drones would just have to carry a little fuel.

But let’s get right to the point. The primary market for this isn’t the home or lab or office, it is the battlefield. Battle drones are being regulated as I type, but that doesn’t mean they won’t be developed. My generation grew up with the nuclear arms race. Millennials will grow up with the drone arms race. And that if anything is a lot scarier. The battle drones on Mass Effect are fairly easy to kill. Real ones won’t.

Mass Effect combat drone. Picture credit: masseffect.wikia.com

If these cute little floating drone things are taken out of the office and converted to military uses, they could do pretty much all the stuff they do in sci-fi. They could have lots of local energy storage using super-caps, so they could easily carry self-organising lightweight lasers or electrical shock weaponry too, or carry steerable mirrors to direct beams from remote lasers, and high-definition 3D cameras and other sensing for reconnaissance. The interesting thing here is that self-organisation of potentially redundant components would allow a free-roaming battle drone that would be highly resistant to attack. You could shoot it for ages with lasers or bullets and it would keep coming. Disruption of its fields by electrical weapons would make it collapse temporarily, but it would just get up and reassemble as soon as you stopped firing. With its intelligence potentially hosted in a local cloud, you could make a small battalion of these that could only be properly killed by totally frazzling them all. They would be potentially lethal individually but almost irresistible as a team. Super-capacitors could be recharged frequently using companion drones to relay power from the rear line. A mist of spare components could provide ready replacements for any that are destroyed. Self-orientation and use of free-space optics for comms make wiring and circuit boards redundant, and sub-millimetre chips 100m away would be quite hard to hit.

Well I’m scared. If you’re not, I didn’t explain it properly.

Reverse engineering the brain is a very slow way to make a smart computer

The race is on to build conscious and smart computers and brain replicas. This article explains some of Markram’s approach: http://www.wired.com/wiredscience/2013/05/neurologist-markam-human-brain/all/

It is a nice project, and its aims are to make a working replica of the brain by reverse engineering it. That would work eventually, but it is slow and expensive and it is debatable how valuable it is as a goal.

Imagine if you want to make an aeroplane from scratch.  You could study birds and make extremely detailed reverse engineered mathematical models of the structures of individual feathers, and try to model all the stresses and airflows as the wing beats. Eventually you could make a good model of a wing, and by also looking at the electrics, feedbacks, nerves and muscles, you could eventually make some sort of control system that would essentially replicate a bird wing. Then you could scale it all up, look for other materials, experiment a bit and eventually you might make a big bird replica. Alternatively, you could look briefly at a bird and note the basic aerodynamics of a wing, note the use of lightweight and strong materials, then let it go. You don’t need any more from nature than that. The rest can be done by looking at ways of propelling the surface to create sufficient airflow and lift using the aerofoil, and ways to achieve the strength needed. The bird provides some basic insight, but it simply isn’t necessary to copy all a bird’s proprietary technology to fly.

Back to Markram. If the real goal is to reverse engineer the actual human brain and make a detailed replica or model of it, then fair enough. I wish him and his team, and their distributed helpers and affiliates, every success with that. If the project goes well, and we can find insights to help with the hundreds of brain disorders and improve medicine, great. A few billion euros will have been well spent, especially given the waste of more billions of euros elsewhere on futile and counter-productive projects. Lots of people criticise his goal, and some of their arguments are nonsensical. It is a good project and, for what it’s worth, I support it.

My only real objection is that a simulation of the brain will not think well and at best will be an extremely inefficient thinking machine. So if a goal is to achieve thought or intelligence, the project as described is barking up the wrong tree. If that isn’t a goal, so what? It still has the other uses.

A simulation can do many things. It can be used to follow through the consequences of an input if the system is sufficiently well modelled. A sufficiently detailed and accurate brain simulation could predict the impacts of a drug, or the behaviours resulting from certain mental processes. It could follow through the impacts and chain of events resulting from an electrical impulse, thus finding out what the eventual result of that will be. It can therefore very inefficiently predict the result of thinking, and by using extremely high speed computation, it could in principle work out the end result of some thoughts. But it needs enormous detail and algorithmic precision to do that, and I doubt it is achievable simply because of the volume of calculation needed. Thinking properly requires consciousness and therefore emulation. A conscious circuit has to be built, not just modelled.

Consciousness is not the same as thinking. A simulation of the brain would not be conscious, even if it can work out the result of thoughts. It is the difference between printed music and played music. One is data, one is an experience. A simulation of all the processes going on inside a head will not generate any consciousness, only data. It could think, but not feel or experience.

Having made that important distinction, I still think that Markram’s approach will prove useful. It will generate many useful insights into the workings of the brain, and into many of the processes nature uses to solve certain engineering problems. These insights and techniques can be used as input into other projects. Biomimetics is already proven as a useful tool in solving big problems. Looking at how the brain works will give us hints on how to make a truly conscious, properly thinking machine. But just as with birds and Airbuses, we can take ideas and inspiration from nature and then do it far better. No bird can carry the weight or fly as high or as fast as an aeroplane. No proper plane uses feathers or flaps its wings.

I wrote recently about how to make a conscious computer:

http://timeguide.wordpress.com/2013/02/15/how-to-make-a-conscious-computer/ and http://timeguide.wordpress.com/2013/02/18/how-smart-could-an-ai-become/

I still think that approach will work well, and it could be a decade faster than going Markram’s route. All the core technology needed to start making a conscious computer already exists today. With funding and some smart minds to set the process in motion, it could be done in a couple of years. The resulting conscious and ultra-smart computer, properly harnessed, could do its research far faster than any human on Markram’s team. It could easily beat them to the goal of a replica brain. The converse is not true: Markram’s current approach would yield a conscious computer only very slowly.

So while I fully applaud the effort and endorse the goals, changing the approach now could give far more bang for the buck, far faster.

The future of music creation

When I was a student, I saw people around me who could play musical instruments and, since I couldn’t, I felt a bit inadequate, so I went out and bought a £13 guitar and taught myself to play. Later, I bought a keyboard and learned to play that too. I’ve never been much good at either, and can’t read music, but if I know a tune I can usually play it by ear, and sometimes I compose, though I never record any of my compositions. Music is highly rewarding, whether listening or creating. I play well enough for my own enjoyment, and there are plenty of others who can play far better to entertain audiences.

Like almost everyone, I listen mostly to music created by others, and today you can access music by a wide range of means. It does seem to me, though, that the music industry is stuck in the 20th century. Even concerts seem primitive compared with what is possible, and so do streaming and download services. For some reason, new technology seems mostly to have escaped the industry’s attention, apart from among a few geeks. There are a few innovative musicians and bands out there, but they represent a tiny fraction of the music industry. Mainstream music is decades out of date.

Starting with the instruments themselves: even electronic instruments produce sound that appears to come from a single location. An electronic violin or guitar is just an electronic version of a violin or guitar; the sound still appears to come from a single point all the way through. It doesn’t throw sound all over the place or use a wide range of dynamic effects to embrace the audience in surround sound. Why not? Why can’t a musician or a technician make the music meander around the listener, creating additional emotional content by getting up close, whispering right into an ear, like a violinist picking out an individual woman in a bar and serenading her? High-quality surround sound systems have been in home cinemas for yonks, and they are certainly easy to arrange in a high-budget concert. Audio shouldn’t stop with stereo. It is surprising just how little use current music makes of existing surround sound capability. It is as if the industry thinks everyone only ever listens on headphones.

Of course, there is no rule that electronic instruments have to be just electronic derivatives of traditional ones, and to be fair, many sounds and effects on keyboards and electric guitars do go a lot further than just emulating traditional variants. But there still seems to be very little innovation in new kinds of instrument to explore dynamic audio effects, especially any that make full use of the space around the musician and audience. With the gesture recognition already available even on an Xbox or PS3, surely we should have a much more imaginative range of potential instruments, where you can make precise gestures, wave or throw your arms, squeeze your hands, make an emotional facial expression or delicately pinch, bend or slide fingers to create effects. Even multi-touch on phones or pads should have made a far bigger impact by now.
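
To make that a bit more concrete, here is a minimal sketch of the kind of mapping such an instrument needs. Everything in it is assumed for illustration: the gesture values are made-up numbers standing in for whatever a depth camera or controller SDK would report, and the synthesis is just a sine wave.

```python
# Sketch: mapping tracked gesture features onto sound parameters.
import numpy as np

SAMPLE_RATE = 44100

def gesture_to_tone(hand_height, hand_spread, pinch, duration=0.5):
    """Map normalised gesture features (0..1) to a short synthesised tone.

    hand_height -> pitch (220-880 Hz), hand_spread -> volume, pinch -> vibrato depth.
    """
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    freq = 220.0 * (2.0 ** (hand_height * 2.0))          # two octaves of pitch range
    vibrato = pinch * 0.01 * np.sin(2 * np.pi * 6 * t)   # gentle 6 Hz wobble
    volume = 0.1 + 0.9 * hand_spread
    return volume * np.sin(2 * np.pi * freq * t * (1.0 + vibrato))

# A rising arm (hand_height increasing) becomes a rising four-note phrase.
phrase = np.concatenate([gesture_to_tone(h, 0.8, 0.5) for h in (0.2, 0.4, 0.6, 0.8)])
```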

(As an aside, ever since I was a child, I have thought that there must be a visual equivalent to music. I don’t know what it is, and probably never will, but surely, there must be visual patterns or effects that can generate an equivalent emotional response to music. I feel sure that one day someone will discover how to generate them and the field will develop.)

The human body is a good instrument itself. Most people can sing to a point, or at least hum or whistle a tune, even if they can’t play an instrument. A musical instrument is really just an unnecessary interface between your brain, which knows what sound you want to make, and an audio production mechanism. Up until the late 20th century, the instrument made the sound; today, outside a live concert at least, it is usually a computer with a digital-to-analogue converter and a speaker attached. Links between computers and people are far better now, so we can bypass the hard-to-learn instrument bit. With thought recognition, nerve monitoring, humming, whistling, gesture and expression recognition and so on, there is a very rich output from the body that could be used far more intuitively and directly to generate the sound. You shouldn’t have to learn how to play an instrument in the 21st century. The sound creation process should interface almost as directly and intuitively to your brain as your own body does. If you can hum it, you can play it. Or you should be able to, if the industry were keeping up.
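
The “if you can hum it, you can play it” idea is already well within reach. Below is a rough sketch using simple autocorrelation pitch detection on a short audio frame; the hum here is synthetic, standing in for microphone input, and a real system would of course need to track pitch continuously and cope with noise.

```python
# Sketch: estimate the pitch of a hummed note with autocorrelation, then
# round it to the nearest MIDI note so it could drive a synthesiser.
import numpy as np

SAMPLE_RATE = 44100

def estimate_pitch(frame, fmin=80.0, fmax=1000.0):
    """Return an estimated fundamental frequency (Hz) for one audio frame."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(SAMPLE_RATE / fmax), int(SAMPLE_RATE / fmin)
    lag = lo + np.argmax(corr[lo:hi])
    return SAMPLE_RATE / lag

def to_midi_note(freq):
    return int(round(69 + 12 * np.log2(freq / 440.0)))

# Synthetic "hum" at roughly 220 Hz stands in for microphone input.
t = np.linspace(0, 0.1, int(SAMPLE_RATE * 0.1), endpoint=False)
hum = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 440 * t)
print(to_midi_note(estimate_pitch(hum)))   # ~57, i.e. the note A3
```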

Going a bit further, most of us have some idea of the sort of music or effect we want to create, but don’t have the musical experience or skill to realise it. A skilled composer may be able to write something down right away to achieve a musical effect that the rest of us would struggle even to sketch. So, add some AI. Most music is based on fairly straightforward mathematical principles; even symphonies are mostly combinations of effects and sequences that fit well within AI-friendly guidelines. We use calculators to do calculations, so why not use AI to help compose music? Any of us should be able to compose great music with tools we could build now. It shouldn’t be the future; it should be the present.
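
As a toy illustration of how far simple mathematical rules can go, here is a sketch of rule-based composition: a first-order Markov chain learns note-to-note transitions from a short seed melody and then generates new ones in a similar style. The seed melody and note names are invented for the example; real compositional AI would work with much richer models.

```python
# Toy illustration of rule-based composition: learn note-to-note transitions
# from a seed melody, then generate a new melody by sampling them.
import random
from collections import defaultdict

seed_melody = ["C", "D", "E", "C", "E", "G", "E", "D", "C", "D", "E", "E", "D"]

transitions = defaultdict(list)
for a, b in zip(seed_melody, seed_melody[1:]):
    transitions[a].append(b)

def compose(length=16, start="C"):
    """Generate a new melody by sampling the learned transitions."""
    note, melody = start, [start]
    for _ in range(length - 1):
        note = random.choice(transitions[note]) if transitions[note] else start
        melody.append(note)
    return melody

print(compose())
```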

Let’s look at music distribution. When we buy a music track or stream it, why do we still only get the audio? Why isn’t the music video included by default? Sure, you can watch on YouTube, but then you generally get low-quality audio and video. Why isn’t purchased music delivered at the highest quality, with full HD 3D video included, or several videos if the band has made a few, with the latest ones added as they emerge? If a video is available for music video channels, it surely should be available to those who have bought the music. That it isn’t reflects the contempt the music industry generally shows its customers. It treats us as a bunch of thieves who must only ever be given the least possible access for the greatest possible outlay, to make up for all the times we must of course be stealing from them. That attitude has to change if the industry is to achieve its potential.

Augmented reality is emerging now. It already offers some potential to add overlays at concerts, but in a few years, when video visors are commonplace, we should expect to see band members playing up in the air or flying around the audience, virtual band members, and cartoon and fantasy creations all over the place doing all sorts of things, with visual special effects overlaying the sound effects. Concerts will be a spectacular opportunity to blend the best of the visual, audio, dance, storytelling, gaming and musical arts. Concerts could be much more exciting if they used the technology’s potential. Will they? I guess we’ll have to wait and see. Much of this could be done already, but only a little is.

Now let’s consider the emotional connection between a musician and the listener. We are all very aware of the intense (though one-sided) relationship teens often build with their pop idols. They may follow them on Twitter and other social networks as well as listening to their music and buying their posters. Augmented reality will let them go much further still. They could have their idol with them pretty much all the time, virtually present in their field of view, maybe even walking hand in hand, maybe even kissing them. The potential spectrum extends from distant listening to intimate cuddles. Bearing in mind especially the ages of many fans, how far should we allow this to go, and how could it be policed?

Clothing adds potential to the emotional content during listening too. Headphones are fine for the information part of audio, but the lack of stomach-throbbing sound limits the depth of the experience. Music is more than information. Some music is only half there if it isn’t at the right volume. I know from personal experience that not everyone seems to understand this, but turning the volume down (or indeed up) sometimes destroys the emotional content. Sometimes you have to feel the music, sometimes let it fully conquer your senses. Already, people are experimenting with clothes that can house electronics, some that flash on and off in sync with the music, and some whose fibres will be able to contract and expand under electronic control. You will be able to buy clothes that give you the same vibration you would otherwise get from a sub-woofer or a rock concert.
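
As a rough sketch of how such clothing might be driven, the code below extracts a crude bass-energy envelope from an audio signal and uses it to set a fibre contraction level. The actuator call is a placeholder, and the filtering is a deliberately simple moving average rather than proper DSP.

```python
# Sketch: derive a bass "thump" envelope from audio to drive vibrating fibres.
import numpy as np

SAMPLE_RATE = 44100

def bass_envelope(audio, cutoff_hz=120, frame=1024):
    """Crudely low-pass the audio, then return a normalised per-frame energy."""
    window = int(SAMPLE_RATE / cutoff_hz)
    lowpassed = np.convolve(audio, np.ones(window) / window, mode="same")
    frames = lowpassed[: len(lowpassed) // frame * frame].reshape(-1, frame)
    energy = np.sqrt((frames ** 2).mean(axis=1))
    return energy / (energy.max() or 1.0)

def drive_fibres(level):
    """Placeholder for a real actuator interface in the garment."""
    print(f"fibre contraction: {level:.2f}")

t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
track = np.sin(2 * np.pi * 60 * t) * (np.sin(2 * np.pi * 2 * t) > 0)  # pulsing bass
for level in bass_envelope(track)[::10]:
    drive_fibres(level)
```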

Further down the line, we will be able to connect IT directly into the nervous system. Active skin is not far away. Inducing voltages and currents in nerves via tiny implants, or onplants on patches of skin, will allow computers to generate sensations directly.

This augmented reality and a link to the nervous system give another whole dimension to telepresence. Band members at a concert will be able to play right in front of individual audience members, shake them, even cuddle them. The emotional connection could be much stronger.

Picking up electrical clues from the skin allows automated music selection according to the wearer’s emotional state. Even simple properties like skin conductivity can give clues about emotional state. Depending on your stress level, for example, music could be played that soothes you, or if you feel calm, more stimulating tracks could be played. Playlists would thus adapt to how you feel.
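
A minimal sketch of that adaptation loop might look like the following. The conductance thresholds and playlists are purely hypothetical placeholders; real emotion inference would need calibration per wearer and more signals than skin conductance alone.

```python
# Sketch: pick the next track from a crude stress estimate based on skin
# conductance. Thresholds and playlists are arbitrary illustrative values.
import random

PLAYLISTS = {
    "soothing":    ["slow ambient 1", "slow ambient 2"],
    "neutral":     ["mid-tempo 1", "mid-tempo 2"],
    "stimulating": ["upbeat 1", "upbeat 2"],
}

def choose_next_track(conductance_microsiemens):
    if conductance_microsiemens > 8.0:      # high arousal -> calm the listener
        mood = "soothing"
    elif conductance_microsiemens < 3.0:    # relaxed -> something livelier
        mood = "stimulating"
    else:
        mood = "neutral"
    return random.choice(PLAYLISTS[mood])

print(choose_next_track(9.2))   # e.g. "slow ambient 1"
```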

Finally, music is a social thing too. It brings people together in shared experiences. This is especially true for the musicians, but audience members often feel some shared experience too: atmosphere. Social networking already sees some people sharing what music they are listening to (I don’t want to share my tastes, but I recognise that some people do, and that’s fine). Where shared musical taste is important to a social group, it could be enhanced by providing tools that enable shared composition. AI can already write music in particular styles: you can feed Mozart or Beethoven into some music generators and they will produce music that sounds as if it had been composed by that person, and they can compose it as fast as it comes out of the speakers. AI could take style preferences from a small group of people and produce music that fits across those styles. The result is a sort of tribal music, representative of the tribe that generated it. In this way, music could become even more of a social tool in the future than it already is.
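
One crude way such a “tribal” generator could start is simply by blending the group’s style preferences into a single set of generation parameters, as in this sketch; the names and parameters are invented for illustration.

```python
# Sketch: blend a small group's style preferences into one set of
# generation parameters ("tribal" settings). Values are illustrative.
preferences = {
    "alice": {"tempo_bpm": 90,  "energy": 0.3, "minor_key_bias": 0.7},
    "bob":   {"tempo_bpm": 128, "energy": 0.9, "minor_key_bias": 0.2},
    "carol": {"tempo_bpm": 110, "energy": 0.6, "minor_key_bias": 0.4},
}

def tribal_style(prefs):
    keys = next(iter(prefs.values())).keys()
    return {k: sum(p[k] for p in prefs.values()) / len(prefs) for k in keys}

print(tribal_style(preferences))
# roughly {'tempo_bpm': 109.3, 'energy': 0.6, 'minor_key_bias': 0.43}
```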