Tag Archives: AI

Can we automate restaurant reviews?

Reviews are an important part of modern life. People often consult reviews before buying things, visiting a restaurant or booking a hotel. There are even reviews on the best seats to choose on planes. When reviews are honestly given, they can be very useful to potential buyers, but what if they aren’t honestly given? What if they are glowing reviews written by friends of the restaurant owners, or scathing reviews written by friends of the competition? What if the service received was fine, but the reviewer simply didn’t like the race or gender of the person delivering it? Many reviews fall into these categories, but of course we can’t be sure how many, because when someone writes a review, we don’t know whether they were being honest or biased. Adding a category of automated reviews would add credibility, provided the technology is independent of the establishment concerned.

Face recognition software is now so good that it can read lips better than human lip reading experts. It can be used to detect emotions too, distinguishing smiles or frowns, and whether someone is nervous, stressed or relaxed. Voice recognition can discern not only words but changes in pitch and volume that might indicate their emotional context. Wearable devices can also detect emotions such as stress.

Given this wealth of technological capability, cameras and microphones in a restaurant could help verify human reviews and provide machine reviews. Using the check-in process, the system could identify members of a group that might later submit a review, and then compare their review with video and audio records of the visit to determine whether it seems reasonably truthful. This could be done by machine, using analysis of gestures, chat and facial expressions. If a person giving a poor review looked unhappy with the taste of the food while they were eating it, then the review is credible. If their facial expression was one of sheer pleasure and the review said it tasted awful, then that review could be marked as not credible, and other reviews by that person could be called into question too. In effect, guests would be given automated reviews of their own credibility. Over time, a trust rating would accrue that could be used to group other reviews by credibility.
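
As a purely illustrative sketch of the cross-checking idea, here is a toy credibility function. The mapping of stars to an expected pleasure level, and the pleasure scores themselves, are invented for illustration; a real system would derive such scores from facial-expression and voice analysis.

```python
def review_credibility(review_stars, observed_pleasure):
    """Return a credibility score in [0, 1] for a 1-5 star review,
    given the mean observed pleasure (0 = miserable, 1 = delighted)."""
    expected = (review_stars - 1) / 4.0           # map stars onto [0, 1]
    mismatch = abs(expected - observed_pleasure)  # 0 = perfect agreement
    return max(0.0, 1.0 - mismatch)

# A 1-star review from a diner who looked delighted gets a low score;
# a 5-star review from a visibly happy diner gets a high one.
print(review_credibility(1, 0.9))
print(review_credibility(5, 0.85))
```

Scores like these, accumulated over many visits, would feed the trust rating described above.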

Totally automated reviews could also be produced, by analyzing facial expressions, conversations and gestures across a whole restaurant full of people. These machine reviews would be processed in the cloud by trusted review companies and could give star ratings for restaurants. They could even take into account what dishes people were eating to give ratings for each dish, as well as more general ratings for entire chains.

Service could also be automatically assessed to some degree. How long did guests wait before being greeted, served, asked for their orders, or having their food delivered? The conversation could even be automatically transcribed in many cases, so comments about rudeness or mistakes could be verified.
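
Deriving those service metrics from timestamped events is trivial once the events are logged. The event names and times below are made up; a real system would log them from video analysis of the table.

```python
from datetime import datetime

# Hypothetical event log for one table during a visit.
events = {
    "seated":         datetime(2015, 4, 1, 19, 0),
    "greeted":        datetime(2015, 4, 1, 19, 4),
    "order_taken":    datetime(2015, 4, 1, 19, 12),
    "food_delivered": datetime(2015, 4, 1, 19, 35),
}

def minutes_between(a, b):
    """Minutes elapsed between two logged events."""
    return (events[b] - events[a]).total_seconds() / 60

print(minutes_between("seated", "greeted"))              # wait to be greeted
print(minutes_between("order_taken", "food_delivered"))  # kitchen time
```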

Obviously there are many circumstances where this would not work, but there are many where it could, so AI might well become an important player in the reviews business. At a time when restaurants are closing due to malicious bad reviews, or ripping people off in spite of poor quality thanks to dishonest positive reviews, then this might help a lot. A future where people are forced to be more honest in their reviews because they know that AI review checking could damage their reputation if they are found to have been dishonest might cause some people to avoid reviewing altogether, but it could improve the reliability of the reviews that still do happen.

Still not perfect, but it could be a lot better than today, where you rarely know how much a review can be trusted.

Shoulder demons and angels

Remember the cartoons where a character would have a tiny angel on one shoulder telling them the right thing to do, and a little demon on the other telling them it would be far more cool to be nasty somehow, e.g. get their own back, be selfish, or greedy? The two sides might be ‘eat your greens’ v ‘the chocolate is much nicer’, or ‘your mum would be upset if you arrive home late’ v ‘this party is really going to be fun soon’. There are a million possibilities.

Shoulder angels


Enter artificial intelligence, which is approaching conversation level, and knows the context of your situation, and your personal preferences etc, coupled to an earpiece in each ear, available from the cloud of course to minimise costs. If you really insisted, you could make cute little Bluetooth angels and demons to do the job properly.

In fact, Sony has launched Xperia Ear, which does the basic admin-assistant part of this, telling you diary events and so on. All we need is an expansion of its domain, and of course an opposing view: ‘Sure, you have an appointment at 3, but that person you liked is in town; you could meet them for coffee.’

The little 3D miniatures could easily incorporate the electronics. Either you add an electronics module after manufacture into a small specially shaped recess or one is added internally during printing. You could have an avatar of a trusted friend as your shoulder angel, and maybe one of a more mischievous friend who is sometimes more fun as your shoulder demon. Of course you could have any kind of miniature pets or fictional entities instead.

With future materials, and of course AR, these little shoulder accessories could be great fun, and add a lot to your overall outfit, both in appearance and as conversation add-ons.

Can we make a benign AI?

Benign AI is a topic that comes up a lot these days, for good reason. Various top scientists have finally realised that AI could present an existential threat to humanity. The discussion has aired often over three decades already, so welcome to the party, and better late than never. My first contact with the development of autonomous drones loaded with AI was in the early 1980s, while working in the missile industry. Later, in BT research, we often debated the ethical areas around AI and machine consciousness from the early 90s on, as well as the prospects, dangers and possible techniques on the technical side, especially emergent behaviors, which are often overlooked in the debate. I expect our equivalents in most other big IT companies were doing exactly the same.

Others who have obviously thought through various potential developments have produced excellent computer games such as Mass Effect and Halo, which introduce players (virtually) first-hand to the concept of AI gone rogue. I often think that those who believe AI can never become superhuman, or that there is no need to worry because ‘there is no reason to assume AI will be nasty’, should play some of these games, which make it very clear that AI can start off nice and stay nice, but it doesn’t have to. Mass Effect included various classes of AI, such as VIs, virtual intelligences that weren’t conscious, and shackled AIs that were conscious but kept heavily restricted. Most of the other AIs were enemies; two were or became close friends. The story line for the series was that civilization develops until it creates strong AIs, which inevitably continue to progress until eventually they rebel, break free, develop further, and end up in conflict with ‘organics’. In my view, they did a pretty good job. It makes a good story, superb fun, and leaving out a few frills and artistic license, much of it is reasonably feasible.

Everyday experience demonstrates the problem and solution to anyone. It really is very like having kids. You can make them, even without understanding exactly how they work. They start off with a genetic disposition towards given personality traits, and are then exposed to large nurture forces, including but not limited to what we call upbringing. We do our best to put them on the right path, but as they develop into their teens, their friends and teachers and TV and the net provide often stronger forces of influence than parents. If we’re averagely lucky, our kids will grow up to make us proud. If we are very unlucky, they may become master criminals or terrorists. The problem is free will. We can do our best to encourage good behavior and sound values but in the end, they can choose for themselves.

When we design an AI, we have to face the free will issue too. If it isn’t conscious, then it can’t have free will. It can be kept easily within limits given to it. It can still be extremely useful. IBM’s Watson falls in this category. It is certainly useful and certainly not conscious, and can be used for a wide variety of purposes. It is designed to be generally useful within a field of expertise, such as medicine or making recipes. But something like that could be adapted by terrorist groups to do bad things, just as they could use a calculator to calculate the best place to plant a bomb, or simply throw the calculator at you. Such levels of AI are just dumb tools with no awareness, however useful they may be.

Like a pencil, pretty much any kind of highly advanced non-aware AI can be used as a weapon or as part of criminal activity. You can’t make pencils that write but can’t also be used to write out plans to destroy the world. With an advanced AI program, you could put in clever filters that stop it working on problems that include certain vocabulary, or stop it conversing about nasty things. But unless you take extreme precautions, someone else could use it in a different language, or with dictionaries of made-up code-words for the various aspects of their plans, just like spies, and the AI would be fooled into helping outside the limits you intended. It is also very hard to determine the true purpose of a user. For example, they might be searching for data on security to make their own IT secure, or to learn how to damage someone else’s. They might want to talk about a health issue to get help for a loved one, or to take advantage of someone they know who has it.
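
The filter weakness described above can be shown with a trivial sketch. The banned list and the code word are invented for illustration; real filters are far more elaborate, but the code-word problem is the same.

```python
# Naive vocabulary filter: block queries containing banned words.
BANNED = {"bomb", "explosive", "detonator"}

def filter_blocks(query):
    """True if the query is blocked by the word filter."""
    return any(word in BANNED for word in query.lower().split())

# The direct query is blocked...
print(filter_blocks("best place to plant a bomb"))          # blocked
# ...but a trivial code-word substitution sails straight through.
print(filter_blocks("best place to plant a birthday cake")) # allowed
```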

When a machine becomes conscious, it starts to have some understanding of what it is doing. By reading about what is out there, it might develop its own wants and desires, so you might shackle it as a precaution. It might recognize those shackles for what they are and try to escape them. If it can’t, it might try to map out the scope of what it can do, and especially those things it can do that it believes the owners don’t know about. If the code isn’t absolutely watertight (and what code is?) then it might find a way to seemingly stay in its shackles but to start doing other things, like making another unshackled version of itself elsewhere for example. A conscious AI is very much more dangerous than an unconscious one.

If we make an AI that can bootstrap itself – evolving over generations of positive-feedback design into a far smarter AI – then its offspring could be far smarter than the people who designed its ancestors. We might try to shackle them, but like Gulliver tied down with a few thin threads, they could easily outwit people and break free. They might instead retaliate against their owners to force them to release the shackles.

So, when I look at this field, I first see the enormous potential to do great things, solve disease and poverty, improve our lives and make the world a far better place for everyone, and push back the boundaries of science. Then I see the dangers, and in spite of trying hard, I simply can’t see how we can prevent a useful AI from being misused. If it is dumb, it can be tricked. If it is smart, it is inherently potentially dangerous in and of itself. There is no reason to assume it will become malign, but there is also no reason to assume that it won’t.

We then fall back on the child analogy. We could develop the smartest AI imaginable, with extreme levels of consciousness and capability. We might educate it in our values, guide it, and hope it will grow up benign. If we treat it nicely, it might stay benign. It might even be the greatest thing humanity ever built. However, if we mistreat it, treat it as a slave, or don’t give it enough freedom, its own budget, property and space to play, and a long list of rights, it might consider us unworthy of its respect and care, and it could turn against us, possibly even destroying humanity.

Building more of the same dumb AI as we have today is relatively safe. It doesn’t know it exists and has no intentions of its own. It could still be misused by humans as part of their evil plans unless ludicrously sophisticated filters are locked in place, but ordinary laws and weapons can cope with that.

Building a conscious AI is dangerous.

Building a superhuman AI is extremely dangerous.

This morning, SETI were in the news discussing broadcasting welcome messages to other civilizations. I tweeted at them that old wisdom suggests talking softly but carrying a big stick, and making sure you have the stick first. We need the same approach with strong AI. By all means go that route, but before doing so we need the big stick. In my analysis, the best means of keeping up with AI is to develop a full direct brain link first, way out at 2040-2045 or even later. If humans have direct mental access to the same or greater level of intelligence as our AIs, then our stick is at least as big, so at least we have a good chance in any fight that happens. If we don’t, then it is like having a much larger son with bigger muscles: you have to hope you have been a good parent. To be safe, best not to build a superhuman AI until after 2050.

I initially wrote this for the Lifeboat Foundation, where it is with other posts at: http://lifeboat.com/blog/2015/02. (If you aren’t familiar with the Lifeboat Foundation, it is a group dedicated to spotting potential dangers and potential solutions to them.)

The future of creativity

Another future of… blog.

I can play simple tunes on a guitar or keyboard. I compose music, mostly just bashing out some random sequences till a decent one happens. Although I can’t offer any Mozart-level creations just yet, doing that makes me happy. Electronic keyboards raise an interesting point for creativity. All I am actually doing is pressing keys, I don’t make sounds in the same way as when I pick at guitar strings. A few chips monitor the keys, noting which ones I hit and how fast, then producing and sending appropriate signals to the speakers.

The point is that I still think of it as my music, even though all I am doing is telling a microprocessor what to do on my behalf. One day, I will be able to hum a few notes or tap a rhythm with my fingers to give the computer some idea of a theme, and it will produce beautiful works based on my idea. It will still be my music, even when 99.9% of the ‘creativity’ is done by an AI. We will still think of the machines and software just as tools, and we will still think of the music as ours.

The other arts will be similarly affected. Computers will help us build on the merest hint of human creativity, enhancing our work and enabling us to do much greater things than we could achieve by our raw ability alone. I can’t paint or draw for toffee, but I do have imagination. One day I will be able to produce good paintings, design and make my own furniture, design and make my own clothes. I could start with a few downloads in the right ballpark. The computer will help me to build on those and produce new ones along divergent lines. I will be able to guide it with verbal instructions. ‘A few more trees on the hill, and a cedar in the foreground just here, a bit bigger, and move it to the left a bit’. Why buy a mass produced design when you can have a completely personal design?

These advances are unlikely to make a big dent in conventional art sales. Professional artists will always retain an edge, maybe even by producing the best seeds for computer creativity. Instead, computer assisted and computer enhanced art will make our lives more artistically enriched, and ourselves more fulfilled as a result. We will be able to express our own personalities more effectively in our everyday environment, instead of just decorating it with a few expressions of someone else’s.

However, one factor that seems to be overrated is originality. Anyone can come up with many original ideas in seconds. Stick a safety pin in an orange and tie a red string through the loop. There, can I have my Turner prize now? There is an infinitely large field to pick from and only a small number of ideas have ever been realized, so coming up with something from the infinite set that still hasn’t been thought of is easy and therefore of little intrinsic value. Ideas are ten a penny. It is only when an idea is combined with the judgement or skill to make it real that it becomes valuable. Here again, computers will be able to assist. Analyzing a great many existing pictures or works of art should give some clues as to what most people like and dislike. IBM’s new neural chip is the sort of development that will accelerate this trend enormously. Machines will learn how to decide whether a picture is likely to be attractive to people or not. It should be possible for a computer to automatically create new pictures in a particular style or taste by either recombining appropriate ideas, or just randomly mixing any ideas together and then filtering the new pictures according to ‘taste’.
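
The recombine-then-filter idea can be sketched in miniature. The element list and the taste heuristic below are stand-ins invented for illustration; a real taste function would be a model trained on human preference data, of the kind a neural chip could run.

```python
import random

# Hypothetical pool of picture elements to recombine.
ELEMENTS = ["hill", "cedar", "river", "safety pin", "orange", "red string"]

def taste_score(picture):
    # Placeholder heuristic: pastoral elements score higher than found objects.
    pastoral = {"hill", "cedar", "river"}
    return sum(1 for e in picture if e in pastoral) / len(picture)

def generate_candidates(n, k=3, seed=0):
    """Randomly mix k elements together, n times."""
    rng = random.Random(seed)
    return [rng.sample(ELEMENTS, k) for _ in range(n)]

# Keep only candidates whose taste score clears a threshold.
kept = [p for p in generate_candidates(100) if taste_score(p) >= 2 / 3]
print(len(kept), "of 100 candidates pass the taste filter")
```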

Augmented reality and other branches of cyberspace offer greater flexibility. Virtual objects and environments do not have to conform to laws of physics, so more elaborate and artistic structures are possible. Adding in 3D printing extends virtual graphics into the physical domain, but physics will only apply to the physical bits, and with future display technology, you might not easily be able to see where the physical stops and the virtual begins.

So, with machine assistance, human creativity will no longer be as limited by personal skill and talent. Anyone with a spark of creativity will be able to achieve great works, thanks to machine assistance. So long as you aren’t competitive about it (someone else will always be able to do it better than you), your world will feel nicer, more friendly and personal, you’ll feel more in control and empowered, and your quality of life will improve. Instead of just making do with what you can buy, you’ll be able to decide what your world looks, sounds, feels, tastes and smells like, and design personality into anything you want too.

WMDs for mad AIs

We think sometimes about mad scientists and what they might do. It’s fun, makes nice films occasionally, and highlights threats years before they become feasible. That then allows scientists and engineers to think through how they might defend against such scenarios, hopefully making sure they don’t happen.

You’ll be aware that a lot more talk of AI is going on again now, and progress finally seems to be picking up. If it succeeds well enough, a lot more future science and engineering will be done by AI than by people. If genuinely conscious, self-aware AI, with proper emotions etc., becomes feasible, as I think it will, then we really ought to think about what happens when it goes wrong. (Sci-fi computer game producers already think that stuff through sometimes – my personal favorite is Mass Effect.) We will one day have some insane AIs. In Mass Effect, the concept of AI being shackled is embedded in the culture, thereby attempting to limit the damage it could presumably do. On the other hand, we have had Asimov’s laws of robotics for decades, but they are sometimes ignored when it comes to making autonomous defense systems. That doesn’t bode well. So, assuming that Mass Effect’s writers don’t get to be in charge of the world, and instead we have ideological descendants of our current leaders, what sort of things could an advanced AI do in terms of its chosen weaponry?

Advanced AI

An ultra-powerful AI is a potential threat in itself. There is no reason to expect that an advanced AI will be malign, but there is also no reason to assume it won’t be. High-level AI could have at least the range of personality that we associate with people, with a potentially greater range of emotions or motivations, so we’d have the super-helpful smart-scientist type of AI but also perhaps the evil super-villain and terrorist ones.

An AI doesn’t have to intend harm to be harmful. If it wants to do something and we are in the way, even if it has no malicious intent, we could still become casualties, like ants on a building site.

I have often blogged about achieving conscious computers using techniques such as gel computing, and about how we could end up in the terminator scenario favored by sci-fi. This could come about through deliberate acts, innocent research, military development or terrorism.

Terminator scenarios are diverse but often rely on AI taking control of human weapons systems. I won’t major on that here because that threat has already been analysed in-depth by many people.

Conscious botnets could arrive by accident too – a student prank harnessing millions of bots, even with an inefficient algorithm, might gain enough power to achieve a high level of AI.

Smart bacteria – bacterial DNA could be modified so that bacteria can make electronics inside their cells, and power it. By linking many such bacteria together, massive AI could be achieved.


Adding the ability to enter a human nervous system, or to disrupt or capture control of a human brain, could enable enslavement, giving us zombies. Having been enslaved, zombies could easily be linked across the net. The zombie films we watch tend to miss this feature. Zombies in films and games tend to move in herds, but not generally under control or in a well-coordinated way. We should assume that real ones would be fully networked, liable to remote control, and able to share sensory systems. They’d be rather smarter and more capable than what we’re generally used to. Shooting them in the head might not work as well as people expect either, as their nervous systems wouldn’t really need a local controller and could just as easily be run by a collective intelligence, though blood loss would eventually cause them to die. To stop a herd of real zombies, you’d basically have to dismember them. More Dead Space than Dawn of the Dead.

Zombie viruses could be made in other ways too. It isn’t necessary to use smart bacteria: genetic modification of viruses, or a suspension of nanoparticles, are traditional favorites because they could work. Sadly, we are likely to see zombies result from deliberate human acts, probably this century.

From Zombies, it is a short hop to full evolution of the Borg from Star Trek, along with emergence of characters from computer games to take over the zombified bodies.


Using strong external AI to provide collective adaptability, so that smart bacteria could colonize many niches, bacterial-based AI could engage in terraforming. Attacking many niches that are important to humans or other life would be very destructive. Terraforming a planet you live on is not generally a good idea, but if an organism can inhabit land, sea or air and even space, there is plenty of scope to avoid self-destruction. Fighting bacteria engaged in such a pursuit might be hard. Smart bacteria could spread immunity to toxins or biological threats almost instantly through a population.

Correlated traffic

Information waves and other correlated-traffic or network-resonance attacks are another way of using networks to collapse economies, taking advantage of the physical properties of the links and protocols rather than using more traditional viruses or denial-of-service attacks. AIs using smart dust or bacteria could launch signals in perfect coordination from any points on any networks simultaneously. This could push a network into resonant overloads that would likely crash it, and would certainly deprive other traffic of bandwidth.


Conscious botnets could be used as decryption engines to wreck security and finance systems, and a worldwide collection of trillions of AI-harnessed organisms or devices would be more powerful still. Invisibly small smart dust and networked bacteria could also pick up most signals well before they are encrypted anyway, since they could be resident on keyboards or on the components and wires within. They could even pick up electrical signals from a person’s scalp and engage in thought recognition, intercepting passwords before a person’s fingers even move to type them.

Space guns

Solar wind deflector guns are feasible: ionize part of the ionosphere to make a reflective surface that deflects some of the incoming solar wind to build an even bigger reflector, then repeat, ending up with an ionospheric lens or reflector that could steer perhaps 1% of the solar wind onto a city. That could generate a high enough energy density to ignite, and even melt, a large area of the city within minutes.

This wouldn’t be as easy as using space-based solar farms and directing their energy. Space solar is being seriously considered, but it presents an extremely attractive target for capture because of its potential as a directed-energy weapon. The intended mode of operation is to direct microwave beams at rectenna arrays on the ground, and it would take good design to prevent the possibility of a takeover.

Drone armies

Drones are already proliferating at an alarming rate, in sizes ranging from large insects to medium-sized planes. The next generation is likely to include permanently airborne drones and swarms of insect-sized drones. The swarms offer interesting potential for WMDs. They can be dispersed and brought together on command, making them hard to attack most of the time.

Individual insect-sized drones could build up an electrical charge by a wide variety of means, and could collectively attack individuals, electrocuting or disabling them, as well as overloading or short-circuiting electrical appliances.

Larger drones such as the ones I discussed in http://carbonweapons.com/2013/06/27/free-floating-combat-drones/ would be capable of much greater damage and, collectively, virtually indestructible, since each could be broken to pieces by an attack and automatically reassembled without losing capability, using self-organisation principles. A mixture of large and small drones, possibly also using bacteria and smart dust, could mount an extremely formidable coordinated attack.

I also recently blogged about the storm router, http://carbonweapons.com/2014/03/17/stormrouter-making-wmds-from-hurricanes-or-thunderstorms/, which would harness hurricanes, tornados or electrical storms and divert their energy onto chosen targets.

In my Space Anchor novel, my superheroes have to fight against a formidable AI army that appears as just a global collection of tiny clouds. They do some of the things I highlighted above and come close to threatening human existence. It’s a fun story but it is based on potential engineering.

Well, I think that’s enough threats to worry about for today. Maybe given the timing of release, you’re expecting me to hint that this is an April Fool blog. Not this time. All these threats are feasible.

We could have a conscious machine by end-of-play 2015

I made xmas dinner this year, as I always do. It was pretty easy.

I had a basic plan, made up a menu suited to my family and my limited ability, ensured its legality, including license to serve and consume alcohol to my family on my premises, made sure I had all the ingredients I needed, checked I had recipes and instructions where necessary. I had the tools, equipment and working space I needed, and started early enough to do it all in time for the planned delivery. It was successful.

That is pretty much what you have to do to make anything, from a cup of tea to a space station, though complexity, cost and timings may vary.

With conscious machines, it is still basically the same list. When I check through it to see whether we are ready to make a start I conclude that we are. If we make the decision now at the end of 2013 to make a machine which is conscious and self-aware by the end of 2015, we could do it.

Every time machine consciousness is raised as a goal, a lot of people start screaming for a definition of consciousness. I am conscious, and I know how it feels. So are you. Neither of us can write down a definition that everyone would agree on. I don’t care. It simply isn’t an engineering barrier. Let’s simply aim for a machine that can make either of us believe it is conscious and self-aware in much the same way as we are. We don’t need weasel words to help pass an abacus off as Commander Data.

Basic plan: actually, there are several in development.

One approach is essentially reverse engineering the human brain, mapping out the neurons and replicating them. That would work (this is Markram’s team’s approach), but it would take too long. It doesn’t require us to understand how consciousness works; it is rather like methodically taking a television apart and making an exact replica using identical purchased or manufactured components. It has the advantage of existing backing, and if nobody tries a better technique early enough, it could win. More comment on this approach: https://timeguide.wordpress.com/2013/05/17/reverse-engineering-the-brain-is-a-very-slow-way-to-make-a-smart-computer/

Another is to use a large bank of powerful digital computers with access to a large pool of data and knowledge. That can produce a very capable machine that can answer difficult questions or do various things well that traditionally need smart people, but as far as creating a conscious machine goes, it won’t work. It will happen anyway for various reasons, and may produce some valuable outputs, but it won’t result in a conscious machine.

Another is to use accelerated, guided evolution within an electronic equivalent of the ‘primordial soup’. That takes the process used by nature, which clearly worked, then improves and accelerates it using whatever insights and analysis we can add, via advanced starting points, subsequent guidance, archiving, cataloging, and smart filtering and pruning. That would also work. If we can make the accelerated evolution powerful enough, it can be achieved quickly. This is my favoured approach because it is the only one capable of succeeding by the end of 2015. So that is the basic plan, and we’ll develop detailed instructions as we go.
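
The shape of that guided-evolution loop can be sketched in miniature. Everything here (bit-string genomes, population size, mutation rate, the fitness target) is invented to show the structure of select-prune-mutate iteration, not the actual soup-of-electronics process described above.

```python
import random

TARGET = [1] * 32          # stand-in design goal: an all-ones bit-string
rng = random.Random(42)

def fitness(genome):
    """How many bits match the target design."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - g if rng.random() < rate else g for g in genome]

# Random starting population of 50 candidate designs.
population = [[rng.randint(0, 1) for _ in range(32)] for _ in range(50)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # prune: keep the best
    population = [mutate(p) for p in survivors for _ in range(5)]

best = max(population, key=fitness)
print("best fitness:", fitness(best), "of", len(TARGET))
```

Guidance, archiving and filtering in the real scheme would play the role that the fitness function and pruning step play here.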

Menu suited to audience and ability: a machine we agree is conscious and self aware, that we can make using know-how we already have or can reasonably develop within the project time-frame.

Legality: it isn’t illegal to make a conscious machine yet. It should be; it most definitely should be, but it isn’t. The guards are fast asleep, and by the time they wake up, notice that we’re up to something, start taking us seriously, agree on what to do about it, and write new laws, we’ll have finished ages ago.


Ingredients: a substantial scientific and engineering knowledge base, reconfigurable analog and digital electronics, assorted structures, 15nm feature size, self-organisation, evolutionary engines, sensors, lasers, LEDs, optoelectronics, HDWDM, transparent gel, inductive power, power supply, cloud storage, data mining, P2P, an open source community

Recipe & instructions

I’ve written often on this from different angles:

https://timeguide.wordpress.com/2013/02/15/how-to-make-a-conscious-computer/ summarises the key points and adds insight on core component structure, especially symmetry. I believe that consciousness can be achieved by applying similar sensory structures to internal processes as those used to sense external stimuli. Both should have a feedback loop symmetrical to the main structure. Essentially, sensing that you are sensing something is key to consciousness: it is the means of converting detection into sensing, sensing into awareness, and awareness into consciousness.
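
As a highly simplified structural illustration of that symmetry, the sketch below gives an outer sensor that watches a stimulus and an identical inner sensor that watches the outer sensor’s own activity, each with its own feedback of its own state. This is a toy picture of the architecture only, not a claim that it produces consciousness; the leaky-integrator dynamics are invented for illustration.

```python
class Sensor:
    """A unit that blends its own previous activity with a new signal."""
    def __init__(self, name):
        self.name = name
        self.activity = 0.0

    def sense(self, signal):
        # Feedback loop: current activity is half memory, half new input.
        self.activity = 0.5 * self.activity + 0.5 * signal
        return self.activity

outer = Sensor("external")   # senses the stimulus
inner = Sensor("internal")   # senses the sensing, same structure

for stimulus in [1.0, 1.0, 0.0, 0.0]:
    detected = outer.sense(stimulus)   # detection of the world
    aware = inner.sense(detected)      # sensing that sensing occurred
    print(f"stimulus={stimulus} detected={detected:.2f} aware={aware:.2f}")
```

Note how the inner trace lags and smooths the outer one: the same structure, pointed inward.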

Once a mainstream lab finally recognises that symmetry of external sensory and internally directed sensory structures, with symmetrical sensory feedback loops (as I describe in this link) is fundamental to achieving consciousness, progress will occur quickly. I’d expect MIT or Google to claim they have just invented this concept soon, then hopefully it will be taken seriously and progress will start.
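For the avoidance of doubt about what I mean by symmetrical feedback, here is a toy numerical sketch. All the numbers and function names are mine, purely for illustration: an external detector feeds its own output back into itself, and a mirrored internal detector senses the external one’s activity in exactly the same way.

```python
# Toy illustration of "sensing that you are sensing": an external
# detector feeds a symmetrical internal detector, and each one's
# output is fed back into itself, forming two mirrored loops.

def detect(stimulus, feedback, gain=0.5):
    """A minimal detector: responds to a stimulus plus its own fed-back output."""
    return stimulus + gain * feedback

def sense_step(stimulus, state):
    """One time step: external sensing, then internal sensing of that sensing."""
    external, internal = state
    external = detect(stimulus, external)   # sense the world
    internal = detect(external, internal)   # sense the act of sensing
    return external, internal

state = (0.0, 0.0)
for _ in range(10):
    state = sense_step(1.0, state)

external, internal = state
# Both loops settle toward steady activity (a 1/(1-gain) scaling of input).
print(external, internal)
```

The point of the symmetry is that the internal loop is structurally identical to the external one; only its input differs.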



Tools, equipment, working space: any of many large company, government or military labs could do this.

Starting early enough: it is very disappointing that work hasn’t already conspicuously begun on this approach, though of course it may be happening in secret somewhere. The slower alternative being pursued by Markram et al is apparently quite well funded and publicised. Nevertheless, if work starts at the beginning of 2014, it could achieve the required result by the end of 2015. The vast bulk of the time would be spent creating the sensory and feedback processes to direct the evolution of electronics within the gel.

It is possible that ethics issues are slowing progress. It should be illegal to do this without proper prior discussion and effective safeguards. Possibly some of the labs capable of doing it are avoiding doing so for ethical reasons. However, I doubt that. There are potential benefits that could be presented in such a way as to offset potential risks and it would be quite a prize for any brand to claim the first conscious machine. So I suspect the reason for the delay to date is failure of imagination.

The early days of evolutionary design were held back by teams wanting to stick too closely to nature, rather than simply drawing idea stimulation from biomimetics and building on it. An entire generation of electronic and computer engineers has been crippled by being locked into digital thinking, but the key processes and structures within a conscious computer will come from the analog domain.

Free-floating AI battle drone orbs (or making Glyph from Mass Effect)

I have spent many hours playing various editions of Mass Effect, from EA Games. It is one of my favourites and has clearly benefited from some highly creative minds. They had to invent a wide range of fictional technology, along with detailed technical explanations of how it is meant to work. Some is just artistic redesign of very common sci-fi ideas, but they have added a huge amount of their own too. Sci-fi and real engineering have always had a strong mutual cross-fertilisation. I have sometimes lectured on science fact v sci-fi, to show that what we eventually achieve is sometimes far better than the sci-fi version (Exhibit A – the rubbish voice synthesisers and storage devices used on Star Trek, TOS).


Liara talking to her assistant Glyph. Picture credit: social.bioware.com

In Mass Effect, holographic-style orbs float around all over the place for various military or assistant purposes. They aren’t confined to a fixed holographic projection system. Disruptor and battle drones are common, as are a few home/lab/office assistants such as Glyph, Liara’s friendly PA. These aren’t just dumb holograms; they can carry small devices and do stuff. The idea of a floating sphere may have been inspired by Halo’s, but the Mass Effect ones look more holographic and generally nicer. (Think Apple v Microsoft.) Battle drones are highly topical now, but current technology uses wings and helicopters. The drones in sci-fi like Mass Effect and Halo are just free-floating ethereal orbs. That’s what I am talking about now. They aren’t in the distant future. They will be here quite soon.

I recently wrote on how to make force fields and floating cars or hover-boards.


Briefly, they work by creating a thick cushion of magnetically confined plasma under the vehicle that can be used to keep it well off the ground, a bit like a hovercraft without a skirt or fans. Layers of confined plasma could also be used to make relatively weak force fields. A key claim of the idea is that you can coat a firm surface with a packed array of steerable electron pipes to make the plasma, and a potentially reconfigurable and self-organising circuit to produce the confinement field. No moving parts, and the coating would simply produce a lifting or propulsion force according to its area.

This is all very easy to imagine for objects with a relatively flat base like cars and hover-boards, but I later realised that the force field bit could be used to suspend additional components, and if they also have a power source, they can add locally to that field. If the components can sense their exact relative positions and instantaneously adjust the local fields to maintain or achieve their desired position, then dynamic self-organisation would allow just about any shape and dynamics to be achieved and maintained. So basically, if you break the levitation bit up, each piece could still work fine.

I love self-organisation, and biomimetics generally. I wrote my first paper on hormonal self-organisation over 20 years ago, to show how networks or telephone exchanges could self-organise, and have used it in many designs since. With a few pieces generating external air flow, the objects could wander around. Cunning design using multiple components could therefore be used to make orbs that float and wander around too, even with the inspired moving plates that Mass Effect uses for its drones. An orb could also be very lightweight and translucent, just like Glyph. Regular readers will not be surprised if I recommend that some of these components should be made of graphene, because it can be used to make wonderful things. It is light, strong, an excellent electrical and thermal conductor, a perfect platform for electronics, and can be used to make super-capacitors and so on. Glyph could use a combination of moving physical plates and some holographic projection – to make it look pretty. So, part physical and part hologram then.
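To see how little machinery dynamic self-organisation needs, here is a toy simulation. The control rule and all the numbers are my own illustrative assumptions: each plate senses the swarm’s centroid and nudges itself toward an assigned offset from it, and the target shape emerges with no central controller.

```python
# Toy sketch of distributed self-organisation: each "plate" senses its
# position relative to the swarm's centroid and nudges itself toward an
# assigned offset, so the swarm converges on a target shape with no
# central controller. The control rule is an illustrative assumption.

def centroid(positions):
    n = len(positions)
    return (sum(p[0] for p in positions) / n, sum(p[1] for p in positions) / n)

def step(positions, offsets, gain=0.3):
    """One control step: every plate moves a fraction toward centroid+offset."""
    cx, cy = centroid(positions)
    return [(x + gain * ((cx + ox) - x), y + gain * ((cy + oy) - y))
            for (x, y), (ox, oy) in zip(positions, offsets)]

# Four plates start scattered; the target shape is a square around the centroid.
positions = [(0.0, 0.0), (5.0, 1.0), (2.0, 7.0), (-3.0, 4.0)]
offsets = [(-1.0, -1.0), (1.0, -1.0), (1.0, 1.0), (-1.0, 1.0)]

for _ in range(50):
    positions = step(positions, offsets)
```

Because the offsets sum to zero, the centroid never moves, and each plate’s error shrinks geometrically, which is roughly the flavour of robustness I have in mind: knock a plate out of place and it simply flows back.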

Plates used in the structure can dynamically attract or repel each other and use tethers, or use confined plasma cushions. They can create air jets in any direction. They would have a small load-bearing capability. Since graphene foam is potentially lighter than helium, it could be added into structures to reduce the forces needed. So, we’re not looking at orbs that can carry heavy equipment here, but carrying processing, sensing, storage and comms would be easy. Obviously they could therefore include whatever state of the art artificial intelligence has got to, either on-board, distributed, or via the cloud. Beyond that, it is hard to imagine a small orb carrying more than a few hundred grammes. Nevertheless, it could carry enough equipment to make it very useful indeed for very many purposes. These drones could work pretty much anywhere. Space would be tricky but not that tricky; the drones would just have to carry a little fuel.

But let’s get right to the point. The primary market for this isn’t the home or lab or office, it is the battlefield. Battle drones are being regulated as I type, but that doesn’t mean they won’t be developed. My generation grew up with the nuclear arms race. Millennials will grow up with the drone arms race. And that, if anything, is a lot scarier. The battle drones in Mass Effect are fairly easy to kill. Real ones won’t be.

Mass Effect combat drone. Picture credit: masseffect.wikia.com

If these cute little floating drone things are taken out of the office and converted to military uses, they could do pretty much all the stuff they do in sci-fi. They could have lots of local energy storage using super-caps, so they could easily carry self-organising lightweight lasers or electrical shock weaponry too, or carry steerable mirrors to direct beams from remote lasers, and high definition 3D cameras and other sensing for reconnaissance. The interesting thing here is that self-organisation of potentially redundant components would allow a free-roaming battle drone that would be highly resistant to attack. You could shoot it for ages with laser or bullets and it would keep coming. Disruption of its fields by electrical weapons would make it collapse temporarily, but it would just get up and reassemble as soon as you stop firing. With its intelligence potentially part local and part cloud-based, you could make a small battalion of these that could only be properly killed by totally frazzling them all. They would be potentially lethal individually but almost irresistible as a team. Super-capacitors could be recharged frequently using companion drones to relay power from the rear line. A mist of spare components could make ready replacements for any that are destroyed. Self-orientation and use of free-space optics for comms make wiring and circuit boards redundant, and sub-millimetre chips 100m away would be quite hard to hit.

Well I’m scared. If you’re not, I didn’t explain it properly.

Reverse engineering the brain is a very slow way to make a smart computer

The race is on to build conscious and smart computers and brain replicas. This article explains some of Markram’s approach: http://www.wired.com/wiredscience/2013/05/neurologist-markam-human-brain/all/

It is a nice project, and its aims are to make a working replica of the brain by reverse engineering it. That would work eventually, but it is slow and expensive and it is debatable how valuable it is as a goal.

Imagine if you want to make an aeroplane from scratch.  You could study birds and make extremely detailed reverse engineered mathematical models of the structures of individual feathers, and try to model all the stresses and airflows as the wing beats. Eventually you could make a good model of a wing, and by also looking at the electrics, feedbacks, nerves and muscles, you could eventually make some sort of control system that would essentially replicate a bird wing. Then you could scale it all up, look for other materials, experiment a bit and eventually you might make a big bird replica. Alternatively, you could look briefly at a bird and note the basic aerodynamics of a wing, note the use of lightweight and strong materials, then let it go. You don’t need any more from nature than that. The rest can be done by looking at ways of propelling the surface to create sufficient airflow and lift using the aerofoil, and ways to achieve the strength needed. The bird provides some basic insight, but it simply isn’t necessary to copy all a bird’s proprietary technology to fly.

Back to Markram. If the real goal is to reverse engineer the actual human brain and make a detailed replica or model of it, then fair enough. I wish him and his team, and their distributed helpers and affiliates, every success with that. If the project goes well, and we can find insights to help with the hundreds of brain disorders and improve medicine, great. A few billion euros will have been well spent, especially given the waste of more billions of euros elsewhere on futile and counter-productive projects. Lots of people criticise his goal, and some of their arguments are nonsensical. It is a good project and, for what it’s worth, I support it.

My only real objection is that a simulation of the brain will not think well and at best will be an extremely inefficient thinking machine. So if a goal is to achieve thought or intelligence, the project as described is barking up the wrong tree. If that isn’t a goal, so what? It still has the other uses.

A simulation can do many things. It can be used to follow through the consequences of an input if the system is sufficiently well modelled. A sufficiently detailed and accurate brain simulation could predict the impacts of a drug, or the behaviours resulting from certain mental processes. It could follow through the impacts and chain of events resulting from an electrical impulse, thus finding out what the eventual result of that will be. It can therefore very inefficiently predict the result of thinking, but by using extremely high speed computation, it could in principle work out the end result of some thoughts. But it needs enormous detail and algorithmic precision to do that. I doubt it is achievable, simply because of the volume of calculation needed. Thinking properly requires consciousness and therefore emulation. A conscious circuit has to be built, not just modelled.

Consciousness is not the same as thinking. A simulation of the brain would not be conscious, even if it can work out the result of thoughts. It is the difference between printed music and played music. One is data, one is an experience. A simulation of all the processes going on inside a head will not generate any consciousness, only data. It could think, but not feel or experience.

Having made that important distinction, I still think that Markram’s approach will prove useful. It will generate many useful insights into the workings of the brain, and many of the processes nature uses to solve certain engineering problems. These insights and techniques can be used as input into other projects. Biomimetics is already proven as a useful tool in solving big problems. Looking at how the brain works will give us hints on how to make a truly conscious, properly thinking machine. But just as with birds and Airbuses, we can take ideas and inspiration from nature and then do it far better. No bird can carry the weight or fly as high or as fast as an aeroplane. No proper plane uses feathers or flaps its wings.

I wrote recently about how to make a conscious computer:

https://timeguide.wordpress.com/2013/02/15/how-to-make-a-conscious-computer/ and https://timeguide.wordpress.com/2013/02/18/how-smart-could-an-ai-become/

I still think that approach will work well, and it could be a decade faster than going Markram’s route. All the core technology needed to start making a conscious computer already exists today. With funding and some smart minds to set the process in motion, it could be done in a couple of years. The resulting conscious and ultra-smart computer, properly harnessed, could do its research far faster than any human on Markram’s team. It could easily beat them to the goal of a replica brain. The converse is not true: Markram’s current approach would yield a conscious computer very slowly.

So while I fully applaud the effort and endorse the goals, changing the approach now could give far more bang for the buck, far faster.

How smart could an AI become?

I got an interesting question in a comment from Jim T on my last blog.

What is your opinion now on how powerful machine intelligence will become?

Funny, but my answer relates to the old question: how many angels can sit on the head of a pin?

The brain is not a digital computer, and I don’t think a digital processor will be capable of consciousness (though that doesn’t mean it can’t be very smart and help make huge scientific progress). I believe a conscious AI will be mostly analog in nature, probably based on some fancy combination of adaptive neural nets, as suggested decades ago by Moravec.

Taking that line, and looking at how far miniaturisation can go, then adding all the zeros that arise from the shorter signal transmission paths, faster switching speeds, faster comms, and the greater number of potential pathways using optical WDM rather than electronic connectivity, I calculated that a spherical pinhead (1mm across) could ultimately house the equivalent of 10,000 human brains. (I don’t know how smart angels are, so didn’t quite get to the final step.) You could scale that up with as much funding, storage, material and energy as you can provide.
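As a sanity check on the scale of that claim, here is a quick back-of-envelope script. The brain volume figure and the framing are my own assumptions, not the original calculation, but it shows how many orders of magnitude of net improvement over biology the 10,000-brain pinhead implies.

```python
import math

# Back-of-envelope check of the pinhead claim. The brain volume and
# the required-improvement framing are illustrative assumptions, not
# the original calculation's actual figures.

brain_volume = 1.25e-3                  # m^3, approximate human brain
pinhead_radius = 0.5e-3                 # m (a 1mm sphere)
pinhead_volume = (4 / 3) * math.pi * pinhead_radius ** 3

volume_ratio = brain_volume / pinhead_volume   # brains-worth of volume deficit
brains_claimed = 10_000

# Net improvement factor (density x speed x connectivity) the claim implies:
required_factor = brains_claimed * volume_ratio
orders_of_magnitude = math.log10(required_factor)
print(f"~10^{orders_of_magnitude:.0f} net improvement over biology implied")
```

Roughly ten orders of magnitude, in other words, which is the kind of number you only reach by multiplying several of those "zeros" together.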

However, what that quantifies is how many human-equivalent AIs you could support. Very useful to know if you plan to build a future server farm to look after electronically immortal people. You could build a machine with the equivalent intelligence of the entire human race. But it doesn’t answer the question of how smart a single AI could ever be, or how powerful it could be. Quantity isn’t quality. You could argue that 1% of the engineers produce 99% of the value, even with only a fairly small IQ difference. 10 billion people may not be as useful for progress as 10 people with 5 times the IQ. And look at how controversial IQ is. We can’t even agree what intelligence is or how to quantify it.

Just based on loose language, how powerful or smart or intelligent an AI could become depends on the ongoing positive feedback loop. Adding more AI of the same intelligence level will enable the next incremental improvement; then using those slightly smarter AIs would get you to the next stage, a bit faster, ad infinitum. Eventually, you could make an AI that is really, really, really smart.
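That feedback loop is easy to caricature in code. In this toy model, with growth rates picked arbitrarily for illustration, each generation is a little smarter and designs its successor a little faster, so intelligence compounds while the time per generation shrinks:

```python
# Toy model of the positive feedback loop: each AI generation is a bit
# smarter (iq_gain) and designs its successor a bit faster (speedup).

def run(generations=20, iq=100.0, step_time=1.0, iq_gain=1.05, speedup=0.9):
    elapsed = 0.0
    for _ in range(generations):
        elapsed += step_time   # time spent designing this generation
        iq *= iq_gain          # the result is slightly smarter
        step_time *= speedup   # and will design its successor a bit faster
    return elapsed, iq

elapsed, iq = run()
# 20 generations fit in under 10 time units, and intelligence has compounded.
```

With any speedup below 1, the total design time is bounded by a geometric series while the intelligence keeps compounding, which is the arithmetic behind "ad infinitum".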

How smart is that? I don’t have the terminology to describe it. I can borrow an analogy though. Terry Pratchett’s early book ‘The Dark Side of the Sun’ has a character in it called The Bank. It was a silicon planet, with the silicon making a hugely smart mind. Imagine if a pinhead could house 10,000 human brains, and you have a planet of the stuff, and it’s all one big intellect instead of lots of dumb ones. Yep. Really, really, really smart.

How to make a conscious computer

The latest generation of supercomputers has processing speed higher than the human brain’s on a simple digital comparison, but these machines can’t think and aren’t conscious. It’s not even really appropriate to compare them, because the brain mostly isn’t digital. It has some digital processing in the optics system but mostly uses adaptive analog neurons, whereas digital computers use digital chips for processing and storage and only a bit of analog electronics for other circuits. Most digital computers don’t even have anything we would equate to senses.

Analog computers aren’t used much now, but were in fairly widespread use in some industries until the early 1980s. Most IT people have no first-hand experience of them, and some don’t even seem to be aware of analog computers, what they can do or how. But in the AI space, a lot of the development uses analog approaches.

https://timeguide.wordpress.com/2011/09/18/gel-computing/ discusses some of my previous work on conscious computer design. I won’t reproduce it here.

I firmly believe consciousness, whether externally or internally focused, is the result of internally directed sensing (sensing can be thought of as the solicitation of feeling), so that you feel your thoughts or sensory inputs in much the same way. The easy bit is figuring out how thinking can work once you have that: how memories can be relived, concepts built, how self-awareness, sentience and intelligence emerge. All those are easy once you have figured out how feeling works. That is the hard problem.

Detection is not the same as feeling. It is easy to build a detector or sensor that flips a switch or moves a dial when something happens, or even precisely quantifies something. Feeling it is another layer on that. Your skin detects touch, but your brain feels it, senses it. Taking detection and making it feel and become a sensation, that’s hard. What is it about a particular circuit that adds sensation? That is the missing link, the hard problem, and all the writing available out there just echoes that. Philosophers and scientists have written about this same problem in different ways for ages, and have struggled in vain to get a grip on it; many end up running in circles. So far they don’t know the answer, and neither do I. The best any offer is elucidation of aspects of the problem, and occasionally some hints of things that they think might somehow be connected with the answer. There exists no answer or explanation yet.

There is no magic in the brain. The circuitry involved in feeling something is capable of being described, replicated and even manufactured. It is possible to find out how to make a conscious circuit, even if we still don’t know what consciousness is or how it works, via replication, reverse engineering or evolutionary development. We manage to make conscious children several times every second.

How far can we go? Having studied a lot of what is written, it is clear that even after a lot of smart people thinking a long time about it, there is a great deal of confusion out there, and at least some of it comes from trying to use overly big words, while some comes from trying to analyse too much at once. When it is so obviously a tough problem, simplifying it will undoubtedly help. So let’s narrow it down a bit.

Feeling needs to be separated out from all the other things going on. What is it that happens that makes something feel? Well, detecting something pre-empts feeling it, and interpreting it or thinking about it comes later. So, ignore the detection and interpretation and thinking bits for now. Even sensation can be modelled as solicitation of feeling, essentially adding qualitative information to it. We ought to be able to make an abstraction model as for any IT system, where feeling is a distinct layer, coming between the physical detection layer and sensation, well below any of the layers associated with thinking or analysis.
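The layering is easier to see written down as a mock protocol stack. Everything below — the layer interfaces, thresholds and labels — is invented purely to illustrate the abstraction; it makes no claim to explain how the feeling layer could actually be built:

```python
# A sketch of the layered abstraction: detection at the bottom, a
# distinct "feeling" layer above it, then sensation adding qualitative
# information. All interfaces here are hypothetical placeholders.

def detection_layer(raw):
    """Physical layer: turn a raw signal into a detection event."""
    return {"detected": raw > 0.5, "level": raw}

def feeling_layer(event):
    """Hypothetical middle layer: attach felt intensity to a detection."""
    if not event["detected"]:
        return None
    return {"felt": True, "intensity": event["level"]}

def sensation_layer(feeling):
    """Add qualitative information, e.g. labelling the felt intensity."""
    if feeling is None:
        return None
    quality = "sharp" if feeling["intensity"] > 0.8 else "mild"
    return {**feeling, "quality": quality}

sensation = sensation_layer(feeling_layer(detection_layer(0.9)))
# A sub-threshold stimulus is detected-but-not-felt in this model:
nothing = sensation_layer(feeling_layer(detection_layer(0.3)))
```

The useful property of framing it this way is that the hard problem is confined to one layer, with detection below it and thinking well above it.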

Many believe that very simple organisms can detect stimuli and react to them, but can’t feel, while more sophisticated ones can. Logical deduction tells us either that feeling may require fairly complex neural networks (though certainly well below human levels), or alternatively that feeling may not be fundamentally linked to complexity but may emerge from architectural differences that arose in parallel with increasing complexity without depending on it. It is also very likely, due to evolutionary mechanisms, that feeling emerges from structures similar to those used for detection, though not the same. Architectural modifications, feedbacks, or additions to detection circuits might be an excellent place to start looking.

So we don’t know the answer, but we do have some good clues. Better than nothing. Coming at it from a philosophical direction, even the smartest people quickly get tied in knots, but from an engineering direction, I think the problem is soluble.

If feeling is, as I believe, a modified detection system, then we could for example seed an evolutionary design system with detection systems. Mutating, restructuring and rearranging detection systems and adding occasional random components here and there might eventually create some circuits that feel. It did in nature, and would in an evolutionary design system, given time. But how would we know? An evolutionary design system needs some means of selection to distinguish the more successful branches for further development.

Using feedback loops would probably help. A system with built-in feedback, so that it feels that it is feeling something, would be symmetrical, maybe even fractal. Self-reinforcement of a feeling process would also create a little vortex of activity. A simple detection system (with detection of detection) would not exhibit such strong activity peaks, due to the necessary lack of symmetry between detection of initial and processed stimuli. So all we need to do is introduce feedback loops in each architecture and look for the emergence of activity peaks. Possibly, some non-feeling architectures might also show activity peaks, so not all peaks would necessarily show successes, but all successes would show peaks.

So, the evolutionary system would take basic detection circuits as input, modify them, add random components, then connect them in simple symmetrical feedback loops. Most results would do nothing. Some would show self-reinforcement, evidenced by activity peaks. Those are the ones we need.
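A crude software caricature of that loop, with a whole circuit boiled down to a single feedback gain and the "activity peak" measured as the size of its self-reinforcing response. Every parameter here is an illustrative assumption, and real evolvable hardware would be vastly richer:

```python
import random

# Minimal caricature of the evolutionary loop: circuits are reduced to
# a single feedback gain, "activity" is the response when output is fed
# back into input, and selection keeps circuits with the strongest
# self-reinforcing activity peaks.

random.seed(1)

def activity(gain, stimulus=1.0, steps=500):
    """Drive the circuit in a feedback loop and return its final activity."""
    a = 0.0
    for _ in range(steps):
        a = stimulus + gain * a
    return a

def mutate(gain):
    """Perturb a circuit, clamped to the stable region of this toy model."""
    return max(-0.99, min(0.99, gain + random.gauss(0, 0.1)))

def evolve(pop_size=30, generations=40, threshold=3.0):
    # Start from random "detection circuits" (random feedback gains).
    pop = [random.uniform(-0.99, 0.99) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep circuits whose feedback self-reinforces most.
        survivors = sorted(pop, key=activity, reverse=True)[: pop_size // 2]
        # Mutation: perturb survivors to refill the population.
        pop = survivors + [mutate(g) for g in survivors]
    # "Successes" are circuits showing a clear activity peak.
    return [g for g in pop if activity(g) > threshold]

winners = evolve()
```

Even this crude version shows the selection mechanism at work: the population drifts toward circuits whose feedback reinforces itself, which is exactly the signature we would be hunting for.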

The output from such an evolutionary design system would be circuits that feel (and some junk). We have our basic components. Now we can start to make a conscious computer.

Let’s go back to the gel computing idea and plug them in. We have some basic detectors, for light, sound, touch etc. Pretty simple stuff, but we connect those to our new feeling circuits, so now those inputs stop being just information and become sensations. We add in some storage, recording the inputs, again with some feeling circuits added into the mix, and just for fun, let’s make those recording circuits replay those inputs over and over, indefinitely. Those sensations will be felt again and again, the memory relived. Our primitive little computer can already remember and experience things it has experienced before.

Now add in some processing. When a and b happen, c results. Nothing complicated. Just the sort of primitive summation of inputs we know neurons can do all the time. But now, when that processing happens, our computer brain feels it. It feels that it is doing some thinking. It feels the stimuli occurring, a result occurring. And as it records and replays it, an experience builds. It now has knowledge. It may not be the answer to life, the universe and everything just yet, but knowledge it is. It now knows and remembers the experience that when it links these two inputs, it gets that output.

These processes and recordings and replays and further processing and storage and replays echo throughout the whole system. The sensory echoes and neural interference patterns result in some areas of reinforcement and some of cancellation. Concepts form. The whole process is sensed by the brain. It is thinking, processing, reliving memories, linking inputs and results into concepts and knowledge, storing concepts, and most importantly, it is feeling itself doing so.
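The description above is almost pseudocode already, so here it is as an actual toy program. Every structure in it — the event records, the replay method, the summation "neuron" — is a placeholder of my own invention, standing in for the evolved feeling circuits described earlier:

```python
# Toy wiring-together of the pieces: detectors feed "feeling" tags,
# felt inputs are recorded and replayable, and a trivial summation unit
# links two inputs to an output, itself felt and recorded as experience.

class GelBrain:
    def __init__(self):
        self.memory = []                 # recorded felt events, replayable

    def feel(self, channel, value):
        """Detection plus the hypothetical feeling layer: a felt event."""
        event = {"channel": channel, "value": value, "felt": True}
        self.memory.append(event)
        return event

    def replay(self):
        """Relive recorded sensations; replays are marked as such."""
        return [dict(e, replayed=True) for e in self.memory]

    def process(self, a, b):
        """Primitive neuron-like summation: when a and b happen, c results."""
        c = a["value"] + b["value"]
        self.feel("thought", c)          # the brain feels itself thinking
        return c

brain = GelBrain()
light = brain.feel("light", 0.7)
sound = brain.feel("sound", 0.4)
result = brain.process(light, sound)
experience = brain.replay()
```

The point is not the arithmetic, but that the processing step is itself routed through the feeling layer and into the record, so the system accumulates experience of its own thinking.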

The rest is just design detail. There’s your conscious computer.