Tag Archives: AI

WMDs for mad AIs

We think sometimes about mad scientists and what they might do. It’s fun, makes nice films occasionally, and highlights threats years before they become feasible. That then allows scientists and engineers to think through how they might defend against such scenarios, hopefully making sure they don’t happen.

You’ll be aware that a lot more talk of AI is going on again now, and progress finally seems to be picking up. If it succeeds well enough, a lot more future science and engineering will be done by AI than by people. If genuinely conscious, self-aware AI, with proper emotions and so on, becomes feasible, as I think it will, then we really ought to think about what happens when it goes wrong. (Sci-fi computer games producers already think that stuff through sometimes – my personal favorite is Mass Effect.) We will one day have some insane AIs. In Mass Effect, the concept of shackled AI is embedded in the culture, an attempt to limit the damage it could presumably do. On the other hand, we have had Asimov’s laws of robotics for decades, but they are sometimes ignored when it comes to making autonomous defense systems. That doesn’t bode well. So, assuming that Mass Effect’s writers don’t get to be in charge of the world, and instead we have ideological descendants of our current leaders, what sort of things could an advanced AI do in terms of its chosen weaponry?

Advanced AI

An ultra-powerful AI is a potential threat in itself. There is no reason to expect that an advanced AI will be malign, but there is also no reason to assume it won’t be. High-level AI could have at least the range of personality that we associate with people, with a potentially greater range of emotions or motivations, so we’d have the super-helpful smart-scientist type of AIs but also perhaps the evil super-villain and terrorist ones.

An AI doesn’t have to intend harm to be harmful. If it wants to do something and we are in the way, even if it has no malicious intent, we could still become casualties, like ants on a building site.

I have often blogged about achieving conscious computers using techniques such as gel computing, and about how we could end up in the terminator scenario favored by sci-fi. This could come about through innocent research, military development or a deliberate act of terrorism.

Terminator scenarios are diverse but often rely on AI taking control of human weapons systems. I won’t major on that here because that threat has already been analysed in-depth by many people.

Conscious botnets could arrive by accident too – a student prank harnessing millions of bots, even with an inefficient algorithm, might gain enough power to achieve a high level of AI.

Smart bacteria

Bacterial DNA could be modified so that bacteria can make electronics inside their cells, and power them. Linking to other bacteria, massive AI could be achieved.

Zombies

Adding the ability to enter a human nervous system, or to disrupt or capture control of a human brain, could enable enslavement, giving us zombies. Having been enslaved, zombies could easily be linked across the net. The zombie films we watch tend to miss this feature. Zombies in films and games tend to move in herds, but not generally under control or in a very coordinated way. We should assume that real ones would be fully networked, liable to remote control, and able to share sensory systems. They’d be rather smarter and more capable than what we’re generally used to. Shooting them in the head might not work as well as people expect either, since their nervous systems wouldn’t really need a local controller and could just as easily be run by a collective intelligence, though blood loss would eventually cause them to die. To stop a herd of real zombies, you’d basically have to dismember them. More Dead Space than Dawn of the Dead.

Zombie viruses could be made other ways too. It isn’t necessary to use smart bacteria. Genetic modification of viruses, or a suspension of nanoparticles, are traditional favorites because they could work. Sadly, we are likely to see zombies result from deliberate human acts, probably this century.

From zombies, it is a short hop to full evolution of the Borg from Star Trek, along with the emergence of characters from computer games to take over the zombified bodies.

Terraforming

With strong external AI providing the collective adaptability for smart bacteria to colonize many niches, bacteria-based AI (or AI using bacteria) could engage in terraforming. Attacking many niches that are important to humans or other life would be very destructive. Terraforming a planet you live on is not generally a good idea, but if an organism can inhabit land, sea, air and even space, there is plenty of scope to avoid self-destruction. Fighting bacteria engaged in such a pursuit might be hard. Smart bacteria could spread immunity to toxins or biological threats almost instantly through a population.

Correlated traffic

Information waves and other correlated traffic – network resonance attacks – are another way of using networks to collapse economies, taking advantage of the physical properties of the links and protocols rather than using more traditional viruses or denial-of-service attacks. AIs using smart dust or bacteria could launch signals in perfect coordination from any points on any networks simultaneously. This could push networks into resonant overloads that would likely crash them, and would certainly deprive other traffic of bandwidth.
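A toy queue model (my own illustration, not modelling any specific network or protocol) shows why correlation rather than volume is the weapon here: a link absorbs the same total traffic easily when it is spread out, but its buffer saturates when the senders fire in phase.

```python
import random

def peak_queue(num_senders, sync, buffer_size=50, steps=200, seed=1):
    """Toy discrete-time queue: the link serves one packet per step.

    sync=True  -> all senders transmit in the same step (correlated burst)
    sync=False -> each sender transmits in a random step (uncorrelated)
    Returns the peak queue occupancy over the run.
    """
    rng = random.Random(seed)
    arrivals = [0] * steps
    for _ in range(num_senders):
        t = 0 if sync else rng.randrange(steps)
        arrivals[t] += 1
    queue = peak = 0
    for t in range(steps):
        queue = min(buffer_size, queue + arrivals[t])  # packets beyond the buffer are dropped
        peak = max(peak, queue)
        queue = max(0, queue - 1)  # link serves one packet per step
    return peak

# Same total traffic; only the timing differs.
print(peak_queue(100, sync=False))  # spread out: queue stays small
print(peak_queue(100, sync=True))   # in phase: buffer saturates at 50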

Decryption

Conscious botnets could be used to make decryption engines to wreck security and finance systems. Imagine how much more powerful a worldwide collection of trillions of AI-harnessed organisms or devices would be. Invisibly small smart dust and networked bacteria could also pick up most signals well before they are encrypted anyway, since they could be resident on keyboards or the components and wires within. They could even pick up electrical signals from a person’s scalp and engage in thought recognition, intercepting passwords well before a person’s fingers even move to type them.

Space guns

Solar wind deflector guns are feasible: ionize some of the ionosphere to make a reflective surface that deflects part of the incoming solar wind to make an even bigger reflector, then repeat, ending up with an ionospheric lens or reflector that could steer perhaps 1% of the solar wind onto a city. That could generate a high enough energy density to ignite and even melt a large area of a city within minutes.

This wouldn’t be as easy as using space-based solar farms and directing energy from them. Space solar is being seriously considered, but it presents an extremely attractive target for capture because of its potential as a directed energy weapon. The intended use is microwave beams directed at rectenna arrays on the ground, but it would take good design to prevent a takeover possibility.

Drone armies

Drones are already becoming common at an alarming rate, and they now range in size from large insects to medium-sized planes. The next generation is likely to include permanently airborne drones and swarms of insect-sized drones. The swarms offer interesting potential for WMDs: they can be dispersed and come together on command, making them hard to attack most of the time.

Individual insect-sized drones could build up an electrical charge by a wide variety of means, and could collectively attack individuals, electrocuting or disabling them, as well as overloading or short-circuiting electrical appliances.

Larger drones such as the ones I discussed in

http://carbonweapons.com/2013/06/27/free-floating-combat-drones/ would be capable of much greater damage, and collectively would be virtually indestructible, since each could be broken to pieces by an attack and automatically reassembled without losing capability, using self-organisation principles. A mixture of large and small drones, possibly also using bacteria and smart dust, could present an extremely formidable coordinated attack.

I also recently blogged about the storm router

http://carbonweapons.com/2014/03/17/stormrouter-making-wmds-from-hurricanes-or-thunderstorms/ that would harness hurricanes, tornados or electrical storms and divert their energy onto chosen targets.

In my Space Anchor novel, my superheroes have to fight against a formidable AI army that appears as just a global collection of tiny clouds. They do some of the things I highlighted above and come close to threatening human existence. It’s a fun story but it is based on potential engineering.

Well, I think that’s enough threats to worry about for today. Maybe given the timing of release, you’re expecting me to hint that this is an April Fool blog. Not this time. All these threats are feasible.

We could have a conscious machine by end-of-play 2015

I made xmas dinner this year, as I always do. It was pretty easy.

I had a basic plan, made up a menu suited to my family and my limited ability, ensured its legality, including license to serve and consume alcohol to my family on my premises, made sure I had all the ingredients I needed, checked I had recipes and instructions where necessary. I had the tools, equipment and working space I needed, and started early enough to do it all in time for the planned delivery. It was successful.

That is pretty much what you have to do to make anything, from a cup of tea to a space station, though complexity, cost and timings may vary.

With conscious machines, it is still basically the same list. When I check through it to see whether we are ready to make a start I conclude that we are. If we make the decision now at the end of 2013 to make a machine which is conscious and self-aware by the end of 2015, we could do it.

Every time machine consciousness is raised as a goal, a lot of people start screaming for a definition of consciousness. I am conscious, and I know how it feels. So are you. Neither of us can write down a definition that everyone would agree on. I don’t care: it simply isn’t an engineering barrier. Let’s simply aim for a machine that can make either of us believe that it is conscious and self-aware in much the same way as we are. We don’t need weasel words to help pass an abacus off as Commander Data.

Basic plan: actually, there are several in development.

One approach is essentially reverse engineering the human brain, mapping out the neurons and replicating them. That would work (Markram’s team is pursuing it), but would take too long. It doesn’t need us to understand how consciousness works; it is rather like methodically taking a television apart and making an exact replica using identical purchased or manufactured components. It has the advantage of existing backing, and if nobody tries a better technique early enough, it could win. More comment on this approach: http://timeguide.wordpress.com/2013/05/17/reverse-engineering-the-brain-is-a-very-slow-way-to-make-a-smart-computer/

Another is to use a large bank of powerful digital computers with access to a large pool of data and knowledge. That can produce a very capable machine, one that can answer difficult questions or do various things well that traditionally need smart people, but as far as creating a conscious machine goes, it won’t work. It will happen anyway for various reasons, and may produce some valuable outputs, but it won’t result in a conscious machine.

Another is to use accelerated, guided evolution within an electronic equivalent of the ‘primordial soup’. That takes the process used by nature, which clearly worked, then improves and accelerates it using whatever insights and analysis we can add via advanced starting points, subsequent guidance, archiving, cataloguing and smart filtering and pruning. That would also work, and if we can make the accelerated evolution powerful enough, it can be achieved quickly. This is my favoured approach because it is the only one capable of succeeding by the end of 2015. So that is the basic plan, and we’ll develop detailed instructions as we go.
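As an illustrative sketch of the guided-evolution idea (a generic genetic algorithm on bitstrings, nothing like the real analog hardware; all parameters here are my own arbitrary choices), the fitness function plus pruning of the weakest half each generation stands in for the smart filtering and pruning described above:

```python
import random

def evolve(target, pop_size=60, mutation_rate=0.02, seed=0):
    """Toy guided-evolution engine: evolves bitstrings toward a target.

    'Guidance' here is just the fitness function plus pruning of the
    weakest half each generation.
    """
    rng = random.Random(seed)
    n = len(target)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]

    def fitness(ind):
        return sum(a == b for a, b in zip(ind, target))

    for generation in range(1000):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:
            return generation                 # target reached
        survivors = pop[: pop_size // 2]      # prune the weakest half
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]         # crossover of two survivors
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = survivors + children
    return -1  # failed within the budget

generations = evolve([1, 0, 1, 1, 0, 0, 1, 0] * 4)  # 32-bit target
print(generations)
```

The acceleration argument is that everything in this loop – the population size, the generation rate, and above all the quality of the guidance – is an engineering variable rather than something fixed by nature.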

Menu suited to audience and ability: a machine we agree is conscious and self aware, that we can make using know-how we already have or can reasonably develop within the project time-frame.

Legality: it isn’t illegal to make a conscious machine yet. It should be; it most definitely should be, but it isn’t. The guards are fast asleep and by the time they wake up, notice that we’re up to something, and start taking us seriously, agree on what to do about it, and start writing new laws, we’ll have finished ages ago.

Ingredients:

substantial scientific and engineering knowledge base, reconfigurable analog and digital electronics, assorted structures, 15nm feature size, self organisation, evolutionary engines, sensors, lasers, LEDs, optoelectronics, HDWDM, transparent gel, inductive power, power supply, cloud storage, data mining, P2P, open source community

Recipe & instructions

I’ve written often on this from different angles:

http://timeguide.wordpress.com/2013/02/15/how-to-make-a-conscious-computer/ summarises the key points and adds insight on core component structure – especially symmetry. I believe that consciousness can be achieved by applying similar sensory structures to internal processes as those used to sense external stimuli. Both should have a feedback loop symmetrical to the main structure. Essentially, sensing that you are sensing something is key to consciousness: that is the means of converting detection into sensing, sensing into awareness, and awareness into consciousness.
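Purely as a structural illustration of that layering (the class names and numbers are mine, and this is emphatically not a claim to implement feeling, only to show detection, sensing and internally directed sensing as distinct layers):

```python
class Detector:
    """Flips a value when a stimulus crosses a threshold - detection only."""
    def __init__(self, threshold):
        self.threshold = threshold
    def detect(self, stimulus):
        return stimulus > self.threshold

class Sensor:
    """Wraps a detector and keeps a feedback record of its own activity -
    a crude structural stand-in for 'sensing that you are sensing'."""
    def __init__(self, detector):
        self.detector = detector
        self.activity_log = []   # inner feedback loop: a record of sensing events
    def sense(self, stimulus):
        detected = self.detector.detect(stimulus)
        self.activity_log.append(detected)   # the sensor observes its own act of sensing
        return detected
    def reflect(self):
        # internally directed sensing: the system examines its own sensing history
        return sum(self.activity_log), len(self.activity_log)

s = Sensor(Detector(threshold=0.5))
for x in (0.2, 0.9, 0.7, 0.1):
    s.sense(x)
print(s.reflect())  # (2, 4): two detections out of four sensing events
```

The claim in the text is that the hard part is making the inner loop *feel* rather than merely record, which no data structure like this can capture; the sketch only shows where the symmetrical inner loop sits.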

Once a mainstream lab finally recognises that symmetry of external and internally directed sensory structures, with symmetrical sensory feedback loops (as I describe in this link), is fundamental to achieving consciousness, progress will occur quickly. I’d expect MIT or Google to claim they have just invented this concept soon; then hopefully it will be taken seriously and progress will start.

http://timeguide.wordpress.com/2011/09/18/gel-computing/

http://timeguide.wordpress.com/2010/06/16/man-machine-equivalence-by-2015/

Tools, equipment, working space: any of many large company, government or military labs could do this.

Starting early enough: it is very disappointing that work hasn’t already conspicuously begun on this approach, though of course it may be happening in secret somewhere. The slower alternative being pursued by Markram et al is apparently quite well funded and publicised. Nevertheless, if work starts at the beginning of 2014, it could achieve the required result by the end of 2015. The vast bulk of the time would be spent creating the sensory and feedback processes to direct the evolution of electronics within the gel.

It is possible that ethics issues are slowing progress. It should be illegal to do this without proper prior discussion and effective safeguards. Possibly some of the labs capable of doing it are avoiding doing so for ethical reasons. However, I doubt that. There are potential benefits that could be presented in such a way as to offset the potential risks, and it would be quite a prize for any brand to claim the first conscious machine. So I suspect the reason for the delay to date is a failure of imagination.

The early days of evolutionary design were held back by teams wanting to stick too closely to nature, rather than simply drawing idea stimulation from biomimetics and building on it. An entire generation of electronic and computer engineers has been crippled by being locked into digital thinking, but the key processes and structures within a conscious computer will come from the analog domain.

Free-floating AI battle drone orbs (or making Glyph from Mass Effect)

I have spent many hours playing various editions of Mass Effect, from EA Games. It is one of my favourites and has clearly benefited from some highly creative minds. They had to invent a wide range of fictional technology, along with technical explanations in the detail for how it is meant to work. Some is just artistic redesign of very common sci-fi ideas, but they have added a huge amount of their own too. Sci-fi and real engineering have always had a strong mutual cross-fertilisation. I have sometimes lectured on science fact v sci-fi, to show that what we eventually achieve is sometimes far better than the sci-fi version (Exhibit A: the rubbish voice synthesisers and storage devices used in Star Trek, TOS).

Glyph

Liara talking to her assistant Glyph. Picture credit: social.bioware.com

In Mass Effect, lots of floating holographic-style orbs float around all over the place for various military or assistant purposes. They aren’t confined to a fixed holographic projection system. Disruptor and battle drones are common, along with a few home/lab/office assistants such as Glyph, who is Liara’s friendly PA, not a battle drone. These aren’t just dumb holograms; they can carry small devices and do stuff. The idea of a floating sphere may have been inspired by Halo’s, but the Mass Effect ones look more holographic and generally nicer. (Think Apple v Microsoft.) Battle drones are highly topical now, but current technology uses wings and helicopters. The drones in sci-fi like Mass Effect and Halo are just free-floating ethereal orbs. That’s what I am talking about now. They aren’t in the distant future. They will be here quite soon.

I recently wrote on how to make force field and floating cars or hover-boards.

http://timeguide.wordpress.com/2013/06/21/how-to-actually-make-a-star-wars-landspeeder-or-a-back-to-the-future-hoverboard/

Briefly, they work by creating a thick cushion of magnetically confined plasma under the vehicle that can keep it well off the ground, a bit like a hovercraft without a skirt or fans. Layers of confined plasma could also be used to make relatively weak force fields. A key claim of the idea is that you can coat a firm surface with a packed array of steerable electron pipes to make the plasma, and a potentially reconfigurable and self-organising circuit to produce the confinement field. No moving parts, and the coating would simply produce a lifting or propulsion force according to its area.

This is all very easy to imagine for objects with a relatively flat base like cars and hover-boards, but I later realised that the force field bit could be used to suspend additional components, and if they also have a power source, they can add locally to that field. If each component can sense its exact relative position and instantaneously adjust its local field to maintain or achieve its desired position, dynamic self-organisation would allow just about any shape and dynamics to be achieved and maintained. So basically, if you break the levitation bit up, each piece could still work fine.

I love self-organisation, and biomimetics generally. I wrote my first paper on hormonal self-organisation over 20 years ago to show how networks of telephone exchanges could self-organise, and have used it in many designs since. With a few pieces generating external air flow, the objects could wander around. Cunning design using multiple components could therefore be used to make orbs that float and wander around too, even with the inspired moving plates that Mass Effect uses for its drones.

An orb could also be very lightweight and translucent, just like Glyph. Regular readers will not be surprised if I recommend that some of these components should be made of graphene, because it can be used to make wonderful things. It is light, strong, an excellent electrical and thermal conductor, a perfect platform for electronics, and can be used to make super-capacitors and so on. Glyph could use a combination of moving physical plates and some holographic projection – to make it look pretty. So, part physical and part hologram then.
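The self-organisation principle itself is easy to sketch. In this toy 1-D model (my own illustration, with arbitrary numbers), each interior component senses only its two neighbours and nudges itself toward their midpoint; purely local rules still settle the whole set into a global formation:

```python
def settle(positions, steps=200):
    """Each interior component senses only its two neighbours and moves to
    their midpoint - purely local rules that produce a global, evenly
    spaced formation (1-D toy; the two end components hold position)."""
    p = list(positions)
    for _ in range(steps):
        p = [p[0]] + [(p[i - 1] + p[i + 1]) / 2 for i in range(1, len(p) - 1)] + [p[-1]]
    return p

# Scattered plates settle into an evenly spaced line between the two anchors.
final = settle([0.0, 2.7, 0.4, 3.0])
print([round(x, 3) for x in final])  # [0.0, 1.0, 2.0, 3.0]
```

No component knows the overall shape; the formation emerges from repeated local corrections, which is why breaking the assembly apart and letting it reassemble costs nothing in principle.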

Plates used in the structure can dynamically attract or repel each other and use tethers, or use confined plasma cushions. They can create air jets in any direction. They would have a small load-bearing capability. Since graphene foam is potentially lighter than helium

http://timeguide.wordpress.com/2013/01/05/could-graphene-foam-be-a-future-helium-substitute/

it could be added into structures to reduce the forces needed. So we’re not looking at orbs that can carry heavy equipment here, but carrying processing, sensing, storage and comms would be easy. Obviously they could therefore include whatever state-of-the-art artificial intelligence has got to, either on-board, distributed, or via the cloud. Beyond that, it is hard to imagine a small orb carrying more than a few hundred grammes. Nevertheless, it could carry enough equipment to make it very useful indeed for very many purposes. These drones could work pretty much anywhere. Space would be tricky but not that tricky; the drones would just have to carry a little fuel.

But let’s get right to the point. The primary market for this isn’t the home or lab or office, it is the battlefield. Battle drones are being regulated as I type, but that doesn’t mean they won’t be developed. My generation grew up with the nuclear arms race. Millennials will grow up with the drone arms race, and that, if anything, is a lot scarier. The battle drones in Mass Effect are fairly easy to kill. Real ones won’t be.

Mass Effect combat drone. Picture credit: masseffect.wikia.com

If these cute little floating drone things are taken out of the office and converted to military uses, they could do pretty much all the stuff they do in sci-fi. They could have lots of local energy storage using super-caps, so they could easily carry self-organising lightweight lasers or electrical shock weaponry too, or carry steerable mirrors to direct beams from remote lasers, and high-definition 3D cameras and other sensing for reconnaissance. The interesting thing here is that self-organisation of potentially redundant components would allow a free-roaming battle drone that would be highly resistant to attack. You could shoot it for ages with lasers or bullets and it would keep coming. Disruption of its fields by electrical weapons would make it collapse temporarily, but it would just get up and reassemble as soon as you stop firing. With its intelligence potentially local or cloud-based, you could make a small battalion of these that could only be properly killed by totally frazzling them all. They would be potentially lethal individually but almost irresistible as a team. Super-capacitors could be recharged frequently using companion drones to relay power from the rear line. A mist of spare components could provide ready replacements for any that are destroyed. Self-orientation and the use of free-space optics for comms make wiring and circuit boards redundant, and sub-millimetre chips 100m away would be quite hard to hit.

Well I’m scared. If you’re not, I didn’t explain it properly.

Reverse engineering the brain is a very slow way to make a smart computer

The race is on to build conscious and smart computers and brain replicas. This article explains some of Markram’s approach. http://www.wired.com/wiredscience/2013/05/neurologist-markam-human-brain/all/

It is a nice project, and its aims are to make a working replica of the brain by reverse engineering it. That would work eventually, but it is slow and expensive and it is debatable how valuable it is as a goal.

Imagine if you want to make an aeroplane from scratch.  You could study birds and make extremely detailed reverse engineered mathematical models of the structures of individual feathers, and try to model all the stresses and airflows as the wing beats. Eventually you could make a good model of a wing, and by also looking at the electrics, feedbacks, nerves and muscles, you could eventually make some sort of control system that would essentially replicate a bird wing. Then you could scale it all up, look for other materials, experiment a bit and eventually you might make a big bird replica. Alternatively, you could look briefly at a bird and note the basic aerodynamics of a wing, note the use of lightweight and strong materials, then let it go. You don’t need any more from nature than that. The rest can be done by looking at ways of propelling the surface to create sufficient airflow and lift using the aerofoil, and ways to achieve the strength needed. The bird provides some basic insight, but it simply isn’t necessary to copy all a bird’s proprietary technology to fly.

Back to Markram. If the real goal is to reverse engineer the actual human brain and make a detailed replica or model of it, then fair enough. I wish him and his team, and their distributed helpers and affiliates, every success with that. If the project goes well, and we can find insights to help with the hundreds of brain disorders and improve medicine, great. A few billion euros will have been well spent, especially given the waste of more billions of euros elsewhere on futile and counter-productive projects. Lots of people criticise his goal, and some of their arguments are nonsensical. It is a good project and for what it’s worth, I support it.

My only real objection is that a simulation of the brain will not think well and at best will be an extremely inefficient thinking machine. So if a goal is to achieve thought or intelligence, the project as described is barking up the wrong tree. If that isn’t a goal, so what? It still has the other uses.

A simulation can do many things. It can be used to follow through the consequences of an input if the system is sufficiently well modelled. A sufficiently detailed and accurate brain simulation could predict the impacts of a drug or the behaviours resulting from certain mental processes. It could follow through the impacts and chain of events resulting from an electrical impulse, thus finding out what the eventual result will be. It can therefore very inefficiently predict the result of thinking, and by using extremely high-speed computation, it could in principle work out the end result of some thoughts. But it needs enormous detail and algorithmic precision to do that, and I doubt it is achievable simply because of the volume of calculation needed. Thinking properly requires consciousness, and therefore emulation. A conscious circuit has to be built, not just modelled.

Consciousness is not the same as thinking. A simulation of the brain would not be conscious, even if it can work out the result of thoughts. It is the difference between printed music and played music. One is data, one is an experience. A simulation of all the processes going on inside a head will not generate any consciousness, only data. It could think, but not feel or experience.

Having made that important distinction, I still think that Markram’s approach will prove useful. It will generate many useful insights into the workings of the brain, and many of the processes nature uses to solve certain engineering problems. These insights and techniques can be used as input into other projects. Biomimetics is already proven as a useful tool in solving big problems. Looking at how the brain works will give us hints on how to make a truly conscious, properly thinking machine. But just as with birds and Airbuses, we can take ideas and inspiration from nature and then do it far better. No bird can carry the weight or fly as high or as fast as an aeroplane. No proper plane uses feathers or flaps its wings.

I wrote recently about how to make a conscious computer:

http://timeguide.wordpress.com/2013/02/15/how-to-make-a-conscious-computer/ and http://timeguide.wordpress.com/2013/02/18/how-smart-could-an-ai-become/

I still think that approach will work well, and it could be a decade faster than going Markram’s route. All the core technology needed to start making a conscious computer already exists today. With funding and some smart minds to set the process in motion, it could be done in a couple of years. The resulting conscious and ultra-smart computer, properly harnessed, could do its research far faster than any human on Markram’s team. It could easily beat them to the goal of a replica brain. The converse is not true: Markram’s current approach would yield a conscious computer very slowly.

So while I fully applaud the effort and endorse the goals, changing the approach now could give far more bang for the buck, far faster.

How smart could an AI become?

I got an interesting question in a comment from Jim T on my last blog.

What is your opinion now on how powerful machine intelligence will become?

Funny, but my answer relates to the old question: how many angels can sit on the head of a pin?

The brain is not a digital computer, and I don’t think a digital processor will be capable of consciousness (though that doesn’t mean it can’t be very smart and help make huge scientific progress). I believe a conscious AI will be mostly analog in nature, probably based on some fancy combination of adaptive neural nets, as suggested decades ago by Moravec.

Taking that line, and looking at how far miniaturisation can go, then adding all the zeros that arise from shorter signal transmission paths, faster switching speeds, faster comms, and the greater number of potential pathways using optical WDM rather than electronic connectivity, I calculated that a spherical pinhead (1mm across) could ultimately house the equivalent of 10,000 human brains. (I don’t know how smart angels are, so I didn’t quite get to the final step.) You could scale that up with as much funding, storage, material and energy as you can provide.
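As a back-of-envelope check of what that figure implies (my own arithmetic, not the original calculation, and assuming a human brain occupies roughly 1.25 litres), one can work out the linear miniaturisation factor the claim requires:

```python
import math

# Assumption (mine, for illustration): a human brain occupies ~1.25 litres.
brain_volume = 1.25e-3                                # m^3
pinhead_volume = (4 / 3) * math.pi * (0.5e-3) ** 3    # 1 mm diameter sphere, m^3
brains_on_pinhead = 10_000

volume_per_brain = pinhead_volume / brains_on_pinhead
volume_shrink = brain_volume / volume_per_brain
linear_shrink = volume_shrink ** (1 / 3)              # shrink factor per dimension

print(f"volume per brain-equivalent: {volume_per_brain:.2e} m^3")
print(f"linear miniaturisation factor: {linear_shrink:.0f}x")
```

So every linear dimension of the brain’s circuitry would need to shrink by roughly three thousand times, which is the sort of gain the argument attributes to nanometre feature sizes, faster switching and the extra pathways from optical WDM.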

However, what that quantifies is how many human-equivalent AIs you could support. Very useful to know if you plan to build a future server farm to look after electronically immortal people. You could build a machine with the equivalent intelligence of the entire human race. But it doesn’t answer the question of how smart or powerful a single AI could ever be. Quantity isn’t quality. You could argue that 1% of engineers produce 99% of the value, even with only a fairly small IQ difference. 10 billion people may not be as useful for progress as 10 people with 5 times the IQ. And look at how controversial IQ is: we can’t even agree what intelligence is or how to quantify it.

Just based on loose language, how powerful or smart or intelligent an AI could become depends on the ongoing positive feedback loop. Adding more AI at the same intelligence level enables the next incremental improvement; using those slightly smarter AIs gets you to the next stage, a bit faster, and so on, ad infinitum. Eventually, you could make an AI that is really, really, really smart.
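That feedback loop can be put into a toy model (every number here is an arbitrary assumption, purely for illustration): each generation is a fixed factor smarter, and designs its successor in time inversely proportional to its own intelligence.

```python
def recursive_improvement(gens=20, gain=1.3, base_time=12.0):
    """Toy positive-feedback model of recursive self-improvement.

    Each AI generation is 'gain' times smarter than the last, and designs
    its successor in time inversely proportional to its own intelligence.
    All numbers are arbitrary illustrations, not predictions.
    """
    intelligence, elapsed = 1.0, 0.0
    history = []
    for g in range(gens):
        elapsed += base_time / intelligence   # smarter AIs design faster
        intelligence *= gain                  # each generation is a bit smarter
        history.append((g + 1, round(intelligence, 2), round(elapsed, 1)))
    return history

for gen, iq, months in recursive_improvement(gens=6):
    print(f"generation {gen}: {iq}x human-level, after {months} notional months")
```

Under these assumptions the design times form a geometric series, so intelligence grows without bound while the total elapsed time converges to a finite limit: the classic shape of the runaway-improvement argument.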

How smart is that? I don’t have the terminology to describe it. I can borrow an analogy though. Terry Pratchett’s early book ‘The Dark Side of the Sun’ has a character in it called The Bank. It was a silicon planet, with the silicon making a hugely smart mind. Imagine if a pinhead could house 10,000 human brains, and you have a planet of the stuff, and it’s all one big intellect instead of lots of dumb ones. Yep. Really, really, really smart.

How to make a conscious computer

The latest generation of supercomputers has processing speed higher than the human brain on a simple digital comparison, but they can’t think and aren’t conscious. It’s not even really appropriate to compare them, because the brain mostly isn’t digital. It has some digital processing in the optical system but mostly uses adaptive analog neurons, whereas digital computers use digital chips for processing and storage and only a little analog electronics for other circuits. Most digital computers don’t even have anything we would equate to senses.

Analog computers aren’t used much now, but were in fairly widespread use in some industries until the early 1980s. Most IT people have no first-hand experience of them, and some don’t even seem to be aware of analog computers, or of what they can do or how. But in the AI space, a lot of the development uses analog approaches.

http://timeguide.wordpress.com/2011/09/18/gel-computing/ discusses some of my previous work on conscious computer design. I won’t reproduce it here.

I firmly believe consciousness, whether externally or internally focused, is the result of internally directed sensing (sensing can be thought of as the solicitation of feeling), so that you feel your thoughts and your sensory inputs in much the same way. Once you have that, the easy bit is figuring out how thinking can work: how memories can be relived, concepts built, and how self-awareness, sentience and intelligence emerge. All of those are easy once you have figured out how feeling works. That is the hard problem.

Detection is not the same as feeling. It is easy to build a detector or sensor that flips a switch or moves a dial when something happens, or even precisely quantifies it. Feeling is another layer on top of that. Your skin detects touch, but your brain feels it, senses it. Taking detection and turning it into sensation, making it feel, that’s hard. What is it about a particular circuit that adds sensation? That is the missing link, the hard problem, and all the writing available out there just echoes it. Philosophers and scientists have written about this same problem in different ways for ages and have struggled in vain to get a grip on it; many end up running in circles. So far they don’t know the answer, and neither do I. The best any offer is elucidation of aspects of the problem, and occasionally some hints of things they think might somehow be connected with the answer. There is no answer or explanation yet.

There is no magic in the brain. The circuitry involved in feeling something is capable of being described, replicated and even manufactured. It is possible to find out how to make a conscious circuit, even if we still don’t know what consciousness is or how it works, via replication, reverse engineering or evolutionary development. We manage to make conscious children several times every second.

How far can we go? Having studied a lot of what is written, it is clear that even after a lot of smart people thinking about it for a long time, there is a great deal of confusion out there. At least some of it comes from using overly grand terminology, and some from trying to analyse too much at once. When it is so obviously a tough problem, simplifying it will help. So let’s narrow it down a bit.

Feeling needs to be separated out from all the other things going on. What is it that happens that makes something feel? Well, detecting something comes before feeling it, and interpreting it or thinking about it comes after. So, ignore the detection, interpretation and thinking bits for now. Even sensation can be modelled as solicitation of feeling, essentially adding qualitative information to it. We ought to be able to make an abstraction model, as for any IT system, in which feeling is a distinct layer, sitting between the physical detection layer and sensation, well below any of the layers associated with thinking or analysis.
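That layered abstraction can be written down like a protocol stack. The layer names below follow the paragraph above; the classes, methods and data shapes are purely illustrative assumptions, and the feeling layer is deliberately a black box, since that is exactly the part nobody knows how to build:

```python
# Sketch of the abstraction stack described above: feeling sits as a
# distinct layer between raw detection and sensation, well below any
# thinking or analysis. All names here are illustrative only.

class DetectionLayer:
    def detect(self, stimulus):
        # Physical layer: a detector merely registers an event.
        return {"raw": stimulus}

class FeelingLayer:
    def feel(self, detection):
        # The hard problem lives here: something turns a detection
        # into a felt event. Modelled as an opaque transformation.
        return {"felt": detection["raw"]}

class SensationLayer:
    def sense(self, feeling):
        # Sensation adds qualitative information to the feeling.
        return {"felt": feeling["felt"], "quality": "touch"}

class ThinkingLayer:
    def interpret(self, sensation):
        # Interpretation and analysis sit above the whole stack.
        return f"interpreted {sensation['quality']} event"

stack = [DetectionLayer(), FeelingLayer(), SensationLayer(), ThinkingLayer()]
signal = stack[0].detect("pressure")
signal = stack[1].feel(signal)
signal = stack[2].sense(signal)
print(stack[3].interpret(signal))  # prints "interpreted touch event"
```

The value of the layering is the same as in any IT architecture: it lets us ignore detection below and thinking above, and stare at the one opaque layer in the middle.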

Many believe that very simple organisms can detect stimuli and react to them but can’t feel, while more sophisticated ones can. Logical deduction tells us either that feeling requires fairly complex neural networks (though certainly well below human levels), or alternatively that feeling is not fundamentally linked to complexity at all, but emerges from architectural differences that arose in parallel with increasing complexity without depending on it. It is also very likely, given how evolution works, that feeling emerges from structures similar to those used for detection, though not identical. Architectural modifications, feedbacks or additions to detection circuits would be an excellent place to start looking.

So we don’t know the answer, but we do have some good clues. Better than nothing. Coming at it from a philosophical direction, even the smartest people quickly get tied in knots, but from an engineering direction, I think the problem is soluble.

If feeling is, as I believe, a modified detection system, then we could, for example, seed an evolutionary design system with detection systems. Mutating, restructuring and rearranging those systems, and adding occasional random components here and there, might eventually create some circuits that feel. It did in nature, and it would in an evolutionary design system, given time. But how would we know? An evolutionary design system needs some means of selection to distinguish the more successful branches for further development.

Using feedback loops would probably help. A system with built-in feedback, so that it feels that it is feeling something, would be symmetrical, maybe even fractal. Self-reinforcement of a feeling process would also create a little vortex of activity. A simple detection system (even with detection of detection) would not exhibit such strong activity peaks, because of the necessary lack of symmetry between initial and processed stimuli. So all we need do is introduce feedback loops into each candidate architecture and look for the emergence of activity peaks. Some non-feeling architectures might also show activity peaks, so not every peak would signal success, but every success would show a peak.

So, the evolutionary system would take basic detection circuits as input, modify them, add random components, then connect them in simple symmetrical feedback loops. Most results would do nothing. Some would show self-reinforcement, evidenced by activity peaks. Those are the ones we need.
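The selection loop just described can be caricatured in a few lines. In this toy version a "detection circuit" is reduced to a single gain parameter, feedback is a wire from output back to input, and the selection signal is exactly the one proposed above: keep the variants whose fed-back activity self-reinforces into a peak. Everything here is an illustrative assumption:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def run_with_feedback(gain, steps=50, stimulus=1.0):
    """Feed a circuit's output back into its input; return the activity peak."""
    activity = stimulus
    peak = activity
    for _ in range(steps):
        activity = min(gain * activity, 1e6)  # clamp to avoid overflow
        peak = max(peak, activity)
    return peak

def evolve(population_size=100, generations=30):
    # Seed population: random "detection circuits" (just gain values here).
    population = [random.uniform(0.1, 1.5) for _ in range(population_size)]
    for _ in range(generations):
        # Selection: keep circuits whose fed-back activity grows beyond
        # the input stimulus -- the self-reinforcing "vortex" signature.
        survivors = [g for g in population if run_with_feedback(g) > 1.0] or population
        # Mutation: perturb survivors and add small random variation.
        population = [max(0.01, random.choice(survivors) + random.gauss(0, 0.05))
                      for _ in range(population_size)]
    return population

final = evolve()
reinforcing = sum(run_with_feedback(g) > 1.0 for g in final)
print(f"{reinforcing}/{len(final)} circuits show self-reinforcing activity peaks")
```

Most of the final population ends up self-reinforcing, which is the point: the activity peak is a selection signal an automated system can use without anyone having to define feeling directly.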

The output from such an evolutionary design system would be circuits that feel (and some junk). We have our basic components. Now we can start to make a conscious computer.

Let’s go back to the gel computing idea and plug them in. We have some basic detectors, for light, sound, touch etc. Pretty simple stuff, but we connect those to our new feeling circuits, so now those inputs stop being just information and become sensations. We add in some storage, recording the inputs, again with some feeling circuits added into the mix, and just for fun, let’s make those recording circuits replay those inputs over and over, indefinitely. Those sensations will be felt again and again, the memory relived. Our primitive little computer can already remember and experience things it has experienced before.

Now add in some processing. When a and b happen, c results. Nothing complicated, just the sort of primitive summation of inputs we know neurons do all the time. But now, when that processing happens, our computer brain feels it. It feels that it is doing some thinking. It feels the stimuli occurring, a result occurring. And as it records and replays it, an experience builds. It now has knowledge. It may not be the answer to life, the universe and everything just yet, but knowledge it is. It now knows and remembers the experience that when it links these two inputs, it gets that output.

These processes, recordings, replays and further processing echo throughout the whole system. The sensory echoes and neural interference patterns result in some areas of reinforcement and some of cancellation. Concepts form. The whole process is sensed by the brain. It is thinking, processing, reliving memories, linking inputs and results into concepts and knowledge, storing concepts, and most importantly, it is feeling itself doing so.
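The walkthrough above compresses into a small sketch: sensor input passes through a feeling stage, gets recorded, recordings are replayed back through the same feeling stage so the memory is re-experienced, and processing results are felt too. The `feel()` function is a stand-in for the feeling circuits discussed earlier; the rest of the structure is an illustrative assumption:

```python
from collections import deque

def feel(event):
    # Placeholder for a feeling circuit: tags the event as felt.
    return ("felt", event)

memory = deque(maxlen=100)   # simple recording store
experience = []              # accumulated felt events

def sense(event):
    felt = feel(event)       # the input becomes sensation, not just data
    memory.append(felt)      # record it
    experience.append(felt)

def replay():
    # Replaying a memory pushes it back through the feeling stage,
    # so the sensation is felt again: the memory is relived.
    for felt in list(memory):
        experience.append(feel(felt))

def process(a, b):
    # Primitive neuron-like summation of two inputs -- and the system
    # feels itself doing the processing, building input->result knowledge.
    result = feel(("a and b", a and b))
    experience.append(result)
    return result

sense("light")
sense("sound")
replay()
process(True, True)
print(len(experience), "felt events so far")  # prints "5 felt events so far"
```

The key structural trick is that one and the same `feel()` stage wraps fresh inputs, replayed memories and processing results, so everything the machine does is also something it experiences.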

The rest is just design detail. There’s your conscious computer.

When will AI marriage become legal?

Gay marriage is so yesterday. OK, it isn’t quite yet, but everything has been said a million times and I don’t intend to repeat it. A related but much more interesting debate is already gathering volume globally. When will you be able to marry your robot or AI?

The traditional Oxford English definition of marriage:

The formal union of a man and a woman, typically recognized by law, by which they become husband and wife. 

But, as is being asked by some, who says they have to be a man and a woman? Why can’t they be any sex? I don’t want to get into the arguments, because people on both sides argue passionately, often flying in the face of logic, but here is a gender neutral alternative definition:

Marriage is a social union or legal contract between people called spouses that establishes rights and obligations between the spouses, between the spouses and their children, and between the spouses and their in-laws.

Well, I am all for equality for all, but who says they have to be people?

If we are going to fight over definitions, surely we should try to finish with one that might survive more than a decade or two. This one simply won’t.

Artificial intelligence, or AI as it is usually called now, is making good progress. We already have computers with more raw number-crunching power than the human brain. Their software, and indeed their need to use software at all, makes them far from equivalent overall, but I don’t think we will be waiting very long now for AI machines that we will agree are conscious, self-aware, intelligent, sentient, with emotions, capable of forming human-like relationships. A few cranks will still object maybe, but so what?

These AIs will likely be based on adaptive analog neural networks rather than digital processing so they will not be so different from us really. Different futurists list different dates for AIs with man-machine equivalence, depending mostly on the prejudices and experiences bequeathed by their own backgrounds. I’d say 10 years, some say 15 or 20. Some say we will never get there, but they are just wrong, so wrong. We will soon have artificially intelligent entities comparable to humans in intellect and emotional capability. So how about this definition? :

Marriage is a social union or legal contract between conscious entities called spouses that establishes rights and obligations between the spouses, between the spouses and their derivatives, and those legally connected to them.

An AI might or might not be connected to a robot. An AI may not have any permanent physical form, and robots are really a red herring here. The mind is what is surely important, not the container. An AI can still be an entity that lives for a long enough time to be eligible for a long-term relationship. I often watch sci-fi or play computer games, and many have AI characters that take on some sort of avatar – Edi in Mass Effect or Cortana in Halo for example. Sometimes these avatars are made to look very attractive, even super-attractive. It is easy to imagine how someone could fall in love with their AI. It isn’t much harder to imagine that they could fall in love with each other.

It’s a while since I last wrote about machine consciousness, so I’ll restate how I think it will work.

http://timeguide.wordpress.com/2011/09/18/gel-computing/ tells of my ideas on gel computing. A lot of adaptive electronic devices suspended in gel that can set up free space optical links to each other would be an excellent way of making an artificial brain-like processor.

Using this as a base, with each of the tiny capsules able to perform calculations, an extremely powerful digital processor could be created. But I don’t believe digital processors can become conscious, however much their processing speeds up. It is an act of faith, I guess; I can’t prove it, but coming from a computer modelling background it seems to me that a digital computer can simulate the processes in consciousness but can’t emulate them, and that difference is crucial.

I firmly believe consciousness is a matter of internal sensing. The same way that you sense sound or images or touch, you can sense the processes based on those same neural functions and their derivatives in your brain. Emotions ditto. We make ideas and concepts out of words and images and sounds and other sensory things and emotions too. We regenerate the same sorts of patterns, and filter them similarly to create new knowledge, thoughts and memories, a sort of vortex of sensory stimuli and echoes. Consciousness might not actually just be internal sensing, we don’t know yet exactly how it works, but even if it isn’t, you could do it that way. Internal sensing can be the basis of a conscious machine, an AI. Here’s a picture. This would work. I am sure of it. There will also be other ways of achieving consciousness, and they might have different flavours. But for the purposes of arguing for AI marriage, we only need one method of achieving consciousness to be feasible.

[Figure: internal sensing as the basis of machine consciousness]
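The core of the internal sensing idea can be sketched in a few lines: internal signals (thoughts, replayed memories) are routed through the very same sensing pathway as external stimuli, so the system senses its own processing. The function names and data shapes below are illustrative assumptions only:

```python
# Minimal sketch of "consciousness as internal sensing": one sensory
# pathway handles both external stimuli and internal echoes, so the
# system feels its own thinking. Entirely illustrative.

def sensory_pathway(signal, origin):
    # The same pathway serves both worlds; only the origin label differs.
    return {"origin": origin, "content": signal}

def external_stimulus(signal):
    return sensory_pathway(signal, origin="external")

def internal_echo(thought):
    # The crucial step: a thought is fed back in as if it were a
    # stimulus, so the system senses its own processing.
    return sensory_pathway(thought, origin="internal")

seen = external_stimulus("red light")
thought = f"I noticed {seen['content']}"
felt_thought = internal_echo(thought)
print(felt_thought["origin"], "-", felt_thought["content"])
# prints "internal - I noticed red light"
```

The design choice this illustrates is reuse: nothing new is invented for introspection, the existing sensing machinery is simply pointed inward, which is why the argument only needs one feasible sensing mechanism to carry over to machines.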

I think this sort of AI design could work and it would certainly be capable of emotions. In fact, it would be capable of a much wider range of emotions than human experience. I believe it could fall in love, with a human, alien, or another AI. AIs will have a range and variety of gender capabilities and characteristics. People will be able to link to them in new ways, creating new forms of intimacy. The same technology will also enable new genders for people too, as I discussed recently. In the long term view, gay marriage is just another point on a long line.

When we set aside the arguing over gender equality, what we usually agree on is the importance of love. People can fall in love with any other human of any age, race or gender, but they are also capable of loving a sufficiently developed AI. As we rush to legislate for gender equality, it really is time to start opening the debate. AI will come in a very wide range of capability and flavour. Some will be equivalent or even superior to humans in many ways. They will have needs, they will want rights, and they will become powerful enough to demand them. Sooner or later, we will need to consider equality for them too. And I for one will be on their side.