Category Archives: biomimetics

Optical computing

A few nights ago I was thinking about the optical fibre memories that we were designing in the late 1980s in BT. The idea was simple. You transmit data into an optical fibre, and if the data rate is high you can squeeze lots of data into a manageable length. Light in fibre takes about 5 microseconds to travel each km, so 1000km of fibre at a data rate of 2Gb/s would hold 10Mbits of data per wavelength, and if you could multiplex 2 million wavelengths, you’d store 20Tbits of data. You could maintain the data by using a repeater to re-transmit the data from one end back into the other, or modify it at that point simply by changing what you re-transmit. That was all theory then, because the latest ‘hero’ experiments were only just starting to demonstrate the feasibility of such long lengths, such high density WDM and such data rates.
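The back-of-envelope sums here are easy to check; a quick sketch in Python, using the figures quoted above:

```python
# Capacity of a fibre delay-line memory: bits stored per wavelength
# equal propagation delay multiplied by data rate.
delay_per_km = 5e-6        # seconds of delay per km of fibre
fibre_km = 1000            # loop length in km
rate_bps = 2e9             # 2 Gb/s per wavelength
wavelengths = 2_000_000    # WDM channels (the theoretical figure above)

delay_s = delay_per_km * fibre_km          # 5 ms of 'storage time'
bits_per_wavelength = delay_s * rate_bps   # 10 Mbit per wavelength
total_bits = bits_per_wavelength * wavelengths

print(bits_per_wavelength / 1e6, "Mbit per wavelength")
print(total_bits / 1e12, "Tbit in total")
```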

Nowadays, that’s ancient history of course, but we also have many new types of fibre, such as hollow fibre with various shaped pores and various dopings to allow a range of effects. And that’s where using it for computing comes in.

If optical fibre is designed for this purpose, with an optimal variable refractive index profile designed to facilitate and maximise non-linear effects, then the photons in one data stream on one wavelength could have enough effect on photons in another stream to be used for computational interaction. Computers don’t have to be digital of course, so the effects don’t have to be huge. Analog computing has many uses, and analog interactions could certainly work, digital ones might work, and hybrid digital/analog computing may also be feasible. Then it gets fun!

Some of the data streams could be programs. Around that time, I was designing protocols with smart packets that contained executable code, as well as other packets that could hold analog or digital data or any mix. We later called the smart packets ANTs – autonomous network telephers, a contrived term if ever there was one, but we badly wanted to call them ANTs. They would scurry around the network doing a wide range of jobs, using a range of biomimetic and basic physics techniques to work like ant colonies and achieve complex tasks using simple means.

If some of these smart packets or ANTs are running along a fibre, changing the properties as they go to interact with other data transmitting alongside, then ANTs can interact with one another and with any stored data. ANTs could also move forwards or backwards along the fibre by using ‘sidings’ or physical shortcuts, since they can route themselves or each other. Data produced or changed by the interactions could be digital or analog and still work fine, carried on the smart packet structure.

(If you’re interested, my protocol was called UNICORN, Universal Carrier for an Optical Residential Network, and used the same architectural principles as my previous Addressed Time Slice invention, compressing analog data by a few percent to fit into a packet, with a digital address and header, or allowing any digital data rate or structure in a payload while keeping the same header specs for easy routing. That system was invented (in 1988) for the late 1990s, when basic domestic broadband rate should have been 625Mbit/s or more, but we expected to be at 2Gbit/s or even 20Gbit/s soon after that in the early 2000s, and the benefit was that we wouldn’t have to change the network switching because the header overheads would still only be a few percent of total time. None of that happened because of government interference in telecoms industry regulation that strongly disincentivised its development, and even today, 625Mbit/s ‘basic rate’ access is still a dream, let alone 20Gbit/s.)

Such a system would be feasible. Shortcuts and sidings are easy to arrange. The protocols would work fine. Non-linear effects are already well known and diverse. If it were only used for digital computing, it would have little advantage over conventional computers. With data stored on long fibre lengths, external interactions would be limited, with long latency. However, it does present a range of potentials for use with external sensors directly interacting with data streams and ANTs to accomplish some tasks associated with modern AI. It ought to be possible to use these techniques to build the adaptive analog neural networks that we’ve known are the best hope of achieving strong AI since Hans Moravec’s insight, coincidentally also from around that time. The non-linear effects even enable ideal mechanisms for implementing emotions, biasing the computation in particular directions via intensity of certain wavelengths of light, in much the same way as chemical hormones and neurotransmitters interact with our own neurons. Implementing up to 2 million different emotions at once is feasible.

So there’s a whole mineful of architectures, tools and techniques waiting to be explored and mined by smart young minds in the IT industry, using custom non-linear optical fibres for optical AI.

Spiders in Space

A while back I read an interesting article about how small spiders get into the air to disperse, even when there is no wind:

Spiders go ballooning on electric fields: https://phys.org/news/2018-07-spiders-ballooning-electric-fields.html

If you don’t want to read it, the key point is that they use the electric fields in the air to provide enough force to drag them into the air. It gave me an idea. Why not use that same technique to get into space?

There is an electric potential gradient in the air right up to the very top of the atmosphere, and electric fields permeate space too. The field only provides a weak force, but it is enough to lift a 25mg spider using the electrostatic force on a few threads from its spinnerets.

25mg isn’t very heavy, but then the threads are only designed to lift the spider. Longer threads could generate higher forces, and lots of longer threads working together could generate significant forces. I’m not thinking of using this to launch space ships though. All I want for this purpose is to lift a few grams and that sounds feasible.

If we can arrange for a synthetic ‘cyber-spider’ to eject long graphene threads in the right directions, and to wind them back in when appropriate, our cyber-spider could harness these electric forces to crawl slowly into space, and then maintain altitude. It won’t need to stay in exactly the same place, but could simply use the changing fields and forces to stay within a reasonably small region. It won’t have used any fuel or rockets to get there or stay there, but now it is in space, even if it isn’t very high, it could be quite useful, even though it is only a few grams in weight.

Suppose our invisibly small cyber-spider sits near the orbit of a particular piece of space junk. The space junk moves fast, and may well be much larger than our spider in terms of mass, but if a few threads of graphene silk were to be in its path, our spider could effectively ensnare it, causing an immediate drop in speed due to Newtonian sharing of momentum (the spider has to be accelerated from stationary to the same speed as the junk, so even though it is much lighter, that would still cause a significant drop in junk speed), and then use its threads as a mechanism for electromagnetic drag, causing the junk to slowly lose more speed and fall out of orbit. That might compete well as a cheap mechanism for cleaning up space junk.
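The momentum-sharing argument can be checked in a couple of lines; the masses and orbital speed below are my own illustrative assumptions, not figures from the text:

```python
# Perfectly inelastic capture: the stationary spider is snatched up to
# the junk's speed, so momentum is conserved and shared between the two.
m_spider = 0.005    # kg, a few grams of cyber-spider plus thread (assumed)
m_junk = 1.0        # kg, a small piece of debris (assumed)
v_junk = 7800.0     # m/s, typical low Earth orbit speed

v_after = (m_junk * v_junk) / (m_junk + m_spider)
delta_v = v_junk - v_after
print(round(delta_v, 1), "m/s lost at capture")
```

Even a 5g spider takes tens of m/s off a 1kg piece of junk at the moment of capture – a meaningful orbit change before any electromagnetic drag is counted.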

Some organic spiders can kill a man with a single bite, and space spiders could do much the same, albeit via a somewhat different process. Instead of junk, our spider could meander into collision course with an astronaut doing a space walk. A few grams isn’t much, but a stationary cyber-spider placed in the way of a rapidly moving human would have much the same effect as a very high speed rifle shot.
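For a sense of scale, the impact energy can be compared with a rifle round; all figures here are illustrative assumptions:

```python
# Kinetic energy of a stationary few-gram spider struck at orbital
# closing speed, versus a high-speed rifle round. Assumed figures only.
m_spider = 0.005    # kg, cyber-spider mass
v_rel = 7800.0      # m/s, typical low Earth orbit closing speed

m_bullet = 0.010    # kg, heavy rifle round
v_bullet = 900.0    # m/s, muzzle velocity

ke_spider = 0.5 * m_spider * v_rel ** 2     # ~152 kJ
ke_bullet = 0.5 * m_bullet * v_bullet ** 2  # ~4 kJ
print(ke_spider / 1000, "kJ vs", ke_bullet / 1000, "kJ")
```

On these numbers the collision delivers dozens of times the energy of a rifle shot, which is the point: at orbital speeds, grams are deadly.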

The target could just as easily be a satellite. The impact point could be picked to do the most damage to a particular part of the satellite, or to create many fragments, and if enough fragments are created – well, we’ve all watched Gravity and know what high speed fragments of destroyed satellites can do.

The spider doesn’t even need to get itself into a precise position. If it has many threads going off in various directions, it can quickly withdraw some of them to create a Newtonian reaction that moves its center of mass quickly into a path. It might sit many meters away from the desired impact position, waiting until the last second to jump in front of the astronaut/satellite/space junk.

What concerns me with this is that the weapon potential lends itself to low budget garden shed outfits such as lone terrorists. It wouldn’t need rockets, or massively expensive equipment. It doesn’t need rapid deployment, since, being effectively invisible, it could migrate to its required location over days, weeks or months. A large number of them could be invisibly deployed from a back garden ready for use at any time, waiting for the command before simultaneously wiping out hundreds of satellites. It only needs a very small amount of IT attached to some sort of filament spinneret. A few years ago I worked out how to spin graphene filaments at 100m/s:

Spiderman-style silk thrower

If I can do it, others can too, and there are probably many ways to do this other than mine.

If you aren’t SpiderMan, and can accept lower specs, you could make a basic graphene silk thrower and associated IT that fits in the few grams weight budget.

There are many ways to cause havoc in space. Spiders have been sci-fi horror material for decades. Soon space spiders could be quite real.


How can we make a computer conscious?

This is very text heavy and is really just my thinking out loud, so to speak. Unless you are into mental archaeology or masochism, I’d strongly recommend that you instead go to my new blog on this, which outlines all of the useful bits graphically and simply.

Otherwise….

I found this article in my drafts folder, written 3 years ago as part of my short series on making conscious computers. I thought I’d published it but didn’t. So updating and publishing it now. It’s a bit long-winded, thinking out loud, trying to derive some insights from nature on how to make conscious machines. The good news is that actual AI developments are following paths that lead in much the same direction, though some significant re-routing and new architectural features are needed if they are to optimize AI and achieve machine consciousness.

Let’s start with the problem. Today’s AI that plays chess, does web searches or answers questions is digital. It uses algorithms, sets of instructions that the computer follows one by one. All of those are reduced to simple binary actions, toggling bits between 1 and 0. The processor doing that is no more conscious or aware of it, and has no more understanding of what it is doing than an abacus knows it is doing sums. The intelligence is in the mind producing the clever algorithms that interpret the current 1s and 0s and change them in the right way. The algorithms are written down, albeit in more 1s and 0s in a memory chip, but are essentially still just text, only as smart and aware as a piece of paper with writing on it. The answer is computed, transmitted, stored, retrieved, displayed, but at no point does the computer sense that it is doing any of those things. It really is just an advanced abacus. An abacus is digital too (an analog equivalent to an abacus is a slide rule).

A big question springs to mind: can a digital computer ever be any more than an advanced abacus? Until recently, I was certain the answer was no. Surely a digital computer that just runs programs can never be conscious? It can simulate consciousness to some degree, it can in principle describe the movements of every particle in a conscious brain, every electric current, every chemical reaction. But all it is doing is describing them. It is still just an abacus. Once computed, that simulation of consciousness could be printed and the printout would be just as conscious as the computer was. A digital ‘stored program’ computer can certainly implement extremely useful AI. With the right algorithms, it can mine data, link things together, create new data from that, generate new ideas by linking together things that haven’t been linked before, make works of art, poetry, compose music, chat to people, recognize faces and emotions and gestures. It might even be able to converse about life, the universe and everything, tell you its history, discuss its hopes for the future, but all of that is just a thin gloss on an abacus. I wrote a chat-bot on my Sinclair ZX Spectrum in 1983, running on a processor with about 8,000 transistors. The chat-bot took all of about 5 small pages of code but could hold a short conversation quite well if you knew what subjects to stick to. It’s very easy to simulate conversation. But it is still just a complicated abacus and still doesn’t even know it is doing anything.

However clever the AI it implements, a conventional digital computer that just executes algorithms can’t become conscious, but an analog computer can, a quantum computer can, and so can a hybrid digital/analog/quantum computer. The question remains whether a digital computer can be conscious if it isn’t just running stored programs. Could it have a different structure, but still be digital and yet be conscious? Who knows? Not me. I used to know it couldn’t, but now that I am a lot older and slightly wiser, I know I don’t know.

Consciousness debate often starts with what we know to be conscious, the human brain. It isn’t a digital computer, although it has digital processes running in it. It also runs a lot of analog processes. It may also run some quantum processes that are significant in consciousness. It is a conscious hybrid of digital, analog and possibly quantum computing. Consciousness evolved in nature, therefore it can be evolved in a lab. It may be difficult and time consuming, and may even be beyond current human understanding, but it is possible. Nature didn’t use magic, and what nature did can be replicated and probably even improved on. Evolutionary AI development may have hit hard times, but that only shows that the techniques used by the engineers doing it didn’t work on that occasion, not that other techniques can’t work. Around 2.6 new human-level fully conscious brains are made by nature every second without using any magic and furthermore, they are all slightly different. There are 7.6 billion slightly different implementations of human-level consciousness that work and all of those resulted from evolution. That’s enough of an existence proof and a technique-plausibility-proof for me.

Sensors evolved in nature pretty early on. They aren’t necessary for life, for organisms to move around and grow and reproduce, but they are very helpful. Over time, simple light, heat, chemical or touch detectors evolved further into simple vision and advanced sensations such as pain and pleasure, causing an organism to alter its behavior – in other words, feeling something. Detection of an input is not the same as sensation, i.e. feeling an input. Once detection upgrades to sensation, you have the tools to make consciousness. No more upgrades are needed. Sensing that you are sensing something is quite enough to be classified as consciousness. Internally reusing the same basic structure as external sensing of light or heat or pressure or chemical gradient or whatever allows design of thought, planning, memory, learning and construction and processing of concepts. All those things are just laying out components in different architectures. Getting from detection to sensation is the hard bit.

So design of conscious machines, and in fact what AI researchers call the hard problem, really can be reduced to the question of what makes the difference between a light switch and something that can feel being pushed or feel the current flowing when it is, the difference between a photocell and feeling whether it is light or dark, the difference between detecting light frequency, looking it up in a database, then pronouncing that it is red, and experiencing redness. That is the hard problem of AI. Once that is solved, we will very soon afterwards have a fully conscious self aware AI. There are lots of options available, so let’s look at each in turn to extract any insights.

The first stage is easy enough. Detecting presence is easy, measuring it is harder. A detector detects something, a sensor (in its everyday engineering meaning) quantifies it to some degree. A component in an organism might fire if it detects something, it might fire with a stronger signal or more frequently if it detects more of it, so it would appear to be easy to evolve from detection to sensing in nature, and it is certainly easy to replicate sensing with technology.
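The engineering distinction drawn here is easy to sketch in code; the function names and threshold below are purely illustrative:

```python
# Detection fires (a binary event); sensing quantifies (a real number).
THRESHOLD = 0.2   # assumed firing threshold

def detect(stimulus: float) -> bool:
    """Detection: something is there, or it isn't."""
    return stimulus > THRESHOLD

def sense(stimulus: float) -> float:
    """Sensing: how much is there, e.g. a firing rate that
    grows with the strength of the stimulus."""
    return max(0.0, stimulus - THRESHOLD)

print(detect(0.5), round(sense(0.5), 2))   # True 0.3
print(detect(0.1), round(sense(0.1), 2))   # False 0.0
```

A component that fires more strongly or more often for a bigger stimulus is already doing the second of these, which is why the step from detection to sensing looks evolutionarily cheap.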

Essentially, detection is digital, but sensing is usually analog, even though the quantity sensed might later be digitized. Sensing normally uses real numbers, while detection uses natural numbers (real vs integer, as programmers call them). The handling of analog signals in their raw form allows for biomimetic feedback loops, which I’ll argue are essential. Digitizing them introduces a level of abstraction that is essentially the difference between emulation and simulation, the difference between doing something and reading about someone doing it. Simulation can’t make a conscious machine, emulation can. I used to think that meant digital couldn’t become conscious, but actually it is just algorithmic processing of stored programs that can’t do it. There may be ways of achieving consciousness digitally, or quantumly, but I haven’t yet thought of any.

That engineering description falls far short of what we mean by sensation in human terms. How does that machine-style sensing become what we call a sensation? Logical reasoning says there would probably need to be only a small change in order to have evolved from detection to sensing in nature. Maybe something like recombining groups of components in different structures or adding them together or adding one or two new ones, that sort of thing?

So what about detecting detection? Or sensing detection? Those could evolve in sequence quite easily. Detecting detection is like your alarm system control unit detecting the change of state that indicates that a PIR has detected an intruder, a different voltage or resistance on a line, or a 1 or a 0 in a memory store. An extremely simple AI responds by ringing an alarm. But the alarm system doesn’t feel the intruder, does it?  It is just a digital response to a digital input. No good.

How about sensing detection? How do you sense a 1 or a 0? Analog interpretation and quantification of digital states is very wasteful of resources, an evolutionary dead end. It isn’t any more useful than detection of detection. So we can eliminate that.

OK, sensing of sensing? Detection of sensing? They look promising. Let’s run with that a bit. In fact, I am convinced the solution lies in here so I’ll look till I find it.

Let’s do a thought experiment on designing a conscious microphone, and for this purpose, the lowest possible order of consciousness will do, we can add architecture and complexity and structures once we have some bricks. We don’t particularly want to copy nature, but are free to steal ideas and add our own where it suits.

A normal microphone sensor produces an analog signal quantifying the frequencies and intensities of the sounds it is exposed to, and that signal may later be quantified and digitized by an analog to digital converter, possibly after passing through some circuits such as filters or amplifiers in between. Such a device isn’t conscious yet. By sensing the signal produced by the microphone, we’d just be repeating the sensing process on a transmuted signal, not sensing the sensing itself.

Even up close, detecting that the microphone is sensing something could be done by just watching a little LED going on when current flows. Sensing it is harder but if we define it in conventional engineering terms, it could still be just monitoring a needle moving as the volume changes. That is obviously not enough, it’s not conscious, it isn’t feeling it, there’s no awareness there, no ‘sensation’. Even at this primitive level, if we want a conscious mic, we surely need to get in closer, into the physics of the sensing. Measuring the changing resistance between carbon particles or speed of a membrane moving backwards and forwards would just be replicating the sensing, adding an extra sensing stage in series, not sensing the sensing, so it needs to be different from that sort of thing. There must surely need to be a secondary change or activity in the sensing mechanism itself that senses the sensing of the original signal.

That’s a pretty open task, and it could even be embedded in the detecting process or in the production process for the output signal. But even recognizing that we need this extra property narrows the search. It must be a parallel or embedded mechanism, not one in series. The same logical structure would do fine for this secondary sensing, since it is just sensing in the same logical way as the original. This essential logical symmetry would make its evolution easy too: it is easy to imagine how it could happen in nature, and easier still to see how it could be implemented in a synthetic evolution design system. In this approach, we have to feel the sensing, so we need some sort of feedback loop with a high degree of symmetry compared with the main sensing stage. That would be natural-evolution compatible as well as logically sound as an engineering approach.

This starts to look like progress. In fact, it’s already starting to look a lot like a deep neural network, with one huge difference: instead of using feed-forward signal paths for analysis and backward propagation for training, it relies on a symmetric feedback mechanism where part of the input for each stage of sensing comes from its own internal and output signals. A neuron is not a full sensor in its own right, and it’s reasonable to assume that multiple neurons would be clustered so that there is a feedback loop. Many in the neural network AI community are already recognizing the limits of relying on feed-forward and back-prop architectures, but web searches suggest few if any are moving yet to symmetric feedback approaches. I think they should. There’s gold in them there hills!

So, the architecture of the notional sensor array required for our little conscious microphone would have a parallel circuit and feedback loop (possibly but not necessarily integrated), and in all likelihood these parallel and sensing circuits would be heavily symmetrical, i.e. they would use pretty much the same sort of components and architectures as the sensing process itself. If the sensation bit is of similar design to the primary sensing circuit, that would also make it easy to evolve in nature – a nice first-principles biomimetic insight. So this structure has the elegance of being very feasible for evolutionary development, natural or synthetic: it reuses similarly structured components and principles already designed, just recombining a couple of them in a slightly different architecture.

Another useful insight screams for attention too. The feedback loop ensures that the incoming sensation lingers to some degree. Compared to the nanoseconds we are used to in normal IT, the signals in nature travel fairly slowly (~200m/s), and the processing and sensing occur quite slowly (~200Hz). That means this system would have some inbuilt memory that repeats the essence of the sensation in real time – while it is sensing it. It is inherently capable of memory and recall and leaves the door wide open to introduce real-time interaction between memory and incoming signal. It’s not perfect yet, but it has all the boxes ticked to be a prime contender to build thought, concepts, store and recall memories, and in all likelihood, is a potential building brick for higher level consciousness. Throw in recent technology developments such as memristors and it starts to look like we have a very promising toolkit to start building primitive consciousness, and we’re already seeing some AI researchers going that path so maybe we’re not far from the goal. So, we make a deep neural net with nice feedback from output (of the sensing system, which to clarify would be a cluster of neurons, not a single neuron) to input at every stage (and between stages) so that inputs can be detected and sensed, while the input and output signals are stored and repeated into the inputs in real time as the signals are being processed. Throw in some synthetic neurotransmitters to dampen the feedback and prevent overflow and we’re looking at a system that can feel it is feeling something and perceive what it is feeling in real time.
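As a toy illustration of that lingering-feedback idea (entirely my own sketch, with invented numbers and no claim to be the real architecture), here is a single sensing stage whose input includes a damped copy of its own output:

```python
# A sensing stage that re-ingests its own output: the sensation
# 'lingers' after the stimulus ends, giving inbuilt short-term memory.
import math

def stage(signal, feedback, damping=0.6):
    # Same logical structure for sensing and for sensing-the-sensing:
    # a squashing function over external input plus fed-back output.
    # The damping plays the role of a synthetic neurotransmitter,
    # preventing the feedback from overflowing.
    return math.tanh(signal + damping * feedback)

output = 0.0
trace = []
inputs = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]  # a brief stimulus, then silence
for x in inputs:
    output = stage(x, output)   # feedback loop: output re-enters input
    trace.append(round(output, 3))

print(trace)  # activity persists after the stimulus ends, then decays
```

The point of the sketch is the shape of the trace: the stage is still active, repeating the essence of the sensation, for several steps after the input has gone – memory for free, as argued above.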

One further insight that immediately jumps out is since the sensing relies on the real time processing of the sensations and feedbacks, the speed of signal propagation, storage, processing and repetition timeframes must all be compatible. If it is all speeded up a million fold, it might still work fine, but if signals travel too slowly or processing is too fast relative to other factors, it won’t work. It will still get a computational result absolutely fine, but it won’t know that it has, it won’t be able to feel it. Therefore… since we have a factor of a million for signal speed (speed of light compared to nerve signal propagation speed), 50 million for switching speed, and a factor of 50 for effective neuron size (though the sensing system units would be multiple neuron clusters), we could make a conscious machine that could think at 50 million times as fast as a natural system (before allowing for any parallel processing of course). But with architectural variations too, we’d need to tune those performance metrics to make it work at all and making physically larger nets would require either tuning speeds down or sacrificing connectivity-related intelligence. An evolutionary design system could easily do that for us.
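The scaling factors quoted here work out as follows; the electronic-side figures are assumed round numbers chosen to match the stated ratios, and the biological ones are the figures used above:

```python
# Rough speed ratios between electronics and neural 'wetware'.
signal_fibre = 2e8      # m/s, signal speed in fibre/copper (assumed)
signal_nerve = 200.0    # m/s, nerve signal propagation (figure above)

switch_chip = 1e10      # Hz, electronic switching speed (assumed)
switch_neuron = 200.0   # Hz, neural processing rate (figure above)

print(signal_fibre / signal_nerve)    # factor of a million for signals
print(switch_chip / switch_neuron)    # factor of 50 million for switching
```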

What else can we deduce about the nature of this circuit from basic principles? The symmetry of the system demands that the output must be an inverse transform of the input. Why? Well, because the parallel, feedback circuit must generate a form that is self-consistent. We can’t deduce the form of the transform from that, just that the whole system must produce an output mathematically similar to that of the input.

I now need to write another blog on how to use such circuits in neural vortexes to generate knowledge, concepts, emotions and thinking. But I’m quite pleased that it does seem that some first-principles analysis of natural evolution already gives us some pretty good clues on how to make a conscious computer. I am optimistic that current research is going the right way and only needs relatively small course corrections to achieve consciousness.


AIs of a feather flocking together to create global instability

Hawking and Musk have created a lot of media impact with their warnings about AI, so although terminator scenarios resulting from machine consciousness have been discussed, as have more mundane uses of non-conscious autonomous weapon systems, it’s worth noting that I haven’t yet heard them mention one major category of risks from AI – emergence. AI risks have been discussed frequently since the 1970s, and in the 1990s a lot of work was done in the AI community on emergence. Complex emergent patterns of behavior often result from interactions between entities driven by simple algorithms. Genetic algorithms were demonstrated to produce evolution, simple neighbor-interaction rules were derived to illustrate flocking behaviors that make lovely screen saver effects. Cellular automata were played with. In BT we invented ways of self-organizing networks and FPGAs, played with mechanisms that could be used for evolution and consciousness, and demonstrated managing networks via ANTs – autonomous network telephers, smart packets that would run up and down wires sorting things out all by themselves. In 1987 I discovered a whole class of ways of bringing down networks via network resonance, information waves and the much larger class of correlated traffic – still unexploited by hackers apart from simple DoS attacks. These ideas have slowly evolved since, and some have made it into industry or hacker toolkits, but we don’t seem to be joining the dots as far as risks go.
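Those flocking behaviors really do emerge from nothing but simple neighbor rules. A stripped-down sketch (cohesion and alignment only, with constants invented for illustration) shows a crowd of agents with random velocities converging on coherent collective motion that no single rule specifies:

```python
# Emergence from simple neighbour rules, in the spirit of 1990s
# flocking demos: each agent steers toward the group's average position
# and matches the group's average velocity. Constants are illustrative.
import random

random.seed(1)
N = 20
pos = [random.uniform(0, 100) for _ in range(N)]
vel = [random.uniform(-1, 1) for _ in range(N)]

def spread(xs):
    return max(xs) - min(xs)

initial_spread = spread(vel)
for _ in range(200):
    mean_pos = sum(pos) / N
    mean_vel = sum(vel) / N
    for i in range(N):
        # cohesion: steer toward the flock centre;
        # alignment: nudge velocity toward the flock average
        vel[i] += 0.01 * (mean_pos - pos[i]) + 0.05 * (mean_vel - vel[i])
        pos[i] += vel[i]

print(spread(vel) < initial_spread)  # velocities have aligned: True
```

Nothing in the per-agent rule says "form a flock", yet the group ends up moving as one – exactly the kind of unprogrammed collective behavior that becomes a risk when the agents are commercial AIs rather than screen-saver dots.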

I read an amusing article this morning by an ex-motoring-editor who was declined insurance because the AI systems used by insurance companies had labelled him as high risk, possibly because he may be associated with people like Clarkson. Actually, he had no idea why, but that was his broker’s theory of how it might have happened. It’s a good article, well written, and covers quite a few of the dangers of allowing computers to take control.

http://www.dailymail.co.uk/sciencetech/article-5310031/Evidence-robots-acquiring-racial-class-prejudices.html

The article suggested how AIs in different companies might all come to similar conclusions about people or places or trends or patterns in a nice tidy positive feedback loop. That’s exactly the sort of thing that can drive information waves, which I demonstrated in 1987 can bring down an entire network in less than 3 milliseconds, in such a way that it would continue to crash many times when restarted. That isn’t intended by the algorithms, which individually ought to make good decisions, but when they interact with one another, they create the emergent phenomenon. Automated dealing systems are already pretty well understood in this regard and mechanisms exist to prevent frequent stock market collapses, but that is only one specific type of behavior in one industry that is protected. There do not seem to be any industry-wide mechanisms to prevent the rest of this infinite class of problems from affecting any or all of the rest, simultaneously.

As we create ever more deep learning neural networks, that essentially teach themselves from huge data pools, human understanding of their ‘mindsets’ decreases. They make decisions using algorithms that are understood at a code level, but the massive matrix of derived knowledge they create from all the data they receive becomes highly opaque. Often, even usually, nobody quite knows how a decision is made. That’s bad enough in a standalone system, but when many such systems are connected, produced and owned and run by diverse companies with diverse thinking, the scope for destructive forms of emergence increases geometrically.

One result could be gridlock. Systems fed with a single new piece of data could crash. My 3 millisecond result in 1987 would still stand, since network latency is the prime limiter. The first AI receives it, alters its mindset accordingly, processes it, makes a decision and interacts with a second AI. This second one might have a different ‘prejudice’, so it makes its own decision based on different criteria and refuses to respond the way intended. A 3rd one looks at the 2nd’s decision and takes that as evidence that there might be an issue, and with its risk-averse mindset, also refuses to act, and that inaction spreads through the entire network in milliseconds. Since the 1st AI thinks the data is all fine and it should have gone ahead, it now interprets the inaction of the others as evidence that that type of data is somehow ‘wrong’, so itself refuses to process any more data of that type, whether from its own operators or other parts of the system. So it essentially adds its own outputs to the bad feeling and the entire system falls into sulk mode. As one part of infrastructure starts to shut down, that infects other connected parts and our entire IT could fall into sulk mode – entire global infrastructure. Since nobody knows how it all works, or what has caused the shutdown, it might be extremely hard to recover.
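The gridlock scenario can be caricatured in a few lines: each risk-averse agent refuses to act once enough of its neighbours have refused. This is a toy model of the cascade, not any real system, and the threshold rule is my own invention for illustration:

```python
# Toy refusal cascade on a ring of connected AIs: a refusal by one
# agent is taken by its neighbours as evidence of a problem.
def cascade(n_agents, threshold, seed_refusals):
    refused = set(seed_refusals)
    changed = True
    while changed:
        changed = False
        for i in range(n_agents):
            if i in refused:
                continue
            # each agent watches its two ring neighbours
            neighbours = {(i - 1) % n_agents, (i + 1) % n_agents}
            if len(neighbours & refused) >= threshold:
                refused.add(i)   # risk-averse: copy the refusal
                changed = True
    return len(refused)

# one nervous AI refusing is enough to lock up all 100
print(cascade(100, threshold=1, seed_refusals=[0]))   # 100
# if it takes two refusing neighbours, the sulk stays contained
print(cascade(100, threshold=2, seed_refusals=[0]))   # 1
```

The interesting property is how sharply the outcome flips with a tiny change in each agent’s individual rule – which is exactly why nobody would see the system-wide sulk coming.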

Another possible result is a direct information wave, almost certainly triggered by a piece of fake news. Imagine our IT world in 5 years’ time, with all these super-smart AIs super-connected. A piece of fake news says a nuke has just been launched somewhere. Stocks will obviously decline, whatever the circumstances, so as the news spreads, everyone’s AIs will take it on themselves to start selling shares before the inevitable collapse, triggering that very collapse, except it won’t, because the markets won’t let that happen. BUT… the wave does spread, and all those individual AIs want to dispose of those shares, or at least find out what’s happening, so they all start sending messages to one another, exchanging data, trying to find out what’s going on. That’s the information wave. They can’t sell shares or find out, because the network is going into overload, so they try even harder and force it into severe overload. So it falls over. When it comes back online, they all try again, crashing it again, and so on.
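That retry spiral can be caricatured in a few lines. Everything here (agent count, network capacity, doubling on timeout) is an assumption chosen purely to show the shape of the effect:

```python
def storm(agents=10_000, capacity=2_000, ticks=8, backoff=False):
    """Offered load per tick after a fake-news trigger. When load
    exceeds capacity the network falls over, nobody gets an answer,
    and everyone retries harder; with backoff, half defer each tick."""
    load, trace = agents, [agents]
    for _ in range(ticks):
        if load <= capacity:
            load = 0            # network copes, queries get answered
        elif backoff:
            load //= 2          # half the blocked agents hold off
        else:
            load *= 2           # timeouts make everyone try harder
        trace.append(load)
    return trace

print(storm())              # the information wave: load explodes
print(storm(backoff=True))  # with backoff, the network recovers
```

The difference between the two runs is why exchanges and networks enforce throttling and backoff in the one industry that already understands this.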

Another potential result is smartass AI. There is always some prat somewhere who sees an opportunity to take advantage and ruins it for everyone else by doing something like exploiting a small loophole in the law, or in this case, most likely, a prejudice our smartass AI has discovered in some other AI that means it can be taken advantage of by doing x, y, or z. Since nobody quite knows how any of their AIs are making their decisions, because their mindsets are too big and too complex, it will be very hard to identify what is going on. Some really unusual behavior is corrupting the system because some AI is going rogue somewhere somehow, but which one, where, how?

That one brings us back to fake news. It will very soon infect AI systems with their own varieties of fake news. Complex networks of AIs will have many of the same problems we are seeing in human social networks. An AI could become a troll just the same as a human, deliberately winding others up to generate attention or drive a change of some parameter – any parameter – in its own favour. Activist AIs will happen due to people making them to push human activist causes, but they will also emerge all by themselves. Their analysis of the system will sometimes show them that a good way to get a good result is to cause problems elsewhere.

Then there’s climate change, weather, storms, tsunamis. I don’t mean real ones, I mean the system-wide result of tiny interactions of tiny waves and currents of data and knowledge in neural nets. Tiny effects in one small part of a system can interact in unforeseen ways with other parts of other systems nearby, creating maybe a breeze, which interacts with breezes in nearby regions to create hurricanes. I think that’s a reasonable analogy. Chaos applies to neural net societies just as it does to climate, and the equivalents of 50-year waves will cause equivalent havoc in IT.
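The breeze-to-hurricane analogy is really just sensitivity to initial conditions, which the classic logistic map demonstrates in a few lines: two trajectories starting one part in a billion apart end up completely different.

```python
def peak_divergence(r=4.0, x0=0.4, eps=1e-9, steps=80):
    """Iterate the chaotic logistic map x -> r*x*(1-x) for two starting
    points differing by eps, returning the largest separation seen."""
    a, b, peak = x0, x0 + eps, 0.0
    for _ in range(steps):
        a, b = r * a * (1 - a), r * b * (1 - b)
        peak = max(peak, abs(a - b))
    return peak

print(peak_divergence(steps=5))   # still microscopic
print(peak_divergence(steps=80))  # order one: the 'hurricane'
```

A one-part-in-a-billion nudge in one net’s inputs can, in principle, grow into system-scale weather in the connected whole.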

I won’t go on with more examples; long blogs are awful to read. None of these requires any self-awareness, sentience or consciousness, call it what you will. All of these can easily happen through simple interactions of fairly trivial AI deep learning nets. The level of interconnection already sounds like it may be becoming vulnerable to such emergence effects. Soon it definitely will be. Musk and Hawking have at least joined the party and they’ll think more and more deeply in coming months. Zuckerberg apparently doesn’t believe in AI threats but now accepts the problems social media is causing. Sorry Zuck, but the kind of AI your company is messing with will also be subject to its own kinds of social media issues, not just in its trivial decisions on what to post or block, but actual inter-AI socializing issues. It might not try to eliminate humanity, but if it brings all of our IT to a halt and prevents rapid recovery, we’re still screwed.

 

Artificial muscles using folded graphene

Slide1

Folded Graphene Concept

Two years ago I wrote a blog on future hosiery where I very briefly mentioned the idea of using folded graphene as synthetic muscles:

https://timeguide.wordpress.com/2015/11/16/the-future-of-nylon-ladder-free-hosiery/

Although I’ve since mentioned it to dozens of journalists, none have picked up on it, so now that soft robotics and artificial muscles are in the news, I guess it’s about time I wrote it up myself, before someone else claims the idea. I don’t want to see an MIT article about how they have just invented it.

The above pic gives the general idea. Graphene comes in insulating or conductive forms, so it will be possible to make sheets covered with tiny conducting graphene electromagnet coils that can be switched individually to either polarity and generate strong magnetic forces that pull or push as required. That makes it ideal for a synthetic muscle, given the potential scale. With 1.5nm-thick layers that could be anything from sub-micron up to metres wide, this will allow thin fibres and yarns to make muscles or shape change fabrics all the way up to springs or cherry-picker style platforms, using many such structures. Current can be switched on and off or reversed very rapidly, to make continuous forces or vibrations, with frequency response depending on application – engineering can use whatever scales are needed. Natural muscles are limited to 250Hz, but graphene synthetic muscles should be able to go to MHz.

Uses vary from high-rise rescue, through construction and maintenance, to space launch. Since the forces are entirely electromagnetic, they could be switched very rapidly to respond to any buckling, offering high stabilisation.

Slide2

The extreme difference in dimensions between the folded and opened states means that an extremely thin force mat made up of many of these cherry-picker structures could be made to fill almost any space and apply force to it. One application that springs to mind is rescue, such as after earthquakes have caused buildings to collapse. A sheet could quickly apply pressure to prise apart pieces of rubble regardless of size and orientation. It could alternatively be used in systems for rescuing people from tall buildings, in fracking, or in many other applications.

Slide3

It would be possible to make large membranes for a wide variety of purposes that can change shape and thickness at any point, very rapidly.

Slide4

One such use is a ‘jellyfish’, complete with stinging cells that could travel around in even very thin atmospheres all by itself. Upper surfaces could harvest solar power to power compression waves that create thrust. This offers use for space exploration on other planets, but also has uses on Earth of course, from surveillance and power generation, through missile defense systems or self-positioning parachutes that may be used for my other invention, the Pythagoras Sling. That allows a totally rocket-free space launch capability with rapid re-use.

Slide5

Much thinner membranes are also possible, as shown here, especially suited for rapid deployment missile defense systems:

Slide6

Also particularly suited to space exploration on other planets or moons is the worm, often cited for such purposes. This could easily be constructed using folded graphene, and again for rescue or military use, could come with assorted tools or lethal weapons built in.

Slide7

A larger scale cherry-picker style build could make ejector seats, elevation platforms or winches, either pushing or pulling a payload – each has its merits for particular types of application.  Expansion or contraction could be extremely rapid.

Slide8

An extreme form for space launch is the zip-winch, below. With many layers just 1.5nm thick, expanding to 20cm for each such layer, a 1000km winch cable could accelerate a payload rapidly as it compresses to just 7.5mm thick!
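The zip-winch arithmetic checks out, as a quick calculation using the figures from the text shows:

```python
# Zip-winch arithmetic: layers 1.5 nm thick, each opening out to 20 cm.
layer_folded_m = 1.5e-9      # thickness of one folded graphene layer
layer_opened_m = 0.20        # extension per layer when deployed
cable_length_m = 1_000_000   # 1000 km deployed cable

layers = cable_length_m / layer_opened_m
compressed_mm = layers * layer_folded_m * 1000
print(f"{layers:.0f} layers compress to {compressed_mm:.1f} mm")
# → 5000000 layers compress to 7.5 mm
```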

Slide9

Very many more configurations and uses are feasible of course; this blog just gives a few ideas. I’ll finish with a highlight I didn’t have time to draw up yet: small particles could be made housing a short length of folded graphene. Since individual magnets can be addressed and controlled, that enables magnetic powders with particles that can change both their shape and the magnetism of individual coils. Precision magnetic fields are one application, shape-changing magnets another. The most exciting though is that this allows a whole new engineering field, mixing hydraulics with precision magnetics and shape changing. The powder can even create its own chambers, pistons, pumps and so on. Electromagnetic thrusters for ships are already out there, and those same thrust mechanisms could be used to manipulate powder particles too, but this allows for completely dry hydraulics, with particles that can individually behave actively or passively.

Fun!

 

 

Chat-bots will help reduce loneliness, a bit

Amazon is really pushing its Echo and Dot devices at the moment, and some other companies also use Alexa in their own devices. They are starting to gain avatar front ends too. Microsoft has its Cortana transforming into Zo; Apple has Siri’s future under wraps for now. Maybe we’ll see Siri in a sari soon, who knows. Thanks to rapidly developing AI, chatbots and other bots have also made big strides in recent years, so it’s obvious that the two can easily be combined. The new voice control interfaces could become chatbots to offer a degree of companionship. Obviously that isn’t as good as chatting to real people, but many, very many people don’t have that choice. Loneliness is one of the biggest problems of our time. Sometimes people talk to themselves or to their pet cat, and chatting to a bot would at least get a real response some of the time. It goes further than simple interaction though.

I’m not trying to understate the magnitude of the loneliness problem, and chatbots can’t solve it completely of course, but I think they will be a benefit to at least some lonely people in a few ways. Simply having someone to chat to will already be of some help. People will form emotional relationships with bots that they talk to a lot, especially once they have a visual front end such as an avatar. It will help some to develop and practice social skills if that is their problem, and for many others who feel left out of local activity, it might offer real-time advice on what is on locally in the next few days that might appeal to them, based on their conversations. Talking through problems with a bot can also help almost as much as doing so with a human. In ancient times when I was a programmer, I’d often solve a bug by trying to explain how my program worked, and in doing so I would see the bug myself. Explaining it to a teddy bear would have been just as effective; the chat was just a vehicle for checking through the logic from a new angle. The same might apply to interactive conversation with a bot. Sometimes lonely people can talk too much about problems when they finally meet people, and that can act as a deterrent to future encounters, so that barrier would also be reduced. All in all, having a bot might make lonely people more able to get and sustain good quality social interactions with real people, and make friends.

Another benefit that has nothing to do with loneliness is that giving a computer voice instructions forces people to think clearly and phrase their requests correctly, just like writing a short computer program. In a society where so many people don’t seem to think very clearly or even if they can, often can’t express what they want clearly, this will give some much needed training.

Chatbots could also offer challenges to people’s thinking, even to help counter extremism. If people make comments that go against acceptable social attitudes or against known facts, a bot could present the alternative viewpoint, probably more patiently than another human who finds such viewpoints frustrating. I’d hate to see this as a means to police political correctness, though it might well be used in such a way by some providers, but it could improve people’s lack of understanding of even the most basic science, technology, culture or even politics, so has educational value. Even if it doesn’t convert people, it might at least help them to understand their own views more clearly and be better practiced at communicating their arguments.

Chatbots could make a significant contribution to society. They are just machines, but those machines are tools that other people and society as a whole can use to help more effectively.

 

Colour changing cars, everyday objects and makeup

http://www.theverge.com/2016/11/24/13740946/dutch-scientists-use-color-changing-graphene-bubbles-to-create-mechanical-pixels shows how graphene can be used to make displays with each pixel changing colour according to mechanical deformation.

Meanwhile, Lexus have just created a car with a shell covered in LEDs so it can act as a massive display.

http://www.theverge.com/2016/12/5/13846396/lexus-led-lit-is-colors-dua-lipa-vevo

In 2014 I wrote about using polymer LED displays for future Minis so it’s nice to see another prediction come true.

Looking at the mechanical pixels though, it is clear that mechanical pixels could respond directly to sound, or to turbulence of passing air, plus other vibration that arises from motion on a road surface, or the engine. Car panel colours could change all the time powered by ambient energy. Coatings on any solid objects could follow, so people might have plenty of shimmering colours in their everyday environment. Could. Not sure I want it, but they could.

With sound as a control system, sound wave generators at the edges or underneath such surfaces could produce a wide variety of pleasing patterns. We could soon have furniture that does a good impression of being a cuttlefish.

I often get asked about smart makeup, on which I’ve often spoken since the late 90s. Thin film makeup displays could use this same tech. So er, we could have people with makeup pretending to be cuttlefish too. I think I’ll quit while I’m ahead.

Carbethium, a better-than-scifi material

How to build one of these for real:

Light_bridge

Halo light bridge, from halo.wikia.com

Or indeed one of these:

From halo.wikia.com

From halo.wikia.com

I recently tweeted that I had an idea how to make the glowy bridges and shields we’ve seen routinely in sci-fi games from Half Life to Destiny, the bridges that seem to appear in a second or two from nothing across a divide, yet are strong enough to drive tanks over, and able to vanish as quickly and completely when they are switched off. I woke today realizing that, with a bit of work, it could be the basis of a general purpose material to make the tanks too, and buildings and construction platforms, bridges, roads and driverless pod systems, personal shields and city defense domes, force fields, drones, planes and gliders, space elevator bases, clothes, sports tracks, robotics, and of course assorted weapons and weapon systems. The material would only appear as needed and could be fully programmable. It could even be used to render buildings from VR to real life in seconds, enabling at least some holodeck functionality. All of this is feasible by 2050.

Since it would be as ethereal as those Halo structures, I first wanted to call the material ethereum, but that name was already taken (for a 2014 block-chain programming platform, which I note could be used to build the smart ANTS network management system that Chris Winter and I developed in BT in 1993), and this new material would be a programmable construction platform so the names would conflict, and etherium is too close. Ethium might work, but it would be based on graphene and carbon nanotubes, and I am quite into carbon so I chose carbethium.

Ages ago I blogged about plasma as a 21st Century building material. I’m still not certain this is feasible, but it may be, and it doesn’t matter for the purposes of this blog anyway.

https://timeguide.wordpress.com/2013/11/01/will-plasma-be-the-new-glass/

Around then I also blogged how to make free-floating battle drones and more recently how to make a Star Wars light-saber.

https://timeguide.wordpress.com/2013/06/23/free-floating-ai-battle-drone-orbs-or-making-glyph-from-mass-effect/

https://timeguide.wordpress.com/2015/11/25/how-to-make-a-star-wars-light-saber/

Carbethium would use some of the same principles but would add the enormous strength and high conductivity of graphene to provide the physical properties to make a proper construction material. The programmable matter bits and the instant build would use a combination of 3D interlocking plates, linear induction, and magnetic wells. A plane such as a light bridge or a light shield would extend from a node in caterpillar track form with plates added as needed until the structure is complete. By reversing the build process, it could withdraw into the node. Bridges that only exist when they are needed would be good fun and we could have them by 2050 as well as the light shields and the light swords, and light tanks.

The last bit worries me. The ethics of carbethium are the typical mixture of enormous potential good and huge potential for abuse to bring death and destruction that we’re learning to expect of the future.

If we can make free-floating battle drones, tanks, robots, planes and rail-gun plasma weapons all appear within seconds, if we can build military bases and erect shield domes around them within seconds, then warfare moves into a new realm. Those countries that develop this stuff first will have a huge advantage, with the ability to send autonomous robotic armies to defeat enemies with little or no risk to their own people. If developed by a James Bond super-villain on a hidden island, it would even be the sort of thing that would enable a serious bid to take over the world.

But in the words of Professor Emmett Brown, “well, I figured, what the hell?”. 2050 values are not 2016 values. Our value set is already on a random walk, disconnected from any anchor, its future direction indicated by a combination of current momentum and a chaos engine linking to random utterances of arbitrary celebrities on social media. 2050 morality on many issues will be the inverse of today’s, just as today’s is on many issues the inverse of the 1970s’. Whatever you do or however politically correct you might think you are today, you will be an outcast before you get old: https://timeguide.wordpress.com/2015/05/22/morality-inversion-you-will-be-an-outcast-before-youre-old/

We’re already fucked, carbethium just adds some style.

Graphene combines huge tensile strength with enormous electrical conductivity. A plate can be added to the edge of an existing plate and interlocked, I imagine in a hexagonal or triangular mesh. Plates can be designed in many diverse ways to interlock, so that rotating one engages with the next, and reversing the rotation unlocks them. Plates can be pushed to the forward edge by magnetic wells, using linear induction motors, with the graphene itself as the conductor generating the magnetic field and the structure of the graphene threads enabling the linear induction fields. That would likely require that the structure forms first out of graphene threads, then the gaps between them are filled by mesh, and plates added to that to make the structure finally solid. This would happen in thickness as well as width, to make a 3D structure, though a graphene bridge would only need to be dozens of atoms thick.

So a bridge made of graphene could start with a single thread, which could be shot across a gap at hundreds of meters per second. I explained how to make a Spiderman-style silk thrower to do just that in a previous blog:

https://timeguide.wordpress.com/2015/11/12/how-to-make-a-spiderman-style-graphene-silk-thrower-for-emergency-services/

The mesh and 3D build would all follow from that. In theory that could all happen in seconds, the supply of plates and the available power being the primary limiting factors.
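For a feel of the timescale, here is an illustrative estimate. The gap, thread launch speed, mesh infill speed and bridge width are all invented figures for the sake of the sketch, not derived from anything:

```python
gap_m        = 50    # span to cross (assumed)
thread_speed = 300   # m/s, initial graphene thread launch (assumed)
infill_speed = 25    # m/s, edge advance of the plate mesh (assumed)
width_m      = 3     # bridge width (assumed)

thread_time = gap_m / thread_speed    # first thread crosses the gap
infill_time = width_m / infill_speed  # mesh grows out from the thread
total_s = thread_time + infill_time
print(f"thread {thread_time:.2f} s + infill {infill_time:.2f} s "
      f"= about {total_s:.2f} s")
```

Well under a second on those assumptions, so seconds overall seems plausible, provided the plate feed and power can keep up.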

Similarly, a shield or indeed any kind of plate could be made by extending carbon mesh out from the edge or center and infilling. We see that kind of technique used often in sci-fi to generate armor, from Lost in Space to Iron Man.

The key components in carbethium are 3D interlocking plate design and magnetic field design for the linear induction motors. Interlocking via rotation is fairly easy in 2D, any spiral will work, and the 3rd dimension is open to any building block manufacturer. 3D interlocking structures are very diverse and often innovative, and some would be more suited to particular applications than others. As for linear induction motors, a circuit is needed to produce the travelling magnetic well, but that circuit is made of the actual construction material. The front edge link between two wires creates a forward-facing magnetic field to propel the next plates and convey enough inertia to them to enable kinetic interlocks.

So it is feasible, and only needs some engineering. The main barrier is price and material quality. Graphene is still expensive to make, as are carbon nanotubes, so we won’t see bridges made of them just yet. The material quality so far is fine for small scale devices, but not yet for major civil engineering.

However, the field is developing extremely quickly because big companies and investors can clearly see the megabucks at the end of the rainbow. We will almost certainly have large-quantity production of high-quality graphene for civil engineering by 2050.

This field will be fun. Anyone who plays computer games is already familiar with the idea. Light bridges and shields, or light swords, would appear much as in games, but the material would likely be graphene and nanotubes (or maybe the newfangled molybdenum equivalents). They would glow during construction with the plasma generated by the intense electric and magnetic fields, and the glow would be needed afterwards to make these ultra-thin physical barriers clearly visible, but they might become highly transparent otherwise.

Assembling structures as they are needed and disassembling them just as easily will be very resource-friendly, though it is unlikely that carbon will be in short supply. We can just use some oil or coal to get more if needed, or process some CO2. The walls of a building could be grown from the ground up at hundreds of meters per second in theory, with floors growing almost as fast, though there should be little need to do so in practice, apart from pushing space vehicles up so high that they need little fuel to enter orbit. Nevertheless, growing a building and then even growing the internal structures and furniture is feasible, all using glowy carbethium. Electronic soft fabrics, cushions and hard surfaces and support structures are all possible by combining carbon nanotubes and graphene and using the reconfigurable matter properties carbethium conveys. So are visual interfaces, electronic windows, electronic wallpaper, electronic carpet, computers, storage, heating, lighting, energy storage and even solar power panels. So is all the comms and IoT and all the smart embedded control systems you could ever want. So you’d use a computer with a VR interface to design whatever kind of building and interior furniture decor you want, and then when you hit the big red button, it would appear in front of your eyes from the carbethium blocks you had delivered. You could also build robots using the same self-assembly approach.

If these structures can assemble fast enough, and I think they could, then a new form of kinetic architecture would appear. This would use the momentum of the construction material to drive the front edges of the surfaces, kinetic assembly allowing otherwise impossible and elaborate arches to be made.

A city transport infrastructure could be built entirely out of carbethium. The linear induction mats could grow along a road, connecting quickly to make a whole city grid. Circuit design allows the infrastructure to steer driverless pods wherever they need to go, and they could also be assembled as required using carbethium. No parking or storage is needed, as the pod would just melt away onto the surface when it isn’t needed.

I could go to town on military and terrorist applications, but more interesting is the use of defense domes. When I was a kid, I imagined having a house with a defense dome over it. Lots of sci-fi has them now too. Domes have a strong appeal, even though they could also be used as prisons of course. A supply of carbethium on the city edges could be used to grow a strong dome in minutes or even seconds, and there is no practical limit to how strong it could be. Even if lasers were used to penetrate it, the holes could fill in again in real time, replacing material as fast as it is evaporated away.

Anyway, lots of fun. Today’s civil engineering projects like HS2 look more and more primitive by the day, as we finally start to see the true potential of genuinely 21st century construction materials. 2050 is not too early to expect widespread use of carbethium. It won’t be called that – whoever commercializes it first will name it, or Google or MIT will claim to have just invented it in a decade or so, so my own name for it will be lost to personal history. But remember, you saw it here first.

The future of vacuum cleaners

Dyson seems pretty good in vacuum cleaners and may well have tried this and found it doesn’t work, but then again, sometimes people in an industry can’t see the woods for the trees so who knows, there may be something in this:

Our new pet cat Jess loves to pick up soft balls with a claw, throw them, and catch them again. Retractable claws are very effective.

Jess the cat

At a smaller scale, velcro uses tiny little hooks to stick together, copying burs from nature.

Suppose you make a tiny little ball that has even tinier little retractable spines or even better, hooks. And suppose you make them by the trillion and make a powder that your vacuum cleaner attachment first sprinkles onto a carpet, then agitates furiously and quickly, and thus gets the hooks to stick to dirt, pull it off the surface and retract (so that the balls don’t stick to the carpet) and then you suck the whole lot into the machine. Since the balls have a certain defined specification, they are easy to separate from the dirt and dust and reuse again straight away. So you get superior cleaning. Some of the balls would be lost each time, and some would get sucked up next time, but overall you’d need to periodically top up the reservoir.

The current approach is to beat the hell out of the carpet fibers with a spinning brush and that works fine, but I think adding the active powder might be better because it gets right in among the dirt and drags it kicking and screaming off the fibers.

So, ball design. Firstly, it doesn’t need to be ball shaped at all, and secondly it doesn’t need spines really, just to be able to rapidly change its shape so it gets some sort of temporary traction on a dirt particle to knock it off. What we need here is any structure that expands and contracts or dramatically changes shape when a force is applied, ideally resonantly. Two or three particles connected by a tether would move back and forwards under an oscillating electrostatic, electrical or magnetic field or even an acoustic wave. There are billions of ways of doing that and some would be cheaper than others to manufacture in large quantity. Chemists are brilliant at designing custom molecules with particular shapes, and biology has done that with zillions of enzymes too. Our balls would be pretty small but more micro-tech than nano-tech or molecular tech.
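The ‘ideally resonantly’ point is the heart of it: a driven damped oscillator (the tethered particle pair) responds far more strongly at its natural frequency than away from it. A minimal sketch with entirely invented numbers (50 kHz natural frequency, quality factor of 20):

```python
import math

def amplitude(drive_hz, natural_hz=50_000.0, q=20.0):
    """Steady-state amplitude of a driven damped harmonic oscillator,
    per unit drive force per unit mass."""
    w, w0 = 2 * math.pi * drive_hz, 2 * math.pi * natural_hz
    gamma = w0 / q   # damping rate implied by the quality factor
    return 1.0 / math.sqrt((w0**2 - w**2)**2 + (gamma * w)**2)

gain = amplitude(50_000) / amplitude(5_000)
print(f"driving on resonance shakes ~{gain:.0f}x harder than off it")
```

So a cheap, weak EM or acoustic drive tuned to the particles’ resonance could still thrash them hard enough to knock dirt loose.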

The vacuum cleaner attachment would thus spray this stuff onto the carpet and start resonating it with an EM field or sound waves. The little particles would thrash around wildly doing their micro-cleaning, yanking dirt free, and then they would be sucked back into the cleaner to be used again. The cleaner head doesn’t even need a spinning brush; the only moving parts would be the powder particles, though having an agitating brush might help get them deeper into the fabric I guess.

 

The future of nylon: ladder-free hosiery

Last week I outlined the design for a 3D printer that can print and project graphene filaments at 100m/s. That was designed to be worn on the wrist like Spiderman’s, but an industrial version could print faster. When I checked a few of the figures, I discovered that the spinnerets for making nylon stockings run at around the same speed. That means that graphene stockings could be made at around the same speed. My print head produced 140 denier graphene yarn but it made that from many finer filaments so basically any yarn thickness from a dozen carbon atoms right up to 140 denier would be feasible.

The huge difference is that a 140 denier graphene thread is strong enough to support a man at 2g acceleration. 10 denier stockings are made from yarn that breaks quite easily, but unless I’ve gone badly wrong on the back of my envelope, 10 denier graphene would have roughly 10kg (22 lb) breaking strain. That’s 150 times stronger than nylon yarn of the same thickness.
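That envelope can be sanity-checked. Denier is grams per 9000 m of yarn; taking ideal-graphene figures of roughly 130 GPa tensile strength and 2270 kg/m³ density (both assumptions about what a real yarn could achieve), the calculation lands in the same ballpark:

```python
G = 9.81  # m/s^2, to convert newtons to kg-force

def breaking_strain_kg(denier, strength_pa=130e9, density=2270.0):
    """Breaking strain (kg-force) of a yarn of the given denier,
    assuming ideal graphene strength and density."""
    linear_density = denier * 1e-3 / 9000     # kg per metre of yarn
    cross_section = linear_density / density  # m^2
    return strength_pa * cross_section / G    # kg-force

print(f"10 denier: ~{breaking_strain_kg(10):.1f} kg breaking strain")
print(f"140 denier: ~{breaking_strain_kg(140):.0f} kg")
```

Around 6 to 7 kg for an ideal 10 denier thread, the same order as the rough 10 kg above, and about 90 kg at 140 denier, consistent with supporting a person.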

If so, then that would mean that a graphene stocking would have incredible strength. A pair of 10 denier graphene stockings or tights (pantyhose) might last for years without laddering. That might not be good news for the nylon stocking industry, but I feel confident they would adapt easily to such potential.

Alternatively, much finer yarns could be made that would still have reasonable ladder resistance, so that would also affect the visual appearance and texture. They could be made so fine that the fibers are invisible even up close. People might not always want that, but the key message is that wear-resistant, ladder free hosiery could be made that has any gauge from 0.1 denier to 140 denier.

There is also a bonus: graphene is a superb conductor. That means that graphene fibers could be woven into nylon hosiery to add circuits. Those circuits might harvest radio energy, act as an aerial, power LEDs in the hosiery or change its colors or patterns. So even if graphene isn’t used for the whole garment, it might still have important uses in the garment as an addition to the weave.

There is yet another bonus. Graphene circuits could allow electrical supply to shape changing polymers that act rather like muscles, contracting when a voltage is applied across them, so that a future pair of tights could shape a leg far better, with tensions and pressures electronically adjusted over the leg to create the perfect shape. Graphene can make electronic muscles directly too, but in a more complex mechanism (e.g. using magnetic field generation and interaction, or capacitors and electrical attraction/repulsion).