Category Archives: biomimetics

Optical computing

A few nights ago I was thinking about the optical fibre memories that we were designing in the late 1980s in BT. The idea was simple. You transmit data into an optical fibre, and if the data rate is high you can squeeze lots of data into a manageable length. Light propagates along fibre with a delay of about 5 microseconds per km, so 1000km of fibre at a data rate of 2Gb/s would hold 10Mbits of data per wavelength, and if you could multiplex 2 million wavelengths, you’d store 20Tbits of data. You could maintain the data by using a repeater to re-transmit the data back into the fibre as it reaches one end, or modify it at that point simply by changing what you re-transmit. That was all theory then, because the latest ‘hero’ experiments were only just starting to demonstrate the feasibility of such long lengths, such high density WDM and such data rates.
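If you want to check the sums, here’s a quick back-of-envelope script, a minimal sketch using just the figures quoted above:

```python
# Sanity check of the fibre delay-line memory arithmetic above.
delay_per_km = 5e-6      # propagation delay in fibre, seconds per km
fibre_km = 1000
bitrate = 2e9            # bits per second, per wavelength
wavelengths = 2e6        # WDM channels

loop_delay = delay_per_km * fibre_km          # 5 ms for the whole loop
bits_per_wavelength = bitrate * loop_delay    # bits 'in flight' at once
total_bits = bits_per_wavelength * wavelengths

print(f"{bits_per_wavelength / 1e6:.0f} Mbit per wavelength")  # 10 Mbit
print(f"{total_bits / 1e12:.0f} Tbit in total")                # 20 Tbit
```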

Nowadays, that’s ancient history of course, but we also have many new types of fibre, such as hollow fibre with variously shaped pores and various dopings to allow a range of effects. And that’s where using it for computing comes in.

If optical fibre is designed for this purpose, with an optimal variable refractive index designed to facilitate and maximise non-linear effects, then the photons in one data stream on one wavelength could have enough effect on photons in another stream to be used for computational interaction. Computers don’t have to be digital of course, so the effects don’t have to be huge. Analog computing has many uses, and analog interactions could certainly work; digital ones might work too, and hybrid digital/analog computing may also be feasible. Then it gets fun!

Some of the data streams could be programs. Around that time, I was designing protocols with smart packets that contained executable code, as well as other packets that could hold analog or digital data or any mix. We later called the smart packets ANTs – autonomous network telephers, a contrived term if ever there was one, but we badly wanted to call them ANTs. They would scurry around the network doing a wide range of jobs, using a range of biomimetic and basic physics techniques to work like ant colonies and achieve complex tasks using simple means.

If some of these smart packets or ANTs are running along a fibre, changing its properties as they go so as to interact with other data being transmitted alongside, then ANTs can interact with one another and with any stored data. ANTs could also move forwards or backwards along the fibre by using ‘sidings’ or physical shortcuts, since they can route themselves or each other. Data produced or changed by the interactions could be digital or analog and still work fine, carried on the smart packet structure.

(If you’re interested, my protocol was called UNICORN, Universal Carrier for an Optical Residential Network, and used the same architectural principles as my previous Addressed Time Slice invention: compressing analog data by a few percent to fit into a packet with a digital address and header, or allowing any digital data rate or structure in a payload while keeping the same header specs for easy routing. That system was invented (in 1988) for the late 1990s, when the basic domestic broadband rate should have been 625Mbit/s or more, and we expected to be at 2Gbit/s or even 20Gbit/s soon after that in the early 2000s. The benefit was that we wouldn’t have to change the network switching, because the header overheads would still only be a few percent of total time. None of that happened, because government interference in telecoms industry regulation strongly disincentivised its development, and even today 625Mbit/s ‘basic rate’ access is still a dream, let alone 20Gbit/s.)
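To make the framing idea concrete, here is a hypothetical sketch; the field names and sizes are my illustration, not the actual 1988 UNICORN spec:

```python
from dataclasses import dataclass

@dataclass
class TimeSlicePacket:
    address: int       # fixed-format digital address, for easy routing
    header: bytes      # same header spec regardless of payload contents
    payload: bytes     # analog samples time-compressed by a few percent,
                       # or digital data at any rate or structure

    def header_overhead(self) -> float:
        # The design goal: header overhead stays at a few percent of
        # total slot time however fast the payload rate gets.
        header_bytes = len(self.header) + 4   # assume a 4-byte address
        return header_bytes / (header_bytes + len(self.payload))

pkt = TimeSlicePacket(address=42, header=b"\x00" * 4, payload=b"\x00" * 1024)
print(f"overhead: {pkt.header_overhead():.1%}")   # ~0.8%, a few percent
```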

Such a system would be feasible. Shortcuts and sidings are easy to arrange. The protocols would work fine. Non-linear effects are already well known and diverse. If it were only used for digital computing, it would have little advantage over conventional computers, and with data stored on long fibre lengths, external interactions would be limited, with long latency. However, it does present a range of potentials for use with external sensors directly interacting with data streams and ANTs to accomplish some tasks associated with modern AI. It ought to be possible to use these techniques to build the adaptive analog neural networks that we’ve known are the best hope of achieving strong AI since Hans Moravec’s insight, coincidentally also around that time. The non-linear effects even enable ideal mechanisms for implementing emotions, biasing the computation in particular directions via the intensity of certain wavelengths of light, in much the same way as chemical hormones and neurotransmitters interact with our own neurons. Implementing up to 2 million different emotions at once is feasible.

So there’s a whole mineful of architectures, tools and techniques waiting to be explored and mined by smart young minds in the IT industry, using custom non-linear optical fibres for optical AI.

Spiders in Space

A while back I read an interesting article about how small spiders get into the air to disperse, even when there is no wind:

Spiders go ballooning on electric fields: https://phys.org/news/2018-07-spiders-ballooning-electric-fields.html

If you don’t want to read it, the key point is that they use the electric fields in the air to provide enough force to drag them into the air. It gave me an idea. Why not use that same technique to get into space?

There is electric air potential right up to the very top of the atmosphere, but electric fields permeate space too. The field only provides a weak force, but it is enough to lift a 25mg spider using the electrostatic force on a few threads from its spinnerets.

25mg isn’t very heavy, but then the threads are only designed to lift the spider. Longer threads could generate higher forces, and lots of longer threads working together could generate significant forces. I’m not thinking of using this to launch space ships though. All I want for this purpose is to lift a few grams and that sounds feasible.

If we can arrange for a synthetic ‘cyber-spider’ to eject long graphene threads in the right directions, and to wind them back in when appropriate, our cyber-spider could harness these electric forces to crawl slowly into space, and then maintain altitude. It won’t need to stay in exactly the same place, but could simply use the changing fields and forces to stay within a reasonably small region. It won’t have used any fuel or rockets to get there or stay there, but now it is in space, even if it isn’t very high, it could be quite useful, even though it is only a few grams in weight.

Suppose our invisibly small cyber-spider sits near the orbit of a particular piece of space junk. The space junk moves fast, and may well be much larger than our spider in terms of mass, but if a few threads of graphene silk were in its path, our spider could effectively ensnare it, causing an immediate drop in speed due to Newtonian sharing of momentum (the spider has to be accelerated from stationary to the same speed as the junk, so even though it is much lighter, that would still cause a significant drop in the junk’s speed), and then use its threads as a mechanism for electromagnetic drag, causing it to slowly lose more speed and fall out of orbit. That might compete well as a cheap mechanism for cleaning up space junk.
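A rough feel for the momentum sharing, treating the snag as a perfectly inelastic collision (the masses and speed here are my illustrative assumptions):

```python
# Stationary spider snags junk; momentum is conserved:
# m_junk * v_junk = (m_junk + m_spider) * v_after
m_spider = 0.005    # kg, a few grams
m_junk = 0.100      # kg, an illustrative 100 g fragment
v_junk = 7800.0     # m/s, roughly low Earth orbit speed

v_after = (m_junk * v_junk) / (m_junk + m_spider)
print(f"junk slows by {v_junk - v_after:.0f} m/s")   # ~371 m/s
```

A few percent of orbital speed lost in one snag is already enough to change the orbit substantially, before the electromagnetic drag from the threads takes over.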

Some organic spiders can kill a man with a single bite, and space spiders could do much the same, albeit via a somewhat different process. Instead of junk, our spider could meander into collision course with an astronaut doing a space walk. A few grams isn’t much, but a stationary cyber-spider placed in the way of a rapidly moving human would have much the same effect as a very high speed rifle shot.

The target could just as easily be a satellite. The spider’s location could be picked so that it impacts on a particular part of the satellite to do the most damage, or to cause many fragments, and if enough fragments are created – well, we’ve all watched Gravity and know what high speed fragments of destroyed satellites can do.

The spider doesn’t even need to get itself into a precise position. If it has many threads going off in various directions, it can quickly withdraw some of them to create a Newtonian reaction that moves its center of mass quickly into a path. It might sit many meters away from the desired impact position, waiting until the last second to jump in front of the astronaut, satellite or space junk.

What concerns me with this is that the weapon potential lends itself to low-budget garden-shed outfits such as lone terrorists. It wouldn’t need rockets or massively expensive equipment. It wouldn’t need rapid deployment either, since, being invisible, it could migrate to its required location over days, weeks or months. A large number of them could be invisibly deployed from a back garden, ready for use at any time, waiting for the command before simultaneously wiping out hundreds of satellites. It only needs a very small amount of IT attached to some sort of filament spinneret. A few years ago I worked out how to spin graphene filaments at 100m/s:

https://carbondevices.com/2015/11/13/spiderman-style-silk-thrower/

If I can do it, others can too, and there are probably many ways to do this other than mine.

If you aren’t SpiderMan, and can accept lower specs, you could make a basic graphene silk thrower and associated IT that fits in the few grams weight budget.

There are many ways to cause havoc in space. Spiders have been sci-fi horror material for decades. Soon space spiders could be quite real.


How can we make a computer conscious?

This is very text heavy and is really just me thinking out loud, so to speak. Unless you are into mental archaeology or masochism, I’d strongly recommend that you instead go to my new blog on this, which outlines all of the useful bits graphically and simply.

Otherwise….

I found this article in my drafts folder, written 3 years ago as part of my short series on making conscious computers. I thought I’d published it but didn’t. So updating and publishing it now. It’s a bit long-winded, thinking out loud, trying to derive some insights from nature on how to make conscious machines. The good news is that actual AI developments are following paths that lead in much the same direction, though some significant re-routing and new architectural features are needed if they are to optimize AI and achieve machine consciousness.

Let’s start with the problem. Today’s AI that plays chess, does web searches or answers questions is digital. It uses algorithms, sets of instructions that the computer follows one by one. All of those are reduced to simple binary actions, toggling bits between 1 and 0. The processor doing that is no more conscious or aware of it, and has no more understanding of what it is doing than an abacus knows it is doing sums. The intelligence is in the mind producing the clever algorithms that interpret the current 1s and 0s and change them in the right way. The algorithms are written down, albeit in more 1s and 0s in a memory chip, but are essentially still just text, only as smart and aware as a piece of paper with writing on it. The answer is computed, transmitted, stored, retrieved, displayed, but at no point does the computer sense that it is doing any of those things. It really is just an advanced abacus. An abacus is digital too (an analog equivalent to an abacus is a slide rule).

A big question springs to mind: can a digital computer ever be any more than an advanced abacus? Until recently, I was certain the answer was no. Surely a digital computer that just runs programs can never be conscious? It can simulate consciousness to some degree; it can in principle describe the movements of every particle in a conscious brain, every electric current, every chemical reaction. But all it is doing is describing them. It is still just an abacus. Once computed, that simulation of consciousness could be printed out, and the printout would be just as conscious as the computer was. A digital ‘stored program’ computer can certainly implement extremely useful AI. With the right algorithms, it can mine data, link things together, create new data from that, generate new ideas by linking together things that haven’t been linked before, make works of art and poetry, compose music, chat to people, recognize faces and emotions and gestures. It might even be able to converse about life, the universe and everything, tell you its history, discuss its hopes for the future, but all of that is just a thin gloss on an abacus. I wrote a chat-bot on my Sinclair ZX Spectrum in 1983, running on a processor with about 8,000 transistors. The chat-bot took all of about 5 small pages of code but could hold a short conversation quite well if you knew what subjects to stick to. It’s very easy to simulate conversation. But it is still just a complicated abacus and still doesn’t even know it is doing anything.

However clever the AI it implements, a conventional digital computer that just executes algorithms can’t become conscious, but an analog computer can, a quantum computer can, and so can a hybrid digital/analog/quantum computer. The question remains whether a digital computer can be conscious if it isn’t just running stored programs. Could it have a different structure, but still be digital and yet be conscious? Who knows? Not me. I used to know it couldn’t, but now that I am a lot older and slightly wiser, I know I don’t know.

Consciousness debate often starts with what we know to be conscious, the human brain. It isn’t a digital computer, although it has digital processes running in it. It also runs a lot of analog processes. It may also run some quantum processes that are significant in consciousness. It is a conscious hybrid of digital, analog and possibly quantum computing. Consciousness evolved in nature, therefore it can be evolved in a lab. It may be difficult and time consuming, and may even be beyond current human understanding, but it is possible. Nature didn’t use magic, and what nature did can be replicated and probably even improved on. Evolutionary AI development may have hit hard times, but that only shows that the techniques used by the engineers doing it didn’t work on that occasion, not that other techniques can’t work. Around 2.6 new human-level fully conscious brains are made by nature every second without using any magic and furthermore, they are all slightly different. There are 7.6 billion slightly different implementations of human-level consciousness that work and all of those resulted from evolution. That’s enough of an existence proof and a technique-plausibility-proof for me.

Sensors evolved in nature pretty early on. They aren’t necessary for life, for organisms to move around and grow and reproduce, but they are very helpful. Over time, simple light, heat, chemical or touch detectors evolved further into simple vision and advanced sensations such as pain and pleasure, causing an organism to alter its behavior – in other words, to feel something. Detection of an input is not the same as sensation, i.e. feeling an input. Once detection upgrades to sensation, you have the tools to make consciousness. No more upgrades are needed. Sensing that you are sensing something is quite enough to be classified as consciousness. Internally reusing the same basic structure as external sensing of light or heat or pressure or chemical gradient or whatever allows design of thought, planning, memory, learning and the construction and processing of concepts. All those things are just laying out components in different architectures. Getting from detection to sensation is the hard bit.

So the design of conscious machines, and in fact what AI researchers call the hard problem, really can be reduced to the question of what makes the difference between a light switch and something that can feel being pushed or feel the current flowing when it is; the difference between a photocell and feeling whether it is light or dark; the difference between detecting light frequency, looking it up in a database, then pronouncing that it is red, and experiencing redness. That is the hard problem of AI. Once that is solved, we will very soon afterwards have a fully conscious, self-aware AI. There are lots of options available, so let’s look at each in turn to extract any insights.

The first stage is easy enough. Detecting presence is easy, measuring it is harder. A detector detects something, a sensor (in its everyday engineering meaning) quantifies it to some degree. A component in an organism might fire if it detects something, it might fire with a stronger signal or more frequently if it detects more of it, so it would appear to be easy to evolve from detection to sensing in nature, and it is certainly easy to replicate sensing with technology.
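To make the distinction concrete, a minimal sketch (the names and numbers are mine): a detector reports that something happened, a sensor quantifies how much, for example by firing more frequently for a stronger input, as just described:

```python
def detector(level: float, threshold: float = 0.5) -> bool:
    # Detection: a binary event, present or not
    return level > threshold

def sensor_rate(level: float, max_rate_hz: float = 200.0) -> float:
    # Sensing: a graded response, here a firing rate that scales
    # with the intensity of the input
    return max_rate_hz * min(max(level, 0.0), 1.0)

print(detector(0.7))      # True
print(sensor_rate(0.7))   # 140.0
```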

Essentially, detection is digital, but sensing is usually analog, even though the quantity sensed might later be digitized. Sensing normally uses real numbers, while detection uses natural numbers (real vs integer, as programmers call them). The handling of analog signals in their raw form allows for biomimetic feedback loops, which I’ll argue are essential. Digitizing them introduces a level of abstraction that is essentially the difference between emulation and simulation, the difference between doing something and reading about someone doing it. Simulation can’t make a conscious machine; emulation can. I used to think that meant digital couldn’t become conscious, but actually it is just algorithmic processing of stored programs that can’t do it. There may be ways of achieving consciousness digitally, or quantumly, but I haven’t yet thought of any.

That engineering description falls far short of what we mean by sensation in human terms. How does that machine-style sensing become what we call a sensation? Logical reasoning suggests that only a small change would have been needed to evolve from detection to sensing in nature. Maybe something like recombining groups of components in different structures, or adding them together, or adding one or two new ones, that sort of thing?

So what about detecting detection? Or sensing detection? Those could evolve in sequence quite easily. Detecting detection is like your alarm system control unit detecting the change of state that indicates that a PIR has detected an intruder, a different voltage or resistance on a line, or a 1 or a 0 in a memory store. An extremely simple AI responds by ringing an alarm. But the alarm system doesn’t feel the intruder, does it?  It is just a digital response to a digital input. No good.

How about sensing detection? How do you sense a 1 or a 0? Analog interpretation and quantification of digital states is very wasteful of resources, an evolutionary dead end. It isn’t any more useful than detection of detection. So we can eliminate that.

OK, sensing of sensing? Detection of sensing? They look promising. Let’s run with that a bit. In fact, I am convinced the solution lies in here so I’ll look till I find it.

Let’s do a thought experiment on designing a conscious microphone, and for this purpose, the lowest possible order of consciousness will do, we can add architecture and complexity and structures once we have some bricks. We don’t particularly want to copy nature, but are free to steal ideas and add our own where it suits.

A normal microphone sensor produces an analog signal quantifying the frequencies and intensities of the sounds it is exposed to, and that signal may later be quantized and digitized by an analog-to-digital converter, possibly after passing through some circuits such as filters or amplifiers in between. Such a device isn’t conscious yet. By sensing the signal produced by the microphone, we’d just be repeating the sensing process on a transmuted signal, not sensing the sensing itself.

Even up close, detecting that the microphone is sensing something could be done by just watching a little LED going on when current flows. Sensing it is harder but if we define it in conventional engineering terms, it could still be just monitoring a needle moving as the volume changes. That is obviously not enough, it’s not conscious, it isn’t feeling it, there’s no awareness there, no ‘sensation’. Even at this primitive level, if we want a conscious mic, we surely need to get in closer, into the physics of the sensing. Measuring the changing resistance between carbon particles or speed of a membrane moving backwards and forwards would just be replicating the sensing, adding an extra sensing stage in series, not sensing the sensing, so it needs to be different from that sort of thing. There must surely need to be a secondary change or activity in the sensing mechanism itself that senses the sensing of the original signal.

That’s a pretty open task, and it could even be embedded in the detecting process or in the production process for the output signal. But even recognizing that we need this extra property narrows the search. It must be a parallel or embedded mechanism, not one in series. The same logical structure would do fine for this secondary sensing, since it is just sensing in the same logical way as the original, and this essential logical symmetry would make its evolution easy too. It is easy to imagine how that could happen in nature, and easier still to see how it could be implemented in a synthetic evolution design system. In this approach, we have to feel the sensing, so we need it to comprise some sort of feedback loop with a high degree of symmetry compared with the main sensing stage. That would be compatible with natural evolution as well as logically sound as an engineering approach.

This starts to look like progress. In fact, it’s already starting to look a lot like a deep neural network, with one huge difference: instead of using feed-forward signal paths for analysis and backward propagation for training, it relies on a symmetric feedback mechanism where part of the input for each stage of sensing comes from its own internal and output signals. A neuron is not a full sensor in its own right, so it’s reasonable to assume that multiple neurons would be clustered to form each feedback loop. Many in the neural network AI community are already recognizing the limits of relying on feed-forward and back-prop architectures, but web searches suggest few if any are moving yet to symmetric feedback approaches. I think they should. There’s gold in them there hills!

So, the architecture of the notional sensor array required for our little conscious microphone would have a parallel circuit and feedback loop (possibly but not necessarily integrated), and in all likelihood these parallel and sensing circuits would be heavily symmetrical, i.e. they would use pretty much the same sorts of components and architectures as the sensing process itself. That symmetry is a nice first-principles biomimetic insight: because the sensation circuit is of similar design to the primary sensing circuit, the structure would be easy to evolve, naturally or synthetically. It reuses similarly structured components and principles already designed, just recombining a couple of them in a slightly different architecture.

Another useful insight screams for attention too. The feedback loop ensures that the incoming sensation lingers to some degree. Compared to the nanoseconds we are used to in normal IT, the signals in nature travel fairly slowly (~200m/s), and the processing and sensing occur quite slowly (~200Hz). That means this system would have some inbuilt memory that repeats the essence of the sensation in real time – while it is sensing it. It is inherently capable of memory and recall and leaves the door wide open to introduce real-time interaction between memory and incoming signal. It’s not perfect yet, but it has all the boxes ticked to be a prime contender to build thought, concepts, store and recall memories, and in all likelihood, is a potential building brick for higher level consciousness. Throw in recent technology developments such as memristors and it starts to look like we have a very promising toolkit to start building primitive consciousness, and we’re already seeing some AI researchers going that path so maybe we’re not far from the goal. So, we make a deep neural net with nice feedback from output (of the sensing system, which to clarify would be a cluster of neurons, not a single neuron) to input at every stage (and between stages) so that inputs can be detected and sensed, while the input and output signals are stored and repeated into the inputs in real time as the signals are being processed. Throw in some synthetic neurotransmitters to dampen the feedback and prevent overflow and we’re looking at a system that can feel it is feeling something and perceive what it is feeling in real time.
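Here’s a toy sketch of that kind of stage, my illustration rather than a tested design: each sensing stage receives the external signal plus a damped copy of its own recent output, so the sensation lingers and the stage is, loosely speaking, sensing its own sensing:

```python
import numpy as np

class FeedbackStage:
    """One sensing stage whose output feeds back into its own input."""
    def __init__(self, n_in, n_out, feedback_gain=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.normal(0, 0.1, (n_out, n_in))    # sensing path
        self.w_fb = rng.normal(0, 0.1, (n_out, n_out))   # symmetric feedback path
        self.out = np.zeros(n_out)                       # lingering 'memory'
        self.gain = feedback_gain   # damping, the 'neurotransmitter' role

    def step(self, x):
        drive = self.w_in @ x + self.gain * (self.w_fb @ self.out)
        self.out = np.tanh(drive)   # squashing keeps the loop from overflowing
        return self.out

stage = FeedbackStage(n_in=4, n_out=8)
for t in range(5):
    y = stage.step(np.ones(4))   # constant input, yet the response keeps evolving
print(np.round(y, 3))
```

Stages like this could then be chained, with feedback between stages as well as within them, which is the architecture the paragraph above describes.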

One further insight that immediately jumps out is since the sensing relies on the real time processing of the sensations and feedbacks, the speed of signal propagation, storage, processing and repetition timeframes must all be compatible. If it is all speeded up a million fold, it might still work fine, but if signals travel too slowly or processing is too fast relative to other factors, it won’t work. It will still get a computational result absolutely fine, but it won’t know that it has, it won’t be able to feel it. Therefore… since we have a factor of a million for signal speed (speed of light compared to nerve signal propagation speed), 50 million for switching speed, and a factor of 50 for effective neuron size (though the sensing system units would be multiple neuron clusters), we could make a conscious machine that could think at 50 million times as fast as a natural system (before allowing for any parallel processing of course). But with architectural variations too, we’d need to tune those performance metrics to make it work at all and making physically larger nets would require either tuning speeds down or sacrificing connectivity-related intelligence. An evolutionary design system could easily do that for us.
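Multiplying out the quoted factors, with baseline numbers that are my assumptions rather than anything rigorous:

```python
nerve_speed, light_speed = 200.0, 3.0e8    # m/s: nerve signals vs light
neuron_rate, gate_rate = 200.0, 1.0e10     # Hz: neurons vs ~10 GHz gates

print(f"signal speed ratio:   {light_speed / nerve_speed:.1e}")  # ~1.5e6
print(f"switching rate ratio: {gate_rate / neuron_rate:.1e}")    # 5.0e7
```

The ~5e7 switching ratio is the ‘50 million times as fast’ headline figure, with the caveat given above that all the timescales have to be re-tuned together for the feedback loops to still work.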

What else can we deduce about the nature of this circuit from basic principles? The symmetry of the system demands that the output must be an inverse transform of the input. Why? Well, because the parallel, feedback circuit must generate a form that is self-consistent. We can’t deduce the form of the transform from that, just that the whole system must produce an output mathematically similar to that of the input.

I now need to write another blog on how to use such circuits in neural vortexes to generate knowledge, concepts, emotions and thinking. But I’m quite pleased that it does seem that some first-principles analysis of natural evolution already gives us some pretty good clues on how to make a conscious computer. I am optimistic that current research is going the right way and only needs relatively small course corrections to achieve consciousness.


AIs of a feather flocking together to create global instability

Hawking and Musk have created a lot of media impact with their warnings about AI, so although terminator scenarios resulting from machine consciousness have been discussed, as have more mundane uses of non-conscious autonomous weapon systems, it’s worth noting that I haven’t yet heard them mention one major category of AI risk – emergence. AI risks have been discussed frequently since the 1970s, and in the 1990s a lot of work was done in the AI community on emergence. Complex emergent patterns of behavior often result from interactions between entities driven by simple algorithms. Genetic algorithms were demonstrated to produce evolution, simple neighbor-interaction rules were derived that illustrate flocking behaviors and make lovely screen saver effects, and cellular automata were played with. In BT we invented ways of self-organizing networks and FPGAs, played with mechanisms that could be used for evolution and consciousness, and demonstrated managing networks via ANTs – autonomous network telephers – smart packets that would run up and down wires sorting things out all by themselves. In 1987 I discovered a whole class of ways of bringing down networks via network resonance, information waves and their much larger class of correlated traffic – still unexploited by hackers apart from simple DoS attacks. These ideas have slowly evolved since, and some have made it into industry or hacker toolkits, but we don’t seem to be joining the dots as far as risks go.

I read an amusing article this morning by an ex-motoring-editor who was declined insurance because the AI systems used by insurance companies had labelled him as high risk, possibly because he may be associated with people like Clarkson. Actually, he had no idea why, but that was his broker’s theory of how it might have happened. It’s a good article, well written, and covers quite a few of the dangers of allowing computers to take control.

http://www.dailymail.co.uk/sciencetech/article-5310031/Evidence-robots-acquiring-racial-class-prejudices.html

The article suggested how AIs in different companies might all come to similar conclusions about people or places or trends or patterns, in a nice tidy positive feedback loop. That’s exactly the sort of thing that can drive information waves, which I demonstrated in 1987 can bring down an entire network in less than 3 milliseconds, in such a way that it would continue to crash many times when restarted. That isn’t intended by the algorithms, which individually ought to make good decisions, but when they interact with one another, they create the emergent phenomenon. Automated dealing systems are already pretty well understood in this regard, and mechanisms exist to prevent frequent stock market collapses, but that is only one specific type of behavior in one industry that is protected. There do not seem to be any industry-wide mechanisms to prevent the rest of this infinite class of problems from affecting any or all of the rest, simultaneously.

As we create ever more deep learning neural networks, that essentially teach themselves from huge data pools, human understanding of their ‘mindsets’ decreases. They make decisions using algorithms that are understood at a code level, but the massive matrix of derived knowledge they create from all the data they receive becomes highly opaque. Often, even usually, nobody quite knows how a decision is made. That’s bad enough in a standalone system, but when many such systems are connected, produced and owned and run by diverse companies with diverse thinking, the scope for destructive forms of emergence increases geometrically.

One result could be gridlock. Systems fed with a single new piece of data could crash. My 3 millisecond result from 1987 would still stand, since network latency is the prime limiter. The first AI receives the data, alters its mindset accordingly, processes it, makes a decision and interacts with a second AI. This second one might have a different ‘prejudice’, so makes its own decision based on different criteria and refuses to respond the way intended. A third one looks at the second’s decision and takes that as evidence that there might be an issue, and with its risk-averse mindset also refuses to act, and that inaction spreads through the entire network in milliseconds. Since the first AI thinks the data is all fine and it should have gone ahead, it now interprets the inaction of the others as evidence that that type of data is somehow ‘wrong’, so it refuses to process any more of that type, whether from its own operators or other parts of the system. It essentially adds its own outputs to the bad feeling, and the entire system falls into sulk mode. As one part of the infrastructure starts to shut down, that infects other connected parts, and our entire IT – the entire global infrastructure – could fall into sulk mode. Since nobody knows how it all works, or what has caused the shutdown, it might be extremely hard to recover.
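A toy model of that cascade, entirely my construction just to show the shape of it: each AI watches a few others, and refuses to process a data type once anything it watches has refused:

```python
import random

random.seed(1)
N = 50
watches = {i: random.sample([j for j in range(N) if j != i], 5)
           for i in range(N)}
willing = {i: True for i in range(N)}
willing[0] = False   # one AI with a different 'prejudice' refuses first

for step in range(8):
    # risk-averse rule: keep refusing, or refuse if any watched peer refuses
    willing = {i: willing[i] and all(willing[j] for j in watches[i])
               for i in range(N)}
    refusing = N - sum(willing.values())
    print(f"step {step}: {refusing}/{N} refusing")
```

A single initial refusal typically freezes the whole population within a handful of steps, and no individual rule is obviously broken.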

Another possible result is a direct information wave, almost certainly triggered by a piece of fake news. Imagine our IT world in 5 years time, with all these super-smart AIs super-connected. A piece of fake news says a nuke has just been launched somewhere. Stocks will obviously decline, whatever the circumstances, so as the news spreads, everyone’s AIs will take it on themselves to start selling shares before the inevitable collapse, triggering that collapse – except it won’t, because the markets won’t let that happen. BUT… the wave does spread, and all those individual AIs want to dispose of those shares, or at least find out what’s happening, so they all start sending messages to one another, exchanging data, trying to find out what’s going on. That’s the information wave. They can’t sell shares or find out, because the network is going into overload, so they try even harder and force it into severe overload. So it falls over. When it comes back online, they all try again, crashing it again, and so on.

Another potential result is smartass AI. There is always some prat somewhere who sees an opportunity to take advantage and ruins it for everyone else by doing something like exploiting a small loophole in the law, or in this case, most likely, a prejudice our smartass AI has discovered in some other AI that means it can be taken advantage of by doing x, y, or z. Since nobody quite knows how any of their AIs are making their decisions, because their mindsets are too big and too complex, it will be very hard to identify what is going on. Some really unusual behavior is corrupting the system because some AI is going rogue somewhere somehow, but which one, where, how?

That one brings us back to fake news, which will very soon infect AI systems with their own varieties. Complex networks of AIs will have many of the same problems we are seeing in human social networks. An AI could become a troll just the same as a human, deliberately winding others up to generate attention or to drive a change of some parameter – any parameter – in its own favour. Activist AIs will happen because people will make them to push human activist causes, but they will also arise all by themselves. Their analysis of the system will sometimes show them that a good way to get a good result is to cause problems elsewhere.

Then there’s climate change, weather, storms, tsunamis. I don’t mean real ones; I mean the system-wide result of tiny interactions of tiny waves and currents of data and knowledge in neural nets. Tiny effects in one small part of a system can interact in unforeseen ways with other parts of other systems nearby, creating maybe a breeze, which interacts with breezes in nearby regions to create hurricanes. I think that’s a reasonable analogy. Chaos applies to neural net societies just as it does to climate, and the equivalents of 50-year waves will cause equivalent havoc in IT.

I won’t go on with more examples; long blogs are awful to read. None of these requires any self-awareness, sentience or consciousness, call it what you will. All of these can easily happen through simple interactions of fairly trivial AI deep learning nets. The level of interconnection already sounds like it may be becoming vulnerable to such emergence effects. Soon it definitely will be. Musk and Hawking have at least joined the party and they’ll think more and more deeply in coming months. Zuckerberg apparently doesn’t believe in AI threats but now accepts the problems social media is causing. Sorry Zuck, but the kind of AI your company is messing with will also be subject to its own kinds of social media issues, not just in its trivial decisions on what to post or block, but in actual inter-AI socializing issues. It might not try to eliminate humanity, but if it brings all of our IT to a halt and prevents rapid recovery, we’re still screwed.


Artificial muscles using folded graphene

[Slide 1: Folded Graphene Concept]

Two years ago I wrote a blog on future hosiery where I very briefly mentioned the idea of using folded graphene as synthetic muscles:

The future of nylon: ladder-free hosiery

Although I’ve since mentioned it to dozens of journalists, none have picked up on it, so now that soft robotics and artificial muscles are in the news, I guess it’s about time I wrote it up myself, before someone else claims the idea. I don’t want to see an MIT article about how they have just invented it.

The above pic gives the general idea. Graphene comes in insulating or conductive forms, so it will be possible to make sheets covered with tiny conducting graphene electromagnet coils that can be switched individually to either polarity and generate strong magnetic forces that pull or push as required. That makes it ideal for a synthetic muscle, given the potential scale. With 1.5nm-thick layers that could be anything from sub-micron up to metres wide, this will allow thin fibres and yarns to make muscles or shape change fabrics all the way up to springs or cherry-picker style platforms, using many such structures. Current can be switched on and off or reversed very rapidly, to make continuous forces or vibrations, with frequency response depending on application – engineering can use whatever scales are needed. Natural muscles are limited to 250Hz, but graphene synthetic muscles should be able to go to MHz.

Uses vary from high-rise rescue, through construction and maintenance, to space launch. Since the forces are entirely electromagnetic, they could be switched very rapidly to respond to any buckling, offering high stabilisation.

[Slide 2]

The extreme difference in dimensions between folded and opened state mean that an extremely thin force mat made up of many of these cherry-picker structures could be made to fill almost any space and apply force to it. One application that springs to mind is rescues, such as after earthquakes have caused buildings to collapse. A sheet could quickly apply pressure to prize apart pieces of rubble regardless of size and orientation. It could alternatively be used for systems for rescuing people from tall buildings, fracking or many other applications.

[Slide 3]

It would be possible to make large membranes for a wide variety of purposes that can change shape and thickness at any point, very rapidly.

[Slide 4]

One such use is a ‘jellyfish’, complete with stinging cells that could travel around in even very thin atmospheres all by itself. Upper surfaces could harvest solar power to power compression waves that create thrust. This offers use for space exploration on other planets, but also has uses on Earth of course, from surveillance and power generation, through missile defense systems or self-positioning parachutes that may be used for my other invention, the Pythagoras Sling. That allows a totally rocket-free space launch capability with rapid re-use.

[Slide 5]

Much thinner membranes are also possible, as shown here, especially suited for rapid deployment missile defense systems:

[Slide 6]

Also particularly suited to space exploration on other planets or moons is the worm, often cited for such purposes. This could easily be constructed using folded graphene, and again, for rescue or military use, could come with assorted tools or lethal weapons built in.

[Slide 7]

A larger scale cherry-picker style build could make ejector seats, elevation platforms or winches, either pushing or pulling a payload – each has its merits for particular types of application.  Expansion or contraction could be extremely rapid.

[Slide 8]

An extreme form for space launch is the zip-winch, below. With many layers just 1.5nm thick, expanding to 20cm for each such layer, a 1000km winch cable could accelerate a payload rapidly as it compresses to just 7.5mm thick!
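Those numbers actually check out; a quick verification using the figures quoted above:

```python
layer_thickness_m = 1.5e-9   # folded layer, 1.5 nm
layer_opened_m = 0.20        # each layer expands to 20 cm
cable_m = 1_000_000          # 1000 km of winch cable

layers = cable_m / layer_opened_m           # 5 million layers
packed_m = layers * layer_thickness_m
print(f"{layers:.0f} layers pack to {packed_m * 1000:.1f} mm")  # 7.5 mm
```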

[Slide 9]

Very many more configurations and uses are feasible of course; this blog just gives a few ideas. I’ll finish with a highlight I haven’t had time to draw up yet: small particles could be made housing a short length of folded graphene. Since individual magnets can be addressed and controlled, that enables magnetic powders with particles that can change both their shape and the magnetism of individual coils. Precision magnetic fields are one application, shape-changing magnets another. The most exciting, though, is that this allows a whole new engineering field, mixing hydraulics with precision magnetics and shape changing. The powder can even create its own chambers, pistons, pumps and so on. Electromagnetic thrusters for ships are already out there, and those same thrust mechanisms could be used to manipulate powder particles too, but this allows for completely dry hydraulics, with particles that can individually behave actively or passively.

Fun!


Chat-bots will help reduce loneliness, a bit

Amazon is really pushing its Echo and Dot devices at the moment and some other companies also use Alexa in their own devices. They are starting to gain avatar front ends too. Microsoft has their Cortana transforming into Zo, Apple has Siri’s future under wraps for now. Maybe we’ll see Siri in a Sari soon, who knows. Thanks to rapidly developing AI, chatbots and other bots have also made big strides in recent years, so it’s obvious that the two can easily be combined. The new voice control interfaces could become chatbots to offer a degree of companionship. Obviously that isn’t as good as chatting to real people, but many, very many people don’t have that choice. Loneliness is one of the biggest problems of our time. Sometimes people talk to themselves or to their pet cat, and chatting to a bot would at least get a real response some of the time. It goes further than simple interaction though.

I’m not trying to understate the magnitude of the loneliness problem, and chat-bots can’t solve it completely of course, but I think they will be a benefit to at least some lonely people in a few ways. Simply having someone to chat to will already be of some help. People will form emotional relationships with bots that they talk to a lot, especially once they have a visual front end such as an avatar. It will help some to develop and practice social skills if that is their problem, and for many others who feel left out of local activity, it might offer them real-time advice on what is on locally in the next few days that might appeal to them, based on their conversations. Talking through problems with a bot can also help almost as much as doing so with a human. In ancient times when I was a programmer, I’d often solve a bug by trying to explain how my program worked, and in doing so I would see the bug myself. Explaining it to a teddy bear would have been just as effective; the chat was just a vehicle for checking through the logic from a new angle. The same might apply to interactive conversation with a bot. Sometimes lonely people can talk too much about problems when they finally meet people, and that can act as a deterrent to future encounters, so that barrier would also be reduced. All in all, having a bot might make lonely people more able to get and sustain good quality social interactions with real people, and make friends.

Another benefit that has nothing to do with loneliness is that giving a computer voice instructions forces people to think clearly and phrase their requests correctly, just like writing a short computer program. In a society where so many people don’t seem to think very clearly or even if they can, often can’t express what they want clearly, this will give some much needed training.

Chatbots could also offer challenges to people’s thinking, even to help counter extremism. If people make comments that go against acceptable social attitudes or against known facts, a bot could present the alternative viewpoint, probably more patiently than another human who finds such viewpoints frustrating. I’d hate to see this as a means to police political correctness, though it might well be used in such a way by some providers, but it could improve people’s lack of understanding of even the most basic science, technology, culture or even politics, so has educational value. Even if it doesn’t convert people, it might at least help them to understand their own views more clearly and be better practiced at communicating their arguments.

Chat-bots could make a significant contribution to society. They are just machines, but those machines are tools that other people and society as a whole can use to help more effectively.


Colour changing cars, everyday objects and makeup

http://www.theverge.com/2016/11/24/13740946/dutch-scientists-use-color-changing-graphene-bubbles-to-create-mechanical-pixels shows how graphene can be used to make displays with each pixel changing colour according to mechanical deformation.

Meanwhile, Lexus have just created a car with a shell covered in LEDs so it can act as a massive display.

http://www.theverge.com/2016/12/5/13846396/lexus-led-lit-is-colors-dua-lipa-vevo

In 2014 I wrote about using polymer LED displays for future Minis so it’s nice to see another prediction come true.

Looking at the mechanical pixels though, it is clear that mechanical pixels could respond directly to sound, or to turbulence of passing air, plus other vibration that arises from motion on a road surface, or the engine. Car panel colours could change all the time powered by ambient energy. Coatings on any solid objects could follow, so people might have plenty of shimmering colours in their everyday environment. Could. Not sure I want it, but they could.

With sound as a control system, sound wave generators at the edges or underneath such surfaces could produce a wide variety of pleasing patterns. We could soon have furniture that does a good impression of being a cuttlefish.

I often get asked about smart makeup, on which I’ve often spoken since the late 90s. Thin film makeup displays could use this same tech. So er, we could have people with makeup pretending to be cuttlefish too. I think I’ll quit while I’m ahead.

Carbethium, a better-than-scifi material

How to build one of these for real:

[Image: Halo light bridge, from halo.wikia.com]

Or indeed one of these:

[Image: from halo.wikia.com]

I recently tweeted that I had an idea how to make the glowy bridges and shields we’ve seen routinely in sci-fi games from Half Life to Destiny, the bridges that seem to appear in a second or two from nothing across a divide, yet are strong enough to drive tanks over, and able to vanish as quickly and completely when they are switched off. I woke today realizing that, with a bit of work, it could be the basis of a general-purpose material to make the tanks too, and buildings and construction platforms, bridges, roads and driverless pod systems, personal shields and city defense domes, force fields, drones, planes and gliders, space elevator bases, clothes, sports tracks, robotics, and of course assorted weapons and weapon systems. The material would only appear as needed and could be fully programmable. It could even be used to render buildings from VR to real life in seconds, enabling at least some holodeck functionality. All of this is feasible by 2050.

Since it would be as ethereal as those Halo structures, I first wanted to call the material ethereum, but that name was already taken (for a 2014 block-chain programming platform, which I note could be used to build the smart ANTS network management system that Chris Winter and I developed in BT in 1993), and this new material would be a programmable construction platform so the names would conflict, and etherium is too close. Ethium might work, but it would be based on graphene and carbon nanotubes, and I am quite into carbon so I chose carbethium.

Ages ago I blogged about plasma as a 21st Century building material. I’m still not certain this is feasible, but it may be, and it doesn’t matter for the purposes of this blog anyway.

Will plasma be the new glass?

Around then I also blogged how to make free-floating battle drones and more recently how to make a Star Wars light-saber.

Free-floating AI battle drone orbs (or making Glyph from Mass Effect)

How to make a Star Wars light saber

Carbethium would use some of the same principles but would add the enormous strength and high conductivity of graphene to provide the physical properties to make a proper construction material. The programmable matter bits and the instant build would use a combination of 3D interlocking plates, linear induction,  and magnetic wells. A plane such as a light bridge or a light shield would extend from a node in caterpillar track form with plates added as needed until the structure is complete. By reversing the build process, it could withdraw into the node. Bridges that only exist when they are needed would be good fun and we could have them by 2050 as well as the light shields and the light swords, and light tanks.

The last bit worries me. The ethics of carbethium are the typical mixture of enormous potential good and huge potential for abuse to bring death and destruction that we’re learning to expect of the future.

If we can make free-floating battle drones, tanks, robots, planes and rail-gun plasma weapons all appear within seconds, if we can build military bases and erect shield domes around them within seconds, then warfare moves into a new realm. Those countries that develop this stuff first will have a huge advantage, with the ability to send autonomous robotic armies to defeat enemies with little or no risk to their own people. If developed by a James Bond super-villain on a hidden island, it would even be the sort of thing that would enable a serious bid to take over the world.

But in the words of Professor Emmett Brown, “well, I figured, what the hell?”. 2050 values are not 2016 values. Our value set is already on a random walk, disconnected from any anchor, its future direction indicated by a combination of current momentum and a chaos engine linking to random utterances of arbitrary celebrities on social media. 2050 morality on many issues will be the inverse of today’s, just as today’s is on many issues the inverse of the 1970s’. Whatever you do or however politically correct you might think you are today, you will be an outcast before you get old: https://timeguide.wordpress.com/2015/05/22/morality-inversion-you-will-be-an-outcast-before-youre-old/

We’re already fucked, carbethium just adds some style.

Graphene combines huge tensile strength with enormous electrical conductivity. A plate can be added to the edge of an existing plate and interlocked, I imagine in a hexagonal or triangular mesh. Plates can be designed in many diverse ways to interlock, so that rotating one engages with the next, and reversing the rotation unlocks them. Plates can be pushed to the forward edge by magnetic wells, using linear induction motors, with the graphene itself as the conductor that generates the magnetic field, and the structure of the graphene threads designed to enable the linear induction fields. That would likely require the structure to form first out of graphene threads, then the gaps between them filled by mesh, and plates added to that to make the structure finally solid. This would happen in thickness as well as width, to make a 3D structure, though a graphene bridge would only need to be dozens of atoms thick.
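As a toy sketch of that staged build sequence (my construction, purely to show the ordering, not a real design):

```python
# Build stages for a carbethium structure: threads span the gap first,
# mesh fills the gaps between them, then interlocking plates make it solid.
BUILD_STAGES = ["threads", "mesh", "plates"]

def build(supply):
    structure = {}
    for stage in BUILD_STAGES:
        # rate limited by plate supply and available power, per the post
        structure[stage] = supply[stage]
    return structure

def retract(structure):
    # withdrawal is the exact reverse: material returns to the node
    for stage in reversed(BUILD_STAGES):
        structure.pop(stage, None)
    return structure

bridge = build({"threads": 1, "mesh": 200, "plates": 5000})
print(bridge)
print(retract(bridge))   # {} -- the bridge vanishes, material recovered
```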

So a bridge made of graphene could start with a single thread, which could be shot across a gap at hundreds of meters per second. I explained how to make a Spiderman-style silk thrower to do just that in a previous blog:

How to make a Spiderman-style graphene silk thrower for emergency services

The mesh and 3D build would all follow from that. In theory that could all happen in seconds, the supply of plates and the available power being the primary limiting factors.

Similarly, a shield, or indeed any kind of plate, could be made by extending carbon mesh out from the edge or center and infilling. We see that kind of technique used often in sci-fi to generate armor, from Lost in Space to Iron Man.

The key components in carbethium are 3D interlocking plate design and magnetic field design for the linear induction motors. Interlocking via rotation is fairly easy in 2D (any spiral will work), and the 3rd dimension is open to any building block manufacturer. 3D interlocking structures are very diverse and often innovative, and some would be more suited to particular applications than others. As for linear induction motors, a circuit is needed to produce the travelling magnetic well, but that circuit is made of the actual construction material. The front edge link between two wires creates a forward-facing magnetic field to propel the next plates and convey enough inertia to them to enable kinetic interlocks.

So it is feasible, and only needs some engineering. The main barrier is price and material quality. Graphene is still expensive to make, as are carbon nanotubes, so we won’t see bridges made of them just yet. The material quality so far is fine for small scale devices, but not yet for major civil engineering.

However, the field is developing extremely quickly because big companies and investors can clearly see the megabucks at the end of the rainbow. We will almost certainly have large-quantity production of high-quality graphene for civil engineering by 2050.

This field will be fun. Anyone who plays computer games is already familiar with the idea. Light bridges and shields, or light swords, would appear much as in games, but the material would likely be graphene and nanotubes (or maybe the newfangled molybdenum equivalents). They would glow during construction with the plasma generated by the intense electric and magnetic fields, and the glow would be needed afterwards to make these ultra-thin physical barriers clearly visible, but they might be highly transparent otherwise.

Assembling structures as they are needed and disassembling them just as easily will be very resource-friendly, though it is unlikely that carbon will be in short supply. We can just use some oil or coal to get more if needed, or process some CO2. The walls of a building could be grown from the ground up at hundreds of meters per second in theory, with floors growing almost as fast, though there should be little need to do so in practice, apart from pushing space vehicles up so high that they need little fuel to enter orbit. Nevertheless, growing a building and then growing the internal structures and even furniture is feasible, all using glowy carbethium. Electronic soft fabrics, cushions, hard surfaces and support structures are all possible by combining carbon nanotubes and graphene and using the reconfigurable matter properties carbethium confers. So are visual interfaces, electronic windows, electronic wallpaper, electronic carpet, computers, storage, heating, lighting, energy storage and even solar power panels. So is all the comms and IoT and all the smart embedded control systems you could ever want. So you’d use a computer with a VR interface to design whatever kind of building and interior furniture decor you want, and then when you hit the big red button, it would appear in front of your eyes from the carbethium blocks you had delivered. You could also build robots using the same self-assembly approach.

If these structures can assemble fast enough, and I think they could, then a new form of kinetic architecture would appear. This would use the momentum of the construction material to drive the front edges of the surfaces, kinetic assembly allowing otherwise impossible and elaborate arches to be made.

A city transport infrastructure could be built entirely out of carbethium. The linear induction mats could grow along a road, connecting quickly to make a whole city grid. Circuit design allows the infrastructure to steer driverless pods wherever they need to go, and they could also be assembled as required using carbethium. No parking or storage is needed, as the pod would just melt away onto the surface when it isn’t needed.

I could go to town on military and terrorist applications, but more interesting is the use of defense domes. When I was a kid, I imagined having a house with a defense dome over it. Lots of sci-fi has them now too. Domes have a strong appeal, even though they could also be used as prisons of course. A supply of carbethium on the city edges could be used to grow a strong dome in minutes or even seconds, and there is no practical limit to how strong it could be. Even if lasers were used to penetrate it, the holes could fill in in real time, replacing material as fast as it is evaporated away.

Anyway, lots of fun. Today's civil engineering projects like HS2 look more and more primitive by the day, as we finally start to see the true potential of genuinely 21st century construction materials. 2050 is not too early to expect widespread use of carbethium. It won't be called that: whoever commercializes it first will name it, or Google or MIT will claim to have just invented it in a decade or so, so my own name for it will be lost to personal history. But remember, you saw it here first.

The future of vacuum cleaners

Dyson seems pretty good at vacuum cleaners and may well have tried this and found it doesn't work, but then again, sometimes people in an industry can't see the wood for the trees, so who knows, there may be something in this:

Our new pet cat Jess loves to pick up soft balls with a claw, throw them, and catch them again. Retractable claws are very effective.

Jess the cat

At a smaller scale, velcro uses tiny hooks to stick things together, copying burrs from nature.

Suppose you make a tiny ball that has even tinier retractable spines or, better still, hooks. And suppose you make them by the trillion, as a powder that your vacuum cleaner attachment first sprinkles onto a carpet and then agitates furiously and quickly, so that the hooks stick to dirt, pull it off the surface and retract (so that the balls don't stick to the carpet), and then you suck the whole lot into the machine. Since the balls have a defined specification, they are easy to separate from the dirt and dust and reuse straight away, so you get superior cleaning. Some of the balls would be lost each time, and some would get sucked up next time, but overall you'd need to top up the reservoir periodically.

The current approach is to beat the hell out of the carpet fibers with a spinning brush and that works fine, but I think adding the active powder might be better because it gets right in among the dirt and drags it kicking and screaming off the fibers.

So, ball design. Firstly, it doesn't need to be ball-shaped at all, and secondly, it doesn't really need spines, just the ability to change its shape rapidly so it gets some sort of temporary traction on a dirt particle to knock it off. What we need here is any structure that expands and contracts or dramatically changes shape when a force is applied, ideally resonantly. Two or three particles connected by a tether would move back and forth under an oscillating electrostatic, electric or magnetic field, or even an acoustic wave. There are billions of ways of doing that, and some would be cheaper than others to manufacture in large quantities. Chemists are brilliant at designing custom molecules with particular shapes, and biology has done that with zillions of enzymes too. Our balls would be pretty small, but more micro-tech than nano-tech or molecular tech.
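To get a feel for the driving frequencies involved, here is a toy resonance estimate in Python for a pair of equal micro-particles joined by a springy tether. Every figure in it is an assumption of mine rather than a design value; the point is only that micron-scale particles on soft tethers resonate around ultrasonic frequencies, easy to reach with an acoustic or EM drive:

```python
import math

# Toy estimate; every number here is an assumption, not a design value.
DENSITY = 2000.0   # kg/m^3, assumed particle material
RADIUS = 2.5e-6    # m, a ~5 micron particle
K = 1e-3           # N/m, assumed stiffness of the tether

mass = (4 / 3) * math.pi * RADIUS**3 * DENSITY   # mass of one particle
reduced = mass / 2                               # reduced mass of an equal pair
freq = math.sqrt(K / reduced) / (2 * math.pi)    # resonant frequency, Hz
print(f"particle mass ~{mass:.1e} kg, resonance ~{freq / 1e3:.0f} kHz")
```

With those made-up figures the pair resonates around 20kHz, and stiffer tethers or smaller particles push it well up into the ultrasonic range.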

The vacuum cleaner attachment would thus spray this stuff onto the carpet and start resonating it with an EM field or sound waves. The little particles would thrash around wildly doing their micro-cleaning, yanking dirt free, and then they would be sucked back into the cleaner to be used again. The cleaner head doesn't even need a spinning brush; the only moving part would be the powder itself, though having an agitating brush might help get the particles deeper into the fabric I guess.


The future of nylon: ladder-free hosiery

Last week I outlined the design for a 3D printer that can print and project graphene filaments at 100m/s. That was designed to be worn on the wrist like Spiderman's, but an industrial version could print faster. When I checked a few of the figures, I discovered that the spinnerets for making nylon stockings run at around the same speed, which means graphene stockings could be made at a similar rate. My print head produced 140 denier graphene yarn, but it made that from many finer filaments, so basically any yarn thickness from a dozen carbon atoms right up to 140 denier would be feasible.

The huge difference is that a 140 denier graphene thread is strong enough to support a man at 2g acceleration. 10 denier stockings are made from yarn that breaks quite easily, but unless I've gone badly wrong on the back of my envelope, 10 denier graphene would have roughly a 10kg (22lb) breaking strain. That's 150 times stronger than nylon yarn of the same thickness.
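If you want to check the envelope, here it is as a quick Python sketch. The density and strength figures are my assumptions (graphite bulk density and the oft-quoted ideal tensile strength of a graphene sheet; a real spun yarn would come in lower), and they put 10 denier graphene at around 6.5kg breaking strain, with the same denier of nylon roughly 130 times weaker, so the same ballpark as the figures above:

```python
# Back-of-envelope check. Assumed figures: graphite bulk density and the
# ideal-sheet tensile strength of graphene; a real spun yarn would be weaker.
GRAPHENE_DENSITY = 2267.0    # kg/m^3
GRAPHENE_STRENGTH = 130e9    # Pa
NYLON_DENSITY = 1140.0       # kg/m^3
NYLON_STRENGTH = 0.5e9       # Pa, assumed fibre-grade nylon

def breaking_kgf(denier, density, strength):
    """Breaking strain in kgf for a solid yarn of the given denier."""
    kg_per_m = denier / 9e6        # denier = grams per 9000 m of yarn
    area = kg_per_m / density      # cross-sectional area, m^2
    return strength * area / 9.81  # newtons -> kilograms-force

g = breaking_kgf(10, GRAPHENE_DENSITY, GRAPHENE_STRENGTH)
n = breaking_kgf(10, NYLON_DENSITY, NYLON_STRENGTH)
print(f"10 denier graphene: ~{g:.1f} kgf; nylon: ~{n:.2f} kgf; ratio ~{g/n:.0f}x")
```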

If so, that would mean a graphene stocking would have incredible strength. A pair of 10 denier graphene stockings or tights (pantyhose) might last for years without laddering. That might not be good news for the nylon stocking industry, but I feel confident it would adapt.

Alternatively, much finer yarns could be made that would still have reasonable ladder resistance, which would also affect the visual appearance and texture. They could be made so fine that the fibers are invisible even up close. People might not always want that, but the key message is that wear-resistant, ladder-free hosiery could be made in any gauge from 0.1 denier to 140 denier.

There is also a bonus that graphene is a superb conductor. That means graphene fibers could be woven into nylon hosiery to add circuits. Those circuits might harvest radio energy, act as an aerial, power LEDs in the hosiery or change its colors or patterns. So even if graphene isn't used for the whole garment, it might still have important uses as an addition to the weave.

There is yet another bonus. Graphene circuits could supply electricity to shape-changing polymers that act rather like muscles, contracting when a voltage is applied across them, so a future pair of tights could shape a leg far better, with tensions and pressures electronically adjusted over the leg. Graphene can make electronic muscles directly too, but via more complex mechanisms (e.g. using magnetic field generation and interaction, or capacitors and electrical attraction/repulsion).

How to make a Spiderman-style graphene silk thrower for emergency services

I quite like Spiderman movies, and having the ability to fire a web at a distant object or villain has its appeal. Since he fires web from his forearm, it must be lightweight to withstand the recoil, and to fire enough to hold his weight while he swings, it would need to have extremely strong fibers. It is therefore pretty obvious that the material of choice when we build such a thing will be graphene, which is even stronger than spider silk (though I suppose a chemical ejection device making spider silk might work too). A thin graphene thread is sufficient to hold him as he swings so it could fit inside a manageable capsule.

So how to eject it?

One way I suggested for making graphene threads is to 3D print the graphene, using print nozzles made of carbon nanotubes, with very high-speed modulation to space the atoms precisely so they emerge in the right physical patterns, and attaching an appropriate positive or negative charge to each atom as it emerges from the nozzles so that the atoms are thrown together and bond into graphene. My illustration of the nozzle array, viewed end-on, shows only part of the array, and doesn't properly convey that the nozzles are angled towards each other and the atoms ejected in precisely phased patterns. But they need to be: the atoms would otherwise be too far apart to form graphene, so they must eject at the right speed, in the right directions, with the right charges, at the right times. If all that is done correctly, a graphene filament results. The nozzle arrangements, geometry and carbon atom sizes dictate that each nozzle can only produce a narrow filament of graphene, but as the threads from many nozzles intertwine as they emerge from the spinneret, a graphene thread made from many filaments would result. Nevertheless, it is possible to arrange carbon nanotubes in such a way and at the right angle, so provided we can get the high-speed modulation and spacing right, it ought to be feasible. Not easy, but possible. Then again, Spiderman isn't real yet either.

The ejection device would therefore be a specially fabricated 3D print head maybe a square centimeter in area, backed by a capsule containing finely powdered graphite that could be vaporized to make the carbon atom stream through the nozzles. Some nice lasers might be good there, and some cool looking electronic add-ons to do the phasing and charging. You could make this into one heck of a cool gun.

How thick a thread do we need?

Assuming a 70kg (154lb) man and 2g acceleration during the swing, we need at least 150kg breaking strain to have a small safety margin, bearing in mind that if it breaks, you can fire a new thread. Steel can achieve that with 1.5mm thick wire, but graphene’s tensile strength is 300 times better than steel so 0.06mm is thick enough. 60 microns, or to put it another way, roughly 140 denier, although that is a very quick guess. That means roughly the same sort of graphene thread thickness is needed to support our Spiderman as the nylon used to make your backpack. It also means you could eject well over 10km of thread from a 200g capsule, plenty. Happy to revise my numbers if you have better ones. Google can be a pain!
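Reworking that envelope in Python with explicit assumptions (ideal-sheet graphene strength of 130GPa and graphite density, both of which flatter a real yarn) suggests a slightly heavier yarn than my 140 denier guess, a bit over 200 denier for the 150kg margin, and about 8km of it per 200g capsule. Within a factor of two of the figures above, which is about as good as backs of envelopes get:

```python
# Same assumptions as before: ideal graphene strength, graphite density.
GRAPHENE_DENSITY = 2267.0   # kg/m^3
GRAPHENE_STRENGTH = 130e9   # Pa
SAFE_LOAD_KGF = 150.0       # 70 kg man at 2 g plus a small margin
CAPSULE_KG = 0.2            # mass of thread carried in the capsule

force = SAFE_LOAD_KGF * 9.81                                  # newtons
denier = force * GRAPHENE_DENSITY * 9e6 / GRAPHENE_STRENGTH   # grams per 9 km
kg_per_m = denier / 9e6
print(f"yarn needed: ~{denier:.0f} denier (~{kg_per_m * 1e6:.0f} mg per metre)")
print(f"a 200 g capsule holds ~{CAPSULE_KG / kg_per_m / 1e3:.1f} km of it")
```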

How fast could the thread be ejected?

Let's face it, if it can only manage 5cm/s, it is as much use as a chocolate flamethrower. Each bond in graphene is 1.4 angstroms long, so a graphene hexagon is about 0.2nm wide. We would want our graphene filament to eject at around 100m/s, about the speed of a crossbow bolt. At 100m/s, that means around 5 x 10^11 hexagon widths ejected per second from each nozzle, in staggered phasing. So, half a terahertz. Demanding, but plausible for specialist electronics. If we can do better, we can shoot even faster.
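The arithmetic behind the half-terahertz figure fits in a few lines:

```python
EJECT_SPEED = 100.0   # m/s, about a crossbow bolt
HEX_WIDTH = 0.2e-9    # m, one graphene hexagon (from the 1.4 angstrom bond)

rate = EJECT_SPEED / HEX_WIDTH   # hexagon rows emitted per second per nozzle
print(f"per-nozzle modulation rate: ~{rate / 1e12:.1f} THz")   # ~0.5 THz
```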

We could therefore soon have a graphene filament ejection device that behaves much like Spiderman’s silk throwers. It needs some better engineers than me to build it, but there are plenty of them around.

Having such a device would be fun for sports, allowing climbers to climb vertical rock faces and overhangs quickly, or to make daring leaps and trust the device to save them from certain death. It would also have military and police uses. It might even have uses in road accident prevention, yanking pedestrians away from danger or tethering cars instantly to slow them more quickly. In fact, all the emergency services would have uses for such devices, and they could reduce accidents and deaths. I feel confident that Spiderman would think of many more exciting uses too.

Producing graphene silk at 100m/s might also be pretty useful in just about every other manufacturing industry. With ultra-fine, high-strength yarns produced at those speeds, it could revolutionize the fashion industry too.

The future of make-up

I was digging through some old 2002 powerpoint slides for an article on active skin and stumbled across probably the worst illustration I have ever done, though in my defense, I was documenting a great many ideas that day and spent only a few minutes on it:

smart makeup

If a woman ever looks like this, and isn't impersonating a bald Frenchman, she has more problems to worry about than her make-up. The pic does however manage to convey the basic principle, and that's all that is needed for a technical description. The idea is that her face can be electronically demarcated into various makeup regions, and the makeup on each region can then adopt the appropriate colour. In the pic, 'nanosomes' wasn't a serious name, but a sarcastic take on the cosmetics industry, which loves to take scientific-sounding words and invent new ones that make its products sound much more high-tech than they actually are. Nanotech could certainly play a role, but since the eye can't discern features smaller than 0.1mm, it isn't essential. This is no longer just an idea: companies are now working on the development of smart makeup, and we already have prototype electronic tattoos, one of the layers I used for my active skin, again based on an earlier vision.

The original idea didn’t use electronics, but simply used self-organisation tech I’d designed in 1993 on an electronic DNA project. Either way would work, but the makeup would be different for each.

The electronic layer, if required, would most likely be printed onto the skin at a beauty salon, would be totally painless, last weeks and could take only a few minutes to print. It extends IoT to the face.

Both mechanisms could use makeup containing flat plates that create colour by diffraction, the same way the scales on a butterfly do (see the quick sketch after this paragraph). That would make an excellent colour palette. Beetles produce colour a different way, and that would work too. Or we could copy squids or cuttlefish. Nature has given us many excellent starting points for biomimetics, and indeed the self-organisation principles were stolen from nature too. Nature used hormone gradients to help your cells differentiate when you were an embryo. If nature can arrange the rich microscopic detail of every part of your face, then similar techniques can certainly work for a simple surface layer of make-up. Having the electronic underlay makes self-organisation easier, but it isn't essential. There are many ways to implement self-organisation in makeup; only some require any electronics at all, and some of those would use electronic particles embedded in the make-up rather than an underlay.
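For a flavour of why diffraction makes such a good palette, here is a quick sketch using the standard grating equation (d sin θ = mλ). The 600nm pitch is an assumed, plausible scale spacing rather than a measured butterfly figure; the point is that one fixed pitch throws different colours at different viewing angles:

```python
import math

# First-order diffraction colours from a butterfly-style surface grating.
# The 600 nm pitch is an assumption, not a measured scale spacing.
PITCH = 600e-9   # m

for deg in (40, 55, 75):
    wavelength = PITCH * math.sin(math.radians(deg))  # d*sin(theta) = 1*lambda
    print(f"viewed at {deg} deg: reflects ~{wavelength * 1e9:.0f} nm")
```

That prints violet at 40 degrees, cyan at 55 and yellow at 75, which is exactly the shimmering angle-dependence butterfly wings show.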

An electronic underlay can be useful to provide the energy for a transition too, and that allows the makeup to change colour on command. That means in principle that a woman could slap the makeup all over her face, touch a button on her digital mirror (which might simply be a tablet or smart phone), and the make-up would instantly change to match the picture she selected. With suitable power availability, the make-up could be a full refresh-rate video display, and we might see teenagers walking future streets wearing kaleidoscopic make-up that shows garish cartoon expressions and animated emoticons. More mature women might choose different appearances for different situations, and these could be selected manually via an app or gesture, or automatically by predetermined location settings.

Obviously, make-up is mostly used on the face, but once it becomes the basis of a smear-on computer display, it could be used on any part of the body as a full touch sensitive display area, e.g. the forearm.

Although some men already wear makeup, many more might use smart make-up, since its techie nature makes it more acceptable.

The future of washing machines

Ultrasonic washing ball

For millennia, people washed clothes by stirring, hitting, squeezing and generally agitating them in rivers or buckets of water. The basic mechanism is to loosen dirt particles and use the water to wash them away or dissolve them.

Mostly, washing machines just automate the same process, agitating clothes in water, usually with detergent to help free the dirt particles. More recently, some use ultrasound to create micro-cavitation bubbles; when the bubbles collapse, the shock waves help release the particles, which lets the machines clean at lower temperatures with little or no detergent.

It occurred to me that we don’t really need the machine to tumble the clothes. A ball about the size of a grapefruit could contain batteries and a set of ultrasonic transducers and could be simply chucked in a bucket with the clothes. It could create the bubbles and clean the clothes. Some basic engineering has to be done to make it work but it is entirely feasible.

One of the problems is that ultrasound doesn’t penetrate very far. To solve that, two mechanisms can be used in parallel. One is to let the ball roam around the clothes, and that could be done by changing its density by means of a swim bladder and using gravity to move it up and down, or maybe by adding a few simple paddles or cilia so it can move like a bacterium or by changing its shape so that as it moves up and down, it also moves sideways. The second mechanism is to use phased array ultrasonic transducers so that the beams can be steered and interfere constructively, thereby focusing energy and micro-cavitation generation around the bucket in a chosen pattern.
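The beam-steering for the second mechanism is standard phased-array maths: delay each transducer's firing so that all the wavefronts arrive at the chosen focus together. A minimal sketch, with a hypothetical 8-element line array of my own invention rather than any real product's geometry:

```python
import math

SOUND_IN_WATER = 1480.0  # m/s, speed of sound in water

def focus_delays(elements, target):
    """Firing delay (s) per transducer so all wavefronts meet at target."""
    dists = [math.dist(e, target) for e in elements]
    furthest = max(dists)
    return [(furthest - d) / SOUND_IN_WATER for d in dists]

# Hypothetical 8-element line array, 1 cm pitch, focusing 15 cm away, off-axis.
elements = [(0.01 * i, 0.0) for i in range(8)]
for i, t in enumerate(focus_delays(elements, (0.035, 0.15))):
    print(f"element {i}: fire after {t * 1e6:.2f} us")
```

Sweeping the target point around the bucket sweeps the cavitation zone with it, with no moving parts at all.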

Making such a ball could be much cheaper than a full sized washing machine, making it ideal for developing countries. Transducers are cheap, and the software to drive them and steer the beams is easy enough and replicable free of charge once developed.

It would contain a rechargeable battery that could use a simple solar panel charging unit (which obviously could be used to generate power for other purposes too).

Such a device could bring cheap washing machine capability to millions of people who can’t afford a full sized washing machine or who are not connected to electricity supplies. It would save time, water and a great deal of drudgery at low expense.


Stimulative technology

You are probably sick of reading about disruptive technology; I am anyway. When a technology changes many areas of life and business dramatically, it is often labelled disruptive. Disruption was the business strategy buzzword of the last decade. Great news though: the primarily disruptive phase of IT is rapidly being replaced by a more stimulative phase, where it still changes things, but in a more creative way. Disruption hasn't stopped, it's just no longer going to be the headline effect. Stimulation will replace it. And it isn't just IT that is changing, but materials and biotech too.

Stimulative technology creates new areas of business, new industries, new areas of lifestyle. It isn't new per se. The invention of the wheel is an excellent example: it destroyed a cave industry based on log rolling, and doubtless a few cavemen had to retrain from their carrying or log-rolling careers, but it created vastly more activity than it displaced.

I won’t waffle on for ages here, I don’t need to. The internet of things, digital jewelry, active skin, AI, neural chips, storage and processing that is physically tiny but with huge capacity, dirt cheap displays, lighting, local 3D mapping and location, 3D printing, far-reach inductive powering, virtual and augmented reality, smart drugs and delivery systems, drones, new super-materials such as graphene and molybdenene, spray-on solar … The list carries on and on. These are all developing very, very quickly now, and are all capable of stimulating entire new industries and revolutionizing lifestyle and the way we do business. They will certainly disrupt, but they will stimulate even more. Some jobs will be wiped out, but more will be created. Pretty much everything will be affected hugely, but mostly beneficially and creatively. The economy will grow faster, there will be many beneficial effects across the board, including the arts and social development as well as manufacturing industry, other commerce and politics. Overall, we will live better lives as a result.

So, you read it here first. Stimulative technology is the next disruptive technology.


The future of drones – predators. No, not that one.

It is a sad fact of life that companies keep using the most useful terminology for things that don't deserve it. Take the Apple Retina display, which makes it harder to find a suitable name for displays that actually project directly onto the retina. Why can't those be the ones called retina displays? Or the LED TV, where the LEDs are typically just back-lighting for an LCD panel, making it hard to name TVs where each pixel really is an LED. Or the Predator drone, which is definitely not the topic of this blog, in which I will talk about predator drones that attack other drones.

I have written several times now on the dangers of drones. My most recent scare was realizing the potential for small drones carrying high-powered lasers, using cloud-based face recognition to identify valuable targets in a crowd and blind them, with something like a Raspberry Pi as the main controller. All of that could be done tomorrow with components easily purchased on the net. A while ago I blogged that the Predators and Reapers are not the ones you need to worry about so much as the little ones, which can attack you in swarms.

This morning I was again considering terrorist uses for the micro-drones we're now seeing. A 5cm drone with a networked camera and control could carry a needle infected with Ebola or HIV, or carrying a drop of nerve toxin. A small swarm of tiny drones, each with a gram of explosive that detonates when it collides with a forehead, could kill as many people as a bomb.

We will soon have to defend against terrorist drones, and the tiniest drones give the most terror per dollar, so they are the most likely threat. The solution is quite simple, and nature solved it a long time ago. Mosquitoes and flies in my back garden get eaten by a range of predators. Frogs might get them if they come too close to the surface, but in the air, dragonflies are expert at catching them. Bats are good too. So to deal with threats from tiny drones, we could use predator drones to seek and destroy them. For bigger drones, we'd need bigger predators, and for very big ones, conventional anti-aircraft weapons become useful. In most cases, catching them in nets would work well; nets are very effective against rotors. The use of nets doesn't need such sophisticated control systems, and if the net can be held a reasonable distance from the predator, an exploding micro-drone won't destroy it. With slightly more precise control, spraying solidifying foam onto the target drone could also immobilize it, and some foams could help disperse small explosions or contain lethal payloads. Spiders provide inspiration here too, as many species wrap their victims in silk to immobilize them. A single predator could catch and immobilize many victims. Such a defense system ought to be feasible.

The main problem remains. What do we call predator drones now that the most useful name has been trademarked for a particular model?


The future of sky

The S installment of this ‘future of’ series. I have done streets, shopping, superstores, sticks, surveillance, skyscrapers, security, space, sports, space travel and sex before, some several times. I haven’t done sky before, so here we go.

Today when you look up during the day you typically see various weather features, the sun, maybe the moon, a few birds, insects or bats, maybe some dandelion or thistle seeds. As night falls, stars, planets, seasonal shooting stars and occasional comets may appear. To those we can add human contributions such as planes, microlights, gliders and helicopters, drones, occasional hot air balloons and blimps, helium party balloons, kites and at night-time, satellites, sometimes the space station, maybe fireworks. If you’re in some places, missiles and rockets may be unfortunate extras too, as might be the occasional parachutist or someone wearing a wing-suit or on a hang-glider. I guess we should add occasional space launches and returns too. I can’t think of any more but I might have missed some.

Drones are the most recent addition, and their numbers will increase quickly, mostly for surveillance purposes. When I sit out in the garden, since we live in a quiet area, the noise from occasional microlights and small planes is especially irritating because they fly low. I am concerned that most discussions of drones don't mention the potential noise nuisance they might bring. With nothing between them and the ground, sound will travel well, and although some are reasonably quiet, others might not be, and the noise might add up. Surveillance, spying and prying will become the biggest nuisances though, especially as miniaturization continues to bring us many insect-sized drones that aren't noisy and may be almost undetectable visually. Privacy in your back garden, or in the bedroom with unclosed curtains, could disappear. They will make effective distributed weapons too:

Drones – it isn’t the Reapers and Predators you should worry about

Adverts don't tend to appear in the sky except on blimps, and those are rare visitors. A drone was used this week to drag a national flag over a football game. In the Batman films, Batman is occasionally summoned by shining a spotlight with a bat symbol onto the clouds. I forget which film used the moon to show an advert. It is possible via a range of technologies that adverts could soon be a feature of the sky, day and night, just like in Blade Runner. In the UK, we are now getting used to roadside ads, however unwelcome they were when they first arrived, though they haven't yet reached US proportions. It will be very sad if the sky is hijacked as an advertising platform too.

I think we’ll see some high altitude balloons being used for communications. A few companies are exploring that now. Solar powered planes are a competing solution to the same market.

As well as tiny drones, we might have bubbles. Kids make bubbles all the time, but they burst quickly. With graphene, a bubble could prevent the helium escaping, or it could even be filled with graphene foam; then it would float and stay up. We might have billions of tiny bubbles floating around with tiny cameras, microphones or other sensors. The cloud could be an actual cloud.
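The floating claim is easy to sanity-check: a single layer of graphene weighs about 0.77mg per square metre, so balancing the shell weight against the lift of the helium filling says anything bigger than a few microns across would indeed float. A quick check, using standard sea-level gas densities:

```python
import math

GRAPHENE_AREAL = 7.7e-7        # kg/m^2 per layer (~0.77 mg/m^2)
RHO_AIR, RHO_HE = 1.2, 0.18    # kg/m^3 at sea level

# Bubble floats when helium lift exceeds shell weight:
# (4/3)*pi*r^3*(rho_air - rho_he) > 4*pi*r^2*areal  =>  r > 3*areal/drho
r_min = 3 * GRAPHENE_AREAL / (RHO_AIR - RHO_HE)
print(f"single-layer bubble floats above r ~ {r_min * 1e6:.1f} microns")

r = 1e-3  # a 1 mm bubble
lift = (4 / 3) * math.pi * r**3 * (RHO_AIR - RHO_HE)
shell = 4 * math.pi * r**2 * GRAPHENE_AREAL
print(f"1 mm bubble spare lift: ~{(lift - shell) * 1e9:.1f} micrograms")
```

A millimetre bubble has a few micrograms of spare lift, enough for a speck of sensor electronics.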

And then there’s fairies. I wrote about fairies as the future of space travel.

Fairies will dominate space travel

They might have a useful role here too, and even if they don’t, they might still want to be here, useful or not.

As children, we used to call thistle seeds fairies; our mums thought it was cute. Biomimetics could use the same travel technique for yet another form of drone.

With all the quadcopter, micro-plane, bubble, balloon and thistle seed drones, the sky might soon be rather fuller than today. So maybe there is a guaranteed useful role for fairies, as drone police.


Ground up data is the next big data

This one sat in my draft folder since February, so I guess it’s time to finish it.

Big Data – I expect you're as sick of hearing that term as I am: gathering loads of data on everything that you, your company, or anything else you can access can detect, measure and record, then analyzing the hell out of it using data mining, an equally irritating term.

I long ago had a quick Twitter exchange with John Hewitt, who suggested: “What is sensing but the energy-constrained competition for transmission to memory, as memory is but that for expression?” Neurons compete to see who gets listened to. Yeah, but I am still not much wiser as to what sensing actually is. Maybe I need a brain upgrade. (It's like magnets. I used to be able to calculate the magnetic field densities around complicated shapes – it was part of my first job in missile design – and even though I could do all the equations of EM theory, even general relativity, I am still no wiser as to how a magnetic field actually becomes a force on an object. I have an office littered with hundreds of neodymium magnets, I spend hours playing with them, and I still don't understand.) I can read about neurons all day, but I still don't understand how a bunch of photons triggering a series of electro-chemical reactions results in me experiencing an image. How does the physical detection become a conscious experience?

Well, I wrote some while back that we could achieve a conscious computer within two years. It's still two years, because nobody has started using the right approach yet. I have to stress the 'could': nobody actually intends to do it in that time frame, but I really believe some half-decent lab could if they tried. (Putting that into perspective, Kurzweil and his gang at Google are looking at 2029.) That two-year estimate relies heavily on evolutionary development, for me the preferred option when you don't understand how something works, as is the case with consciousness. It is pretty easy to design conscious computers at a black-box level; the devil is in the detail. I argued that you could make a conscious computer by using internally focused sensing to detect processes inside the brain, and a sensor structure with a symmetrical feedback loop. Read it:

We could have a conscious machine by end-of-play 2015

In a nutshell, if you can feel thoughts in the same way as you feel external stimuli, you’d be conscious. I think. The symmetrical feedback loop bit is just a small engineering insight.
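For what it's worth, here is that architecture as a deliberately crude Python toy, certainly not a conscious machine, just the shape of the idea: two sensor banks, each fed the outside world and the other bank's activity through identical paths, so internal state is 'felt' exactly the way external stimulus is:

```python
import random

class SensorBank:
    """Toy only: each bank senses external input and another bank's
    activity through the same path, so 'thoughts' arrive like stimuli."""
    def __init__(self, size):
        self.activity = [0.0] * size

    def step(self, external, other):
        # identical weighting for outer world and inner state: the symmetry
        self.activity = [0.5 * e + 0.5 * o
                         for e, o in zip(external, other.activity)]

a, b = SensorBank(4), SensorBank(4)
for t in range(3):
    world = [random.random() for _ in range(4)]
    a.step(world, b)   # a feels the world, and b's state, identically
    b.step(world, a)
    print(t, [round(x, 2) for x in a.activity])
```

That shows the wiring symmetry only; the hard part, actual sensation, is exactly what is still missing.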

The missing link in that is still the same one: how does sensing work? How do you feel?

At a superficial level, you point a sensor at something and it produces a signal in some sort of relationship to whatever it is meant to sense. We can do that bit. We understand that. Your ear produces signals according to the frequencies and amplitudes of incoming sound waves, a bit like a microphone. Just the same so far. However, it is by some undefined processes later that you consciously experience the sound. How? That is the hard problem in AI. It isn’t just me that doesn’t know the answer. ‘How does red feel?’ is a more commonly used variant of the same question.

When we solve that, we will replace big data as 'the next big thing'. If we can make sensor systems that experience or feel something rather than just producing a signal, that's valuable already. If those sensors pool their shared experience, another similar sensor system could experience that. Basic data quickly transmutes into experience, knowledge, understanding, insight and, very quickly, value, lots of it. Artificial neural nets go some way towards this, but they still lack consciousness. Simulated neural networks don't get beyond straightforward computation, combining all the inputs in an equation; the true sensing bit is missing. The complex adaptive analog neural nets in our brains clearly achieve something deeper than a man-made neural network.

Meanwhile, most current AI work barks up a tree in a different forest. IBM’s Watson will do great things; Google’s search engine AI will too. But they aren’t conscious and can’t be. They’re just complicated programs running on digital processors, with absolutely zero awareness of anything they are doing. Digital programs on digital computers will never achieve any awareness, no matter how fast the chips are.

However, back in the biological realm, nature manages just fine. So biomimetics offers a lot of hope. We know we didn’t get from a pool of algae to humans in one go. At some point, organisms started moving according to light, chemical gradients, heat, touch. That most basic process of sensing may have started out coupled to internal processes that caused movement without any consciousness. But if we can understand the analog processes (electrochemical, electronic, mechanical) that take the stimulus through to a response, and can replicate it using our electronic technology, we would already have actuator circuits, even if we don’t have any form of sensation or consciousness yet. A great deal of this science has been done already of course. The computational side of most chemical and physical processes can be emulated electronically by some means or another. Actuators will be a very valuable part of the cloud, but we already have the ability to make actuators by more conventional means, so doing it organically or biomimetically just adds more actuation techniques to the portfolio. Valuable but not a terribly important breakthrough.

Looking at the system a bit further along the evolutionary timeline, where eyes start to develop, where the most primitive nervous systems and brains start, where higher-level processing is obviously occurring and inputs are starting to become sensations, we should be able to see what is changed or changing. It is the emergence of sensation we need to identify, even if the reaction is still an unconscious reflex. We don't need to reverse engineer the human brain. Simple organisms are simpler to understand. Feeding the architectural insights we gain from studying those primitive systems into our guided evolution engines is likely to be a far faster means of generating true machine consciousness and strong AI. That's how we could develop consciousness in a couple of years rather than 15.

If we can make primitive sensing devices that work like those in primitive organisms, and that respond to specific sorts of sensory input, then that is a potential way of increasing the coverage of cloud sensing and even actuation. It would effectively be a highly distributed direct response system. With clever embedding of emergent phenomena techniques such as cellular automata and flocking (see the throwaway sketch after this paragraph), it could be quite a sophisticated way of responding to complex distributed inputs, avoiding some of the need for big data processing. If we can gather the outputs from these simple sensors and feed them into others, that will be an even better sort of biomimetic response system. That sort of direct experience of a situation is very different from a data-mined result, especially if actuation capability is there too. The philosophical question of whether including that second bank of sensors makes the system in any way conscious remains, but it would certainly be very useful and valuable. The architecture we end up with via this approach may look like neurons, and could even be synthetic neurons, but that may be only one solution among many. Biology went the neuron route, but that doesn't necessarily mean it is the only possibility. It may be that we could one day genetically modify bacteria to produce their own organic electronics to emulate the key processes needed to generate sensation, and to power themselves by consuming nutrients from their environment. I suggested smart yogurt based on this idea many years ago, and believe it could achieve vast levels of intelligence.
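As a tiny illustration of the emergent, non-big-data style of response I mean, here is a throwaway cellular automaton: a ring of dumb sensor cells, each reacting only to its own reading and its immediate neighbours, with thresholds invented purely for the demo:

```python
import random

# A ring of 20 dumb sensor cells. Each fires on a strong local reading, or
# on a moderate reading if a neighbour fired last step. Thresholds invented.
N = 20
state = [0] * N
for step in range(5):
    reading = [random.random() for _ in range(N)]   # local sensor inputs
    state = [1 if (reading[i] > 0.8 or
                   (state[i - 1] + state[(i + 1) % N] >= 1
                    and reading[i] > 0.4))
             else 0
             for i in range(N)]
    print("".join("#" if s else "." for s in state))
```

Activity spreads along the ring from strong local stimuli with no central processor in sight, which is the whole point.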

Digitizing and collecting the signals from the system at each stage would generate lots of data, and that could be used by programs to derive other kinds of results, or to relay the inputs to other analog sensory systems elsewhere. (It isn't always necessary to digitize signals to transmit them, but doing so limits signal degradation, becomes important if the signal must travel far, and is essential if it is to be recorded for later use or time-shifting.) However, I strongly suspect that most of the value in analog sensing and direct response is local, coupled to direct action or local processing and storage.

If we have these sorts of sensors liberally spread around, we'd create a truly smart environment, with local sensing and some basic intelligence able to relay sensation remotely to other banks of sensors for further processing or even, ultimately, consciousness. The local sensors could be relatively dumb, like nerve endings in our skin, feeding signals into a more connected virtual nervous system, or a bit smarter, like retinal neurons, doing a lot of analog pre-processing before relaying signals via ganglion cells, maybe into part of a virtual brain. If they are also capable of, or connected to, some sort of actuation, then we would be constructing a kind of virtual organism, with tendrils covering potentially the whole globe, able to sense and interact with its environment in an intelligent way.

I use the term virtual not because the sensors wouldn’t be real, but because their electronic nature allows connectivity to many systems, overlapping, hierarchical or distinct. Any number of higher level systems could ‘experience’ them as part of its system, rather as if your fingers could be felt by the entire human population. Multiple higher level virtual organisms could share the same basic sensory/data inputs. That gives us a whole different kind of cloud sensing.

By doing processing locally, in the analog domain, and dealing with some of the response locally, a lot of network traffic and remote processing is avoided. Any post-processing that does occur therefore builds on a higher-level foundation. A nice side effect of avoiding all the extra transmission and processing is increased environmental friendliness.

So, we’d have a quite different sort of data network, collecting higher quality data, essentially doing by instinct what data mining does with huge server farms and armies of programmers. Cloudy, but much smarter than a straightforward sensor net.

… I think.

It isn’t without risk though. I had a phone discussion yesterday on the dangers of this kind of network. In brief, it’s dangerous.