
How to make a Spiderman-style graphene silk thrower for emergency services

I quite like Spiderman movies, and having the ability to fire a web at a distant object or villain has its appeal. Since he fires the web from his forearm, the thrower must be light enough to wear and to handle the recoil, and since the thread has to hold his weight while he swings, it needs extremely strong fibres. It is therefore pretty obvious that the material of choice when we build such a thing will be graphene, which is even stronger than spider silk (though I suppose a chemical ejection device making real spider silk might work too). A thin graphene thread is sufficient to hold him as he swings, so plenty of it could fit inside a manageable capsule.

So how to eject it?

One way I suggested for making graphene threads is to 3D print the graphene, using print nozzles made of carbon nanotubes. Very high-speed modulation would space the carbon atoms precisely so that they emerge in the right physical pattern, and each atom would be given a positive or negative charge as it leaves its nozzle, so that the atoms are drawn together and bond into graphene. The illustration below shows the idea looking at the nozzles end-on, though it shows only part of the array:

printing graphene filaments

It doesn't properly show that the nozzles are angled towards each other, or that the atoms are ejected in precisely phased patterns, but both are needed: the atoms otherwise emerge too far apart to form graphene, so they must be ejected at the right speeds, in the right directions, with the right charges, at the right times. Get all of that right, and a graphene filament results. The nozzle arrangement, the geometry and the size of a carbon atom dictate that each nozzle can only produce a narrow filament, but the filaments from many nozzles can be intertwined as they emerge from the spinneret, giving a graphene thread made of many filaments. Carbon nanotubes can be arranged in the right way and at the right angles, so provided we can get the high-speed modulation and spacing right, it ought to be feasible. Not easy, but possible. Then again, Spiderman isn't real yet either.
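To make the phasing a little more concrete, here is a deliberately toy Python sketch of a firing schedule: rows of atoms leave at a fixed period, odd rows are offset by half a period to land on the staggered sites of a hexagonal lattice, and charge signs alternate. The timings, the charge rule and the 0.2nm row period are illustrative assumptions, not a device specification.

```python
SPEED = 100.0                  # m/s, target filament ejection speed (see below)
ROW_PERIOD = 0.2e-9 / SPEED    # s between successive rows of atoms

def schedule(nozzle, row):
    """Ejection time and charge sign for one atom (toy model).

    Odd rows are offset by half a period so atoms land on the staggered
    sites of a hexagonal lattice; alternating charges draw neighbouring
    atoms together so they bond on arrival.
    """
    offset = ROW_PERIOD / 2 if row % 2 else 0.0
    charge = 1 if (nozzle + row) % 2 == 0 else -1
    return row * ROW_PERIOD + offset, charge

for row in range(3):
    print([schedule(n, row) for n in range(4)])
```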

The ejection device would therefore be a specially fabricated 3D print head maybe a square centimeter in area, backed by a capsule containing finely powdered graphite that could be vaporized to make the carbon atom stream through the nozzles. Some nice lasers might be good there, and some cool looking electronic add-ons to do the phasing and charging. You could make this into one heck of a cool gun.

How thick a thread do we need?

Assuming a 70kg (154lb) man and 2g acceleration during the swing, we need about 150kg breaking strain to leave a small safety margin, bearing in mind that if the thread breaks, you can simply fire a new one. Steel can achieve that with 1.5mm thick wire, but graphene's tensile strength is around 300 times that of steel, so the same strength needs only 1/300th of the cross-section: about 0.09mm thick is enough. 90 microns, or to put it another way, roughly 120 denier, although that is a very quick guess. That means roughly the same sort of graphene thread thickness is needed to support our Spiderman as the nylon used to make your backpack. It also means you could eject well over 10km of thread from a 200g capsule, plenty. Happy to revise my numbers if you have better ones. Google can be a pain!
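Since I'm happy for the numbers to be revised, here is the arithmetic as a short Python check, so you can plug in your own figures. The 1.5mm steel wire and the 300× strength ratio are the post's own inputs; the graphene density (about that of graphite) is my added assumption.

```python
import math

steel_d = 1.5e-3                 # m: steel wire with ~150 kg breaking strain
ratio = 300                      # graphene vs steel tensile strength (rough)
d = steel_d / math.sqrt(ratio)   # same strength from 1/300th the cross-section

density = 2200                   # kg/m^3, assumed (close to graphite)
area = math.pi * (d / 2) ** 2
kg_per_m = density * area

print(f"thread diameter: {d * 1e6:.0f} microns")           # ~87 microns
print(f"linear density:  {kg_per_m * 9e6:.0f} denier")     # ~120 denier
print(f"200g capsule:    {0.2 / kg_per_m / 1000:.1f} km")  # ~15 km of thread
```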

How fast could the thread be ejected?

Let's face it, if it could only manage 5cm/s it would be about as much use as a chocolate flamethrower. Each bond in graphene is about 1.4 angstroms long, so the lattice repeats roughly every 0.2nm along a filament. We'd want the filament to eject at around 100m/s, about the speed of a crossbow bolt. 100m/s divided by 0.2nm means 5 x 10^11 carbon atoms ejected per second from each nozzle, in staggered phasing. So, half a terahertz: demanding, but within reach of the fastest electronics we can build. And if we can do better, we can shoot even faster.
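The same sum in a few lines of Python, for anyone who wants to try other speeds or lattice spacings:

```python
speed = 100.0          # m/s, target filament speed
period = 0.2e-9        # m, approximate lattice repeat along the filament
rate = speed / period  # atom rows per second from each nozzle

print(f"{rate:.1e} atoms/s per nozzle -> {rate / 1e12:.2f} THz modulation")
```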

We could therefore soon have a graphene filament ejection device that behaves much like Spiderman’s silk throwers. It needs some better engineers than me to build it, but there are plenty of them around.

Having such a device would be fun for sports, allowing climbers to scale vertical rock faces and overhangs quickly, or to make daring leaps and trust the device to save them from certain death. It would also have military and police uses. It might even help prevent road accidents, yanking pedestrians out of danger or tethering cars to bring them to a stop more quickly. In fact, all the emergency services would have uses for such devices, and they could reduce accidents and deaths. I feel confident that Spiderman would think of many more exciting uses too.

Producing graphene silk at 100m/s might also be pretty useful in just about every other manufacturing industry. With ultra-fine, high-strength yarns produced at those speeds, it could revolutionize the fashion industry too.

The future of make-up

I was digging through some old 2002 PowerPoint slides for an article on active skin and stumbled across probably the worst illustration I have ever done, though in my defense, I was documenting a great many ideas that day and spent only a few minutes on it:

smart makeup

If a woman ever looks like this, and isn't impersonating a bald Frenchman, she has more problems to worry about than her make-up. The pic does, however, convey the basic principle, and that's all a technical description needs. The idea is that her face can be electronically demarcated into various make-up regions, and the make-up in each region can then adopt the appropriate colour. In the pic, 'nanosomes' wasn't a serious name, but a sarcastic take on a cosmetics industry that loves to borrow scientific-sounding words and invent new ones to make its products sound more high-tech than they actually are. Nanotech could certainly play a role, but since the eye can't discern features smaller than about 0.1mm at normal viewing distance, it isn't essential. This is no longer just an idea: companies are now working on smart make-up, and we already have prototype electronic tattoos, one of the layers I used for my active skin, again based on an earlier vision.

The original idea didn’t use electronics, but simply used self-organisation tech I’d designed in 1993 on an electronic DNA project. Either way would work, but the makeup would be different for each.

The electronic layer, if required, would most likely be printed onto the skin at a beauty salon. It would be totally painless, would last for weeks, and would take only a few minutes to print. It extends the IoT to the face.

Both mechanisms could use make-up containing flat plates that create colour by diffraction, the same way the scales on a butterfly do. That would make an excellent colour palette. Beetles produce colour a different way, and that would work too; or we could copy squid or cuttlefish. Nature has given us many excellent starting points for biomimetics, and indeed the self-organisation principles were borrowed from nature too: nature used hormone gradients to help your cells differentiate when you were an embryo. If nature can arrange the rich microscopic detail of every part of your face, then similar techniques can certainly manage a simple surface layer of make-up. An electronic underlay makes self-organisation easier, but it isn't essential: there are many ways to implement self-organisation in make-up, only some of them need any electronics at all, and some of those would use electronic particles embedded in the make-up rather than an underlay.
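As a rough illustration of how flat plates could set colour: the simplest structural-colour mechanism is thin-film interference, a close cousin of the gratings on butterfly scales. The sketch below assumes a chitin-like refractive index and a 200nm plate, both invented figures, and finds the visible reflection peak.

```python
# At normal incidence, a free-standing thin film reflects strongly when
# 2*n*d = (m + 1/2) * wavelength (the 1/2 is the phase flip at the top surface).
n = 1.56      # refractive index, chitin-like (assumed)
d = 200e-9    # plate thickness in metres (assumed)

for m in range(4):
    wl = 2 * n * d / (m + 0.5)
    if 380e-9 <= wl <= 750e-9:
        print(f"m={m}: strong reflection near {wl * 1e9:.0f} nm")  # ~416 nm, blue
```

Tuning the plate thickness moves the peak, which is exactly the knob the make-up would need.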

An electronic underlay can also provide the energy for a transition, which would let the make-up change colour on command. In principle, a woman could apply the make-up all over her face, touch a button on her digital mirror (which might simply be a tablet or smartphone), and the make-up would instantly change to match the picture she selected. With suitable power, the make-up could become a full refresh-rate video display, and we might see teenagers walking future streets in kaleidoscopic make-up that shows garish cartoon expressions and animates their emoticons. More mature women might choose different appearances for different situations, selected manually via an app or gesture, or automatically by predetermined location settings.
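A hypothetical sketch of the digital-mirror side of that: presets map the demarcated face regions to colours, and one tap pushes a whole look to the underlay. The region names, the MakeupUnderlay class and its set_region() call are all invented for illustration.

```python
PRESETS = {
    "office":  {"lips": "#B05060", "cheeks": "#E8C4B0", "eyelids": "#C8A888"},
    "evening": {"lips": "#8B0000", "cheeks": "#D89090", "eyelids": "#483D8B"},
}

class MakeupUnderlay:
    """Stand-in for the printed electronic layer on the skin."""
    def set_region(self, region: str, colour: str) -> None:
        print(f"{region} -> {colour}")    # would drive the real pixels

def apply_look(underlay: MakeupUnderlay, preset: str) -> None:
    for region, colour in PRESETS[preset].items():
        underlay.set_region(region, colour)

apply_look(MakeupUnderlay(), "evening")
```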

Obviously, make-up is mostly used on the face, but once it becomes the basis of a smear-on computer display, it could be used on any part of the body as a full touch-sensitive display area, e.g. the forearm.

Although some men already wear make-up, many more might use smart make-up, since its techie nature makes it more acceptable.

The future of washing machines

Ultrasonic washing ball

For millennia, people washed clothes by stirring, hitting, squeezing and generally agitating them in rivers or buckets of water. The basic mechanism is to loosen dirt particles and use the water to wash them away or dissolve them.

Mostly, washing machines just automate the same process, agitating clothes in water to remove dirt from the fabric. Most use detergent to help free the dirt particles, but more recently some use ultrasound to create micro-cavitation bubbles; when the bubbles collapse, the shock waves help release the particles. That means the machines can clean at lower temperatures with little or no detergent.

It occurred to me that we don't really need the machine to tumble the clothes. A ball about the size of a grapefruit could contain batteries and a set of ultrasonic transducers, and could simply be chucked into a bucket with the clothes to create the bubbles and do the cleaning. Some basic engineering is needed to make it work, but it is entirely feasible.

One of the problems is that ultrasound doesn’t penetrate very far. To solve that, two mechanisms can be used in parallel. One is to let the ball roam around the clothes, and that could be done by changing its density by means of a swim bladder and using gravity to move it up and down, or maybe by adding a few simple paddles or cilia so it can move like a bacterium or by changing its shape so that as it moves up and down, it also moves sideways. The second mechanism is to use phased array ultrasonic transducers so that the beams can be steered and interfere constructively, thereby focusing energy and micro-cavitation generation around the bucket in a chosen pattern.
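A minimal sketch of that second mechanism, assuming water as the medium: each transducer fires with a delay that equalizes its travel time to a chosen focus, so the wavefronts arrive together and interfere constructively there. The geometry is invented for illustration.

```python
import math

SOUND = 1480.0   # m/s, speed of sound in water (assumed medium)

def focus_delays(elements, focus):
    """Per-element firing delays so all wavefronts reach `focus` together."""
    dists = [math.dist(e, focus) for e in elements]
    far = max(dists)
    return [(far - d) / SOUND for d in dists]

# Eight transducers around the equator of a 5 cm-radius ball,
# focusing on a point 15 cm from the ball's centre.
ring = [(0.05 * math.cos(i * math.pi / 4), 0.05 * math.sin(i * math.pi / 4), 0.0)
        for i in range(8)]
for i, t in enumerate(focus_delays(ring, (0.0, 0.15, 0.0))):
    print(f"element {i}: fire after {t * 1e6:.1f} us")
```

Steering the focus around the bucket is then just a matter of recomputing the delays for each new focal point.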

Such a ball could be made much more cheaply than a full-sized washing machine, making it ideal for developing countries. Transducers are cheap, and the software to drive them and steer the beams is easy enough, and free to replicate once developed.

It would contain a rechargeable battery that could use a simple solar panel charging unit (which obviously could be used to generate power for other purposes too).

Such a device could bring cheap washing machine capability to millions of people who can't afford a full-sized washing machine or who are not connected to an electricity supply. It would save time, water and a great deal of drudgery at low expense.


Stimulative technology

You are probably sick of reading about disruptive technology; I am, anyway. When a technology changes many areas of life and business dramatically, it gets labelled disruptive, and disruption was the business strategy buzzword of the last decade. Great news though: the primarily disruptive phase of IT is rapidly giving way to a more stimulative phase, where it still changes things, but more creatively. Disruption hasn't stopped; it just won't be the headline effect any more. Stimulation will be. And it isn't just IT that is changing, but materials and biotech too.

Stimulative technology creates new areas of business, new industries, new areas of lifestyle. It isn’t new per se. The invention of the wheel is an excellent example. It destroyed a cave industry based on log rolling, and doubtless a few cavemen had to retrain from their carrying or log-rolling careers.

I won’t waffle on for ages here, I don’t need to. The internet of things, digital jewelry, active skin, AI, neural chips, storage and processing that is physically tiny but with huge capacity, dirt cheap displays, lighting, local 3D mapping and location, 3D printing, far-reach inductive powering, virtual and augmented reality, smart drugs and delivery systems, drones, new super-materials such as graphene and molybdenene, spray-on solar … The list carries on and on. These are all developing very, very quickly now, and are all capable of stimulating entire new industries and revolutionizing lifestyle and the way we do business. They will certainly disrupt, but they will stimulate even more. Some jobs will be wiped out, but more will be created. Pretty much everything will be affected hugely, but mostly beneficially and creatively. The economy will grow faster, there will be many beneficial effects across the board, including the arts and social development as well as manufacturing industry, other commerce and politics. Overall, we will live better lives as a result.

So, you read it here first. Stimulative technology is the next disruptive technology.


The future of drones – predators. No, not that one.

It is a sad fact of life that companies keep using the most useful terminology for things that don't deserve it. Take the Apple Retina display, which makes it harder to find a suitable name for displays that project directly onto the retina: why can't those be the ones called retina displays? Or the LED TV, where the LEDs are typically just back-lighting for an LCD panel, which makes it hard to name TVs where each pixel actually is an LED. Or the Predator drone, which is definitely not the topic of this blog, in which I will talk about predator drones that attack other drones.

I have written several times now on the dangers of drones. My most recent scare was realizing the potential for small drones carrying high-powered lasers, using cloud-based face recognition to identify valuable targets in a crowd and blind them, with something like a Raspberry Pi as the main controller. All of that could be done tomorrow with components easily bought on the net. A while ago I blogged that the Predators and Reapers are not the ones you need to worry about so much as the little ones that can attack you in swarms.

This morning I was again considering terrorist uses for the micro-drones we're now seeing. A 5cm drone with a networked camera and control could carry a needle contaminated with Ebola or HIV, or a drop of nerve toxin. A small swarm of tiny drones, each with a gram of explosive that detonates when it collides with a forehead, could kill as many people as a bomb.

We will soon have to defend against terrorist drones, and since the tiniest drones give the most terror per dollar, they are the most likely threat. The solution is quite simple, and nature solved it a long time ago. Mosquitoes and flies in my back garden get eaten by a range of predators: frogs might get them if they come too close to the pond surface, but in the air, dragonflies are expert at catching them, and bats are good too. So to deal with threats from tiny drones, we could use predator drones to seek and destroy them. For bigger drones we'd need bigger predators, and for very big ones conventional anti-aircraft weapons become useful. In most cases, catching them in nets would work well; nets are very effective against rotors, they don't need a very sophisticated control system, and if the net is held a reasonable distance from the predator, an exploding micro-drone won't destroy it. With slightly more precise control, spraying solidifying foam onto the target drone could also immobilize it, and some foams could help disperse small explosions or contain lethal payloads. Spiders provide inspiration here too: many species wrap their victims in silk to immobilize them, and a single predator could catch and immobilize many victims. Such a defense system ought to be feasible.

The main problem remains. What do we call predator drones now that the most useful name has been trademarked for a particular model?


The future of sky

The S installment of this ‘future of’ series. I have done streets, shopping, superstores, sticks, surveillance, skyscrapers, security, space, sports, space travel and sex before, some several times. I haven’t done sky before, so here we go.

Today when you look up during the day you typically see various weather features, the sun, maybe the moon, a few birds, insects or bats, maybe some dandelion or thistle seeds. As night falls, stars, planets, seasonal shooting stars and the occasional comet may appear. To those we can add human contributions such as planes, microlights, gliders, helicopters, drones, occasional hot air balloons and blimps, helium party balloons and kites, and at night, satellites, sometimes the space station, maybe fireworks. In some places, missiles and rockets may be unfortunate extras too, as might the occasional parachutist, wing-suit flyer or hang-glider. I guess we should add occasional space launches and returns as well. I can't think of any more, but I might have missed some.

Drones are the most recent addition and their numbers will increase quickly, mostly for surveillance purposes. When I sit out in the garden, since we live in a quiet area, the noise from occasional microlights and small planes is especially irritating because they fly low. I am concerned that most discussions on drones don't mention the potential noise nuisance. With nothing between them and the ground, sound travels well, and although some drones are reasonably quiet, others might not be, and the noise adds up. Surveillance, spying and prying will become the biggest nuisances though, especially as miniaturization brings us many insect-sized drones that aren't noisy and may be almost undetectable visually. Privacy in your back garden, or in a bedroom with the curtains open, could disappear. They will make effective distributed weapons too:

https://timeguide.wordpress.com/2014/07/07/drones-it-isnt-the-reapers-and-predators-you-should-worry-about/

Adverts don't tend to appear in the sky except on blimps, and those are rare visitors. A drone was used this week to drag a national flag over a football game. In the Batman films, Batman is occasionally summoned by shining a bat-symbol spotlight onto the clouds, and I forget which film used the moon to show an advert. Through a range of technologies, adverts could soon be a feature of the sky, day and night, just like in Blade Runner. In the UK we are now getting used to roadside ads, however unwelcome they were when they first arrived, though they haven't yet reached US proportions. It will be very sad if the sky is hijacked as an advertising platform too.

I think we’ll see some high altitude balloons being used for communications. A few companies are exploring that now. Solar powered planes are a competing solution to the same market.

As well as tiny drones, we might have bubbles. Kids make bubbles all the time, but they burst quickly. With a graphene skin, a bubble could stop its helium escaping, or it could even be filled with graphene foam, and then it would float and stay afloat. We might have billions of tiny bubbles floating around with tiny cameras, microphones or other sensors. The cloud could be an actual cloud.
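A quick feasibility check, using the standard areal density of monolayer graphene: the shell's mass grows with the square of the radius but the helium's lift with the cube, so there is a minimum radius above which such a bubble floats. The single-layer skin is the idealizing assumption here.

```python
# Can helium buoyancy carry a graphene skin?
rho_air, rho_he = 1.2, 0.17   # kg/m^3 at room conditions
sigma = 7.7e-7                # kg/m^2, areal density of monolayer graphene

# Lift = (rho_air - rho_he) * (4/3)*pi*r^3, shell mass = sigma * 4*pi*r^2,
# so the bubble floats once r > 3*sigma / (rho_air - rho_he):
r_min = 3 * sigma / (rho_air - rho_he)
print(f"floats for radius above {r_min * 1e6:.1f} microns")   # ~2.2 microns
```

Even microscopic bubbles could float, at least in this idealized case, which is encouraging for the sensor-cloud idea.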

And then there are fairies. I wrote about fairies as the future of space travel:

https://timeguide.wordpress.com/2014/06/06/fairies-will-dominate-space-travel/

They might have a useful role here too, and even if they don't, they might still want to be here.

As children we used to call thistle seeds fairies; our mums thought it was cute to call them that. Biomimetics could borrow that same travel technique for yet another form of drone.

With all the quadcopter, micro-plane, bubble, balloon and thistle-seed drones, the sky might soon be rather fuller than it is today. So maybe there is a guaranteed useful role for fairies: as drone police.


Ground up data is the next big data

This one sat in my draft folder since February, so I guess it’s time to finish it.

Big Data – I expect you're as sick of hearing that term as I am: gathering loads of data on everything that you, your company, or anything else you can access can detect, measure or record, then analyzing the hell out of it using data mining, an equally irritating term.

I long ago had a quick Twitter exchange with John Hewitt, who suggested: "What is sensing but the energy-constrained competition for transmission to memory, as memory is but that for expression?". Neurons compete to see who gets listened to. Yeah, but I am still not much wiser as to what sensing actually is. Maybe I need a brain upgrade. (It's like magnets. I used to calculate the magnetic field densities around complicated shaped objects – it was part of my first job, in missile design – and even though I could do all the equations of EM theory, even general relativity, I am still no wiser how a magnetic field actually becomes a force on an object. I have an office littered with hundreds of neodymium magnets, I spend hours playing with them, and I still don't understand.) I can read about neurons all day, but I still don't understand how a bunch of photons triggering a series of electrochemical reactions results in me experiencing an image. How does physical detection become conscious experience?

Well, I wrote some while back that we could achieve a conscious computer within two years. It's still two years, because nobody has started using the right approach yet. I have to stress the 'could': nobody actually intends to do it in that time frame, but I really believe a half-decent lab could if they tried. (Putting that into perspective, Kurzweil and his gang at Google are looking at 2029.) That two-year estimate relies heavily on evolutionary development, for me the preferred option when you don't understand how something works, as is the case with consciousness. It is pretty easy to design conscious computers at a black-box level; the devil is in the detail. I argued that you could make a conscious computer by using internally focused sensing to detect processes inside the brain, and using a sensor structure with a symmetrical feedback loop. Read it:

https://timeguide.wordpress.com/2013/12/28/we-could-have-a-conscious-machine-by-end-of-play-2015/

In a nutshell, if you can feel thoughts in the same way as you feel external stimuli, you’d be conscious. I think. The symmetrical feedback loop bit is just a small engineering insight.
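Purely to make that wiring concrete, here is a toy Python sketch: the same sensing pathway handles external stimuli and the system's own internal state, fed back symmetrically. It illustrates the loop and nothing more; all the numbers are arbitrary and it makes no claim to consciousness.

```python
class SymmetricSensor:
    def __init__(self, size: int = 4):
        self.state = [0.0] * size

    def sense(self, signal):
        """One pathway for everything: outputs feed back in as inputs."""
        self.state = [0.9 * s + 0.1 * x for s, x in zip(self.state, signal)]
        return self.state

s = SymmetricSensor()
out = s.sense([1.0, 0.5, 0.0, 0.2])   # sensing the world
out = s.sense(out)                    # 'feeling' its own response
print(out)
```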

The missing link in that is still the same one: how does sensing work? How do you feel?

At a superficial level, you point a sensor at something and it produces a signal bearing some relationship to whatever it is meant to sense. We can do that bit; we understand it. Your ear produces signals according to the frequencies and amplitudes of incoming sound waves, much as a microphone does. So far, nothing mysterious. However, it is by some undefined later processes that you consciously experience the sound. How? That is the hard problem in AI, and it isn't just me who doesn't know the answer. 'How does red feel?' is a more commonly used variant of the same question.

When we solve that, it will replace big data as 'the next big thing'. If we can make sensor systems that experience or feel something rather than just producing a signal, that is valuable in itself. If those sensors pool their shared experience, another similar sensor system could experience that too. Basic data quickly transmutes into experience, knowledge, understanding and insight, and very quickly into value, lots of it. Artificial neural nets go some way towards this, but they still lack consciousness: a simulated neural network never gets beyond a fairly straightforward computation, feeding all the inputs into an equation. The true sensing bit is missing. The complex adaptive analog neural nets in our brains clearly achieve something deeper than a man-made neural network.

Meanwhile, most current AI work barks up a tree in a different forest. IBM’s Watson will do great things; Google’s search engine AI will too. But they aren’t conscious and can’t be. They’re just complicated programs running on digital processors, with absolutely zero awareness of anything they are doing. Digital programs on digital computers will never achieve any awareness, no matter how fast the chips are.

However, back in the biological realm, nature manages just fine, so biomimetics offers a lot of hope. We know we didn't get from a pool of algae to humans in one go. At some point, organisms started moving in response to light, chemical gradients, heat and touch. That most basic process of sensing may have started out coupled to internal processes that caused movement without any consciousness at all. But if we can understand the analog processes (electrochemical, electronic, mechanical) that take a stimulus through to a response, and can replicate them with our electronic technology, we would already have actuator circuits, even without any form of sensation or consciousness. A great deal of this science has already been done, of course. The computational side of most chemical and physical processes can be emulated electronically by some means or another. Actuators will be a very valuable part of the cloud, but we can already make actuators by more conventional means, so doing it organically or biomimetically just adds more actuation techniques to the portfolio. Valuable, but not a terribly important breakthrough.

Looking a bit further along the evolutionary timeline, where eyes start to develop, where the most primitive nervous systems and brains appear, where higher-level processing is obviously occurring and inputs are starting to become sensations, we should be able to see what changes. It is the emergence of sensation we need to identify, even if the reaction is still an unconscious reflex. We don't need to reverse engineer the human brain; simple organisms are simpler to understand. Feeding the architectural insights we gain from studying those primitive systems into our guided evolution engines is likely to be a far faster route to true machine consciousness and strong AI. That's how we could develop consciousness in a couple of years rather than fifteen.

If we can make primitive sensing devices that work like those in primitive organisms, and that respond to specific sorts of sensory input, then that is a potential way of extending the coverage of cloud sensing and even actuation. It would effectively be a highly distributed direct-response system. With clever embedding of emergent-phenomena techniques (such as cellular automata or flocking), it could respond in quite sophisticated ways to quite complex distributed inputs, avoiding some of the need for big data processing; a toy example follows below. If we can gather the outputs from these simple sensors and feed them into others, that becomes an even better sort of biomimetic response system. Such direct experience of a situation is very different from a data-mined result, especially if actuation capability is there too. The philosophical question of whether including that second bank of sensors makes the system in any way conscious remains, but it would certainly be very useful and valuable. The architecture we end up with via this approach may look like neurons, and could even use synthetic neurons, but that may be only one solution among many. Biology went the neuron route, but that doesn't necessarily mean it is the only possibility. We might one day genetically modify bacteria to produce their own organic electronics to emulate the key processes needed to generate sensation, and to power themselves by consuming nutrients from their environment. I suggested smart yogurt based on this idea many years ago, and believe it could achieve vast levels of intelligence.
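Here is that toy example: a one-dimensional cellular automaton in which each sensor cell reacts only to itself and its immediate neighbours, so a distributed response pattern emerges with no central processing. The spread rule is an arbitrary example, not a proposal for any specific system.

```python
import random

cells = [random.random() < 0.1 for _ in range(60)]   # raw local sensor hits

def step(cells):
    """Each cell responds only to its own state and its neighbours'."""
    return [cells[i] or sum(cells[max(0, i - 1):i + 2]) >= 2
            for i in range(len(cells))]

for _ in range(5):                    # a few local update rounds
    cells = step(cells)
print("".join("#" if c else "." for c in cells))
```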

Digitizing and collecting the signals from such a system at each stage would generate lots of data, which could be used by programs to derive other kinds of results, or to relay the inputs to other analog sensory systems elsewhere. (It isn't always necessary to digitize signals to transmit them, but it helps limit signal degradation, quickly becomes important if the signal must travel far, and is essential if it is to be recorded for later use or time-shifting.) However, I strongly suspect that most of the value in analog sensing and direct response is local, coupled to direct action or local processing and storage.

If we had these sorts of sensors liberally spread around, we'd create a truly smart environment, with local sensing and some basic intelligence able to relay sensation to other banks of sensors elsewhere for further processing or, ultimately, consciousness. The local sensors could be relatively dumb, like the nerve endings in our skin, feeding signals into a more connected virtual nervous system, or a bit smarter, like retinal neurons, doing a lot of analog pre-processing before relaying signals via ganglion cells, perhaps as part of a virtual brain. If they are also capable of, or connected to, some sort of actuation, then we would be constructing a kind of virtual organism, with tendrils covering potentially the whole globe, able to sense and interact with its environment intelligently.

I use the term virtual not because the sensors wouldn’t be real, but because their electronic nature allows connectivity to many systems, overlapping, hierarchical or distinct. Any number of higher level systems could ‘experience’ them as part of its system, rather as if your fingers could be felt by the entire human population. Multiple higher level virtual organisms could share the same basic sensory/data inputs. That gives us a whole different kind of cloud sensing.

By processing locally, in the analog domain, and dealing with some of the response locally, a lot of network traffic and remote processing is avoided. Any post-processing that does occur therefore builds on a higher-level foundation. A nice side effect of avoiding all that extra transmission and processing is increased environmental friendliness.

So, we’d have a quite different sort of data network, collecting higher quality data, essentially doing by instinct what data mining does with huge server farms and armies of programmers. Cloudy, but much smarter than a straightforward sensor net.

… I think.

It isn’t without risk though. I had a phone discussion yesterday on the dangers of this kind of network. In brief, it’s dangerous.