Tag Archives: biomimetics

The future of drones – predators. No, not that one.

It is a sad fact of life that companies keep claiming the most useful terminology for things that don’t deserve it. Take the Apple Retina display, which makes it harder to find a suitable name for direct retinal displays, the ones that actually project images onto the retina. Why can’t those be the ones called retina displays? Or the LED TV, where the LEDs are typically just back-lighting for an LCD panel, which makes it hard to name TVs where each pixel actually is an LED. Or the Predator drone, which is definitely not the topic of this blog, in which I will talk about predator drones that attack other drones.

I have written several times now on the dangers of drones. My most recent scare was realizing the potential for small drones carrying high-powered lasers, using cloud-based face recognition to identify valuable targets in a crowd and blind them, with something like a Raspberry Pi as the main controller. All of that could be done tomorrow with components easily bought on the net. A while ago I blogged that the Predators and Reapers are not the ones you need to worry about so much as the little ones, which can attack you in swarms.

This morning I was again considering terrorist uses for the micro-drones we’re now seeing. A 5 cm drone with a networked camera and control could carry a needle infected with Ebola or HIV, or coated with a drop of nerve toxin. A small swarm of tiny drones, each with a gram of explosive that detonates when it collides with a forehead, could kill as many people as a bomb.

We will soon have to defend against terrorist drones, and since the tiniest drones give the most terror per dollar, they are the most likely threat. The solution is quite simple, and nature solved it a long time ago. Mosquitoes and flies in my back garden get eaten by a range of predators. Frogs might get them if they come too close to the surface, but in the air, dragonflies are expert at catching them. Bats are good too. So to deal with threats from tiny drones, we could use predator drones to seek and destroy them. For bigger drones we’d need bigger predators, and for very big ones, conventional anti-aircraft weapons become useful.

In most cases, catching them in nets would work well, since nets are very effective against rotors. Nets don’t need such sophisticated control systems, and if the net is held a reasonable distance from the predator, the predator won’t be destroyed if the micro-drone explodes. With a little more precise control, spraying solidifying foam onto the target drone could also immobilize it, and some foams could help disperse small explosions or contain lethal payloads. Spiders provide inspiration here too: many species wrap their victims in silk to immobilize them. A single predator could catch and immobilize many victims. Such a defense system ought to be feasible.
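
As a minimal sketch of the guidance side, here is a toy pure-pursuit chase in Python: the predator steers straight at the target’s current position until it closes to net range. The speeds, starting positions and the 1 m ‘net range’ are all invented for illustration, and a real interceptor would need proper target tracking and a smarter guidance law.

```python
import numpy as np

def pure_pursuit_step(predator, target, speed, dt=0.1):
    """Move the predator one time step straight towards the target's
    current position (the simplest possible guidance law)."""
    direction = target - predator
    distance = np.linalg.norm(direction)
    if distance < 1e-9:
        return predator
    return predator + speed * dt * direction / distance

# Toy chase: a 20 m/s predator closing on a 5 m/s micro-drone.
predator = np.array([0.0, 0.0])
target = np.array([50.0, 30.0])
target_velocity = np.array([5.0, 0.0])
for step in range(1000):
    predator = pure_pursuit_step(predator, target, speed=20.0)
    target = target + target_velocity * 0.1
    if np.linalg.norm(target - predator) < 1.0:  # close enough to throw the net
        print(f"Intercept after {(step + 1) * 0.1:.1f} s")
        break
```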

The main problem remains. What do we call predator drones now that the most useful name has been trademarked for a particular model?

 


Ground up data is the next big data

This one has been sitting in my drafts folder since February, so I guess it’s time to finish it.

Big Data – I expect you’re as sick of hearing that term as I am: gathering loads of data about everything that you, your company or anything else you can access can detect, measure and record, then analyzing the hell out of it using data mining, an equally irritating term.

I long ago had a quick Twitter exchange with John Hewitt, who suggested: “What is sensing but the energy-constrained competition for transmission to memory, as memory is but that for expression?” Neurons compete to see who gets listened to. Yeah, but I am still not much wiser as to what sensing actually is. Maybe I need a brain upgrade. (It’s like magnets. I used to be able to calculate the magnetic field densities around complicated shapes; it was part of my first job in missile design. But even though I could do all the equations of EM theory, even general relativity, I am still no wiser about how a magnetic field actually becomes a force on an object. My office is littered with hundreds of neodymium magnets, I have spent hours playing with them, and I still don’t understand.) I can read about neurons all day, but I still don’t understand how a bunch of photons triggering a series of electrochemical reactions results in me experiencing an image. How does the physical detection become a conscious experience?

Well, I wrote a while back that we could achieve a conscious computer within two years. It’s still two years, because nobody has started using the right approach yet. I have to stress the ‘could’: nobody actually intends to do it in that time frame, but I really believe some half-decent lab could if they tried. (To put that into perspective, Kurzweil and his gang at Google are looking at 2029.) That two-year estimate relies heavily on evolutionary development, for me the preferred option when you don’t understand how something works, as is the case with consciousness. It is pretty easy to design conscious computers at a black-box level; the devil is in the detail. I argued that you could make a conscious computer by using internally focused sensing to detect processes inside the brain, with a sensor structure built around a symmetrical feedback loop. Read it:

https://timeguide.wordpress.com/2013/12/28/we-could-have-a-conscious-machine-by-end-of-play-2015/

In a nutshell, if you can feel thoughts in the same way as you feel external stimuli, you’d be conscious. I think. The symmetrical feedback loop bit is just a small engineering insight.
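
To make that black-box architecture concrete, here is a deliberately toy Python sketch, assuming nothing about how real sensing works: the same sensing pathway is pointed both at the outside world and at the system’s own internal state, and a symmetrical feedback loop folds the ‘felt self’ back into that state. It illustrates the wiring only; nobody should mistake it for a conscious machine.

```python
def sense(signal):
    """One shared sensing pathway, a stand-in for whatever real
    sensing turns out to be: here it just clips values to [-1, 1]."""
    return [min(max(x, -1.0), 1.0) for x in signal]

external_input = [0.3, 0.9, -0.2]
internal_state = [0.0, 0.0, 0.0]

for _ in range(5):
    felt_world = sense(external_input)   # the pathway pointed outwards
    felt_self = sense(internal_state)    # the same pathway pointed inwards
    # Symmetrical feedback: internal state is updated by both streams,
    # so on the next pass the system 'feels' its own previous sensing.
    internal_state = [0.5 * w + 0.5 * s for w, s in zip(felt_world, felt_self)]

print(internal_state)
```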

The missing link in that is still the same one: how does sensing work? How do you feel?

At a superficial level, you point a sensor at something and it produces a signal bearing some relationship to whatever it is meant to sense. We can do that bit; we understand that. Your ear produces signals according to the frequencies and amplitudes of incoming sound waves, a bit like a microphone. Just the same so far. However, it is by some undefined later processes that you consciously experience the sound. How? That is the hard problem in AI. It isn’t just me that doesn’t know the answer; ‘How does red feel?’ is a more commonly used variant of the same question.

When we solve that, it will replace big data as ‘the next big thing’. If we can make sensor systems that experience or feel something rather than just producing a signal, that is valuable already. If those sensors pool their shared experience, another similar sensor system could experience that. Basic data quickly transmutes into experience, knowledge, understanding and insight, and very quickly into value, lots of it. Artificial neural nets go some way towards this, but they still lack consciousness. A simulated neural network doesn’t get beyond a pretty straightforward computation: it just puts all the inputs into an equation. The true sensing bit is missing. The complex adaptive analog neural nets in our brains clearly achieve something deeper than a man-made neural network.
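
To make the ‘just an equation’ point concrete, this is essentially the whole of a simulated neuron, the building block of today’s artificial neural nets: a weighted sum of the inputs pushed through a squashing function, with nothing in it that could plausibly be called feeling.

```python
import math

def simulated_neuron(inputs, weights, bias):
    # The entire 'neuron' is one equation: weighted sum, then sigmoid.
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

print(simulated_neuron([0.5, 0.2], [0.8, -0.4], bias=0.1))  # ~0.6
```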

Meanwhile, most current AI work barks up a tree in a different forest. IBM’s Watson will do great things; Google’s search engine AI will too. But they aren’t conscious and can’t be. They’re just complicated programs running on digital processors, with absolutely zero awareness of anything they are doing. Digital programs on digital computers will never achieve any awareness, no matter how fast the chips are.

However, back in the biological realm, nature manages just fine, so biomimetics offers a lot of hope. We know we didn’t get from a pool of algae to humans in one go. At some point, organisms started moving in response to light, chemical gradients, heat or touch. That most basic process of sensing may have started out coupled to internal processes that caused movement without any consciousness. But if we can understand the analog processes (electrochemical, electronic, mechanical) that take a stimulus through to a response, and can replicate them using our electronic technology, we would already have actuator circuits, even if we don’t yet have any form of sensation or consciousness. A great deal of this science has already been done, of course. The computational side of most chemical and physical processes can be emulated electronically by some means or another. Actuators will be a very valuable part of the cloud, but we already have the ability to make actuators by more conventional means, so doing it organically or biomimetically just adds more actuation techniques to the portfolio. Valuable, but not a terribly important breakthrough.

Looking at the system a bit further along the evolutionary timeline, where eyes start to develop, where the most primitive nervous systems and brains appear, where higher-level processing is obviously occurring and inputs are starting to become sensations, we should be able to see what has changed or is changing. It is the emergence of sensation we need to identify, even if the reaction is still an unconscious reflex. We don’t need to reverse engineer the human brain; simple organisms are simpler to understand. Feeding the architectural insights we gain from studying those primitive systems into our guided evolution engines is likely to be a far faster route to true machine consciousness and strong AI. That’s how we could develop consciousness in a couple of years rather than 15.
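
‘Guided evolution engine’ sounds grand, but the core loop is simple, and a minimal Python sketch shows its shape. Everything here is invented for illustration: the ‘genome’ is just a list of circuit parameters, and the toy fitness function stands in for whatever measure of sensation-like behavior the architectural insights would actually supply.

```python
import random

def evolve(fitness, genome_length=8, population=30, generations=50):
    """Minimal guided-evolution loop: score candidate circuits, keep the
    fittest quarter, breed mutated copies of them, and repeat."""
    pop = [[random.uniform(-1, 1) for _ in range(genome_length)]
           for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:population // 4]  # the survivors guide the search
        pop = parents + [
            [g + random.gauss(0, 0.1) for g in random.choice(parents)]
            for _ in range(population - len(parents))
        ]
    return max(pop, key=fitness)

# Toy fitness: how closely the candidate parameters match a target response.
target = [0.5, -0.3, 0.8, 0.0, 0.1, -0.7, 0.4, 0.2]
best = evolve(lambda g: -sum((a - b) ** 2 for a, b in zip(g, target)))
print([round(g, 2) for g in best])
```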

If we can make primitive sensing devices that work like those in primitive organisms, and that respond to specific sorts of sensory input, then that is a potential way of increasing the coverage of cloud sensing and even actuation. It would effectively be a highly distributed direct-response system. With clever embedding of emergent-phenomena techniques (such as cellular automata, flocking and so on), it could be a quite sophisticated way of responding to complex distributed inputs, avoiding some of the need for big data processing. If we can gather the outputs from these simple sensors and feed them into others, that will be an even better sort of biomimetic response system. That sort of direct experience of a situation is very different from a data-mined result, especially if actuation capability is there too. The philosophical question of whether including that second bank of sensors makes the system in any way conscious remains, but it would certainly be very useful and valuable.

The architecture we end up with via this approach may look like neurons, and could even be synthetic neurons, but that may be only one solution among many. Biology may have gone the neuron route, but that doesn’t necessarily mean it is the only possibility. It may be that we could one day genetically modify bacteria to produce their own organic electronics to emulate the key processes needed to generate sensation, and to power them by consuming nutrients from their environment. I suggested smart yogurt based on this idea many years ago, and believe that it could achieve vast levels of intelligence.
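
To give a feel for how those emergent-phenomena techniques can stand in for central processing, here is a toy cellular-automaton sketch in Python: a ring of dumb sensor cells, each following one purely local rule, turns isolated detections into a coordinated regional alarm with no server farm in sight. The rule, cell count and stimulus pattern are arbitrary choices for illustration.

```python
import random

# A ring of dumb sensor cells. Each cell knows only its own stimulus and
# its two neighbors' states; it switches on if it senses something directly
# or if either neighbor is already on, so a detection spreads outward as a
# coordinated alarm without any central processing.
N = 20
stimulus = [1 if random.random() < 0.1 else 0 for _ in range(N)]
stimulus[N // 2] = 1  # guarantee at least one detection for the demo
state = stimulus[:]

for _ in range(5):
    state = [
        1 if (stimulus[i] or state[i - 1] or state[(i + 1) % N]) else 0
        for i in range(N)
    ]
    print("".join("#" if s else "." for s in state))
```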

Digitizing and collecting the signals from the system at each stage would generate lots of data, which might be used by programs to derive other kinds of results, or relayed to other analog sensory systems elsewhere. (It isn’t always necessary to digitize signals to transmit them, but digitizing helps limit signal degradation; it quickly becomes important if the signal is to travel far, and is essential if it is to be recorded for later use or time-shifting.) However, I strongly suspect that most of the value in analog sensing and direct response is local, coupled to direct action or to local processing and storage.
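
One way to keep that value local is event-driven reporting, where a node transmits only when its reading changes meaningfully and handles everything else itself. A minimal sketch, with made-up readings and an arbitrary threshold:

```python
def event_driven_relay(samples, threshold=0.5):
    """Transmit a reading only when it differs from the last transmitted
    value by more than `threshold`; local processing absorbs the rest."""
    sent, last = [], None
    for t, value in enumerate(samples):
        if last is None or abs(value - last) > threshold:
            sent.append((t, value))  # digitize and send only on real change
            last = value
    return sent

readings = [0.1, 0.12, 0.11, 0.9, 0.92, 0.91, 0.2]
print(event_driven_relay(readings))  # only 3 of the 7 samples leave the node
```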

If we have these sorts of sensors liberally spread around, we’d create a truly smart environment, with local sensing and some basic intelligence able to relay sensation remotely to other banks of sensors for further processing or even, ultimately, consciousness. The local sensors could be relatively dumb, like the nerve endings in our skin, feeding signals into a more connected virtual nervous system; or a bit smarter, like retinal neurons, doing a lot of analog pre-processing before relaying signals via ganglion cells, perhaps to part of a virtual brain. If they are also capable of, or connected to, some sort of actuation, then we would be constructing a kind of virtual organism, with tendrils covering potentially the whole globe, able to sense and interact with its environment in an intelligent way.

I use the term virtual not because the sensors wouldn’t be real, but because their electronic nature allows connectivity to many systems: overlapping, hierarchical or distinct. Any number of higher-level systems could ‘experience’ the same sensors as part of themselves, rather as if your fingers could be felt by the entire human population. Multiple higher-level virtual organisms could share the same basic sensory and data inputs. That gives us a whole different kind of cloud sensing.
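
In networking terms that sharing is just publish/subscribe: one physical sensor feed, any number of subscribing ‘organisms’. A toy Python sketch, with the bus, the organisms and the reading all invented for illustration:

```python
class SensorBus:
    """Toy publish/subscribe fabric: a single sensor feed that can be
    'experienced' by any number of overlapping higher-level systems."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, reading):
        for callback in self.subscribers:
            callback(reading)

bus = SensorBus()
bus.subscribe(lambda r: print("virtual organism A feels", r))
bus.subscribe(lambda r: print("virtual organism B feels", r))
bus.publish({"sensor": "finger-3", "pressure": 0.7})
```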

By doing processing locally, in the analog domain, and dealing with some of the response locally, a lot of network traffic and remote processing is avoided. Any post-processing that does occur can therefore build on a higher-level foundation. A nice side effect of avoiding all the extra transmission and processing is increased environmental friendliness.

So we’d have a quite different sort of data network, collecting higher-quality data and essentially doing by instinct what data mining does with huge server farms and armies of programmers. Cloudy, but much smarter than a straightforward sensor net.

… I think.

It isn’t without risk though. I had a phone discussion yesterday on the dangers of this kind of network. In brief, it’s dangerous.

The future of gardens

It’s been weeks since my last blog. I started a few, but they need some more thought, so as a catch-up here is a nice frivolous topic, recycled from 1998.

Surely gardens are a place to get back to nature, to escape from technology? Well, when journalists ask to see really advanced technology, I take them to the garden. Humans still have a long way to go to catch up with what nature does all the time. A dragonfly catching smaller flies is just a hint of future warfare, and every flower is an exercise in high precision marketing, let alone engineering. But we will catch up, and even the stages between now and then will be fun.

Advanced garden technology today starts and ends with robotic lawn trimmers, it might seem. I guess you could add the special materials used in garden tools, advanced battery tech, security monitoring, plant medication and nutrition. OK, there are already lots of advanced technologies in gardens; they just aren’t very glamorous. The fact is that our gardens already use a wide range of genetically enhanced plants and flowers, state-of-the-art fertilizers and soil conditioners, fancy lawnmowers and automatic sprinkler systems. So what can we expect next?

Fiber-optic plants already add a touch of somewhat tacky enchantment to a garden and can be a good substitute for more conventional lighting. Home security uses video cameras and webcams, and some rather fun documentaries have resulted from filming pets and wild animals during the night. There will soon be many other appliances in the future garden, including armies of robots and micro-bots doing a range of jobs: cutting the grass every time a blade gets more than 3 cm long, weeding, watering, pollinating, or carrying individual grains of fertilizer to the plants that need them. Others will fight off bugs, tidy up debris, or remove dying flowers to keep the garden looking pristine. They could even assist in propagation, burying seeds in just the right places and tending them while they become established. The garden pond may have robot ducks or fish, just for fun.

Various sensors may be inserted into the ground around the garden, or smart dust just sprinkled randomly. These would warn when the ground is getting too dry and perhaps coordinate automatic sprinklers, as sketched below. They could also monitor the chemical composition of the soil, advising the gardener where to add which type of fertilizer or conditioner. In fact, when the price and size fall sufficiently, electronic sensors might well be mixed in with fertilizer and other garden-care products.
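
A minimal sketch of that coordination logic in Python, with invented zone names and an assumed dryness threshold that would really need tuning per soil type:

```python
# Hypothetical readings from buried moisture sensors (0 = bone dry, 1 = saturated).
sensor_readings = {"bed-1": 0.15, "bed-2": 0.55, "lawn": 0.25}
DRY_THRESHOLD = 0.3  # assumed trigger level, tune per soil and plant type

def sprinkler_plan(readings, threshold=DRY_THRESHOLD):
    """Return the zones whose sensors say the ground is too dry."""
    return [zone for zone, moisture in readings.items() if moisture < threshold]

print(sprinkler_plan(sensor_readings))  # -> ['bed-1', 'lawn']
```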

With all this robot assistance, the human may design the garden and then just let the robots get on with the construction and maintenance. Or maybe just download a garden plan if they’re really lazy, or get the AI to download one.

Another obvious potential impact comes in the shape of genetic engineering. While designing the genome for a custom plant is not quite as simple as assembling Lego blocks, we will nevertheless be able to pick and choose from a wide variety of characteristics available from anywhere in the plant and animal kingdoms. We are promised blue roses that smell of designer perfumes, grass that only needs cutting once a year, and ground-cover plants that actually grow faster than weeds. By messing about with genes we can thus change the appearance and characteristics of plants enormously, and while getting a company logo to appear on a flower petal might be beyond us, the garden could certainly look much more kaleidoscopic than today’s. We are already in the era where genetics has become a hobbyist activity, but so far the limits are pretty simple gene transfers to add fun features like fluorescence or light emission. Legislation will hopefully prevent people using such hobbyist groups to learn how to make viruses or bacteria for terrorist use.

In the long term we are not limited to the Lego bricks provided by nature. Nanotechnology will eventually allow us to produce inorganic ‘plants’. You might buy a seed, drop it in the required place, and watch it grow into a predetermined structure just like an organic seed does, taking its materials from the soil or air, or perhaps from some additives. There is almost no theoretical limit to the type of ‘plant’ that could be produced this way. Flowers with logos are possible, but so are video displays built into flowers, and garden gnomes that wander around or actually fish in the pond. A wide range of static and dynamic ornamentation could add fun to every garden. Nanotechnology has so many possibilities that the only ultimate limits are the fundamental physics of materials. Power supplies for these devices could use solar, wind or thermal power.

On the patio, there is more scope for video displays in the paving and walls, to add color or atmosphere, and also to provide a recharging base for robots that lack their own independent power supplies. Flat speakers could be built into the walls too, providing birdsong or other natural sounds that are otherwise declining in our gardens. Appropriately placed large display panels could simulate being on a beach while you sunbathe in Nottingham (for non-Brits, Nottingham is a city not renowned for its sunshine, and very far from a beach).

All in all, the garden could become a place of relaxation, getting back to what we like best in nature, without all the boring bits of looking after it eating into our few spare hours. Even before we retire, we will be able to enjoy the garden, instead of just weeding and cutting the grass.

1998 is a long time ago and I have lots of new ideas for the garden now, but time demands I leave them for a later blog.