Category Archives: technology

The future of virtual reality

I first covered this topic in 1991 or 1992, I can’t recall which, when we were playing with the Virtuality machines. I got a bit carried away, did the calculations on processing power requirements for decent images, and announced that VR would replace TV as our main entertainment by about 2000. I still use that as my best example of things I didn’t get right.

I have often considered why it didn’t take off as we expected. There are two very plausible explanations and both might apply somewhat to the new launches we’re seeing now.

1. It did happen, just differently. People use excellent pseudo-3D environments in computer games, and that is perfectly acceptable; they simply don’t need full-blown VR. Just as 3DTV hasn’t turned out to be very popular compared to regular TV, wandering around a virtual world doesn’t necessarily require VR. TV or PC monitors are perfectly adequate, in conjunction with the cooperative human brain, to convey the important bits of the virtual world illusion.

2. Early 1990s VR headsets reportedly gave some people eye strain or psychological distortions that persisted long enough after sessions to present potential dangers. This meant corporate lawyers would have been warning about potentially vast class action suits, with every kid that develops a squint blaming the headset manufacturers, or someone walking under a bus because they were still mentally in a virtual world. If anything, people are far more likely to sue for alleged negative psychological effects now than back then.

My enthusiasm for VR hasn’t gone away. I still think it has great potential. I just hope the manufacturers are fully aware of these issues and have dealt with or are dealing with them. It would be a great shame indeed if a successful launch is followed by rapid market collapse or class action suits. I hope they can avoid both problems.

The porn industry is already gearing up to capitalise on VR, and the more innocent computer games markets too. I spend a fair bit of my spare time in the virtual worlds of computer games. I find games far more fun than TV, and adding more convincing immersion and better graphics would be a big plus. In the further future, active skin will allow our nervous systems to be connected into the IT too, recording and replaying sensations, so VR could become full sensory. When you fight an enemy in a game today, the controller might vibrate if you get hit or shot. If you could feel the pain, you might try a little harder to hide. You may be less willing to walk casually through flames if they hurt rather than just causing a small drop in a health indicator, and you might put a little more effort into kindling romances if you could actually enjoy the cuddles. But that’s for the next generation, not mine.

VR offers a whole new depth of experience, but it did in 1991 too. It failed the first time; let’s hope this time the technology brings the benefits without the drawbacks and succeeds.

The future of ukuleles

Well, actually stringed instruments generally, but I needed a U and I didn’t want to do universities or the UN again and certainly not unicorns, so I cheated slightly. I realize that other topics starting with U may exist, but I didn’t do much research and I needed an excuse to write up this new idea.

If I was any good at making electronics, I’d have built a demo of this, but I have only soldered 6 contacts in my life, and 4 of those were dry joints, and I know when to quit.

My idea is very simple indeed: put accelerometers on the strings. Some quick googling suggests the idea is novel.

There are numerous electric guitars and violins, and probably electric ukuleles too. They use a variety of pickups. Many sit directly underneath the strings; some use accelerometers on the other side of the bridge or elsewhere on the body. In most instruments, the body is heavily involved in the overall sound production, so I wouldn’t want to replace the pickups on the body. However, adding accelerometers to the strings would give another data source with quite different characteristics. There could be just one, or several, placed at specific locations along each string. If they are too heavy, they would change the sound too much, but some now are far smaller than the eye of a needle. If they are fixed onto the string, it would need a little re-tuning, but shouldn’t destroy the sound quality. The benefit is that accelerometers on the strings would provide data not available via other pickups. They would represent the string activity more directly than a pickup on the body. This could be used as valuable input to the overall signal mix used in the electronic sound output. Having more data available is generally a good thing.
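The mixing stage itself is trivial to sketch in software. The sketch below assumes sampled waveforms from a string accelerometer and a body pickup (both faked here as plain sine waves, purely for illustration) and blends them with an adjustable weight:

```python
import math

SAMPLE_RATE = 44100
N = 1024

def tone(freq_hz, n, amp=1.0):
    """A plain sine wave standing in for a real sampled signal."""
    return [amp * math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE) for t in range(n)]

# Fake sources: the accelerometer sees mostly string motion; the body pickup
# sees the string plus a lower body resonance.
string_accel = tone(440, N)
body_pickup = [s + b for s, b in zip(tone(440, N, 0.7), tone(110, N, 0.3))]

def mix(a, b, weight=0.5):
    """weight=1.0 gives accelerometer only, 0.0 gives body pickup only."""
    return [weight * x + (1 - weight) * y for x, y in zip(a, b)]

blended = mix(string_accel, body_pickup, weight=0.3)
```

The interesting question, of course, is what the accelerometer channel would actually contain on a real string, which is exactly what an experiment would tell us.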

What would the new sound be like? I don’t know. If it is very different from the sound using conventional pickups, it might even open up potential for new kinds of electric instrument.

If you do experiment with this, please do report back on the results.

The future of terminators

The Terminator films were important in making people understand that AI and machine consciousness will not necessarily be a good thing. The terminator scenario has stuck in our terminology ever since.

There is absolutely no reason to assume that a super-smart machine will be hostile to us. There are even some reasons to believe it would probably want to be friends. Smarter-than-man machines could catapult us into a semi-utopian era of singularity level development to conquer disease and poverty and help us live comfortably alongside a healthier environment. Could.

But just because it doesn’t have to be bad, that doesn’t mean it can’t be. You don’t have to be bad but sometimes you are.

It is also the case that even if it means us no harm, we could just happen to be in the way when it wants to do something, and it might not care enough to protect us.

Asimov’s laws of robotics are irrelevant. Any machine smart enough to be a terminator-style threat would presumably take little notice of rules it has been given by what it may consider a highly inferior species. The ants in your back garden have rules to govern their colony and soldier ants trained to deal with invader threats to enforce territorial rules. How much do you consider them when you mow the lawn or rearrange the borders or build an extension?

These arguments are made in debates every day now.

There are, however, a few points that are less often discussed:

Humans are not always good; indeed quite a lot of people seem to want to destroy everything most of us want to protect. Given access to super-smart machines, they could design more effective means to do so. The machines might be very benign, wanting nothing more than to help mankind as far as they possibly can, but misled into working for such people, believing in their architected isolation that such projects are for the benefit of humanity. (The machines might be extremely smart, but may have existed since their inception in a rigorously constructed knowledge environment. To them, that might be the entire world, and we might be introduced as a new threat that needs to be dealt with.) So even benign AI could be an existential threat when it works for the wrong people. The smartest people can sometimes be very naive. Perhaps some smart machines could be deliberately designed to be so.

I speculated ages ago about what mad scientists or mad AIs could do in terms of future WMDs.

Smart machines might be deliberately built for benign purposes and turn rogue later, or they may be built with potential for harm designed in, for military purposes. These might destroy only enemies, but you might be that enemy. Others might do that, enjoy the fun, and turn on their friends when enemies run short. Emotions might be important in smart machines just as they are in us, but we shouldn’t assume they will be the same emotions or be wired the same way.

Smart machines may want to reproduce. I used this as the core storyline in my sci-fi book. They may have offspring, and despite the best intentions of their parent AIs, the new generation might decide not to do as they’re told. Again, in human terms, a highly familiar story that goes back thousands of years.

In the Terminator film, it is a military network that becomes self-aware and goes rogue that is the problem. I don’t believe digital IT can become conscious, but I do believe reconfigurable analog adaptive neural networks could. The cloud is digital today, but it won’t stay that way. A lot of analog devices will become part of it.

I argued how new self-organising approaches to data gathering might well supersede big data as the foundations of networked intelligence gathering. Much of this could be in the analog domain and much could be neural. Neural chips are already being built.

It doesn’t have to be a military network that becomes the troublemaker. I suggested a long time ago that ‘innocent’ student pranks from somewhere like MIT could be the source. Some smart students from various departments could collaborate to hijack lots of networked kit to see if they can make a conscious machine. Their algorithms or techniques don’t have to be very efficient if they can hijack enough. There is a possibility that such an effort could succeed if the right bits are connected into the cloud and accessible via sloppy security, and the ground up data industry might well satisfy that prerequisite soon.

Self-organisation technology will make possible extremely effective combat drones.

Terminators also don’t have to be machines. They could be organic, products of synthetic biology. My own contribution here is smart yogurt.

With IT and biology rapidly converging via nanotech, there will be many ways hybrids could be designed, some of which could adapt and evolve to fill different niches or to evade efforts to find or harm them. Various grey goo scenarios can be constructed that don’t have any miniature metal robots dismantling things. Obviously natural viruses or bacteria could also be genetically modified to make weapons that could kill many people – they already have been. Some could result from seemingly innocent R&D by smart machines.

I dealt a while back with the potential to make zombies too, remotely controlling people – alive or dead. Zombies are feasible this century too.

A different kind of terminator threat arises if groups of people are linked at consciousness level to produce super-intelligences. We will have direct brain links mid-century so much of the second half may be spent in a mental arms race. As I wrote in my blog about the Great Western War, some of the groups will be large and won’t like each other. The rest of us could be wiped out in the crossfire as they battle for dominance. Some people could be linked deeply into powerful machines or networks, and there are no real limits on extent or scope. Such groups could have a truly global presence in networks while remaining superficially human.

Transhumans could be a threat to normal un-enhanced humans too. While some transhumanists are very nice people, some are not, and would consider elimination of ordinary humans a price worth paying to achieve transhumanism. Transhuman doesn’t mean better human, it just means humans with greater capability. A transhuman Hitler could do a lot of harm, but then again so could ordinary everyday transhumanists that are just arrogant or selfish, which is sadly a much bigger subset.

I collated these various potential future cohabitants of our planet in an earlier post.

So there are numerous ways that smart machines could end up as a threat and quite a lot of terminators that don’t need smart machines.

Outcomes from a terminator scenario range from local problems with a few casualties all the way to total extinction, but I think we are still too focused on the death aspect. There are worse fates. I’d rather be killed than converted while still conscious into one of 7 billion zombies and that is one of the potential outcomes too, as is enslavement by some mad scientist.


Fusion needs jet engine architecture, not JET

Warning: some or all of what you will read here might be nonsense, but hey, faint heart ne’er won fair maid.

Lockheed Martin are in the news with yet another claim of a fusion breakthrough. It looks exciting, but some physicists are already claiming that it won’t work. I haven’t done the sums so I don’t have a sensible opinion on it. I am filing it mentally with all the other frequently claimed breakthroughs and will wait and see, not holding my breath. I really hope they succeed though. If they don’t, then their claim is just hot air, and if they can do that, then why can’t I? So here is how I would do the easy bits of the top level design, leaving the hard sums to others.

Joint European Torus = JET, and the new Lockheed Martin approach is meant to be about the same size as a jet engine. I couldn’t help making the obvious mental leap. Long ago, plane engines used internal combustion engines and propellers. Then along came 40-year-old Frank Whittle and changed the world with his jet engine invention:

Whittle and his jet engine

Picture copyright Popperfoto

Smart bunny!

Standing on his (and Rutherford’s) shoulders, I had to ask whether we can’t use a jet engine arrangement to harness fusion. We don’t need the propulsion, just the ejected products to extract heat from, fairly conventionally. As lazy as researchers can be these days, I typed ‘jet engine fusion’ into Google Images. Way down the page was one that I thought had already used the idea, as a spaceship propulsion system, but on bringing up the page, it doesn’t; it just uses a pretty conventional reaction chamber and ejects the fusion products out through a nozzle to provide propulsion force.

So either the idea is so obviously flawed that nobody has even bothered to investigate it far enough to bother making graphics, or a major case of group-think has affected the entire physicist community. Bit of a gamble proceeding then, but, if you have a few billion to gamble, here’s how to do fusion:

Jet style nuclear fusion process

1. Intake a continuous stream of deuterium and tritium.

2. Compress it (using some of the energy from the fusion process) and optionally heat or compress it conventionally to reduce the energy deficit in the final stage.

3. Feed it into the narrow reaction pathway, a strongly confined tunnel surrounded by an Archimedes screw of high-intensity lasers.

4. Generate continuous heating via the lasers as the plasma passes along the reaction pathway (again using some of the energy from the process) until fusion finally occurs in the short fusion zone.

5. Allow the hot fused products to expand in an expansion chamber.

6. Pass them through a suitable heat exchanger to make steam, molten sodium or whatever takes your fancy.

7. Feed some of the energy harvested back to drive the compressors, heaters, and obviously the lasers. Very possibly some of the products might be useful feedstock for production of lasing medium.

Bob’s your uncle.
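To get a feel for the recirculation requirement in the last step, here is a back-of-envelope energy budget. The 17.6 MeV released per D-T fusion is standard physics; all the efficiency figures are illustrative guesses, not engineering data:

```python
# Illustrative energy budget for the recirculating 'jet style' process.
E_DT_J = 17.6 * 1.602e-13      # joules released per D-T reaction (17.6 MeV)

def net_electrical_power(reactions_per_second, laser_beam_power,
                         laser_wall_plug_eff=0.2, compressor_power=1e5,
                         heat_to_electric_eff=0.4):
    """Electrical output left over after feeding the lasers and compressors."""
    p_thermal = reactions_per_second * E_DT_J          # heat into the exchanger
    p_electric = p_thermal * heat_to_electric_eff      # after the steam/molten sodium loop
    p_recirculated = laser_beam_power / laser_wall_plug_eff + compressor_power
    return p_electric - p_recirculated
```

The only point the sketch makes is that the recirculated fraction has to be modest: at 20% wall-plug laser efficiency, every watt of beam costs five watts of output, so the laser heating budget dominates whether the whole arrangement is a net producer.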

OK, the intake and compression bits are quite jet enginy, and using some of the energy produced to power the earlier stages is very jet enginy. We don’t have any burning of gases so it isn’t quite the same. But in the interests of extracting as much from Whittle as possible, I kept it nice and circular with as few components as possible in the way, arranging the lasers in a continuous spiral (inspired by the Archimedes screw), so that the plasma heats up as it passes through them until it starts to fuse. There is no actual screw; it’s just that if all the lasers are mounted and directed towards the plasma jet as it heats, the external arrangement would look very similar, and the effect would be that the temperature and proximity to fusing would rise as the plasma passes through it. You still need serious magnetic confinement to prevent the plasma touching the walls, but there is nothing physical in the path to touch, just magnetic fields and lots of laser beam.

I can’t see any immediate reasons why it couldn’t work, and it offers some definite advantages over a torus approach or exploding pellets. It takes ideas from all the other approaches so it isn’t really new, just a rearrangement.

Doesn’t Lockheed Martin make jet engines too?

The future of sky

The S installment of this ‘future of’ series. I have done streets, shopping, superstores, sticks, surveillance, skyscrapers, security, space, sports, space travel and sex before, some several times. I haven’t done sky before, so here we go.

Today when you look up during the day you typically see various weather features, the sun, maybe the moon, a few birds, insects or bats, maybe some dandelion or thistle seeds. As night falls, stars, planets, seasonal shooting stars and occasional comets may appear. To those we can add human contributions such as planes, microlights, gliders and helicopters, drones, occasional hot air balloons and blimps, helium party balloons, kites and at night-time, satellites, sometimes the space station, maybe fireworks. If you’re in some places, missiles and rockets may be unfortunate extras too, as might be the occasional parachutist or someone wearing a wing-suit or on a hang-glider. I guess we should add occasional space launches and returns too. I can’t think of any more but I might have missed some.

Drones are the most recent addition and their numbers will increase quickly, mostly for surveillance purposes. When I sit out in the garden, since we live in a quiet area, the noise from occasional microlights and small planes is especially irritating because they fly low. I am concerned that most of the discussions on drones don’t tend to mention the potential noise nuisance they might bring. With nothing between them and the ground, sound will travel well, and although some are reasonably quiet, others might not be and the noise might add up. Surveillance, spying and prying will become the biggest nuisances though, especially as miniaturization continues to bring us many insect-sized drones that aren’t noisy and may visually be almost undetectable. Privacy in your back garden or in the bedroom with unclosed curtains could disappear. They will make effective distributed weapons too.

Adverts don’t tend to appear except on blimps, and they tend to be rare visitors. A drone was this week used to drag a national flag over a football game. In the Batman films, Batman is occasionally summoned by shining a spotlight with a bat symbol onto the clouds. I forget which film used the moon to show an advert. It is possible via a range of technologies that adverts could soon be a feature of the sky, day and night, just like in Blade Runner. In the UK, we are now getting used to roadside ads, however unwelcome they were when they first arrived, though they haven’t yet reached US proportions. It will be very sad if the sky is hijacked as an advertising platform too.

I think we’ll see some high altitude balloons being used for communications. A few companies are exploring that now. Solar powered planes are a competing solution to the same market.

As well as tiny drones, we might have bubbles. Kids make bubbles all the time but they burst quickly. With graphene, a bubble could prevent the helium escaping, or could even be filled with graphene foam, and then it would float and stay there. We might have billions of tiny bubbles floating around with tiny cameras or microphones or other sensors. The cloud could be an actual cloud.
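A quick sanity check on the physics, using the known areal density of single-layer graphene (about 0.76 mg per square metre) and standard air and helium densities. The arithmetic suggests a helium-filled graphene bubble only needs to be a couple of microns across before buoyancy beats the weight of the shell:

```python
import math

RHO_AIR = 1.225        # kg/m^3, sea-level air
RHO_HE = 0.179         # kg/m^3, helium at ambient pressure
SIGMA = 7.6e-7         # kg/m^2, areal density of single-layer graphene

def net_lift_kg(radius_m):
    """Buoyant mass displaced by a helium-filled graphene bubble, minus its shell mass."""
    buoyancy = (RHO_AIR - RHO_HE) * (4 / 3) * math.pi * radius_m ** 3
    shell = 4 * math.pi * radius_m ** 2 * SIGMA
    return buoyancy - shell

# Setting buoyancy equal to shell mass gives the minimum floating radius:
r_min = 3 * SIGMA / (RHO_AIR - RHO_HE)   # about 2 microns
```

This ignores the payload (camera, sensor, power) and assumes the helium stays at ambient pressure, so treat it as a lower bound; still, it shows the idea isn’t obviously ruled out by the shell’s own weight.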

And then there’s fairies. I wrote about fairies as the future of space travel.

They might have a useful role here too, and even if they don’t, they might still want to be here, useful or not.

As children, we used to call thistle seeds fairies, our mums thought it was cute to call them that. Biomimetics could use that same travel technique for yet another form of drone.

With all the quadcopter, micro-plane, bubble, balloon and thistle seed drones, the sky might soon be rather fuller than today. So maybe there is a guaranteed useful role for fairies, as drone police.




The future of cyberspace

I promised in my last blog to do one on the dimensions of cyberspace. I made this chart 15 years ago, in two parts for easy reading. The dimensions it lists are still valid and I can’t think of any new ones to add right now, but I might think of some more and make an update with a third part. I changed the name to virtuality instead, because it actually only covers human-accessed cyberspace, but I’m not entirely sure that was a good thing to do. Needs work.

cyberspace dimensions

cyberspace dimensions 2

The chart has 14 dimensions (control has two independent parts), and I identified some of the possible points on each dimension. As dimensions are meant to be, they are all orthogonal, i.e. they are independent of each other, so you can pick any point on any dimension and combine it with any point from each of the others. Standard augmented reality and pure virtual reality are two of the potential combinations, out of the 2.5 x 10^11 possibilities above. At that rate, if every person in the world tried a different one every minute, it would still take over half an hour to visit them all even briefly. There are many more possible, this was never meant to be exhaustive, and even two more columns makes it 10 trillion combos. Already I can see that one more column could be ownership, another could be network implementation, another could be quality of illusion. What others have I missed?
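The combinatorics is just a product across the dimensions. The option counts below are placeholders (the real chart lists its own), chosen so the product lands near the 2.5 x 10^11 quoted above; two extra columns of modest size then push it towards the ten trillion mentioned:

```python
from math import prod

# Placeholder option counts for the 14 dimensions -- not the chart's real values.
options_per_dimension = [4, 4, 5, 5, 6, 6, 7, 7, 7, 8, 8, 8, 9, 10]

total = prod(options_per_dimension)   # ~2.3 x 10^11 combinations
with_two_more = total * 6 * 7         # two extra modest columns -> ~10^13
```

Orthogonality is what makes it a simple product: each extra column multiplies the count by its number of options, which is why adding just two more takes it from hundreds of billions to around ten trillion.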

The Future of IoT – virtual sensors for virtual worlds

I recently acquired a point-and-click thermometer for Futurizon, which gives an instant reading when you point it at something. I will soon know more about the world around me, but any personal discoveries I make are quite likely to be well known to science already. I don’t expect to win a Nobel prize by discovering breaches of the second law of thermodynamics, but that isn’t the point. The thermometer just measures the transmission from a particular point in a particular frequency band, which indicates what temperature it is. It cost about £20, a pretty cheap stimulation tool to help me think about the future by understanding new things about the present. I already discovered that my computer screen doubles as a heater, but I suspected that already. Soon, I’ll know how much my head warms when I think hard, and for the futurology bit, where the best locations are to put thermal IoT stuff.

Now that I am discovering the joys of remote sensing, I want to know so much more though. Sure, you can buy satellites for a billion pounds that will monitor anything anywhere, and for a few tens of thousands you can buy quite sophisticated lab equipment. For a few tens, not so much is available, and I doubt the tax man will agree that Futurizon needs a high end oscilloscope or mass spectrometer, so I have to set my sights low. The results of this blog justify the R&D tax offset for the thermometer. But the future will see drops in costs for most high technologies so I also expect to get far more interesting kit cheaply soon.

Even starting with the frequent assumption that in the future you can do anything, you still have to think what you want to do. I can get instant temperature readings now. In the future, I may also want a full absorption spectrum, color readings, texture and friction readings, hardness, flexibility, sound absorption characteristics, magnetic field strength, chemical composition, and a full range of biological measurements, just for fun. If Spock can have one, I want one too.

But that only covers reality, and reality will only account for a small proportion of our everyday life in the future. I may also want to check on virtual stuff, and that needs a different kind of sensor. I want to be able to point at things that only exist in virtual worlds. It needs to be able to see virtual worlds that are (at least partly) mapped onto real physical locations, and those that are totally independent and separate from the real world. I guess that is augmented reality ones and virtual reality ones. Then it starts getting tricky because augmented reality and virtual reality are just two members of a cyberspace variants set that runs to more than ten trillion members. I might do another blog soon on what they are, too big a topic to detail here.

People will be most interested in sensors to pick up geographically linked cyberspace. Much of the imaginary stuff is virtual worlds in computer games or similar, and many of those have built-in sensors designed for their spaces. So, my character can detect caves or forts or shrines from about 500m away in the virtual world of Oblivion (yes, it is from ages ago but it is still enjoyable). Most games have some sort of sensors built-in to show you what is nearby and some of its properties.

Geographically linked cyberspace won’t all be augmented reality because some will be there for machines, not people, but you might want to make sensors for it all the same, for many reasons, most likely for navigating it, debugging, or for tracking and identifying digital trespass. The last one is interesting. A rival company might well construct an augmented reality presence that allows you to see their products alongside ones in a physical shop. It doesn’t have to be in a properly virtual environment; a web page is still a location in cyberspace, and when loaded, that instance takes on a geographic mapping via that display, so it is part of that same trespass. That is legal today, and it started many years ago when people started using Amazon to check for better prices while in a book shop. Today it is pretty ubiquitous. We need sensors that can detect that. It may be accepted today as fair competition, but it might one day be judged as unfair competition by regulators for various reasons, and if so, they’ll need some mechanism to police it. They’ll need to be able to detect it. That is not easy if it is just a web page that only exists at that location for a few seconds; rather easier if it is a fixed augmented reality and you can download a map.

If for some reason a court does rule that digital trespass is illegal, one easy (though expensive) way of solving it would be to demand that all packets carry a geographic location, which of course the site would know when the person clicks on that link. To police that, turning off location would need to be blocked, or if it is turned off, sites would not be permitted to send you certain material that might not be permitted at that location. I feel certain there would be better, cheaper and more effective solutions.
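As a sketch of what that location-tagging scheme might look like in code (every name here is hypothetical; no real protocol carries such a field today):

```python
# Hypothetical location-tagged packets and the policy check described above.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Zone:
    """A simple rectangular restricted zone, lat/lon bounding box."""
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float

    def contains(self, loc: Tuple[float, float]) -> bool:
        lat, lon = loc
        return self.lat_min <= lat <= self.lat_max and self.lon_min <= lon <= self.lon_max

@dataclass
class Packet:
    payload: bytes
    location: Optional[Tuple[float, float]]   # None if the sender withheld it

def may_serve(packet: Packet, restricted_zones: list) -> bool:
    """Apply the policy above: no location means no restricted material."""
    if packet.location is None:
        return False
    return not any(zone.contains(packet.location) for zone in restricted_zones)
```

The obvious weakness is visible even in the toy: the location field is self-reported and trivially spoofable, which is one reason better and cheaper solutions probably exist.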

I don’t intend to spend any longer exploring details here, but it is abundantly clear from just inspecting a few trees that making detectors for virtual worlds will be a very large and diverse forest full of dangers. Who should be able to get hold of the sensors? Will they only work in certain ‘dimensions’ of cyberspace? How should the watchers be watched?

The most interesting thing I can find though is that being able to detect cyberspace would allow new kinds of adventures and apps. You could walk through a doorway and it also happens to double as a portal between many virtual universes. And you might not be able to make that jump in any other physical location. You might see future high street outlets that are nothing more than teleport chambers for cyberspace worlds. They might be stuffed with virtual internet of things things and not one of them physical. Now that’s fun.


Ground up data is the next big data

This one sat in my draft folder since February, so I guess it’s time to finish it.

Big Data – I expect you’re as sick of hearing that term as I am. Gathering loads of data on everything that you, your company, or anything else you can access can detect, measure or record, then analyzing the hell out of it using data mining, an equally irritating term.

I long ago had a quick Twitter exchange with John Hewitt, who suggested: “What is sensing but the energy-constrained competition for transmission to memory, as memory is but that for expression?”. Neurons compete to see who gets listened to. Yeah, but I am still not much wiser as to what sensing actually is. Maybe I need a brain upgrade. (It’s like magnets. I used to be able to calculate the magnetic field densities around complicated shaped objects – it was part of my first job in missile design – but even though I could do all the equations around EM theory, even general relativity, I still am no wiser how a magnetic field actually becomes a force on an object. I have an office littered with hundreds of neodymium magnets and I spend hours playing with them and I still don’t understand.) I can read about neurons all day but I still don’t understand how a bunch of photons triggering a series of electro-chemical reactions results in me experiencing an image. How does the physical detection become a conscious experience?

Well, I wrote some while back that we could achieve a conscious computer within two years. It’s still two years, because nobody has started using the right approach yet. I have to stress the ‘could’, because nobody actually intends to do it in that time frame, but I really believe some half-decent lab could if they tried. (Putting that into perspective, Kurzweil and his gang at Google are looking at 2029.) That two-year estimate relies heavily on evolutionary development, for me the preferred option when you don’t understand how something works, as is the case with consciousness. It is pretty easy to design conscious computers at a black box level; the devil is in the detail. I argued that you could make a conscious computer by using internally focused sensing to detect processes inside the brain, and using a sensor structure with a symmetrical feedback loop.

In a nutshell, if you can feel thoughts in the same way as you feel external stimuli, you’d be conscious. I think. The symmetrical feedback loop bit is just a small engineering insight.
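At that black-box level, the structure can be sketched in a few lines: the same sensing transform is pointed both at the outside world and back at the system’s own internal state, which is the symmetrical feedback loop. This is purely illustrative structure, with hypothetical names throughout; nothing here claims to feel anything:

```python
class ReflexiveSensor:
    """Black-box sketch of the symmetrical loop: one sensing routine is applied
    both to external stimuli and to the sensor's own previous internal state."""

    def __init__(self):
        self.internal_state = 0.0

    def sense(self, signal):
        # Identical transform for external and internal signals -- the symmetry.
        return 0.9 * signal

    def step(self, external):
        felt_world = self.sense(external)              # sensing the outside
        felt_self = self.sense(self.internal_state)    # internally focused sensing
        self.internal_state = felt_world + felt_self
        return self.internal_state
```

The hard problem is, of course, entirely absent from the sketch: the loop shows where the engineering symmetry sits, not why anything in it would ever feel like anything.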

The missing link in that is still the same one: how does sensing work? How do you feel?

At a superficial level, you point a sensor at something and it produces a signal in some sort of relationship to whatever it is meant to sense. We can do that bit. We understand that. Your ear produces signals according to the frequencies and amplitudes of incoming sound waves, a bit like a microphone. Just the same so far. However, it is by some undefined processes later that you consciously experience the sound. How? That is the hard problem in AI. It isn’t just me that doesn’t know the answer. ‘How does red feel?’ is a more commonly used variant of the same question.

When we solve that, we will replace big data as ‘the next big thing’. If we can make sensor systems that experience or feel something rather than just producing a signal, that’s valuable already. If those sensors pool their shared experience, another similar sensor system could experience that. Basic data quickly transmutes into experience, knowledge, understanding, insight and very quickly, value, lots of it. Artificial neural nets go some way to doing that, but they still lack consciousness. Simulated neural networks can’t even get beyond a pretty straightforward computation, putting all the inputs into an equation. The true sensing bit is missing. The complex adaptive analog neural nets in our brain clearly achieve something deeper than a man-made neural network.
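That point is easy to make concrete. A simulated neuron is literally just inputs fed into an equation; wherever sensation comes from, it is hard to see it in this arithmetic:

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum pushed through a squashing function -- a computation, not a sensation."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-activation))   # logistic squashing to (0, 1)
```

Stack thousands of these and you get a very capable function approximator, but every layer is still only this one line of arithmetic repeated.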

Meanwhile, most current AI work barks up a tree in a different forest. IBM’s Watson will do great things; Google’s search engine AI will too. But they aren’t conscious and can’t be. They’re just complicated programs running on digital processors, with absolutely zero awareness of anything they are doing. Digital programs on digital computers will never achieve any awareness, no matter how fast the chips are.

However, back in the biological realm, nature manages just fine. So biomimetics offers a lot of hope. We know we didn’t get from a pool of algae to humans in one go. At some point, organisms started moving according to light, chemical gradients, heat, touch. That most basic process of sensing may have started out coupled to internal processes that caused movement without any consciousness. But if we can understand the analog processes (electrochemical, electronic, mechanical) that take the stimulus through to a response, and can replicate it using our electronic technology, we would already have actuator circuits, even if we don’t have any form of sensation or consciousness yet. A great deal of this science has been done already of course. The computational side of most chemical and physical processes can be emulated electronically by some means or another. Actuators will be a very valuable part of the cloud, but we already have the ability to make actuators by more conventional means, so doing it organically or biomimetically just adds more actuation techniques to the portfolio. Valuable but not a terribly important breakthrough.

Looking at the system a bit further along the evolutionary timeline, where eyes start to develop, where the most primitive nervous systems and brains start, where higher-level processing is obviously occurring and inputs are starting to become sensations, we should be able to see what is changed or changing. It is the emergence of sensation we need to identify, even if the reaction is still an unconscious reflex. We don’t need to reverse engineer the human brain. Simple organisms are simpler to understand. Feeding the architectural insights we gain from studying those primitive systems into our guided evolution engines is likely to be a far faster means of generating true machine consciousness and strong AI. That’s how we could develop consciousness in a couple of years rather than 15.

If we can make primitive sensing devices that work like those in primitive organisms, and can respond to specific sorts of sensory input, then that is a potential way of increasing the coverage of cloud sensing and even actuation. It would effectively be a highly distributed direct response system. With clever embedding of emergent phenomena techniques (such as cellular automata, flocking etc.), it could be quite a sophisticated way of responding to quite complex distributed inputs, avoiding some of the need for big data processing. If we can gather the outputs from these simple sensors and feed them into others, that will be an even better sort of biomimetic response system. That sort of direct experience of a situation is very different from a data-mined result, especially if actuation capability is there too. The philosophical question as to whether the inclusion of that second bank of sensors makes the system in any way conscious remains, but it would certainly be very useful and valuable. The architecture we end up with via this approach may look like neurons, and could even be synthetic neurons, but that may be only one solution among many. Biology may have gone the neuron route but that doesn’t necessarily mean it is the only possibility. It may be that we could one day genetically modify bacteria to produce their own organic electronics to emulate the key processes needed to generate sensation, and to power them by consuming nutrients from their environment. I suggested smart yogurt based on this idea many years ago, and believe that it could achieve vast levels of intelligence.
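To make the cellular automata idea concrete, here is a minimal sketch (rule and layout invented for illustration) of how a row of dumb sensors with a purely local rule can produce a coordinated distributed response, with no central processing at all:

```python
def spread(alarms, steps=3):
    # 1-D cellular automaton: a cell raises its alarm if it already has,
    # or if either immediate neighbour has - a local rule, no central brain.
    cells = list(alarms)
    for _ in range(steps):
        cells = [
            1 if cells[i] or cells[max(i - 1, 0)] or cells[min(i + 1, len(cells) - 1)]
            else 0
            for i in range(len(cells))
        ]
    return cells

# One sensor (index 4) detects something; the alarm propagates locally.
print(spread([0, 0, 0, 0, 1, 0, 0, 0, 0]))
```

Each cell only ever looks at its neighbours, yet the system as a whole reacts to the event – the flavour of response meant here, without any big data processing in sight.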

Digitizing and collecting the signals from the system at each stage would generate lots of data, and that may be used by programs to derive other kinds of results, or to relay the inputs to other analog sensory systems elsewhere. (It isn’t always necessary to digitize signals to transmit them, but it helps limit signal degradation, quickly becomes important if the signal is to travel far, and is essential if it is to be recorded for later use or time shifting.) However, I strongly suspect that most of the value in analog sensing and direct response is local, coupled to direct action or local processing and storage.

If we have these sorts of sensors liberally spread around, we’d create a truly smart environment, with local sensing and some basic intelligence able to relay sensation remotely to other banks of sensors elsewhere for further processing or even, ultimately, consciousness. The local sensors could be relatively dumb like nerve endings on our skin, feeding signals into a more connected virtual nervous system, or a bit smarter, like retinal neurons, doing a lot of analog pre-processing before relaying signals onward via ganglion cells, and maybe forming part of a virtual brain. If they are also capable of or connected to some sort of actuation, then we would be constructing a kind of virtual organism, with tendrils covering potentially the whole globe, able to sense and interact with its environment in an intelligent way.

I use the term virtual not because the sensors wouldn’t be real, but because their electronic nature allows connectivity to many systems, overlapping, hierarchical or distinct. Any number of higher level systems could ‘experience’ them as part of its system, rather as if your fingers could be felt by the entire human population. Multiple higher level virtual organisms could share the same basic sensory/data inputs. That gives us a whole different kind of cloud sensing.

By doing processing locally, in the analog domain, and dealing with some of the response locally, a lot of network traffic and remote processing is avoided. Any post-processing that does occur can therefore build on a higher-level foundation. A nice side effect of avoiding all the extra transmission and processing is increased environmental friendliness.

So, we’d have a quite different sort of data network, collecting higher quality data, essentially doing by instinct what data mining does with huge server farms and armies of programmers. Cloudy, but much smarter than a straightforward sensor net.

… I think.

It isn’t without risk though. I had a phone discussion yesterday on the dangers of this kind of network. In brief, it’s dangerous.

The future of obsolescence

My regular readers will know I am not a big fan of ‘green’ policies. I want to protect the environment, and green policies invariably end up damaging it. These policies normally arise from taking too simplistic a view – assuming all parts of the environmental system are independent of each other, so each part can be addressed in isolation to improve the environment as a whole. As a systems engineer since graduation, I always look at the whole system over the whole life cycle, and when you do that, you can see why green policies usually don’t work.

Tackling the problem of rapid obsolescence is one of the big errors in environmentalism. The error here is that rapid obsolescence is not necessarily a problem. Although at first glance it may appear to cause excessive waste and unnecessary environmental damage, on deeper inspection it is very clear that it has actually driven technology through very rapid change, to the point where the same function can often be realized now with less material, less energy use, less pollution and less environmental impact. As the world gets richer and more people can afford to buy more things, it is a direct result of rapid obsolescence that those things have a lower environmental impact than they would if the engineering life cycle had run through fewer times.

A 150g smartphone replaces 750kg of 1990s IT. If the green policy of making things last longer and not replacing them had been in force back then, some improvement would still have arisen, but the chances are you would not have the smartphone or tablet, would still use a plasma TV, still need a hi-fi and camera, and you’d still have to travel in person to do a lot of the things your smartphone allows you to do wherever you are. In IT, rapid obsolescence continues: soon all your IT will be replaced by active contact lenses and a few grams of jewelry. If 7Bn people want a good quality of digitally enabled lifestyle, then letting them do so with 5 grams of materials and milliwatts of power is far better than using a ton of materials and kilowatts of power.

Rapid engineering progress lets us build safer bridges and buildings with less material, make cars that don’t rust after 3 years and run on less fuel, and has given us fridges and washing machines that use less energy. Yes, we throw things away, but thanks again to rapid obsolescence, the bits are now easily recyclable.

Whether greens like it or not, our way of throwing things away after a relatively short life cycle has been one of the greatest environmental successes of our age. Fighting against rapid obsolescence doesn’t make you a friend of the earth, it makes you its unwitting enemy.

The future of levitation

Futurologists are often asked about flying cars. There already are one or two, and one day there might be more, but they’ll probably only ever become as common as helicopters are today. Levitating cars will be more common, and will hover just above the ground, like the landspeeders in Star Wars, or just above a lower layer of cars. I need to be careful here – hovercraft were supposed to be the future, but they are hard to steer and to stop quickly, and that is probably why they didn’t take over as some people expected. Levitating cars won’t work either if we can’t solve those problems.

Maglev trains have been around for decades. Levitating cars won’t use anti-gravity in my lifetime, so magnetic levitation is the only obvious non-hovercraft means. They don’t actually need metal roads to fly over, although that is one mechanism. It is possible to contain a cushion of plasma and ride on that. OK, it is a bit hovercrafty, since it uses a magnetic skirt to keep the plasma in place, but at least it won’t need big fans and drafts. The same technique could work for a skateboard too.

Once we have magnetic plasma levitation working properly, we can start making all sorts of floating objects. We’ll have lots of drones by then anyway, but drones could levitate using plasma instead of using rotor blades. With plasma levitation, compound objects can be formed using clusters of levitating component parts. This can be quieter and more elegant than messy air jets or rotors.

Magnetic levitation doesn’t have very many big advantages over using wheels, but it still seems futuristic, and sometimes that is reason enough to do it. More than almost anything else, levitating cars and skateboards would bring the unmistakable message that the future has arrived. So we may see the levitating robots and toys and transport that we have come to expect in sci-fi.

To do it, we need strong magnetic fields, but they could be produced by high electrical currents in graphene circuits. Plasma is easy enough to make too. Electron pipes could do that, and could be readily applied as a coating to the underside of a car or any hard surface, rather like paint. We can’t do that bit yet, but a couple of decades from now it may well be feasible. By then most new cars will be self-driving and will drive very closely together, so the problems of stopping quickly or diverting from a path can be more easily solved. One by one, the problems with making levitating vehicles will disappear, and wheels may become obsolete. We still won’t have very many flying cars, but lots that float above the ground.
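As a sanity check on the field strengths involved, here is a back-of-envelope estimate. If we idealise the magnetically confined cushion as exerting a uniform magnetic pressure B²/2μ₀ that supports the car’s weight (a crude simplification, and the car mass and footprint below are just illustrative figures):

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, in T*m/A

def field_to_levitate(mass_kg, area_m2, g=9.81):
    # Pressure needed to support the weight, then B from P = B^2 / (2*mu0).
    pressure = mass_kg * g / area_m2
    return math.sqrt(2 * MU0 * pressure)

# Illustrative figures: a 1500 kg car floating on a 2 m x 4 m cushion.
b = field_to_levitate(1500, 8.0)
print(f"{b * 1000:.0f} mT")
```

The answer comes out at a few tens of millitesla – strong, but nothing like the multi-tesla fields of an MRI machine, which is why the idea isn’t obviously absurd.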

All in all, levitation has a future, just as we’ve been taught to expect by sci-fi.