
The future of cyberspace

I promised in my last blog to do one on the dimensions of cyberspace. I made this chart 15 years ago, split into two parts for easy reading. The dimensions it lists are still valid and I can't think of any new ones to add right now, though if I do think of more I'll make an update with a third part. I changed the name to virtuality because the chart really only covers human-accessed cyberspace, but I'm not entirely sure that was a good idea. It needs work.

cyberspace dimensions

cyberspace dimensions 2

The chart has 14 dimensions (control has two independent parts), and I identified some of the possible points on each. As dimensions should be, they are all orthogonal, i.e. independent of each other, so you can pick any point on one dimension and combine it with any point on each of the others. Standard augmented reality and pure virtual reality are just two of the roughly 2.5 x 10^11 combinations above. There are many more possible; this was never meant to be exhaustive, and even two more columns takes it to around 10 trillion combinations, at which point, if every person in the world tried a different one every minute, it would take a whole day to visit them all even briefly. Already I can see that one extra column could be ownership, another could be network implementation, another quality of illusion. What others have I missed?
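For anyone who wants to play with the arithmetic, a minimal sketch is below; the option counts per dimension are placeholders I have made up rather than the ones in the chart, so only the shape of the calculation matters:

```python
from math import prod

# Hypothetical option counts for the 14 dimensions; the real counts are in the chart above.
options_per_dimension = [6, 7, 5, 8, 6, 7, 6, 5, 9, 6, 7, 8, 5, 6]

combinations = prod(options_per_dimension)
print(f"{combinations:,} variants of cyberspace")   # roughly 1.9e11 with these made-up counts

# Two extra columns (say ownership and network implementation) multiply it again:
print(f"{combinations * 7 * 8:,} with two more dimensions")   # roughly 10 trillion
```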


The Future of IoT – virtual sensors for virtual worlds

I recently acquired a point-and-click thermometer for Futurizon, which gives an instant reading when you point it at something. I will soon know more about the world around me, though any personal discoveries I make are quite likely to be well known to science already. I don't expect to win a Nobel prize by discovering breaches of the second law of thermodynamics, but that isn't the point. The thermometer just measures the radiation emitted from a particular point in a particular frequency band, which indicates its temperature. It cost about £20, a pretty cheap stimulation tool to help me think about the future by understanding new things about the present. I have already confirmed that my computer screen doubles as a heater, but I suspected that already. Soon I'll know how much my head warms when I think hard, and, for the futurology bit, where the best places are to put thermal IoT devices.
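As an aside, the principle behind the reading can be sketched in a few lines. Real IR thermometers measure a single band (typically 8-14 µm) and rely on factory calibration; the version below is a deliberate simplification using the whole-spectrum Stefan-Boltzmann law, just to show how a temperature can be inferred from emitted radiation:

```python
# Simplified sketch: infer surface temperature from emitted thermal radiation.
# Real IR thermometers measure one band and are factory calibrated; this uses
# the whole-spectrum Stefan-Boltzmann law instead.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def temperature_from_exitance(exitance_w_per_m2: float, emissivity: float = 0.95) -> float:
    """Invert M = emissivity * sigma * T^4 to estimate temperature in kelvin."""
    return (exitance_w_per_m2 / (emissivity * SIGMA)) ** 0.25

# A surface radiating about 450 W/m^2 with emissivity 0.95 sits at roughly 302 K (~29 C)
print(temperature_from_exitance(450.0) - 273.15)
```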

Now that I am discovering the joys of remote sensing, I want to know so much more. Sure, you can buy satellites for a billion pounds that will monitor anything anywhere, and for a few tens of thousands you can buy quite sophisticated lab equipment. For a few tens of pounds, not so much is available, and I doubt the tax man will agree that Futurizon needs a high-end oscilloscope or mass spectrometer, so I have to set my sights low. The results of this blog justify the R&D tax offset for the thermometer. But the future will see costs fall for most high technologies, so I expect to get far more interesting kit cheaply soon.

Even starting from the frequent assumption that in the future you can do anything, you still have to think about what you want to do. I can get instant temperature readings now. In the future, I may also want a full absorption spectrum, color readings, texture and friction readings, hardness, flexibility, sound absorption characteristics, magnetic field strength, chemical composition, and a full range of biological measurements, just for fun. If Spock can have one, I want one too.

But that only covers reality, and reality will account for only a small proportion of our everyday life in the future. I may also want to check on virtual stuff, and that needs a different kind of sensor. I want to be able to point at things that exist only in virtual worlds. Such a sensor needs to see virtual worlds that are (at least partly) mapped onto real physical locations, and also those that are totally independent of the real world: roughly, augmented reality ones and virtual reality ones. Then it starts getting tricky, because augmented reality and virtual reality are just two members of a set of cyberspace variants that runs to more than ten trillion members. I might do another blog soon on what they are; it's too big a topic to detail here.

People will be most interested in sensors that pick up geographically linked cyberspace. Much of the imaginary stuff lives in virtual worlds in computer games or similar, and many of those have built-in sensors designed for their own spaces. So my character can detect caves or forts or shrines from about 500m away in the virtual world of Oblivion (yes, it is from ages ago, but it is still enjoyable). Most games have some sort of sensing built in to show you what is nearby and some of its properties.

Geographically linked cyberspace won't all be augmented reality, because some of it will be there for machines rather than people, but you might want sensors for it all the same, for many reasons: most likely navigating it, debugging, or tracking and identifying digital trespass. The last one is interesting. A rival company might well construct an augmented reality presence that lets you see their products alongside the ones in a physical shop. It doesn't even have to be a properly virtual environment; a web page is still a location in cyberspace, and when it is loaded, that instance takes on a geographic mapping via the display it appears on, so it is part of the same trespass. That is legal today, and it started many years ago when people began using Amazon to check for better prices while standing in a book shop. Today it is pretty ubiquitous. We need sensors that can detect it. It may be accepted today as fair competition, but it might one day be judged unfair competition by regulators for various reasons, and if so, they'll need some mechanism to police it. They'll need to be able to detect it. That is not easy if it is just a web page that only exists at that location for a few seconds. It is rather easier if it is a fixed augmented reality presence and you can download a map.
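What might such a sensor look like? Purely as a sketch, and assuming some registry of geo-anchored virtual content existed to query (none does today, which is rather the point), detection could start as nothing more than a proximity search around your physical position:

```python
# Hypothetical sketch of a 'cyberspace sensor': given a real-world position, list
# virtual objects anchored nearby. Assumes a registry of geo-anchored content
# exists to query; no such standard registry exists today.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def nearby_virtual_objects(registry, lat, lon, radius_m=500):
    """Return registry entries anchored within radius_m of (lat, lon)."""
    return [obj for obj in registry
            if haversine_m(lat, lon, obj["lat"], obj["lon"]) <= radius_m]

registry = [
    {"name": "rival AR shopfront", "lat": 51.5077, "lon": -0.1280},
    {"name": "game portal",        "lat": 51.5200, "lon": -0.1000},
]
print(nearby_virtual_objects(registry, 51.5074, -0.1278))   # only the shopfront is in range
```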

If for some reason a court does rule that digital trespass is illegal, one easy (though expensive) way of enforcing it would be to demand that all packets carry a geographic location, which the site would of course know when the person clicks on a link. To police that, turning off location would need to be blocked, or, if it were turned off, sites would not be permitted to send you material that might not be allowed at that location. I feel certain there would be better, cheaper and more effective solutions.
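A toy version of that policing rule might look like the following; the geographic packet field and the zone names are entirely hypothetical:

```python
from typing import Optional

# Toy sketch of the policing rule above: restricted content is withheld if the
# request declares no location, or declares a location where it is not allowed.
# Entirely hypothetical; packets carry no such geographic field today.

def may_serve(content_id: str, declared_zone: Optional[str], blocked_zones: dict) -> bool:
    blocked = blocked_zones.get(content_id, set())
    if not blocked:
        return True                  # unrestricted content is always allowed
    if declared_zone is None:
        return False                 # location switched off: withhold restricted content
    return declared_zone not in blocked

blocked_zones = {"rival_ar_shopfront": {"bookshop_47"}}
print(may_serve("rival_ar_shopfront", "bookshop_47", blocked_zones))  # False: on the premises
print(may_serve("rival_ar_shopfront", None, blocked_zones))           # False: location hidden
print(may_serve("rival_ar_shopfront", "high_street", blocked_zones))  # True
```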

I don’t intend to spend any longer exploring details here, but it is abundantly clear from just inspecting a few trees that making detectors for virtual worlds will be a very large and diverse forest full of dangers. Who should be able to get hold of the sensors? Will they only work in certain ‘dimensions’ of cyberspace? How should the watchers be watched?

The most interesting thing I can find, though, is that being able to detect cyberspace would allow new kinds of adventures and apps. You could walk through a doorway that also happens to double as a portal between many virtual universes, and you might not be able to make that jump from any other physical location. You might see future high street outlets that are nothing more than teleport chambers for cyberspace worlds. They might be stuffed with virtual internet-of-things things and not one of them physical. Now that's fun.

 

Ground up data is the next big data

This one has sat in my drafts folder since February, so I guess it's time to finish it.

Big Data: I expect you're as sick of hearing that term as I am. It means gathering loads of data on everything that you, your company, or anything else you can access can detect, measure or record, then analyzing the hell out of it using data mining, an equally irritating term.

I long ago had a quick Twitter exchange with John Hewitt, who suggested: "What is sensing but the energy-constrained competition for transmission to memory, as memory is but that for expression?" Neurons compete to see who gets listened to. Yeah, but I am still not much wiser as to what sensing actually is. Maybe I need a brain upgrade. (It's like magnets. I used to be able to calculate the magnetic field densities around complicated-shaped objects – it was part of my first job in missile design – but even though I could do all the equations of EM theory, even general relativity, I am still no wiser as to how a magnetic field actually becomes a force on an object. I have an office littered with hundreds of neodymium magnets, I spend hours playing with them, and I still don't understand.) I can read about neurons all day, but I still don't understand how a bunch of photons triggering a series of electrochemical reactions results in me experiencing an image. How does the physical detection become a conscious experience?

Well, I wrote a while back that we could achieve a conscious computer within two years. It's still two years, because nobody has started using the right approach yet. I have to stress the 'could', because nobody actually intends to do it in that time frame, but I really believe some half-decent lab could if they tried. (Putting that into perspective, Kurzweil and his gang at Google are looking at 2029.) That two-year estimate relies heavily on evolutionary development, for me the preferred option when you don't understand how something works, as is the case with consciousness. It is pretty easy to design conscious computers at a black-box level; the devil is in the detail. I argued that you could make a conscious computer by using internally focused sensing to detect processes inside the brain, and a sensor structure with a symmetrical feedback loop. Read it:

https://timeguide.wordpress.com/2013/12/28/we-could-have-a-conscious-machine-by-end-of-play-2015/

In a nutshell, if you could feel your thoughts in the same way as you feel external stimuli, you would be conscious. I think. The symmetrical feedback loop bit is just a small engineering insight.
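To make the black-box idea a little more concrete, here is a toy caricature of my own, not the engineering from the linked post: a single sensing pathway that is pointed both at the outside world and at the system's own internal state.

```python
# Toy caricature only: one sensing pathway, used both outwardly and inwardly.

class SymmetricSensor:
    def __init__(self):
        self.state = 0.0

    def sense(self, signal: float) -> float:
        # the single shared pathway: incoming signals nudge the internal state
        self.state = 0.9 * self.state + 0.1 * signal
        return self.state

    def step(self, external_stimulus: float) -> float:
        self.sense(external_stimulus)       # feel the outside world...
        return self.sense(self.state)       # ...then 'feel the feeling' through the same pathway

s = SymmetricSensor()
print([round(s.step(x), 3) for x in (1.0, 1.0, 0.0, 0.0)])
```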

The missing link in that is still the same one: how does sensing work? How do you feel?

At a superficial level, you point a sensor at something and it produces a signal that bears some sort of relationship to whatever it is meant to sense. We can do that bit. We understand that. Your ear produces signals according to the frequencies and amplitudes of incoming sound waves, a bit like a microphone. Just the same so far. However, it is only via some undefined later processes that you consciously experience the sound. How? That is the hard problem of consciousness, and it applies to AI too. It isn't just me that doesn't know the answer. 'How does red feel?' is a more commonly used variant of the same question.

When we solve that, the solution will replace big data as 'the next big thing'. If we can make sensor systems that experience or feel something rather than just producing a signal, that is valuable already. If those sensors pool their shared experience, another similar sensor system could experience that too. Basic data quickly transmutes into experience, knowledge, understanding and insight, and very quickly into value, lots of it. Artificial neural nets go some way towards that, but they still lack consciousness. Simulated neural networks don't get beyond a pretty straightforward computation, putting all the inputs into an equation. The true sensing bit is missing. The complex adaptive analog neural nets in our brains clearly achieve something deeper than a man-made neural network.
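To see what I mean by 'putting all the inputs into an equation', here is a simulated neuron in its entirety; however many of these you wire together, nothing in the computation feels anything:

```python
# A simulated neuron: a weighted sum pushed through a squashing function.
import math

def artificial_neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))   # sigmoid 'firing rate'

print(artificial_neuron([0.5, 0.2, 0.9], [1.2, -0.7, 0.3], bias=0.1))
```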

Meanwhile, most current AI work barks up a tree in a different forest. IBM’s Watson will do great things; Google’s search engine AI will too. But they aren’t conscious and can’t be. They’re just complicated programs running on digital processors, with absolutely zero awareness of anything they are doing. Digital programs on digital computers will never achieve any awareness, no matter how fast the chips are.

However, back in the biological realm, nature manages just fine, so biomimetics offers a lot of hope. We know we didn't get from a pool of algae to humans in one go. At some point, organisms started moving in response to light, chemical gradients, heat and touch. That most basic process of sensing may have started out coupled to internal processes that caused movement without any consciousness. But if we can understand the analog processes (electrochemical, electronic, mechanical) that take a stimulus through to a response, and can replicate them using our electronic technology, we would already have actuator circuits, even if we don't have any form of sensation or consciousness yet. A great deal of this science has been done already, of course. The computational side of most chemical and physical processes can be emulated electronically by one means or another. Actuators will be a very valuable part of the cloud, but we already have the ability to make actuators by more conventional means, so doing it organically or biomimetically just adds more actuation techniques to the portfolio. Valuable, but not a terribly important breakthrough.

Looking at the system a bit further along the evolutionary timeline, where eyes start to develop, where the most primitive nervous systems and brains appear, where higher-level processing is obviously occurring and inputs are starting to become sensations, we should be able to see what has changed or is changing. It is the emergence of sensation we need to identify, even if the reaction is still an unconscious reflex. We don't need to reverse engineer the human brain. Simple organisms are simpler to understand. Feeding the architectural insights we gain from studying those primitive systems into our guided evolution engines is likely to be a far faster route to true machine consciousness and strong AI. That's how we could develop consciousness in a couple of years rather than 15.

If we can make primitive sensing devices that work like those in primitive organisms, and that can respond to specific sorts of sensory input, then that is a potential way of increasing the coverage of cloud sensing and even actuation. It would effectively be a highly distributed direct response system. With clever embedding of emergent phenomena techniques (such as cellular automata, flocking and so on), it could be a quite sophisticated way of responding to quite complex distributed inputs, avoiding some of the need for big data processing. If we can gather the outputs from these simple sensors and feed them into others, that will be an even better sort of biomimetic response system. That sort of direct experience of a situation is very different from a data-mined result, especially if actuation capability is there too. The philosophical question as to whether the inclusion of that second bank of sensors makes the system in any way conscious remains open, but it would certainly be very useful and valuable. The architecture we end up with via this approach may look like neurons, and could even be synthetic neurons, but that may be only one solution among many. Biology may have gone the neuron route, but that doesn't necessarily mean it is the only possibility. It may be that we could one day genetically modify bacteria to produce their own organic electronics to emulate the key processes needed to generate sensation, and to power themselves by consuming nutrients from their environment. I suggested smart yogurt based on this idea many years ago, and believe that it could achieve vast levels of intelligence.
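As a crude illustration of the emergent-phenomena idea, imagine a ring of simple sensor nodes, each reacting only to its own reading and its two neighbours; a clean collective response emerges without any central crunching of the whole data set:

```python
# A one-dimensional cellular automaton over a ring of sensor nodes.
# Each node sees only itself and its two neighbours, yet noise gets cleaned up
# into coherent regions without any central processing.

def step(states):
    """Majority rule: each node adopts the majority value of itself and its neighbours."""
    n = len(states)
    return [1 if states[(i - 1) % n] + states[i] + states[(i + 1) % n] >= 2 else 0
            for i in range(n)]

# A noisy set of triggered sensors (1 = stimulus detected) settles into clean regions
states = [0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0]
for _ in range(3):
    states = step(states)
print(states)   # [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
```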

Digitizing and collecting the signals from the system at each stage would generate lots of data, which could be used by programs to derive other kinds of results, or to relay the inputs to other analog sensory systems elsewhere. (It isn't always necessary to digitize signals to transmit them, but it helps limit signal degradation, quickly becomes important if the signal is to travel far, and is essential if it is to be recorded for later use or time-shifting.) However, I strongly suspect that most of the value in analog sensing and direct response is local, coupled to direct action or to local processing and storage.

If we have these sorts of sensors liberally spread around, we'd create a truly smart environment, with local sensing and some basic intelligence able to relay sensation remotely to other banks of sensors elsewhere for further processing or even, ultimately, consciousness. The local sensors could be relatively dumb, like nerve endings in our skin, feeding signals into a more connected virtual nervous system, or a bit smarter, like retinal cells, doing a lot of analog pre-processing before relaying signals via ganglion cells, perhaps into part of a virtual brain. If they are also capable of, or connected to, some sort of actuation, then we would be constructing a kind of virtual organism, with tendrils covering potentially the whole globe, able to sense and interact with its environment in an intelligent way.
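A minimal sketch of that 'retinal cell' idea, with the thresholds and numbers invented purely for illustration: each local node pre-processes its own raw readings and only relays a compact summary upstream when something changes enough to matter.

```python
# Local pre-processing sketch: raw readings stay at the node; only summary
# events of significant change are relayed upstream. Thresholds are made up.

class LocalNode:
    def __init__(self, threshold: float = 2.0):
        self.baseline = None
        self.threshold = threshold

    def observe(self, reading: float):
        """Return a summary event if the reading departs from the running baseline."""
        if self.baseline is None:
            self.baseline = reading
            return None
        event = None
        if abs(reading - self.baseline) > self.threshold:
            event = {"change": reading - self.baseline, "new_level": reading}
        self.baseline = 0.8 * self.baseline + 0.2 * reading   # slow adaptation, like habituation
        return event

node = LocalNode()
for r in [20.1, 20.3, 20.2, 27.5, 27.6]:
    event = node.observe(r)
    if event:
        print(event)    # only the interesting readings ever leave the node
```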

I use the term virtual not because the sensors wouldn't be real, but because their electronic nature allows connectivity to many systems, overlapping, hierarchical or distinct. Any number of higher-level systems could 'experience' them as part of their own system, rather as if your fingers could be felt by the entire human population. Multiple higher-level virtual organisms could share the same basic sensory/data inputs. That gives us a whole different kind of cloud sensing.

By doing processing locally, in the analog domain, and dealing with some of the response locally, a lot of network traffic and a lot of remote processing are avoided. Any post-processing that does occur can then build on a higher-level foundation. A nice side effect of avoiding all the extra transmission and processing is increased environmental friendliness.

So, we’d have a quite different sort of data network, collecting higher quality data, essentially doing by instinct what data mining does with huge server farms and armies of programmers. Cloudy, but much smarter than a straightforward sensor net.

… I think.

It isn’t without risk though. I had a phone discussion yesterday on the dangers of this kind of network. In brief, it’s dangerous.