Category Archives: interfaces

The future of X-People

There is an abundance of choice for X in my ‘future of’ series, but most options are sealed off. I can’t do naughty stuff because I don’t want my blog to get blocked, so that’s one huge category gone. X-rays are boring, even though x-ray glasses using augmented reality… nope, that’s back to the naughty category again. I won’t stoop to covering X-Factor, so that only leaves X-Men, as in the films, which I admit to enjoying, however silly they are.

My first observation is how strange X-Men sounds. Half of them are female. So I will use X-People. I hate political correctness, but I hate illogical nomenclature even more.

My second one is that some readers may not be familiar with the X-Men, so I guess I’d better introduce the idea. Basically they are a large set of mutants or transhumans with very varied superhuman or supernatural capabilities, most of which defy physics, chemistry, biology, or all three. Essentially they are low-grade superheroes whose main purpose is to show off special effects. OK, fun-time!

There are several obvious options for achieving X-People capabilities:

Genetic modification, including using synthetic biology or other biotech. This would allow people to be stronger, faster, fitter, prettier, more intelligent or able to eat unlimited chocolate without getting fat. The last one will be the most popular upgrade. However, now that we have started converging biotech with IT, it won’t be long before it is possible to add telepathy to the list. Thought recognition and nerve stimulation are two sides of the same technology. Starting with thought control of appliances or interfaces, the world’s networked knowledge would soon be available to you just by thinking about something. You could easily send messages using thought control and someone else could hear them synthesized into an earpiece, but later it could be direct thought stimulation. Eventually, you’d have totally shared consciousness. None of that defies biology or physics, and it will happen mid-century. Storing your own thoughts and effectively extending your mind into the cloud would allow people to make their minds part of the network resources. Telepathy will be an everyday ability for many people, but only with others who are suitably equipped; it won’t become easy to read the minds of people who lack the technology. It will be interesting to see whether only a few people go that route or most people. Either way, 2050 X-People can easily have telepathy, control objects around them just by thinking, share minds with others and maybe even control other people, hopefully consensually.

Nanotechnology, using nanobots etc to achieve possibly major alterations to your form, or to affect others or objects. Nanotechnology is another word for magic as far as many sci-fi writers go. Being able to rearrange things on an individual atom basis is certainly fuel for fun stories, but it doesn’t allow you to do things like changing objects into gold or people into stone statues. There are plenty of shape-shifters in sci-fi but in reality, chemical bonds absorb or release energy when they are changed and that limits how much change can be made in a few seconds without superheating an object. You’d also need a LOT of nanobots to change a whole person in a few seconds. Major changes in a body would need interim states to work too, since dying during the process probably isn’t desirable. If you aren’t worried about time constraints and can afford to make changes at a more gentle speed, and all you’re doing is changing your face, skin colour, changing age or gender or adding a couple of cosmetic wings, then it might be feasible one day. Maybe you could even change your skin to a plastic coating one day, since plastics can use atomic ingredients from skin, or you could add a cream to provide what’s missing. Also, passing some nanobots to someone else via a touch might become feasible, so maybe you could cause them to change involuntarily just by touching them, again subject to scope and time limits. So nanotech can go some way to achieving some X-People capabilities related to shape changing.

Moving objects using telekinesis is rather less likely. Thought-controlling a machine to move a rock is easy; moving an unmodified rock or a dumb piece of metal just by concentrating on it is beyond any technology yet on the horizon. I can’t think of any mechanism by which it could be done. Nor can I think of ways of causing things to just burst into flames without using some sort of laser or heat ray. Nor can I see how megawatt lasers could be comfortably implanted in ordinary eyes. These deficiencies might be just my lack of imagination, but I suspect they are actually not feasible. Quite a few of the X-Men have these sorts of powers, but they might have to stay in sci-fi.

Virtual reality, where you possess the power in a virtual world, which may be shared with others. Well, many computer games give players supernatural powers or let them take on various forms, and it’s obvious that many will do so in VR too. If you can imagine it, then someone can get the graphics chips to make it happen in front of your eyes. There are no hard physics or biology barriers in VR. You can do what you like. Shared gaming or socializing environments can be very attractive and it is not uncommon for people to spend almost every waking hour in them. Role playing lets people do or be things they can’t in the real world. They may want to be a superhero, or they might just want to feel younger, look different or try being another gender. When they look in a mirror in the VR world, they would see the person they want to be, and that could make it very compelling compared to harsh reality. I suspect that some people will spend most of their free time in VR, living a parallel fantasy life that is as important to them as their ‘real’ one. In their fantasy world, they can be anyone and have any powers they like. When they share the world with other people or AI characters, rules start to appear, because different people have different tastes and desires. That means that there will be various shared virtual worlds with different cultures, freedoms and restrictions.

Augmented reality, where you possess the power in a virtual world but in ways that interact with the physical world, is a variation on VR that blends more with reality. You might have a magic wand that changes people into frogs. The wand could be just a stick, but the victim could be a real person, and the change would happen only in the augmented reality. The change could be one-sided – they might not even know that you now see them as a frog – or it could be part of a large shared culture where other people in the community now see and treat them as a frog. The scope of such cultures is very large and arbitrary cultural rules could apply. They could include a lot of everyday life – shopping, banking, socializing, entertainment, sports… That means effects could be wide-ranging, with varying degrees of reality overlap or permanence. Depending on how much of their lives people live within those cultures, virtual effects could have quite real consequences. I do think that augmented reality will eventually have much more profound long-term effects on our lives than the web.

Controlled dreaming, where you can do pretty much anything you want and be in full control of the direction your dream takes. This is effectively computer-enhanced lucid dreaming with literally all the things you could ever dream of. But other people can dream of extra things that you may never have dreamt of and it allows you to explore those areas too.  In shared or connected dreams, your dreams could interact with those of others or multiple people could share the same dream. There is a huge overlap here with virtual reality, but in dreams, things don’t get the same level of filtration and reality is heavily distorted, so I suspect that controlled dreams will offer even more potential than VR. You can dream about being in VR, but you can’t make a dream in VR.

X-People will be very abundant in the future. We might all be X-People most of the time, routinely doing things that are pure sci-fi today. Some will be real, some will be virtual, some will be in dreams, but mostly, thanks to high quality immersion and the social power of shared culture, we probably won’t really care which is which.

The future of virtual reality

I first covered this topic in 1991 or 1992, can’t recall, when we were playing with the Virtuality machines. I got a bit carried away, did the calculations on processing power requirements for decent images, and announced that VR would replace TV as our main entertainment by about 2000. I still use that as my best example of things I didn’t get right.

I have often considered why it didn’t take off as we expected. There are two very plausible explanations and both might apply somewhat to the new launches we’re seeing now.

1. It did happen, just differently. People are using excellent pseudo-3D environments in computer games, and that is perfectly acceptable; they simply don’t need full-blown VR. Just as 3DTV hasn’t turned out to be very popular compared to regular TV, so wandering around a virtual world doesn’t necessarily require VR. TV or PC monitors are perfectly adequate, in conjunction with the cooperative human brain, to convey the important bits of the virtual world illusion.

2. Early 1990s VR headsets reportedly gave some people eye strain or psychological distortions that persisted long enough after sessions to present potential dangers. This meant corporate lawyers would have been warning about potentially vast class action suits with every kid that develops a squint blaming the headset manufacturers, or when someone walked under a bus because they were still mentally in a virtual world. If anything, people are far more likely to sue for alleged negative psychological effects now than back then.

My enthusiasm for VR hasn’t gone away. I still think it has great potential. I just hope the manufacturers are fully aware of these issues and have dealt with or are dealing with them. It would be a great shame indeed if a successful launch is followed by rapid market collapse or class action suits. I hope they can avoid both problems.

The porn industry is already gearing up to capitalise on VR, and the more innocent computer games markets too. I spend a fair bit of my spare time in the virtual worlds of computer games. I find games far more fun than TV, and adding more convincing immersion and better graphics would be a big plus. In the further future, active skin will allow our nervous systems to be connected into the IT too, recording and replaying sensations so VR could become full sensory. When you fight an enemy in a game today, the controller might vibrate if you get hit or shot. If you could feel the pain, you might try a little harder to hide. You may be less willing to walk casually through flames if they hurt rather than just making a small drop in a health indicator or you might put a little more effort into kindling romances if you could actually enjoy the cuddles. But that’s for the next generation, not mine.

VR offers a whole new depth of experience, but it did in 1991 too. It failed the first time; let’s hope this time the technology brings the benefits without the drawbacks and can succeed.

The future of ukuleles

Well, actually stringed instruments generally, but I needed a U and I didn’t want to do universities or the UN again and certainly not unicorns, so I cheated slightly. I realize that other topics starting with U may exist, but I didn’t do much research and I needed an excuse to write up this new idea.

If I was any good at making electronics, I’d have built a demo of this, but I have only soldered 6 contacts in my life, and 4 of those were dry joints, and I know when to quit.

My idea is very simple indeed: put accelerometers on the strings. Some quick googling suggests the idea is novel.

There are numerous electric guitars, violins and probably ukuleles. They use a variety of pickups. Many are directly underneath the strings; some use accelerometers on the other side of the bridge or elsewhere on the body. In most instruments, the body is heavily involved in the overall sound production, so I wouldn’t want to replace the pickups on the body. However, adding accelerometers to the strings would give another data source with quite different characteristics. There could be just one, or several, placed at specific locations along each string. If they are too heavy, they would change the sound too much, but some now are far smaller than the eye of a needle. If they are fixed onto the string, it would need a little re-tuning, but shouldn’t destroy the sound quality. The benefit is that accelerometers on the strings would provide data not available via other pickups. They would represent the string activity more directly than a pickup on the body. This could be used as valuable input to the overall signal mix used in the electronic sound output. Having more data available is generally a good thing.
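If anyone does wire this up, the electronic mixing stage itself is trivial. Here is a minimal sketch, assuming both streams are digitized at the same sample rate; the function name, sample values and weighting are purely illustrative:

```python
# Hypothetical sketch: blending a string-mounted accelerometer stream with a
# conventional body pickup. All names and weights here are illustrative only.

def mix_signals(pickup, accel, accel_weight=0.3):
    """Weighted per-sample mix of two equal-length sample streams."""
    if len(pickup) != len(accel):
        raise ValueError("streams must share the same sample rate and length")
    w = accel_weight
    return [(1.0 - w) * p + w * a for p, a in zip(pickup, accel)]

# Example: a body-pickup stream and an accelerometer stream, same length
pickup = [0.0, 0.5, 1.0, 0.5, 0.0]
accel  = [0.0, 0.8, 0.2, 0.8, 0.0]
mixed = mix_signals(pickup, accel)
```

A real instrument would do this per string and probably in the analog domain or a DSP, but the principle is the same: the accelerometer data is just one more weighted input to the output mix.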

What would the new sound be like? I don’t know. If it is very different from the sound using conventional pickups, it might even open up potential for new kinds of electric instrument.

If you do experiment with this, please do report back on the results.

The future of cyberspace

I promised in my last blog to do one on the dimensions of cyberspace. I made this chart 15 years ago, in two parts for easy reading, and the dimensions it lists are still valid; I can’t think of any new ones to add right now, but I might think of some more and make an update with a third part. I changed the name to virtuality because the chart actually only covers human-accessed cyberspace, but I’m not entirely sure that was a good thing to do. Needs work.

cyberspace dimensions

cyberspace dimensions 2

The chart has 14 dimensions (control has two independent parts), and I identified some of the possible points on each dimension. As dimensions are meant to be, they are all orthogonal, i.e. they are independent of each other, so you can pick any point on any dimension and combine it with any point from each of the others. Standard augmented reality and pure virtual reality are two of the potential combinations, out of the 2.5 x 10^11 possibilities above. At that rate, if every person in the world tried a different one every minute, it would still take over half an hour to visit them all even briefly. There are many more possible; this was never meant to be exhaustive, and even two more columns makes it 10 trillion combos. Already I can see that one more column could be ownership, another could be network implementation, another could be quality of illusion. What others have I missed?
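For anyone who wants to play with the combinatorics, the total is just the product of the per-dimension option counts. The counts below are hypothetical (the real ones are in the chart) and are chosen only to land near the quoted 2.5 x 10^11 figure:

```python
from math import prod

# Hypothetical option counts for the 14 dimensions; the real counts are in
# the chart, these are chosen only to land near the quoted 2.5e11 total.
options_per_dimension = [7, 6, 7, 6, 7, 6, 7, 6, 7, 6, 7, 6, 7, 6]
combos = prod(options_per_dimension)        # ~2.3e11 combinations

world_population = 7_000_000_000            # assumed ~2014 population
minutes_each = combos / world_population    # one new combination per minute
print(f"{combos:.2e} combinations, ~{minutes_each:.0f} minutes per person")
```

Adding two more columns of similar size multiplies the total by roughly another factor of 40, which is where the 10 trillion figure comes from.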

Ground up data is the next big data

This one sat in my draft folder since February, so I guess it’s time to finish it.

Big Data – I expect you’re as sick of hearing that term as I am. Gathering loads of data on everything you or your company or anything else you can access can detect, measure, record, then analyzing the hell out of it using data mining, an equally irritating term.

I long ago had a quick Twitter exchange with John Hewitt, who suggested “What is sensing but the energy-constrained competition for transmission to memory, as memory is but that for expression?”. Neurons compete to see who gets listened to. Yeah, but I am still not much wiser as to what sensing actually is. Maybe I need a brain upgrade. (It’s like magnets. I used to be able to calculate the magnetic field densities around objects with complicated shapes – it was part of my first job in missile design – but even though I could do all the equations around EM theory, even general relativity, I am still no wiser about how a magnetic field actually becomes a force on an object. I have an office littered with hundreds of neodymium magnets and I spend hours playing with them and I still don’t understand.) I can read about neurons all day but I still don’t understand how a bunch of photons triggering a series of electro-chemical reactions results in me experiencing an image. How does the physical detection become a conscious experience?

Well, I wrote some while back that we could achieve a conscious computer within two years. It’s still two years, because nobody has started using the right approach yet. I have to stress the ‘could’, because nobody actually intends to do it in that time frame, but I really believe some half-decent lab could if they tried. (Putting that into perspective, Kurzweil and his gang at Google are looking at 2029.) That two-year estimate relies heavily on evolutionary development, for me the preferred option when you don’t understand how something works, as is the case with consciousness. It is pretty easy to design conscious computers at a black box level; the devil is in the detail. I argued that you could make a conscious computer by using internally focused sensing to detect processes inside the brain, and using a sensor structure with a symmetrical feedback loop. Read it:

http://timeguide.wordpress.com/2013/12/28/we-could-have-a-conscious-machine-by-end-of-play-2015/

In a nutshell, if you can feel thoughts in the same way as you feel external stimuli, you’d be conscious. I think. The symmetrical feedback loop bit is just a small engineering insight.

The missing link in that is still the same one: how does sensing work? How do you feel?

At a superficial level, you point a sensor at something and it produces a signal in some sort of relationship to whatever it is meant to sense. We can do that bit. We understand that. Your ear produces signals according to the frequencies and amplitudes of incoming sound waves, a bit like a microphone. Just the same so far. However, it is by some undefined processes later that you consciously experience the sound. How? That is the hard problem in AI. It isn’t just me that doesn’t know the answer. ‘How does red feel?’ is a more commonly used variant of the same question.

When we solve that, it will replace big data as ‘the next big thing’. If we can make sensor systems that experience or feel something rather than just producing a signal, that’s valuable already. If those sensors pool their shared experience, another similar sensor system could experience that. Basic data quickly transmutes into experience, knowledge, understanding, insight and, very quickly, value, lots of it. Artificial neural nets go some way to doing that, but they still lack consciousness. Simulated neural networks can’t get beyond a pretty straightforward computation, putting all the inputs into an equation. The true sensing bit is missing. The complex adaptive analog neural nets in our brain clearly achieve something deeper than a man-made neural network.
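To show what I mean by ‘putting all the inputs into an equation’, here is the entire computation a standard simulated neuron performs, a weighted sum pushed through a squashing function; whatever the network ‘senses’, each unit only ever evaluates this:

```python
import math

# A standard simulated neuron: the 'straightforward computation' described
# above. Weights, inputs and bias below are arbitrary illustrative values.
def neuron(inputs, weights, bias):
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))   # logistic squashing

out = neuron([0.5, -1.2, 3.0], [0.4, 0.1, -0.2], bias=0.05)
```

However sophisticated the training, the unit itself never experiences anything; it just maps numbers to a number, which is the gap between computation and sensing that the text is pointing at.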

Meanwhile, most current AI work barks up a tree in a different forest. IBM’s Watson will do great things; Google’s search engine AI will too. But they aren’t conscious and can’t be. They’re just complicated programs running on digital processors, with absolutely zero awareness of anything they are doing. Digital programs on digital computers will never achieve any awareness, no matter how fast the chips are.

However, back in the biological realm, nature manages just fine, so biomimetics offers a lot of hope. We know we didn’t get from a pool of algae to humans in one go. At some point, organisms started moving according to light, chemical gradients, heat, touch. That most basic process of sensing may have started out coupled to internal processes that caused movement without any consciousness. But if we can understand the analog processes (electrochemical, electronic, mechanical) that take the stimulus through to a response, and can replicate them using our electronic technology, we would already have actuator circuits, even if we don’t have any form of sensation or consciousness yet. A great deal of this science has been done already of course. The computational side of most chemical and physical processes can be emulated electronically by some means or another. Actuators will be a very valuable part of the cloud, but we already have the ability to make actuators by more conventional means, so doing it organically or biomimetically just adds more actuation techniques to the portfolio. Valuable, but not a terribly important breakthrough.

Looking at the system a bit further along the evolutionary timeline, where eyes start to develop, where the most primitive nervous systems and brains start, where higher level processing is obviously occurring and inputs are starting to become sensations, we should be able to see what is changed or changing. It is the emergence of sensation we need to identify, even if the reaction is still an unconscious reflex. We don’t need to reverse engineer the human brain. Simple organisms are simpler to understand. Feeding the architectural insights we gain from studying those primitive systems into our guided evolution engines is likely to be far faster as a means of generating true machine consciousness and strong AI. That’s how we could develop consciousness in a couple of years rather than 15.

If we can make primitive sensing devices that work like those in primitive organisms, and can respond to specific sorts of sensory input, then that is a potential way of increasing the coverage of cloud sensing and even actuation. It would effectively be a highly distributed direct response system. With clever embedding of emergent phenomena techniques (such as cellular automata, flocking etc), it could be a quite sophisticated way of responding to quite complex distributed inputs, avoiding some of the need for big data processing. If we can gather the outputs from these simple sensors and feed them into others, that will be an even better sort of biomimetic response system. That sort of direct experience of a situation is very different from a data mined result, especially if actuation capability is there too. The philosophical question as to whether the inclusion of that second bank of sensors makes the system in any way conscious remains, but it would certainly be very useful and valuable. The architecture we end up with via this approach may look like neurons, and could even be synthetic neurons, but that may be only one solution among many. Biology may have gone the neuron route but that doesn’t necessarily mean it is the only possibility. It may be that we could one day genetically modify bacteria to produce their own organic electronics to emulate the key processes needed to generate sensation, and to power them by consuming nutrients from their environment. I suggested smart yogurt based on this idea many years ago, and believe that it could achieve vast levels of intelligence.
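As a concrete instance of one of the emergent phenomena techniques mentioned above, here is a minimal elementary cellular automaton, where a trivially simple local rule applied to neighbouring cells generates complex global behaviour from distributed inputs:

```python
# Minimal elementary cellular automaton on a ring of binary cells. Each cell's
# next state depends only on itself and its two neighbours, yet rules such as
# rule 110 produce famously complex global patterns from this local process.

def ca_step(cells, rule=110):
    """Advance a ring of 0/1 cells one step under an elementary CA rule."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (centre << 1) | right   # neighbourhood 0..7
        nxt.append((rule >> pattern) & 1)               # rule bit for pattern
    return nxt

row = [0, 0, 0, 1, 0, 0, 0]   # a single active cell
for _ in range(3):
    row = ca_step(row)
```

Each cell only ever looks at its immediate neighbours, which is exactly the kind of cheap local rule a bank of dumb distributed sensors could run without any central big data processing.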

Digitizing and collecting the signals from the system at each stage would generate lots of  data, and that may be used by programs to derive other kinds of results, or to relay the inputs to other analog sensory systems elsewhere. (It isn’t always necessary to digitize signals to transmit them, but it helps limit signal degradation and quickly becomes important if the signal is to travel far and is essential if it is to be recorded for later use or time shifting). However, I strongly suspect that most of the value in analog sensing and direct response is local, coupled to direct action or local processing and storage.

If we have these sorts of sensors liberally spread around, we’d create a truly smart environment, with local sensing and some basic intelligence able to relay sensation remotely to other banks of sensors elsewhere for further processing or even ultimately consciousness. The local sensors could be relatively dumb like nerve endings on our skin, feeding signals into a more connected virtual nervous system, or a bit smarter, like neural retinal cells, doing a lot of analog pre-processing before relaying them via ganglion cells, and maybe forming part of a virtual brain. If they are also capable of or connected to some sort of actuation, then we would be constructing a kind of virtual organism, with tendrils covering potentially the whole globe, and able to sense and interact with its environment in an intelligent way.

I use the term virtual not because the sensors wouldn’t be real, but because their electronic nature allows connectivity to many systems, overlapping, hierarchical or distinct. Any number of higher level systems could ‘experience’ them as part of its system, rather as if your fingers could be felt by the entire human population. Multiple higher level virtual organisms could share the same basic sensory/data inputs. That gives us a whole different kind of cloud sensing.

By doing processing locally, in the analog domain, and dealing with some of the response locally, a lot of network traffic and remote processing is avoided. Any post-processing that does occur can therefore build on a higher-level foundation. A nice side effect of avoiding all the extra transmission and processing is increased environmental friendliness.
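A toy sketch of that local-processing idea: a sensor node that keeps raw readings to itself and only relays significant changes to the network. The class name and threshold are illustrative only:

```python
# Hedged sketch of edge-style local processing: a sensor node that suppresses
# insignificant readings locally, so only meaningful changes cross the network.

class LocalSensorNode:
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.last_sent = None
        self.sent = []          # stand-in for messages put on the network

    def read(self, value):
        # Local decision: relay only a meaningful change; handle the rest here.
        if self.last_sent is None or abs(value - self.last_sent) >= self.threshold:
            self.sent.append(value)
            self.last_sent = value

node = LocalSensorNode(threshold=0.5)
for reading in [20.0, 20.1, 20.2, 21.0, 21.1, 25.0]:
    node.read(reading)
# Only the significant changes reach the network: [20.0, 21.0, 25.0]
```

Six raw readings become three network messages, and any remote post-processing then starts from the meaningful events rather than the raw stream.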

So, we’d have a quite different sort of data network, collecting higher quality data, essentially doing by instinct what data mining does with huge server farms and armies of programmers. Cloudy, but much smarter than a straightforward sensor net.

… I think.

It isn’t without risk though. I had a phone discussion yesterday on the dangers of this kind of network. In brief, it’s dangerous.

Alcohol-free beer goggles

You remember that person you danced with and thought was wonderful, and then you met them the next day and your opinion was less favorable? That’s what people call beer goggles. Alcohol impairs judgment. It makes people chattier and improves their self confidence, but also makes them think others are more physically attractive and more interesting too. That’s why people get drunk apparently, because it upgrades otherwise dull people into tolerable company, breaking the ice and making people sociable and fun.

Augmented reality visors could double as alcohol-free beer goggles. When you look at someone  while wearing the visor, you wouldn’t have to see them warts and all. You could filter the warts. You could overlay their face with an upgraded version, or indeed replace it with someone else’s face. They wouldn’t even have to know.

The arms of the visor could house circuits to generate high intensity oscillating magnetic fields – transcranial magnetic stimulation. This has been demonstrated as a means of temporarily switching off certain areas of the brain, or at least reducing their effects. Among the areas concerned are those involved in inhibition. Alcohol does that normally, but you can’t drink tonight, so your visor can achieve the same effect for you.

So the nominated driver could be more fully included in drunken behavior on nights out. The visor could make people more attractive and reduce your inhibitions, basically replicating at least some of what alcohol does. I am not suggesting for a second that this is a good thing, only that it is technologically feasible. At least the wearer could set alerts so that they don’t declare their undying love to someone without being warned of the reality first.

The future of high quality TV

I occasionally do talks on future TV and I generally ignore current companies and their recent developments because people can read about them anywhere. If it is already out there, it isn’t the future. Companies make announcements of technologies they expect to bring in soon, which is the future, but they don’t tend to announce things until they’re almost ready for market so tracking those is no use for long term futurology.

Thanks to Pauline Rigby on Twitter, I saw the following article about Dolby’s new High Dynamic Range TV:

http://www.redsharknews.com/technology/item/2052-the-biggest-advance-in-video-for-ten-years-and-it-s-nothing-to-do-with-resolution

High dynamic range allows light levels to be reproduced across a high dynamic range. I love tech, terminology is so damned intuitive. So hopefully we will see the darkest blacks and the brightest lights.

It looks a good idea! But it won’t be their last development. We hear that the best way to predict the future is to invent it, so here’s my idea: textured pixels.

As they say, there is more to vision than just resolution. There is more to vision than just light too, even though our eyes can only make images from incoming photons and human eyes can’t even differentiate their polarisation. Eyes are not just photon detectors, they also do some image pre-processing, and the brain does a great deal more processing, using all sorts of clues from the image context.

Today’s TV displays mostly use red, blue and green LCD pixels back-lit by LEDs, fluorescent tubes or other lighting. Some newer ones use LEDs as the actual pixels, demonstrating just how stupid it was to call LCD TVs with LED back-lighting LED TVs. Each pixel that results is a small light source that can vary in brightness. Even with the new HDR that will still be the case.

Having got HDR, I suggest that textured pixels should be the next innovation. Texture is a hugely important context for vision. Micromechanical devices are becoming commonplace, and some proteins are moving into nano-motor technology territory. It would be possible to change the direction of a small plate that makes up the area of the pixel. At smaller scales, ridges could be created on the pixel, or peaks and troughs. Even reversible chemical changes could be made. Technology can go right down to nanoscale, far below the ability of the eye to perceive it, so matching the eye’s capability to discern texture should be feasible in the near future. If a region of the display has a physically different texture from other areas, that is an extra level of reality that the eye can perceive. It could appear glossy or matt, rough or smooth, warm or cold. Linking pixels together across an area could convey movement far better than jerky video frames. Sure, you can emulate texture to some degree using just light, but it loses the natural subtlety.

So HDR good, Textured HDR better.

Estimating IoT value? Count ALL the beans!

In this morning’s news:

http://www.telegraph.co.uk/technology/news/11043549/UK-funds-development-of-world-wide-web-for-machines.html

£1.6M investment by UK Technology Strategy Board in Internet-of-Things HyperCat standard, which the article says will add £100Bn to the UK economy by 2020.

Gartner says that IoT has reached the peak of their hype cycle and I agree. Connecting machines together, and especially adding networked sensors, will certainly increase technology capability across many areas of our lives, but the appeal is often overstated and the dangers often overlooked. Value should not be measured in purely financial terms either. If you value health, wealth and happiness, don’t just measure the wealth. We value other things too of course. It is too tempting just to count the most conspicuous beans. For IoT, which really just adds a layer of extra functionality onto an already technology-rich environment, that is rather like estimating the value of a chili con carne by counting the kidney beans in it.

The headline negatives of privacy and security have often been addressed, so I don’t need to explore them much more here, but let’s look at a couple of typical examples from the news article. Allowing remotely controlled washing machines will obviously impact on your personal choice of laundry scheduling. The many similar shifts of control of your life to other agencies will all add up. Another one: ‘motorists could benefit from cheaper insurance if their vehicles were constantly transmitting positioning data’. Really? Insurance companies won’t want to earn less, so motorists on average will give them at least as much profit as before. What will happen is that insurance companies will enforce driving styles and car maintenance regimes that reduce your likelihood of a claim, or use that data to avoid paying out in some cases. If you have to rigidly obey lots of rules all of the time, then driving will become far less enjoyable. Having to remember to check the tyre pressures and oil level every two weeks on pain of having your insurance voided is not one of the beans listed in the article, but is entirely analogous to the typical home insurance rule that all your windows must have locks and they must all be locked and the keys hidden out of sight before they will pay up on a burglary.

Overall, IoT will add functionality, but it certainly will not always be used to improve our lives. Look at the way the web developed. Think about the cookies and the pop-ups and the tracking and the incessant virus protection updates needed because of the extra functions built into browsers. You didn’t want those; they were added to increase capability and revenue for the paying site owners, not for the non-paying browsers. IoT will be the same. Some things will make minor aspects of your life easier, but the price will be that you are far more controlled, with far less freedom, less privacy and less security. Most of the data collected for business use or to enhance your life will also be available to government and police. We see every day the nonsense of the statement that if you have done nothing wrong, then you have nothing to fear. If you buy all that home kit with energy monitoring etc, how long before the data is hacked and you get put on militant environmentalist blacklists because you leave devices on standby? For every area where IoT will save you time or money or improve your control, there will be many others where it does the opposite, forcing you to do more security checks, spend more money on car and home and IoT maintenance, spend more time following administrative procedures and even follow health regimes enforced by government or insurance companies. IoT promises milk and honey, but will deliver it only as part of a much bigger and unwelcome lifestyle change. Sure, you can have a little more control, but only if you relinquish much more control elsewhere.

As IoT starts rolling out, these and many more issues will hit the press, and people will start to realise the downside. That will reduce the attractiveness of owning or installing such stuff, or subscribing to services that use it. The economic value will fall very significantly short of the hype. Yes, we could do it all and get the headline economic benefit, but the cost of greatly reduced quality of life is too high, so we won’t.

Counting the kidney beans in your chili is fine, but it won’t tell you how hot it is, and when you start eating it you may decide the beans just aren’t worth the pain.

I still agree that IoT can be a good thing, but the evidence of web implementation suggests we’re more likely to go through decades of abuse and grief before we get the promised benefits. Being honest at the outset about the true costs and lifestyle trade-offs will help people decide, and maybe we can get to the good times faster if that process leads to better controls and better implementation.

Ultra-simple computing: Part 4

Gel processing

One problem with making computers with a lot of cores is the wiring. Another is the distribution of tasks among the cores. Both of these can be solved with relatively simple architecture. Processing chips usually have a lot of connectors, letting them get data in parallel. But a beam of light can contain rays of millions of wavelengths, far more parallelism than is possible with wiring. If chips communicated using light with high density wavelength division multiplexing, that would solve some wiring issues. Taking another simple step, processors that are freed from wiring don’t have to be on a circuit board, but could be suspended in some sort of gel. Then they could use free space interconnection to connect to many nearby chips. Line of sight availability will be much easier than on a circuit board. Gel can also be used to cool chips.

Simpler chips with very few wired connections also mean less internal wiring. This reduces size still further and permits higher density of suspension without compromising line of sight.

Ripple scheduler

Process scheduling can also be done more simply with many processors. Complex software algorithms are not needed. In an array of many processors, some would be idle while others are already engaged on tasks. When a job needs to be processed, a task request (this could be as simple as a short pulse of a certain frequency) would be broadcast and would propagate through the array. When the wave encounters an idle processor, that processor would respond with an accept response (again, this could be a single pulse of another frequency). This would also propagate out as a wave through the array. These two waves may arrive at a given processor in quick succession.

Other processors could stand down automatically once one has accepted the job (i.e. when they detect the acceptance wave). That would be appropriate when all processors are equally able. Alternatively, if processors have different capabilities, the requesting agent would pick a suitable one from the returning acceptances, send a point to point message to it, and send out a cancel broadcast wave to stand others down. It would exchange details about the task with this processor on a point to point link, avoiding swamping the system with unnecessary broadcast messages. An idle processor in the array would thus see a request wave, followed by a number of accept waves. It may then receive a personalized point to point message with task information, or if it hasn’t been chosen, it would just see the cancel wave. Busy processors would ignore all communications except those directed specifically to them.

I’m not saying ripple scheduling is necessarily the best approach, just that it is an example of a very simple system for process scheduling that doesn’t need sophisticated algorithms and code.
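The request/accept round trip described above can be sketched as a toy simulation. This is not from the original post: the grid layout, the assumption that waves travel one hop per tick (so round-trip time is twice the Manhattan distance), and the tie-breaking rule are all my own illustrative choices.

```python
class Processor:
    """A node in the array; busy processors ignore request waves."""
    def __init__(self, pid):
        self.pid = pid
        self.busy = False

def ripple_schedule(grid, origin):
    """Assign a job to the idle processor whose accept wave returns first.

    The request wave reaches cell (x, y) after a time equal to its
    Manhattan distance from the origin; the accept wave takes the same
    time to return, so the nearest idle processor wins. Ties are broken
    by array order here, standing in for whichever pulse arrives first.
    """
    ox, oy = origin
    best, best_t = None, None
    for (x, y), proc in grid.items():
        if proc.busy:
            continue  # busy processors ignore the request wave
        t = 2 * (abs(x - ox) + abs(y - oy))  # round-trip wave time
        if best_t is None or t < best_t:
            best, best_t = proc, t
    if best is not None:
        # Point-to-point assignment; the cancel wave stands the rest down.
        best.busy = True
    return best

# A 4x4 array with three processors already engaged on tasks.
grid = {(x, y): Processor(f"p{x}{y}") for x in range(4) for y in range(4)}
for pos in [(0, 0), (0, 1), (1, 0)]:
    grid[pos].busy = True

winner = ripple_schedule(grid, origin=(0, 0))
print("job assigned to", winner.pid)  # → job assigned to p02
```

The point of the sketch is that the "algorithm" is nothing more than wave timing: no scheduler process, no queue, no global state beyond each processor knowing whether it is busy.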

Activator Pastes

It is obvious that this kind of simple protocol can be used with a gel processing medium populated with a suitable mixture of different kinds of processors, sensors, storage, transmission and power devices to provide a fully scalable self-organizing array that can perform a high task load with very little administrative overhead. To make your smart gel, you might just choose the volume or weight ratios of components you want and stir them into a gel rather like mixing a cocktail. A paste made up in this way could be used to add sensing, processing and storage to any surface just by painting some of the paste onto it.

A highly sophisticated distributed cloud sensor network for example could be made just by painting dabs of paste onto lamp posts. Solar power or energy harvesting devices in the paste would power the sensors to make occasional readings, pre-process them, and send them off to the net. This approach would work well for environmental or structural monitoring, surveillance, even for everyday functions like adding parking meters to lines marking the spaces on the road where they interact with ID devices in the car or an app on the driver’s smartphone.

Special inks could contain a suspension of such particles and add a highly secure electronic signature onto one signed by pen and ink.

The tacky putty stuff that we use to stick paper to walls could use activator paste as the electronic storage and processing medium to let you manage the content of an e-paper calendar or notice on a wall.

I can think of lots of ways of using smart pastes in health monitoring, packaging, smart makeup and so on. The basic principle stays the same though. It would be very cheap and yet very powerful, with many potential uses. It would be self-organizing, needing no set-up beyond giving it a job to do, which could come from any of your devices. You’d probably buy it by the litre, keep some in the jar as your computer, and paste the rest of it all over the place to make your skin, your clothes, your work-spaces and your world smart. Works for me.


Interfacial prejudice

This post was prompted by a discussion with Nick Colosimo, so thanks Nick.

We were discussing whether usage differences for gadgets were generational. I think they are but not because older people find it hard to learn new tricks. Apart from a few unfortunate people whose brains go downhill when they get old, older people have shown they are perfectly able and willing to learn web stuff. Older people were among the busiest early adopters of social media.

I think the problem is the volume of earlier habits that need to be unlearned. I am 53 and have used computers every day since 1981. I have used slide rules and log tables, an abacus, an analog computer, several mainframes, a few minicomputers, many assorted Macs and PCs and numerous PDAs, smartphones and now tablets. They all have very different ways of using them and although I can’t say I struggle with any of them, I do find the differing implementations of features and mechanisms annoying. Each time a new operating system comes along, or a new style of PDA, you have to learn a new design language, remember where all the menus, sub-menus and all the various features are hidden on this one, how they interconnect and what depends on what.

That’s where the prejudice kicks in. The many hours of experience you have on previous systems have made you adept at navigating through a sea of features, menus, facilities. You are native to the design language, the way you do things, the places to look for buttons or menus, even what the buttons look like. You understand its culture, thoroughly. When a new device or OS is very different, using it is like going on holiday. It is like emigrating if you’re making a permanent switch. You have the ability to adapt, but the prejudice caused by your long experience on a previous system makes that harder. Your first uses involve translation from the old to the new, just like translating foreignish to your own language, rather than thinking in the new language as you will after lengthy exposure. Your attitude to anything on the new system is colored by your experiences with the old one.

It isn’t stupidity that’s making you slow and incompetent. It’s interfacial prejudice.