Category Archives: interfaces

The future of biometric identification and authentication

If you work in IT security, the first part of this will not be news to you; skip ahead to the section on the future. Otherwise, the first sections look at the current state of biometrics and some of what we already know about their security limitations.

Introduction

I just read an article on fingerprint recognition. Biometrics has been hailed by some as a wonderful way of determining someone’s identity, and by others as a security mechanism that is far too easy to spoof. I generally fall in the second category. I don’t mind using it for simple unimportant things like turning on my tablet, on which I keep nothing sensitive, but so far I would never trust it as part of any system that gives access to my money or sensitive files.

My own history is that voice recognition still doesn’t work for me, fingerprints don’t work for me, and face recognition doesn’t work for me. Iris scan recognition does, but I don’t trust that either. Let’s take a quick look at conventional biometrics today and the near future.

Conventional biometrics

Fingerprint recognition.

I use a Google Nexus, made by Samsung. Samsung is in the news today because their Galaxy S5 fingerprint sensor was hacked by SRLabs minutes after release, not the most promising endorsement of their security competence.

http://www.telegraph.co.uk/technology/samsung/10769478/Galaxy-S5-fingerprint-scanner-hacked.html

This article says the sensor is used in the user authentication to access Paypal. That is really not good. I expect quite a few engineers at Samsung are working very hard indeed today. I expect they thought they had tested it thoroughly, and their engineers know a thing or two about security. Every engineer knows you can photograph a fingerprint and print a replica in silicone or glue or whatever. It’s the first topic of discussion at any Biometrics 101 meeting. I would assume they tested for that. I assume they would not release something they expected to bring instant embarrassment on their company, especially something failing by that classic mechanism. Yet according to this article, that seems to be the case. Given that Samsung is one of the most advanced technology companies out there, and that they can be assumed to have made reasonable effort to get it right, that doesn’t offer much hope for fingerprint recognition. If they don’t do it right, who will?

My own history with fingerprint recognition is having to join a special queue every day at Universal Studios because their fingerprint recognition entry system never once recognised me or my child. So I have never liked it because of false negatives. For those people for whom it does work, their fingerprints are all over the place, some in high quality, and can easily be obtained and replicated.

As just one token in multi-factor authentication, it may yet have some potential, but as a primary access key, not a chance. It will probably remain a weak authenticator.

Face recognition

There are many ways of recognizing faces – visible light, infrared or UV, bone structure, face shapes, skin texture patterns, lip-prints, facial gesture sequences… These could be combined in simultaneous multi-factor authentication. The technology isn’t there yet, but it offers more hope than fingerprint recognition. Using the face alone is no good though. You can make masks from high-resolution photographs of people, and the photos could be taken in whatever spectrum the recognition system is known to use. Adding gestures is a nice idea, but in a world where cameras are becoming ubiquitous, it wouldn’t be too hard to capture the sequence you use. It is entirely feasible to make a mask appear alive: add sensors, use video to detect any inspection for pulse, blood flow or gesture requests, and generate the appropriate responses. That would deter casual entry, but little more. So I am not encouraged to believe it would be secure unless and until some cleverer innovation occurs.

What I do know is that I set my tablet up to recognize me and it works about one time in five. The rest of the time I have to wait till it fails and then type in a PIN. So on average, it actually slows entry down. False negative again. Giving lots of false negatives without the reward of avoiding false positives is not a good combination.

Iris scans

I was a subject in one of the early trials for iris recognition. It seemed very promising. It always recognized me and never confused me with someone else. That was a very small scale trial though, so I’d need a lot more convincing before I let it near my bank account. I raised the problem of replicating an iris using a high quality printer and was assured that it couldn’t work because the system checks that the eye is alive, watching for jitter and shining a light to watch for pupil contraction. Call me too suspicious, but I didn’t and don’t find that at all reassuring. It won’t be too long before we can make a thin-sheet high-res polymer display layered onto a polymer gel underlayer that contracts under an electric field, with light sensors built in and some software analysis for real-time response. You could even do it as part of a mask, with the rest of the face faithfully mimicking all the textures, real-time responses, blood flows and gesture sequences. If the prize is valuable enough to justify the effort, every aspect of the eyes, face and fingerprints could be mimicked. It may be more Mission Impossible than casual high street robbery, but I can’t yet have any confidence that any part of the face or gestures would offer good security.

DNA

We hear frequently that DNA is a superbly secure authenticator. Every one of your cells can identify you. You almost certainly leave a few cells at the scene of a crime so can be caught, and because your DNA is unique, it must have been you that did it. Perfect, yes? And because it is such a perfect authenticator, it could be used confidently to police entry to secure systems.

No! First, even for a criminal trial, only a few parts of your DNA are checked; they don’t do an entire genome match. That already brings the odds of a chance match down to millions rather than billions. A chance of millions to one sounds impressive to a jury until you look at the figure from the other direction. If a random person has a 1 in 70 million chance of matching the profile, a prosecution barrister might present that as a 70 million to 1 chance that you’re guilty, and a juror may well be taken in. The other side of that is that about 100 people out of the 7 billion on Earth would produce the same 1 in 70 million match. So a competent defence barrister should present it as only a 1 in 100 chance that it was you. Not quite so impressive.
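The base-rate reasoning above is just arithmetic, and a few lines make it concrete. The numbers here are the illustrative figures from the text, not real forensic statistics:

```python
# Illustrative base-rate arithmetic using the blog's example figures.
population = 7_000_000_000       # roughly the people on Earth
profile_rarity = 70_000_000      # a "1 in 70 million" partial-profile match

# How many people would match that partial DNA profile by chance?
expected_matches = population / profile_rarity
print(expected_matches)          # 100.0

# Given only the DNA evidence, the chance that one particular
# matching person is the culprit is 1 in expected_matches:
print(1 / expected_matches)      # 0.01, i.e. a 1 in 100 chance
```

The prosecutor's "70 million to 1" and the defence's "1 in 100" are both derived from the same match probability; only the framing differs.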

I doubt a DNA system used commercially for security would be as sophisticated as one used in forensic labs, and it will be many years before an instant response using large parts of your genome could be made economic. But what then? Still no. You leave DNA everywhere you go, all day, every day. I find it amazing that it is permitted as evidence in trials, because it is so easy to get hold of someone’s hairs or skin flakes from any bus seat, hotel bathroom or bed. Any maid in a big hotel or any airline cabin attendant could gather packets of tissue and hair samples, and in many cases could even attach a name to them. Your DNA could be found at the scene of any crime, planted there by someone who simply wanted to deflect attention from themselves and get someone else convicted instead. They don’t even need to know who you are. And the police can tick the crime-solved box as long as someone gets convicted; it doesn’t have to be the culprit. Think you have nothing to fear if you have done nothing wrong? Think again.

If someone wants to get access to an account, but doesn’t mind whose, a DNA-based entry system would offer good potential, because people perceive it as secure when it simply isn’t, so it might not be paired with other secure factors. Going back to the maid or cabin attendant: both are low paid, and a few might welcome some black market bonuses if they can collect good quality samples with a name attached, especially the name of someone staying in a posh suite, probably with a nice account or two, or privy to valuable information. Especially if they also gather fingerprints at the same time. Knowing who the target is, an attacker could get a high-res picture of their face and eyes off the net, along with some voice samples from videos, then make a mask, an iris replica and fingerprints, and with luck also buy video of their gesture patterns on the black market: an almost perfect multi-factor biometric spoof.

It also becomes quickly obvious that the people who are the most valuable or important are also the people who are most vulnerable to such high quality spoofing.

So I am not impressed with biometric authentication. It sounds good at first, but biometrics are too easy to access and mimic, and all the usual IT vulnerabilities apply on top. If your biometric is measured and sent across a network for authentication, the signal could be intercepted and stored, then replayed later. And you can’t change your body much, so once your iris has been photographed or your stored fingerprint hacked, that biometric is useless forever. The same goes for the other biometrics.

Dynamic biometrics

Signatures, gestures and facial expressions offer at least the chance to change them. If your signature has been compromised, you could start using a new one. You could sign different phrases each time, as a personal one-time key. You could invent new gesture sequences. These are really just an equivalent to passwords: you have to remember them and which one you use for which system. You don’t want a street seller using your signature to verify a tiny transaction and then risk the seller using that same signature to get right into your account.

Summary of status quo

This all brings us back to the most basic of security practice. You can only use static biometrics safely as a small part of a multi-factor system, and you have to use different dynamic biometrics such as gestures or signatures on a one time basis for each system, just as you do with passwords. At best, they provide a simple alternative to a simple password. At worst, they pair low actual security with the illusion of high security, and that is a very bad combination indeed.

So without major progress, biometrics in its conventional meaning doesn’t seem to have much of a future. If it is not much more than a novelty or a toy, and can only be used safely in conjunction with some proper security system, why bother at all?

The future

You can’t easily change your eyes, your DNA or your skin, but you can add things to your body that behave like biometrics or interact with them, while offering the flexibility and replaceability of electronics.

I have written frequently about active skin, using the skin as a platform for electronics, and I believe the various layers of it offer the best potential for security technology.

Long ago, RFID chip implants became commonplace in pets, and some people had them inserted too. RFID variants could easily be printed on a membrane and stuck onto the skin surface. They could be used for one-time keys too, changing each time they are used. Adding accelerometers, magnetometers, pressure sensors or even location sensors could all offer ways of enhancing security options. Active skin allows easy combination of fingerprints with other factors.
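One way such a printed tag could change its key on every use is a counter-based one-time code, the same idea as the HOTP scheme in RFC 4226. This is a minimal sketch, not a real RFID standard; the function name and code length are illustrative:

```python
# Sketch of a counter-based one-time key such as a printed RFID
# membrane might emit. The scheme mirrors the HOTP idea (RFC 4226):
# tag and verifier share a secret and keep counters in step, and
# each emitted code is accepted only once.
import hashlib
import hmac

def one_time_code(secret: bytes, counter: int) -> str:
    # MAC the counter with the shared secret, truncate to a short code.
    msg = counter.to_bytes(8, "big")
    digest = hmac.new(secret, msg, hashlib.sha256).digest()
    return digest[:4].hex()   # 8 hex characters transmitted by the tag

secret = b"shared-at-enrolment"   # illustrative value
print(one_time_code(secret, 0))
print(one_time_code(secret, 1))   # different on every use
```

A replayed code fails because the verifier has already advanced its counter past it, which is what makes the key effectively change each time it is used.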

Ultra-thin, non-invasive security patches could be stuck onto the skin, and could not be removed without damaging them, so they would offer a potentially valuable platform. Pretty much any kinds and combinations of electronics could be used in them. They could easily be made to have a limited lifetime: very thin ones could wash off after a few days, so could be useful for theme park entry during holidays or for short-term contractors. Banks could offer stick-on electronic patches that change fundamentally how they work every month, making them very hard to hack.

Active skin can go inside the skin too, not just on the surface. You could, for example, have an electronic circuit or an array of micro-scale magnets embedded among the skin cells in your fingertip. Your fingerprint alone could easily be copied and spoofed, but not the accompanying electronic interactivity from the active skin, which can be interrogated at the same time. Active skin could measure all sorts of properties of the body too, so personal body chemistry at a particular time could be used. In fact, medical monitoring is the first key development area for active skin, so we’re likely to have a lot of body data available that could provide new biometrics. The key advantage here is that skin cells are very large compared to electronic feature sizes: a decent processor or memory can be made around the size of one skin cell, and many could be linked using infrared optics within the skin. Temperature or chemical gradients between inner and outer skin layers could power the devices too.

If you are signing something, the signature could be accompanied by a signal from the fingertip, sufficiently close to the surface being signed to be useful. A ring on a finger could also offer a voluminous security electronics platform to house any number of sensors, memory and processors.

Skin itself offers a reasonable communications route, able to carry a data stream of a few Mbit/s, so touching something could allow a lot of data to be transferred very quickly. A smart watch or any other piece of digital jewelry or an active skin security patch could use your fingertip to send an authentication sequence. The watch would know who you are by constant proximity and via its own authentication tools, and it could easily be de-authorized instantly when detached, or via a remote command.
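The authentication sequence a watch could send over such a skin link is naturally a challenge-response exchange. This is a hedged sketch under assumed names (`respond`, `verify`, a key shared at pairing), not an existing protocol:

```python
# Sketch of the challenge-response a smart watch could run over a
# skin-conduction link when you touch a terminal. The protocol and
# names are illustrative; no real watch or terminal API is implied.
import hashlib
import hmac
import os

def respond(shared_key: bytes, challenge: bytes) -> bytes:
    # The watch signs the terminal's fresh challenge with the key
    # established when the watch was paired to its owner.
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = respond(shared_key, challenge)
    # Constant-time comparison avoids leaking how close a guess was.
    return hmac.compare_digest(expected, response)

key = os.urandom(32)        # shared secret set up at pairing time
challenge = os.urandom(16)  # fresh nonce sent via the fingertip
answer = respond(key, challenge)
print(verify(key, challenge, answer))   # True for the live watch
```

Because each challenge is fresh, a recorded response is useless later, and revoking the shared key is the "de-authorize instantly" step the text describes.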

Active makeup offers a novel mechanism too. Makeup will soon exist that uses particles that can change color or alignment under electronic control, potentially allowing video-rate pattern changes. While that makes for fun makeup, it also allows sophisticated visual authentication sequences using one-time keys. Makeup doesn’t have to be confined to the face of course, and security makeup could be used on the forearm or hands. Combined with static biometrics, many-factor authentication could be implemented.

I believe active skin, using membranes added or printed onto and even within the skin, together with the use of capsules, electronic jewelry, and even active makeup offers the future potential to implement extremely secure personal authentication systems. This pseudo-biometric authentication offers infinitely more flexibility and changeability than the body itself, but because it is attached to the body, offers much the same ease of use and constant presence as other biometrics.

Biometrics may be pretty useless as it is, but the field does certainly have a future. We just need to add some bits. The endless potential variety of those bits and their combinations makes the available creativity space vast.

Virtual reality. Will it stick this time?

My first job was in missile design, and for a year the lab I worked in was a giant bra-shaped building: two massive domes joined by a short link-way that had been taken out of use years earlier. The domes, built in the 1960s, had been used by soldiers to fire simulated missiles at simulated planes. One dome had a hydraulic moving platform to simulate firing from a ship. The entire dome surface was used as a screen to show the plane and missile. The missile canisters held by the soldier were counterweighted, with a release mechanism coordinated to the fire instruction, and the soldier’s headphones produced a loud blast to accompany the physical weight change at launch, so that they would feel as full a range of the sensations experienced by a real soldier on a real battlefield as possible. The missile trajectory and control interface were simulated by analog computers.

So virtual reality may have hit the civilian world around 1990, but it was in use several decades earlier in the military world. In 1984, we even considered using our advancing computers to create what we called waking dreaming, simulating any chosen experience for leisure. Jaron Lanier has somehow been credited with inventing VR, and he contributed to its naming, but the fact is he ‘invented’ it several decades after it was already in common use and after the concepts were already pretty well established.

I wrote a paper in 1991 based on BT’s VR research in which I made my biggest ever futurology mistake. I worked out the number crunching requirements and pronounced that VR would overtake TV as an entertainment medium around 2000. I need hardly point out that I was wrong. I have often considered why it didn’t happen the way I thought it would. On one front, we did get the entertainment of messing around in 3D worlds, and it is the basis of almost all computer gaming now. So that happened just fine, it just didn’t use stereo vision to convey immersion. It turned out that the immersion is good enough on a TV or PC screen.

Also, in the early 1990s, just as IT companies may have been considering making VR headsets, class-action lawsuits became very popular, some based on very tenuous connections between cause and effect, and meanwhile some VR headset users were reporting eye strain or disorientation. I imagine the lawyers in those IT companies were thinking of every teenager who develops any eye problem suing them, just in case it might have been caused in part by use of their headset. Those issues, plus the engineering difficulty of commercialising manufacture of good quality displays, were probably enough to kill VR.

However, I later enjoyed many a simulator ride at Disney and Universal. One ride let me design my own roller coaster with twists and loops and then ride it in a simulator, which was especially enjoyable. The pull of simulator rides remains powerful. Playing a game on an Xbox is fun, but doesn’t compare with a simulator ride.

I think much of the future of VR lies in simulators where it already thrives. They can go further still. Tethered simulators can throw you around a bit but can’t manage the same range of experience that you can get on a roller coaster. Imagine using a roller coaster where you see the path ahead via a screen. As your cart reaches the top of a hill, the track apparently collapses and you see yourself hurtling towards certain death. That would scare the hell out of me. Combining the g-forces that you can get on a roller coaster with imaginative visual effects delivered via a headset would provide the ultimate experience.

Compare that with using a nice visor on its own. Sure, you can walk around an interesting object like a space station, or enjoy more immersive gaming, or you can co-design molecules. That sort of app has been used for many years in research labs anyway. Or you can train people in health and safety without exposing them to real danger. But where’s the fun? Where’s the big advantage over TV-based gaming? 3D has pretty much failed yet again for TV and movies, and hasn’t made much impact in gaming yet. Do we really think that adding a VR headset will change it all, even though 3D glasses didn’t?

I was a great believer in VR. With the active contact lens, it can be ultra-lightweight and minimally invasive while ultra-realistic. Adding active skin interfacing to the nervous system to convey physical sensation will eventually help too. But unless plain old VR is accompanied by stimulation of the other senses, just as a simulator provides, I fear the current batch of VR enthusiasts are repeating the same mistakes I made over twenty years ago. I always knew what you could do with it, knew the displays would get near perfect one day, and got carried away with excitement over the potential. That’s what caused my error. Beware you don’t make the same one. This could well be just another big flop. I hope it isn’t though.

The internet of things will soon be history

I’ve been a full-time futurologist since 1991, and an engineer working on far-future R&D since I left uni in 1981. It is great seeing a lot of the 1980s dreams about connecting everything together finally starting to become real, although as I’ve blogged recently, some of the grander claims for future home automation are rather unlikely. Yes you can, but you probably won’t, though some people will certainly adopt some of it. Now that most people are starting to get the idea that you can connect things and add intelligence to them, we’re also seeing a lot of overshoot on the importance of the internet of things, which is the generalised form of the same thing.

It’s my job as a futurologist not only to understand that trend (I’ve been yacking about putting chips in everything for decades) but to look past it and see what is coming next. If it were here to stay, that would be an important conclusion too, but you know what, it just isn’t. The internet of things will be about as long-lived as most other generations of technology, such as the mobile phone. Do you still have one? I don’t. Well, I do, but they are all in a box in the garage somewhere. I have a general purpose mobile computer that happens to be a phone as well as dozens of other things, and so, probably, do you. The only reason you might still call it a smartphone or an iPhone is that it has to be called something and nobody in the IT marketing industry has any imagination. PDA was a rubbish name and that was the choice.

You can stick chips in everything, and you can connect them all together via the net. But that capability will disappear quickly into the background and the IT zeitgeist will move on. It really won’t be very long before a lot of the things we interact with are virtual, imaginary. To all intents and purposes they will be there, and will do wonderful things, but they won’t physically exist, so they won’t have chips in them. You can’t put a chip into a figment of imagination, even though you can make it appear in front of your eyes and interact with it. A good topical example is the smart watch, all set to make an imminent grand entrance. Smart watches are struggling to solve battery problems, and they’ll be expensive too. They don’t need batteries if they are just images, and a fully interactive image of a hugely sophisticated smart watch could be made free, as one of a million things done by a free app. The smart watch’s demise is already inevitable. The energy it takes to produce an image on the retina is a great deal less than the energy needed to power a smart watch on your wrist, and the cost is a few seconds of your time to explain to an AI how you’d like your wrist to be accessorised, rather fewer seconds than you’d have spent choosing something expensive. In fact, the energy needed for direct retinal projection and the associated comms is far less than can easily be harvested from your body or the environment, so there is no battery problem to solve.

If you can do that with a smart watch, making it just an imaginary item, you can do it to any kind of IT interface. You only need to see the interface, the rest can be put anywhere, on your belt, in your bag or in the IT ether that will evolve from today’s cloud. My pad, smartphone, TV and watch can all be recycled.

I can also do loads of things with imagination that I can’t do for real. I can have an imaginary wand. I can point it at you and turn you into a frog; then in my eyes, the images of you change to those of a frog. Sure, it’s not real, you aren’t really a frog, but you are to me. I can wave it again and make the building walls vanish, so I can see the stuff on sale inside. A few of those images could be very real and come from cameras all over the place, the chips-in-everything stuff, but actually I have little interest in the local physical reality of the shop. What I am far more interested in is what I can buy, and I’ll be shown those things, in ways that appeal to me, whether they’re physically there or on Amazon Virtual. So 1% is chips-in-everything, and 99% is imaginary, virtual: some visual manifestation of my profile, Amazon Virtual’s AI systems, how my own AI knows I like to see things, and a fair bit of other people’s imagination to design the virtual decor, the nice presentation options, the virtual fauna and flora making it more fun, and countless other intermediaries and extramediaries, or whatever you call all those others that add value and fun to an experience without actually getting in the way. All just images projected directly onto my retinas. Not so much chips-in-everything as no chips at all, except a few sensors, some comms, and an infinitesimal timeshare of a processor and storage somewhere.

A lot of people dismiss augmented reality as an irrelevant passing fad. They say video visors and active contact lenses won’t catch on because of privacy concerns (and I’d agree that is a big issue that needs to be discussed and sorted, but it will be discussed and sorted). But what we’re going to get isn’t just an internet of things; it is a total convergence of physical and virtual, a coming together of real and imaginary, an explosion of human creativity, a new renaissance, a realisation of yours and everyone else’s wildest dreams as part of your everyday reality. When you realise that, the internet of things suddenly starts to look more than just a little bit boring, part of the old days when we actually had to make stuff, when you had to have the same as everyone else, and it all cost a fortune and needed charging up all the time.

The internet of things is only starting to arrive. But it won’t stay for long before it hides in the cupboard and disappears from memory. A far, far more exciting future is coming up close behind. The world of creativity and imagination. Bring it on!

Home automation. A reality check.

Home automation is much in the news at the moment now that companies are making the chips-with-everything kit and the various apps.

Like 3D, home automation comes and goes. Superficially it is attractive, but the novelty wears thin quickly. It has been possible since the 1950s to automate a home; Bill Gates notably built a hugely expensive automated home 20 years ago. There are rarely any new ideas in the field, just a lot of recycling and minor tweaking. Way back in 2000, I wrote what was even then just a recycling summary blog-type piece for my website, bringing together a lot of already well-worn ideas. And yet it could easily have come from this year’s papers. Here it is; go to the end of the italicised text for my updating commentary:

Chips everywhere

 August 2000

 The chips-with-everything lifestyle is almost inevitable. Almost everything can be improved by adding some intelligence to it, and since the intelligence will be cheap to make, we will take advantage of this potential. In fact, smart ways of doing things are often cheaper than dumb ways; a smart door lock may be much cheaper than a complex key-based lock, and a chip is often cheaper than dumb electronics or electromechanics. However, electronics no longer has a monopoly on chip technology. Some new chips incorporate tiny electromechanical or electrochemical devices to do jobs that used to be done by more expensive electronics. Chips now have the ability to analyse chemicals, biological matter or information. They are at home processing both atoms and bits.

 These new families of chips have many possible uses, but since they are relatively new, most are probably still beyond our imagination. We already have seen the massive impact of chips that can do information processing. We have much less intuition regarding the impact in the physical world.

 Some have components that act as tiny pumps to allow drugs to be dispensed at exactly the right rate. Others have tiny mirrors that can control laser beams to make video displays. Gene chips have now been built that can identify the presence of many different genes, allowing applications from rapid identification to estimation of life expectancy for insurance purposes. (They are primarily being used to tell whether people have a genetic disorder so that their treatment can be determined correctly.)

 It is easy to predict some of the uses such future chips might have around the home and office, especially when they become disposably cheap. Chips on fruit that respond to various gases may warn when the fruit is at its best and when it should be disposed of. Other foods might have electronic use-by dates that sound an alarm each time the cupboard or fridge is opened close to the end of their life. Other chips may detect the presence of moulds or harmful bacteria. Packaging chips may have embedded cooking instructions that communicate directly with the microwave, or may contain real-time recipes that appear on the kitchen terminal and tell the chef exactly what to do, and when. They might know what other foodstuffs are available in the kitchen, or whether they are in stock locally and at what price. Of course, these chips could also contain pricing and other information for use by the shops themselves, replacing bar codes and the like and allowing the customer just to put all the products in a smart trolley and walk out, debiting their account automatically. Chips on foods might react when the foods are in close proximity, warning the owner that there may be odour contamination, or that these two could be combined well to make a particularly pleasant dish. Cooking by numbers. In short, the kitchen could be a techno-utopia or nightmare depending on taste.

 Mechanical switches can already be replaced by simple sensors that switch on the lights when a hand is waved nearby, or when someone enters a room. In future, switches of all kinds may be rather more emotional: glowing, changing colour or shape, trying to escape, or making a noise when a hand gets near, to make them easier or more fun to use. They may respond to gestures or voice commands, or eventually infer what to do from something they pick up in conversation. Intelligent emotional objects may become very commonplace. Many devices will act differently according to the person making the transaction. A security device will allow one person entry, but phone the police if the caller is a known burglar. Other visitors may receive a welcome message or be put in videophone contact with a resident, whether in the house or away.

 It will be possible to burglar-proof devices by registering them in a home. They could continue to work while they are near various other fixed devices, maybe in the walls, but would stop working when removed. Moving home would still be possible by broadcasting a digitally signed message to the chips. Air quality may be continuously analysed by chips, which would alert us to dangers such as carbon monoxide or excessive radiation, and these may also monitor for the presence of bacteria, viruses or just pollen. They may be integrated into a home health system which monitors our wellbeing on a variety of fronts, watching for stress and disease, checking our blood pressure, fitness and so on, all unobtrusively. The ultimate nightmare might be that our fridge would refuse to let us have any chocolate until the chips in our trainers have confirmed that we have done our exercise for the day.

 Some chips in our home would be mobile, in robots, and would have a wide range of jobs from cleaning and tidying to looking after the plants. Sensors in the soil in a plant pot could tell the robot exactly how much water and food the plant needs. The plant may even be monitored by sensors on the stem or leaves. 

The Global Positioning System allows chips to know almost exactly where they are outdoors, and in-building positioning systems could allow positioning down to millimetres. Position-dependent behaviour will therefore be commonplace. Similarly, events can be timed to the precision of atomic clock broadcasts. Response can be super-intelligent, adjusting appropriately for time, place, person, social circumstances, environmental conditions, anything that can be observed by any sort of sensor or predicted by any sort of algorithm.
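Position- and time-dependent behaviour boils down to a rule lookup. A toy sketch, with rooms, time windows and actions all invented for illustration, might look like this:

```python
from datetime import time

# Hypothetical rule table: (room, window start, window end, action).
RULES = [
    ("hall",    time(7, 0),  time(23, 0), "full brightness"),
    ("hall",    time(23, 0), time(7, 0),  "dim night light"),
    ("bedroom", time(23, 0), time(7, 0),  "lights stay off"),
]

def behaviour(room, now):
    """Return the action for a room at a given time of day."""
    for r, start, end, action in RULES:
        if r != room:
            continue
        # Windows may wrap past midnight, e.g. 23:00-07:00.
        in_window = (start <= now < end) if start < end else (now >= start or now < end)
        if in_window:
            return action
    return "default"

print(behaviour("hall", time(2, 30)))   # dim night light
```

A real system would fold in person, weather and social context as extra rule columns, but the dispatch logic is the same.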

With this enormous versatility, it is very hard to think of anything where some sort of chip could not make an improvement. The ubiquity of the chip will depend on how fast costs fall and how valuable a task is, but we will eventually have chips with everything.

So that was pretty much everyday thinking in the IT industry in 2000. The articles I’ve read recently mostly aren’t all that different.

What has changed since is that companies trying to progress it are adding new layers of value-skimming. In my view, at least some of these are big steps backwards. Let’s look at a couple.

Networking the home is fine, but doing it just so that you can adjust the temperature remotely or run a bath from the office is utterly pointless. It adds the extra inconvenience of having to remember access details for an account, regularly update security details, and recover when the company running it loses all your data to a hacker, all for virtually no benefit.

Monitoring what the user does and sending the data back to the supplier company so that they can use it for targeted ads is another huge step backwards. Advertising is already at the top of the list of things we have quite enough of. We need more resources, more food supply, more energy, more of a lot of stuff. More advertising we can do without. It adds costs to everything and wastes our time, without giving anything back.

If a company sells home automation stuff and wants to collect the data on how I use it, and sell that on to others directly or via advertising services, it will sit on their shelf. I will not buy it, and neither will most other people. Collecting the data may be very useful, but I want to keep it, and I don’t want others to have access to it. I want to pay once, and then own it outright with full and exclusive control and data access. I do not want to have to create any online accounts, worry about network security or privacy, download frequent software updates, or have any company nosing into my household, and I absolutely, definitely want no adverts.

Another backward step is migrating the interfaces for things onto our smartphones or tablets. I have no objection to having that as an optional feature, but I want to retain a full physical switch or control. For several years at BT, I lived in an office where the light was controlled by a remote control, with no other switch. The remote control had dozens of buttons, yet all it did was turn the light on or off. I don’t want to have to look for a remote control or my phone or tablet in order to turn on a light or adjust the temperature. I would much prefer a traditional light switch and thermostat. If they communicate by radio, I don’t care, but they do need to be physically present in the same place all the time.

Automated lights that go on and off as people enter or leave a room are also a step backwards. I once fell victim to one in a work toilet: if you sit still for a couple of minutes, they switch the lights off. That really is not welcome in an internal toilet with no windows.
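The failure mode here is that a motion-timeout controller cannot tell a still occupant from an empty room. One hypothetical fix (the class and sensor events below are invented for illustration) is to count entries and exits at the door instead of relying on motion alone:

```python
class OccupancyLight:
    """A hypothetical controller that counts entries and exits at the door,
    rather than switching off after a fixed period of no motion, so a
    motionless occupant is not plunged into darkness."""

    def __init__(self):
        self.occupants = 0

    def door_entry(self):
        self.occupants += 1

    def door_exit(self):
        self.occupants = max(0, self.occupants - 1)

    @property
    def lights_on(self):
        return self.occupants > 0

room = OccupancyLight()
room.door_entry()        # someone walks in and sits perfectly still
# ...minutes pass with no motion at all...
print(room.lights_on)    # True: the light stays on until they actually leave
```

Of course this trades one problem for another (miscounted events drift over time), which is exactly why simple timeouts get shipped instead.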

The traditional way of running a house is not so demanding that we need a lot of assistance anyway. It really isn’t. I only spend a few seconds every day turning lights on and off or adjusting the temperature. It would take longer than that on average just to maintain the apps to do it automatically. As for saving energy by turning the heating on and off all the time, I think that is over-valued as a feature too. The air in a house doesn’t hold much heat, and if the building fabric cools down, it takes a lot of energy to get it back up again. That actually puts more strain on a boiler than running at a relatively constant low output. If the boiler and pumps have to work harder more often, they are likely to last less time, and the savings would be eradicated.
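The point about the building fabric can be illustrated with rough numbers. Every figure below is an assumption for illustration, not a measurement, but the orders of magnitude carry the argument: the air in a house stores well under 1 kWh, while the masonry that cools with it stores tens of kWh.

```python
# Rough back-of-envelope figures; every number is an assumption.
air_volume  = 250.0    # m^3 of air in a typical house
air_density = 1.2      # kg/m^3
cp_air      = 1005.0   # J/(kg K), specific heat capacity of air
fabric_mass = 20000.0  # kg of masonry and fabric that also cools down
cp_fabric   = 840.0    # J/(kg K), typical for brick/concrete
temp_drop   = 5.0      # K the house is allowed to fall while heating is off

joules_per_kwh = 3.6e6
heat_air    = air_volume * air_density * cp_air * temp_drop / joules_per_kwh
heat_fabric = fabric_mass * cp_fabric * temp_drop / joules_per_kwh

print(f"reheating the air alone: {heat_air:.2f} kWh")    # ~0.42 kWh
print(f"reheating the fabric:    {heat_fabric:.1f} kWh")  # ~23.3 kWh
```

So letting the place cool and reheating it is dominated by the fabric, not the air, which is why the claimed savings from aggressive on/off scheduling are easy to overstate.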

So, all in all, while I can certainly see merits in adding chips to all sorts of stuff, I think their merits in home automation are being grossly overstated in the current media enthusiasm, and the downsides far too much ignored. Yes you can, but most people won’t want to, those who do probably won’t want to do nearly as much as is being suggested, and even they won’t want all the pain of doing it via service providers who add unnecessary layers or misuse their data.

Active Skin part 2: initial applications

When I had the active skin idea, it was obvious that there would be a lot of applications, so I dragged the others from the office into a brainstorm to determine the scope of the concept. These are the original ideas from that 2001 brainstorm and the following days as I wrote them up, so don’t expect this to be an updated 2014 list; I might do that another time. Some of these have been developed at least in part by other companies in the years since, and many more are becoming more obvious as applications now that the technology foundations are catching up. I have a couple more parts of this to publish, with some more ideas. I’ve loosely listed them here in sections according to layer, but some of the devices may function at two or more different layers. I won’t repeat them, so it should be assumed that any of these could be appropriate to more than one layer. You’ll notice we didn’t bother with the wearables layer, since even in 2001 wearable computing was already a well-established field in IT labs, with lots of ideas already.

Smart tattoos layer

This layer is produced by deep printing well into the skin, possibly using similar means to those used for tattooing. Some devices could be implanted by means of water or air pressure injection.

  1. Display capability leading to static or multimedia display instead of static ink
  2. Use for multimedia body adornment, context dependent tattoos, tribalism
  3. monitor body chemicals for clues to emotional, hormonal or health state
  4. Measurement of blood composition to assist in drug dosing
  5. monitor nerve signals
  6. tattoos that show body’s medical state or other parameters
  7. health monitor displays, e.g.  blood insulin level, warning displays, instructions and recommendations on what actions to take
  8. show emotional state, emoticons shown according to biochemical or electrical cues
  9. may convert information on body’s state into other stimuli, such as heat or vibration
  10. may do same from external stimuli
  11. devices in different people could be linked in this way, forming emotilinks. Groups of people could be linked. People belonging to several such groups might have different signalling or position for each group.
  12. Identification, non-erasable, much less invasive than having an implant for the same purpose so would not have the same public objection. This could be electronic, or as simple as ultraviolet ink in a machine readable form such as barcode, snowflake etc
  13. Power supply for external devices using body’s energy supply, e.g. ATP
  14. Metallic ear implants on ear drum as hearing aid – electrostatic or magnetically driven
  15. Electronic signet ring, electronics that will only function when held by the rightful user
  16. Electronic signature devices

Mid-term layer

These components could be imprinted by printing onto the skin surface. Some could be implemented by adsorption from transfers, others by mechanical injection.

  1. Access technology – temporary access to buildings or theme parks. Rather than a simple stamp, people could have a smarter ID device printed into their skin
  2. The device could monitor where the wearer goes and for how long
  3. It could interact with monitoring equipment in buildings or equipment
  4. The device might include the use of invisible active inks on smart membrane
  5. Components could be made soluble to wash off easily, or more permanent
  6. Components could be photodegradable
  7. Could use ultraviolet inks that may be read by either external devices or other components
  8. Like smart tattoo ID systems, they could use snowflakes, colour snowflakes, barcodes or ‘digital paper’, to give a ‘digital skin’ functionality
  9. This could interact with ‘digital air’ devices
  10. Could be used to co-ordinate external device positioning accurately for medical reasons, e.g. acupuncture, TENS etc.
  11. Ultra-smart finger prints, wide range of functions based on interaction with computers and external devices, other smart skin systems, or digital paper
  12. Outputs DNA or DNA code to external reader for ID or medical reasons
  13. Combine with smart tags to achieve complex management and control systems, e.g. in package handling, product assembly
  14. SOS talismans, full health record built into body, including blood groups, tissue groups etc
  15. Degradable radiation monitors that can be positioned at key body points for more accurate dose measurement
  16. Could signal between such devices to a central display via the skin
  17. Devices might communicate using ad-hoc networks, could be used as a distributed antenna for external communication
  18. Thermometers & alarms. Use to measure heat for alarms for old people with degraded senses
  19. Directly interact with smart showers to prevent scalding
  20. Could monitor people’s behaviour for behaviour-based alarms, e.g. fall alarms
  21. Overlay synthetic nervous system, use for medical prostheses, bionics or external interfacing
  22. Synthesised senses, making us sensitive to stimuli outside our biological capability
  23. Smart teeth, checks food for presence of bacteria or toxins
  24. Monitor breath for bad odours or illness
  25. Diabetic supervision, monitor ketones
  26. Monitor diet and link to smart devices in the home or hospital to police diet
  27. Modify taste by directly stimulating nerves in the tongue? Probably not feasible
  28. Calorie counting
  29. Smile enhancement, using light emission or fluorescence
  30. Smile training, e.g. tactile feedback on mouth position after operation
  31. Operation scar monitoring, patch across wound could monitor structural integrity,
  32. infection monitor based on detecting presence of harmful bacteria, or characteristics of surrounding skin affected by infection
  33. semi-permanent nail varnish with variable colour
  34. context sensitive nail varnish
  35. multimedia nail varnish
  36. Baby tagging for security purposes & wide range of medical applications such as breathing monitoring, temperature, movement etc
  37. Operation tagging to prevent mistakes, direct interaction with electronic equipment in theatre
  38. ITU applications
  39. Active alarms, integrated into external devices, directly initiate action
  40. Position based sensors and alarms
  41. Personality badge

Transfer layer

This layer could use printing techniques straight onto the skin surface, or use transfers. A thin transfer membrane may stay in place for the duration of the required functionality, but could be removed relatively easily if necessary. It is envisaged that this membrane would be a thin polymer that acts as a carrier for the components as well as potentially shielding them from direct contact with the body or from the outside world. It could last for up to several days.

  1. Tactile interfaces – vibration membranes that convey texture or simple vibration
  2. Tactile stimuli as a means for alarms, coupled with heat, cold, or radiation sensors
  3. Text to Braille translation without need for external devices, using actuators in fingertip pads
  4. Use for navigation based on external magnetic field measurement, GPS or other positioning systems, translated into sensory stimuli
  5. Measurement and possible recording of force
  6. use to police child abuse, or other handling in the workplace as safety precautions. Could link to alarms
  7. motion detection, using kinetic or magnetic detection for use in sports or medical systems
  8. actuators built into transfers could give force feedback.
  9. Could directly link to nerve stimulation via lower layers to accomplish full neural feedback
  10. combine sensor and actuators to directly control avatars in cyberspace and for computer interfacing feedback
  11. interfaces for games
  12. short duration software licenses for evaluation purposes, needs fragile transfer so limits use to single user for lifetime of transfer
  13. sensors on eyes allow eye tracking
  14. direct retinal display, active contact lens replacement
  15. UV phosphors allow ultraviolet vision
  16. Actuators or tensioning devices could control wrinkles
  17. could assist in training for sports
  18. training for typing, playing music, music composition, virtual instruments
  19. keypad-free dialling
  20. air typing, drawing, sculpting
  21. type on arm using finger and arm patches
  22. finger snap control
  23. active sign languages
  24. ‘palm pilot’, computer on hand
  25. digital computer, count on fingers
  26. generic 3D interface
  27. use with transfer phone
  28. education use to explore surfaces of virtual objects in virtual environments
  29. use for teletravel navigation, or use in dangerous environments for controlling robotics
  30. direct nervous system links
  31. could assist in body language in conjunction with emotion sensors for socially disadvantaged people
  32. could act as signalling device in place of phone ring or audible alarms (actuator is not necessarily piezoelectric vibrator)
  33. doorbell on skin, personal doorbell, only alerts person of relevance
  34. active sunscreen using electrical stimuli to change sun-block cream to block UV when UV dose is reached
  35. could electrically alter heat radiation properties to enhance heating or cooling of body
  36. membranes with smart holes allow just the right amount of drug delivery in conjunction with smart tattoos. May use lower layers to accurately position and record dosing data
  37. Could use heat, cold, vibration as signals between people
  38. Electronic muscles – use polymer gel or memory metal or contracting wires
  39. Ultrasonic communication between devices and outside world
  40. Teledildonic applications
  41. Oscillating magnetic patches for medical reasons
  42. Applies voltage across wound to assist healing
  43. Smart Nicotine or antibiotic patches
  44. Painkilling patches using pain measurement (nerve activity) and directly controlling using electric stimuli, or administering drugs
  45. Placebo device patches
  46. Multimedia cosmetics
  47. Smart cosmetics, with actuators, smart tattoos that are removable
  48. Self organising cosmetic circuits, sensor, smart chemicals and actuator matrices
  49. Continuous electrolysis as hair growth limiter
  50. Electro-acupuncture with accurate positioning
  51. Control of itching to allow more rapid recovery
  52. Baby-care anti-scratch patches
  53. Printed aerials on body for device communication
  54. Detect, record, process and transmit nerve signals
  55. EEG use
  56. Thought control of devices
  57. Invisible scalp sensors for thought collection
  58. Emotion badge
  59. Truth badge, using body cues to convey whether lying or not. Could be unknown to wearer, transmitting by radio or ultrasound or in UV
  60. Context sensitive perfumes, emotionally sensitive perfumes
  61. Inverse heat sensitive perfumes, prevent too much being given off when warm
  62. Smell sensitive deodorant, or temperature dependent
  63. Context sensitive makeup, that behaves differently with different people at different situations or times
  64. Colour sensitive sun-block, protects more on fairer skin
  65. Active bindis (dots worn on the forehead by Indian women)
  66. Active jewellery
  67. Power generation for wearable electrical devices, using body heat, solar power, kinetics or skin contraction
  68. Microphones
  69. Frequency translation to allow hearing out of normal audible spectrum
  70. Bugs – unspecified functions in devices
  71. Mosquito killers, zapping insects with charge, or deterring with ultrasound or electrical signals
  72. Automatic antiseptic injections
  73. Use on animals for medical and pest control purposes
  74. Pet signalling and training
  75. Pet homing
  76. Pet ID systems
  77. Jam nerves
  78. Muscle toning
  79. Image capture, compound eyes, raster scan with micro-mirror and transverse lens
  80. Phones, watches, diaries etc
  81. Chameleon, cuttlefish pattern novelties
  82. Orifice monitoring
  83. Transfer body suit, self-organising polymer coating. Use for sports etc.
  84. Position-based devices
  85. Morse code devices for children’s communications
  86. Movement to voice translation – guidance for blind people or use for everyday navigation, sports feedback
  87. Strain alarms
  88. Use with smart drugs
  89. Smell as ring tone
  90. Smell as alarm
  91. Smell for emotion conveyance
  92. Snap fingers to switch lights on
  93. Tactile interfaces
  94. Emotional audio-video capture
  95. record on body condition
  96. wires on skin as addition to MIT bodynet
  97. tension control devices to assist wound healing
  98. avatar mimicry, electronically control ones appearance
  99. electronic paint-by numbers

  100. means to charge up other devices by linking to external electrical device or by induction
  101. devices that can read ultraviolet ink on sub layer
  102. finger mouse, using fingertip sensors instead of mouse, can be used in 3D with appropriate technology base
  103. Use of combinations of patches to monitor relative movements of body parts for use in training and medical treatments. Could communicate using infrared, radio or ultrasound
  104. Use of an all-over skin that acts as a protective film so that each device doesn’t have to be dermatologically tested. Unlikely to be full body but could cover some key areas. E.g. some people are allergic to Elastoplast, so could have their more vulnerable areas covered.
  105. Strain gauges on stomach warning of overeating
  106. Strain gauge based posture alarms on the neck, back and shoulders etc
  107. Breathalysers in smart teeth alert drivers to being over the limit and interact directly with car immobilisers
  108. Pedometers and weight sensors built into feet to monitor exercise etc
  109. Battlefield management systems using above systems with remote management

Fully Removable layer

  1. Smart elastoplasts
  2. Smart contact lenses with cameras and video
  3. Smart suits with sensors and actuators for sports and work
  4. Almost all conventional personal electronic devices
  5. Web server
  6. Web sites

Active Skin – an old idea whose time is coming

Active Skin

In May 2001, while working in BT research, I had an idea: we could use the skin surface as a new platform for electronics. I grabbed a few of my colleagues (Robin Mannings, Dennis Johnston, Ian Neild, and Paul Bowman) and we shut ourselves in a room for a few hours to brainstorm it. We originally intended to patent some of the ideas, but they weren’t core business for a telecoms company like BT, so that never happened.

Now, 12.5 years on, it is too late to extract any value from a patent, but some of the technologies are starting to appear around the world as prototypes in various labs and companies, so its time is drawing near. We never did publish the ideas, though a few did make it out via various routes, and I talk about active skin more generally in my writings. So I thought I’d serialise some of the ideas list now; there are lots. This one will just be the intro.

Introduction

Today we have implants in the body, and wearable devices such as watches and cell-phones in regular proximity to our bodies, with a much looser affiliation to other forms of electronics such as palmtops and other computers. With recent advances in miniaturisation, print technology and polymer based circuits, a new domain is now apparent but as yet unexploited, and offers enormous potential business for a nimble first-mover. The domain is the skin itself, where the body meets the rest of the world. We have called it active skin, and it has a wide range of potential applications.

Active skin layers

Stimulated by MIT work in the late 1990s showing that the skin can be used as a communications medium, a logical progression is to consider what other uses it might be put to. What we proposed is a multi-layer range of devices.

(actually, this original pic wasn’t drawn quite right. The transfer layer sits just on the skin, not in it.)

The innermost ‘tattoo layer’ is used for smart tattoos, which are permanently imprinted into the lower layers of the skin. These layers do not wear or wash away.

The next ‘mid-term’ layer is the upper layers of the skin, which wear away gradually over time.

Above this we move just outside to the ‘transfer layer’. Children frequently wear ‘tattoos’ that are actually just transfers that stick onto the skin surface, frequently on a thin polymer base. They are fairly robust against casual contact, but can be removed fairly easily.

The final ‘detachable layer’ is occupied by fully removable devices that are only worn on a temporary basis, but which interact with the layers below.

Above this is the ‘wearable layer’, the domain of the normal everyday gadget such as a watch.

A big advantage for this field is that space is not especially limited, so devices can be large in one or two dimensions. However, they must be flexible and very thin to be of use in this domain and be more comfortable than the useful alternatives.

I want my TV to be a TV, not a security and privacy threat

Our TV just died. It was great, may it rest in peace in TV heaven. It was a good TV and it lasted longer than I hoped, but I finally got an excuse to buy a new one. Sadly, it was very difficult finding one and I had to compromise. Every TV I found appears to be a government spy, a major home security threat or a chaperone device making sure I only watch wholesome programming. My old one wasn’t, and I’d much rather have a new TV that still isn’t, but I had no choice in the matter. All of today’s big TVs are ruined by the addition of features and equipment that I would much rather not have.

Firstly, I didn’t want any built-in cameras or microphones: I do not want some hacker watching or listening to my wife and me on our sofa, and I do not trust any company in the world on security, so if a TV has a microphone or camera, I assume it can be hacked. Any TV that offers voice recognition, gesture recognition or video comms is a security risk. All the good TVs have voice control, even though that needs a nice clear newsreader-style voice and won’t work for me, so I will get no benefit from it; but I had no choice about having the microphone, and will have to suffer the downside. I am hoping the mic can only be used for voice control and not for networking apps, and therefore might not be network-accessible.

I drew the line at having a camera in my living room, so had to avoid buying the more expensive smart TVs. If there weren’t cameras in all the top TVs, I would happily have spent 70% more.

I also don’t want any TV that makes a record of what I watch on it for later investigation and data mining by Big Brother, the NSA, GCHQ, Suffolk County Council or ad agencies. I don’t want it even remembering anything of what is watched on it for viewing history or recommendation services.

That requirement eliminated my entire shortlist. Every decent quality large TV has been wrecked by the addition of ‘features’ that I not only don’t want, but would much rather not have. That is not progress; it is going backwards. Samsung have made loads of really good TVs and then ruined them all. I blogged a long time ago that upgrades are wrecking our future. TV is now a major casualty.

I am rather annoyed at Samsung now – that’s who I eventually bought from. I like the TV bits, but I certainly do not and never will want a TV that ‘learns my viewing habits and offers recommendations based on what I like to watch’.

Firstly, it will be so extremely ill-informed as to make any such feature utterly useless. I am a channel hopper so 99% of things that get switched to momentarily are things or genres I never want to see again. Quite often, the only reason I stopped on that channel was to watch the new Meerkat ad.

Secondly, our TV is often on with nobody in the room. Just because a programme was on screen does not mean I or indeed anyone actually looked at it, still less that anyone enjoyed it.

Thirdly, why would any man under 95 want his TV to make notes of what he watches when he is alone, and then make that viewing history available to everyone, or use it as any part of a recommendation algorithm?

Fourthly, I really wanted a smart TV but couldn’t because of the implied security risks. I have to assume that if the designers think they should record and analyse my broadcast TV viewing, then the same monitoring and analysis would be extended to web browsing and any online viewing. But a smart TV isn’t only going to be accessed by others in the same building. It will be networked. Worse still, it will be networked to the web via a wireless LAN that doesn’t have a Google street view van detector built in, so it’s a fair bet that any data it stores may be snaffled without warning or authorisation some time.

Since the TV industry apparently takes the view that nasty hacker types won’t ever bother with smart TVs, they will leave easily accessible and probably very badly secured data and access logs all over the place. So I have to assume that all the data and metadata gathered by my smart TV with its unwanted and totally useless viewing recommendations will effectively be shared with everyone on the web, every advertising executive, every government snoop and local busybody, as well as all my visitors and other household members.

But it still gets worse. Smart TVs don’t stop there. They want to help you to share stuff too. They want ‘to make it easy to share your photos and your other media from your PC, laptop, tablet, and smartphone’. Stuff that! So, if I was mad enough to buy one, any hacker worthy of the name could probably use my smart TV to access all my files on any of my gadgets. I saw no mention in the TV descriptions of regular operating system updates, virus protection or firewall software for the TVs.

So, in order to get extremely badly informed viewing recommendations that have no basis in reality, I’d have to trade all our privacy and household IT security and open the doors to unlimited and badly targeted advertising, knowing that all my viewing and web access may be recorded for ever on government databases. Why the hell would anyone think that makes a TV more attractive? When I buy a TV, I want to switch it on, hit an auto-tune button, and then use it to watch TV. I don’t really want to spend hours going through a manual to do some elaborate set-up, disabling a whole string of privacy and security risks one by one.

In the end, I abandoned my smart TV requirement, because it came with too many implied security risks. The TV I bought has a microphone to allow a visitor with a clearer voice to use voice control, which I will disable if I can, and features artificial-stupidity-based viewing recommendations which I don’t want either. These cost extra for Samsung to develop and put in my new TV. I would happily have paid extra to have them removed.

Afternote: I am an idiot, first class. I thought I wasn’t buying a smart TV, but it is one. My curiosity got the better of me and I activated the network stuff for a while to check it out, and on my awful broadband it mostly doesn’t work, so with no significant benefits, I just won’t give it network access; it isn’t worth the risk. I can’t disable the microphone or the viewing history, but I can at least clear the history if I want.

I love change and I love progress, but this is going in the other direction. You’re going the wrong way!

Font size is becoming too small

Warning: rant, no futures insights enclosed.

Last night, we went for a very pleasant meal with friends. The restaurant was in a lovely location, the service was excellent, the food was excellent. The only irritating thing was a pesky fly. However, for some reason, the menu was written on nice paper in 6 point text with about 15mm line spacing. Each line went about 2/3 of the way across the page. I hadn’t brought my reading glasses so was forced to read small text at arm’s length where it was still blurred.

My poor vision is not the restaurant’s fault. But I do have to ask why there is such a desire across seemingly every organisation now to print everything possible with the tiniest font they can manage? Even when there is lots of space available, fonts are typically tiny. Serial numbers are the worst culprits. My desktop PC is normal tower size and has its serial number printed on a tiny label in 1mm font. Why? Even my hated dishwasher uses a 2mm font size and that stretches my vision to its limits.

Yes, I am ageing, but that isn’t a crime. When I was a school-kid, I took great pride in irritating my teachers by writing with the finest tipped ball-pen I could get (Bic extra-fine) in the smallest writing I could manage. I rarely submitted a homework without getting some comment back on my writing size. But then I grew up.

It makes me wonder whether increased printer capability is a problem rather than an asset. Yes, you can now print at 2000 dpi or more, and a character only needs a small grid, so you can print small enough that a magnifying glass is needed for anyone to be able to read the text. But being physically able to print that small doesn’t actually make it compulsory. So why does it make it irresistible to many people?

When I do conference presentations, if I use text at all, I make sure it is at least 16pt, preferably bigger. If it won’t fit, I re-do the wording until it does. Some conferences that employ ‘designers’ come up with slide designs that contain a massive conference logo, bars on the side and bottom, title half way down the page, and any actual material has to be shrunk to fit in a small region of the slide with eye-straining font sizes on any key data. I generally refuse to comply when a conference employs such an idiot, but they are breeding fast. If someone can’t easily read text from the back row, it is too small. It isn’t actually the pinnacle of cool design to make it illegible.

Mobiles have small displays and small type is sometimes unavoidable, but even so, why design a wireless access login page with a minuscule login box that takes up a tiny fraction of the page? If there is nothing else on the page of any consequence apart from that login details box, why not fill the display with it? Why make it a millimetre high, and have loads of empty space and some branding crap, so that a user has to spend ages stretching it to make it the important bit usable? What is the point?

To me, good design isn’t about making something that is pretty, that can eventually be used after a great deal of effort. It is about making something that does the job perfectly and simply and is pretty. A good designer can achieve form, simplicity of use and function. Only poor designers have to pick one and ignore the others.

The current trend to make text smaller and smaller is pointless and counter-productive. It will cause eye problems for younger people later in their lives. It certainly discriminates against the large proportion of the population that needs glasses. Worse, it does so with no clear benefit. Reading tiny text isn’t especially pleasurable compared to larger text. It reduces quality of life for many without increasing it for anyone else.

It is time to end this stupid trend and send designers back to school if they are somehow convinced that illegibility is some sort of artistic goal.

The primary purpose of text is to communicate. If people can’t easily read the text on your design, the communication is impeded, and your design is therefore crap. If you think you know better, and that tiny text is more attractive and that is what really counts, you should go back to school. Or find a better school and go to that one.

Free-floating AI battle drone orbs (or making Glyph from Mass Effect)

I have spent many hours playing various editions of Mass Effect, from EA Games. It is one of my favourites and has clearly benefited from some highly creative minds. They had to invent a wide range of fictional technology, along with detailed technical explanations of how it is meant to work. Some is just artistic redesign of very common sci-fi ideas, but they have added a huge amount of their own too. Sci-fi and real engineering have always had strong mutual cross-fertilisation. I have sometimes lectured on science fact v sci-fi, to show that what we eventually achieve is sometimes far better than the sci-fi version (Exhibit A: the rubbish voice synthesisers and storage devices used on Star Trek, TOS).

Glyph

Liara talking to her assistant Glyph. Picture credit: social.bioware.com

In Mass Effect, holographic-style orbs float around all over the place for various military or assistant purposes. They aren’t confined to a fixed holographic projection system. Disruptor and battle drones are common, and there are a few home/lab/office assistants such as Glyph, Liara’s friendly PA, who is not a battle drone. These aren’t just dumb holograms; they can carry small devices and do stuff. The idea of a floating sphere may have been inspired by Halo’s, but the Mass Effect ones look more holographic and generally nicer (think Apple v Microsoft). Battle drones are highly topical now, but current technology uses wings and rotors. The drones in sci-fi like Mass Effect and Halo are free-floating ethereal orbs. That’s what I am talking about now. They aren’t in the distant future. They will be here quite soon.

I recently wrote on how to make force field and floating cars or hover-boards.

http://timeguide.wordpress.com/2013/06/21/how-to-actually-make-a-star-wars-landspeeder-or-a-back-to-the-future-hoverboard/

Briefly, they work by creating a thick cushion of magnetically confined plasma under the vehicle that can keep it well off the ground, a bit like a hovercraft without a skirt or fans. Layers of confined plasma could also be used to make relatively weak force fields. A key claim of the idea is that you can coat a firm surface with a packed array of steerable electron pipes to make the plasma, plus a potentially reconfigurable and self-organising circuit to produce the confinement field. No moving parts, and the coating would simply produce a lifting or propulsion force according to its area.

This is all very easy to imagine for objects with a relatively flat base like cars and hover-boards, but I later realised that the force field bit could be used to suspend additional components, and if those components also have a power source, they can add locally to the field. Because they can sense their exact relative positions and instantaneously adjust the local fields to maintain or achieve a desired position, dynamic self-organisation would allow just about any shape and dynamics to be achieved and maintained. So basically, if you break the levitation bit up, each piece could still work fine.

I love self-organisation, and biomimetics generally. I wrote my first paper on hormonal self-organisation over 20 years ago, to show how networks of telephone exchanges could self-organise, and have used it in many designs since.

With a few pieces generating external air flow, the objects could wander around. Cunning design using multiple components could therefore be used to make orbs that float and wander around, even with the moving plates that Mass Effect uses for its drones. They could also be very lightweight and translucent, just like Glyph. Regular readers will not be surprised if I recommend that some of these components be made of graphene, because it can be used to make wonderful things. It is light, strong, an excellent electrical and thermal conductor, a perfect platform for electronics, can be used to make super-capacitors and so on. Glyph could use a combination of moving physical plates and some holographic projection to make it look pretty. So, part physical and part hologram then.
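The self-organisation idea can be illustrated with a toy sketch. Real components would be adjusting confinement fields, but the position-correction logic amounts to each piece averaging where its neighbours imply it should sit in the target formation. Everything here (the eight-plate circle, the gain, the starting positions) is invented purely for illustration.

```python
import math

def formation_step(pos, targets, gain=0.5):
    """One self-organisation step: each component nudges itself toward the
    average of the positions its neighbours imply for it."""
    n = len(pos)
    new = []
    for i in range(n):
        # Position for i implied by each neighbour j: j's actual position
        # minus the desired offset between j and i in the target shape.
        implied = [(pos[j][0] - (targets[j][0] - targets[i][0]),
                    pos[j][1] - (targets[j][1] - targets[i][1]))
                   for j in range(n) if j != i]
        mx = sum(p[0] for p in implied) / len(implied)
        my = sum(p[1] for p in implied) / len(implied)
        new.append((pos[i][0] + gain * (mx - pos[i][0]),
                    pos[i][1] + gain * (my - pos[i][1])))
    return new

# Target formation: eight plates evenly spaced around a circle (an orb shell).
targets = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
           for k in range(8)]

# Start scattered; repeated local corrections pull the relative offsets
# into the target shape (up to an overall translation).
pos = [(0.3 * k, (0.1 * k * k) % 1.0) for k in range(8)]
for _ in range(50):
    pos = formation_step(pos, targets)
```

Only local information is used: no component knows or cares where the orb as a whole is, which is exactly why the arrangement survives losing or replacing pieces.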

Plates used in the structure can dynamically attract or repel each other and use tethers, or use confined plasma cushions. They can create air jets in any direction. They would have a small load-bearing capability. Since graphene foam is potentially lighter than helium

http://timeguide.wordpress.com/2013/01/05/could-graphene-foam-be-a-future-helium-substitute/

it could be added into structures to reduce the forces needed. So we’re not looking at orbs that can carry heavy equipment here, but carrying processing, sensing, storage and comms would be easy. Obviously they could therefore include whatever state-of-the-art artificial intelligence has got to, either on-board, distributed, or via the cloud. Beyond that, it is hard to imagine a small orb carrying more than a few hundred grammes. Nevertheless, that is enough equipment to make it very useful indeed for very many purposes. These drones could work pretty much anywhere. Space would be tricky but not that tricky; the drones would just have to carry a little fuel.
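To put the lifting contribution in perspective, a quick buoyancy sum helps. This is a back-of-envelope sketch; the foam density is a hypothetical figure chosen for illustration, since real graphene foams vary widely.

```python
# Net buoyant lift per cubic metre is air density minus the density of
# whatever fills the volume.
AIR_DENSITY = 1.225       # kg/m^3, sea level
HELIUM_DENSITY = 0.169    # kg/m^3, same conditions
FOAM_DENSITY = 0.12       # kg/m^3, hypothetical graphene-foam figure

lift_helium = AIR_DENSITY - HELIUM_DENSITY  # net lift per m^3 of helium
lift_foam = AIR_DENSITY - FOAM_DENSITY      # marginally better, and solid

# A 10-litre orb body made of such foam offsets only about 11 grammes:
orb_offset_g = lift_foam * 0.010 * 1000
```

So buoyancy trims the field strength needed rather than providing real payload capacity, which is why a few hundred grammes is a sensible ceiling.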

But let’s get right to the point. The primary market for this isn’t the home or lab or office; it is the battlefield. Battle drones are being regulated as I type, but that doesn’t mean they won’t be developed. My generation grew up with the nuclear arms race. Millennials will grow up with the drone arms race, and that, if anything, is a lot scarier. The battle drones in Mass Effect are fairly easy to kill. Real ones won’t be.

Mass Effect combat drone. Picture credit: masseffect.wikia.com

If these cute little floating drones are taken out of the office and converted to military use, they could do pretty much all the stuff they do in sci-fi. They could have lots of local energy storage using super-caps, so they could easily carry self-organising lightweight lasers or electrical shock weaponry, or carry steerable mirrors to direct beams from remote lasers, along with high-definition 3D cameras and other sensing for reconnaissance.

The interesting thing here is that self-organisation of potentially redundant components would allow a free-roaming battle drone that is highly resistant to attack. You could shoot it for ages with lasers or bullets and it would keep coming. Disruption of its fields by electrical weapons would make it collapse temporarily, but it would just get up and reassemble as soon as you stop firing. With its intelligence potentially local or cloud-based, you could make a small battalion of these that could only be properly killed by totally frazzling them all. They would be potentially lethal individually but almost irresistible as a team. Super-capacitors could be recharged frequently using companion drones to relay power from the rear line. A mist of spare components could provide ready replacements for any that are destroyed. Self-orientation and use of free-space optics for comms make wiring and circuit boards redundant, and sub-millimetre chips 100m away would be quite hard to hit.

Well I’m scared. If you’re not, I didn’t explain it properly.

Deep surveillance – how much privacy could you lose?

The news that seems to have caught much of the media in shock, that our electronic activities were being monitored, comes as no surprise at all to anyone who has worked in IT for the last decade or two. In fact, I can’t see what’s new. I’ve assumed since the early 90s that everything I write and do on-line, or say or text on a phone, or watch on digital TV, or do on a game console, is recorded forever and is being checked by computers now, or will be some time in the future, for anything bad. If I don’t want anyone to know I am thinking something, I keep it in my head. Am I paranoid? No. If you think I am, then it’s you who is being naive.

I know that if some technically competent spy with lots of time and resources really wants to monitor everything I do day and night and listen to pretty much everything I say, they could, but I am not important enough, bad enough, threatening enough or even interesting enough, and that conveys far more privacy than any amount of technology barriers ever could. I live in a world of finite but just about acceptable risk of privacy invasion. I’d like more privacy, but it’s too much hassle.

Although government, big business and malicious software might want to record everything I do just in case it might be useful one day, I still assume some privacy, even if it is already technically possible to bypass it. For example, I assume that I can still say what I want in my home without the police turning up even if I am not always politically correct. I am well aware that it is possible to use a function built into the networks called no-ring dial-up to activate the microphone on my phones without me knowing, but I assume nobody bothers. They could, but probably don’t. Same with malware on my mobiles.

I also assume that the police don’t use millimetre wave scanning to video me or my wife through the walls and closed curtains. They could, but probably don’t. And there are plenty of sexier targets to point spycams at so I am probably safe there too.

Probably nobody bothers to activate the cameras on my iPhone or Nexus, but I am still a bit cautious where I point them, just in case. There is simply too much malware out there to ever assume my IT is safe. I only plug a camera and microphone into my office PC when I need to. I am sure watching me type or read is pretty boring, and few people would do it for long, but I keep my office blinds drawn and close the living room curtains in the evening for the same reason: I don’t like being watched.

In a busy tube train, it is often impossible to stop people getting close enough to use an NFC scanner to copy details from my debit card and Barclaycard, but they can be copied at any till or in any restaurant just as easily, so there is a small risk but it is both unavoidable and acceptable. Banks discovered long ago that it costs far more to prevent fraud 100% than it does to just limit it and accept some. I adopt a similar policy.

Enough of today. What of tomorrow? This is a futures blog – usually.

Well, as millimetre wave systems develop, they could become much more widespread, so burglars and voyeurs might start using them to check whether there is anything worth stealing or videoing. Maybe some search company making visual street maps might ‘accidentally’ capture a detailed 3D map of the inside of your house when they come round, as well as or instead of everything they can access via your wireless LAN. Not deliberately of course, but they can’t check every line of code that some junior might have put in by mistake when they didn’t fully understand the brief.

Some of the next generation games machines will have 3D scanners and HD cameras that can apparently even see blood flow in your skin. If these are hacked or left switched on (and social networking video is one of the applications they are aiming to capture, so they’ll be on often), someone could watch you all evening, capture the most intimate body details, and film your facial expressions while you are looking at a known image on a particular part of the screen. Monitoring pupil dilation, smiles, anguished expressions and so on could provide a lot of evidence about your emotional state, with a detailed record of what you were watching and doing at exactly that moment, and with whom.

By monitoring blood flow and pulse, and possibly your skin conductivity via the controller, your level of excitement, stress or relaxation can easily be inferred. If given to the authorities, this sort of data might be useful to identify paedophiles or murderers, by seeing which men are excited by seeing kids on TV or who gets pleasure from violent games, so obviously we must allow it, mustn’t we? We know that Microsoft’s OS has had the capability for many years to provide a back door for the authorities. Should we assume that the new Xbox is different?
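How easy is “easily inferred”? Even a crude sketch like the following, with entirely made-up baselines and thresholds, separates excitement from relaxation given only heart rate and skin conductance readings; a real system with personalised baselines and context would do far better.

```python
def infer_arousal(heart_rate_bpm, skin_conductance_us,
                  rest_hr=65.0, rest_sc=2.0):
    """Crude arousal estimate: fractional rise of heart rate and skin
    conductance above a resting baseline. All baselines and thresholds
    here are illustrative, not validated physiology."""
    score = ((heart_rate_bpm - rest_hr) / rest_hr
             + (skin_conductance_us - rest_sc) / rest_sc)
    if score > 0.5:
        return "excited/stressed"
    if score < 0.0:
        return "relaxed"
    return "neutral"
```

Pair that label with a timestamped record of what was on screen and you have exactly the kind of attitude evidence described above.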

Monitoring skin conductivity is already routine in IT labs as an input. Thought recognition is possible too, and though primitive today, it will spread as the technology progresses. So your thoughts can be monitored too. Thoughts, added to emotional reactions and knowledge of circumstances, would allow a very detailed picture of someone’s attitudes. By using high-speed future computers to data-mine zillions of hours of full sensory data gathered on every one of us via all this routine IT exposure, a future government or big business prone to bending the rules could deduce everyone’s attitudes to just about everything: the real truth about our attitudes to every friend and family member or TV celebrity or politician or product, our detailed sexual orientation, any fetishes or perversions, our racial attitudes, political allegiances, attitudes to almost every topic ever aired on TV or in everyday conversation, how hard we are working, how much stress we are experiencing, and many aspects of our medical state. And they could steal your ideas, if you still have any after putting all your effort into self-censorship.

It doesn’t even stop there. If you dare to go outside, innumerable cameras and microphones on phones, visors and high street surveillance will automatically record all this same stuff for everyone. Thought crimes already exist in many countries, including the UK. In-depth evidence will become available to back up prosecutions of acts that today would not even be noticed. Computers that can retrospectively data-mine evidence collected over decades and link it all together will be able to identify billions of crimes.

Active skin will one day link your nervous system to your IT, allowing you to record and replay sensations. You will never be able to be sure that you are the only one who can access that data either. I could easily hide algorithms in a chip or program that only I know about and that no amount of testing or inspection could ever reveal. If I can, any decent software engineer can too. That’s the main reason I have never trusted my IT: I am quite nice, but I would probably be tempted to put some secret stuff in any IT I designed, just because I could and could almost certainly get away with it. If someone was making electronics to link to your nervous system, they’d probably be at least tempted to put in a back door too, or be told to by the authorities.

Cameron utters the old line: “if you are innocent, you have nothing to fear”. Only idiots believe that. Do you know anyone who is innocent? Of everything? Who has never ever done or even thought anything even a little bit wrong? Who has never wanted to do anything nasty to a call centre operator? And that’s before you even start to factor in corruption of the police, mistakes, being framed, dumb juries or secret courts. The real problem here is not what Prism does and what the US authorities are giving to our guys. It is what is being and will be collected and stored, forever, and will be available to all future governments of all persuasions. That’s the problem. They don’t delete it. I’ve often said that our governments are usually incompetent rather than malicious. Most of our leaders are nice guys, even if a few are a little corrupt. But what if it all goes wrong, and we somehow end up with a deeply divided society and the wrong government, or a dictatorship gets in? Which of us can be sure we won’t be up against the wall one day?

We have already lost the battle to defend our privacy. Most of it is long gone, and the only bits left are those where the technology hasn’t caught up yet. In the future, not even the deepest, most hidden parts of your mind will be private. Ever.