Category Archives: interfaces

Interfacial prejudice

This blog was prompted by an interaction with Nick Colosimo, thanks Nick.

We were discussing whether usage differences for gadgets were generational. I think they are but not because older people find it hard to learn new tricks. Apart from a few unfortunate people whose brains go downhill when they get old, older people have shown they are perfectly able and willing to learn web stuff. Older people were among the busiest early adopters of social media.

I think the problem is the volume of earlier habits that need to be unlearned. I am 53 and have used computers every day since 1981. I have used slide rules and log tables, an abacus, an analog computer, several mainframes, a few minicomputers, many assorted Macs and PCs and numerous PDAs, smartphones and now tablets. They all work in very different ways, and although I can’t say I struggle with any of them, I do find the differing implementations of features and mechanisms annoying. Each time a new operating system comes along, or a new style of PDA, you have to learn a new design language, remember where all the menus, sub-menus and various features are hidden on this one, how they interconnect and what depends on what.

That’s where the prejudice kicks in. The many hours of experience you have on previous systems have made you adept at navigating through a sea of features, menus, facilities. You are native to the design language, the way you do things, the places to look for buttons or menus, even what the buttons look like. You understand its culture, thoroughly. When a new device or OS is very different, using it is like going on holiday. It is like emigrating if you’re making a permanent switch. You have the ability to adapt, but the prejudice caused by your long experience on a previous system makes that harder. Your first uses involve translation from the old to the new, just like translating foreignish to your own language, rather than thinking in the new language as you will after lengthy exposure. Your attitude to anything on the new system is colored by your experiences with the old one.

It isn’t stupidity that makes you slow and incompetent. It’s interfacial prejudice.

Smart fuse

This may well exist now, but I couldn’t find it right away on Google. It is an idea I had a very long time ago, but with all the stuff coming from Apple and Google now, this would offer an easier and cheaper way to make most appliances smart without adding huge cost or locking owners in to a corporate ecosystem.

Most mains powered appliances come with plugs that have fuses in them. Here is a UK plug, pic courtesy of BBC.

[image: fuse in a UK plug]

If the fuse in the plug is replaced by a smart fuse that has an internet address, then this presents a means to switch things on and off automatically. A signal could be sent over the mains from a plug-in controller somewhere in the house, or via radio, wireless LAN, even voice command. The appliance therefore becomes capable of being turned on and off remotely at minimal cost.

At slightly higher expense, with today’s miniaturisation levels, smart fuses would be a cheap way of adding other functions. They could contain ROM loaded with software for the appliance, giving security via an easy upgrade that can’t be tampered with. They could also contain timers, sensors, usage meters, and talk to other devices, such as a phone or PC, or enable appliances for cheaper electricity by letting power companies turn them on and off remotely.
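To make the idea concrete, here is a minimal sketch of how such a fuse might behave. Everything here is hypothetical: the class, the command set and the addressing scheme are illustrative assumptions, not a real product or protocol.

```python
# Hypothetical sketch of a network-addressable smart fuse. The class,
# command set and addressing are illustrative assumptions only; this is
# not a real product or protocol.

from dataclasses import dataclass

@dataclass
class SmartFuse:
    address: str            # network identity of the fuse, however assigned
    relay_on: bool = True   # whether power is flowing to the appliance
    kwh_used: float = 0.0   # a simple built-in usage meter

    def handle_command(self, command: str) -> str:
        # A plug-in home controller (or, with the owner's consent, the
        # power company) sends plain on/off/status commands over the
        # mains, radio or wireless LAN.
        if command == "off":
            self.relay_on = False
        elif command == "on":
            self.relay_on = True
        elif command != "status":
            return f"{self.address}: unknown command {command!r}"
        state = "on" if self.relay_on else "off"
        return f"{self.address}: {state}, {self.kwh_used:.2f} kWh used"

kettle = SmartFuse(address="fuse-kitchen-kettle")
print(kettle.handle_command("off"))   # appliance switched off remotely
```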

There really is no need to add heavily to appliance cost to make it smart. A smart fuse could cost pennies and still do the job.

Google is wrong. We don’t all want gadgets that predict our needs.

In the early 1990s, lots of people started talking about future tech that would work out what we want and make it happen. A whole batch of new ideas came out: internet fridges, smart waste-baskets, the ability to control your air conditioning from the office or open and close curtains when you’re away on holiday. Almost 25 years on, we still see just a trickle of prototypes, followed by a tsunami of apathy from the customer base.

Do you want an internet fridge that orders milk when you’re running out, or speaks to you all the time telling you what you’re short of, or sends messages to your phone when you are shopping? I certainly don’t. It would be extremely irritating. It would crash frequently. If I forget to clean the sensors, it won’t work. If I don’t regularly update the software, and update the security, and get it serviced, it won’t work. It will ask me for passwords. If my smart loo notices I’m putting on weight, the fridge will refuse to open, and will tell the microwave and cooker too so that they won’t cook my lunch. It will tell my credit card not to let me buy chocolate bars or ice cream. It will be a week before kitchen rage sets in and I take a hammer to it. The smart waste bin will also be covered in tomato sauce from bean cans held in a hundred orientations until the sensor finally recognizes the scrap of bar-code that hasn’t been ripped off. Trust me, we looked at all this decades ago and found the whole idea wanting. A few show-off early adopters want it to show how cool and trendy they are, then they’ll turn it off when no-one is watching.

EDIT: example of security risks from smart devices (this one has since been fixed) http://www.bbc.co.uk/news/technology-28208905

If I am with my best friend, who has known me for 30 years, or my wife, who also knows me quite well, they ask me what I want and discuss options with me. They don’t assume they know best and just decide things. If they did, they’d soon get moaned at. If I don’t want my wife or my best friend to assume they know best what I want, why would I want gadgets to do that?

The first thing I did after checking out my smart TV was to disconnect it from the network so that it won’t upload anything and won’t get hacked or infected with viruses. Lots of people have complained about new adverts on TV that control their new Xboxes via the Kinect voice recognition. The ‘smart’ TV receiver might be switched off as that happens. I am already sick of things turning themselves off without my consent because they think they know what I want.

They don’t know what is best. They don’t know what I want. Google doesn’t either. Its many ideas about giving me lots of information it thinks I want while I am out are also things I will not welcome. Is the future of UI gadgets that predict your needs, as Wired says Google thinks? No, it isn’t. What I want is a really intuitive interface so I can ask for what I want, when I want it. The very last thing I want is an idiot device thinking it knows better than I do.

We are not there yet. We are nowhere near there yet. Until we are, let me make my own decisions. PLEASE!

Time – The final frontier. Maybe

It is very risky naming the final frontier. A frontier is just the far edge of where we’ve got to.

Technology has a habit of opening new doors to new frontiers so it is a fast way of losing face. When Star Trek named space as the final frontier, it was thought to be so. We’d go off into space and keep discovering new worlds, new civilizations, long after we’ve mapped the ocean floor. Space will keep us busy for a while. In thousands of years we may have gone beyond even our own galaxy if we’ve developed faster than light travel somehow, but that just takes us to more space. It’s big, and maybe we’ll never ever get to explore all of it, but it is just a physical space with physical things in it. We can imagine more than just physical things. That means there is stuff to explore beyond space, so space isn’t the final frontier.

So… not space. Not black holes or other galaxies.

Certainly not the ocean floor, however fashionable that might be to claim. We’ll have mapped that in detail long before the rest of space. Not the centre of the Earth, for the same reason.

How about cyberspace? Cyberspace physically includes all the memory in all our computers, but also the imaginary spaces that are represented in it. The entire physical universe could be simulated as just a tiny bit of cyberspace, since it only needs to be rendered when someone looks at it. All the computer game environments and virtual shops are part of it too. The cyberspace tree doesn’t have to make a sound unless someone is there to hear it, but it could. The memory in computers is limited, but the cyberspace limits come from imagination of those building or exploring it. It is sort of infinite, but really its outer limits are just a function of our minds.

Games? Dreams? Human imagination? Love? All very new-agey and sickly sweet, but no. Just like cyberspace, these are all just different products of the human mind, so all of them can be replaced by ‘the human mind’ as a frontier. I’m still not convinced that is the final one though. Even if we extend that to the greatly AI-enhanced future human mind, it still won’t be the final frontier. When we AI-enhance ourselves, and connect to the smart AIs too, we get a sort of global consciousness, linking everyone’s minds together as far as each allows. That’s a bigger frontier, since the individual minds and AIs add up to more cooperative capability than they can achieve individually. The frontier is getting bigger and more interesting. You could explore other people directly, share and meld with them. Fun, but still not the final frontier.

Time adds another dimension. We can’t do physical time travel, and even if we one day manage it in physics labs with tiny particles for tiny time periods, that won’t necessarily translate into a practical time machine to travel in the physical world. We can time travel in cyberspace though, as I explained in

http://timeguide.wordpress.com/2012/10/25/the-future-of-time-travel-cheat/

and when our minds are fully networked and everything is recorded, you’ll be able to travel back in time and genuinely interact with people in the past, back to the point where the recording started. You would also be able to travel forwards in time, as far as the recording continues and future laws allow (I didn’t fully realise that when I wrote my time travel blog, so I ought to update it, soon). You’d be able to inhabit other people’s bodies, share their minds, share consciousness and feelings and emotions and thoughts. The frontier suddenly jumps out a long way once we start that recording, because you can go into the future as far as is continuously permitted. Going into that future allows you to get hold of all the future technologies and bring them back home, short-circuiting the future, as long as the time police don’t stop you. No, I’m not nuts – if you record everyone’s minds continuously, you can time travel into the future using cyberspace, and the effects extend beyond cyberspace into the real world you inhabit, so although it is certainly a cheat, it is effectively real time travel, backwards and forwards. It needs some security sorted out on warfare, banking and investments, procreation, gambling and so on, as well as a lot of other causality issues, but to quote from Back to the Future: ‘What the hell?’ [IMPORTANT EDIT: in my following blog, I revise this a bit and conclude that although time travel to the future in this system lets you do pretty much what you want outside the system, time travel to the past only lets you interact with people and other things supported within the system platform, not the physical universe outside it. This does limit the scope for mischief.]

So, time travel in fully networked fully AI-enhanced cosmically-connected cyberspace/dream-space/imagination/love/games would be a bigger and later frontier. It lets you travel far into the future and so it notionally includes any frontiers invented and included by then. Is it the final one though? Well, there could be some frontiers discovered after the time travel windows are closed. They’d be even finaller, so I won’t bet on it.


Fairies will dominate space travel

The future sometimes looks ridiculous. I have occasionally written about smart yogurt and zombies and other things that sound silly but have a real place in the future. I am well used to being laughed at, ever since I invented text messaging and the active contact lens, but I am also well used to saying I told you so later. So: Fairies will play a big role in space travel, probably even dominate it. Yes, those little people with wings, and magic wands, that kind. Laugh all you like, but I am right.

To avoid misrepresentation and being accused of being away with the fairies, let’s be absolutely clear: I don’t believe fairies exist. They never have, except in fairy tales of course. Anyone who thinks they have seen one probably just has poor eyesight or an overactive imagination and maybe saw a dragonfly or was on drugs or was otherwise hallucinating, or whatever. But we will have fairies soon. In 50 or 60 years.

In the second half of this century, we will be able to link and extend our minds into the machine world so well that we will effectively have electronic immortality. You won’t have to die to benefit; you will be able to extend your mind into the machine world, into any enabled object, while remaining fully alive. Some of those objects will be robots or androids, some might well be organic.

Think of the film Avatar, a story based on yesterday’s ideas. Real science and technology will be far more exciting. You could have an avatar like in the film, but that is just the tip of the iceberg when you consider the social networking implications once the mind-linking technology is commoditised and a ubiquitous part of everyday life. There won’t be just one or two avatars used for military purposes like in the film, but millions of people doing that sort of thing all the time.

If an animal’s mind is networked, a human might be able to make some sort of link to it too, again like in Avatar, where the Na’vi link to their dragon-like creatures. You could have remote presence in the animal. That maybe won’t be as fulfilling as being in a human because the animal has limited functionality, but it might have some purpose. Now let’s leave Avatar behind.

You could link AI to an animal to make it comparable with humans so that your experience could be better, and the animal might have a more interesting life too. Imagine chatting to a pet cat or dog and it chatting back properly.

If your mind is networked as well as we think it could be, you could link your mind to other people’s minds, share consciousness, be a part-time Borg if you want. You could share someone else’s sensations, share their body. You could exchange bodies with someone, or rent yours out and live in the net for a while, or hire a different one. That sounds a lot of fun already. But it gets better.

In the same timeframe, we will have mastered genetics. We will be able to design new kinds of organisms with whatever properties chemistry and physics permits. We’ll have new proteins, new DNA bases, maybe some new bases that don’t use DNA. We’ll also have strong AI, conscious machines. We’ll also be able to link electronics routinely to our organic nervous systems, and we’ll also have a wide range of cybernetic implants to increase sensory capability, memory, IQ, networking and so on.

We will be able to make improved versions of the brain that work and feel pretty much the same as the original, but are far, far smaller. Using synthetic electronics instead of organic cells, signals will travel between neurons at light speed, instead of 200m/s, that’s more than a million times faster. But they won’t have to go so far, because we can also make neurons physically far smaller, hundreds of times smaller, so that’s a couple more zeros to play with. And we can use light to interconnect them, using millions of wavelengths, so they could have millions of connections instead of thousands and those connections will be a billion times faster. And the neurons will switch at terahertz speeds, not hundreds of hertz, that’s also billions of times faster. So even if we keep the same general architecture and feel as the Mk1 brain, we could make it a millimetre across and it could work billions of times faster than the original human brain. But with a lot more connectivity and sensory capability, greater memory, higher processing speed, it would actually be vastly superhuman, even as it retains broadly the same basic human nature.
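The speed-up arithmetic in that paragraph is easy to verify with the figures quoted above (the exact shrink factor below is an assumption within the stated ‘hundreds of times’ range):

```python
# Back-of-envelope check of the speed-up figures quoted above.
light_speed = 3.0e8                 # m/s, synthetic optical interconnect
nerve_speed = 200                   # m/s, fast organic nerve signals
print(light_speed / nerve_speed)    # 1.5e6: "more than a million times faster"

switch_ratio = 1.0e12 / 100         # THz switching vs hundreds of Hz firing
print(switch_ratio)                 # 1e10: "billions of times faster"

shrink = 300                        # assumed: neurons hundreds of times smaller
# Shorter paths multiply the gain: "a couple more zeros to play with"
print(light_speed / nerve_speed * shrink)   # ~4.5e8 overall latency gain
```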

And guess what? It will easily fit in a fairy.

So, around the time that space industry is really taking off, and we’re doing asteroid mining, and populating bases on Mars and Europa, and thinking of going further, and routinely designing new organisms, we will be able to make highly miniaturized people with brains vastly more capable than conventional humans. Since they are small, it will be quite easy to make them with fully functional wings, exactly the sort of advantage you want in a space ship where gravity is in short supply and you want to make full use of a 3D space. Exactly the sort of thing you want when size and mass is a big issue. Exactly the sort of thing you want when food is in short supply. A custom-designed electronic, fully networked brain is exactly the sort of thing you want when you need a custom-designed organism that can hibernate instantly. Fairies would be ideally suited to space travel. We could even design the brains with lots of circuit redundancy, so that radiation-induced faults can be error-corrected and repaired by newly designed proteins.

Wands are easy too. Linking the mind to a stick, and harnessing the millions of years of recent evolution that has taught us how to use sticks is a pretty good idea too. Waving a wand and just thinking what they want to happen at the target is all the interface a space-fairy needs.

This is a rich seam and I will explore it again some time. But for now, you get the idea.

Space-farers will mostly be space fairies.


Crippled by connectivity?

Total interconnection

The Android OS inside my Google Nexus tablet terrifies me. I can work it to a point, but it seems to be designed by people who think in a very different way from me, and that makes me feel very unsafe when using it. The result is that I only use my tablet for simple browsing of unimportant things such as news, but I don’t use it for anything important. I don’t even have my Google account logged in to it normally, and that prevents me from doing quite a lot that otherwise I could.

You may think I am being overly concerned and maybe I am. Cyber-crime is high, but not so high that hackers are sitting watching all your computers all day every day for the moment you drop your guard. On the other hand, automation lets attackers probe very many machines frequently to see if one is open to attack, and I’d rather they attacked someone else’s than mine. I also don’t leave house windows open when I go on holiday just because it is unlikely that burglars will visit my street during that time.

The problem is that there are too many apps that want you to have an account logged in before you can use them. That account often has multiple strands that allow you to buy stuff. Google’s account lets me buy apps, games and magazines on my tablet, and I can’t watch YouTube or access my email or go on Google+ without logging in to Google, and that opens all the doors. Amazon lets me buy all sorts of things, eBay too. If you stay logged in, you can often buy stuff just by clicking a few times; you don’t have to re-enter lots of security stuff each time. That’s great, except that there are links to those things in other web pages, lots of different directions from which I might approach that buying potential. Every time you install a new app, it gives you a list of 100 things it wants total authority to do for evermore. How can you possibly keep track of all those? On the good side, that streamlines life, making it easier to do anything, reducing the number of hoops you need to jump through to get access to something or buy something. On the bad side, it means there are far more windows and doors to check before you go out. It means you have an open window with all your money lying on the window ledge. It means there is always a suspicion that if you get a trojan or virus, it might be able to use those open logins to steal or spend your cash or your details.

When apps are standalone and you only have a couple with spending capability, it is manageable, but when everything is interconnected so much, there are too many routes to access your cash. You can’t close the main account session because so many things you want to do are linked to it, and if you log out, you lose all the dependent apps. Also, without a proper keyboard, typing your fully alphanumeric passwords takes ages. Yes, you can use password managers, but that’s just another layer of security to worry about. Because I never feel confident that I know what I am doing on a highly unintuitive OS, or on even worse-designed apps, I want a blanket block on any spend from my tablet, even while I am logged into accounts to access other stuff. I only want my tablet to be able to spend after it has warned me that it wants to, and told me why, how much, where to, for what, and what extras there might be. Ever. I never want it to be able to spend just by me clicking on something, or by a friend’s kid clicking a next-level button on a game.
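To make that concrete, here is a minimal sketch of the kind of blanket spend-block I mean. The hook is entirely hypothetical: no current OS exposes a single spending gate like this, and all the names are made up for illustration.

```python
# Minimal sketch of the blanket spend-block described above.
# Hypothetical: no real OS exposes this hook today.

class SpendGuard:
    def __init__(self):
        self.blocked = True  # block all spending by default, even when logged in

    def authorize(self, app: str, amount: float, payee: str,
                  reason: str, extras: str) -> bool:
        if self.blocked:
            # Always interrupt with the full details: who, how much,
            # where to, for what, and any extras, then ask the owner.
            prompt = (f"{app} wants to spend {amount:.2f} to {payee} "
                      f"for {reason} (extras: {extras}). Allow? [y/N] ")
            return input(prompt).strip().lower() == "y"
        return True

guard = SpendGuard()
# A game's "next level" button can no longer spend silently:
guard.authorize("KidsGame", 4.99, "app store", "level pack", "none")
```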

It isn’t at all easy to navigate a lot of apps when they are written by programmers from Mars, whose idea of an intuitive interface is to hide everything in the most obscure places behind the most obscure links. On a full PC, it is usually obvious where the menus all are and what they contain. On a tablet, it is clearly a mark of programmer status to be able to hide them from anyone who hasn’t been on a user course. This is further evidenced by the number of apps whose notes complain about previous users leaving negative feedback, telling you not to moan until you’ve done this and that and another thing, basically accusing the users of being idiots. It really is quite simple. If an app is well designed, it will be easy to use, and you won’t need to go on a user course first because it will be obvious how to work it at every menu, so there won’t be loads of customers moaning about how hard it is to do things on it. If you’re getting loads of bad user feedback, it isn’t your customers who are the idiots, it’s you.

Anyway, on my tablet, I am usually very far from sure where the menus might be that allow me to access account details or preferences or access authorizations, and when I do stumble across them, often it tells me that an account or an authorization is open, but doesn’t let me close it via that same page, leaving me to wander for ages looking elsewhere for the account details pages.

In short, obscure interfaces that give partial data and are interconnected far too much with other apps and services and preference pages and user accounts and utilities make it impossible for me to feel safe while I use a tablet logged in to any account with spending capability. If you use apps all the time you get used to them, but if you’re like me, and have zero patience, you tend to just abandon an app when you find it isn’t intuitive.

The endless pursuit of making all things connected has made all things unusable. It doesn’t take long for a pile of string to become tangled. We need to learn to do it right, and soon.

Really we aren’t there yet.


Too many cooks spoil the broth

Pure rant ahead.

I wasted ages this morning trying to get rid of the automated text fill in the Google user account login box. I accept that there are greater problems in the world, but this one was more irritating at the time. I am very comfortable living with AI, but I do want there to be a big OFF switch wherever it has an effect.

It wanted to log me in as me, in my main account, which is normally fine, but in the interests of holding back 1984, I resented Google ‘helping’ me by automatically remembering who uses my machine, which actually is only me, and filling in the data for me. I clean my machine frequently, and when I clean it, I want there to be no trace of anything on it, I want to have to type in all my data from scratch again. That way I feel safe when I clean up. I know if I have cleaned that no nasties are there sucking up stuff like usernames and passwords or other account details. This looked like it was immune to my normal cleanup.

I emptied all the cookies. No effect. I cleared memory. No effect. I ran CCleaner. No effect. I went into the browser settings and found more places that store stuff, and emptied those too. No effect. I cleaned the browsing history and deleted all the cookies and restarted. No effect. I went to my Google account home page and investigated all the settings there. It said all I had to do was hit remove and tick the account that I wanted to remove, which doesn’t actually work if the account doesn’t appear as an option when you do that. It only appeared when I didn’t want it to, and hid when I wanted to remove it. I tried a different browser and jumped through all the hoops again. No effect. I went back into browser settings and unchecked the remember-form-fill-data option. No effect. Every time I started the browser and hit sign in, my account name and picture still appeared, just waiting for my password. Somehow I finally stumbled on the screen that let me remove accounts, and hit remove. No effect.

Where was the data? Was it Google remembering my IP address and filling it in? Was it my browser, and I hadn’t found the right setting yet? Did I miss a cookie somewhere? Was it my PC, with some file Microsoft maintains to make my life easier? Could it be my security software helping by remembering all my critical information to make my life more secure? By now it was becoming a challenge well out of proportion to its original annoyance value.

So I went nuclear. I went to Google accounts and jumped through the hoops to delete my account totally. I checked by trying to log back in, and couldn’t. My account was definitely totally deleted. However, the little box still automatically filled in my account name and waited for my password. I entered it and nothing happened, obviously because the account didn’t exist any more. So now, I had deleted my Google account, with my email and Google+, but was still getting the log-in assistance from somewhere. I went back to the Google accounts pages and investigated the help file. It mentioned yet another helper that could be deactivated, account login assistant or something. I hit the deactivate button, expecting final victory. No effect. I went back to CCleaner and checked I had all the boxes ticked. I had not selected the password box. I did that, ran it, and hooray, no longer any assistance. CCleaner seems to keep the data if you want it to remember passwords, even if you clear form data. That form isn’t a form, it seems. CCleaner is brilliant, I refuse to criticize it, but it didn’t interpret the word form the way I do.

So now, finally, my PC was clean and Google no longer knew it was me using it. 1984 purged, I then jumped through all the hoops to get my Google account back. I wouldn’t recommend that as a thing to try, by the way. I have a Gmail account with all my email dating back to when Gmail came out. Deleting it to test something is probably not a great idea.

The lesson from all this is that there are far too many agencies pretending to look after you now by remembering stuff that identifies you. Your PC, your security software, master password files, your cookies, your browser with its remembered form-fill data and password data, the account login assistant, and of course Google. And that is just one company. Forgetting to clear any one of those means you’re still being watched.


Synchronisation multiplies this problem. You have to keep track of all the apps and all their interconnections and interdependencies on all your phones and tablets now too. After the Heartbleed problem, it took me ages to find all the account references on my tablets and clear them. Some can’t be deactivated within an app and require another app to be used to do so. Some apps tell you something is set but can’t change it. It is a nightmare. Someone finding a tablet might get access to a wide range of apps with spending capability. Now that they all synch to each other, it takes ages to remove something so that it doesn’t reappear in some menu, even temporarily. Kindle’s IP protection routine means it regularly tries to synch with books I have downloaded somewhere, and tells me it isn’t allowed to on my tablet. It does that whether I ask it to or not. It even tries to synch with books I long ago deleted and specifically asked it to remove, and still gives me messages warning that it doesn’t have permission to download them. I don’t want them, I deleted them, I told it to remove them, and it still says it is trying but can’t download them. Somewhere, on some tick list on some device or website, I forgot to check or uncheck a box, or more likely didn’t even know it existed, and that means forever I have to wait for my machines to jump through unwanted and unnecessary hoops. It is becoming near impossible to truly delete something – unless you want to keep it. There are far too many interconnections and routes to keep track of them all, too many intermediaries, too many tracking markers. We now have far too many different agencies thinking they are responsible for your data, all wanting to help, and all falling over each other and getting in your way, making your life difficult.

The old proverb says that too many cooks spoil the broth. We’re there now.


The future of biometric identification and authentication

If you work in IT security, the first part of this will not be news to you; skip ahead to the section on the future. Otherwise, the first sections look at the current state of biometrics and some of what we already know about their security limitations.

Introduction

I just read an article on fingerprint recognition. Biometrics has been hailed by some as a wonderful way of determining someone’s identity, and by others as a security mechanism that is far too easy to spoof. I generally fall in the second category. I don’t mind using it for simple unimportant things like turning on my tablet, on which I keep nothing sensitive, but so far I would never trust it as part of any system that gives access to my money or sensitive files.

My own history is that voice recognition still doesn’t work for me, fingerprints don’t work for me, and face recognition doesn’t work for me. Iris scan recognition does, but I don’t trust that either. Let’s take a quick look at conventional biometrics today and the near future.

Conventional biometrics

Fingerprint recognition.

I use a Google Nexus, made by Samsung. Samsung is in the news today because their Galaxy S5 fingerprint sensor was hacked by SRLabs minutes after release, not the most promising endorsement of their security competence.

http://www.telegraph.co.uk/technology/samsung/10769478/Galaxy-S5-fingerprint-scanner-hacked.html

This article says the sensor is used in the user authentication to access Paypal. That is really not good. I expect quite a few engineers at Samsung are working very hard indeed today. I expect they thought they had tested it thoroughly, and their engineers know a thing or two about security. Every engineer knows you can photograph a fingerprint and print a replica in silicone or glue or whatever. It’s the first topic of discussion at any Biometrics 101 meeting. I would assume they tested for that. I assume they would not release something they expected to bring instant embarrassment on their company, especially something failing by that classic mechanism. Yet according to this article, that seems to be the case. Given that Samsung is one of the most advanced technology companies out there, and that they can be assumed to have made reasonable effort to get it right, that doesn’t offer much hope for fingerprint recognition. If they don’t do it right, who will?

My own history with fingerprint recognition is having to join a special queue every day at Universal Studios because their fingerprint recognition entry system never once recognised me or my child. So I have never liked it, because of false negatives. For the people it does work for, their fingerprints are all over the place, some in high quality, and can easily be obtained and replicated.

As just one token in multi-factor authentication, it may yet have some potential, but as a primary access key, not a chance. It will probably remain a weak authenticator.

Face recognition

There are many ways of recognizing faces – visible light, infrared or UV, bone structure, face shapes, skin texture patterns, lip-prints, facial gesture sequences… These could be combined in simultaneous multi-factor authentication. The technology isn’t there yet, but it offers more hope than fingerprint recognition. Using the face alone is no good though. You can make masks from high-resolution photographs of people, and the photos could be made using the same spectrum known to be used in the recognition system. Adding gestures is a nice idea, but in a world where cameras are becoming ubiquitous, it wouldn’t be too hard to capture the sequence you use. It is entirely feasible to make a mask appear alive by adding sensing, using video to detect any inspection for pulse, blood flow or gesture requests, and providing an appropriate response, though the effort would deter casual entry. So I am not encouraged to believe face recognition would be secure unless and until some cleverer innovation occurs.

What I do know is that I set my tablet up to recognize me and it works about one time in five. The rest of the time I have to wait till it fails and then type in a PIN. So on average, it actually slows entry down. False negative again. Giving lots of false negatives without the reward of avoiding false positives is not a good combination.

Iris scans

I was a subject in one of the early trials of iris recognition. It seemed very promising. It always recognized me and never confused me with someone else. That was a very small scale trial though, so I’d need a lot more convincing before I let it near my bank account. I raised the problem of replicating an iris using a high quality printer and was assured that couldn’t work, because the system checks that the eye is alive by watching for jitter and by shining a light and watching for pupil contraction. Call me too suspicious, but I didn’t and don’t find that at all reassuring. It won’t be too long before we can make a thin-sheet high-res polymer display layered onto a polymer gel underlayer that contracts under an electric field, with light sensors built in and some software analysis for real-time response. You could even do it as part of a mask, with the rest of the face also faithfully mimicking all the textures, real-time responses, blood flow, gesture sequences and so on. If the prize is valuable enough to justify the effort, every aspect of the eyes, face and fingerprints could be mimicked. It may be more Mission Impossible than casual high street robbery, but I can’t yet have any confidence that any part of the face or gestures would offer good security.

DNA

We hear frequently that DNA is a superbly secure authenticator. Every one of your cells can identify you. You almost certainly leave a few cells at the scene of a crime so can be caught, and because your DNA is unique, it must have been you that did it. Perfect, yes? And because it is such a perfect authenticator, it could be used confidently to police entry to secure systems.

No! First, even for a criminal trial, only a few parts of your DNA are checked; they don’t do an entire genome match. That already brings the chances of a match down to millions rather than billions. A chance of millions to one sounds impressive to a jury until you look at the figure from the other direction. If you have a 1 in 70 million chance of a match, a prosecution barrister might try to present that as a 70 million to 1 chance that you’re guilty, and a juror may well be taken in. The other side of that is that 100 people of the 7 billion would have that same 1 in 70 million match. So your competent defense barrister should present it as only a 1 in 100 chance that it was you. Not quite so impressive.
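The arithmetic behind that defense argument is easy to check:

```python
# The jury arithmetic from above, worked through.
world_population = 7_000_000_000
match_probability = 1 / 70_000_000   # chance a random person matches the partial profile

expected_matches = world_population * match_probability
print(expected_matches)              # 100.0: people sharing that partial match

# Given only the DNA evidence, the chance it was any particular one of them:
print(1 / expected_matches)          # 0.01, i.e. 1 in 100 - far from proof
```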

I doubt a DNA system used commercially for security systems would be as sophisticated as one used in forensic labs. It will be many years before an instant response using large parts of your genome could be made economic. But what then? Still no. You leave DNA everywhere you go, all day, every day. I find it amazing that it is permitted as evidence in trials, because it is so easy to get hold of someone’s hairs or skin flakes. You could gather hairs or skin flakes from any bus seat or hotel bathroom or bed. Any maid in a big hotel or any airline cabin attendant could gather packets of tissue and hair samples and in many cases could even attach a name to them.  Your DNA could be found at the scene of any crime having been planted there by someone who simply wanted to deflect attention from themselves and get someone else convicted instead of them. They don’t even need to know who you are. And the police can tick the crime solved box as long as someone gets convicted. It doesn’t have to be the culprit. Think you have nothing to fear if you have done nothing wrong? Think again.

If someone wants to get access to an account, but doesn’t mind whose, perhaps a DNA-based entry system would offer good potential, because people perceive it as secure, whereas it simply isn’t. So it might not be paired with other secure factors. Going back to the maid or cabin attendant. Both are low paid. A few might welcome some black market bonuses if they can collect good quality samples with a name attached, especially a name of someone staying in a posh suite, probably with a nice account or two, or privy to valuable information. Especially if they also gather their fingerprints at the same time. Knowing who they are, getting a high res pic of their face and eyes off the net, along with some voice samples from videos, then making a mask, iris replica, fingerprint and if you’re lucky also buying video of their gesture patterns from the black market, you could make an almost perfect multi-factor biometric spoof.

It also becomes quickly obvious that the people who are the most valuable or important are also the people who are most vulnerable to such high quality spoofing.

So I am not impressed with biometric authentication. It sounds good at first, but biometrics are too easy to access and mimic. Other security vulnerabilities apply in sequence too. If your biometric is being measured and sent across a network for authentication, all the other usual IT vulnerabilities still apply. The signal could be intercepted and stored, replicated another time, and you can’t change your body much, so once your iris has been photographed or your fingerprint stored and hacked, it is useless for ever. The same goes for the other biometrics.

Dynamic biometrics

Signatures, gestures and facial expressions offer at least the chance to change them. If your signature has been used, you could start using a new one. You could sign different phrases each time, as a personal one-time key. You could invent new gesture sequences. These are really just an equivalent to passwords. You have to remember them and which one you use for which system. You don’t want a street seller using your signature to verify a tiny transaction and then risk the seller using the same signature to get right into your account.

Summary of status quo

This all brings us back to the most basic of security practice. You can only use static biometrics safely as a small part of a multi-factor system, and you have to use different dynamic biometrics such as gestures or signatures on a one time basis for each system, just as you do with passwords. At best, they provide a simple alternative to a simple password. At worst, they pair low actual security with the illusion of high security, and that is a very bad combination indeed.

So without major progress, biometrics in its conventional meaning doesn’t seem to have much of a future. If it is not much more than a novelty or a toy, and can only be used safely in conjunction with some proper security system, why bother at all?

The future

You can’t easily change your eyes or your DNA or your skin, but you can add things to your body that are similar to biometrics, or that interact with it, but offer the flexibility and replaceability of electronics.

I have written frequently about active skin, using the skin as a platform for electronics, and I believe the various layers of it offer the best potential for security technology.

Long ago, RFID chip implants became commonplace in pets, and some people even had them inserted too. RFID variants could easily be printed on a membrane and stuck onto the skin surface. They could be used for one-time keys too, changing each time they are used. Adding accelerometers, magnetometers, pressure sensors or even location sensors could all offer ways of enhancing security options. Active skin allows easy combination of fingerprints with other factors.
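One-time keys are already standard practice elsewhere in security; a counter-based scheme such as HOTP (RFC 4226) shows how a cheap printed device could produce a fresh key for every use. The skin patch itself is hypothetical, but the algorithm below is the real HOTP construction:

```python
# One-time key generation of the kind a printed RFID membrane could use.
# Counter-based HOTP (RFC 4226): the patch and the verifier share a
# secret, and each authentication uses a fresh counter value, so a
# replayed key is useless. The patch itself is hypothetical.

import hashlib
import hmac
import struct

def one_time_key(secret: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

shared_secret = b"patch-and-verifier-shared-secret"  # illustrative value
for use in range(3):
    print(one_time_key(shared_secret, use))      # a different key every use
```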


Ultra-thin, non-invasive security patches could be stuck onto the skin, and could not be removed without damaging them, so they would offer a potentially valuable platform. Pretty much any kinds and combinations of electronics could be used in them. They could easily be made to have a certain lifetime. Very thin ones could wash off after a few days, so could be useful for theme park entry during holidays or for short term contractors. Banks could offer stick-on electronic patches that change fundamentally how they work every month, making it very hard to hack them.

Active skin can go inside the skin too, not just on the surface. You could for example have an electronic circuit or an array of micro-scale magnets embedded among the skin cells in your fingertip. Your fingerprint alone could easily be copied and spoofed, but not the accompanying electronic interactivity from the active skin that can be interrogated at the same time. Active skin could measure all sorts of properties of the body too, so personal body chemistry at a particular time could be used. In fact, medical monitoring is the first key development area for active skin, so we’re likely to have a lot of body data available that could make new biometrics. The key advantage here is that skin cells are very large compared to electronic feature sizes. A decent processor or memory can be made around the size of one skin cell and many could be combined using infrared optics within the skin. Temperature or chemical gradients between inner and outer skin layers could be used to power devices too.

If you are signing something, the signature could be accompanied by a signal from the fingertip, sufficiently close to the surface being signed to be useful. A ring on a finger could also offer a voluminous security electronics platform to house any number of sensors, memory and processors.

Skin itself offers a reasonable communications route, able to carry a data stream of a few Mbit/s, so touching something could allow a lot of data to transfer very quickly. A smart watch or any other piece of digital jewelry or an active skin security patch could use your fingertip to send an authentication sequence. The watch would know who you are by constant proximity and via its own authentication tools. It could easily be deauthorized instantly when detached, or via a remote command.
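Rough arithmetic shows why a momentary touch is enough for an authentication exchange at those rates (the exact rate and token size below are assumptions for illustration):

```python
# Rough arithmetic for an authentication exchange over a skin-contact
# link, taking the few-Mbit/s figure quoted above. Both numbers are
# assumptions for illustration.
link_rate = 4e6                     # bits per second through the skin
token_bytes = 256                   # a generously sized one-time credential

transfer_time_s = token_bytes * 8 / link_rate
print(f"{transfer_time_s * 1000:.2f} ms")   # ~0.51 ms: a touch is plenty
```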

Active makeup offers a novel mechanism too. Makeup will soon exist that uses particles that can change color or alignment under electronic control, potentially allowing video-rate pattern changes. While that makes for fun makeup, it also allows for sophisticated visual authentication sequences using one-time keys. Makeup doesn’t have to be confined to the face of course, and security makeup could maybe be used on the forearm or hands. Combined with static biometrics, many-factor authentication could be implemented.

I believe active skin, using membranes added or printed onto and even within the skin, together with the use of capsules, electronic jewelry, and even active makeup offers the future potential to implement extremely secure personal authentication systems. This pseudo-biometric authentication offers infinitely more flexibility and changeability than the body itself, but because it is attached to the body, offers much the same ease of use and constant presence as other biometrics.

Biometrics may be pretty useless as it is, but the field does certainly have a future. We just need to add some bits. The endless potential variety of those bits and their combinations makes the available creativity space vast.


Virtual reality. Will it stick this time?

My first job was in missile design, and for a year the lab I worked in was a giant bra-shaped building: two massive domes joined by a short link-way, which had been taken out of use years earlier. The domes had been used by soldiers to fire simulated missiles at simulated planes, and were built in the 1960s. One dome had a hydraulic moving platform to simulate firing from a ship. The entire dome surface was used as a screen to show the plane and missile. The missile canisters held by the soldier were counterweighted, with a release mechanism coordinated with the fire instruction, and the soldier’s headphones would produce a corresponding loud blast to accompany the physical weight change at launch, so that they would feel as full a range as possible of the sensations experienced by a real soldier on a real battlefield. The missile trajectory and control interface were simulated by analog computers. So virtual reality may have hit the civilian world around 1990, but it was in use several decades earlier in the military world. In 1984, we even considered using our advancing computers to create what we called waking dreaming, simulating any chosen experience for leisure. Jaron Lanier has somehow been credited with inventing VR, and he contributed to its naming, but the fact is he ‘invented’ it several decades after it was already in common use and after the concepts were already pretty well established.

I wrote a paper in 1991 based on BT’s VR research in which I made my biggest ever futurology mistake. I worked out the number crunching requirements and pronounced that VR would overtake TV as an entertainment medium around 2000. I need hardly point out that I was wrong. I have often considered why it didn’t happen the way I thought it would. On one front, we did get the entertainment of messing around in 3D worlds, and it is the basis of almost all computer gaming now. So that happened just fine, it just didn’t use stereo vision to convey immersion. It turned out that the immersion is good enough on a TV or PC screen.

Also, in the early 1990s, just as IT companies may have been considering making VR headsets, the class action lawsuit became very popular, and some of those suits were based on very tenuous connections to real cause and effect, and meanwhile some VR headset users were reporting eye strain or disorientation. I imagine the lawyers in those IT companies were thinking of every teenager who develops any eye problem suing them, just in case it might have been caused in part by use of their headset. Those issues, plus the engineering difficulties of commercialising manufacture of good quality displays, were probably enough to kill VR.

However, I later enjoyed many a simulator ride at Disney and Universal. One such ride allowed me to design my own roller coaster with twists and loops and then ride it in a simulator, and it was especially enjoyable. The pull of simulator rides remains powerful. Playing a game on an Xbox is fun, but doesn’t compare with a simulator ride.

I think much of the future of VR lies in simulators where it already thrives. They can go further still. Tethered simulators can throw you around a bit but can’t manage the same range of experience that you can get on a roller coaster. Imagine using a roller coaster where you see the path ahead via a screen. As your cart reaches the top of a hill, the track apparently collapses and you see yourself hurtling towards certain death. That would scare the hell out of me. Combining the g-forces that you can get on a roller coaster with imaginative visual effects delivered via a headset would provide the ultimate experience.

Compare that with using a nice visor on its own. Sure, you can walk around an interesting object like a space station, or enjoy more immersive gaming, or you can co-design molecules. That sort of app has been used for many years in research labs anyway. Or you can train people in health and safety without exposing them to real danger. But where’s the fun? Where’s the big advantage over TV-based gaming? 3D has pretty much failed yet again for TV and movies, and hasn’t made much impact in gaming yet. Do we really think that adding a VR headset will change it all, even though 3D glasses didn’t?

I was a great believer in VR. With the active contact lens, it can be ultra-lightweight and minimally invasive while ultra-realistic. Adding active skin interfacing to the nervous system to convey physical sensation will eventually help too. But unless plain old VR is accompanied by stimulation of the other senses, just as a simulator provides, I fear the current batch of VR enthusiasts are just repeating the same mistakes I made over twenty years ago. I always knew what you could do with it and that the displays would get near perfect one day, and I got carried away with excitement over the potential. That’s what caused my error. Beware you don’t make the same one. This could well be just another big flop. I hope it isn’t though.

The internet of things will soon be history

I’ve been a full time futurologist since 1991, and an engineer working on far future R&D stuff since I left uni in 1981. It is great seeing a lot of the 1980s dreams about connecting everything together finally starting to become real, although as I’ve blogged a bit recently, some of the grander claims we’re seeing for future home automation are rather unlikely. Yes you can, but you probably won’t, though some people will certainly adopt some stuff. Now that most people are starting to get the idea that you can connect things and add intelligence to them, we’re seeing a lot of overshoot too on the importance of the internet of things, which is the generalised form of the same thing.

It’s my job as a futurologist not only to understand that trend (and I’ve been yacking about putting chips in everything for decades) but then to look past it to see what is coming next. Or, if it is here to stay, that would be an important conclusion too, but you know what, it just isn’t. The internet of things will be about as long-lived as most other generations of technology, such as the mobile phone. Do you still have one? I don’t. Well, I do, but they are all in a box in the garage somewhere. I have a general purpose mobile computer that happens to be a phone as well as dozens of other things. So, probably, do you. The only reason you might still call it a smartphone or an iPhone is because it has to be called something and nobody in the IT marketing industry has any imagination. PDA was a rubbish name and that was the choice.

You can stick chips in everything, and you can connect them all together via the net. But that capability will disappear quickly into the background and the IT zeitgeist will move on. It really won’t be very long before a lot of the things we interact with are virtual, imaginary. To all intents and purposes they will be there, and will do wonderful things, but they won’t physically exist. So they won’t have chips in them. You can’t put a chip into a figment of imagination, even though you can make it appear in front of your eyes and interact with it. A good topical example of this is the smart watch, all set to make an imminent grand entrance. Smart watches are struggling to solve battery problems, and they’ll be expensive too. They don’t need batteries if they are just images, and a fully interactive image of a hugely sophisticated smart watch could also be made free, as one of a million things done by a free app. The smart watch’s demise is already inevitable. The energy it takes to produce an image on the retina is a great deal less than the energy needed to power a smart watch on your wrist, and the cost is a few seconds of your time explaining to an AI how you’d like your wrist to be accessorised, rather fewer seconds than you’d have spent choosing something that costs a lot. In fact, the energy needed for direct retinal projection and the associated comms is far less than can be harvested easily from your body or the environment, so there is no battery problem to solve.

If you can do that with a smart watch, making it just an imaginary item, you can do it to any kind of IT interface. You only need to see the interface, the rest can be put anywhere, on your belt, in your bag or in the IT ether that will evolve from today’s cloud. My pad, smartphone, TV and watch can all be recycled.

I can also do loads of things with imagination that I can’t do for real. I can have an imaginary wand. I can point it at you and turn you into a frog. Then in my eyes, the images of you change to those of a frog. Sure, it’s not real, you aren’t really a frog, but you are to me. I can wave it again and make the building walls vanish, so I can see the stuff on sale inside. A few of those images could be very real and come from cameras all over the place, the chips-in-everything stuff, but actually, I don’t have much interest in most of what the shop actually has, I am not interested in most of the local physical reality of a shop; what I am far more interested in is what I can buy, and I’ll be shown those things, in ways that appeal to me, whether they’re physically there or on Amazon Virtual. So 1% is chips-in-everything, 99% is imaginary, virtual, some sort of visual manifestation of my profile, Amazon Virtual’s AI systems, how my own AI knows I like to see things, and a fair bit of other people’s imagination to design the virtual decor, the nice presentation options, the virtual fauna and flora making it more fun, and countless other intermediaries and extramediaries, or whatever you call all those others that add value and fun to an experience without actually getting in the way. All just images directly projected onto my retinas. Not so much chips-in-everything as no chips at all except a few sensors, comms and an infinitesimal timeshare of a processor and storage somewhere.

A lot of people dismiss augmented reality as an irrelevant passing fad. They say video visors and active contact lenses won’t catch on because of privacy concerns (and I’d agree that is a big issue that needs to be discussed and sorted, but it will be discussed and sorted). But when you realise that what we’re going to get isn’t just an internet of things, but a total convergence of physical and virtual, a coming together of real and imaginary, an explosion of human creativity, a new renaissance, a realisation of yours and everyone else’s wildest dreams as part of your everyday reality; when you realise that, then the internet of things suddenly starts to look more than just a little bit boring, part of the old days when we actually had to make stuff, and you had to have the same as everyone else, and it all cost a fortune and needed charging up all the time.

The internet of things is only starting to arrive. But it won’t stay for long before it hides in the cupboard and disappears from memory. A far, far more exciting future is coming up close behind. The world of creativity and imagination. Bring it on!