Tag Archives: interfaces

Fluorescent microsphere mist displays

A few 3D mist displays have been demonstrated over the last decade. I’ve seen a couple at trade shows and have been impressed. To date, they use mists or curtains of tiny water droplets to make a 3D space onto which to project an image, so you get a walk-through, life-sized 3D display. Like this:

Leia Display System Uses A Screen Made Of Water Mist To Display 3D Projections

or check out: http://ixfocus.com/top-10-best-3d-water-projections-ever/

Two years ago, I suggested using a forehead-mounted mist projector:

Forehead 3D mist projector

so you could have a 3D image made right in front of you anywhere.

This week, a holographic display called Gatebox has been doing the rounds on Twitter:

https://www.geek.com/tech/gatebox-wants-to-be-your-personal-holographic-companion-1682967/

It looks OK, but mist displays might be a better solution for everyday use because they can be made a lot bigger, more cheaply. However, nobody really wants water mist causing electrical problems in their PCs or making their notebook paper soggy. You can use smoke as a mist substitute, but then you have a cloud of smoke around you. So…

Suppose that instead of using water droplets, and walking around veiled in fog or smoke or accompanied by electrical crackling and dead PCs, the mist were made of tiny, dry and obviously non-toxic particles such as fluorescent micro-spheres that are invisible to the naked eye and transparent to visible light, so you can’t see the mist at all and it won’t make stuff damp. Instead of having visible light projected onto them, the particles are made of fluorescent material, so they are illuminated by a UV projector and fluoresce with the right colour to make the visible display. There are plenty of fluorescent materials that could be made into tiny particles, even nano-particles, and made into an invisible mist that produces a bright and high-resolution display. Even if non-toxic is too big an ask, or the fluorescent material is too expensive to waste, a large box that keeps the particles contained and recycles them for the next display could still be bigger, better, brighter and cheaper than a large holographic display.
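Just to make the principle concrete, here’s a rough Python sketch of the projector side. It assumes, purely for illustration, three particle types that fluoresce red, green and blue under different UV bands with a roughly linear response; the function name, gains and layout are mine, not part of the idea itself:

```python
import numpy as np

def rgb_frame_to_uv_drive(frame_rgb, gains=(1.0, 1.0, 1.0)):
    """Split a target RGB frame into three UV drive channels.

    Purely illustrative: assumes three fluorescent particle types in the
    mist, emitting red, green and blue respectively, each excited by its
    own UV band with a roughly linear response scaled by a per-channel gain.

    frame_rgb: HxWx3 array of target brightness values in [0, 1].
    Returns an HxWx3 array of UV drive levels, one channel per particle type.
    """
    frame = np.clip(np.asarray(frame_rgb, dtype=float), 0.0, 1.0)
    return frame * np.asarray(gains, dtype=float)  # per-channel UV intensity

# Example: a 2x2 test frame, boosting the red channel slightly to compensate
# for a (hypothetical) less efficient red-emitting particle.
test = np.array([[[1, 0, 0], [0, 1, 0]],
                 [[0, 0, 1], [1, 1, 1]]], dtype=float)
print(rgb_frame_to_uv_drive(test, gains=(1.2, 1.0, 1.0)))
```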

Remember, you saw it here first. My 101st invention of 2016.

25th anniversary of stick interface for 3D world play

I don’t have the exact date when I thought this up, so it might be a week or two out, but it was certainly late 1991, so I thought I’d celebrate its 25th anniversary by blogging the idea again.

The idea was a simple stick with reflectors on it that could easily be tracked using an infrared beam and detector(s). Most tools, and especially tools for making crafts or drawing, can be approximated by a stick, and we all have a lifetime of experience in manipulating sticks, so they would be the perfect interface, and cost almost nothing to make. Here’s a pretty picture:

Stick 2.0

You can easily imagine how you could use such a stick to carve out a wall or a roof or a piece of furniture in your 3D world, or to play any kind of sports. Nintendo built a complex wand device to do this expensively, but really a simple stick can do most of that too.
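To show just how little processing such a stick would need, here’s a rough Python sketch. It assumes, purely for illustration, that the infrared detector(s) already report the 3D positions of two reflectors on the stick, and derives the tip position and pointing direction from them; the function name, units and two-reflector layout are mine, not part of the original design:

```python
import numpy as np

def stick_pose(reflector_a, reflector_b, tip_offset=0.10):
    """Estimate the stick's pose from two tracked reflector positions.

    reflector_a, reflector_b: 3D positions (metres) of the two reflectors,
        as reported by the infrared tracking system (assumed to exist).
    tip_offset: distance (metres) from reflector_b out to the working tip.

    Returns (tip_position, unit_direction) as numpy arrays.
    """
    a = np.asarray(reflector_a, dtype=float)
    b = np.asarray(reflector_b, dtype=float)
    direction = b - a
    direction /= np.linalg.norm(direction)    # unit vector along the stick
    tip = b + tip_offset * direction          # extrapolate to the tip
    return tip, direction

# Example: two tracked frames become the start of a 3D stroke (a "carved"
# line in the virtual world) by collecting successive tip positions.
frames = [((0.0, 0.0, 0.0), (0.0, 0.0, 0.3)),
          ((0.0, 0.1, 0.0), (0.0, 0.1, 0.3))]
stroke = [stick_pose(a, b)[0] for a, b in frames]
print(stroke)
```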

The IT dark age – The relapse

I long ago used a slide in my talks about the IT dark age, showing how we’d come through a period (early 90s) where engineers were in charge and it worked, into an era where accountants had got hold of it and were misusing it (mid 90s), followed by a terrible period where administrators discovered it and used it in the worst ways possible (late 90s, early 00s). After that dark age, we started to emerge into an age of IT enlightenment, where the dumbest of behaviours had hopefully been filtered out and we were starting to use it correctly and reap the benefits.

Well, we’ve gone into relapse. We have entered a period of uncertain duration where the hard-won wisdom we’d accumulated and handed down has been thrown in the bin by a new generation of engineers, accountants and administrators, and some extraordinarily stupid decisions and system designs are once again being made. The new design process is apparently quite straightforward: What task are we trying to solve? How can we achieve this in the least effective, least secure, most time-consuming, most annoying, most customer-loyalty-destructive way possible? Now, how fast can we implement that? Get to it!

If aliens landed and looked at some of the recent ways we have started to use IT, they’d conclude that this was all a green conspiracy, designed to make everyone so anti-technology that we’d be happy to throw hundreds of years of progress away and go back to the 16th century. Given that they have been so successful in destroying so much of the environment under the banner of protecting it, there is sufficient evidence that greens really haven’t a clue what they are doing, but worse still, gullible political and business leaders will cheerfully do the exact opposite of what they want as long as the right doublespeak is used when they’re sold the policy.

The main Green laboratory in the UK is the previously nice seaside town of Brighton. Being an extreme socialist party that one might think would be a binperson’s best friend, the Greens in charge nevertheless managed to force their binpeople to go on strike, turning what ought to be an environmental paradise into a stinking, litter-strewn cesspit for several weeks. They’ve also managed to create near-permanent traffic gridlock, as if to maximise the amount of air pollution and CO2 they can get from the traffic.

More recently, they have decided to change their parking meters for the very latest IT. No longer do you have to reach into your pocket, push a few coins into a machine and carry a paper ticket all the way back to your car windscreen. Such a tedious process consumed up to a minute of your day. It simply had to be replaced with proper modern technology. There are loads of IT solutions to pick from, but the Greens apparently decided to go for the worst possible implementation, resulting in numerous press reports about how awful it is. IT should not be awful; it can and should be done in ways that are better in almost every way than old-fashioned systems. I rarely drive anyway and go to Brighton very rarely, but I am still annoyed at incompetent or deliberate misuse of IT.

If I were to go there by car, I’d also have to go via the Dartford Crossing, where again inappropriate IT has been used incompetently to replace a tollbooth system that makes no economic sense in the first place. The government would be better off if it simply paid for the crossing directly. Instead, each person using it is likely to be fined if they don’t know how it operates, and even if they do, they have to spend far more time and effort to pay than before. Again, it is a severe abuse of IT, conferring a tiny benefit on a tiny group of people at the expense of a significant extra load on very many people.

Another financial example is the migration to self-pay terminals in shops. In Stansted Airport’s W H Smith a couple of days ago, I sat watching a long queue of people taking forever to buy newspapers. Instead of a few seconds handing over a coin and walking out, it was taking a minute or more to read menus, choose which buttons to touch, inspect papers to find barcodes, fumble for credit cards, check some more boxes, check they hadn’t left their boarding pass or paper behind, and finally leave. An assistant stood there idle, watching people struggle instead of serving them in a few seconds. I wanted a paper, but the long queue was sufficient deterrent and they lost the sale. Who wins in such a situation? The staff who lost their jobs certainly didn’t. I as the customer had no paper to read, so I didn’t win. Given all the lost sales, I would be astonished if W H Smith were better off, so they didn’t win either. The airport will likely make less from its take too. Even the terminal manufacturing industry only swaps one type of POS terminal for another with marginally different costs. I’m not knocking W H Smith, they are just one of loads of companies doing this now. But it isn’t progress, it is going backwards.

When I arrived at my hotel, another electronic terminal was replacing a check-in assistant with a check-in terminal usage assistant. He was very friendly and helpful, but check-in wasn’t any easier or faster for me, and the terminal design still needed him to be there, because, like so many others, it was designed by people who have zero understanding of how other people actually do things. Just like those ticket machines in rail stations that we all detest.

When I got to my room, the thermostat used a tiny LCD panel, with tiny meaningless symbols, with no backlight, in a dimly lit room, with black text on a dark green background. So even after searching out my reading glasses, and since I hadn’t brought a torch with me, I couldn’t see a thing on it, so I couldn’t use the air conditioning. An on/off switch and a simple wheel with temperature marked on it used to work perfectly fine. If it ain’t broke, don’t do your very best to totally wreck it.

These are just a few everyday examples, alongside other everyday IT abuses such as minute fonts and frequent use of meaningless icons instead of straightforward text. IT is wonderful. We can make devices with absolutely superb capability for very little cost. We can make lives happier, better, easier, healthier, more prosperous, even more environmentally friendly.

Why then are so many people so intent on using advanced IT to drag us back into another dark age?


Interfacial prejudice

This blog post was prompted by an interaction with Nick Colosimo, so thanks, Nick.

We were discussing whether usage differences for gadgets were generational. I think they are, but not because older people find it hard to learn new tricks. Apart from a few unfortunate people whose brains go downhill when they get old, older people have shown they are perfectly able and willing to learn web stuff. Older people were among the busiest early adopters of social media.

I think the problem is the volume of earlier habits that need to be unlearned. I am 53 and have used computers every day since 1981. I have used slide rules and log tables, an abacus, an analog computer, several mainframes, a few minicomputers, many assorted Macs and PCs, and numerous PDAs, smartphones and now tablets. They all have very different ways of using them, and although I can’t say I struggle with any of them, I do find the differing implementations of features and mechanisms annoying. Each time a new operating system comes along, or a new style of PDA, you have to learn a new design language, remember where the menus, sub-menus and various features are hidden on this one, how they interconnect and what depends on what.

That’s where the prejudice kicks in. The many hours of experience you have on previous systems have made you adept at navigating through a sea of features, menus, facilities. You are native to the design language, the way you do things, the places to look for buttons or menus, even what the buttons look like. You understand its culture, thoroughly. When a new device or OS is very different, using it is like going on holiday. It is like emigrating if you’re making a permanent switch. You have the ability to adapt, but the prejudice caused by your long experience on a previous system makes that harder. Your first uses involve translation from the old to the new, just like translating foreignish to your own language, rather than thinking in the new language as you will after lengthy exposure. Your attitude to anything on the new system is colored by your experiences with the old one.

It isn’t stupidity that is making you slow and incompetent. It’s interfacial prejudice.

The internet of things will soon be history

I’ve been a full-time futurologist since 1991, and an engineer working on far-future R&D stuff since I left uni in 1981. It is great seeing a lot of the 1980s dreams about connecting everything together finally starting to become real, although as I’ve blogged a bit recently, some of the grander claims we’re seeing for future home automation are rather unlikely. Yes you can, but you probably won’t, though some people will certainly adopt some stuff. Now that most people are starting to get the idea that you can connect things and add intelligence to them, we’re seeing a lot of overshoot too on the importance of the internet of things, which is the generalised form of the same thing.

It’s my job as a futurologist not only to understand that trend (and I’ve been yacking about putting chips in everything for decades) but then to look past it to see what is coming next. Or if it is here to stay, then that would be an important conclusion too, but you know what, it just isn’t. The internet of things will be about as long-lived as most other generations of technology, such as the mobile phone. Do you still have one? I don’t, well I do, but they are all in a box in the garage somewhere. I have a general purpose mobile computer that happens to be a phone as well as dozens of other things. So do you, probably. The only reason you might still call it a smartphone or an iPhone is because it has to be called something and nobody in the IT marketing industry has any imagination. PDA was a rubbish name and that was the choice.

You can stick chips in everything, and you can connect them all together via the net. But that capability will disappear quickly into the background and the IT zeitgeist will move on. It really won’t be very long before a lot of the things we interact with are virtual, imaginary. To all intents and purposes they will be there, and will do wonderful things, but they won’t physically exist. So they won’t have chips in them. You can’t put a chip into a figment of imagination, even though you can make it appear in front of your eyes and interact with it. A good topical example of this is the smart watch, all set to make an imminent grand entrance. Smart watches are struggling to solve battery problems, and they’ll be expensive too. They don’t need batteries if they are just images, and a fully interactive image of a hugely sophisticated smart watch could also be made free, as one of a million things done by a free app. The smart watch’s demise is already inevitable. The energy it takes to produce an image on the retina is a great deal less than the energy needed to power a smart watch on your wrist, and the cost is just a few seconds of your time to explain to an AI how you’d like your wrist to be accessorised, rather fewer seconds than you’d have spent choosing something that costs a lot. In fact, the energy needed for direct retinal projection and associated comms is far less than can be harvested easily from your body or the environment, so there is no battery problem to solve.

If you can do that with a smart watch, making it just an imaginary item, you can do it to any kind of IT interface. You only need to see the interface; the rest can be put anywhere, on your belt, in your bag or in the IT ether that will evolve from today’s cloud. My pad, smartphone, TV and watch can all be recycled.

I can also do loads of things with imagination that I can’t do for real. I can have an imaginary wand. I can point it at you and turn you into a frog. Then in my eyes, the images of you change to those of a frog. Sure, it’s not real, you aren’t really a frog, but you are to me. I can wave it again and make the building walls vanish, so I can see the stuff on sale inside. A few of those images could be very real and come from cameras all over the place, the chips-in-everything stuff, but actually I don’t have much interest in the local physical reality of the shop; what I am far more interested in is what I can buy, and I’ll be shown those things, in ways that appeal to me, whether they’re physically there or on Amazon Virtual. So 1% is chips-in-everything, and 99% is imaginary, virtual: some sort of visual manifestation of my profile, Amazon Virtual’s AI systems, how my own AI knows I like to see things, and a fair bit of other people’s imagination to design the virtual decor, the nice presentation options, the virtual fauna and flora making it more fun, and countless other intermediaries and extramediaries, or whatever you call all those others that add value and fun to an experience without actually getting in the way. All just images projected directly onto my retinas. Not so much chips-in-everything as no chips at all, except a few sensors, comms and an infinitesimal timeshare of a processor and storage somewhere.

A lot of people dismiss augmented reality as an irrelevant passing fad. They say video visors and active contact lenses won’t catch on because of privacy concerns (and I’d agree that is a big issue that needs to be discussed and sorted, but it will be discussed and sorted). But when you realise that what we’re going to get isn’t just an internet of things, but a total convergence of physical and virtual, a coming together of real and imaginary, an explosion of human creativity, a new renaissance, a realisation of yours and everyone else’s wildest dreams as part of your everyday reality; when you realise that, then the internet of things suddenly starts to look more than just a little bit boring, part of the old days when we actually had to make stuff, and you had to have the same as everyone else, and it all cost a fortune and needed charging up all the time.

The internet of things is only starting to arrive. But it won’t stay for long before it hides in the cupboard and disappears from memory. A far, far more exciting future is coming up close behind. The world of creativity and imagination. Bring it on!