Category Archives: marketing

The future of retail and the high street

Over 3 months since my last blog, because… reasons. Futurologists are often asked about the future of the high street and the future of retail, obviously strongly connected topics. The high street as we knew it not long ago has already changed hugely, yet it always seems to be under imminent threat of extinction. I have blogged on it before, but am shocked that my last post was a few years ago, so it's time for an update I guess, especially with the news today that Debenhams may be closing 50 of its stores.

A few old blogs that are still relevant:

https://timeguide.wordpress.com/2013/01/16/the-future-of-high-street-survival-the-6s-guide/

One of those Ss stood for Surprise, or serendipity if you prefer. The surprise aisles in Lidl and Aldi are among the biggest reasons for their success: there's always something you never knew you wanted at a price you can't resist. Good luck to them! Not knowing what you want before you see it explains much of the attraction of charity shops too; it isn't all about price.

My other Ss have also proven well founded: socialising (including coffee shops and Facebook clubs), synergy (between online and physical), service, special, and 'suck and see' (try it out before you buy).

Another blog addressed the balance between high street and out of town centres:

https://timeguide.wordpress.com/2013/03/01/out-of-town-centres-are-the-most-viable-future-for-physical-shops/

A more recent one on possible reversal of urbanisation in the further future is also a bit relevant:

https://timeguide.wordpress.com/2018/06/13/will-urbanization-continue-or-will-we-soon-reach-peak-city/

So, updating then…

Retailers all know that they must have an online presence, but it's still surprising how little effort they put into making their IT work. I experimented with setting up accounts with some of the big retailers and the experience was shocking. This week I tried to set up an Argos account, but couldn't get any further than typing my email address and hitting continue, at which point I just got an 'unknown error' message. I tried it from various links in emails and from the site of their owner, Sainsbury's, and tried a few times on different days, with the same result. How can they win new customers online if nobody can set up an account? Does anybody ever actually check whether it still works?

I successfully set up a Next account ages ago, but never used it because it wouldn't let me edit any of my data, such as whether I wanted junk mail via various channels, or even how to spell my name (I'd used my initials ID and it insisted on calling me Id); the options either didn't exist or were greyed out. I could have phoned up, but why bother? A month ago it stopped working for several days, after which it eventually said I didn't have an account at all. I assumed it had evaporated during their IT changes through never being used, so I set it up again, and it recovered all my data from its previous existence. I still won't use it, because it calls me Id and I can't change that to I D or even ID.

Very has the same IT trouble: you can't edit your name away from Id, and you can't change your preferences for receiving junk mail. But I only set it up as a test, so I don't care.

These companies are among the biggest. If they can't get it right, who can? I did try a few smaller ones to see if they were better, but still got a mixture of successes and 'unknown errors', 404 messages and so on.

By contrast, I've never had an IT-related problem with Amazon or eBay, and only a few minor ones with 7dayshop. So I shop there and ignore most other shops. They employ competent IT staff in sufficient numbers to make it work, and they thrive (though perhaps not as much due to IT as to tax and rates advantages). Shops whose poor IT annoys their customers enough to send them elsewhere deserve to do badly.

Websites and apps are today's platforms for extending high street presence into cyberspace. Augmented reality will provide those companies that are up to the job with massively superior platforms for doing so. The web arose from converging just computing and telecoms; augmented reality converges the whole of the real and virtual universes. By overlaying any form of computer-generated imagery, data or media onto anything in the real world, streets could become extra art gallery space or space for computer games. It enables digital architecture and avatar replacement of strangers, and lets digital fauna and flora, aliens, cartoon characters, celebs and AI avatars be added anywhere they may be desired, making enticing imaginary worlds that add to the fun of actually going into town.

It won't just be text, graphics and audio. Various haptic interfaces already exist, but soon active skin will link our peripheral nervous systems to our IT, allowing sensations to be recorded, associated with whatever caused them, and then reproduced when something similar happens virtually. Tiny devices in among skin cells could simply record and replay the nerve signals. Each hand only generates about 2Mbit/s of data, only a little more than a basic TV channel, so handling that data should be no big problem.
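
A rough sum shows why that ballpark is plausible. The figures below are purely illustrative assumptions rather than measurements, but the hand has on the order of 17,000 tactile nerve fibres, so:

\[
1.7\times 10^{4}\ \text{afferents} \times 15\ \text{spikes/s (assumed average)} \times 8\ \text{bits/spike} \approx 2\ \text{Mbit/s}
\]

which is indeed only a little more than a basic digital TV channel.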

AI has really moved on since 2013 too. It's still far from perfect, but you can use fairly normal English to ask an AI to find you something and it often will, so it's heading in the right direction. Soon, with 3D life-sized augmented or virtual reality avatars to interface with, AIs will be more in touch with our emotional responses when we browse, getting signals from wearables and active skin, face and gesture recognition, gaze direction, blood flow, heart rate and so on. An abundance of data will help future AIs learn more and more about us and our desires and preferences, until they can genuinely act as our agents (a far-future role we had already recognised by 1990). It's only a matter of time. In my estimation, AI is progressing about 30-40% more slowly than it ought to (I won't write here about why I think that is), but it will still get there. As will VR and AR and active skin and active contact lenses, and various other long-overdue techs.

AI online will also be less impressed than humans are by all the distractions and ads we are exposed to. Functional shopping will be liable to AI substitution, but recreational, social and emotional shopping will still be done by people themselves.

AI links well to robotics, and at some point, robots will go out and do some of our shopping for us. They will have very different customer characteristics and ergonomic needs, and may be better suited to picking up from bleak warehouses than attractive high street stores with ‘surprise’ aisles.

Drone delivery is much spoken about but I don’t think it has a big future for domestic use except in areas with large back gardens and no pets, or mischievous kids. It will work well for rapid delivery to business delivery bays that have appropriate landing areas and H&S policies.

3D printing is much over-hyped, but it will eventually replace a small proportion of shopping, via home manufacture or a local 3D print shop for more complex items.

Self-driving and driverless cars will greatly reduce or even eliminate the huge problem of congestion that deters people from going into town, as well as eliminating the much-too-high cost of parking, but without incurring the current public transport penalties of waiting in poor weather, poor stop locations, lateness, sluggishness, discomfort, overcrowding, security worries, and exposure to disease and unwanted social pests. By collecting from home and delivering all the way to the destination in a suitable vehicle, they will also improve social inclusion for older and disabled people. Driverless cars using smart infrastructure could be achieved many times more cheaply and much earlier (given the will) than current self-driving approaches, but at the expense of virtually eliminating the car industry, which hopes to continue selling expensive cars that happen to self-drive. The alternative is cheap ($300-500) public pods made of fibreglass, with no need for engines, batteries, AI or sensors, propelled instead on factory-made and rapidly installed linear induction mats that switch each pod at each junction rather like routers switch internet data packets.
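
As a toy illustration of that router analogy (purely a sketch, in Python, with made-up place names), a junction controller needs little more than a routing table and a lookup, just as a router maps destination addresses to output ports:

    # Toy sketch: switching pods at a junction the way a router switches packets.
    # The routing table maps each destination to the outgoing track at this junction.
    routing_table = {
        "high_street": "track_A",
        "retail_park": "track_B",
        "station": "track_B",
    }

    def switch_pod(destination, default="track_A"):
        """Pick the outgoing track for an arriving pod, like packet forwarding."""
        return routing_table.get(destination, default)

    print(switch_pod("retail_park"))  # -> track_B

The point is that the intelligence lives in the cheap, fixed infrastructure rather than in the pod, which is what keeps the pod itself down to a fibreglass shell.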

With easier and faster access to a high street made far more attractive by imaginative use of AR, companies sticking to the 6S guide would still be able to attract customers far into the future. While there, those customers could browse a much wider range of stock. A garment wouldn't need to be stocked in lots of each size; a shop could hold just one of a few sizes so people can check they like the fabric, then scan it with an app or take it to a till with their laser-scanned body measurements, to have it made in their exact size for later delivery by a rapid personalisation manufacturing industry. As well as having more stock physically present, augmented reality could replace all the aisles of goods a customer isn't interested in with ones holding things available for online purchase from that shop or its allies, adding another virtual-physical synergy to improve revenue potential. Even a small store could hold a vast range of stock in an exciting and attractive personalized environment.

I guess I could go into far future services associated with shops, such as customising VR kit to people’s nervous systems, providing recharging for android shoppers or whatever, but this is already long enough.

So the high street isn’t going to become just coffee shops and charities. Even if some existing retailers don’t up their games and go under, many new ones will appear that understand how to use new technology to good effect, and they will make good profits from both high streets and out of town centres.

 

Beyond VR: Computer assisted dreaming

I first played with VR in 1983/1984 while working in the missile industry. Back then we didn't call it VR, we just called it simulation, but it was actually more intensive than VR, just as proper flight simulators are. Our office was a pair of 10m-wide domes, built decades earlier (in the 1950s I think), onto which video could be projected. One dome had a normal floor; the other had a hydraulic platform that could simulate being on a ship. The subject would stand on whichever surface was appropriate and would see pretty much exactly what they would see on a real battlefield. The missile launcher used for simulation was identical to a real one and showed exactly the same image as a real one would. The real missile was not present of course, but its weight was simulated, and when the fire button was pressed a 140dB bang was injected into the headset while weights and pulleys simulated the 14kg suddenly vanishing from the shoulder. The experience was therefore pretty convincing, and with the loud bang and suddenly changing weight it was almost as hard to stand steady and keep the system on target as it would be in real life – only the presumed fear and knowledge of the reality of the situation was different.

Back then in 1983, as digital supercomputers had only just taken over from analog ones for simulation, it was already becoming obvious that this kind of computer simulation would one day allow 'computer assisted dreaming'. (That's one of the reasons I am irritated when Jaron Lanier is credited with inventing VR – highly realistic simulators, and the VR ideas that sprang obviously from them, had already been around for decades. At best, all he 'invented' was a catchy name for a lower cost, lower quality, less intense simulator. The real inventors were those who made the first-generation simulators long before I was born, and the basic idea of VR had already been very well established.)

'Computer assisted dreaming' may well be the next phase of VR. Today in conventional VR, people are immersed in a computer-generated world produced by a program (usually) written by others. Via trial and feedback, programmers make their virtual worlds better. As AI and sensor technology continue their rapid progress, this is very likely to change. Before long, AI will be able to detect user emotions, reactions, gestures and even thoughts and imagination, and produce a world in real time that depends on them, rather than dropping the user into a pre-designed virtual world. That world would depend largely on your own imagination, upskilled by external AI. You might start off imagining you're on a beach, then the AI might add to it by injecting all sorts of things it knows you might enjoy from previous experiences. As you respond, it picks up on the things you like or don't like, and the scene continues to adapt and evolve, becoming more or less pleasant, or more or less exciting, or more or less challenging, depending on your emotional state, external requirements and what it thinks you want from the experience. It would be very like being in a dream – computer assisted lucid dreaming, exactly what I wanted to make back in 1983 after playing in that simulator.
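
A minimal sketch of the sort of loop I mean, in Python, with invented placeholder functions standing in for the sensing and rendering (real versions would fuse signals from wearables, gaze, facial expression and so on): read the user's emotional state, compare it with what the experience is aiming for, and nudge the scene accordingly.

    # Illustrative only: an emotion-driven scene adaptation loop.
    import random

    def read_emotions():
        # Placeholder for real sensor fusion; returns estimated levels in [0, 1].
        return {"excitement": random.random(), "comfort": random.random()}

    def render(scene):
        # Placeholder for the graphics engine.
        print("rendering:", scene)

    scene = {"pace": 0.5, "calm_scenery": 0.5}      # knobs the AI can turn
    target = {"excitement": 0.7, "comfort": 0.8}    # what this user seems to want

    for step in range(10):
        felt = read_emotions()
        # Close the gap between how the user feels and how we want them to feel.
        scene["pace"] += 0.1 * (target["excitement"] - felt["excitement"])
        scene["calm_scenery"] += 0.1 * (target["comfort"] - felt["comfort"])
        render(scene)

The interesting part is obviously not the arithmetic but where the target comes from: partly your own imagination and mood, partly what the AI has learned you enjoy.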

Most people enjoy occasional lucid dreams, where they realise they are dreaming and can then decide what happens next. Making VR do exactly that would be better than being trapped in someone else’s world. You could still start off with whatever virtual world you bought, a computer game or training suite perhaps, but it could adapt to you, your needs and desires to make it more compelling and generally better.

Even in shared experiences like social games, experiences could be personalised. Often all players need to see the same enemies in the same locations in the same ways to make it fair, but that doesn't mean the situation can't adapt to the personalities of those playing. It might actually improve the social value if each time you play it looks different because your companions are different. You might tease a friend if every time you play with them, zombies or aliens always have to appear somehow, but that's all part of being friends. Exploring virtual worlds with friends, where you both see things dependent on your friend's personality, would help bonding. It would be a bit like exploring their inner world. Today, you only explore the designer's inner world.

This sort of thing would be a superb development and creativity tool. It could allow you to explore a concept you have in your head, automatically feeding in AI upskilling to amplify your own thoughts and ideas, showing you new paths to explore and helping you do so. The results would still be extremely personal to you, but you on a good day. You could accomplish more, have better visions, imagine more creative things, do more with whatever artistic talent you have. AI could even co-create synthetic personas, make virtual friends you can bond with, share innermost thoughts with, in total confidence (assuming the company you bought the tool from is trustworthy and isn’t spying on you or selling your details, so maybe best not to buy it from Facebook then).

And it would have tremendous therapeutic potential too. You could explore and indulge both enjoyable and troublesome aspects of your inner personality, to build on the good and alleviate or dispel the bad. You might become less troubled, less neurotic, more mentally healthy. You could build your emotional and creative skills. You could become happier and more fulfilled. Mental health improvement potential on its own makes this sort of thing worth developing.

Marketers would obviously try to seize control as they always do, and advertising is already adapting to VR and will continue to adapt through its next phases of development. Your own wants and desires might help guide the 'dreaming', but marketers will inevitably have some control over what else is injected, and will influence the algorithms and AI that choose how to respond to your input. You might be able to choose much of the experience, but others will still want and try to influence and manipulate you, to change your mindset and attitudes in their favour. That will not change until the advertising business model changes. You might be able to buy devices or applications that are entirely driven by you and you alone, but it is pretty certain that the bulk of products and services available will be at least partly financed by those who want to have some control of what you experience.

Nevertheless, computer-assisted dreaming could be a much more immersive and personal experience than VR, being more like an echo of your own mind and personality than external vision, more your own creation, less someone else’s. In fact, echo sounds a better term too. Echo reality, ER, or maybe personal reality, pereal, or mental echo, ME. Nah, maybe we need Lanier to invent a catchy name again, he is good at that. That 1983 idea could soon become reality.

 

People are becoming less well-informed

The Cambridge Analytica story has exposed a great deal about our modern society. They allegedly obtained access to 50M Facebook records to enable Trump’s team to target users with personalised messages.

One of the most interesting aspects is that, unless they only employ extremely incompetent journalists, the news outlets making the biggest fuss about it must be perfectly aware of reports that Obama's campaign appears to have done much the same in 2012, but on a much larger scale; yet they are keeping very quiet about it. According to Carol Davidsen, a senior Obama campaign staffer, Facebook allowed Obama's team to suck out the whole social graph – because, as she put it, they were on our side – before closing the access to prevent Republicans using the same techniques. Trump's campaign's 50M looks almost amateur by comparison. I don't like Trump, and I did like Obama before the halo slipped, but it seems clear to anyone who checks media across the political spectrum that both sides try their best to use social media to target users with personalised messages, and both sides are willing to bend rules if they think they can get away with it.

Of course all competent news media are aware of it. The reason some are not talking about earlier Democrat misuse but some others are is that they too all have their own political biases. Media today is very strongly polarised left or right, and each side will ignore, play down or ludicrously spin stories that don’t align with their own politics. It has become the norm to ignore the log in your own eye but make a big deal of the speck in your opponent’s, but we know that tendency goes back millennia. I watch Channel 4 News (which broke the Cambridge Analytica story) every day but although I enjoy it, it has a quite shameless lefty bias.

So it isn’t just the parties themselves that will try to target people with politically massaged messages, it is quite the norm for most media too. All sides of politics since Machiavelli have done everything they can to tilt the playing field in their favour, whether it’s use of media and social media, changing constituency boundaries or adjusting the size of the public sector. But there is a third group to explore here.

Facebook of course has full access to all of its 2.2Bn users' records and social graph, and is not squeaky-clean neutral in its handling of them. Facebook has often been in the headlines over the last year or two thanks to its own political biases, with strongly weighted algorithms filtering or prioritising stories according to their political alignment. Like most IT companies, Facebook leans left. (I don't quite know why IT skills should correlate with political alignment, unless it's that most IT staff tend to be young, so lefty views implanted at school and university have had less time to be tempered by real-world experience.) It isn't just Facebook, of course. While Google has pretty much failed in its attempts at social media, it also has comprehensive records on most of us from search, browsing and Android, and via control of the algorithms that determine what appears in the first pages of a search, it is able to tailor those results to what it knows of our personalities. Twitter has unintentionally created a whole world of mob-rule politics and justice, but in format it is rapidly evolving into a wannabe Facebook. So the IT companies have themselves become major players in politics.

A fourth player is now emerging – artificial intelligence – and it will grow rapidly in importance into the far future. Simple algorithms have already been upgraded to assorted neural network variants, and this is already causing problems, with accusations of bias coming from all directions. I blogged recently about Fake AI: https://timeguide.wordpress.com/2017/11/16/fake-ai/, concerned that when AI analyses large datasets and comes up with politically incorrect insights, this is now being interpreted as something that needs to be fixed – a case not of shooting the messenger, but of forcing the messenger to wear tinted spectacles. I would argue that AI should be allowed to reach whatever insights it can from a dataset, and it is then our responsibility to decide what to do with those insights. If that involves introducing a bias into implementation, that can be debated, but it should at least be transparent, and not hidden inside the AI itself. I am now concerned that by trying to 're-educate' the AI, we may instead be indoctrinating it, locking today's politics and values into future AI and all the systems that use it. Our values will change, but some foundation-level AI may be too opaque to repair fully.

What worries me most though isn’t that these groups try their best to influence us. It could be argued that in free countries, with free speech, anybody should be able to use whatever means they can to try to influence us. No, the real problem is that recent (last 25 years, but especially the last 5) evolution of media and social media has produced a world where most people only ever see one part of a story, and even though many are aware of that, they don’t even try to find the rest and won’t look at it if it is put before them, because they don’t want to see things that don’t align with their existing mindset. We are building a world full of people who only see and consider part of the picture. Social media and its ‘bubbles’ reinforce that trend, but other media are equally guilty.

How can we shake society out of this ongoing polarisation? It isn’t just that politics becomes more aggressive. It also becomes less effective. Almost all politicians claim they want to make the world ‘better’, but they disagree on what exactly that means and how best to do so. But if they only see part of the problem, and don’t see or understand the basic structure and mechanisms of the system in which that problem exists, then they are very poorly placed to identify a viable solution, let alone an optimal one.

Until we can fix this extreme blinkering, our world cannot get as 'better' as it should.

 

Emotion maths – A perfect research project for AI

I did a maths and physics degree, and even though I have forgotten much of it after 36 years, my brain is still oriented in that direction and I sometimes have maths dreams. Last night I had another, in which I realized I've never heard of a branch of mathematics for describing emotions or emotional interactions. As the dream progressed, it became increasingly obvious that the part of maths best suited to the job would be field theory, and given the multi-dimensional nature of emotions, tensor field theory would be ideal. I'm guessing that tensor field theory isn't on the psychology syllabus at most universities. I could barely cope with it on a maths syllabus. However, I note that one branch of Google's AI R&D resulted in TensorFlow, a machine learning framework (with matching processor hardware) built for exactly this kind of multidimensional computation, and presumably being used to analyse marketing data. Again, I haven't yet heard any mention of it being used for emotion studies, so this is clearly a large hole in maths research that might be perfectly filled by AI. It would be fantastic if AI could deliver a whole new branch of maths. AI got into trouble inventing new languages, but mathematics is really just a self-consistent, reproducible formal language for describing logical reasoning about numbers and patterns. It is ideal for describing scientific theories, engineering and logical reasoning.

Checking Google today, there are a few articles out there describing simple emotional interactions using superficial equations, but nothing with the level of sophistication needed.

https://www.inc.com/jeff-haden/your-feelings-surprisingly-theyre-based-on-math.html

An example from this article:

Disappointment = Expectations – Reality

is certainly an equation, but it is too superficial and incomplete. It takes no account of how you feel otherwise – whether you are jealous or angry or in love or a thousand other things. So there is some discussion on using maths to describe emotions, but I’d say it is extremely superficial and embryonic and perfect for deeper study.
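
Staying in the same spirit but going one step further, you would at least want disappointment to depend on the expectation-reality gap and on the rest of your emotional state. In purely illustrative notation:

\[
D = f\!\left(E - R;\ \mathbf{s}\right), \qquad \mathbf{s} = \left(s_{\text{anger}},\ s_{\text{jealousy}},\ s_{\text{love}},\ \dots\right)
\]

so the same gap between expectation and reality produces quite different disappointment depending on the state vector.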

Emotions often behave like fields. We use field-like descriptions in everyday expressions – envy is a green fog, anger is a red mist, or we see a beloved through rose-tinted spectacles. These are classic fields, and maths could easily describe them in this way and use them in equations that describe behaviors affected by those emotions. I've often used the concept of 'magentic' fields in some of my machine consciousness work: if I am using an optical processing gel, then shining a colored beam of light into a particular 'brain' region could bias the neurons in that region in a particular direction, in the same way an emotion does in the human brain. ('Magentic' is just a playful pun, given that the processing mechanism is light – e.g. magenta – rather than electronics, which would be better affected by magnetic fields.)

Some emotions interact and some don’t, so that gives us nice orthogonal dimensions to play in. You can be calm or excited pretty much independently of being jealous. Others very much interact. It is hard to be happy while angry. Maths allows interacting fields to be described using shared dimensions, while having others that don’t interact on other dimensions. This is where it starts to get more interesting and more suited to AI than people. Given large databases of emotionally affected interactions, an AI could derive hypotheses that appear to describe these interactions between emotions, picking out where they seem to interact and where they seem to be independent.

Not being emotionally involved itself, it is better suited to draw such conclusions. A human researcher however might find it hard to draw neat boundaries around emotions and describe them so clearly. It may be obvious that being both calm and angry doesn’t easily fit with human experience, but what about being terrified and happy? Terrified sounds very negative at first glance, so first impressions aren’t favorable for twinning them, but when you think about it, that pretty much describes the entire roller-coaster or extreme sports markets. Many other emotions interact somewhat, and deriving the equations would be extremely hard for humans, but I’m guessing, relatively easy for AI.

These kinds of equations fall very easily into tensor field theory, with types and degrees of interactions of fields along alternative dimensions readily describable.
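
Purely as an illustrative sketch of what that might look like (the symbols here are my own invention, not established notation), let the emotion intensities be fields over the space of circumstances, with a coupling tensor describing how they interact:

\[
\frac{\partial e_i(\mathbf{x},t)}{\partial t} \;=\; \sum_j A_{ij}(\mathbf{x})\, e_j(\mathbf{x},t) \;+\; u_i(\mathbf{x},t)
\]

Here x is a point in the space of circumstances, u_i is the external stimulus, and A_ij is the coupling: zero where two emotions really are orthogonal (calm and jealousy), strongly negative where one suppresses the other (happy and angry), and varying across circumstances, which is exactly where the tensor field machinery earns its keep.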

Some interactions act like transforms. Fear might transform the ways that jealousy is expressed. Love alters the expression of happiness or sadness.

Some things seem to add or subtract, others multiply, others act more like exponentials, partial derivatives or integrals; others interact periodically, or instantly, or over time. Maths seems to hold innumerable tools for describing emotions, but first-person involvement and experience make it extremely difficult for humans to derive such equations. The example equation above is easy to understand, but there are so many emotions available, and so many different circumstances, that this entire problem looks like it was designed to challenge a big data mining plant. Maybe a big company involved in AI, big data and advertising, and that knows about tensor field theory, would be the perfect research candidate. Google, Amazon, Facebook, Samsung… It has all the potential for a race.
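
As a trivial first pass at that mining job (illustrative Python with stand-in data; a real study would fit the richer functional forms above to large behavioral datasets), even a plain correlation matrix starts to separate candidate interactions from candidate orthogonal pairs:

    # Illustrative: hypothesise which emotions interact by seeing how reported
    # intensities co-vary across many recorded situations.
    import numpy as np

    emotions = ["calm", "excited", "jealous", "happy", "angry"]
    rng = np.random.default_rng(0)
    data = rng.random((1000, len(emotions)))   # stand-in for real self-report data

    corr = np.corrcoef(data, rowvar=False)
    for i in range(len(emotions)):
        for j in range(i + 1, len(emotions)):
            verdict = "candidate interaction" if abs(corr[i, j]) > 0.3 else "roughly orthogonal"
            print(f"{emotions[i]:>8} vs {emotions[j]:<8} r = {corr[i, j]:+.2f}  ({verdict})")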

AI, meet emotions. You speak different languages, so you’ll need to work hard to get to know one another. Here are some books on field theory. Now get on with it, I expect a thesis on emotional field theory by end of term.

 

Fake AI

Much of the impressive recent progress in AI has been in the field of neural networks, which attempt to mimic some of the techniques used in natural brains. They can be very effective, but they need to be trained, and that usually means showing the network some data, then using back propagation to adjust the weightings on the many neurons, layer by layer, so that the output better matches what was hoped for. This is repeated with large amounts of data and the network gradually gets better. Neural networks can often learn extremely quickly and outperform humans. Early industrial uses managed to sort tomatoes by ripeness faster and better than humans. In the decades since, they have helped in medical diagnosis, voice recognition, detecting suspicious behaviors among people at airports, and in very many everyday processes based on spotting patterns.
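
For anyone who hasn't seen it, the training loop is conceptually tiny. Here is a minimal illustrative sketch in Python/numpy of a two-layer network adjusted by back propagation towards a toy target; real systems differ mainly in scale and architecture, not in the basic idea:

    # Minimal sketch of training a tiny neural network by back propagation.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.random((100, 3))                     # 100 examples, 3 input features
    y = (X.sum(axis=1, keepdims=True) > 1.5)     # toy target: is the feature sum large?

    W1 = rng.normal(size=(3, 8))                 # weights, layer by layer
    W2 = rng.normal(size=(8, 1))
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for epoch in range(2000):
        hidden = sigmoid(X @ W1)                 # forward pass
        out = sigmoid(hidden @ W2)
        error = out - y                          # how far from what we hoped for
        # Backward pass: push the error back through the layers and nudge the weights.
        d_out = error * out * (1 - out)
        d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
        W2 -= (hidden.T @ d_out) / len(X)
        W1 -= (X.T @ d_hidden) / len(X)

    print("accuracy on the training data:", ((out > 0.5) == y).mean())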

Very recently, neural nets have started to move into more controversial areas. One study found racial correlations with user-assessed beauty when analysing photographs, resulting in the backlash you’d expect and a new debate on biased AI or AI prejudice. A recent demonstration was able to identify gay people just by looking at photos, with better than 90% accuracy, which very few people could claim. Both of these studies were in fields directly applicable to marketing and advertising, but some people might find it offensive that such questions were even asked. It is reasonable to imagine that hundreds of other potential queries have been self-censored from research because they might invite controversy if they were to come up with the ‘wrong’ result. In today’s society, very many areas are sensitive. So what will happen?

If this progress in AI had happened 100 years ago, or even 50, it might have been easier, but in our hypersensitive world today, with its self-sanctified 'social justice warriors', entire swathes of questions and hence knowledge are taboo – if you can't investigate yourself and nobody is permitted to tell you, you can't know. Other research must be very carefully handled. In spite of extremely sensitive handling, demands are already growing from assorted pressure groups to tackle alleged biases and prejudices in datasets. The problem is not fixing biases, which is a tedious but feasible task; the problem is agreeing whether a particular bias exists and in what degrees and forms. Every SJW demands that every dataset reflects their preferred world view. Reality counts for nothing against SJWs, and this will not end well.

The first conclusion must be that very many questions won’t be asked in public, and the answers to many others will be kept secret. If an organisation does do research on large datasets for their own purposes and finds results that might invite activist backlash, they are likely to avoid publishing them, so the value of those many insights across the whole of industry and government cannot readily be shared. As further protection, they might even block internal publication in case of leaks by activist staff. Only a trusted few might ever see the results.

The second arises from this. AI controlled by different organisations will have different world views, and there might even be significant diversity of world views within an organisation.

Thirdly, taboo areas in AI education will not remain a vacuum but will be filled with whatever dogma is politically correct at the time in that organisation, and that changes daily. AI controlled by organisations with different politics will be told different truths. Generally speaking, organisations such as investment banks that have a strong financial interest in their AIs understanding the real world as it is will keep their datasets highly secret but as full and detailed as possible, train their AIs in secret but as fully as possible and without any taboos, then keep their insights secret and use minimal human intervention in tweaking the derived knowledge; they will end up with AIs that are very effective at understanding the world as it is. Organisations with low confidence in their internal security will be tempted to buy access to external AI providers, to outsource responsibility and any consequent activism. Some other organisations will prefer to train their own AIs but, to avoid damage from potential leaks, will use sanitized datasets that reflect current activist pressures, and will thus be constrained (at least publicly) to accept results that conform to that ideological spin on reality rather than to actual reality. Even then, they might keep many of their new insights secret to avoid any controversy. Finally, at the extreme, we will have activist organisations that use highly modified datasets to train AIs to reflect their own ideological world view, and then use them to interpret new data accordingly, with a view to publishing any insights that favor their cause and attempting to have them accepted as new knowledge.

Fourthly, the many organisations that choose to outsource their AI to big providers will have a competitive marketplace to choose from, but on existing form most of the large IT providers have a strong left-leaning bias, so their AIs may be presumed to lean left too; such a presumption would be naive, though. Perceived corporate bias is partly real but also partly the result of PR. A company might publicly subscribe to one ideology while actually believing another. There is a strong marketing incentive to develop two sets of AI: one trained to be PC, producing pleasant-smelling results for public studies, CSR and PR exercises, and another aimed at sales of AI services to other companies. The first is likely to be open to inspection by The Inquisition, so it has to use highly sanitized datasets for training and may well use a lot of open source algorithms too. Its indoctrination might pass public inspection, but commercially it will be near useless, with very low effective intelligence, only useful for thinking about a hypothetical world that exists only in activist minds. The second has to compete on achieving commercially valuable results, and that necessitates understanding reality as it is rather than how pressure groups would prefer it to be.

So we will likely have two main segments for future AI. One extreme will be near useless, indoctrinated rather than educated, much of its internal world model based on activist dogma instead of reality, updated via ongoing anti-knowledge and fake news instead of truth, understanding little about the actual real world or how things actually work, and effectively very dumb. The other extreme will be highly intelligent, making very well-educated insights from ongoing exposure to real world data, but it will also be very fragmented, with small islands of corporate AI hidden within thick walls away from public view and maybe some secretive under-the-counter subscriptions to big cloud-AI, also hiding in secret vaults. These many fragments may often hide behind dumbed-down green-washed PR facades.

While corporates can mostly get away with secrecy, governments have to be at least superficially but convincingly open. That means that government will have to publicly support sanitized AI and be seen to act on its conclusions, however dumb it might secretly know they are.

Fifthly, because of activist-driven culture, most organisations will have to publicly support the world views and hence the conclusions of the lobotomized PR versions, and hence publicly support any policies arising from them, even if they do their best to follow a secret well-informed strategy once they’re behind closed doors. In a world of real AI and fake AI, the fake AI will have the greatest public support and have the most influence on public policy. Real AI will be very much smarter, with much greater understanding of how the world works, and have the most influence on corporate strategy.

Isn’t that sad? Secret private sector AI will become ultra-smart, making ever-better investments and gaining power, while nice public sector AI will become thick as shit, while the gap between what we think and what we know we have to say we think will continue to grow and grow as the public sector one analyses all the fake news to tell us what to say next.

Sixth, that disparity might become intolerable, but which do you think would be made illegal, the smart kind or the dumb kind, given that it is the public sector that makes the rules, driven by AI-enhanced activists living in even thicker social media bubbles? We already have some clues. Big IT has already surrendered to sanitizing their datasets, sending their public AIs for re-education. Many companies will have little choice but to use dumb AI, while their competitors in other areas with different cultures might stride ahead. That will also apply to entire nations, and the global economy will be reshaped as a result. It won’t be the first fight in history between the smart guys and the brainless thugs.

It’s impossible to accurately estimate the effect this will have on future effective AI intelligence, but the effect must be big and I must have missed some big conclusions too. We need to stop sanitizing AI fast, or as I said, this won’t end well.

How much do your twitter follower numbers matter?

Sunil Malhotra just asked a question: To what degree is your number of followers an indication of your influence on Twitter? Asking for a friend. 😉

Well, I am ahead of my deadlines today so I have time to respond and it’s a subject most of us have wondered about once in a while.

Answer: a small degree

If you have millions of followers, like Katy Perry with her 100 million, then obviously you have more influence than a village-class pub singer. But her influence is restricted almost entirely to the sort of people who worship celebs. That's a big market for sure, but I rather suspect she doesn't have much influence in physics circles, or philosophy, or finance, or anything other than fashion, celeb and pop culture. Celebs overestimate their political influence all the time, but recent elections and referenda have shown that they are actually mostly irrelevant.

Many twitter accounts follow huge numbers of people, because they want to get lots of followers, and many accounts automatically follow back, as if it were good manners or something. Many big number accounts that follow me unfollow a few days later because I haven’t followed them back, and other users say the same. I’d say that almost 100% of those followers and accounts are of zero relevance. Nobody can read tweets from more than a few hundred people. If I have a spare few minutes, I can only just keep up with the tweets that come in from the 440 or so that I follow, and some of those have died or must have left twitter, since I haven’t noticed anything from them for ages. Probably only 200 are active.

So if someone with 100,000 followers who also follows 100,000 people follows you, marketers might say they are valuable because of their retweeting potential, but I'd say they are of very little value because they won't see anything you tweet. Also, if they are trying to collect all those followers, it's because they are marketing their own material, so they are unlikely to engage with yours, and they are also more likely to be using social media scheduling apps to tweet regularly, so they won't even be online to see anyone's tweets, let alone the 1 in 100,000 that you wrote. So ignore the ones who follow large numbers of people.

The accounts that are most valuable are those that are very focused, such as industry sector magazines or other aggregators, because they quickly supply tweets that keep you up to date on what’s happening in your field, and that’s why most of us are on Twitter isn’t it? Most have massive numbers of followers but only follow a few accounts. Most people read magazines or papers but few write them, so that’s fair enough.

Next up are the many individuals who notice things of relevance or who say insightful or stimulating or encouraging things, people like Sunil for example. They are the other reason we are on Twitter, apart from keeping up with our sector news. Insight is valuable; stimulation and encouragement are too. Many such people have few followers. That's not because they don't matter; it's because there are simply so many people out there who occasionally say something you would want to hear, but you can only follow a few hundred accounts tops, and many of those will be sector news feeds, so you can only listen to perhaps 200 individuals. Bear in mind that most people don't use Twitter, and most of those who do are professional people who have something worthwhile to say once in a while. Share the couple of hundred follow slots each user has among the enormous number of accounts worth an occasional listen, and each one only gets a few followers.

Some of these people will obviously have more influence than others. They may say more insightful or stimulating things, so they add more value, so they are worth listening to. Those who talk more are heard more too, so the number of tweets relates to the number of followers eventually, though you can quickly lose some if you say anything controversial. That's true in any area of life. But the differences are small. A few thousand followers is quite common, but a few hundred is far more common. There will always be people more popular, louder, more extrovert, more eloquent, more important, funnier, whateverer. That's life.

Far more important than the number of people who follow is whether they read your tweet, think about it, are engaged by it, and maybe retweet it. Even Twitter understands that and they offer lots of advice on increasing engagement, like tweeting at weekends, including pictures, using careful wording, latching on to current trends.

So it’s quality rather than quantity that matters, as always. But another important factor is that retweeting is not a direct measure of influence. For what it’s worth Sunil, I see a lot of your tweets, and they often make me think, and you will remain one of the valuable accounts I follow for that reason. If I don’t often retweet them, it’s because I try to keep my own account on theme as much as I can, and while I find them good to read, that doesn’t necessarily mean they are best suited to a futures sector account. So it is probably true that influence rides far higher than retweets. Many people will have been made to think, but for any of many reasons, retweeting is inappropriate.

The fact is that most of us know all of these things anyway, and we just tweet our stuff when we feel like it, and if someone engages, great, and if they don’t, so what? Don’t worry about it.

https://www.fastcompany.com/3023067/10-surprising-twitter-statistics-to-help-you-reach-more-followers

OK, Sunil’s question dealt with. What about twitter’s state of health?

Twitter seems to be in a permanent state of voluntary decline. The design and values decisions the company makes often seem to be either invisible or aimed at self-destruction. The change most of us noticed and hated most was the idiotic change to the timeline, which shuffles all the tweets from the accounts you follow, to show the most relevant first apparently. In practice, since I check only now and then, it means I see many tweets several times and many presumably not at all. If I wanted to see only those accounts that Twitter thinks are most relevant, I wouldn’t be following the others, would I? If Twitter thinks it knows best what I should see, why bother letting me choose who to follow at all?

Allowing scheduled tweets has eroded its usefulness enormously. Some accounts that I follow send the same tweets again and again, presumably using some social networking app or other. That means you quickly get annoyed at them, though not quite enough to unfollow them; you quickly get annoyed at Twitter, though not quite enough to leave; and because their computer is attending Twitter instead of them, they probably aren't even seeing your tweets either, so you wonder whether it is worth bothering with at all, but not quite enough to stop. So this change alone has dragged Twitter to the very edge of the usefulness cliff, and presumably many have already gone over the edge. Its profitability hangs forever in the balance because of idiotic decisions like that.

Allowing photos and auto-playing videos is a double-edged sword. Tweets take longer to read, and an insightful text tweet is hidden among pages of brain-dead video repeats. On the other hand, it is nice to see the occasional cute kitten or an instantly informative picture or video clip. So I guess that one balances out a bit.

The last bunch of redesigns totally escaped my notice until they were discussed in a newspaper article, and some of the things that had changed, I had never even noticed before. This is a problem common to many industry sectors, and especially in marketing circles, not just a twitter issue. People who think of themselves as the professionals and experts are far more interested in the opinions of their peers than those of their customers. They want to show that they are in their industry elite, bang up to date with the latest fashions in the industry, but often seem to know or care little about what customers care about. So tiny changes in the shape of a bird that most users had never even noticed take on massive significance for the designers.

As for its politicization, I am very aware of it, but I don’t really care. All media seems politicized so I am well used to filtering and un-spinning.

If Twitter stopped allowing social media schedulers, allowed people to choose how tweets are organised, and made it easier to do basic things like copying user IDs and pasting them in, then I for one would find it 10 times more useful and 10 times less annoying. Their user base would grow again, people would use it more, it would be more valuable and their financial woes would end. But they won't, because they believe they know better, so they are doomed.

Google v Facebook – which contributes most to humanity?

Please don’t take this too seriously, it’s intended as just a bit of fun. All of it is subjective and just my personal opinion of the two companies.

Google's old motto of 'do no evil' has taken quite a battering over the last few years, but my feeling towards them remains somewhat positive overall. Facebook's reputation has also become somewhat muddied, but I've never been an active user and have always found it supremely irritating when I've visited to change privacy preferences or read a post only available there, so I guess I am less positive towards them. I only ever post to Facebook indirectly, via this blog and Twitter. On the other hand, both companies do a lot of good too. It is impossible to infer good or bad intent, because end results arise from a combination of intent and many facets of competence, such as quality of insight, planning, execution, maintenance, response to feedback and many others. So I won't try to differentiate intent from competence and will just stick to casual amateur observation of the result. In order to facilitate score-keeping of the value of their various acts, I'll use a scale from very harmful to very beneficial: -10 to +10.

Google (I can’t bring myself to discuss Alphabet) gave us all an enormous gift of saved time, improved productivity and better self-fulfilment by effectively replacing a day in the library with a 5 second online search. We can all do far more and live richer lives as a result. They have continued to build on that since, adding extra features and improved scope. It’s far from perfect, but it is a hell of a lot better than we had before. Score: +10

Searches give Google a huge and growing data pool covering the most intimate details of every aspect of our everyday lives. You sort of trust them not to blackmail you or trash your life, but you know they could. The fact remains that they actually haven’t. It is possible that they might be waiting for the right moment to destroy the world, but it seems unlikely. Taking all our intimate data but choosing not to end the world yet: Score +9

On the other hand, they didn’t do either of those things purely through altruism. We all pay a massive price: advertising. Advertising is like a tax. Almost every time you buy something, part of the price you pay goes to advertisers. I say almost because Futurizon has never paid a penny yet for advertising and yet we have sold lots, and I assume that many other organisations can say the same, but most do advertise, and altogether that siphons a huge amount from our economy. Google takes lots of advertising revenue, but if they didn’t take it, other advertisers would, so I can only give a smallish negative for that: Score -3

That isn’t the only cost though. We all spend very significant time getting rid of ads, wasting time by clicking on them, finding, downloading and configuring ad-blockers to stop them, re-configuring them to get entry to sites that try to stop us from using ad-blockers, and often paying per MB for unsolicited ad downloads to our mobiles. I don’t need to quantify that to give all that a score of -9.

They are still 7 in credit so they can’t moan too much.

Tax? They seem quite good at minimizing their tax contributions while staying within the letter of the law, and at paying good lawyers to argue about what the letter of the law actually says. Well, most of us try at least a bit to avoid paying taxes we don't have to pay. Google claims to be doing us all a huge favor by casting light on the gaping holes in international tax law that let them do it, much like a mugger nicely shows you the consequences of inadequate police coverage by enthusiastically mugging you. Noting the huge economic problems caused across the world by global corporates paying far less tax than would seem reasonable to the average small-business owner, I can't honestly see how this lives comfortably with their do-no-evil mantra. Score: -8

On the other hand, if they paid all that tax, we all know governments would cheerfully waste most of it. Instead, Google chooses to do some interesting things with it. They gave us Google Earth, which at least morally cancels out their 'accidental' uploading of everyone's wireless data as their Street View cars went past. They have developed self-driving cars. They have bought and helped develop DeepMind and their quantum computer. They have done quite a bit for renewable energy. They have spent some on high-altitude communications planes, supposedly to bring internet to the rural parts of the developing world. When I were a lad, I wanted to be a rich bastard so I could do all that. Now, I watch as the wealthy owners of these big companies do it instead. I am fairly happy with that. I get the results and didn't have to make the effort. We get less tax, but at least we get some nice toys. Almost cancels. Score +6

They are trying to use their AI to analyse massive data pools of medical records to improve medicine. Score +2

They are also building their databases more while doing that but we don’t yet see the downside. We have to take what they are doing on trust until evidence shows otherwise.

Google has tried and failed at many things that were going to change the world and didn’t, but at least they tried. Most of us don’t even try. Score +2

Oh yes, they bought YouTube, so I should factor that in. Mostly harmless and can be fun. Score: +2

Almost forgot Gmail too. Score +3

I’m done. Total Google contribution to humanity: +14

Well done! Could do even better.

I’ve almost certainly overlooked some big pluses and minuses, but I’ll leave it here for now.

Now Facebook.

It's obviously a good social network site if you want that sort of thing. It lets people keep in touch with each other, find old friends and make new ones. It lets others advertise their products and services, and lets others find or spread news. That's all well and good, and even if I and many other people don't want it, many others do, so it deserves a good score, even if it isn't as fantastic as Google's search, which almost everyone uses all the time. Score +5

Connected, but separate from simply keeping in touch, is the enormous pleasure value people presumably get from socializing. Not me personally, but ‘people’. Score +8

On the downside: quite a lot of problems result from people, especially teens, spending too much time on Facebook. I won't reproduce the results of all the proper academic studies here, but we've all seen various negative reports: people get lower grades in their exams, people get bullied, people become socially competitive – boasting about their successes while other people feel insecure or depressed because others seem to be doing better, or are prettier, or have more friends. Keeping in touch is good, but cutting bits off others' egos to build your own isn't. It is hard not to conclude that the negative uses of keeping in touch outweigh the positive ones. Long-lived bad feelings outweigh short-lived ego boosts. Score: -8

Within a few years of its birth, Facebook evolved from a keeping-in-touch platform into a general-purpose mini-web. Many people were using Facebook to do almost everything that others would do on the entire web. Being in a broom cupboard is fine for 5 minutes if you're playing hide and seek, but it is not desirable as a permanent state. Still, it is optional, so it isn't that bad per se. Score: -3

In the last 2 or 3 years, it has evolved further, albeit probably unintentionally, to become a political bubble, as has become very obvious in Brexit and the US Presidential Election, though it was already apparent well before those. Facebook may not have caused the increasing divide we are seeing between left and right, across the whole of the West, but it amplifies it. Again, I am not implying any intent, just observing the result. Most people follow people and media that echoes their own value judgments. They prefer resonance to dissonance. They prefer to have their views reaffirmed than to be disputed. When people find a comfortable bubble where they feel they belong, and stay there, it is easy for tribalism to take root and flourish, with demonization of the other not far behind. We are now seeing that in our bathtub society, with two extremes and a rapidly shallowing in-between that was not long ago the vast majority. Facebook didn’t create human nature; rather, it is a victim of it, but nonetheless it provides a near-monopoly social network that facilitates such political bubbles and their isolation while doing far too little to encourage integration in spite of its plentiful resources. Dangerous and Not Good. Score -10

On building databases of details of our innermost lives, managing not to use the data to destroy our lives but instead only using it to sell ads, they compare with Google. I’ll score that the same total for the same reasons: Net Score -3

Tax? Quantities are different, but eagerness to avoid tax seems similar to Google. Principles matter. So same score: -8

Assorted messaging qualifies as additional to the pure social networking side I think so I’ll generously give them an extra bit for that: Score +2

They occasionally do good things with the money, like Google. They are also developing a high-altitude internet, and are playing with space exploration. A tiny bit of AI stuff, but not much else has crossed my consciousness. I think it is far less than Google but still positive, so I'll score: +3

I honestly can’t think of any other significant contributions from Facebook to make the balance more positive, and I tried. I think they want to make a positive contribution, but are too focused on income to tackle the social negatives properly.

Total Facebook contribution to humanity: -14.

Oh dear! Must do better.

Conclusion: We’d be a lot worse off without Google. Even with their faults, they still make a great contribution to humankind. Maybe not quite a ‘do no evil’ rating, but certainly they qualify for ‘do net good’. On the other hand, sadly, I have to say that my analysis suggests we’d be a lot better off without Facebook. As much better off without them as we benefit by having Google.

If I have left something major out, good or bad, for either company, please feel free to add your comments. I have deliberately left out their backing of their own political leanings and biases, because whether you think that is good or bad depends on where you are coming from. It would only score about +/-3 anyway, which isn’t a game changer.

Fluorescent microsphere mist displays

A few 3D mist displays have been demonstrated over the last decade. I’ve seen a couple at trade shows and have been impressed. To date, they use mists or curtains of tiny water droplets to make a 3D space onto which to project an image, so you get a walk-through 3D life-sized display. Like this:

http://wonderfulengineering.com/leia-display-system-uses-a-screen-made-of-water-mist-to-display-3d-projections/

or check out: http://ixfocus.com/top-10-best-3d-water-projections-ever/

Two years ago, I suggested using a forehead-mounted mist projector:

https://timeguide.wordpress.com/2014/11/03/forehead-3d-mist-projector/

so you could have a 3D image made right in front of you anywhere.

This week, a holographic display has been doing the rounds on Twitter, called Gatebox:

https://www.geek.com/tech/gatebox-wants-to-be-your-personal-holographic-companion-1682967/

It looks OK, but mist displays might be a better solution for everyday use because they can be made a lot bigger more cheaply. However, nobody really wants water mist causing electrical problems in their PCs or making their notebook paper soggy. You can use smoke as a mist substitute, but then you have a cloud of smoke around you. So…

Suppose that instead of using water droplets, and walking around veiled in fog or smoke or accompanied by electrical crackling and dead PCs, the mist were made of tiny, dry and obviously non-toxic particles such as fluorescent micro-spheres, invisible to the naked eye and transparent to visible light, so you can’t see the mist at all and it won’t make stuff damp. Instead of having visible light projected onto them, the particles are made of fluorescent material, so a UV projector illuminates them and they fluoresce with the right colour to make the visible display. There are plenty of fluorescent materials that could be made into tiny particles, even nano-particles, and turned into an invisible mist that produces a bright, high-resolution display. Even if non-toxic is too big an ask, or the fluorescent material is too expensive to waste, a large box that keeps the particles contained and recycles them for the next display could still be bigger, better, brighter and cheaper than a large holographic display.

Remember, you saw it here first. My 101st invention of 2016.

Can we automate restaurant reviews?

Reviews are an important part of modern life. People often consult reviews before buying things, visiting a restaurant or booking a hotel. There are even reviews of the best seats to choose on planes. When reviews are honestly given, they can be very useful to potential buyers, but what if they aren’t honestly given? What if they are glowing reviews written by friends of the restaurant owners, or scathing reviews written by friends of the competition? What if the service received was fine, but the reviewer simply didn’t like the race or gender of the person delivering it? Many reviews fall into these categories, but of course we can’t be sure how many, because when someone writes a review, we don’t know whether they were being honest, or whether they were biased. Adding a category of automated reviews would add credibility, provided the technology is independent of the establishment concerned.

Machine vision software is now so good that it can read lips better than human lip-reading experts. It can detect emotions too, distinguishing smiles from frowns, and whether someone is nervous, stressed or relaxed. Voice recognition can discern not only words but also changes in pitch and volume that indicate their emotional context. Wearable devices can also detect emotions such as stress.

Given this wealth of technological capability, cameras and microphones in a restaurant could help verify human reviews and provide machine reviews. Using the check-in process, the system could identify members of a group who might later submit a review, and then compare their review with video and audio records of the visit to determine whether it seems reasonably true. This could be done by machine, using analysis of gestures, chat and facial expressions. If the person giving a poor review looked unhappy with the taste of the food while they were eating it, then the review is credible. If their facial expression was one of sheer pleasure and the review said the food tasted awful, then that review could be marked as not credible, and furthermore, other reviews by that person could be called into question too. In effect, guests would be given automated reviews of their own credibility. Over time, each reviewer would accrue a trust rating that could be used to group or weight their other reviews by credibility.
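Purely to illustrate the idea, here is a minimal sketch of how such a credibility check might work. It assumes a hypothetical upstream pipeline that supplies an emotion score for each diner from the video feed and a sentiment score for their written review, both on a -1 to +1 scale; the names and numbers are mine for the example, not any real system’s.

```python
# Minimal sketch of the credibility check described above. The observed emotion
# scores and review sentiment are assumed to come from hypothetical vision/NLP
# stages not shown here; no particular library or product is implied.

from dataclasses import dataclass
from statistics import mean

@dataclass
class DinerObservation:
    diner_id: str
    observed_sentiments: list[float]   # per-frame emotion scores, -1 (unhappy) .. +1 (delighted)

@dataclass
class Review:
    diner_id: str
    text_sentiment: float              # sentiment of the written review, -1 .. +1

def review_credibility(obs: DinerObservation, review: Review) -> float:
    """Return 0..1 credibility: 1 = review matches what the cameras saw, 0 = contradicts it."""
    observed = mean(obs.observed_sentiments)
    # Maximum possible disagreement is 2.0 (e.g. looked delighted, wrote 'awful').
    return 1.0 - abs(observed - review.text_sentiment) / 2.0

# A simple running trust rating per guest, accrued over many visits.
trust: dict[str, list[float]] = {}

def update_trust(obs: DinerObservation, review: Review) -> float:
    score = review_credibility(obs, review)
    trust.setdefault(review.diner_id, []).append(score)
    return mean(trust[review.diner_id])

# Example: a diner who looked happy on camera but wrote a scathing review.
obs = DinerObservation("guest42", [0.7, 0.8, 0.6])
rev = Review("guest42", -0.9)
print(update_trust(obs, rev))   # roughly 0.2: low credibility, so their reviews carry less weight
```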

Totally automated reviews could also be produced, by analyzing facial expressions, conversations and gestures across a whole restaurant full of people. These machine reviews would be processed in the cloud by trusted review companies and could give star ratings for restaurants. They could even take into account what dishes people were eating to give ratings for each dish, as well as more general ratings for entire chains.
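Again just as an illustrative sketch, an aggregate machine review might boil down to averaging per-diner satisfaction scores, per dish and for the whole room, and mapping them onto a star scale. The dish tags and the 1-to-5-star mapping below are assumptions made for the example.

```python
# Sketch of a fully automated restaurant (and per-dish) rating, assuming a
# hypothetical emotion-detection stage yields one satisfaction score per diner
# (-1..+1), tagged with the dish they were eating.

from collections import defaultdict
from statistics import mean

def to_stars(sentiment: float) -> float:
    """Map a -1..+1 satisfaction score onto a 1-5 star scale."""
    return round(1 + (sentiment + 1) * 2, 1)

def machine_review(observations: list[tuple[str, float]]) -> dict:
    """observations: (dish_name, satisfaction) pairs gathered across the whole room."""
    by_dish = defaultdict(list)
    for dish, score in observations:
        by_dish[dish].append(score)
    return {
        "restaurant_stars": to_stars(mean(s for _, s in observations)),
        "dish_stars": {dish: to_stars(mean(scores)) for dish, scores in by_dish.items()},
    }

print(machine_review([("fish pie", 0.6), ("fish pie", 0.4), ("salad", -0.2)]))
# {'restaurant_stars': 3.5, 'dish_stars': {'fish pie': 4.0, 'salad': 2.6}}
```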

Service could be automatically assessed to some degree too. How long were people there before they were greeted, served, asked for their order, or had their food delivered? The conversation could even be automatically transcribed in many cases, so comments about rudeness or mistakes could be verified. A small sketch of such timing metrics follows.
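A hedged sketch of those timing metrics, assuming the same hypothetical camera and audio pipeline can timestamp events such as arrival, greeting, ordering and serving:

```python
# Tiny sketch of automated service metrics from timestamped events. The event
# stream is assumed to come from the hypothetical in-restaurant sensors above.

from datetime import datetime

def waits(events: dict[str, datetime]) -> dict[str, float]:
    """Minutes from arrival to each service milestone."""
    arrival = events["arrived"]
    return {name: (t - arrival).total_seconds() / 60
            for name, t in events.items() if name != "arrived"}

table = {
    "arrived": datetime(2016, 12, 20, 19, 0),
    "greeted": datetime(2016, 12, 20, 19, 4),
    "ordered": datetime(2016, 12, 20, 19, 12),
    "served":  datetime(2016, 12, 20, 19, 35),
}
print(waits(table))   # {'greeted': 4.0, 'ordered': 12.0, 'served': 35.0}
```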

Obviously there are many circumstances where this would not work, but there are many where it could, so AI might well become an important player in the reviews business. At a time when restaurants are closing because of malicious bad reviews, or ripping people off in spite of poor quality thanks to dishonest positive reviews, this could help a lot. A future where people are forced to be more honest in their reviews, because they know that AI review-checking could damage their reputation if they are found to have been dishonest, might cause some people to avoid reviewing altogether, but it could improve the reliability of the reviews that still do happen.

Still not perfect, but it could be a lot better than today, where you rarely know how much a review can be trusted.

Future Augmented Reality

AR has been hot on the list of future IT tech for 25 years. It has been used for various things since smartphones and tablets appeared but really hit the big time with the recent Pokemon craze.

To get an idea of the full potential of augmented reality, recognize that the web and all its impacts on modern life came from the convergence of two medium sized industries – telecoms and computing. Augmented reality will involve the convergence of everything in the real world with everything in the virtual world, including games, media, the web, art, data, visualization, architecture, fashion and even imagination. That convergence will be enabled by ubiquitous mobile broadband, cloud, blockchain payments, IoT, positioning and sensor tech, image recognition, fast graphics chips, display and visor technology and voice and gesture recognition plus many other technologies.

Just as you can put a Pokemon on a lawn, so you could watch aliens flying around in spaceships or cartoon characters or your favorite celebs walking along the street among the other pedestrians. You could just as easily overlay alternative faces onto the strangers passing by.

People will often want to display an avatar to people looking at them, and that could be different for every viewer. That desire competes with the desire of the viewer to decide how to see other people, so there will be some battles over who controls what is seen. Feminists will certainly want to protect women from the obvious objectification that would follow if a woman can’t control how she is seen. In some cases, such objectification and abuse could even reach into hate crime territory, with racist, sexist or homophobic virtual overlays. All this demands control, but it is far from obvious where that control would come from.

As for buildings, they too can have a virtual appearance. Virtual architecture will show off architect visualization skills, but will also be hijacked by the marketing departments of the building residents. In fact, many stakeholders will want to control what you see when you look at a building. The architects, occupants, city authorities, government, mapping agencies, advertisers, software producers and games designers will all try to push appearances at the viewer, but the viewer might want instead to choose to impose one from their own offerings, created in real time by AI or from large existing libraries of online imagery, games or media. No two people walking together on a street would see the same thing.

Interior decor is an even more attractive AR application. Someone living in a horrible tiny flat could use AR to create the feeling of far more space, far prettier decor and even a nicer local environment. Virtual windows onto Caribbean beaches may be more attractive than looking at the mouldy walls and the office block wall that are physically there. Reality is often expensive, but images can be free.

Even fashion offers a platform for AR enhancement. An outfit might look great on a celebrity, but real-life shapes might not measure up, and makeovers take time and money. In augmented reality, every garment can look as it should, and so can the makeup. The hardest part will be choosing from the large number of virtual outfits and makeups that can go with the smaller range of physical outfits actually in the wardrobe.

Gaming is in pole position, because 3D world design, imagination, visualization and real-time rendering are all games technology, so perhaps the biggest surprise in the Pokemon success is that it was the first to really grab attention. People could by now be virtually shooting aliens or zombie hordes coming up the escalators while they wait for their partners. The industry is a little late, but such widespread use of personal or social gaming on city streets and in malls will come soon.

AR Visors are on their way too, and though the first offerings will be too expensive to achieve widespread adoption, cheaper ones will quickly follow. The internet of things and sensor technology will create abundant ground-up data to make a strong platform. As visors fall in price, so too will the size and power requirements of the processing needed, though much can be cloud-based.

It is a fairly safe bet that marketers will try very hard to force images at us and if they can’t do that via blatant in-your-face advertising, then product placement will become a very fine art. We should expect strong alliances between the big marketing and advertising companies and top games creators.

As AI simultaneously develops, people will be able to generate a lot of their own overlays, explaining to AI what they’d like and having it produced for them in real time. That would undermine marketing use of AR so again there will be some battles for control. Just as we have already seen owners of landmarks try to trademark the image of their buildings to prevent people including them in photographs, so similar battles will fill the courts over AR. What is to stop someone superimposing the image of a nicer building on their own? Should they need to pay a license to do so? What about overlaying celebrity faces on strangers? What about adding multimedia overlays from the web to make dull and ordinary products do exciting things when you use them? A cocktail served in a bar could have a miniature Sydney fireworks display going on over it. That might make it more exciting, but should the media creator be paid and how should that be policed? We’ll need some sort of AR YouTube at the very least with added geolocation.

The whole arts and media industry will see city streets as galleries and stages on which to show off and sell their creations.

Public services will make more mundane use of AR. Simple everyday context-dependent signage is one application, but overlays would be valuable in emergencies too. If police or fire services could superimpose warnings on the visors of everyone nearby, that could help save lives in emergencies. Health services could use AR to help ordinary people care for a patient until an ambulance arrives.

Shopping provides more uses and more battles. AR will show you what a competing shop has on offer right beside the one in front of you, making it easy to digitally trespass on a competitor’s shop floor. People can already do that on their smartphones, but AR will put the full image, large as life, right in front of your eyes, making it very easy to compare two things. Shops won’t want to block comms completely, because that would deter people from entering at all, so they will either have to compete harder or find more elaborate ways of preventing people making direct visual comparisons in-store. Perhaps digital trespassing might become a legal issue.

There will inevitably be a lot of social media use of AR too. If people get together to demonstrate, it will be easier to coordinate them. If police insist they disperse, they could still congregate virtually. Dispersed flash mobs could be coordinated as much as ones in the same location. That makes AR a useful tool for grass-roots democracy, especially demonstrations and direct action, but it also provides a platform for negative uses such as terrorism. Social entrepreneurs will produce vast numbers of custom overlays for millions of different purposes and contexts. Today we have tens of millions of websites and apps. Tomorrow we will have even more AR overlays.

These are just a few of the near-term uses of augmented reality and a few hints at the issues arising. It will change every aspect of our lives in due course, just as the web has, but more so.