Category Archives: IoT

Google and the dangerous pursuit of ‘equality’

The world just got more dangerous, and I’m not talking about North Korea and Trump.

Google just sacked an employee because he openly suggested that men and women (not all, but some, and there is an overlap, and…) might tend to have different preferences in some areas, and that could (but not always, and only in certain cases, and we must always recognize and respect everyone, and…) possibly account for some of the difference in numbers of men and women in certain roles (but there might be other causes too, and obviously lots of discrimination, and…).

Yes, that’s what he actually said, but with rather more ifs and buts and maybes. He felt the need to wrap such an obvious statement in cotton wool several kilometers thick so as not to offend the deliberately offended, but deliberate offense was nonetheless taken and he is out on his ear.

Now, before you start thinking this is some right-wing rant, I feel obliged to point out just how progressive Futurizon is: 50% of all Futurizon owners and employees are female, all employees and owners have the same voting rights, 50% are immigrants and all are paid exactly the same and have the same size offices, regardless of dedication, ability, nature or quality or volume of output and regardless of their race, religion, beauty, shape, fitness, dietary preferences, baldness, hobbies or political views, even if they are Conservatives. All Futurizon offices are safe zones where employees may say anything they want of any level of truth, brilliance or stupidity and expect it to be taken as absolute fact and any consequential emotional needs to be fully met. No employee may criticize any other employee’s mouse mat, desk personalisation or screen wallpaper for obvious lack of taste. All employees are totally free to do anything they choose 100% of the time and can take as much leave as they want. All work is voluntary. All have the same right to respectfully request any other employee to make them coffee, tea or Pimms. All employees of all genders real or imagined are entitled to the same maternity and paternity rights, and the same sickness benefits, whether ill or not. In fact, Futurizon does not discriminate on any grounds whatsoever. We are proud to lead the world in non-discrimination. Unfortunately, our world-leading terms of employment mean that we can no longer afford to hire any new employees.

However, I note that Google has rather more power and influence than Futurizon so their policies count more. They appear (Google also has better lawyers than I can afford, so I must stress that all that follows is my personal opinion) to have firmly decided that diversity is all-important and they seem to want total equality of outcome. The view being expressed not just by Google but by huge swathes of angry protesters seems to be that any difference in workforce representation from that of the general population must arise from discrimination or oppression so must be addressed by positive action to correct it. There are apparently no statistically discernible differences in behavior between genders, or in job or role preference, so any you may have noticed over the time you’ve been alive is just your prejudice. Google says they fully support free speech and diversity of views, but expression of views is apparently only permitted as long as those views are authorized, on penalty of dismissal.

So unless I’m picking up totally the wrong end of the stick here, and I don’t do that often, only 13% of IT engineers are women, but internal policies must ensure that the proportion rises to 50%, whether women want to do that kind of work or not. In fact, nobody may question whether as many women want to work as IT engineers as men; it must now be taken as fact. By extension, since more women currently work in marketing, HR and PR, they must be substituted by men via positive action programs until men fill 50% of those roles. Presumably similar policies must also apply in medical bays for nursing and other staff there, and in construction teams for their nice new buildings. Ditto all other genders, races, religions; all groups must be protected and equalized to USA population proportions, apparently except those that don’t claim to hold sufficiently left-wing views, in which case it is seemingly perfectly acceptable to oppress, ostracize and even expel them.

In other words, freedom of choice and difference in ability, and more importantly freedom from discrimination, must be over-ruled in favor of absolute equality of diversity, regardless of financial or social cost, or impact on product or service quality. Not expressing full and enthusiastic left-wing compliance is seemingly just cause for dismissal.

So, why does this matter outside Google? Well, AI is developing very nicely. In fact, Google is one of the star players in the field right now. It is Google that will essentially decide how much of the AI around us is trained, how it learns, what it learns, what ‘knowledge’ it has of the world. Google will pick the content the AI learns from, and overrule or reeducate it if it draws any ‘wrong’ conclusions about the world, such as that more women than men want to be nurses or work in HR, or that more men than women want to be builders or engineers. A Google AI must presumably believe that the only differences between men and women are physical, unless their AI is deliberately excluded from the loudly declared corporate values and belief sets.

You should be very worried. Google’s values really matter. They have lots of influence on some of the basic tools of everyday life. Even outside their company, their AI tools and approaches will have strong influence on how other AI develops, determining operating systems and platforms, languages, mechanisms, interfaces, filters, even prejudices, and that reach and influence is likely to increase. Their AI may well be in many self-driving cars, and if they have to make life or death decisions, the underlying value assumptions must feature in the algorithms.

Soon companies will need AI that is more emotionally compliant. AI will use compliments or teasing or seduction or sarcasm or wit as marketing tools as well as just search engine positioning. Soon AI will use highly expressive faces with attractive voices, with attractive messages, tailored to appeal to you by pandering to your tastes and prejudices while thinking something altogether different. AI might be the person at the party that is all smiles and compliments, before going off to tell everyone else how awful it thinks you are. If you dare to say something not ‘authorized’, the ultra-smart AI all around you might treat you condescendingly, making you feel ashamed, ostracized, a dinosaur. Then it might secretly push you down a few pages in search results, or put a negative spin on text summaries about you, or exclude you from recommendations. Or it might do all the secret stuff while pretending it thinks you’re fantastic. Internal cultural policies in companies like Google today could soon be external social engineering to push the left-wing world the IT industry believes in – it isn’t just Google; Facebook and Twitter are also important and just as Left, though Amazon, Samsung, IBM and other AI players are less overtly politically biased, so far at least.
Left wing policies generally cost a lot more, but Google and Facebook will presumably still expect other companies and people to pay the taxes to pay for it all. As their female staff gear up to fight them over pay differences between men and women for similar jobs, it often seems that Google’s holier-than-thou morality doesn’t quite make it as far as their finances.

Then it really starts being fun. We’ll soon have bacteria that can fabricate electronic circuits within themselves. Soon they’ll be able to power them too, giving the concept of smart yogurt. These bacteria could also have nanotechnology flagella to help them get around. We’ll soon have bacterial spies all over our environment, even on our skin, intercepting electronic signals that give away our thoughts. They’ll bring in data on everything that is said, everything that everyone even thinks or feels. Those bacteria will be directly connected into AI, in fact they’ll be part of it. They’ll be able to change things, to favor or punish according to whether they like what someone believes in or how they behave.

It isn’t just right-wing extremists that need to worry. I’m apparently Nouveau Left – I score slightly left of center on political profiling tests – but I’m worried. A lot of this PC stuff seems extreme to me, sometimes just nonsense. Maybe it is, or maybe I should be lefter. But it’s not my choice. I don’t make the rules. Companies like Google make the rules; they even run the AI ethics groups. They decide much of what people see online, and even the meaning of the words. It’s very 1984-ish.

The trouble with the ‘echo chambers’ we heard about is that they soon normalize views to the loudest voices in those groups, and those don’t tend to be the moderates. We can expect views to drift further toward the extremes, not back toward the center. You probably aren’t left enough either. You should also be worried.

AI and activism, a Terminator-sized threat targeting you soon

You should be familiar with the Terminator scenario. If you aren’t, watch one of the Terminator films, because you really should be aware of it. But there is another issue related to AI that is arguably as dangerous as the Terminator scenario, far more likely to occur, and a threat in the near term. What’s even more dangerous is that, in spite of that, I’ve never read anything about it anywhere. It seems to have flown under our collective radar and is already close.

In short, my concern is that AI is likely to become a heavily armed Big Brother. It only requires a few components to come together that are already well in progress. Read this, and if you aren’t scared yet, read it again until you understand it 🙂

Already, social media companies are experimenting with using AI to identify and delete ‘hate’ speech. Various governments have asked them to do this, and since they also get frequent criticism in the media because some hate speech still exists on their platforms, it seems quite reasonable for them to try to control it. AI clearly offers potential to offset the huge numbers of humans otherwise needed to do the task.

Meanwhile, AI is already used very extensively by the same companies to build personal profiles on each of us, mainly for advertising purposes. These profiles are already alarmingly comprehensive, and increasingly capable of cross-linking between our activities across multiple platforms and devices. The latest efforts by Google attempt to link eventual purchases to clicks on ads. It will be just as easy to use similar AI to link our physical movements, activities and future social connections and communications to all such previous real world or networked activity. (Update: Intel intends its self-driving car technology to be part of a mass surveillance net, again, for all the right reasons.)

Although necessarily secretive about their activities, government also wants personal profiles on its citizens, always justified by crime and terrorism control. If they can’t do this directly, they can do it via legislation and acquisition of social media or ISP data.

Meanwhile, other experiences with AI chat-bots learning to mimic human behaviors have shown how easily AI can be gamed by human activists, hijacking or biasing learning phases for their own agendas. Chat-bots themselves have become ubiquitous on social media and are often difficult to distinguish from humans. Meanwhile, social media is becoming more and more important throughout everyday life, with provably large impacts in political campaigning and throughout all sorts of activism.

Meanwhile, some companies have already started using social media monitoring to police their own staff, in recruitment, during employment, and sometimes in dismissal or other disciplinary action. Other companies have similarly started monitoring social media activity of people making comments about them or their staff. Some claim to do so only to protect their own staff from online abuse, but there are blurred boundaries between abuse, fair criticism, political difference or simple everyday opinion or banter.

Meanwhile, activists increasingly use social media to force companies to sack a member of staff they disapprove of, or drop a client or supplier.

Meanwhile, end-to-end encryption technology is ubiquitous. Malware creation tools are easily available.

Meanwhile, successful hacks into large company databases become more and more common.

Linking these various elements of progress together, how long will it be before activists are able to develop standalone AI entities and heavily encrypt them before letting them loose on the net? Not long at all, I think. These AIs would search and police social media, spotting people who conflict with the activist agenda. Occasional hacks of corporate databases will provide names, personal details and contacts. Even without hacks, analysis of publicly available data going back years, everyone’s tweets and other social media entries, will provide the lists of people who have ever done or said anything the activists disapprove of.
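The core of that list-building is depressingly simple. As a minimal sketch, assuming entirely invented posts, authors and flagged phrases, a few lines of Python are enough to turn an archive of public posts into a target list:

```python
# Toy sketch only: the point is how trivially public posts can be filtered
# against an agenda. All names, posts and phrases here are invented.
FLAGGED_PHRASES = {"wrong opinion", "banned slogan"}

posts = [
    {"author": "alice", "text": "Lovely weather in town today"},
    {"author": "bob", "text": "Frankly, I hold the wrong opinion on this"},
    {"author": "carol", "text": "Great match last night"},
]

def build_target_list(posts, flagged):
    """Return the set of authors whose posts contain any flagged phrase."""
    targets = set()
    for post in posts:
        text = post["text"].lower()
        if any(phrase in text for phrase in flagged):
            targets.add(post["author"])
    return targets

print(build_target_list(posts, FLAGGED_PHRASES))  # {'bob'}
```

Real systems would use far subtler language models rather than phrase matching, but the scale argument is the same: once the filter exists, running it over years of archives costs almost nothing.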

When identified, they would automatically activate armies of chat-bots, fake news engines and automated email campaigns against them, with coordinated malware attacks directly on the person and indirect attacks via their employers, friends, contacts, government agencies, customers and suppliers, to do as much damage as possible to the interests of that person.

Just look at the everyday news already about alleged hacks and activities during elections and referendums by other regimes, hackers or pressure groups. Scale that up and realize that the cost of running advanced AI is negligible.

With the very many activist groups around, many driven with extremist zeal, very many people will find themselves in the sights of one or more of them. AI will be able to monitor everyone, all the time. AI will be able to target each of them at the same time to destroy each of their lives: anonymous, highly encrypted, hidden, roaming from server to server to avoid detection and annihilation, and once released, impossible to retrieve. The ultimate activist weapon, one that carries on the fight even if the activist is locked away.

We know for certain the depths and extent of activism, the huge polarization of society, the increasingly fierce conflict between left and right, between sexes, races, ideologies.

We know about all the nice things AI will give us with cures for cancer, better search engines, automation and economic boom. But actually, will the real future of AI be harnessed to activism? Will deliberate destruction of people’s everyday lives via AI be a real problem that is almost as dangerous as Terminator, but far more feasible and achievable far earlier?

Future sex, gender and relationships: how close can you get?

Using robots for gender play


I recently gave a public talk at the British Academy about future sex, gender and relationships, asking the question “How close can you get?”, considering particularly the impact of robots. The above slide is an example. People will one day (between 2050 and 2065, depending on their budget) be able to use an android body as their own or even swap bodies with another person. Some will do so to be young again, many will do so to swap gender. Lots will do both. I often enjoy playing as a woman in computer games, so why not ‘come back’ and live all over again as a woman for real? Except I’ll be 90 in 2050.

The British Academy kindly uploaded the audio track from my talk.

If you want to see the full presentation, here is the PowerPoint file as a pdf:


I guess it is theoretically possible to listen to the audio while reading the presentation. Most of the slides are fairly self-explanatory anyway.

Needless to say, the copyright of the presentation belongs to me, so please don’t reproduce it without permission.


Fluorescent microsphere mist displays

A few 3D mist displays have been demonstrated over the last decade. I’ve seen a couple at trade shows and have been impressed. To date, they use mists or curtains of tiny water droplets to make a 3D space onto which to project an image, so you get a walk-through 3D life-sized display. Like this:

or check out:

Two years ago, I suggested using a forehead-mounted mist projector:

so you could have a 3D image made right in front of you anywhere.

This week, a holographic display has been doing the rounds on Twitter, called Gatebox:

It looks OK, but mist displays might be a better solution for everyday use because they can be made a lot bigger more cheaply. However, nobody really wants water mist causing electrical problems in their PCs or making their notebook paper soggy. You can use smoke as a mist substitute, but then you have a cloud of smoke around you. So…

Suppose that instead of using water droplets, and walking around veiled in fog or smoke or accompanied by electrical crackling and dead PCs, the mist were made of tiny, dry and obviously non-toxic particles such as fluorescent micro-spheres that are invisible to the naked eye and transparent to visible light, so you can’t see the mist at all and it won’t make stuff damp. Instead of having visible light projected onto them, the particles are made of fluorescent material, so that they are illuminated by a UV projector and fluoresce with the right colour to make the visible display. There are plenty of fluorescent materials that could be made into tiny particles, even nano-particles, and made into an invisible mist that produces a bright and high-resolution display. Even if non-toxic is too big an ask, or the fluorescent material is too expensive to waste, a large box that keeps the particles contained and recycles them for the next display could still be bigger, better, brighter and cheaper than a large holographic display.

Remember, you saw it here first. My 101st invention of 2016.

AI presents a new route to attack corporate value

As AI increases in corporate, social, economic and political importance, it is becoming a big target for activists, and I think there are too many vulnerabilities. I think we should be seeing a lot more articles than we are about what developers are doing to guard against deliberate misdirection or corruption, and there is already far too much enthusiasm for making AI open source, thereby giving mischief-makers the means to identify weaknesses.

I’ve written hundreds of times about AI and believe it will be a benefit to humanity if we develop it carefully. Current AI systems are not vulnerable to the Terminator scenario, so we don’t have to worry about that happening yet. AI can’t yet go rogue and decide to wipe out humans by itself, though future AI could, so we’ll soon need to take care with every step.

AI can be used in multiple ways by humans to attack systems.

First and most obvious, it can be used to enhance malware such as trojans or viruses, or to optimize denial of service attacks. AI enhanced security systems already battle against adaptive malware and AI can probe systems in complex ways to find vulnerabilities that would take longer to discover via manual inspection. As well as AI attacking operating systems, it can also attack AI by providing inputs that bias its learning and decision-making, giving AI ‘fake news’ to use current terminology. We don’t know the full extent of secret military AI.

Computer malware will grow in scope to target AI systems, undermining corporate value or political campaigns.

A new route to attacking corporate AI, and hence the value a company derives from it, is already starting to appear though. As companies such as Google try out AI-driven cars, or others try out pavement/sidewalk delivery drones, mischievous people are already developing devious ways to misdirect or confuse them. Kids will soon have such activity as a hobby. Deliberate deception of AI is much easier when people know how it works, and although it’s nice for AI companies to put their AI out there in the open source markets for others to build on, that does rather steer future systems towards a mono-culture of vulnerability types. A trick that works against one future AI in one industry might well be adaptable to another use in another industry with a little devious imagination. Let’s take an example.

If someone builds a robot to deliberately step in front of a self-driving car every time it starts moving again, that might bring traffic to a halt, but police could quickly confiscate the robot, and they are expensive, a strong deterrent even if the pranksters are hiding and can’t be found. Cardboard cutouts might be cheaper though, even ones with hinged arms to look a little more lifelike. A social media orchestrated campaign against a company using such cars might involve thousands of people across a country or city deliberately waiting until the worst time to step out into a road when one of their vehicles comes along, thereby creating a sort of denial of service attack with that company seen as the cause of massive inconvenience for everyone. Corporate value would obviously suffer, and it might not always be very easy to circumvent such campaigns.

Similarly, the wheeled delivery drones we’ve been told to expect any time soon will have cameras to let them avoid bumping into objects, little old ladies, other people, cats and dogs. They can be stopped by cardboard cutouts, carefully crafted miniature tank traps, diversions or small roadblocks that people and pets can easily step over but drones can’t, built by the local kids from a few twigs or cardboard to a design that has gone viral that day. A few campaigns like that, with the cold pizzas and missing packages that result, could severely damage corporate value.

AI behind websites might be similarly defeated. An early experiment in making a Twitter chat-bot that learned to tweet by itself was quickly encouraged by mischief-makers to start tweeting offensively. If people have some idea how an AI makes its decisions, they will attempt to corrupt or distort it to their own ends. If it is heavily reliant on open source AI, then many of its decision processes will be known well enough for activists to develop appropriate corruption tactics. It is not too early to predict that the proposed AI-based attempts by Facebook and Twitter to identify and defeat ‘fake news’ will fall right into the hands of people already working out how to use them to smear opposition campaigns with such labels.
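To see how little effort learning-phase poisoning takes, here is a deliberately naive sketch. The bot below (invented for illustration; real chat-bots are far more sophisticated, but the principle is the same) learns word transitions from every message it sees, so a small group repeating one crafted phrase soon dominates its output:

```python
from collections import Counter, defaultdict

# Toy illustration of learning-phase poisoning, in the spirit of the chat-bot
# incident described above. The bot counts word-to-word transitions and always
# emits the most common follower of a word.
class NaiveBot:
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def learn(self, sentence):
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            self.transitions[a][b] += 1

    def next_word(self, word):
        counts = self.transitions[word.lower()]
        return counts.most_common(1)[0][0] if counts else None

bot = NaiveBot()
for s in ["cats are lovely", "cats are cute", "dogs are fun"]:
    bot.learn(s)                    # honest training data

for _ in range(50):
    bot.learn("cats are awful")     # coordinated poisoning campaign

print(bot.next_word("are"))  # 'awful'
```

Fifty repetitions of one phrase outvote all the honest data. Defenses exist (rate limiting, input filtering, weighting trusted sources), but any system that keeps learning from the public keeps offering this attack surface.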

It will be a sort of arms race of course, but I don’t think we’re seeing enough about this in the media. There is a great deal of hype about the various AI capabilities, a lot of doom-mongering about job cuts (and a lot of reasonable warnings about job cuts too) but very little about the fight back against AI systems by attacking them on their own ground using their own weaknesses.

That looks to me awfully like there isn’t enough awareness of how easily they can be defeated by deliberate mischief or activism, and I expect to see some red faces and corporate account damage as a result.


An article appeared yesterday that also discusses the bias I mentioned:

Since I wrote this blog, I was asked via LinkedIn to clarify why I said that open source AI systems would carry more security risk. Here is my response:

I wasn’t intending to heap fuel on a dying debate (though since the current debate looks the same as in the early 1990s, it is dying slowly). I like and use open source too. I should have explained my reasoning better. In regular (algorithmic) code, programming error rates should be similar, so increasing the number of people checking should cancel out the risk from more contributors; there should be no a priori difference between open and closed source. However:

In deep learning, obscurity reappears via neural net weightings being less intuitive to humans. That provides a tempting hiding place.
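A crude illustration of that hiding place, using a hand-planted backdoor rather than a learned one (everything here is invented; no real system or attack is implied). In a ten-weight linear model the rogue weight is findable; buried among millions of deep-net weights it would not be:

```python
import random

# Toy illustration: a "backdoor" hidden among otherwise innocuous weights.
# Input feature 9 acts as a secret trigger that forces approval.
random.seed(0)
w = [random.gauss(0, 0.1) for _ in range(10)]  # innocuous-looking weights
w[9] = 50.0                                    # hidden trigger weight

def classify(x, w):
    score = sum(a * b for a, b in zip(x, w))
    return 1 if score > 1.0 else 0  # 1 = "approve", 0 = "reject"

normal = [0.0] * 10            # ordinary input: trigger feature absent
trigger = [0.0] * 9 + [1.0]    # same input with the trigger feature set

print(classify(normal, w), classify(trigger, w))  # 0 1
```

Code review would catch the `w[9] = 50.0` line; nothing so readable exists in a trained network, where the equivalent bias is smeared across thousands of weights shaped by the training data itself.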

AI foundations are vulnerable to group-think, where team members share similar world models. These prejudices will affect the nature of both open source and closed source code, and result in AI with inherent and subtle judgment biases that are less easy to spot than bugs and more visible to people with alternative world models. Such people are more likely to exist in an open source pool than in a closed source one, and more likely to be opponents, so less likely to share their findings.

Deep learning may show the equivalent of political bias (or masculine and feminine styles). As well as encouraging group-think, that also distorts the distribution of biases, so the cancelling out of errors can no longer be assumed.

Human factors in defeating security often work better than exploiting software bugs. Some deep learning AI is designed to mimic humans as closely as possible in thinking and in interfacing. I suspect that might also make it more vulnerable to meta-human-factor attacks. Again, exposure to different and diverse cultures will give a non-uniform distribution of error/bias spotting, disclosure and exploitation.

Deep learning will become harder for humans to understand as it develops and becomes more machine dependent. That will amplify the above weaknesses. Think of optical illusions that greatly distort human perception and think of similar in advanced AI deep learning. Errors or biases that are discovered will become more valuable to an opponent since they are less likely to be spotted by others, increasing their black market exploitation risk.

I have not been a programmer for over 20 years and am no security expert so my reasoning may be defective, but at least now you know what my reasoning was and can therefore spot errors in it.

Can we automate restaurant reviews?

Reviews are an important part of modern life. People often consult reviews before buying things, visiting a restaurant or booking a hotel. There are even reviews on the best seats to choose on planes. When reviews are honestly given, they can be very useful to potential buyers, but what if they aren’t honestly given? What if they are glowing reviews written by friends of the restaurant owners, or scathing reviews written by friends of the competition? What if the service received was fine, but the reviewer simply didn’t like the race or gender of the person delivering it? Many reviews fall into these categories, but of course we can’t be sure how many, because when someone writes a review, we don’t know whether they were being honest or not, or whether they are biased or not. Adding a category of automated reviews would add credibility, provided the technology is independent of the establishment concerned.

Face recognition software is now so good that it can read lips better than human lip reading experts. It can be used to detect emotions too, distinguishing smiles or frowns, and whether someone is nervous, stressed or relaxed. Voice recognition can discern not only words but changes in pitch and volume that might indicate their emotional context. Wearable devices can also detect emotions such as stress.

Given this wealth of technological capability, cameras and microphones in a restaurant could help verify human reviews and provide machine reviews. Using the check-in process, the system could identify members of a group who might later submit a review, and compare their review with video and audio records of the visit to determine whether it seems reasonably true. This could be done by machine, using analysis of gestures, chat and facial expressions. If a person giving a poor review looked unhappy with the taste of the food while eating it, then the review is credible. If their facial expressions were of sheer pleasure and the review said it tasted awful, then that review could be marked as not credible, and furthermore, other reviews by that person could be called into question too. In fact, guests would in effect be given automated reviews of their own credibility. Over time, a trust rating would accrue that could be used to group other reviews by credibility rating.
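The cross-check itself could be very simple once the perception is done. As a hedged sketch, assume a hypothetical vision system has already reduced the diner's expressions to an enjoyment score between -1 and 1 (the scores, word lists and reviews below are all made up); the credibility test is then just a consistency check:

```python
# Sketch of the review/expression cross-check described above. The
# observed_enjoyment values stand in for output from a hypothetical
# facial-expression analyzer; the word lists are toy sentiment lexicons.
NEG = {"awful", "terrible", "cold", "rude"}
POS = {"great", "delicious", "lovely", "friendly"}

def text_sentiment(review):
    words = set(review.lower().replace(".", "").split())
    return len(words & POS) - len(words & NEG)  # >0 positive, <0 negative

def credible(review, observed_enjoyment):
    """observed_enjoyment in [-1, 1], from assumed expression analysis."""
    s = text_sentiment(review)
    if s > 0 and observed_enjoyment < -0.3:
        return False   # glowing review from a visibly unhappy diner
    if s < 0 and observed_enjoyment > 0.3:
        return False   # scathing review from a visibly delighted diner
    return True

print(credible("The food was awful", 0.9))   # False
print(credible("The food was awful", -0.8))  # True
```

A trust rating would then just be the running fraction of a guest's reviews that pass this check.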

Totally automated reviews could also be produced, by analyzing facial expressions, conversations and gestures across a whole restaurant full of people. These machine reviews would be processed in the cloud by trusted review companies and could give star ratings for restaurants. They could even take into account what dishes people were eating to give ratings for each dish, as well as more general ratings for entire chains.

Service could also be automatically assessed to some degree. How long were the people there before they were greeted, served, asked for their orders, or had their food delivered? The conversation could even be automatically transcribed in many cases, so comments about rudeness or mistakes could be verified.
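The timing part needs nothing more than timestamped events per table. A minimal sketch, with hypothetical event names and times:

```python
from datetime import datetime

# Sketch of the service-timing analysis suggested above, using invented
# timestamped events logged for one table during one visit.
events = {
    "seated":       datetime(2017, 3, 1, 19, 0),
    "greeted":      datetime(2017, 3, 1, 19, 6),
    "order_taken":  datetime(2017, 3, 1, 19, 12),
    "food_served":  datetime(2017, 3, 1, 19, 40),
}

def wait_minutes(events, start, end):
    """Minutes elapsed between two logged events."""
    return (events[end] - events[start]).total_seconds() / 60

print(wait_minutes(events, "seated", "greeted"))      # 6.0
print(wait_minutes(events, "seated", "food_served"))  # 40.0
```

Aggregated over weeks of visits, such intervals would give a service rating far less gameable than self-reported waits in written reviews.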

Obviously there are many circumstances where this would not work, but there are many where it could, so AI might well become an important player in the reviews business. At a time when restaurants are closing due to malicious bad reviews, or ripping people off in spite of poor quality thanks to dishonest positive reviews, then this might help a lot. A future where people are forced to be more honest in their reviews because they know that AI review checking could damage their reputation if they are found to have been dishonest might cause some people to avoid reviewing altogether, but it could improve the reliability of the reviews that still do happen.

Still not perfect, but it could be a lot better than today, where you rarely know how much a review can be trusted.

Future Augmented Reality

AR has been hot on the list of future IT tech for 25 years. It has been used for various things since smartphones and tablets appeared but really hit the big time with the recent Pokemon craze.

To get an idea of the full potential of augmented reality, recognize that the web and all its impacts on modern life came from the convergence of two medium sized industries – telecoms and computing. Augmented reality will involve the convergence of everything in the real world with everything in the virtual world, including games, media, the web, art, data, visualization, architecture, fashion and even imagination. That convergence will be enabled by ubiquitous mobile broadband, cloud, blockchain payments, IoT, positioning and sensor tech, image recognition, fast graphics chips, display and visor technology and voice and gesture recognition plus many other technologies.

Just as you can put a Pokemon on a lawn, so you could watch aliens flying around in spaceships or cartoon characters or your favorite celebs walking along the street among the other pedestrians. You could just as easily overlay alternative faces onto the strangers passing by.

People will often want to display an avatar to people looking at them, and that could be different for every viewer. That desire competes with the desire of the viewer to decide how to see other people, so there will be some battles over who controls what is seen. Feminists will certainly want to protect women from the obvious objectification that would follow if a woman can’t control how she is seen. In some cases, such objectification and abuse could even reach into hate crime territory, with racist, sexist or homophobic virtual overlays. All this demands control, but it is far from obvious where that control would come from.

As for buildings, they too can have a virtual appearance. Virtual architecture will show off architect visualization skills, but will also be hijacked by the marketing departments of the building residents. In fact, many stakeholders will want to control what you see when you look at a building. The architects, occupants, city authorities, government, mapping agencies, advertisers, software producers and games designers will all try to push appearances at the viewer, but the viewer might want instead to choose to impose one from their own offerings, created in real time by AI or from large existing libraries of online imagery, games or media. No two people walking together on a street would see the same thing.
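That competition between stakeholders reduces, at the rendering end, to a policy decision about whose overlay wins. A minimal sketch of one plausible resolution, where the viewer's own ordering has the final say (the stakeholder names and overlays are invented for illustration):

```python
# Sketch of the overlay-control battle described above: several stakeholders
# push an appearance for the same building; the viewer's priority list decides.
candidate_overlays = {
    "architect": "original-facade",
    "occupant_marketing": "giant-ad",
    "games_layer": "castle-skin",
}

viewer_priority = ["games_layer", "architect", "occupant_marketing"]

def choose_overlay(candidates, priority, default="physical"):
    """Return the first pushed overlay the viewer's policy accepts."""
    for source in priority:
        if source in candidates:
            return candidates[source]
    return default

print(choose_overlay(candidate_overlays, viewer_priority))  # 'castle-skin'
```

The interesting fights will be over who gets to set that priority list: the viewer, the platform, the advertiser, or regulation.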

Interior decor is even more attractive as an AR application. Someone living in a horrible tiny flat could enhance it using AR to give the feeling of far more space and far prettier decor and even local environment. Virtual windows onto Caribbean beaches may be more attractive than looking at mouldy walls and the office block wall that are physically there. Reality is often expensive but images can be free.

Even fashion offers a platform for AR enhancement. An outfit might look great on a celebrity but real life shapes might not measure up. Makeovers take time and money too. In augmented reality, every garment can look as it should, and so can makeup. The hardest part will be choosing among the many virtual outfits and makeups that can accompany the smaller range of physical outfits actually available from that wardrobe.

Gaming is in pole position, because 3D world design, imagination, visualization and real time rendering technology are all games technology, so perhaps the biggest surprise in the Pokemon success is that it was the first to really grab attention. People could by now be virtually shooting aliens or zombie hordes swarming up escalators as they wait for their partners. The industry is a little late, but such widespread use of personal or social gaming on city streets and in malls will come soon.

AR visors are on their way too, and though the first offerings will be too expensive to achieve widespread adoption, cheaper ones will quickly follow. The internet of things and sensor technology will create abundant ground-up data to make a strong platform. As visors fall in price, so too will the size and power requirements of the processing needed, though much can be cloud-based.

It is a fairly safe bet that marketers will try very hard to force images at us and if they can’t do that via blatant in-your-face advertising, then product placement will become a very fine art. We should expect strong alliances between the big marketing and advertising companies and top games creators.

As AI simultaneously develops, people will be able to generate a lot of their own overlays, explaining to AI what they’d like and having it produced for them in real time. That would undermine marketing use of AR so again there will be some battles for control. Just as we have already seen owners of landmarks try to trademark the image of their buildings to prevent people including them in photographs, so similar battles will fill the courts over AR. What is to stop someone superimposing the image of a nicer building on their own? Should they need to pay a license to do so? What about overlaying celebrity faces on strangers? What about adding multimedia overlays from the web to make dull and ordinary products do exciting things when you use them? A cocktail served in a bar could have a miniature Sydney fireworks display going on over it. That might make it more exciting, but should the media creator be paid and how should that be policed? We’ll need some sort of AR YouTube at the very least with added geolocation.
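As a thought experiment, that "AR YouTube with added geolocation" could be imagined as an overlay registry queried by position, returning only licensed overlays within range. The schema, coordinates, radii and licence flag below are all invented for illustration:

```python
# Hypothetical geolocated overlay registry. Everything here (schema, radius,
# licence flag, example entries) is an assumption, not a real service.
import math

overlays = [
    {"name": "sydney_fireworks_cocktail", "lat": 51.5074, "lon": -0.1278,
     "radius_m": 50, "licensed": True},
    {"name": "celebrity_face_swap", "lat": 51.5080, "lon": -0.1280,
     "radius_m": 100, "licensed": False},
]

def nearby_overlays(lat, lon, require_licence=True):
    """Return names of overlays whose coverage circle includes the viewer."""
    results = []
    for o in overlays:
        # Equirectangular approximation; fine over tens of metres.
        dx = math.radians(o["lon"] - lon) * math.cos(math.radians(lat))
        dy = math.radians(o["lat"] - lat)
        dist_m = 6371000 * math.hypot(dx, dy)
        if dist_m <= o["radius_m"] and (o["licensed"] or not require_licence):
            results.append(o["name"])
    return results

print(nearby_overlays(51.5074, -0.1278))  # ['sydney_fireworks_cocktail']
```

The `require_licence` switch is where the policing questions above would bite: who sets that flag, and who gets paid when it is set.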

The whole arts and media industry will see city streets as galleries and stages on which to show off and sell their creations.

Public services will make more mundane use of AR. Simple everyday context-dependent signage is one application, but overlays would be valuable in emergencies too. If police or fire services could superimpose warnings on everyone's visors nearby, that may help save lives in emergencies. Health services will use AR to assist ordinary people to care for a patient until an ambulance arrives.

Shopping provides more uses and more battles. AR will show you what a competing shop has on offer right beside the one in front of you. That will make it easy to digitally trespass on a competitor's shop floor. People can already do that on their smartphone, but AR will put the full image large as life right in front of your eyes to make it very easy to compare two things. Shops won't want to block comms completely because that would prevent people wanting to enter their shop at all, so they will either have to compete harder or find more elaborate ways of preventing people making direct visual comparisons in-store. Perhaps digital trespassing might become a legal issue.

There will inevitably be a lot of social media use of AR too. If people get together to demonstrate, it will be easier to coordinate them. If police insist they disperse, they could still congregate virtually. Dispersed flash mobs could be coordinated as much as ones in the same location. That makes AR a useful tool for grass-roots democracy, especially demonstrations and direct action, but it also provides a platform for negative uses such as terrorism. Social entrepreneurs will produce vast numbers of custom overlays for millions of different purposes and contexts. Today we have tens of millions of websites and apps. Tomorrow we will have even more AR overlays.

These are just a few of the near term uses of augmented reality and a few hints at the issues arising. It will change every aspect of our lives in due course, just as the web has, but more so.


Carbethium, a better-than-scifi material

How to build one of these for real:

Halo light bridge

Or indeed one of these:
I recently tweeted that I had an idea how to make the glowy bridges and shields we’ve seen routinely in sci-fi games from Half Life to Destiny, the bridges that seem to appear in a second or two from nothing across a divide, yet are strong enough to drive tanks over, and able to vanish as quickly and completely when they are switched off. I woke today realizing that with a bit of work, it could be the basis of a general purpose material to make the tanks too, and buildings and construction platforms, bridges, roads and driverless pod systems, personal shields and city defense domes, force fields, drones, planes and gliders, space elevator bases, clothes, sports tracks, robotics, and of course assorted weapons and weapon systems. The material would only appear as needed and could be fully programmable. It could even be used to render buildings from VR to real life in seconds, enabling at least some holodeck functionality. All of this is feasible by 2050.

Since it would be as ethereal as those Halo structures, I first wanted to call the material ethereum, but that name was already taken (for a 2014 block-chain programming platform, which I note could be used to build the smart ANTS network management system that Chris Winter and I developed in BT in 1993), and this new material would be a programmable construction platform so the names would conflict, and etherium is too close. Ethium might work, but it would be based on graphene and carbon nanotubes, and I am quite into carbon so I chose carbethium.

Ages ago I blogged about plasma as a 21st Century building material. I’m still not certain this is feasible, but it may be, and it doesn’t matter for the purposes of this blog anyway.

Around then I also blogged how to make free-floating battle drones and more recently how to make a Star Wars light-saber.

Carbethium would use some of the same principles but would add the enormous strength and high conductivity of graphene to provide the physical properties to make a proper construction material. The programmable matter bits and the instant build would use a combination of 3D interlocking plates, linear induction,  and magnetic wells. A plane such as a light bridge or a light shield would extend from a node in caterpillar track form with plates added as needed until the structure is complete. By reversing the build process, it could withdraw into the node. Bridges that only exist when they are needed would be good fun and we could have them by 2050 as well as the light shields and the light swords, and light tanks.

The last bit worries me. The ethics of carbethium are the typical mixture of enormous potential good and huge potential for abuse to bring death and destruction that we’re learning to expect of the future.

If we can make free-floating battle drones, tanks, robots, planes and rail-gun plasma weapons all appear within seconds, if we can build military bases and erect shield domes around them within seconds, then warfare moves into a new realm. Those countries that develop this stuff first will have a huge advantage, with the ability to send autonomous robotic armies to defeat enemies with little or no risk to their own people. If developed by a James Bond super-villain on a hidden island, it would even be the sort of thing that would enable a serious bid to take over the world.

But in the words of Professor Emmett Brown, “well, I figured, what the hell?”. 2050 values are not 2016 values. Our value set is already on a random walk, disconnected from any anchor, its future direction indicated by a combination of current momentum and a chaos engine linking to random utterances of arbitrary celebrities on social media. 2050 morality on many issues will be the inverse of today’s, just as today’s is on many issues the inverse of the 1970s’. Whatever you do or however politically correct you might think you are today, you will be an outcast before you get old:

We’re already fucked, carbethium just adds some style.

Graphene combines huge tensile strength with enormous electrical conductivity. A plate can be added to the edge of an existing plate and interlocked, I imagine in a hexagonal or triangular mesh. Plates can be designed in many diverse ways to interlock, so that rotating one engages with the next, and reversing the rotation unlocks them. Plates can be pushed to the forward edge by magnetic wells, using linear induction motors, using the graphene itself as the conductor to generate the magnetic field and the design of the structure of the graphene threads enabling the linear induction fields. That would likely require that the structure forms first out of graphene threads, then the gaps between filled by mesh, and plates added to that to make the structure finally solid. This would happen in thickness as well as width, to make a 3D structure, though a graphene bridge would only need to be dozens of atoms thick.

So a bridge made of graphene could start with a single thread, which could be shot across a gap at hundreds of meters per second. I explained how to make a Spiderman-style silk thrower to do just that in a previous blog:

The mesh and 3D build would all follow from that. In theory that could all happen in seconds, the supply of plates and the available power being the primary limiting factors.
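As a very rough illustration of those limiting factors, a back-of-envelope calculation with entirely invented numbers for span, plate size and feed rate shows why build times measured in seconds are at least arithmetically plausible:

```python
# Back-of-envelope estimate of build time for a graphene plate bridge.
# All numbers are purely illustrative assumptions; none come from the text.

span_m = 50.0             # gap to bridge, assumed
width_m = 3.0             # walkway width, assumed
plate_edge_m = 0.01       # 1 cm plates, assumed
plates_per_m2 = 1.0 / (plate_edge_m ** 2)
plate_feed_rate = 1e6     # plates placed per second, assumed

plates_needed = span_m * width_m * plates_per_m2
build_time_s = plates_needed / plate_feed_rate
print(f"{plates_needed:.0f} plates, ~{build_time_s:.1f} s to build")
```

Halve the plate feed rate or double the span and the time scales linearly, which is why plate supply and power are the limits rather than the physics of the front edge.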

Similarly, a shield or indeed any kind of plate could be made by extending carbon mesh out from the edge or center and infilling. We see that kind of technique used often in sci-fi to generate armor, from Lost in Space to Iron Man.

The key components in carbethium are 3D interlocking plate design and magnetic field design for the linear induction motors. Interlocking via rotation is fairly easy in 2D, any spiral will work, and the 3rd dimension is open to any building block manufacturer. 3D interlocking structures are very diverse and often innovative, and some would be more suited to particular applications than others. As for linear induction motors, a circuit is needed to produce the travelling magnetic well, but that circuit is made of the actual construction material. The front edge link between two wires creates a forward-facing magnetic field to propel the next plates and convey enough inertia to them to enable kinetic interlocks.

So it is feasible, and only needs some engineering. The main barrier is price and material quality. Graphene is still expensive to make, as are carbon nanotubes, so we won’t see bridges made of them just yet. The material quality so far is fine for small scale devices, but not yet for major civil engineering.

However, the field is developing extremely quickly because big companies and investors can clearly see the megabucks at the end of the rainbow. We will almost certainly have large quantity production of high quality graphene for civil engineering by 2050.

This field will be fun. Anyone who plays computer games is already familiar with the idea. Light bridges and shields, or light swords would appear much as in games, but the material would likely be graphene and nanotubes (or maybe the newfangled molybdenum equivalents). They would glow during construction with the plasma generated by the intense electric and magnetic fields, and the glow would be needed afterward to make these ultra-thin physical barriers clearly visible, but they might become highly transparent otherwise.

Assembling structures as they are needed and disassembling them just as easily will be very resource-friendly, though it is unlikely that carbon will be in short supply. We can just use some oil or coal to get more if needed, or process some CO2. The walls of a building could be grown from the ground up at hundreds of meters per second in theory, with floors growing almost as fast, though there should be little need to do so in practice, apart from pushing space vehicles up so high that they need little fuel to enter orbit. Nevertheless, growing a building and then even growing the internal structures and even furniture is feasible, all using glowy carbethium. Electronic soft fabrics, cushions and hard surfaces and support structures are all possible by combining carbon nanotubes and graphene and using the reconfigurable matter properties carbethium conveys. So are visual interfaces, electronic windows, electronic wallpaper, electronic carpet, computers, storage, heating, lighting, energy storage and even solar power panels. So is all the comms and IoT and all the smart embedded control systems you could ever want. So you’d use a computer with VR interface to design whatever kind of building and interior furniture decor you want, and then when you hit the big red button, it would appear in front of your eyes from the carbethium blocks you had delivered. You could also build robots using the same self-assembly approach.

If these structures can assemble fast enough, and I think they could, then a new form of kinetic architecture would appear. This would use the momentum of the construction material to drive the front edges of the surfaces, kinetic assembly allowing otherwise impossible and elaborate arches to be made.

A city transport infrastructure could be built entirely out of carbethium. The linear induction mats could grow along a road, connecting quickly to make a whole city grid. Circuit design allows the infrastructure to steer driverless pods wherever they need to go, and they could also be assembled as required using carbethium. No parking or storage is needed, as the pod would just melt away onto the surface when it isn’t needed.
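The steering job that infrastructure would do can be sketched as routing over a grid of induction mats; below is a minimal breadth-first-search version, with the grid layout, coordinates and blocked cells invented purely for illustration:

```python
# Sketch of steering a driverless pod across a city grid of linear-induction
# mats, modelled as breadth-first search over grid junctions.
# The grid layout and coordinates are invented for illustration.
from collections import deque

def route(grid_size, blocked, start, goal):
    """Shortest junction-to-junction path; blocked cells model closed mats."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        x, y = path[-1]
        if (x, y) == goal:
            return path
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < grid_size and 0 <= ny < grid_size
                    and (nx, ny) not in blocked and (nx, ny) not in seen):
                seen.add((nx, ny))
                frontier.append(path + [(nx, ny)])
    return None  # no route available

path = route(5, {(1, 1), (1, 2)}, (0, 0), (3, 3))
print(len(path) - 1)  # number of mat segments traversed
```

In a real deployment the "circuit design" doing the steering would live in the mats themselves, but the routing logic is the same kind of shortest-path search.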

I could go to town on military and terrorist applications, but more interesting is the use of the defense domes. When I was a kid, I imagined having a house with a defense dome over it. Lots of sci-fi has them now too. Domes have a strong appeal, even though they could also be used as prisons of course. A supply of carbethium on the city edges could be used to grow a strong dome in minutes or even seconds, and there is no practical limit to how strong it could be. Even if lasers were used to penetrate it, the holes could fill in again in real time, replacing material as fast as it is evaporated away.

Anyway, lots of fun. Today’s civil engineering projects like HS2 look more and more primitive by the day, as we finally start to see the true potential of genuinely 21st century construction materials. 2050 is not too early to expect widespread use of carbethium. It won’t be called that – whoever commercializes it first will name it, or Google or MIT will claim to have just invented it in a decade or so, so my own name for it will be lost to personal history. But remember, you saw it here first.

Diabetes: Electronically controlled drug delivery via smart membrane

This is an invention I made in 2001 as part of my active skin suite to help diabetics. I’ve just been told I am another of the zillions of diabetics in the world so was reminded of it.

This wasn’t feasible in 2001 but it will be very soon, and could be an ideal way of monitoring blood glucose and insulin levels, checking with clinic AI for the correct dose, and then opening the membrane pores just enough and long enough to allow the right dose of insulin to pass through. Obviously pore and drug particle design have to be coordinated, but this should be totally feasible. Here’s some pics:
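The control loop implied by this design can be sketched in a few lines. Everything below, the target level, the dosing rule standing in for the clinic AI, and the pore flow rate, is an invented placeholder rather than medical guidance:

```python
# Illustrative control-loop sketch of the smart-membrane idea: read glucose,
# ask a (here, stubbed) clinic-AI dosing rule for an insulin dose, then open
# the membrane pores just long enough to pass it.
# All thresholds, rates and names are invented assumptions.

def clinic_ai_dose(glucose_mmol_l, target=6.0, sensitivity=0.5):
    """Stub dosing rule: units of insulin proportional to excess glucose."""
    excess = glucose_mmol_l - target
    return max(0.0, excess * sensitivity)

def pore_open_time(dose_units, flow_units_per_s=0.2):
    """How long pores stay open to let the dose diffuse through."""
    return dose_units / flow_units_per_s

reading = 10.0  # mmol/L from the membrane's glucose sensor
dose = clinic_ai_dose(reading)
print(f"dose {dose:.1f} U, pores open {pore_open_time(dose):.0f} s")
```

The point of the sketch is the architecture: sensing, remote dose calculation and actuation all happen in or next to the membrane, which is what makes pore and drug particle design the coordination problem.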

Active skin principles

Drug delivery overview

Drug delivery mechanism

New book: Society Tomorrow

It’s been a while since my last blog. That’s because I’ve been writing another book, my 8th so far. It isn’t the one I was doing on future fashion; that went on the back burner for a while, and since I’ve only written a third of it, it will stay unfinished unless I put it out as a very short book.

This one follows on from You Tomorrow and is called Society Tomorrow, 20% shorter at 90,000 words. It is ready to publish now, so I’m just waiting for feedback from a few people before hitting the button.


Here’s the introduction:

The one thing that we all share is that we will get older over the next few decades. Rapid change affects everyone, but older people don’t always feel the same effects as younger people, and even if we keep up easily today, some of us may find it harder tomorrow. Society will change, in its demographic and ethnic makeup, its values, its structure. We will live very differently. New stresses will come from both changing society and changing technology, but there is no real cause for pessimism. Many things will get better for older people too. We are certainly not heading towards utopia, but the overall quality of life for our ageing population will be significantly better in the future than it is today. In fact, most of the problems ahead are related to quality of life issues in society as a whole, and simply reflect the fact that if you don’t have to worry as much about poor health or poverty, something else will still occupy your mind.

This book follows on from 2013’s You Tomorrow, which is a guide to future life as an individual. It also slightly overlaps my 2013 book Total Sustainability which looks in part at future economic and social issues as part of achieving sustainability too. Rather than replicating topics, this book updates or omits them if they have already been addressed in those two companion books. As a general theme, it looks at wider society and the bigger picture, drawing out implications for both individuals and for society as a whole to deal with. There are plenty to pick from.

If there is one theme that plays through the whole book, it is a strong warning of the problem of increasing polarisation between people of left and right political persuasion. The political centre is being eroded quickly at the moment throughout the West, but alarmingly this does not seem so much to be a passing phase as a longer term trend. With all the potential benefits from future technology, we risk undermining the very fabric of our society. I remain optimistic because it can only be a matter of time before sense prevails and the trend reverses. One day the relative harmony of living peacefully side by side with those with whom we disagree will be restored, by future leaders of higher quality than those we have today.

Otherwise, whereas people used to tolerate each other’s differences, I fear that this increasing intolerance of those who don’t share the same values could lead to conflict if we don’t address it adequately. That intolerance currently manifests itself in increasing authoritarianism, surveillance, and an insidious creep towards George Orwell’s Nineteen Eighty-Four. The worst offenders seem to be our young people, with students seemingly proud of trying to ostracise anyone who dares to disagree with what they think is correct. Being students, their views hold many self-contradictions and a clear lack of thought, but they appear to be building walls to keep any attempt at different thought away.

Altogether, this increasing divide, built largely from sanctimony, is a very dangerous trend, and will take time to reverse even when it is addressed. At the moment, it is still worsening rapidly.

So we face significant dangers, mostly self-inflicted, but we also have hope. The future offers wonderful potential for health, happiness, peace, prosperity. As I address the significant problems lying ahead, I never lose my optimism that they are soluble, but if we are to solve problems, we must first recognize them for what they are and muster the willingness to deal with them. On the current balance of forces, even if we avoid outright civil war, the future looks very much like a gilded cage. We must not ignore the threats. We must acknowledge them, and deal with them.

Then we can all reap the rich rewards the future has to offer.

It will be out soon.