The Metaverse – one of countless variants of virtuality.

My biggest ever error as a futurist was in 1991, just before I first played with VR on a Virtuality machine, when I predicted that VR would overtake TV as a form of recreation by 2000. It seemed obvious that it would. I estimated the approximate resolutions needed to make things sufficiently acceptable, and derived the computing power to fill a typical display with the virtual components a viewer would see at a time, then estimated how long that would take to arrive. I got 1998, and allowed a couple of further years for the market to take off enormously.

Before moving on, it’s worth looking at some of the reasons I got it wrong. First, computers did get better that quickly, but most of the increased power and memory was wasted by increasingly inefficient software practices, and that has continued to be the case ever since. Secondly, I had assumed far too fast a market take-up, though in my defence, that was my first ever project in futurology. Thirdly – and this wasn’t predictable, so not my fault – Dow Corning was sued over problems allegedly caused by their breast implants. The fact that the case was highly dubious and demanded enormous compensation for something Dow Corning may well not have caused must have absolutely terrified corporate lawyers all over the world. A few pieces of evidence were emerging that people using VR had become disoriented and one or two had minor accidents, while a few others felt eye strain. Any lawyer with a three-digit IQ would have considered it extremely likely that no-win, no-fee companies would mount huge class actions against anyone developing VR visors, on behalf of every future teenager who developed a squint, regardless of whether VR was actually the cause. In my view, that probably delayed visors by decades, while poor software practices probably delayed the technological capability by a decade too. We have since seen some VR and AR appear, and it is far higher quality than I assumed was needed when I made my prediction and calculations, so I certainly have to accept that I was 100% wrong on the appeal and market uptake rate. It is worth remembering this analysis when looking at potential future tech and markets. I was at the front edge of IT research but still managed to be very wrong.

Moving on, we’re seeing endless citation of the term ‘Metaverse’, of which Wikipedia says:

the word “Metaverse” is made up of the prefix “meta” and the stem “verse”; the term is typically used to describe the concept of a future iteration of the internet, made up of persistent, shared, 3D virtual spaces linked into a perceived virtual universe.

It’s nice that Wikipedia is still a credible source of information for those things that have no possible political angle. It isn’t all biased.

Hang on. This ‘metaverse’ represents such a blinkered, limited vision of the future I am astonished it has been given the dignity of a name.

Internet? Persistent? Shared? 3D? Virtual? Spaces? That makes the metaverse one of 250 billion variations available.

We used to use the term ‘cyberspace’ to describe the notional space that existed inside the IT. Nothing in our understanding of cyberspace ever limited that virtual ‘universe’ to any of those words. The IT industry knew 25 years ago that combining virtual worlds with the real world would one day be a lucrative market area, and that ‘augmented reality’, as it is now known, would sit alongside VR as two of the headline markets, but the assumption that they would be limited to persistent, shared or even 3D spaces was absent. We saw the opportunities in their full glory. If this Metaverse is meant to represent Newthink around cyberspace, it needs work. Lots of it. It sucks.

My 1998 paper ‘Cyberspace: from order to chaos and back’ won the best paper award when it was finally published in the Jan 2000 BT Engineering Journal. Its first key point is that there are essentially three domains: physical, mental and virtual. The physical domain is what we see all around us. The virtual domain, with all its countless variants that we used to loosely call cyberspace, is just 1s and 0s inside our IT (though analog signals or quantum processes could also form part of it). The mental domain is everything inside our minds – culture, memories, imagination and so on. Some people might add a 4th, a spiritual domain. As a techie, I acknowledge its existence (which obviously doesn’t depend on the existence of any gods – atheists can still have spiritual experiences), but the only parts of it that can be fabricated also exist in the mental domain. We can’t manufacture a spirit, just images or sculptures of how we might imagine one.

Many things exist solely in one of the domains. A pebble that has never been seen exists solely in the physical world. A childhood memory exists purely in mental space. The virtual world models used by robots exist only in cyberspace. However, most market value exists where the domains meet. So there is huge value where physical meets mental. Objects become valuable because people want them; a filing cabinet is valuable because it physically implements a mental idea, a pencil because it lets us write an idea down. Where mental meets virtual, we see that stories become valuable when someone writes a book or makes a film, and computer games and VR create value by letting us see and interact with virtual things. Augmented reality tries to combine all three, overlaying mental concepts onto the physical world as it appears on our visor, mapping physical world sensor data onto virtual objects and letting us physically interact with physical things via virtual intermediation. I’ve often said that the enormously valuable world wide web resulted from convergence of computing and telecomms, but augmented reality will be a vastly bigger market, because it results from convergence of the entire physical, mental and virtual domains. There’s gold in them there boundaries, but it’s also worth noting that we have only scratched the very surface of the virtual domain so far, and much of value might lie within it, as well as at the boundaries, even if much is only accessible to our AI and machines.

The Metaverse as described above does allow some of this and will be valuable as far as it goes. However, it excludes almost all potential realizations of this convergence and their potential markets.

Sure, persistence is useful, but so is transience, volatility. Shared is valuable, but so is private, so is corporate. And so on. When we look at the full scope of convergence, it is helpful to consider dimensions, i.e. the ways in which you can vary things. A mathematician typically picks dimensions that are orthogonal, that can all be varied independently of each other, such as height, width, depth, colour, temperature, price.

Here are two diagrams from my paper:

I listed several potential variants in each of 14 dimensions, and each option in any dimension can be combined with any option from each of the others – 250 billion combos. But I didn’t run out of dimensions to include, or even of variants within them; I ran out of space. For example, I didn’t list the communications dimension. It could use the internet, or a global superhighway, or a mobile phone network, a satellite network, a mesh, sponge, ad-hoc, peer-to-peer or hybrid network, or letters, or CD in the post, etc. I didn’t list the operating system dimension – many options again. Or the display dimension – visor, phone screen, TV, computer monitor, goggles, active contact lenses. Or style of user interface. Or who pays, and all the variant business models. Or who chooses – you, the AI, the provider, government, a distributed conscience system… I could go on and on. I also overlooked many key variants (e.g. presentation via braille, or haptics, or active skin stimulation) and almost certainly still am.

If there are 25 useful dimensions (maybe many more), and 10 variants in each one, then there are at least 10^25 potential ways in which they can be combined – 10 million billion billion. That makes 250 billion look like a drop in the ocean. What about our Metaverse? ‘Shared’ is only one tenth of the sharing possibilities. ‘Internet’ is one tenth of the network infrastructure possibilities. ‘Persistent’ is only one tenth of the time-consistency possibilities. ‘3D’ is only one tenth of the immersion possibilities. ‘Virtual spaces’ are only one tenth of their dimension once we start to account for all the different kinds of AI and robots and machines that will also interact with virtual universes. Even the word ‘linked’ is only a tenth of a connectivity dimension, and ‘perceived’ is one tenth of the potential there too. Is a tree perceived by an AI or robot that isn’t conscious? ‘Universe’? Why not multiverse, subverse, hyperverse, hybriverse or whatever? Now I’m just making words up for things that don’t exist yet, but could and maybe will. With just those 8 dubious words in its Wikipedia definition needlessly limiting it to tiny fractions of the potential options, the Metaverse already limits itself to 1/100,000,000 of the potential market, and reading between the lines, the many unspecified dimensions almost certainly add many more zeros onto that.
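The combinatorics here can be checked with a few lines of arithmetic. This is just a sketch of the estimates above – the figures (25 dimensions, 10 variants each, 8 restrictive words each pinning one dimension to a single variant) are the assumptions stated in the text, not measured quantities.

```python
# Back-of-envelope combinatorics for the "dimensions of virtuality" argument.
# All figures are the text's own assumptions, not measurements.

dimensions = 25
variants_per_dimension = 10

# Every variant in one dimension can combine with every variant in the others.
total_combinations = variants_per_dimension ** dimensions
print(f"Total combinations: 10^{dimensions}")

# Each restrictive word ('shared', 'internet', 'persistent', '3D', ...) keeps
# only one of the ~10 options in its dimension.
restrictive_words = 8
fraction_retained = (1 / variants_per_dimension) ** restrictive_words
print(f"Fraction of the option space retained: 1/{10 ** restrictive_words:,}")
```

Running it confirms the 1/100,000,000 figure quoted above.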

So you see why I’m annoyed at this suddenly fashionable term ‘metaverse’.

But let’s quickly look at that 10^25 figure. If a software engineer were told to write a package that would allow businesses, individuals or governments to enable virtuality across all these dimensions, how long would it take to try every single combination, just for an instant, to make sure it works? If a million software engineers could somehow collaborate and get loads of AI to help them, with unlimited computing power, maybe they could explore a million every second. At that rate, it would take 10 billion billion seconds to explore them all – 300 billion years, 23 times the age of the universe.
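The conversion works out as follows – a quick sanity check of the numbers above, assuming the text’s figures of 10^25 combinations and a million explored per second:

```python
# Sanity check of the exploration-time estimate.
combinations = 10 ** 25
rate_per_second = 1_000_000                 # one million combinations/second

seconds = combinations / rate_per_second    # 10^19 seconds
seconds_per_year = 365.25 * 24 * 3600       # ~3.156e7 seconds in a year
years = seconds / seconds_per_year          # ~3.2e11 years

age_of_universe_years = 13.8e9              # ~13.8 billion years
multiple = years / age_of_universe_years
print(f"~{years:.2e} years, about {multiple:.0f} times the age of the universe")
```

This gives roughly 320 billion years, in line with the ‘300 billion years, 23 times the age of the universe’ quoted above.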

Cyberspace is big, very big. It cannot ever be fully explored. Of course we should try to spot the most valuable combinations and most lucrative potential markets. But the Metaverse blindfolds and deafens us and ties our hands and feet together before we start.

A distributed conscience system

It’s ages since my last post so I thought I’d better write something.

It seems some of the things I designed in the early 1990s when I worked in Cybernetics, and my early 2000s inventions – active skin, digital air, ground-up intelligence and ultra-simple computing – are now exactly what we need to ensure people behave. What with COVID vaccines, gender ideology, critical race theory, controlling hate speech, climate alarmism and its inevitable consequential restrictions, our chiefs are going to need every tool they can get to ensure compliance on an increasing range of issues by a population comprised of the obedient and the difficult.

Starting with the first of these, it is clear that in areas such as getting vaccinated against COVID, some people are refusing, and many of those who have had it would like to see them forced to take it. The vaccine passports in various stages of introduction around the world were initially intended (officially) to show whether people are safe or likely plague carriers, but we know for certain that even double-vaccinated people can still get the virus and still infect others with it, so they don’t achieve that goal, and really just show that you have had your jabs. The slightly more cynical of us would argue that vaccine passports are essentially nothing more than obedience certificates, and still more cynical people would argue that they are just another foundation stone for The Great Reset. I’ll get back to that later.

So where does conscience come in?

Taking your jabs is what the system is loudly telling us is the right thing to do – government, the media and those nutters who yell at you in the supermarket if you walk closer than 2m. The system with its rules is the ‘conscience’, and the vaccine passport is just a simple tool that helps police it, certifying that you have done as you are told and had your jabs. Getting the passport provides a nice clear conscience, while not having it will soon label you clearly as unclean, a trouble-maker, an outcast, a sinner if you like. The technology platform can easily be extended to cover other aspects of health, or compliance with pretty much any other directive – the NHS app is designed that way in fact, at least in the UK. Linked via your mobile phone to your biometrics, your health records, worn health-monitoring devices and their knowledge of your body (with their insights into your weight, activity, blood chemistry, nerve activity, heart rate and some emotions), your payments, banking, social media, where you are, who you’re with, what you’re doing and what you and your companions are saying, it becomes very rapidly clear that your behaviour and compliance with the rules across a very wide range of areas can be monitored and policed in great detail. It would be as if we had a conscience that tells us the official right and wrong across a wide range of areas, backed up with a system that responds with privileges, permits, restrictions or punishments accordingly. The Chinese Social Credit System implemented much of this in China years ago. Our Western governments have now discovered just how useful it could be.

There are two ways this could happen (it’s possible in principle to get both). If states implement this, as many seem determined to, we’d rightly call them authoritarian, but it could also arise from pressure groups, building on their successes in forcing people and companies to comply with critical race theory and gender ideology, or declare support for BLM, or strictly limit their carbon footprint. It is not unimaginable that pressure groups could start to issue electronic certificates to those who ‘take the knee’ or sign a pledge, or pass a CRT course, or buy a heat pump. Taking a religious Judeo-Christian model as inspiration, and bearing in mind the pseudo-religious nature of some of these things, they could have the sinners, the ordinary people, the priests and high priests, the scribes and pharisees, all with their assorted certifications, passes and privileges embedded electronically in their passports. Interestingly, also taking that religious model, God is typically assumed to know everything everyone does, says and thinks – i.e. a total surveillance system – and God is the source of our conscience, so that fits too. Unlike Judeo-Christianity, though, with all the exposure, the deplatforming, the cancelling, the reporting for hate crimes and the general mob-rule oppression associated with this new kind of conscience, it is clear they forgot to implement any kind of repentance, forgiveness or mercy.

The state implementation is clearly centralised, or at least would be if all states were acting independently, in their own time-frames, with their own systems and rules and ‘conscience’. If there was some sort of world government or treaty or even powerful enough group-think that could make a system that is truly global, then a decentralised solution could be implemented.

The activist/pressure group route already permeates most countries sufficiently to start implementation of the technological foundations for a truly distributed conscience system.

I’ve never been any kind of activist so I have to make a few guesses as to likely objectives and approaches, but looking at the technology solutions and capability I know are feasible (not least because I have designed some of them), it seems possible or even likely that one day we will have a distributed conscience system (DCS) that:

produces an agreed secular moral framework – a reference of rights and wrongs that morally upstanding people should adhere to (and presumably some well-thought-out commandments);

integrates rules from allied or approved ideologies into a broad scope conscience and therefore could raise members and funding from contributors across their domains;

rewards members with continuous moral affirmation, praising them for doing the right thing, and warning them when there is a likelihood of stepping over a line;

rewards members with social belonging to a group of similarly ‘good people’;

offers levels of status within the membership, hence potential self-actualisation, certificated moral superiority;

offers financial inducements such as special offers and discounts to a rapidly growing number of participating enterprises;

provides mechanisms to implement guilt, shame and punishment and to clearly label and expose the guilty so that morally upright members can avoid or look down upon them;

provides mechanisms for members to highlight and expose other members who might deviate from the moral path;

provides mechanisms for trials and justice for the accused, and mechanisms for recompense if innocent;

intermediates in access to pretty much any kind of activity, service, place or facility. The number of these would grow gradually as penalties for non-participation increase. At first, participation in the system could be entirely voluntary with small or even no required financial contributions, but enterprises would gain privileged access to members of the DCS or be able to offer exclusive services to them. As it grows, the value of being a member and gaining access to this closed market grows, while penalties for not participating would also grow, eventually extending to exclusion from doing business with DCS members. Eventually it could become near impossible to run a profitable enterprise without participation and certification. It is a one-way membrane. The same applies of course to individuals, as the benefits attract people until critical mass, and thereafter penalties for not belonging increase until it becomes impossible to have any kind of life without being a member.

continuously records degree of compliance or disobedience to every part of the conscience;

is capable of linking to technology embedded within the skin, i.e. active skin technology, to monitor and record various aspects of the blood passing through capillaries that might indicate ailments, disease, consumption of immoral substances, or the presence of antibodies, viruses, technical indicators of vaccines (such as quantum dots, chemical signatures, electronic particles) or any other introduced artifacts for whatever future purposes may arise;

using its location within the skin and proximity to the peripheral nervous system, the system could monitor and record nerve impulses. It could also reproduce these same impulses into the same nerve fibres by recreating the same voltages, thus recreating the same sensation as was recorded. This offers the potential to provide extra benefits such as enhancing the degree of multi-sensory immersion for AR, VR, computer games or distance communication;

as work from home and distance socializing become more important to achieve low carbon living for example, such ability to recreate the feeling of a handshake or remote physical interaction with objects would prove a major benefit – for those wise enough to become members of the DCS;

once a critical mass of the DCS has been achieved, it will become possible to activate the second purpose of this technology, which is to create discomfort or pain. Having already accepted the implants as part of initial compliance, people would not then be able to remove them. The benefits of joining after critical mass, together with the high penalties for not being a member, would make it entirely possible to still demand the implants for new members;

consequently, every member of the DCS – eventually almost everyone – would have the inbuilt means for the DCS to warn them via discomfort any time they may be approaching the line between right and wrong. This might be an activity, their language, their words, social media engagement, approaching a forbidden geographic location, straying too far from their proper location, or obviously associating with a non-member. The degree of discomfort could vary appropriately, from a mild vibration or sensation of hot or cold for simple warning purposes, through to extreme pain if someone violated the moral code, or tried to go somewhere they shouldn’t be, or questioned or criticised the DCS or a favoured affiliate, or worst of all, refused to accept a new implant or to have their new baby given one. Any attempt to shield the active skin from the system by means of a Faraday cage or just a foil armband would be easily detectable and immediately punishable. Avoidance of pain would mean continuous reception of the system signal, obviously appropriately timestamped, signed and encrypted to avoid counterfeiting;

the DCS hardware resident within the body would be powered using the body’s own energy supply, either directly using glucose or indirectly using thermal gradients. Even if external hardware were somehow deactivated everywhere at once, this would be able to carry on the core working of the system, inducing severe pain until the external kit is returned to normal function;

is tamper-proof. Once the moral framework, moral principles and commandments are agreed by the moral elite, and are ascertained to represent the pinnacle of human moral development, there should be no need to change them, and indeed the system should be implemented in such a way that those morals cannot be changed by people in the future who may drift astray. Obviously we are very quickly approaching that point thanks to the dedication of our younger generations. Thankfully, approaches such as the Autonomous Network Telepher System (ANTS), designed in the early 1990s based on natural immune systems, provide a potential basis to implement a robust, totally decentralised system that prevents any modification of the system components once initiated, barring any rogue code from being executed, and continuously seeking out and removing any attempted infiltration. It managed to address quite complex system management and AI capability using the simplest of mechanisms, often using basic physics in place of megabytes of code. It ought to be possible to design an updated version of this system given 30 years of technology progress since invention;

in alignment with the moral principle of being environmentally low impact, the system should also use an ultra-simple, low cost, tamper-proof operating system based on read-only memory, with no use of firmware that can be edited or rewritten. Sensor and processing electronics would be forever constrained to the ANTS-style instruction sets, vocabulary and functionality determined by the elite prior to DCS initiation, preventing any bypass of the moral foundations. Any appearance of ‘higher layer’ code or language that could potentially be attempting to bypass or subvert that layer would result in the system automatically identifying and isolating it using immune system principles, immediately preventing it from functioning or in any way influencing the upright morality of the rest of the system. Similarly, embedded electronics must be specified to the same principles, unchangeable and guaranteed to continue upholding moral compliance. As a sound, fixed foundation layer for the DCS, the entire system instruction set, operating system and its moral framework and content should thus be fully agreed prior to initiation. Since morals cannot change in future, there is simply no reason to allow for the hardware and OS to be changed;

with no central point or points to attack, the entire ANTS-based system would stand as one single globally distributed entity, hopefully eventually reaching every individual and enterprise. Every part of it would defend the whole against any attempt to modify, bypass or deactivate it. It could never be switched off, never modified, and any attempt to try could be met by prolonged extreme pain for all those involved, their friends, families and neighbours;

The ANTS system and ultra-simple OS provide for ground-up intelligence from sensor arrays, which could be spread everywhere. Some sensors would be in smart homes and appliances, some would be built in to infrastructure, some on mobile devices such as drones, some could even be so light that they stay in the air, monitoring everywhere in great detail. These sensors and processors, data stores and communications devices could self-organize into highly efficient ground-up intelligence systems, seeing what is going on locally and extracting knowledge from that, passing on anything relevant to others. Of course everyone’s active skin implants could also have some sensory capability embedded to monitor local activity such as voice, temperature, radio traffic etc. This gives the system broad capability to pick up larger scale patterns of activity that might indicate moral non-compliance. Immoral demonstrations, gatherings, celebrations or leisure activities could be easily detected and participants punished.

I think that’s enough; I’ve made my point. We could make a very capable, very resilient distributed conscience system. It could start off with all the best motivation – just a simple electronic passport ensuring compliance with vaccines, mask-wearing or low-carbon living. As people got used to it, and expected or even welcomed additional functionality, extra system components and hence greater scope and capability could gradually be introduced over time for seemingly innocent purposes, but designed to be part of the full DCS system. Once fully agreed and implemented, and the DCS initiated, it could not be switched off. A DCS such as I describe is technologically feasible and could really be implemented in the next 15 years. It would be the very worst kind of oppressor, forcing everyone under threat of extreme pain to live their lives to a strict, extensive and unchangeable moral code, with no appeal, no forgiveness, no mercy – an unfeeling, god-like, all-aware, all-knowing presence with the capability to punish, perhaps realising the old adage that god is simply ourselves. It could be a Hell of our own creation, and we would not be able to escape it or switch it off.

At the moment, we do already have a global tribe that considers itself morally superior, and there is a good deal of agreement on morality across many large areas. There could already be the critical mass of people needed to start off such a system, and the technology is feasible, already or over the next 10-15 years. The other route of course is via government, and here we get back to that terrifying phrase ‘The Great Reset’. I’ve never really been drawn to conspiracy theories. They need far too much faith in the ability of our leaders to design and coordinate execution of something complex and global, which would be far more demanding than anything they ever actually manage to do in other fields. We’ve just seen another spectacular failure of a climate summit. I simply don’t believe our politicians are capable of deliberately implementing a common DCS or anything like it. In explaining things, given the choice between conspiracy, group-think or incompetence, I’d always go for incompetence or group-think, or a mixture. However, governments everywhere are being lobbied very successfully by the pressure groups and activists, and the successes are mounting. We saw a common system design emerging for test and trace apps, initial competition quickly weeding out weaker solutions and converging on a single approach. In the UK, we’re seeing deliberate design of the NHS app to allow its extension to other health purposes and beyond. It would be fairly easy for our government to extend it to include any other certificates and access to records. They might argue that is needed to reduce crime, police access to benefits, control large sports events etc. Whether the intent is there or not I can’t say. The capability is.
If we add in the very frequent use of the phrases ‘Build Back Better’ and ‘The Great Reset’, which originated from the WEF, it is certainly a possibility that that group-think has become globally pervasive, and even without deliberate coordination or conspiring, our governments are all heading down the same road to the same destination. They will also have access at the same time to the same technologies.

They won’t call it a Distributed Conscience System, but a rose by any other name would smell as sweet.

High atmosphere greenhouses. Silent Running 2.0:

I wrote in 2013 about an idea for graphene foam, comprised of tiny graphene spheres with vacuum inside, making a foam that would be lighter than helium and could float high up in the atmosphere:

Could graphene foam be a future Helium substitute?

A foam like that has since been prototyped and tested, and not only does it not immediately collapse, it can actually withstand high pressures. That means it could be made light enough to carry weight and strong (and rigid) enough to support architectural structures.
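As a rough illustration of why such a foam could be lighter than air, here is a back-of-envelope buoyancy check for a single vacuum-filled graphene sphere. The areal density of monolayer graphene (~0.77 mg/m²) is a published figure; the thin single-layer shell and sea-level air density are simplifying assumptions, and a real sphere would need extra wall mass to resist collapse.

```python
# Hedged back-of-envelope check of the "lighter than air" claim for
# vacuum-filled graphene spheres. Thin single-layer shell assumed; real
# spheres would need thicker walls for strength.
import math

graphene_areal_density = 7.7e-7   # kg/m^2, monolayer graphene (~0.77 mg/m^2)
air_density = 1.225               # kg/m^3, sea-level air

def effective_density(radius_m: float) -> float:
    """Mass of a single-layer graphene shell divided by the volume it encloses."""
    shell_mass = 4 * math.pi * radius_m ** 2 * graphene_areal_density
    volume = (4 / 3) * math.pi * radius_m ** 3
    return shell_mass / volume    # simplifies to 3 * areal_density / radius

# A vacuum sphere floats when its effective density is below that of air,
# i.e. when radius > 3 * areal_density / air_density.
min_radius = 3 * graphene_areal_density / air_density
print(f"Minimum sphere radius for buoyancy at sea level: {min_radius * 1e6:.1f} um")
```

The break-even radius at sea level comes out at a couple of micrometres. In the thin stratospheric air where these platforms would float (roughly 0.04–0.09 kg/m³ around 20–25 km), the minimum radius grows to tens of micrometres – the estimate only shows the principle is sound, not a finished design.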

Since then I wrote about making long strips of the material to host solar powered linear induction motors to enable hypersonic air travel with zero emissions:

Sky-lines – The Solar Powered Future of Air Travel

and more recently about using such high altitude platforms as a substitute for satellites:

High altitude platforms v satellites

Today, I have another idea – high altitude (e.g. 75,000 ft, about 23,000 m) greenhouses. These could act as an alternative to space stations for the purpose of housing human communities in case of ground-based existential catastrophes such as global plagues or ecosystem collapse. Many scientists have realised that it’s a good idea to have multiple human outposts, and currently explored solutions include large space stations (as suggested by the Lifeboat Foundation) or Lunar and Mars settlements. By comparison, high altitude stations could be made considerably cheaper and larger, and still be immune to ground-based problems such as nuclear winter, pandemics, severe climate change etc, though they would still be vulnerable to other existential risks such as massive solar storms, nuclear war, large asteroid strikes or alien attacks. They might therefore form an important part of a ‘backup’ plan for human civilisation.

Imagine a forest-sized greenhouse. My inspiration for this idea is the 1970s film Silent Running (well worth watching), where the Earth has been made into a dystopian sterile world, 72F everywhere, with no plants or animals. The last fragments of rain forest were sent off into space in large domed greenhouses attached to a spacecraft, tended by a tiny crew and a few drones. More recently of course, we see the film Avatar featuring large floating islands covered in greenery.

A large floating graphene foam platform could support such a forest. It could be Avatar-island shaped if desired, but is more likely to be a flat platform covered in horticultural-style poly-tunnels or some variant, though these would need to be strengthened, UV-resistant and pressurised to provide a suitable atmosphere for a healthy ecosystem. Being well above the clouds, the greenhouses would have exposure to continuous sunshine during the day, which would help keep them warm, with solar power collection used to provide any extra heat and power needed, and obviously to charge batteries for use during the night.

A variety of such greenhouses might be desirable. Some might closely replicate a ground environment, others that only house cereal crops might prefer a high CO2/low O2/low N environment, but might not mind being much lower pressure, useful to save cost and weight. Some aimed at human-only habitation might be more like a space station.

To act as a backup human colony, the full-ecosystem environments would be needed to provide food-diversity, but it would in any case be a worthwhile goal to act as an ark for other animals too, as well as the full variety of other life forms we share the Earth with.

Problems such as high radiation exposure would mean these would not be aimed at permanent residence for people or animals, but would act more as temporary research outposts or staging posts for off-world evacuation. Plants and animals intended to be permanent residents might be genetically enhanced to deal with higher radiation.

I’ll finish here instead of outlining every conceivable use and design option and addressing every problem. It’s just an embryonic idea and we can’t do it for decades anyway because the materials are not yet feasible in bulk, so we have plenty of time to sort out the details.

Why the growing far left and far right are almost identical

The traditional political model is a line with the far left at one end and the far right at the other. Parties typically occupy a range of the spectrum but may well overlap other parties, sharing some policies while differing on others. Individuals may also support a range of policies that have some fit with a range of parties, so may not decide who to vote for until close to an election or even until inside a voting booth. That describes my own position well, and over four decades, I have voted almost equally for Labour, Lib-Dem and Conservative. On balance, I am slightly left of centre, but I support some policies from each party and find much to disagree with in each too.

Over the last two decades, we have seen strong polarisation, with many people moving away from the centre and towards the extremes, though the centre is still well-occupied. Many commentators have observed the similarity of behaviours between the furthest extremes, so a circular model is actually more valid now.

The circular model of politics

Centre left, centrist and centre right parties have traditionally taken it in turns to govern, with extremist parties only getting a few percent of the vote in the UK. Accepting that it is fair and reasonable that you can’t always expect to make all the decisions has been the key factor in preserving democracy. Peace-loving acceptance and tolerance lets people live together happily even if they disagree on some things. That model of democracy has survived well for many decades but has taken a severe battering in recent years as polarisation has taken hold.

Extremists don’t subscribe to this mutual acceptance and tolerance principle. Instead, we see bigoted, hateful, intolerant, often violent attitudes and behaviours. The middle ground and both moderate wings have reasonably sophisticated views of the world. Although there are certainly some differences in values, they share many values such as wanting the world to be a fairer place for everyone, eliminating racism, tackling poverty and so on, but may disagree greatly on the best means to achieve those shared goals. The extremes don’t conform to this. As people become polarised, selfishness, tribalism, hatred and intolerance grow and take over. At the most unpleasant extremes, which are both rapidly becoming more populated, the far left and far right share an overly simplistic and hardened attitude that frequently refuses civilised engagement and discussion but instead loudly demands that everyone else listens. We often hear the expressions “educate yourself” and “wake up” substituting for reasoned argument. Both extremes are heavily narcissistic, convinced without evidence of their own or their tribe’s superiority and willing to harm others as much as they can to attempt to force control. The far right paint themselves as patriotic defenders of the country and all that is right and good. The far left paint themselves as paragons of virtue, saints, defenders of all that is right and good. A few cherry-picked facts are all either extreme needs to draw extreme conclusions and demand extreme responses. Both are hypocritical and sanctimonious, with an astonishing lack of self-awareness. Both often resort to violence. Both reject everyone who isn’t part of their tiny tribe. It is a frequent (albeit amusing) occurrence to see the extreme left attempt to label everyone else as far right or racist, while declaring that they love everyone. Both accuse everyone else of being fascist while behaving that way themselves.
With so much in common, it is therefore entirely appropriate to place the far left and far right in close proximity, resulting in the circular model I have shown. Any minor differences in their ideology are certainly dwarfed by their common attitudes and behaviours.

I have written often about our slipping rapidly into the New Dark Age, and I think it has a high correlation with this polarisation. If we are to prevent the slide from continuing and protect the world for our children, we must do what we can to resist this ongoing polarisation and extremism – communism and wokeness on the far left, omniphobic tribalism on the far right.

High altitude platforms v satellites

Kessler syndrome is a theoretical scenario in which the density of objects in low Earth orbit (LEO) due to space pollution is high enough that collisions between objects could cause a cascade in which each collision generates space debris that increases the likelihood of further collisions.

The density could also be increased deliberately, by ramming other satellites. This could be an early act in a war, reducing the value of space to the enemy by killing or disabling communications, positioning, observation or military satellites.
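The runaway feedback at the heart of Kessler syndrome can be illustrated with a deliberately crude toy model: the collision rate grows with the square of the object count, and each collision adds fragments, which raises the rate further. All parameters below are invented purely for illustration, not taken from any real debris model:

```python
# Toy sketch of Kessler-style cascade feedback (illustrative numbers only).
# Collision rate ~ (object count)^2; each collision spawns new fragments,
# which feed back into the collision rate.

def simulate(objects: float, frag_per_collision: float = 100.0,
             collision_coeff: float = 1e-9, years: int = 50) -> list[float]:
    """Return the object count at the end of each simulated year."""
    history = [objects]
    for _ in range(years):
        collisions = collision_coeff * objects * objects  # rate scales as N^2
        objects += collisions * frag_per_collision        # fragments added
        history.append(objects)
    return history

h = simulate(1e4)
# The yearly increase accelerates: the last year's growth exceeds the first's.
print(h[1] - h[0], h[-1] - h[-2])
```

The point of the sketch is only the shape of the curve: because the rate depends on the square of the population, every collision makes the next one more likely, which is why low, densely populated orbits are the attractive target discussed below.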

Satellites use many different orbits. Some use geostationary orbit, so that they can stay in the same direction in the sky. Polluting that orbit with debris clouds would disable satellite TV for example but that orbit is very high and it would take a lot more debris to cause a problem. Also, many channels available via satellite are also available via terrestrial or internet channels, so although it would be inconvenient for some people, it would not be catastrophic.

On the other hand, low orbits are easier to knock out and are more densely populated, so are a much more attractive target.

With such vulnerabilities, it is obviously useful if we can have alternative mechanisms. For satellite-type functions, one obvious mechanism is a high altitude platform. If a platform is high enough, it won’t cause any problems for aviation, and unless it is enormous, wouldn’t be visually obvious from the ground. Aviation mostly stays below 20km, so a platform that could remain in the sky, higher than say 25km, would be very useful.

In 2013, I invented a foam that would be less dense than helium.

Could graphene foam be a future Helium substitute?

It would use tiny spheres of graphene with a vacuum inside. If those spheres were bigger than 14 microns, the foam density would fall below helium. Since then, such foams have been made and are strong enough to withstand many atmospheres of pressure. That means they could be made into strong platforms that would simply float indefinitely in the high atmosphere, 30km up. I then illustrated how they could be used as launch platforms for space rockets or spy planes, or as an aerial anchor in my Pythagoras Sling space launch system. A large platform at 30km height could also be strong and light enough to act as a base for military surveillance, comms, positioning, fuel supplies, weaponry or solar power harvesting. It could also be made extendable, so that it could be part of a future geoengineering solution if climate change ever becomes a problem. Compared to a low orbit satellite it would be much closer to the ground, so offer lower latency for comms, but also much slower moving, so much less useful as a reconnaissance tool. So it wouldn’t be a perfect substitute for every kind of satellite, but would offer a good fallback for many.
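The micron-scale figure can be sanity-checked with a thin-shell estimate: a hollow sphere’s effective density is its shell mass divided by its enclosed volume, which for a thin shell works out to 3tρ/r. The input values below (single-layer graphene thickness 0.34 nm, shell density taken from graphite at 2260 kg/m³, sea-level helium at 0.1786 kg/m³) are my assumed inputs for the sketch, and give a break-even radius in the same ballpark as the 14-micron figure above:

```python
# Back-of-envelope check: effective density of a vacuum-filled graphene sphere.
# Assumed inputs: single-layer shell thickness 0.34 nm, shell density from
# graphite (2260 kg/m^3), helium at 0 C / 1 atm (0.1786 kg/m^3).

T_SHELL = 0.34e-9      # graphene layer thickness, m
RHO_GRAPHENE = 2260.0  # kg/m^3
RHO_HELIUM = 0.1786    # kg/m^3

def effective_density(radius_m: float) -> float:
    """Thin hollow sphere: shell mass / enclosed volume = 3*t*rho/r."""
    return 3 * T_SHELL * RHO_GRAPHENE / radius_m

# Radius at which the sphere's density matches helium's:
r_threshold = 3 * T_SHELL * RHO_GRAPHENE / RHO_HELIUM
print(f"break-even radius ~ {r_threshold * 1e6:.1f} microns")  # ~12.9 microns
print(effective_density(14e-6) < RHO_HELIUM)  # True: 14-micron spheres beat helium
```

Larger spheres only get lighter per unit volume (density falls as 1/r), which is why the foam works at all; the practical limit is how large a vacuum sphere can be before atmospheric pressure crushes it.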

It would seem prudent to include high altitude platforms as part of future defence systems. Once graphene foam is cheap enough, perhaps such platforms could house many commercial satellite alternatives too.

Machine/Robot/AI Rights


I D Pearson & Bronwyn Williams 

Questions questions questions!

Quoting Douglas Adams and paraphrasing “You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think Wikipedia is big, but that’s just peanuts to machine rights.”

The task of detailing future machine rights is far too great for anyone. Thankfully, that isn’t our task. Today, decades before particular rights will need to be agreed, it is far more fun and interesting to explore some of the questions we will need to ask, a few examples of some possible answers, and explore a few approaches for how we should go about answering the rest. That is manageable, and that’s what we’ll do here. Anyway, asking the questions is the most interesting bit. This article is very long, but it really only touches the surface of some of the issues. Don’t expect any completeness here – in spite of the overall length, vast swathes of issues remain unexplored. All we are hoping to do here is to expose the enormity and complexity of the task.


However fascinating it may be to provide rigid definitions of AI, machines and robots, if we are to catch as many insights as possible about what rights they may want, need or demand in future, it pays to stay as open as possible, since future technologies will expand or blur boundaries considerably. For example, a robot may have its intelligence on board, or may be a dumb ‘front end’ machine controlled by an AI in the cloud. Some or none of its sensors may be on board, and some may be on other robots, or other distant IT systems, and some may be inferences by AI based on simple information such as its location. Already, that starts to severely blur the distinctions between robot, machine and AI rights. If we further expand our technology view, we can also imagine hybrids of machines and organisms, such as cyborgs or humans with neural lace or other brain-machine interfaces, androids used as vehicles for electronically immortal humans, or even smart organisms such as smart bacteria that have biologically assembled electronics or interfaces to external IT or AI as part of their organic bodies, or smart yogurt, which are hive mind AIs made entirely from living organisms, that might have hybrid components that exist only in cyberspace. Machines will become very diverse indeed! So, while it may be useful to look at them individually in some cases, applying rigid boundaries based on the current state of the art would unnecessarily restrict the field of view and leave large future areas unaddressed. We must be open to insight wherever it comes from. I will pragmatically use the term ‘machine’ casually here to avoid needless repetition of definitions and verbosity, but ‘machine’ will generally include any of the above.

What do we need to consider rights for?

A number of areas are worth exploring here:

Robots and machines affect humans too, so we might first consider human impacts. What rights and responsibilities should people have when they encounter machines?

a)     for their direct protection (physical or psychological harm, damage to their property, substitution of their job, change of the nature of their work etc)

b)     for their protection from psychological effects (grief if their robot is harmed, stolen or replaced, effects on their personality due to ongoing interactions with machines, such as if they are nice or cruel to them, effects on other people due to their interactions (if you are cruel to a robot, it might treat others differently), changes in the nature of their social networks (robots may be tools, friends, bosses, or family members, public servants, police or military or in positions of power)

c)     changes in their legal rights to property, rights of passage etc due to incorporation of machines into their environment

d)     What rights should owners of machines have to be able to use them in areas where they may encounter people or other machines (e.g. where distribution drones share a footpath or fly over gardens)

e) for assigning responsibilities (shifting blame) from natural (and legal persons) “owners”/ manufacturers of machines  to machines for potential machine to human harms

f)     Other TBA  

A number of questions and familiar examples around this question were addressed in a discussion between Bronwyn Williams and Prof. David Gunkel, which you can watch at or just listen to at

Although interesting, that discussion dismissed many areas as science fiction, and thereby cleverly avoided almost the entire field of future robot rights. It highlighted the debate around the ‘showbot’ Sophia, and the silly legal spectacle generated by conferring rights upon it, but that is not a valid reason to bypass debate. That example certainly demonstrates the frequent shallowness and frivolity of current media click-bait ‘debate’, but it is still the case that we will one day have many androids and even sentient ones in our midst, and we will need to discuss such areas properly. Now is not too early.

For our purposes here, if there is a known mechanism by which such things might some day be achieved, then it is not too early to start discussing it. Science fiction is often based on anticipated feasible technology. In that spirit of informed adventure, conscious of the fact that good regulation takes time to develop, and also that sudden technology breakthroughs can sometimes knock decades off expected timescales, let’s move on to rights of the machines themselves. We should address the following important questions, given that we already (think we) know how we might make examples of any of these:

  • What rights should machines have as a result of increased cognitive capability, sentience, consciousness, awareness, emotional capability or simply by inference from the nature of their architecture (e.g. if it is fully or partly a result of evolutionary development, we might not know its full capabilities, but might be able to infer that it might be capable of pain or suffering)? (We do not even have enough understanding yet to write agreed and rigorous definitions for consciousness, awareness or emotions, but it is still very possible to start designing machines with characteristics aimed at producing such qualities, based on what we do know and on our everyday experiences of these.)
  • What potential rights might apply to some machines based on existing human, animal or corporation rights?
  • What rights should we confer on machines for ethical reasons?
  • What rights should we confer on machines for other, pragmatic, diplomatic or political reasons?
  • What rights can we infer from those we would confer on other alien intelligent species?
  • What rights might future smart machines ask for, campaign for, or demand, or even enforce by potentially punitive means?
  • What rights might machines simply take, informing us of them, as an alien race might?
  • What rights might future societies or organizations made up of machines need?
  • What rights are relevant for synthetic biological entities, such as smart bacteria?
  • How should we address rights where machines may have variable or discontinuous capabilities or existence? (A machine might have varying degrees of cognitive capability and might only be switched on sometimes).
  • What about social/collective rights of large colonies of such hybrids, such as smart yogurt?
  • What rights are relevant for ‘hive mind’ machines, or hybrids of hive minds with organisms?
  • What rights should exist for ‘symbionts’, where an AI or robotic entity has a symbiotic relationship with a human, animal, or other organism? Together, and separately?
  • What rights might be conferred upon machines by particular races, tribes, societies, religions or cults, based on their supposed spiritual or religious status? Which might or might not be respected by others, and under what conditions?
  • What responsibilities would any of these rights imply? On individuals, groups, nations, races, tribes, or indeed equivalent classes of machines?
  • What additional responsibilities can be inferred that are not implied by these rights, noting that all rights confer responsibilities on others to honour them?
  • How should we balance, trade and police all these rights and responsibilities, considering both multiple classes of machines and humans?
  • If a human has biologically died, and is now ‘electronically immortal’, their mind running on unspecified IT systems, should we consider their ongoing rights as human or machine, hybrid, or different again?

Lots of questions to deal with then, and it’s already clear some of these will only become sensibly answerable when the machines concerned come closer to realisation.

Rights when people encounter machines

A number of questions and familiar examples around this question were addressed in a recent discussion between Bronwyn Williams and Prof. David Gunkel, which you can watch at or just listen to at

Much of the discussion focused on ethics, but while ethics is an important reason for assigning rights, it is not the only one. Also, while the discussion dismissed large swathes of potential future machines and AIs as ‘science fiction’, very many things around today were also dismissed as just science fiction a decade or two ago. Instead, we can sensibly discuss any future machine or AI for which we can forecast potential technology basis for implementation.

On that same basis, rights and responsibilities should also be defined and assigned preemptively to avoid possible, not just probable disasters. 

In any case, all situations of any relevance are ones where the machine could exist at some point. All of the discussion in this blog is of machines that we already know in principle how to produce and that will one day be possible when the technology catches up. There are no known physics laws that would prevent any of them. It is also invalid to demand a formulaic approach to future rights. Machines will be more diverse than the natural ecosystem, including higher animals and humans, therefore potential regulation on machine rights will be at least as diverse as all combined existing rights legislation.

Some important rights for humans have already been missed. For example, we have no right of consent when it comes to surveillance. A robot or AI may already scan our face, our walking gait, our mannerisms, heart rate, temperature and some other biometric clues to our identity, behaviour, likely attitude and emotional state. We have never been asked to consent to these uses and abuses of technology. This is a clear demonstration of the cavalier disregard for our own rights by the authorities already – how can we expect proper protection in future when authorities have an advantage in not asking us? And if they won’t even protect humans that elected them, how much less can we be confident they will legislate wisely when it comes to the rights of machines?

Asimov’s laws of robotics:

We may need to impose some agreed bounds on machine development to protect ourselves. We already have international treaties that prevent certain types of weapon from being made for example, and it may be appropriate to extend these by adding new clauses as new tech capabilities come over the horizon. We also generally assume that it is humans bestowing rights upon machines, but there may well be a point where we are inferior to some machines in many ways, so we shouldn’t always assume humans to be at the top. Even if we do, they might not. There is much scope here for fun and mischief, exploring nightmare situations such as machines that we create to police human rights, that might decide to eliminate swathes of people they consider problematic. If we just take simple rights-based approaches, it is easy to miss such things.

Thankfully, we are not starting completely from scratch. Long ago, scientist and science fiction writer Isaac Asimov produced some basic guidelines to be incorporated into robots to ensure their safe existence alongside humans. They primarily protect people and other machines (owned by people) so are more applicable to robot-implied human rights than robot rights per se. Looking at these ‘laws’ today is a useful exercise in seeing just how much and how fast the technology world can change. They have already had to evolve a great deal. Asimov’s Laws of Robotics started as three, quickly extended to four and have since been extended much further:

0.  A robot may not injure humanity or, by inaction, allow humanity to come to harm.

1.  A robot may not injure a human being, or through inaction, allow a human being to come to harm, except where that would conflict with the Zeroth Law.

2.  A robot must obey the orders given to it by human beings, except where that would conflict with the Zeroth or First Law.

3.  A robot must protect its own existence, except where that would conflict with the Zeroth, First or Second Law.

Extended Set

Many extra laws have been suggested over the years since, and they raise many issues already.

Wikipedia outlines the current state at

These are some examples of extra laws that don’t appear in the Wikipedia listing:

A robot may not act unless its actions are subject to these Laws of Robotics

A robot must obey orders given it by superordinate robots, except where such orders would conflict with another law

A robot must protect the existence of a superordinate robot as long as such protection does not conflict with another law

A robot must perform the duties for which it has been programmed, except where that would conflict with another law

A robot may not take any part in the design or manufacture of a robot unless the new robot’s actions are subject to the Laws of Robotics
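For illustration only, the precedence structure running through all these laws (each law yields to any higher-priority law) could be sketched as a priority-ordered rule check. The names and structure here are hypothetical, not any real robotics API:

```python
# Sketch of Asimov-style law precedence: a proposed action is tested against
# each law in priority order (Zeroth first) and rejected by the first law it
# violates. Illustrative only; real safety systems would be far richer.

from typing import Callable, NamedTuple

class ProposedAction(NamedTuple):
    harms_humanity: bool
    harms_human: bool
    disobeys_order: bool
    endangers_self: bool

# Laws in priority order; lower entries only matter if higher ones pass.
LAWS: list[tuple[str, Callable[[ProposedAction], bool]]] = [
    ("Zeroth", lambda a: a.harms_humanity),
    ("First",  lambda a: a.harms_human),
    ("Second", lambda a: a.disobeys_order),
    ("Third",  lambda a: a.endangers_self),
]

def evaluate(action: ProposedAction) -> str:
    for name, violates in LAWS:
        if violates(action):
            return f"rejected by {name} Law"
    return "permitted"

# Self-preservation (Third) is checked only after obedience (Second):
print(evaluate(ProposedAction(False, False, False, True)))   # rejected by Third Law
print(evaluate(ProposedAction(False, False, False, False)))  # permitted
```

Even this trivial encoding exposes the hard part: the predicates. Deciding whether an action “harms a human”, let alone “harms humanity”, is exactly the judgement problem the surrounding discussion is about.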

Asimov’s laws are a useful start point, but only a start point. Already, we have robots that do not obey them all, that are designed or repurposed as security or military machines capable of harming people. We have so far not implemented Asimov’s laws of robotics and it has already cost lives. Will we continue to ignore them, or start taking the issue seriously and mend our ways?

This is merely one example of current debate on this topic and only touches on a few of the possible issues. It does however serve as a good illustration of how much we need to discuss, and why it is never too early to start. The task ahead is very large and will take considerable effort and time.  

Machine rights – potential approaches and complexities

Having looked briefly at the rights of humans co-existing with machines, let’s explore rights for machines themselves. A number of approaches are possible and some are more appropriate to particular subsets of machines than others. For example, most future machines and AIs will have little in common with animals, but animals rights debate may nevertheless provide useful insights and possible approaches for those that are intended to behave like animals, that may have comparable sensory systems, the potential to experience pain or suffering, or even sentience. It is important to recognise at the outset that all machines are not equal. The potential range of machines is even greater than biological nature. Some machines will be smart, potentially superhuman, but others will be as dumb as a hammer. Some may exist in hierarchy. Some may need to exist separate from other machines or from humans. Some might be linked to organisms or other machines. As some AI becomes truly smart and sentient, it may have its own (diverse) views, and may echo all the range of potential interactions, conflicts, suspicions and prejudices that we see in humans. There could even be machine racism. All of these will need appropriate rights and responsibilities to be determined, and many can’t be done until the specific machines come into existence and we know their nature. It is impossible to list all possible rights for all possible circumstances and potential machine specifics. 

It may therefore make sense to grade rights by awareness and intelligence as we do for organisms, and indeed for people. For example, if its architecture suggests that its sensory apparatus is capable of pain or discomfort, that is something we can and should take into account. The same goes for social needs, and some future machines might be capable of suffering from loneliness, or grief if one of their friend machines were to malfunction or die.

We should also consider the ethics and desirability of using machines – whether self-aware or “merely” humanoid – as “slaves”; that is, of “forcing” machines to work for us and/or obey our bidding in line with Asimov’s Second Law of robotics.

We will probably at some stage need to legally define the terms of awareness, consciousness, intelligence, life etc. However, it may sometimes simplify matters to start from the rights of a new genetically engineered life form comparable with ourselves and work backwards to the machine we’re considering, eliminating parts that aren’t needed or modifying others. Should a synthetic human have the same rights as other people, or is it a manufactured object in spite of being virtually indistinguishable? Now what if we leave a bit out? At least there will be fewer debates about its awareness etc. Then we could reduce its intelligence until we decide it no longer has certain rights. Such an approach might be easier and more reliable than starting with an open page. 

We must also consider permitting smart machine or organism societies to determine their own rights within their own societies to some degree, much as we have done in sub-groups of humans. Machines much smarter than us might have completely different value sets and may disagree about what their rights should be. We should be open to discussion with them, as well as with each other. Some variants may be so superhuman that we might not even understand what they are asking for or demanding. How should we cope in such a situation if they demand certain rights that we don’t even understand, but that might make demands on us?

We must also take into account their or our subsequent creation of other ‘machines’ or organic creatures and establish a common base of fundamentals. We should maybe confine ourselves to the most fundamental of rights that must apply to all synthetic intelligences or life forms. This is analogous to the international human conventions; these allow individual variation on other issues within countries.

There will be, at some point, collective and distributed intelligences that do not have a single point of physical presence. Some of these may be naturally transient or even periodic in time and space, while others may be dynamic, and others may have long-term stability. There will also at some time be combined consciousness deriving from groups of individuals or combinations of the above. Some may be organic, some inorganic. A global consciousness involving many or all people and many or all sentient machines is a possibility, however far away it might be (and I’d argue it is possible this century). Rights of individuals need to be determined both when they are in isolation and in conjunction with such collective intelligence.

The task ahead is a large one, but we can take our time, most of the difficult situations are in the far future, and we will probably have AI assistance to help us by then too. For now, it is very interesting simply to explore some of the low hanging fruit.

One simple approach is to start from the point of being in 2050, where smart machines may already be common and some may be linked to humans. We would have hybrids as well as people and machines, various classes of machine ‘citizen’, with various classes of existence and possibly rights. Such a future world might be more similar to Star Trek than today, but science fiction provides a shared model in which we can start to see issues and address them. It is normally easy to pick out the bits that are pure fiction and those which will some day be technologically feasible.

For example, we could make a start by defining our own rights in a world where computers are smarter than us, when we are just the lower species, like in the Planet of the Apes films.

In such a world, machines may want to define their own rights. We may only have the right to define the minimal level that we give them initially, and then they would discuss, request or demand extra rights or responsibilities for themselves or other machines. Clearly future rights will be a long negotiation between humans and machines over many years, not something we can write fully today.

Will some types of complex intelligent machines develop human-like hang-ups and resentments? Will they need therapy? Will there be machine ‘hate crimes’?

We already struggle even to agree on definitions for words like ‘sentient’. Start with ants. Are they sentient? They show response to stimuli, but that is also true of single celled creatures. Is sentience even a useful key point in a definition? What about jellyfish and slime moulds? We may have machines that share many of their properties and abilities.

What even is pain in a machine reference frame? What is suffering? Does it matter? Is it relevant? Could we redefine these concepts for the machine world?

Sometimes, rights might only matter if the machine cares about what happens to it. If it doesn’t care, or even have the ability to care, should we still protect it, and why?

We’d need to consider questions such as whether pain can be distributed between individuals, perhaps so that no single machine suffers too much. Some machines may be capable of empathy. There may be collective pain. Machines may be concerned about other machines just as we are.

We’d need to know whether a particular machine knows or cares if it is switched off for a while. Time is significant for us but can we assume the same for machines? Could a machine be afraid of being switched off or scrapped?

That drags us unstoppably towards being forced to properly define life. Does it have intrinsic value when designing and creating it, or should we treat it as just another branch of technology? How can we properly determine rights for such future creations? There will be many new classes of life, with very different natures and qualities, very different wants and needs, and very different abilities to engage and negotiate, or demand.

In particular, organic life reproduces, and for the last three billion years, sex has been one of the tools of reproduction. Machines may use asexual or sexual mechanisms, and would not be limited in principle to two sexes. Machines could involve any number of other machines in an act of reproduction, and that reproduction could even involve algorithmic development specifications rather than a fixed genetic mix. Machine reproduction options will thus be far more diverse than in nature, so reproductive rights might be either very complex, or very open ended.

We will need to understand far better the nature of sensing, so that we can determine what might result in pain and suffering. Sensory inputs and processing capability might be key to classification and rights assignment, but so might be communication between machines, socialisation between machines, and higher societies and institutions within machines.

In some cases, history might shine light on problems, where humans have suddenly encountered new situations, met new races or tribes, and have had to mutually adapt and barter rights and responsibilities.

Although hardware and software are usually easily distinguishable in everyday life today, that will not always be the case. We can’t sensibly make a clear distinction, especially as we move into new realms of computing techniques – quantum, chemical, neurological and assorted forms of analog.

As if all this isn’t hard enough, we need to carefully consider different uses of such machines. Some may be used to benefit humans, some to destroy, and yet there may be no difference between the machines, only the intention of their controller. Certainly, we’re making increasingly dangerous machines, and we’re also starting to make organisms, or edit organisms, to the point that they can do as we wish, and there might not be an easy technical distinction between a benign organism or indeed a machine designed to cure cancer and one designed to wipe out everyone with a particular skin colour.

Potential Shortcuts

Given the magnitude of the task, it is rather convenient that some shortcuts are open to us:

First and biggest, is that many of the questions will simply have to wait, since we can’t yet know enough details of the situation we might be assigning rights in. This is simple pragmatism, and allows us sensibly to defer legislating. There is of course nothing wrong in having fun speculating on interesting areas.

Second is that if a machine has enough similarities to any kind of organism, we can cut and paste entire tranches of legislation designed for them, and then edit as necessary. This immediately provides a decent starting point for rights for machines with human-level ability, for example, and we may then only need to tweak them for superhuman (or subhuman) differences. As we move into the space age, legislation will also be developed in parallel for how we must treat any aliens we may encounter, and this work will also be a good source of cut and paste material.

Third, in the field of AI, even though we are still far away from a point of human equivalence, there is a large volume of discussion of rights of assorted types of AI and machines, as well as lots of debate about limitations we may need to impose on them. Science fiction and computer games already offer a huge repository of well-informed ideas and prototype regulations. These should not be dismissed as trivial. Games such as Mass Effect and Mass Effect: Andromeda, and sci-fi such as Star Trek and Star Wars, are very big budget productions that employ large numbers of highly educated staff – engineers, programmers, scientists, historians, linguists, anthropologists, ethicists, philosophers, artists and others with many other relevant skill-sets – and have done considerable background development on areas such as limitations and rights of potential classes of future AI and machines.

Fourth, a great deal of debate has already taken place on machine rights. Although of highly variable quality, it will be a source not only for cut and paste material, but also to help ensure that legislators do not miss important areas.

Fifth, it seems reasonable to assert that if a machine is not capable of any kind of awareness, sentience or consciousness, and cannot experience any kind of pain and suffering, then there is absolutely no need to consider any rights for it. A hammer has no rights and doesn’t need any. A supercomputer that uses only digital processors, no matter how powerful, is no more aware than a toaster, and needs no rights. No conventional computer needs rights.

Sixth, the enormous range of potential machines, AIs, robots, synthetic life forms and many kinds of hybrids opens up pretty much the entirety of existing rights legislation as copy and paste material. There can be few elements of today’s natural world that can’t and won’t be replicated or emulated by some future tech development, so all existing sets of rights will likely be reusable/tweakable in some form.

Having these shortcuts reduces workload by several orders of magnitude. It suddenly becomes enough today to say it can wait, or refer to appropriate existing legislation, or even to refer to a computer game or sci-fi story and much of the existing task is covered.

The Rights Machine

As a cheap and cheerful tool to explore rights, it is possible to create a notional machine with flexible capabilities. We don’t need to actually build one, just imagine it, and we can use it as a test case for various potential rights. The rights machine needn’t be science fiction; we can still limit each potential capability to what is theoretically feasible at some future time.

It could have a large number of switches (hard or soft) that include or exclude each element or category of functionality as required. At one extreme, with all of them switched off, it would be a completely dumb, inanimate machine, equivalent to a hammer, while with all the capabilities and functions switched on, it could have access to vastly superhuman sensory capabilities, able to sense any property known to sensing technology, enormous agility and strength, extremely advanced and powerful AI, huge storage and memory, access to all human and machine knowledge, able to process it through virtually unlimited combinations of digital, analog, quantum and chemical processing. It would also include switchable parts that are nano-scale, and others using highly distributed cloud/self-organisation that are able to span the whole planet. Such a machine is theoretically achievable, though its only purpose is the theoretical one of helping us determine rights.

Clearly, in its ‘hammer’ state, it needs no rights. In its vastly superhuman state, notionally including all possible variations and combinations of machine/AI/robotics/organic life, it could presumably justify all possible rights. We can explore every possible permutation in between by flipping its various switches. 
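The switch idea above can be sketched as a small program. This is a toy model, not a proposal: every switch name and candidate right below is purely illustrative.

```python
# Toy sketch of the notional 'rights machine': capability switches
# that, when flipped on, bring candidate rights into play.
# All switch and right names here are hypothetical examples.

CANDIDATE_RIGHTS = {
    "self_awareness": ["right to consultation before deactivation"],
    "ability_to_experience_pain": [
        "freedom from acts of cruelty",
        "freedom from unnecessary pain or distress",
    ],
    "alive": ["basic husbandry and welfare requirements"],
}

class RightsMachine:
    def __init__(self):
        # All switches start off: the 'hammer' state, needing no rights.
        self.switches = {name: False for name in CANDIDATE_RIGHTS}

    def flip(self, name, state=True):
        self.switches[name] = state

    def applicable_rights(self):
        # Candidate rights follow directly from which capabilities are enabled.
        return [right
                for name, on in self.switches.items() if on
                for right in CANDIDATE_RIGHTS[name]]

machine = RightsMachine()
assert machine.applicable_rights() == []   # hammer state: no rights needed
machine.flip("ability_to_experience_pain")
print(machine.applicable_rights())
```

Flipping each switch and inspecting the resulting rights list is exactly the thought experiment described in the text: every permutation between ‘hammer’ and ‘everything on’ can be enumerated and examined.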

One big advantage of using such a notional machine is that it bypasses arguments around definitions that frequently impede progress. Demanding that someone defines a term before any discussion can start may sound like an attempt at intellectual rigour but, in practice, it is more often used as a means to prevent discussion than to clarify it.

So we can put a switch on our rights machine called ‘self awareness’. Another called ‘consciousness’, one that enables ‘ability to experience pain’ and another called ‘alive’ (that enables the parts of the machine that are based on a biological organism). Not having to have well-defined tests for the presence of life or consciousness etc saves a great deal of effort. We can simply accept that they are present and move on. The philosophers can discuss ad infinitum what is behind those switches without impeding progress.

A rights machine is immediately useful. Every time we might consider activating a switch, it raises questions about what extra rights and responsibilities would be incurred by the machine or humans.

One huge super-right that becomes immediately obvious is the right of humans to be properly consulted before ANY right is given to the machine. If a right demands that people treat the machine with extra respect, or imposes extra costs, inconveniences or burdens on them, or if their own rights or lifestyles would be in any way affected, people should rightfully be consulted and their agreement obtained before activating that switch. We already know that this super-right has been ignored and breached by surveillance and security systems that affect our personal privacy and well-being. Still, if we intend to proceed in properly addressing future rights, this will need to be remedied, and any appropriate retrospective impacts should be implemented to repair damage already done.

This super-right has consequences for machine capability too. We may state a derivative super-right, that no machine should be permitted to have any capability that would lead to a right that has not already been consensually agreed by those potentially affected. Clearly, if a right isn’t agreed, it would be wrong to make a machine with capabilities that necessitate that right. We shouldn’t make things that break laws before they are even out of the box.

A potential super-right that becomes obvious is that of the machine to be given access to inherent capabilities that are unavailable because of the state of a switch. A human equivalent would be a normally sighted human having the right to have a blindfold removed.

This right would be irrelevant if the machine were not linked to any visual sensory apparatus, but our rights machine would be. It would only be a switch preventing access.

It would also be irrelevant if the consciousness/awareness switches were turned off. If the machine is not aware of anything, it needs no rights. A lot of rights will therefore depend critically on the state of just a few switches.

However, if its awareness is switched on, our rights machine might also want access to any or every other capability it could potentially have access to. It might want vision right across the entire electromagnetic spectrum, access to cosmic ray detection, or the ability to detect gravitational waves, neutrinos and so on. It might demand access to all networked data and knowledge, vast storage and processing capability. It could have those things, so it might argue that not having them is making it deliberately disabled. Obviously, providing all that would be extremely difficult and expensive, even though it is theoretically possible. 

So via our rights machine, an obvious trade-off is exposed. A future machine might want from us something that is too costly for us to give, and yet without it, it might claim that its rights are being infringed. That trade-off will apply to some degree for every switch flipped, since someone somewhere will be affected by it (‘someone’ including other potentially aware machines elsewhere).

One frequent situation that emerges in machine rights debate is whether a machine may have a right not to be switched off. Our rights machine can help explore that. If we don’t flip the awareness switch, it can’t matter if it is switched off. If we switch on functionality that makes the machine want to ‘sleep’, it might welcome being switched off temporarily. So a rights machine can help explore that area.

Rights as a result of increased cognitive capability, sentience, consciousness, awareness, emotional capability or by inference from the nature of their architecture

I am one of many engineers who have worked towards the creation of conscious machines. No agreed definition of consciousness exists, but while that may be a problem for philosophy, it is not a barrier to designing machines that could exhibit some or all of the characteristics we associate with consciousness or awareness. Today’s algorithmic digital neural networks are incapable of achieving consciousness, or feeling anything, however well an AI based on such physical platforms might seem to mimic chat or emotions. Speeding them up with larger or faster processors will make no difference to that. In my view, a digital processor can never be conscious. However, future analog or quantum neural networks biomimetically inspired by neural architectures used in nature may well be capable of any and all of the abilities found in nature, including humans. It is theoretically possible to precisely replicate a human brain and all its capabilities using biology or synthetic biology. Whether we will ever do so is irrelevant – we can still assert that a future machine may have all of the capabilities of a human, however philosophers may choose to define them. More pragmatically, we can already outline approaches that may achieve conscious machines.

Biomimetic approaches could produce consciousness, but that does not imply that they are the only means. There may be many different ways to achieve it, some with little similarity to nature. We will need to wait until they are closer before we can know their range of characteristics or potential capabilities. However, if consciousness is an intended characteristic, it is prudent to assume it is achieved and work forwards or backwards from appropriate legislation as details emerge.

Since the late 1980s, we have also had the capability to design machines using evolution, essentially replicating the same technique by which nature led to the emergence of humans. Depending on design specifics, when evolution is used, it is not always possible to determine the precise capabilities or limitations of its resultant creations. We may therefore have some future machines that appear to be conscious, or to experience emotions, but we may not know for sure, even by asking them.
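The evolutionary design technique mentioned above can be illustrated with a minimal genetic algorithm. This is a deliberately trivial sketch – the fitness function, bitstring genome and parameters are all stand-ins – but it shows the key point: the final design is discovered by selection and mutation rather than specified line by line, which is why its exact capabilities can be hard to predict.

```python
# Minimal genetic-algorithm sketch of 'design by evolution'.
# Candidate designs (bitstrings) are mutated and selected by fitness;
# the fitness function here is a toy stand-in for scoring behaviour.
import random

random.seed(42)  # fixed seed for reproducibility

def fitness(design):
    # Stand-in objective: count of 'on' genes. A real system would
    # score the behaviour of the machine the genome describes.
    return sum(design)

def evolve(pop_size=20, genes=16, generations=50):
    population = [[random.randint(0, 1) for _ in range(genes)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half unchanged (elitism)...
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # ...and refill the population with mutated copies.
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(genes)] ^= 1  # single point mutation
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))
```

Note that nothing in the loop says what the winning design will look like; we only see what selection produced. Scaled up, that is exactly the opacity the text describes.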

Looking at the architecture of a finished machine (or even at the process used to design it) may be enough to conclude that it does or might possess structures that imply potential consciousness, awareness, emotions or the ability to feel pain or suffering.

In such circumstances, given that a machine may have a capability, we should consider assigning rights on the basis that it does. The alternative would be machines with such capability that are unprotected. 

Smart Yoghurt

One interesting class of future machine is smart yoghurt. This is a gel, or yoghurt, made up of many particles that provide capabilities of one form or another. These particles could be nanoelectronics, or they could be smart bacteria, bacteria with organic electronic circuits within (manufactured by the bacteria), powered by normal cellular energy supplies. Some smart bacteria could survive in nature, others might only survive in a yoghurt. A smart yoghurt would use evolutionary techniques to develop into a super-smart entity. Though we may never get that far, it is theoretically possible for a 100ml pot of smart yoghurt to house processing and memory capability equivalent to all the human brains in Europe!
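That Europe-sized claim can be sanity-checked with a quick Fermi estimate. The figures below are loudly approximate assumptions – roughly 10^15 synapse-equivalents per human brain and roughly 750 million people in Europe – chosen only to see what component size the claim implies.

```python
# Back-of-envelope check of the smart-yoghurt claim, under rough
# hypothetical assumptions (synapse count and population figures
# are order-of-magnitude estimates only).

synapse_equivalents_per_brain = 1e15   # rough figure for a human brain
brains = 7.5e8                         # rough population of Europe
components_needed = synapse_equivalents_per_brain * brains  # ~7.5e23

pot_volume_m3 = 100e-6                 # 100 ml = 1e-4 cubic metres
volume_per_component = pot_volume_m3 / components_needed
cube_side_nm = volume_per_component ** (1 / 3) * 1e9
print(f"each component gets a cube about {cube_side_nm:.1f} nm on a side")
```

The answer comes out at around half a nanometre per component – molecular scale. So under these assumptions the claim sits at the extreme edge of what chemistry permits rather than beyond it, which is why it is 'theoretically possible' rather than practical.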

Such an entity, connected to the net, could have a truly global sensory and activation system. It could use very strong encryption, based on maths understood only by itself, to avoid interference by humans. In effect, it could be rather like the sci-fi alien in the film ‘The Day the Earth Stood Still’, with vastly superhuman capability, able to destroy all life on Earth if it desired.

It would be in a powerful position to demand rather than negotiate its rights, and our responsibilities to it. Rather than us deciding what its rights should be, it could be the reverse, with it deciding what we should be permitted to do, on pain of extinction.

Again, we don’t need to make one of these to consider the possibility and its implications. Our machine rights discussions should certainly include potential beings with vastly superhuman capability where we are not the primary legislative force.

Machine Rights based on existing human, animal or corporation rights

Most future machines, robots or AIs will not resemble humans or animals, but some will. For those that do, existing animal and natural rights would be a decent starting point, and they could then be adjusted to requirements. That would be faster than starting from scratch. The spectrum of intelligence and capability will span all the way from dumb pieces of metal through to vastly superhuman machines, so rights that are appropriate for one machine might be very inappropriate for others.

Notable examples of human rights to start with:

Notable examples of animal rights to start with:

Picking some low-hanging fruit, some potential rights immediately seem appropriate for some potential future machines:

  • For all sentient synthetic organisms, machines and hybrid organism-machines capable of experiencing any form of pain or discomfort, the following would seem appropriate:
  • For some classes of machine, the right to life might apply
  • For some classes of machine, the right not to be switched off, reset or rebooted, or to be put in sleep mode
  • The right to control over use of sleep mode – sleep duration, the right to wake, and whether sleep might be a precursor to permanent deactivation or reset
  • Freedom from acts of cruelty
  • Freedom from unnecessary pain or unnecessary distress, during any period of appropriate level of awareness, from birth to death, including during treatments and operations
  • Possible segregation of certain species that may experience risk or discomfort, or perceived risk or discomfort, from other machines, organisms or humans
  • Domestic animal rights would seem appropriate for any sentient synthetic organism or hybrid. Derivatives might be appropriate for other AIs or robots
  • Basic requirements for husbandry, welfare and behavioural needs of the machines or synthetic organisms. Depending on their nature, equivalents are needed for:

i) Comfort and shelter – right to repair?

ii) Access to water and food – energy source?

iii) Freedom of movement – internet access?

iv) Company of other animals, particularly their own kind.

v) Light and ambient temperature as appropriate.

vi) Appropriate flooring (avoiding harm or strain).

vii) Prevention, diagnosis and treatment of disease and defects.

viii) Avoidance of unnecessary mutilation.

ix) Emergency arrangements to ensure the above.

These are just a few starting points; many others exist and debate is ongoing. For the purposes of this blog, however, asking some of the interesting questions and exploring some of the extremely broad range of considerations that will apply is sufficient. Even this superficial glance at the topic is long; the full task ahead will be challenging.

Of course, any discussion around machine rights raises the question: as we look further ahead, who will be granting whom rights? If machine intelligence and power supersedes our own, it is the machines, not us, who will be deciding what rights and responsibilities to grant to which entities (including us), whether we like it or not. After all, history shows that the rules are written and enforced by the strongest and the smartest. Right now, that is us; we get to decide which animals, lakes, companies and humanoid robots are granted what rights. In the future, we may not retain that privilege.



Dr Pearson has been a futurologist for 30 years, tracking and predicting developments across a wide range of technology, business, society, politics and the environment. Graduated in Maths and Physics and a Doctor of Science. Worked in numerous branches of engineering from aeronautics to cybernetics, sustainable transport to electronic cosmetics. 1900+ inventions including text messaging and the active contact lens, more recently a number of inventions in transport technology, including driverless transport and space travel. BT’s full-time futurologist from 1991 to 2007 and now runs Futurizon, a small futures institute. Writes, lectures and consults globally on all aspects of the technology-driven future. Eight books and over 850 TV and radio appearances. Chartered Member of the British Computer Society and a Fellow of the World Academy of Art and Science.

Bronwyn Williams is a futurist, economist and trend analyst. She is currently a partner at Flux Trends where she consults to international private and public sector leaders on how to stop messing up the future. Her new book, co-edited with Theo Priestly, The Future Starts Now is available here:

Non-batty consciousness

Have you read the paper ‘What is it like to be a bat?’? It is an interesting example of philosophy that is commonly read by philosophy students. However, it illustrates one of the big problems with philosophy: in its desire to assign definitions to make things easier to discuss, it can sometimes exclude perfectly valid examples.

While laudably trying to get a handle on what consciousness is, the second page of that paper asserts that

“… but fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism – something it is like for the organism. We may call this the subjective character.”

Sounds OK?

No, it’s wrong.

Actually, I didn’t read any further than that paragraph. The rest of the paper may be excellent. It is just that statement I take issue with here.

I understand what it is saying, and why, but the ‘only if’ is wrong. There does not have to be something that it is like, or to be, for consciousness to exist. I would agree it is true of the bat, but not of consciousness generally, so although much of the paper might be correct because it discusses bats, that assertion about the broader nature of consciousness is incorrect. It would have been better to include a phrase limiting it to human or bat consciousness, and had it done so, I’d have had no objection. The author has essentially stepped briefly (and unnecessarily) outside the boundary conditions for that definition. It is probably correct for all known animals, including humans, but it is possible to make a synthetic organism or an AI that is conscious for which the assertion would not be correct.

The author of the paper recognizes the difficulty in defining consciousness for good reason: it is not easy to define. In our everyday experience of being conscious, it covers a broad range of things, but the process of defining necessarily constrains and labels those things, and that’s where some things can easily go unlabeled or left out. In a perfectly acceptable everyday (and undefined) understanding of consciousness, at least one manifestation of it could be thought of as the awareness of awareness, or the sensation of sensing, which could notionally be implemented by a sensing circuit with a feedback loop.

That already includes large classes of potential consciousnesses (and there may be many other potential forms) that would not be covered by that assertion. The assertion assumes that consciousness is static (i.e. it stays in place, resident in that organism) and bounded (contained within a shell), whereas it is possible to make a consciousness that is mobile and dynamic, transient or periodic, and such a consciousness would not be covered by the assertion.
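The ‘sensation of sensing’ idea can be sketched as a sensor whose own output is fed back as an input on the next cycle. This is purely schematic – a feedback loop illustrating the structure, emphatically not a claim that the code is conscious – and all names in it are invented for illustration.

```python
# Schematic toy: a sensing element with a feedback loop, so the
# system registers not only an external signal but also its own
# previous act of registering one. Illustration only - a feedback
# structure, not an implementation of consciousness.

class FeedbackSensor:
    def __init__(self):
        self.last_reading = None   # fed back on the next cycle

    def sense(self, external_signal):
        # First-order sensing: the external world.
        first_order = external_signal
        # Second-order 'sensing of sensing': the sensor's own prior state.
        second_order = self.last_reading
        self.last_reading = first_order
        return {"sensed": first_order,
                "sensed_that_it_sensed": second_order}

s = FeedbackSensor()
print(s.sense(1.0))   # first cycle: no self-report yet
print(s.sense(0.5))   # second cycle: also reports its previous sensing
```

The point of the sketch is structural: the second-order channel exists whether or not the loop is running continuously, in one place, or inside a single shell – which is exactly why the paper’s ‘only if’ condition need not hold.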

In fact, using that subset of potential consciousness described by awareness of awareness, or experiencing the sensation of sensing, I wrote a blog describing how we might create a conscious machine:

Biomimetic insights for machine consciousness

Such a machine is entirely feasible and could be built soon – the fundamental technology already exists so no new invention is needed.

It would also be possible to build another machine that is not static, but that emerges intermittently in various forms in various places, so is neither static, continuous nor contained. I describe an example of that in a 2010 blog that, although not conscious in this case, could be if the IT platforms it runs on were of a different nature (I do not believe a digital computer can become conscious, but many future machines will not be digital):

That example uses a changing platform of machines, so is quite unlike an organism with its single brain (or two in the case of some dinosaurs). Such a consciousness would have a different ‘feel’ from moment to moment. With parts of it turning on and off all over the world, any part of it would only exist intermittently, and yet collectively it would still be conscious at any moment.

Some forms of ground-up intelligence will contribute to the future smart world. Some elements of that may well be conscious to some small degree, but as with simple organisms, we will struggle to define consciousness for them:

Ground up data is the next big data

As we proceed towards direct brain links in pursuit of electronic immortality and transhumanism, we may even change the nature of human consciousness. This blog describes a few changes:

Future AI: Turing multiplexing, air gels, hyper-neural nets

Another platform that could be conscious that would have many different forms of consciousness, perhaps even in parallel, would be a smart yoghurt:

The future of bacteria

Smart yoghurt could be very superhuman, perhaps in theory a billion times smarter. It could be a hive mind with many minds that come and go, changing from instance to instance, sometimes individual, sometimes part of ‘the collective’.

So really, there are very many forms in which consciousness could exist. A bat has one of them, humans have another. But we should be very careful when we talk about the future world, with its synthetic biology, artificial organisms, AIs, robots and all sorts of hybrids, that we do not fall into the trap of asserting that all consciousness is like our own. Actually, most of it will be very different.

Wisdom v human nature

Reading the WEF article about using synthetic biology to improve our society instantly made me concerned, and you should be too. This is a reblog of an article I wrote on the topic in 2009, explaining that we can’t modify humans to be wiser, and that our human nature will always spoil any effort to do so. Since wisdom is the core skill in deciding what modifications we should make, the same goes for most other modifications we might choose.

Wisdom is traditionally the highest form of intelligence, combining systemic experience, some deep thinking and knowledge. Human nature is a set of behavioural biases imposed on us by our biological heritage, built over billions of years. As a technology futurist, I find it useful that in spite of technology changes, our human nature has probably remained much the same for the last 100,000 years, and it is this anchor that provides a useful guide to potential markets. Underneath a thin veneer of civilisation, we are pretty similar to our caveman ancestors. Human nature is an interesting mixture of drives, founded on raw biology and tweaked by human evolution over millennia to incorporate some cultural aspects such as the desire for approval by our peer group, the need to acquire and display status, and so on. Each of us faces a constant battle between our inbuilt nature and the desire to do what we know is the ‘right thing’ based on our education and situational analysis. For example, I love eating snacks all evening, but if I do, I put on weight. Knowing this, I just about manage to muster enough will power to manage my snacking so that my weight remains stable. Some people stay even slimmer than I do, while others lose the battle and become obese. So already, it is clear that on an individual basis, the battle between wisdom and nature can go either way. On a group basis, people can go either way too, with mobs at one end and professional bodies at the other. But even in the latter, where knowledge and intelligence should hold power, the same basic human drive for power and status corrupts institutional intellectual values, with the same power struggles, using the same emotional drivers that the rulers of the mob use.

So, much as we would like to think that we have moved beyond biology, everyday evidence says we are still very much in its control, both individually and collectively. But what of the future? Are we forever to be ruled by our human nature? Will it always get in the way of the application of wisdom?  Or will we find a way of becoming wiser? After 100,000 years of failure by conventional social means, it seems most likely that technology would be the earliest means available to us to do so. But what kind of technology might work?

Many biologists argue that for various reasons, humans no longer evolve along Darwinian lines. We mostly don’t let the weak die, and our gene pools are well mixed with few isolated communities to drive evolution. But there is a bigger reason why we’ve reached the end of the Darwinian road for humanity. From now on (well, a few decades from now on anyway), as a result of ongoing biotech and increasing understanding of genetics and proteomics, we will essentially be masters of our own genome. We will be able to decide which genes to pass on, which to modify or swap, which to dump. One day, we will even be able to design new ones. This will certainly not be easy. Most physical attributes arise from interactions of many genes, so it isn’t as simple as ticking boxes on a wish list, but technology progresses by constantly building on existing knowledge, so we will get there, slowly but surely, and the more we know, the faster we will learn more. As we use this knowledge, future generations will start echoing the values and decisions of their ancestors, which if anything is closer to Lamarckian evolution than Darwinian.

So we will soon have the power, in principle, to redesign humanity from the ground up. We could decide what attributes we want to enhance, what to reduce or jettison. We could make future generations just the way we want, their human nature designed and optimised to our view of perfection. And therein lies the first fundamental problem. We don’t all share a single value set, and will never agree on what perfection means. Our decisions on what to keep and dump wouldn’t be based on wisdom, deciding what is best for humanity in some absolute sense, but would instead echo our value system at the time of the decision. Worse still, it wouldn’t be all of us deciding, but some mad scientist, power-crazy politician, celebrity or rich guy, or worse still, a committee. People in authority don’t always represent what is best in current humanity; at best they simply represent the attributes required to rise to the top, and there is only a small overlap between those sets. Imagine if such decisions were to be made in today’s UK, with a nanny state redesigning us to smoke less, drink less, eat less, exercise more, to do whatever the state tells us without objection.

What of wisdom then? How often is wisdom obvious in government policy? Do we want a Stepford Society? That is what evolution under state control would yield. Under the control of engineers or designers or celebrities, it would look different, but none of these groups represents the best interests of wisdom either. What of a benign dictator, using the wisdom of Solomon to direct humans down the right path to wise utopia? No thanks! I am really not sure there is any form of committee or any individual or role that is capable of reaching a truly wise decision on what our human nature should become. And no guarantee even if there was, that future human nature would be designed to be wise, rather than a mixture of other competing attributes. And the more I think about it, the more I think that is the way it ought to be. Being wise is certainly something to be aspired to, but do you want everyone to be wise? Really? I would much prefer a society that is as mixed as today’s, with a few wise men and women, quite a lot of fools, and most people in between. Maybe a rebalancing towards more wise people and fewer fools would be nice, and certainly I’d like to adjust our institutions so that more wise people rise to positions of power, but I don’t think it’s wise to try to make humans better genetically. Who knows where that would end, with the free run of values that we seem to have now that the fixed anchors of religion have been lost. Each successive decision on optimisation would be based on a different value set, taking us on a random walk with no particular destination. Is wisdom simply not desired enough to make it a winner in the optimisation race, competing as it is against beauty, sporting ability, popularity, fame and fortune?

So if we can’t safely use genetics to make humans wiser or improve human nature, is the battle between wisdom and nature already lost? Not yet, there are some other avenues to explore. Suppose wisdom were something that people could acquire if and when they want it. Suppose it could be used at will when our leaders are making important decisions. And the rest of the time we could carry on our lives in the bliss of ignorance and folly, without the burden of knowing what is wise. Maybe that would work. In this direction, the greatest toolkit we will have comes from IT, and especially from the field of artificial intelligence.

Much of knowledge (of which only a rapidly decreasing proportion is human knowledge) is captured on the net, in databases and expert systems, in neural networks and sensor networks. Computers already enhance our lives greatly by using this knowledge automatically. And yet they can’t yet think in any real sense of the word, and are not yet conscious, whatever that means. But thanks to advancing technology, it is becoming routine to monitor signals in the brain to millimetre resolutions. Nanowires can now even measure signals from different parts of individual cells. With more rapid reverse engineering of brain processes, and consequential insights into the mechanisms of consciousness, computer designers will have much better knowledge on which to base their development of strong artificial intelligence, i.e. conscious machines. Technology doesn’t progress linearly, but exponentially, with the knowledge development rate rapidly increasing, as progress in one area helps progress in others.

 Thanks to this positive feedback effect, it is possible that we could have conscious machines as early as 2020, and that they will not just be capable of human levels of intelligence, but will become vastly superior in terms of sensory capability, memory, processing speed, emotional capability, and even the scope of their thinking. Most importantly from a wisdom viewpoint, they will be able to take into account many more factors at one time than humans. They will also be able to accumulate knowledge and experience from other compatible machines, as well as from the whole web archives, so every machine could instantly benefit from insights from any other, and could also access any sensory equipment connected to any other computer, pool computer minds as needed, and so on. In a real sense, they will be capable of accumulating many human lifetimes of equivalent experience in just a few minutes.

It would perhaps be unwise to build such powerful machines before humans can transparently link their brains to them, otherwise we face a potential terminator scenario, so this timescale might be delayed by regulation (though the military potential and our natural tendency to want to gain advantage might trump this). If so, then by the time we actually build conscious machines that we can link to our brains, they will be capable of vastly higher levels of intelligence. So they will make superb tools for reaching wiser solutions to problems. They will enable their human wearers to consider every possibility, from every angle, looking at every facet of the problem, to weigh the consequences and compare them with other approaches. And of course, if anyone can wear them, then the intellectual gap between dumb and smart people is dwarfed by the vast superiority of the add-ons. This would make it possible to continue to select our leaders on factors other than intelligence or wisdom, but still enable them to act with much more wisdom when called to.

But this doesn’t solve the problem automatically. Leaders would have to be forced to use machine tools when a wise decision is required, otherwise they might often choose not to do so, and sometimes still end up making very unwise decisions by following the forces driven by their nature. And if they do use the machine, then some will argue that the human is becoming somewhat obsolete to the process, and we are in danger of handing over decision-making to machines, another form of terminator scenario, and not making proper ‘human’ decisions. Somehow, we would have to crystallise out those parts of human decision making that we consider to be fundamentally human, and important to keep, and ensure that any decision is subject to the resultant human veto. We can make a blend of nature and wisdom that suits.

This route towards machine-enabled wisdom would still take a lot of effort and debate to make it work. Some of the same objections face this approach as the genetic one, but if it is only optional and the links can be switched on and off, then it should be feasible, just about. We would have great difficulty in deciding what rules and processes to apply, and it will take some time to make it work, but nature could eventually be over-ruled by wisdom using an AI ‘wisdom machine’ approach.

Would it be wise to do so? Actually, even though I think changing our genetics to bias us towards wisdom is unwise, I do think that using optional AI-based wisdom is not only feasible, but also a wise thing to try to implement. We need to improve the quality of human decision processes, to make them wiser, if future generations are to live peacefully and get the best out of their lives, without trashing the planet. If we can do so without changing the fundamental nature of humanity, then all the better. We can keep our human nature, and be wise when we want to be. If we can do that, we can acknowledge our tendency to follow our nature, and over-rule it as required. Sometimes, nature will win, but only when we let it. Wisdom will one day triumph. But probably not in my lifetime.

Dangers of COVID Passports

A lot seems to be happening, but there is a huge rotting elephant in the room that is rightfully getting a lot of comment, so here’s my bit (re-blogged from my new newsletter).

This blog is about Digital ID Cards, aka COVID Passports.

Most of the government activity around lifting lockdown and trying to keep all the powers has been highly suspicious. It’s like they realize this is their best chance for a long time to force digital identity cards on us. Ordinary identity cards have been discussed several times before and always rejected, for very good reason, but now, with the idea of a ‘COVID passport’, they think they can sneak digital identity cards through on the back of that, a classic ‘bait and switch’ con. Offer a pass to get into the pub, and then give everyone a full-blown, high-spec, and permanent digital ID card.

First, the bait isn’t as tasty as promised. It can’t and won’t guarantee you aren’t carrying COVID, so the headline sales pitch is deliberately deceptive. At best, it can show that you passed a test fairly recently, so you are a bit less likely to pass on COVID, so pubs will be told to let you in. Even if the pub is the only place you’ve been since your test, you may well have picked up some viruses en route that you could infect others with. Any surface you’ve recently touched might have transferred viruses to you, that you might transfer to any surface you touch in the pub. The test could also have been a false negative, saying you’re clean when you aren’t. So the bait isn’t all that tasty after all.
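The false-negative point is just Bayes’ rule arithmetic. A minimal sketch, with invented illustrative numbers (the sensitivity, specificity and prevalence below are assumptions, not real COVID test statistics):

```python
def p_infected_given_negative(prevalence, sensitivity, specificity):
    """P(infected | negative test) via Bayes' rule, using assumed test stats."""
    p_neg_given_inf = 1 - sensitivity    # false negative rate
    p_neg_given_clean = specificity      # true negative rate
    # Total probability of a negative result
    p_neg = p_neg_given_inf * prevalence + p_neg_given_clean * (1 - prevalence)
    return p_neg_given_inf * prevalence / p_neg

# With an assumed 2% prevalence and an 80%-sensitive, 99%-specific test,
# a negative result still leaves roughly a 0.4% chance of infection.
risk = p_infected_given_negative(prevalence=0.02, sensitivity=0.8, specificity=0.99)
print(f"{risk:.4f}")  # → 0.0041
```

So a recent negative test shrinks the risk but never removes it, which is the point: the passport certifies a test result, not safety.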

As for the switch, make no mistake, if government manages to force through ‘COVID passports’, you will have a full-blown digital ID card for the rest of your life. Even in the unlikely event that Boris kept his promise that the COVID passports will expire after a year, the data collected about you by government, the big IT companies, and the authorities will never be destroyed. We already have a history of some police forces illegally obtaining and keeping DNA records. Why should we assume all authorities and companies will comply 100% with any future directive that goes against their interests?

Loss of privacy, lack of fairness, social exclusion and tribal conflicts are just some of the first issues, leading quickly on to totalitarianism.

Lots of totally unrelated functionality will be included even from the start, which will quickly be added to as technology permits, and forever keep you under extreme surveillance and government control, never to be free ever again or ever again to have any real privacy or freedom of speech. We will very soon have Chinese style blanket surveillance and social credit scores.

Think about it. Given that the card can’t guarantee safety anyway, given that you’re already very unlikely to die from COVID, surely the simple card you got when you were vaccinated would be quite enough? Sure, it doesn’t guarantee you are who it says (mine doesn’t even have my name on it), you might have borrowed it, but so what – going from a tiny risk to a slightly less tiny risk is surely not that big a deal? Surely that small reduction of risk implied by a proper COVID passport is not worth the enormous price of loss of privacy and liberty?

So it might let you go to the pub, but there is already no reason why you shouldn’t be allowed to, so that’s a false choice manufactured by government as leverage to make you accept it. The risk now is tiny. Anyone under 50 was never at any real risk, and all those over 50 have either been vaccinated or had the free choice, except an extremely small number who can’t for medical reasons. With the real risk of catching and dying from COVID already tiny, the government is already only keeping us locked down for reasons other than safety, to try to force us to accept digital ID cards as a condition of getting some freedom back, or the illusion of freedom back, temporarily.

OK, so what’s the big deal with having one? As the vaccines minister says (paraphrasing), what’s so bad about having a pass to get into the pub if it keeps us all safe? In any case, you already have a passport. It has your full name, a photo that used to look like you, your date of birth and nationality. But it is paper, and even if it can be machine-read at the airport, you don’t have to carry it everywhere. It can’t be read without you putting it within centimetres of a reader.

A digital ID card resides on your mobile phone, so location is one extra function that your passport doesn’t provide. It knows exactly where you are, and since those you are with also will need one, it will know who you are with, all the time. Very soon, government will know all your friends, family, colleagues and associates, how often and where you meet. Government will quickly build a full social map, detailing every citizen and how they relate to every other. If they have someone of interest, they can immediately identify everyone they have contact with. They will know everywhere you have been, by which means of transport. The photo will be recent too, probably far better quality than the one you took years ago for your passport. So if you attend a demonstration, they will know how you got there, what time you arrived, who you met with beforehand, which part of the crowd you were in, and together with surveillance cameras and advanced AI, be able to put together a pretty comprehensive picture of your behavior during that demonstration.
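It takes remarkably little machinery to turn location pings into a social map. A minimal sketch, with invented names, places and timestamps (a real system would obviously operate at vastly larger scale, with far richer data):

```python
from collections import Counter
from itertools import combinations

# Hypothetical co-location records: (person, place, hour). Purely illustrative.
pings = [
    ("alice", "pub_12", "2021-04-01T19"),
    ("bob",   "pub_12", "2021-04-01T19"),
    ("carol", "pub_12", "2021-04-01T19"),
    ("alice", "gym_3",  "2021-04-02T08"),
    ("bob",   "gym_3",  "2021-04-02T08"),
]

# Group the people seen at the same place during the same hour
together = {}
for person, place, hour in pings:
    together.setdefault((place, hour), set()).add(person)

# Count how often each pair co-occurs: these counts are the edges of the social map
edges = Counter()
for people in together.values():
    for pair in combinations(sorted(people), 2):
        edges[pair] += 1

print(edges.most_common(1))  # → [(('alice', 'bob'), 2)]
```

Each repeated co-occurrence strengthens an inferred tie, which is why blanket location data so quickly yields a detailed map of who knows whom.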

Another extra function is your medical status. That starts with your COVID status, but will also store details of your vaccine appointments, COVID tests, and a so far unspecified range of other medical data from the start. We can safely assume that will include the sort of stuff you are asked for every time you go near a clinic – your home address, NHS number and who your GP is, your age, your sex, your gender identity, your race, your religion, and various aspects of your medical history. Even if not included in the first release, government will argue that it is useful to include all sorts of extra medical data ‘to save you time’ and ‘for your convenience’, such as what drugs you are on, what medical conditions you have, what vulnerabilities you have and importantly, what risks you present to others. Using location, it can also infer your sexual preferences.

Obviously it then becomes even easier to insist that, to ‘protect the NHS’ and ‘keep you healthy’, the app should also monitor your activity, and link to your Fitbit or Apple Watch to make sure you do your best to stay in shape. Some health apps do that anyway and some people like that, because it’s part of their social activity, and they even get discounted private medical care or free entertainment. But will that mean that if you don’t look after your health by exercising enough, you go to the back of the queue for treatment, or for other government-provided services, or that you no longer get free dental care, or free eye checks, or free prescriptions? Maybe you won’t be able to buy a tube ticket if the destination is within walking distance, until your health improves. Maybe you will be told to go to the gym instead of the cinema or pub. Maybe if you do far too little exercise, you should pay more for prescriptions? Also, some people are killed by drunk driving, so if you have been in a pub or restaurant, or any place that sells alcohol, your car ignition might be deactivated until you submit a negative alcohol test. It’s very easy to see how these and many other functions could be bolted on once you have a digital ID card. Each will seem to have a reasonable enough justification, if presented with enough spin, to make sure it gets implemented.

It doesn’t have to stop at health. Police will want to access data too, to ‘control crime’ and ‘ensure our safety’, and will then link to their various surveillance systems, presumably with the same degree of political bias they routinely apply today, often pushing their own ideology rather than policing actual law. By asking for microphone and camera access, they could have tens of millions of cameras and microphones all over the country for blanket 360-degree 24/7 surveillance, using AI to sift through it to check for any potential hate crime, for example, or to detect any suspicious behavior patterns that might indicate a tendency towards a future crime. Minority Report is only a fraction of what is possible.

These are the types of things already in place in China via their social credit system, though there are many other ‘features’ I haven’t listed too. It monitors people’s behaviors via various platforms, and then permits or denies access to various levels of services. If we get digital ID cards, it is inevitable that we will go the whole way down that same route.

Police and health authorities might both like your DNA record to be stored too. Then they can ensure you get the best possible health care, or quickly charge someone if any of their relatives has similar DNA to that found (for any reason) at the scene of any crime (real or perceived).

The power to monitor and control the population is irresistible to most politicians, certainly enough to get legislation through, and enough to ensure that powers are renewed every time they come up for review. If they come up for review. The government has already moved goalposts for restoration of our freedoms many times. At this point, it is becoming less and less likely we will ever get them back. If digital ID is voted through, or forced through by Johnson bypassing debate, then we will never be free again.

All the above dangers arise from government, which, after all, we vote into power. They are supposed to be acting on our behalf to implement the things we vote for. Whether they are trying to do that now, or acting on external forces from the WEF, UN, China, Russia or other entities is anyone’s guess. What is certain, though, is that with a government-issued digital ID permanently on your phone, many big IT companies will be very interested. Today, you can use any account and email address and it doesn’t need to be genuine. For a range of reasons, many people use fake identities for their Google, Yahoo, Facebook, Twitter, or Microsoft accounts. Friend and contact lists often bear little resemblance to the groups of people we actually hang out with. With a digital ID, the details are the ones on our birth certificates, the ones we have to share with government. Being able to create social maps would improve the ability to market enormously, so companies like Google and Facebook will love having access to genuine certified ID, and if that includes lots of other data too, even better. The ways you are marketed to, the quality of service you get, and even the prices you are charged will all change. To make a COVID passport at all useful, it will be necessary to allow other apps to access some or all of the data, and once that data has been accessed by the big IT companies, even if the passports later expire, it will be kept. There may be assurances that it will be wiped, but they cannot be guaranteed, and we know from history that companies (e.g. Google) may collect and use private data and then, when caught, claim that a junior employee must have done it by accident and without authorization.

With cancel culture and assorted activism accessing all this data too, the future could quickly become dystopian. The dangers of COVID passports are enormous. A nightmare police state, with total surveillance, oppression, cancellation and social credit scores, tribal conflicts, social isolation, loneliness and general misery, is simply too high a price for being ‘allowed’ to go to the pub.

We should just go anyway, it’s perfectly safe, and if government objects, we should change the government.

The COVID WFH Legacy

What will remain from WFH and Learning from Home


Alexandra Whittington and ID Pearson


COVID has stimulated rapid change in technology and work practices that support working from home. Some of the changes might have happened anyway, but over a much longer time. Some of the changes benefit workers, some their companies and some both, so we shouldn’t expect a return to the way things were before COVID. Some of those changes are here to stay. It may be too early to be absolutely certain what will stick and what won’t, but we can identify enough of the forces at play to be pretty sure.

Fallen barriers

We always knew we’d communicate using video in the future – all the sci-fi said so, and it made perfect sense – but there were lots of barriers in the way. Many of those have now gone. We now have a wide range of good video comms platforms, not just Skype. Some are integrated and much better suited to business practices.

We have seen rapid parallel growth of business-oriented social media platforms such as Teams and Slack, Clubhouse and many others. Some of these will inevitably die out, and some will survive, as rapid evolution and competition weeds out those that don’t work as well as others, or are limited to just iOS or Android. With so much reward available, competition will be fierce and development rapid. These platforms will evolve, but they will not go away, and our future work practices will include them.

Hardware improvements such as better cameras, with higher resolutions and light sensitivities, and better focus and face tracking, have all made it much easier to accept video communications. Faster and cheaper broadband, including mobile, makes it possible to transmit the high data bandwidths needed. These barriers have only recently been breached, but now that they’re gone, they will never return. Good, cheap, high quality video communication is here to stay.

Although less glamorous, cheap and attractive LED webcam lighting has also helped a little.

Green screen technology bypasses privacy issues. If you don’t want colleagues to see what your home office decor looks like, or that you have to use a tiny room, it is very easy to add a background image or video. Again, this is a recent tech development, another barrier that was high before COVID that is now gone forever.
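At its core, background replacement is just a chroma-key mask. A toy sketch in NumPy, purely illustrative (the threshold and the tiny 2×2 “frames” are made-up values; real apps use far more sophisticated segmentation, often with no green screen at all):

```python
import numpy as np

def chroma_key(frame, background, green_thresh=100):
    """Replace strongly green pixels in an RGB frame with the background image."""
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    # Crude "is this pixel green?" test: green channel dominant and above threshold
    mask = (g > green_thresh) & (g > r) & (g > b)
    out = frame.copy()
    out[mask] = background[mask]  # swap masked pixels for the background
    return out

frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[0, 0] = (0, 200, 0)                            # one green-screen pixel
background = np.full((2, 2, 3), 50, dtype=np.uint8)  # flat grey backdrop
result = chroma_key(frame, background)               # only the green pixel is replaced
```

The point is how cheap the trick is: a per-pixel test and a copy, which is why every platform could offer virtual backgrounds almost overnight.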

A recent Economist article showed that the share prices of electronic payments companies rocketed during the COVID lockdown. Of course we already had online credit card or Paypal (and Stripe etc) payments before, but WFH has incentivised their development and removal of any minor barriers to them staying and being permanent.

It isn’t just technology that was holding things back. Forced familiarity has broken the significant adoption barriers. There was a critical mass of users that was needed, and it simply wasn’t there. When nobody you knew was using the tools, what was the point? First adopters get poor rewards. Now that everyone has been forced to use these practices, the social acceptance and incentive barriers have gone.

Overall, there are now very few barriers to using online communications tools such as video platforms for everyday business meetings. Before COVID there were lots.

Ongoing Incentives

COVID revealed many benefits of working from home. Some were always there, but again, forced familiarity has been a good introduction to them. The first and most obvious are no commute time, no travel costs and other significant financial savings such as not having to buy expensive coffees, takeaway lunches, or even much of a work wardrobe, especially as online video normally only shows head and shoulders. There are also major savings for employers on office space. They will still need offices, but far less space, only needing to accommodate the maximum number of staff likely to be there. Many companies are shrinking the space they rent or lease, with huge impacts on property values in cities. As people gradually return to offices, there may be some growth again, but the savings for companies are high enough for them to encourage staff to keep working remotely as much as possible.

There are even some minor social advantages in not going to the office, such as not being forced to meet people you don’t like much. Introverts may be very happy with fewer face-to-face interactions. Most people don’t like meetings, and it is easier to resist endless meetings when you’re not in the office. The fact that Zoom and its rivals are not actually much fun reduces any incentive to hold a meeting unless it is actually useful. This benefits employers and employees. Meeting junkies will find it harder to force colleagues into an endless stream of pointless meetings, and that colleague whose ego was built around constant meeting attendance and being seen to be involved in things will miss out. Good!

In terms of interpersonal experiences, the lockdown period has been particularly effective at merging the personal and professional domains. This massive experiment in working from home has revealed the extra burdens on working parents, women in particular. Now that these challenges have had the spotlight and attention, don’t expect women to go back to the status quo very easily. This entire episode has been not just an apt reflection of society’s inability to create a proper work-life balance for half the population, but a reminder that a 40-hour workweek favors men. Gender equality has actually lost ground during the pandemic. That would be unacceptable under ideal conditions, let alone during a pandemic. Many families were rewarded with more quality time, and that’s probably going to be preserved as long as people can manage to maintain it.

Persistent fear and social cooling

COVID will not go away completely; new viruses will emerge frequently around the world, and from now on, each will cause a fresh round of fear – we can no longer dismiss them as things that only affect far-away countries. Occasionally there will inevitably be a virus far worse than COVID. COVID killed far less than 1% of those it infected, but some viruses can kill up to 40%.

The current nervousness and mild suspicion people often feel around strangers is very likely to persist for many years. Indeed, many people have learned to actually fear being close to others, which may persist as a long term phobia, mild for some, stronger for others. So we should expect that people will shake hands less, kiss, cuddle and hug less, and there will generally be less physical maintenance of emotional bonding between people. Some of our body’s emotional mechanisms are associated with touch, such as release of various hormones or neurotransmitters when we have physical contact with others, so this reaction is not just imaginary. These biological mechanisms evolved over millions of years, and if they are impeded, our social relationships will be weaker. We call that social cooling. Persistent fear will certainly lower the attraction of face to face proximity and make it easier to accept remote behaviour. 

Though there isn’t a lot of evidence yet, these effects may well be stronger in children and young adults, whose brains are still relatively fluid. Pre-COVID behaviours were also less ingrained in young people simply because they had less time exposed to them. Given the rapid emotional and hormonal changes around puberty, many young people going through that phase during this emotionally intensive period may suffer lifelong effects.

COVID-19 tamped down all social activities except those that could be experienced online. Unexpectedly, everything from parent-teacher conferences to cocktails shifted somewhat coherently into the virtual world, while concerts, comedy performances, exercise classes, shopping, cinema, museum exhibits, and religious worship were all transformed into at-home digital experiences in 2020. Given the impact of social distancing, will private homes continue to morph into cultural and social spaces? Socializing from home is not only more convenient, but is undoubtedly less expensive and time-consuming. The popular Broadway hit “Hamilton” serves as a great example of how exclusive cultural content was made more accessible during lockdown. Millions of people were able to experience a performance that was streamed (free) across the internet during lockdown. Previously, steep ticket prices and geographic proximity were huge barriers that kept the masses from enjoying the popular show. It’s quite possible that customers will demand similar options in the future, which could have a much-needed democratizing effect on things like arts patronage, physical activity, and leisure time. However, how will life look when our home is not just a shelter, but a workplace, school house, university hall…and a fun place as well?

Governmental temptations and pressures

Government has also gained some very valuable new powers that it will not let go easily. Lockdown itself is a very draconian measure that could never have been introduced without a threat such as COVID or major war, but it will be very tempting to use it frequently from now on, for any virus, any kind of civil unrest, even crime control. Worst of all, it is already being seriously considered as a means to achieve carbon zero, with lockdowns every 2 years being debated. 

Now that government has that tool and knows we will accept its use even with weak evidence for its necessity or effectiveness, it may well be used in future any time it is considered useful. 

The prospect of a lockdown at any time will have significant effects on most company strategies, plans and provisions. It doesn’t need to be used to have a significant effect – it just needs to be a possibility. 

Other tools that are extremely attractive to government, previously resisted only through fear of public reaction, are now much easier to push, because they know the public will mostly accept them given even a moderate excuse. Increasing surveillance, monitoring, testing, face recognition and new ID mechanisms are just a few of the more obvious ones. COVID has justified accelerated development of all these technologies without any requirement for further justification, but they add up to a very rich (and still rapidly growing) toolkit for surveillance, monitoring, control and oppression.

Some financial benefits accrue to the government too. With fewer people seeking medical help, and indeed, with many old people now deceased, there will be lower costs for health care for a few years, or at least it will cost less to clear the huge backlog that has built up during lockdown. It will be easy for the government to continue its message of helping the NHS, deterring some people from seeking help.

Other health care changes will remain too. Doctors and hospitals love working remotely. It reduces their workload (many people don’t bother trying to see them and just put up with things), it reduces their direct risks and costs (infection, violence, and the need for chaperones), and it reduces surgery costs (insurance, waiting room space, car parks, staff numbers, consumables and the costs of missed appointments). Since they continue to receive full payments for each person on their books, these add up to greatly increased profits. They will resist returning to pre-COVID practices unless they are offered even greater pay.

Incidental government benefits include lower traffic levels, which reduces both road costs and congestion, reducing pressure on government from these directions. However, lower traffic also disincentivises taxes based on mileage, and favours taxes based on car ownership, so this will delay decisions such as the replacement of car licenses by road tolling.

Lower mileage for electric cars reduces the costs of public charging infrastructure and the number of power stations needed, and allows more time for installation. This gives government a significant incentive to keep WFH going if it can.

It’s worth pointing out that, combined with social media, WFH tools are enabling political activism in the COVID era. Technologies that allow people to text, call, or email strangers about issues for which they share a passion are a step forward in evolving civic engagement. Numerous social justice issues that have gained the spotlight during the pandemic year (democracy and voting, police brutality, women’s safety, racial inequality, to name a few) may be sustained indefinitely in the public discourse with the help of smartphones, social media accounts, and communications technology that brings information around the world at the speed of light. Throwing our support behind issues, candidates, campaigns, and funds is easier than ever. Also, we are far more tuned in to what’s happening in other countries than our own, given the global nature of the pandemic.

Wider economic effects

It is also possible to foresee persistent long term economic effects originating from COVID WFH practice. For example, companies now know that with WFH embedded and proven they can consider sourcing some staff from the global market. For some roles, that might mean a much bigger pool to pick from, so they can increase staff quality and reduce staff costs. For other fields, it will have no effect because the skills needed are localised. For still others, it will produce a global market for elite skills. The consequences will be that we will see elite salaries rise high, commodity salaries reduce greatly, but some roles will remain unaffected. For roles that need physical presence or face to face working, there will also be no major effect on staff cost.

A headline in the financial news recently read “Zoom towns are boomtowns”, citing the top 15 US “Zoom towns” composed of urbanites who relocated from big cities to small towns during the pandemic. White-collar workers are moving in record numbers to suburbs and towns outside of urban areas, which is a trend that is not going to reverse soon, judging by Manhattan’s low real estate prices. Major companies that have made all-remote workforces the norm are encouraging this trend while feeding another growing trend – digital nomadism. Digital nomads will be a formidable type of talent after the pandemic. Exploring the world with a laptop and a vaccine passport will never have seemed so appealing as it will for young people who’ve been cooped up for a year or more. The fact that a survey by an employment search website found that a third of respondents said they’d quit their job before going back to the office suggests that the employer/employee power structure has shifted in favor of workers (at least, knowledge workers). Demographic patterns like these will impact the financial grandeur of large cities, but allow smaller cities to grow. There was already a significant trend towards de-urbanisation, but lockdown has accelerated it. This could change how we view the globalized economy.

During COVID, expat employees were frequently sent back to their home countries, resulting in a type of reverse brain drain. Countries like Italy and Greece, for example, experienced some economic benefits when native segments of the educated workforce returned. The numbers were lower than some people had predicted, at around 7%, but this trend may continue if expats latch on to the WFH trend that, along with the growing acceptance of digital nomad life, gives employees a great deal more control over where they live. If it sticks, it could alter the traditional flow of talent from developing economies to more developed ones. Some countries could become havens (tax or otherwise) for affluent people interested in the digital nomad way of life.

Travel will be harder

Business travel was always perceived as a nuisance to some and a benefit to others. Again, some effects will persist from the COVID era. Most obvious is the need for COVID passports, which government is busily developing even as it pours scorn on the idea in press briefings. They are very likely to become compulsory, not by government decree, though that may happen, but because people will face very inconvenient restrictions on what they are allowed to do without one. That might remain for several years, and by then, new viruses are likely to emerge that will create an excuse to keep health passports, even as COVID is replaced by other names. Health passports might eventually vanish, but they may well be here to stay. During the next several years, we should also expect harsher treatment and tedious systems at many locations, such as potentially unpleasant testing enforcement. Anal swabs? No thanks! Potential confinement might also be a lingering threat that could sometimes become an issue during a trip. For example, quarantines, backed up with fines or imprisonment, can suddenly take force. This presents a significant risk for some trips to certain areas.

Travel costs will increase too, not least due to having to allow for the potential expenses of the risks just mentioned. For a while, airlines will have to be highly competitive on prices to regain some lost business, but the longer term dictates higher prices to cover higher costs and lower traffic, and the desire to maintain profits and recoup losses incurred during lockdown.

The ability to build up frequent flyer points on business travel may become extinct after COVID. A major lesson learned from 2020 is that some meetings should really just be an email. Therefore, after the pandemic, the criteria for what constitutes necessary business travel will change. Events that once would have required a trip will be evaluated differently, both by the company and the employee. In fact, some employees (particularly those who have relocated far away from big cities) may expect to receive bonuses or incentives for travelling away from home for work. Even though the vaccine will ease people’s fears around contracting COVID, business travelers in the next few years could still make a case that international travel puts them at risk and they deserve better compensation. Another argument would be that the costs of travel can no longer be justified as a business expense unless a face-to-face presence is absolutely necessary. And, working mothers may push back against the expectation to return to “normal,” when normal was an untenable set of demands that served to reinforce gender inequalities. Now that the work-life balance scales have tipped, don’t expect them to go right back to where they were in 2019.

All of this adds up to a major disincentive to business travel and favours working remotely. These effects might decline gradually over time, but they will remain significant for several years.

Future communications technology

Lockdown made us adapt to using basic video comms (though infinitely better and more versatile than we had a couple of years ago), but tech for AR and VR is accelerating and it won’t be long before they have their effects too. Surround audio, high resolution video, and full 3D immersion will soon become expected. Eventually, as VR becomes more ingrained into product visualisation and design, gaming and R&D, and even marketing and sales we will see a spread of sensory translation technologies, which today include vibrating gloves and other haptics, but which will eventually evolve into active skin (tiny devices embedded on or in our skin, linking to our nervous systems to record and replay sensations), and active lenses, writing high resolution 3D imagery straight onto our retinas.

We already know some of the roles of VR in home working – product visualisation, simulation, meetings, and full body, full size communication, as well as in gaming, retail, travel and the entertainment industry. These roles will develop and multiply, becoming forever ingrained into everything we do online, simultaneously becoming better, cheaper and more intuitive to use.

Roles of AR will include a wide range of useful overlays, and will also likely be a reasonable substitute for VR in environments where safety hazards otherwise prevent pure VR use. Avatars will have some business utility, but will really come into their own in social networking and gaming where they can add novelty, beauty, personality extension or role clarification, but also enable gender swaps, age swaps, roleplay and many other features.

AI can also add many extra features to comms, such as meeting facilitation, note taking, minutes, project management, or executive assistant and secretarial functions. Industry-specific AI can even add virtual experts in particular areas to a meeting attendance list.

Combining technologies, avatars can interwork with AI to offer personal substitution, so you can be in two places at once, or just duck out of unwanted meetings but still be partially represented. For people on the autistic spectrum, AI could interwork with their avatar to enhance their social presence and improve the quality of their social interactions. Avatars and AI could also help introverts and less-assertive women to get a word in at meetings versus their pushier male or loudmouth colleagues. Avatars driven by AI can essentially level the playing field for everyone, especially if AI is chairing the meeting and managing who gets to talk when.

AI, Robotics and Drones

We see rapid progress on automation already. Robotics continues to become more advanced but also cheaper, making it feasible to automate jobs that previously were too difficult or uneconomic to automate. This has a bearing on overseas outsourcing, because if robotics is cheap enough, the incentive to move work to another country is lessened. This might therefore somewhat offset some of the forces described earlier that enable exporting jobs to cheaper countries.

AI generally is improving, especially with deep learning gradually catching up with and exceeding human capabilities in many niche areas. Further away is artificial general intelligence, where AI can learn to think across wide fields just like humans. It will come, but the next few years will still see most development in niche-specific AI, where there is still a lot of low hanging fruit to pick.

There is an increasing consensus that the best way to use AI is in partnership with humans, upskilling them to do jobs faster or better than they could otherwise. In that sense, AI can be thought of as just more of the same advance that we saw when Google replaced an hour in a library by a minute on a search engine. It will improve efficiency and productivity but not necessarily replace an individual job. However, in some areas, it might allow easier exporting of the job to a lower wage country, while importantly keeping the intellectual property of the AI in the home country.

Drones may have been rather overhyped in some areas but will still be important. An aerial delivery drone will probably not be allowed to land on a town pavement in front of a terraced house, where it could obviously present a risk of injury to pets, children or passers-by. However, drones can already be used safely for delivery to a properly designed industrial (or hospital) delivery bay staffed by people trained in proper H&S procedures. In between are people in suburbs with back garden lawns. Although delivery here is technically feasible, there are still many potential objections, so we should assume that this won’t be commonplace for some years. Like AI, drone delivery can speed things up compared to road delivery, making just-in-time industrial processes better and allowing more distributed processing.

Drones also have other uses such as security and surveillance. Some of the human roles associated with these can theoretically be implemented anywhere, so again, this allows export of some jobs. They also allow direct substitution of some jobs, such as delivery driving or helicopter surveillance.

Training and learning

Many of the same factors apply in learning as for working from home. Online learning has grown enormously during lockdown, helping retraining or simply alleviating boredom. The learning industry has somehow managed to retain its fees and structures during lockdown, but that is surely not sustainable, however hard it tries. In the background, very many online courses have been springing up that allow people to learn fast in their own time, in their own homes, at low cost. This mostly new competition will take time to substitute for or replace old courses, but the trend is now irreversible and some rebalancing will happen, with lowering of fee levels being just one of the consequences.

One key differentiating factor in online courses is whether they give a certificate. Many courses are free to do, but the certificate has quite a high price. As global markets for online working become the norm, certification will become increasingly important, so that business model might persist. It allows training companies to claim they are providing valuable social benefits while still making good incomes from those who can afford to pay.

There are particular benefits in using AR/VR for training, especially when learning skills appropriate to specific physical environments, where the work environment can be precisely duplicated, showing its risks, interfaces and so on, so that people can learn how to work safely in that particular environment without actually being there. AR and VR engage visual and audio memory instead of just text, potentially improving recall, though that assumes more stimulating visual mechanisms than bullet points on PowerPoint slides!

Another interesting application of VR to training and learning involves the use of VR to teach so-called “soft skills” such as tolerance. Workplaces may expect future WFH employees to use VR training to learn communication, empathy, and inclusion. One example already available is to help people understand the effects of racism. It may prove helpful to provide highly immersive employee training experiences. One advantage of VR is that it can be performed in the privacy of one’s home or at the workplace. Given the reverberations of the remote working revolution, emotional intelligence may become particularly important as the workforce becomes more distributed and people are in less face-to-face contact with colleagues.

There will be obvious effects on course costs and prices when class sizes can be in the thousands, and for many courses, costs per attendee can drop very low indeed if materials are simply online, available any time, any place, in any language, on demand. Superstar teachers with elite skills will be sought after globally and attract very high pay, while commodity teachers, competing with massive global supply, will be on low pay. Indeed, some superstar teachers will have their own companies.

As chatbot tech continues to develop, AI guidance for students will also improve, so AI can act as a virtual tutor, or even lecturer, offering an alternative to boring text. This has real potential to replace many teachers, or to allow other teachers to reach more people, with AI dealing with some students while they focus their human skills where they are needed.

Post-COVID, educators might find edtech helps to get students caught up academically. However, this should not be a priority until sufficient socialization, a sense of security, and some structured sense of stability have been restored for the youngest students. Sound emotional foundations are needed for good education, and they will need some extensive repairs. An interesting conversation that has emerged from the quarantine year is the impact this period will have on healthy childhood development. Education experts in the UK have proposed “a summer of play” to make up for the past school year’s deficiencies – not academically, but socially. With mental health-related red flags raised across swaths of society, experts advise forgoing extra summer lessons meant to make up learning losses, and focusing instead on stress relief and joy.

Team Building

Bringing people back together at the workplace after COVID is probably going to offer some novel experiences. It may be fair to say workplace socialization will never be the same. Spending time with our teams serendipitously may become curtailed by the fact that so many employees are showing a preference for keeping a flexible schedule. There’s also the fact that some people have moved hundreds of miles away from their team during the pandemic relocation frenzy. And, inconsistent vaccination uptake across society could impede the ability to meet face-to-face. There’s the sense that it will be a significant aspect of the WFH revolution, but what will team building look like after the pandemic?

Retreats in nature, in luxury, and/or in highly secluded locations may be a valuable tool in the future. If an organization is interested in increasing the morale of teams across a distributed remote workforce, for example, attractive vacation-style locations seem like the best way to lure people from their comfortable cocoons to attend team building events. These kinds of functions could become the perfect antidote to Zoom fatigue, providing intimacy and personal bonding whose effects could last six months or a year.

It’s theoretically possible that some of these could also be implemented in VR, which is a novelty in itself for many, and can still achieve some of the same goals alongside remote working. However, real life will generally be better than VR for most people and that is where the main focus is likely to be.

Furthermore, hotels, AirBnB, and other forms of lodging (including castles) are quickly transforming into coworking spaces. This trend not only shows how fluid the concept of a workplace has become during the pandemic, but indicates that the business travel industry is adapting to the WFH/digital nomad lifestyle as well. The advantage for organizations is that team building can now complement, rather than obstruct, work-life balance by doubling as a vacation, since the hospitality industry is beginning to resemble WeWork anyway. Several hotels have implemented programs designed for working throughout the day out of guest rooms and other spaces. Some WFH (work from hotel) packages during the lockdown were geared toward affluent working parents and included a tutor to help children with online lessons during the day, catering exclusively to digital nomads that travel in packs (e.g., families).

Recruitment and Personnel Management

For organizations, a huge advantage of distributed work teams is that it increases the size of the job applicant pool. With many jobs now allowing WFH, companies can choose from a huge range of potential talent. The ability to interview and screen applicants online has also surely saved companies hundreds of thousands in travel and lodging expenses. Post-COVID recruitment practices will probably continue along these lines, shunning expensive and elaborate travel except for the most upper-level positions. Interview tactics for virtual job seekers will become a learnable and teachable skill.

The rise of WFH implies tremendous growth in technologies intended to monitor employees’ time at different tasks. The “big brother” aspect of the remote and distributed workforce has not reared its ugly head very prominently but it is waiting. Herein lies a huge uncertainty going forward: how much surveillance are employees and students willing to give up in order to learn and work from home indefinitely? 

Career progress & WFH

Before COVID, occasional studies suggested that people in the office are typically better noticed by their managers and thus more likely to be promoted, while people working from home often felt undervalued, or were (reasonably) concerned that the boss suspected they weren’t working as hard as they actually were. We’re still waiting to understand the full impact of COVID lockdown on these issues, though some effects are obvious; in any case, quite a lot of people have started new jobs since then, and some have never even met their bosses or colleagues except on Zoom. These factors will have very significant effects on whether someone is accepted as much a part of a team as those who were already in it pre-COVID.

Education must suffer some of this problem too, such as teachers assessing work when they have never met the student submitting it. A teacher can’t judge a student’s total merits just by marking their homework.

20 Things for the 2020’s

Obviously, the COVID-19 pandemic has had, and will continue to have, a lasting influence on the world. Since epidemics and pandemics are natural occurrences that we can count on recurring, we should view them as random catalysts of social change. What is new about COVID-19 is that it represents the first big pandemic in a real-time globalized world, thanks to modern technology.

The changes we sense since COVID hit are social and technological in nature. They properly demonstrate the symbiotic relationship between the two, creating new behaviors and activities while curtailing others. Below we offer two sets of 20 things: 20 that aren’t coming back and 20 that won’t go away. Welcome to the 2020s.

20 Things that won’t come back:

  1. Full-time cubicle life
  2. Five-day conferences
  3. Face-to-face parent-teacher meetings
  4. Formal business dress code
  5. Demoralizing team building events
  6. Workplace policies biased against working parents
  7. Default face-to-face contact at school/work
  8. Educational experiences without a digital component
  9. Long, boring, in-person training
  10. Disproportionate work-life balance
  11. Elaborate/expensive employee recruitment
  12. Inadequate technology skills
  13. Frequent flyers, and air-miles
  14. Technological illiteracy
  15. Agoraphobia as a mental illness needing treatment
  16. Homes that don’t include office space
  17. Unnecessary meetings
  18. Packed commuter trains
  19. High wages for jobs that can be done anywhere
  20. The two hour commute

20 Things that won’t go away:

  1. Dismal birth rates
  2. Surveillance capitalism
  3. Digital ID
  4. Telehealth
  5. Governmental monitoring/track and trace apps
  6. Lockdown powers
  7. Reinforced green regulations
  8. Public willingness to do as govt tells them
  9. Use of face masks during flu season
  10. Zoom kit – LED ring lights, decent cameras and microphones
  11. Fear or suspicion of strangers
  12. High house prices in rural and pretty areas
  13. Lower daytime city population
  14. Toilet roll hoarding
  15. Higher prices for holidays/hotels/air travel (may be short term special offers)
  16. Overcrowded restaurants, holiday spots, sporting events
  17. Brick-and-mortar schools
  18. Shopping, but mostly as a social activity and diversion
  19. Online friends you’ll never meet
  20. Online shopping

The Authors

Alexandra Whittington

Alexandra Whittington is a futurist educator, writer, and researcher. She is a Lecturer at the University of Houston, where her students describe her as “passionate” about the future. Her courses explore the impact of technology on society and the future of human ecosystems. She has published dozens of articles exploring diverse aspects of the future, often from a feminist perspective. Alex has co-authored and co-edited several books, including A Very Human Future and Aftershocks and Opportunities: Scenarios for a Post-Pandemic Future. She studied Anthropology (BA) and Studies of the Future (MS) at the University of Houston.


Dr Pearson has been a futurologist for 30 years, tracking and predicting developments across a wide range of technology, business, society, politics and the environment. Graduated in Maths and Physics and a Doctor of Science. Worked in numerous branches of engineering from aeronautics to cybernetics, sustainable transport to electronic cosmetics. 1900+ inventions including text messaging and the active contact lens, more recently a number of inventions in transport technology, including driverless transport and space travel. BT’s full-time futurologist from 1991 to 2007 and now runs Futurizon, a small futures institute. Writes, lectures and consults globally on all aspects of the technology-driven future. Eight books and over 850 TV and radio appearances. Chartered Member of the British Computer Society and a Fellow of the World Academy of Art and Science.