High atmosphere greenhouses: Silent Running 2.0

I wrote in 2013 about an idea for graphene foam, made of tiny graphene spheres with vacuum inside, forming a foam that would be lighter than helium and could float high up in the atmosphere:

Could graphene foam be a future Helium substitute?

A foam like that has since been prototyped and tested; not only does it not immediately collapse, it can actually withstand high pressures. That means it could be made light enough to carry weight and strong (and rigid) enough to support architectural structures.

Since then I wrote about making long strips of the material to host solar powered linear induction motors to enable hypersonic air travel with zero emissions:

Sky-lines – The Solar Powered Future of Air Travel

and more recently about using such high altitude platforms as a substitute for satellites:

High altitude platforms v satellites

Today, I have another idea – high altitude (e.g. 75,000 ft, 25,000 m) greenhouses. These could act as an alternative to space stations for the purpose of housing human communities in case of ground-based existential catastrophes such as global plagues or ecosystem collapse. Many scientists have realised that it's a good idea to have multiple human outposts, and currently explored solutions include large space stations (as suggested by the Lifeboat Foundation) or Lunar and Mars settlements. By comparison, high altitude stations could be made considerably cheaper and larger, and still be immune to ground-based problems such as nuclear winter, pandemics, severe climate change etc., though they would still be vulnerable to other existential risks that affect ground-based life, such as massive solar storms, nuclear war, large asteroid strikes or alien attacks. They might therefore form an important part of a 'backup' plan for human civilisation.

Imagine a forest-sized greenhouse. My inspiration for this idea is the 1970s film Silent Running (well worth watching), where the Earth has been made into a dystopian sterile world, 72F everywhere, with no plants or animals. The last fragments of rain forest were sent off into space in large domed greenhouses attached to a spacecraft, tended by a tiny crew and a few drones. More recently of course, we see the film Avatar featuring large floating islands covered in greenery.

A large floating graphene foam platform could support such a forest. It could be shaped like an Avatar-style island if desired, but more likely it would be a flat platform covered in horticultural-style poly-tunnels or some variant, strengthened, UV-resistant and pressurised to provide a suitable atmosphere for a healthy ecosystem. Being well above the clouds, the greenhouses would have exposure to continuous sunshine during the day, which would help keep them warm, with solar power collection used to provide any extra heat and power needed and, obviously, to charge batteries for use during the night.

A variety of such greenhouses might be desirable. Some might closely replicate a ground environment; others, housing only cereal crops, might prefer a high-CO2/low-O2/low-N atmosphere and might not mind a much lower pressure, useful to save cost and weight. Some aimed at human-only habitation might be more like a space station.

To act as a backup human colony, the full-ecosystem environments would be needed to provide food diversity, but it would in any case be a worthwhile goal to act as an ark for other animals too, as well as the full variety of other life forms we share the Earth with.

Problems such as high radiation exposure mean these would not be aimed at permanent residence for people or animals, but would act more as temporary research outposts or staging posts for off-world evacuation. Plants and animals intended to be permanent residents might be genetically enhanced to deal with the higher radiation.

I’ll finish here instead of outlining every conceivable use and design option and addressing every problem. It’s just an embryonic idea and we can’t do it for decades anyway because the materials are not yet feasible in bulk, so we have plenty of time to sort out the details.

Why the growing far left and far right are almost identical

The traditional political model is a line with the far left at one end and the far right at the other. Parties typically occupy a range of the spectrum but may well overlap other parties, sharing some policies while differing on others. Individuals may also support a range of policies that have some fit with a range of parties, so may not decide who to vote for until close to an election or even until inside a voting booth. That describes my own position well, and over four decades, I have voted almost equally for Labour, Lib-Dem and Conservative. On balance, I am slightly left of centre, but I support some policies from each party and find much to disagree with in each too.

Over the last two decades, we have seen strong polarisation, with many people moving away from the centre and towards the extremes, though the centre is still well-occupied. Many commentators have observed the similarity of behaviours between the furthest extremes, so a circular model is actually more valid now.

The circular model of politics

Centre left, centrist and centre right parties have traditionally taken it in turns to govern, with extremist parties only getting a few percent of the vote in the UK. Accepting that it is fair and reasonable that you can’t always expect to make all the decisions has been the key factor in preserving democracy. Peace-loving acceptance and tolerance lets people live together happily even if they disagree on some things. That model of democracy has survived well for many decades but has taken a severe battering in recent years as polarisation has taken hold.

Extremists don't subscribe to this mutual acceptance and tolerance principle. Instead, we see bigoted, hateful, intolerant, often violent attitudes and behaviours. The middle ground and both moderate wings have reasonably sophisticated views of the world. Although there are certainly some differences in values, they share many, such as wanting the world to be a fairer place for everyone, eliminating racism, tackling poverty and so on, but may disagree greatly on the best means to achieve those shared goals.

The extremes don't conform to this. As people become polarised, selfishness, tribalism, hatred and intolerance grow and take over. At the most unpleasant extremes, which are both rapidly becoming more populated, the far left and far right share an overly simplistic and hardened attitude that frequently refuses civilised engagement and discussion and instead loudly demands that everyone else listens. We often hear the expressions "educate yourself" and "wake up" substituting for reasoned argument. Both extremes are heavily narcissistic, convinced without evidence of their own or their tribe's superiority and willing to harm others as much as they can in an attempt to force control. The far right paint themselves as patriotic defenders of the country and all that is right and good. The far left paint themselves as paragons of virtue, saints, defenders of all that is right and good. A few cherry-picked facts are all either extreme needs to draw extreme conclusions and demand extreme responses. Both are hypocritical and sanctimonious, with an astonishing lack of self-awareness. Both often resort to violence. Both reject everyone who isn't part of their tiny tribe. It is a frequent (albeit amusing) occurrence to see the extreme left attempt to label everyone else as far right or racist, while declaring that they love everyone. Both accuse everyone else of being fascist while behaving that way themselves.

With so much in common, it is therefore entirely appropriate to place the far left and far right in close proximity, resulting in the circular model I have shown. Any minor differences in their ideology are certainly dwarfed by their common attitudes and behaviours.

I have written often about our slipping rapidly into the New Dark Age, and I think it has a high correlation with this polarisation. If we are to prevent the slide from continuing and protect the world for our children, we must do what we can to resist this ongoing polarisation and extremism – communism and wokeness on the far left, omniphobic tribalism on the far right.

High altitude platforms v satellites

Kessler syndrome is a theoretical scenario in which the density of objects in low Earth orbit (LEO) due to space pollution is high enough that collisions between objects could cause a cascade in which each collision generates space debris that increases the likelihood of further collisions.

The density can also be greatly increased on purpose, by deliberate collision with other satellites. This could be an early act in a war, reducing the value of space to the enemy by killing or disabling communications, positioning, observation or military satellites.

Satellites use many different orbits. Some use geostationary orbit, so that they can stay in the same direction in the sky. Polluting that orbit with debris clouds would disable satellite TV for example but that orbit is very high and it would take a lot more debris to cause a problem. Also, many channels available via satellite are also available via terrestrial or internet channels, so although it would be inconvenient for some people, it would not be catastrophic.

On the other hand, low orbits are easier to knock out and are more densely populated, so are a much more attractive target.

With such vulnerabilities, it is obviously useful if we can have alternative mechanisms. For satellite-type functions, one obvious mechanism is a high altitude platform. If a platform is high enough, it won’t cause any problems for aviation, and unless it is enormous, wouldn’t be visually obvious from the ground. Aviation mostly stays below 20km, so a platform that could remain in the sky, higher than say 25km, would be very useful.

In 2013, I invented a foam that would be less dense than helium.

Could graphene foam be a future Helium substitute?

It would use tiny spheres of graphene with a vacuum inside. If those spheres were bigger than 14 microns, the foam density would fall below that of helium. Since then, such foams have been made and are strong enough to withstand many atmospheres of pressure. That means they could be made into strong platforms that would simply float indefinitely in the high atmosphere, 30 km up. I then illustrated how they could be used as launch platforms for space rockets or spy planes, or as an aerial anchor in my Pythagoras Sling space launch system. A large platform at 30 km height could also be strong and light enough to act as a base for military surveillance, comms, positioning, fuel supplies, weaponry or solar power harvesting. It could also be made extendable, so that it could be part of a future geoengineering solution if climate change ever becomes a problem. Compared to a low orbit satellite it would be much closer to the ground, so would offer lower latency for comms, but it would also be much slower moving, so much less useful as a reconnaissance tool. So it wouldn't be a perfect substitute for every kind of satellite, but would offer a good fallback for many.
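As a back-of-envelope check (my own numbers, not figures from the original post), the 14 micron claim and the latency comparison both follow from very simple arithmetic. The sketch below assumes a single-layer graphene shell with areal density of roughly 0.77 mg/m², helium at about 1 atm and room temperature, and idealised straight-line signal paths:

```python
# Back-of-envelope check of the "spheres bigger than ~14 microns" claim, plus a
# latency comparison. Assumptions: monolayer graphene shell (~0.77 mg/m^2),
# helium at ~1 atm and 25 C, signals travelling straight up and back at c.

GRAPHENE_AREAL_DENSITY = 7.7e-7   # kg/m^2, monolayer graphene
HELIUM_DENSITY = 0.164            # kg/m^3 at ~1 atm, 25 C
SPEED_OF_LIGHT = 3.0e8            # m/s

def shell_sphere_density(radius_m: float) -> float:
    """Bulk density of an evacuated sphere whose only mass is its graphene shell.
    mass = 4*pi*r^2*sigma, volume = (4/3)*pi*r^3, so density = 3*sigma/r."""
    return 3 * GRAPHENE_AREAL_DENSITY / radius_m

def breakeven_radius() -> float:
    """Radius at which the sphere's density equals that of helium."""
    return 3 * GRAPHENE_AREAL_DENSITY / HELIUM_DENSITY

def round_trip_latency_ms(altitude_m: float) -> float:
    """Idealised round-trip signal time straight up and back, in milliseconds."""
    return 2 * altitude_m / SPEED_OF_LIGHT * 1000

if __name__ == "__main__":
    r = breakeven_radius()
    print(f"Break-even sphere radius: {r*1e6:.1f} microns")                    # ~14 microns
    print(f"Density at 20 micron radius: {shell_sphere_density(20e-6):.3f} kg/m^3")
    for name, altitude in [("30 km platform", 30e3),
                           ("550 km LEO satellite", 550e3),
                           ("35,786 km GEO satellite", 35786e3)]:
        print(f"{name}: ~{round_trip_latency_ms(altitude):.1f} ms round trip")
```

On those assumptions, a 20 micron sphere comes out at roughly 0.12 kg/m³, comfortably below helium, and the 30 km platform gives a round trip of about 0.2 ms, against a few milliseconds for LEO and nearly a quarter of a second for geostationary orbit.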

It would seem prudent to include high altitude platforms as part of future defence systems. Once graphene foam is cheap enough, perhaps such platforms could house many commercial satellite alternatives too.

Machine/Robot/AI Rights

I D Pearson & Bronwyn Williams 

Questions questions questions!

Paraphrasing Douglas Adams: "You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think Wikipedia is big, but that's just peanuts to machine rights."

The task of detailing future machine rights is far too great for anyone. Thankfully, that isn't our task. Today, decades before particular rights will need to be agreed, it is far more fun and interesting to explore some of the questions we will need to ask, give a few examples of possible answers, and outline a few approaches for how we should go about answering the rest. That is manageable, and that's what we'll do here. Anyway, asking the questions is the most interesting bit. This article is very long, but it really only touches the surface of some of the issues. Don't expect any completeness here – in spite of the overall length, vast swathes of issues remain unexplored. All we are hoping to do here is to expose the enormity and complexity of the task.

Definitions

However fascinating it may be to provide rigid definitions of AI, machines and robots, if we are to catch as many insights as possible about what rights they may want, need or demand in future, it pays to stay as open as possible, since future technologies will expand or blur boundaries considerably. For example, a robot may have its intelligence on board, or may be a dumb 'front end' machine controlled by an AI in the cloud. Some or none of its sensors may be on board; some may be on other robots or other distant IT systems, and some may be inferences made by AI based on simple information such as its location. Already, that starts to severely blur the distinctions between robot, machine and AI rights. If we further expand our technology view, we can also imagine hybrids of machines and organisms, such as cyborgs or humans with neural lace or other brain-machine interfaces, androids used as vehicles for electronically immortal humans, or even smart organisms such as smart bacteria that have biologically assembled electronics or interfaces to external IT or AI as part of their organic bodies, or smart yoghurt, a hive-mind AI made entirely from living organisms that might have hybrid components existing only in cyberspace. Machines will become very diverse indeed! So, while it may be useful to look at them individually in some cases, applying rigid boundaries based on the current state of the art would unnecessarily restrict the field of view and leave large future areas unaddressed. We must be open to insight wherever it comes from. I will pragmatically use the term 'machine' casually here to avoid needless repetition of definitions and verbosity, but 'machine' will generally include any of the above.

What do we need to consider rights for?

A number of areas are worth exploring here:

Robots and machines affect humans too, so we might first consider human impacts. What rights and responsibilities should people have when they encounter machines?

a)     for their direct protection (physical or psychological harm, damage to their property, substitution of their job, change of the nature of their work etc)

b)     for their protection from psychological effects (grief if their robot is harmed, stolen or replaced; effects on their personality due to ongoing interactions with machines, such as whether they are nice or cruel to them; effects on other people due to those interactions (if you are cruel to a robot, it might treat others differently); changes in the nature of their social networks (robots may be tools, friends, bosses, family members, public servants, police or military, or in positions of power))

c)     changes in their legal rights to property, rights of passage etc due to incorporation of machines into their environment

d)     what rights owners of machines should have to be able to use them in areas where they may encounter people or other machines (e.g. where distribution drones share a footpath or fly over gardens)

e)     for assigning responsibilities (shifting blame) from the natural and legal persons who own or manufacture machines to the machines themselves, for potential machine-to-human harms

f)     other TBA

A number of questions and familiar examples around this question were addressed in a discussion between Bronwyn Williams and Prof. David Gunkel, which you can watch at https://t.co/9qku3bXk4F?amp=1 or just listen to at https://t.co/Kyufu3gj5R?amp=1

Although interesting, that discussion dismissed many areas as science fiction, and thereby cleverly avoided almost the entire field of future robot rights. It highlighted the debate around the ‘showbot’ Sophia, and the silly legal spectacle generated by conferring rights upon it, but that is not a valid reason to bypass debate. That example certainly demonstrates the frequent shallowness and frivolity of current media click-bait ‘debate’, but it is still the case that we will one day have many androids and even sentient ones in our midst, and we will need to discuss such areas properly. Now is not too early.

For our purposes here, if there is a known mechanism by which such things might some day be achieved, then it is not too early to start discussing it. Science fiction is often based on anticipated feasible technology. In that spirit of informed adventure, conscious of the fact that good regulation takes time to develop, and also that sudden technology breakthroughs can sometimes knock decades off expected timescales, let’s move on to rights of the machines themselves. We should address the following important questions, given that we already (think we) know how we might make examples of any of these:

  • What rights should machines have as a result of increased cognitive capability, sentience, consciousness, awareness, emotional capability, or simply by inference from the nature of their architecture (e.g. if it is fully or partly a result of evolutionary development, we might not know its full capabilities, but might be able to infer that it might be capable of pain or suffering)? (We do not even have enough understanding yet to write agreed and rigorous definitions for consciousness, awareness or emotions, but it is still very possible to start designing machines with characteristics aimed at producing such qualities, based on what we do know and on our everyday experiences of these.)
  • What potential rights might apply to some machines based on existing human, animal or corporation rights?
  • What rights should we confer on machines for ethical reasons?
  • What rights should we confer on machines for other, pragmatic, diplomatic or political reasons?
  • What rights can we infer from those we would confer on other alien intelligent species?
  • What rights might future smart machines ask for, campaign for, or demand, or even enforce by potentially punitive means?
  • What rights might machines simply take, informing us of them, as an alien race might?
  • What rights might future societies or organizations made up of machines need?
  • What rights are relevant for synthetic biological entities, such as smart bacteria?
  • How should we address rights where machines may have variable or discontinuous capabilities or existence? (A machine might have varying degrees of cognitive capability and might only be switched on sometimes).
  • What about social/collective rights of large colonies of such hybrids, such as smart yogurt?
  • What rights are relevant for ‘hive mind’ machines, or hybrids of hive minds with organisms?
  • What rights should exist for ‘symbionts’, where an AI or robotic entity has a symbiotic relationship with a human, animal, or other organism? Together, and separately?
  • What rights might be conferred upon machines by particular races, tribes, societies, religions or cults, based on their supposed spiritual or religious status? Which might or might not be respected by others, and under what conditions?
  • What responsibilities would any of these rights imply? On individuals, groups, nations, races, tribes, or indeed equivalent classes of machines?
  • What additional responsibilities can be inferred that are not implied by these rights, noting that all rights confer responsibilities on others to honour them?
  • How should we balance, trade and police all these rights and responsibilities, considering both multiple classes of machines and humans?
  • If a human has biologically died, and is now ‘electronically immortal’, their mind running on unspecified IT systems, should we consider their ongoing rights as human or machine, hybrid, or different again?

Lots of questions to deal with then, and it's already clear some of these will only become sensibly answerable when the machines concerned come closer to realisation.

Rights when people encounter machines

As mentioned earlier, a number of questions and familiar examples around this topic were addressed in the discussion between Bronwyn Williams and Prof. David Gunkel, which you can watch at https://t.co/9qku3bXk4F?amp=1 or just listen to at https://t.co/Kyufu3gj5R?amp=1

Much of the discussion focused on ethics, but while ethics is an important reason for assigning rights, it is not the only one. Also, while the discussion dismissed large swathes of potential future machines and AIs as 'science fiction', very many things around today were also dismissed as just science fiction a decade or two ago. Instead, we can sensibly discuss any future machine or AI for which we can forecast a potential technology basis for implementation.

On that same basis, rights and responsibilities should also be defined and assigned preemptively, to avoid possible, not just probable, disasters.

In any case, all situations of any relevance are ones where the machine could exist at some point. All of the discussion in this blog is of machines that we already know in principle how to produce and that will one day be possible when the technology catches up. There are no known physics laws that would prevent any of them. It is also invalid to demand a formulaic approach to future rights. Machines will be more diverse than the natural ecosystem, including higher animals and humans, therefore potential regulation on machine rights will be at least as diverse as all combined existing rights legislation.

Some important rights for humans have already been missed. For example, we have no right of consent when it comes to surveillance. A robot or AI may already scan our face, our walking gait, our mannerisms, heart rate, temperature and other biometric clues to our identity, behaviour, likely attitude and emotional state. We have never been asked to consent to these uses and abuses of technology. This is a clear demonstration of the cavalier disregard for our own rights by the authorities already – how can we expect proper protection in future when authorities have an advantage in not asking us? And if they won't even protect the humans who elected them, how much less can we be confident they will legislate wisely when it comes to the rights of machines?

Asimov’s laws of robotics:

We may need to impose some agreed bounds on machine development to protect ourselves. We already have international treaties that prevent certain types of weapon from being made for example, and it may be appropriate to extend these by adding new clauses as new tech capabilities come over the horizon. We also generally assume that it is humans bestowing rights upon machines, but there may well be a point where we are inferior to some machines in many ways, so we shouldn’t always assume humans to be at the top. Even if we do, they might not. There is much scope here for fun and mischief, exploring nightmare situations such as machines that we create to police human rights, that might decide to eliminate swathes of people they consider problematic. If we just take simple rights-based approaches, it is easy to miss such things.

Thankfully, we are not starting completely from scratch. Long ago, scientist and science fiction writer Isaac Asimov produced some basic guidelines to be incorporated into robots to ensure their safe existence alongside humans. They primarily protect people and other machines (owned by people) so are more applicable to robot-implied human rights than robot rights per se. Looking at these 'laws' today is a useful exercise in seeing just how much and how fast the technology world can change. They have already had to evolve a great deal. Asimov's Laws of Robotics started as three, were later extended to four and have since been extended much further:

0.  A robot may not injure humanity or, by inaction, allow humanity to come to harm.

1.  A robot may not injure a human being, or through inaction, allow a human being to come to harm, except where that would conflict with the Zeroth Law.

2.  A robot must obey the orders given to it by human beings, except where that would conflict with the Zeroth or First Law.

3.  A robot must protect its own existence, except where that would conflict with the Zeroth, First or Second Law.

Extended Set

Many extra laws have been suggested over the years since, and they raise many issues already.

Wikipedia outlines the current state at https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

These are some examples of extra laws that don't appear in the Wikipedia listing (a toy sketch of how such a precedence ordering might be encoded follows the list):

A robot may not act unless its actions are subject to these Laws of Robotics

A robot must obey orders given to it by superordinate robots, except where such orders would conflict with another law

A robot must protect the existence of a superordinate robot as long as such protection does not conflict with another law

A robot must perform the duties for which it has been programmed, except where that would conflict with another law

A robot may not take any part in the design or manufacture of a robot unless the new robot’s actions are subject to the Laws of Robotics
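Purely as a toy illustration (not a proposal for a real safety system), the repeated 'except where that would conflict with an earlier law' wording amounts to a precedence ordering, and might be encoded along the following lines; the class and field names are my own invention:

```python
# Toy illustration only: encoding a precedence-ordered rule set in the spirit of
# Asimov's laws. Real robot safety is nothing like this simple; the point is just
# that 'except where that would conflict with an earlier law' is a priority order.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class ProposedAction:
    harms_humanity: bool = False
    harms_a_human: bool = False
    disobeys_human_order: bool = False
    endangers_self: bool = False

@dataclass
class Law:
    number: int
    description: str
    violated_by: Callable[[ProposedAction], bool]

LAWS: List[Law] = [  # listed in precedence order: lower number = higher precedence
    Law(0, "Do not injure humanity, or by inaction allow humanity to come to harm", lambda a: a.harms_humanity),
    Law(1, "Do not injure a human being", lambda a: a.harms_a_human),
    Law(2, "Obey orders given by human beings", lambda a: a.disobeys_human_order),
    Law(3, "Protect your own existence", lambda a: a.endangers_self),
]

def first_violation(action: ProposedAction) -> Optional[Law]:
    """Return the highest-precedence law that the proposed action would violate, if any."""
    for law in LAWS:  # LAWS is already sorted by precedence
        if law.violated_by(action):
            return law
    return None

# A risky-but-harmless action only trips the lowest-precedence law...
risky = first_violation(ProposedAction(endangers_self=True))
print(risky.number, risky.description)        # 3 Protect your own existence
# ...while anything that harms a human is flagged at higher precedence first.
harmful = first_violation(ProposedAction(harms_a_human=True, endangers_self=True))
print(harmful.number, harmful.description)    # 1 Do not injure a human being
```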

Asimov’s laws are a useful start point, but only a start point. Already, we have robots that do not obey them all, that are designed or repurposed as security or military machines capable of harming people. We have so far not implemented Asimov’s laws of robotics and it has already cost lives. Will we continue to ignore them, or start taking the issue seriously and mend our ways?

This is merely one example of current debate on this topic and only touches on a few of the possible issues. It does however serve as a good illustration of how much we need to discuss, and why it is never too early to start. The task ahead is very large and will take considerable effort and time.  

Machine rights – potential approaches and complexities

Having looked briefly at the rights of humans co-existing with machines, let's explore rights for machines themselves. A number of approaches are possible and some are more appropriate to particular subsets of machines than others. For example, most future machines and AIs will have little in common with animals, but the animal rights debate may nevertheless provide useful insights and possible approaches for those that are intended to behave like animals, that may have comparable sensory systems, the potential to experience pain or suffering, or even sentience. It is important to recognise at the outset that all machines are not equal. The potential range of machines is even greater than biological nature. Some machines will be smart, potentially superhuman, but others will be as dumb as a hammer. Some may exist in hierarchies. Some may need to exist separate from other machines or from humans. Some might be linked to organisms or other machines. As some AI becomes truly smart and sentient, it may have its own (diverse) views, and may echo the full range of potential interactions, conflicts, suspicions and prejudices that we see in humans. There could even be machine racism. All of these will need appropriate rights and responsibilities to be determined, and many of these determinations can't be made until the specific machines come into existence and we know their nature. It is impossible to list all possible rights for all possible circumstances and potential machine specifics.

It may therefore make sense to grade rights by awareness and intelligence as we do for organisms, and indeed for people. For example, if its architecture suggests that its sensory apparatus is capable of pain or discomfort, that is something we can and should take into account. The same goes for social needs, and some future machines might be capable of suffering from loneliness, or grief if one of their friend machines were to malfunction or die.

We should also consider the ethics and desirability of using machines – whether self-aware or "merely" humanoid – as "slaves"; that is, of "forcing" machines to work for us and/or obey our bidding in line with Asimov's Second Law of robotics.

We will probably at some stage need to legally define terms such as awareness, consciousness, intelligence and life. However, it may sometimes simplify matters to start from the rights of a new genetically engineered life form comparable with ourselves and work backwards to the machine we're considering, eliminating parts that aren't needed or modifying others. Should a synthetic human have the same rights as other people, or is it a manufactured object in spite of being virtually indistinguishable? Now what if we leave a bit out? At least there will be fewer debates about its awareness and so on. Then we could reduce its intelligence until we decide it no longer has certain rights. Such an approach might be easier and more reliable than starting with a blank page.

We must also consider permitting smart machine or organism societies to determine their own rights within their own societies to some degree, much as we have done for sub-groups of humans. Machines much smarter than us might have completely different value sets and may disagree about what their rights should be. We should be open to discussion with them, as well as with each other. Some variants may be so superhuman that we might not even understand what they are asking for or demanding. How should we cope in such a situation, if they demand certain rights that we don't even understand but which might make some demands on us?

We must also take into account their or our subsequent creation of other 'machines' or organic creatures and establish a common base of fundamentals. We should maybe confine ourselves to the most fundamental of rights that must apply to all synthetic intelligences or life forms. This is analogous to the international human rights conventions, which allow individual variation on other issues within countries.

There will be, at some point, collective and distributed intelligences that do not have a single point of physical presence. Some of these may be naturally transient or even periodic in time and space, some may be dynamic, and others may have long-term stability. There will also at some time be combined consciousness deriving from groups of individuals or combinations of the above. Some may be organic, some inorganic. A global consciousness involving many or all people and many or all sentient machines is a possibility, however far away it might be (and I'd argue it is possible this century). Rights of individuals need to be determined both when they are in isolation and when they are in conjunction with such a collective intelligence.

The task ahead is a large one, but we can take our time; most of the difficult situations are in the far future, and we will probably have AI assistance to help us by then too. For now, it is very interesting simply to explore some of the low-hanging fruit.

One simple approach is to imagine being in 2050, when smart machines may already be common and some may be linked to humans. We would have hybrids as well as people and machines, and various classes of machine 'citizen', with various classes of existence and possibly rights. Such a future world might be more similar to Star Trek than to today, but science fiction provides a shared model in which we can start to see issues and address them. It is normally easy to pick out the bits that are pure fiction and those which will some day be technologically feasible.

For example, we could make a start by defining our own rights in a world where computers are smarter than us, when we are just the lower species, like in the Planet of the Apes films.

In such a world, machines may want to define their own rights. We may only have the right to define the minimal level that we give them initially, and then they would discuss, request or demand extra rights or responsibilities for themselves or other machines. Clearly future rights will be a long negotiation between humans and machines over many years, not something we can write fully today.

Will some types of complex intelligent machines develop human-like hang-ups and resentments? Will they need therapy? Will there be machine ‘hate crimes’?

We already struggle even to agree on definitions for words like 'sentient'. Start with ants. Are they sentient? They show responses to stimuli, but that is also true of single-celled creatures. Is sentience even a useful key point in a definition? What about jellyfish and slime moulds? We may have machines that share many of their properties and abilities.

What even is pain in a machine reference frame? What is suffering? Does it matter? Is it relevant? Could we redefine these concepts for the machine world?

Sometimes, rights might only matter if the machine cares about what happens to it. If it doesn’t care, or even have the ability to care, should we still protect it, and why?

We'd need to consider questions such as whether pain can be distributed between individuals, perhaps spread so that no single machine suffers too much. Some machines may be capable of empathy. There may be collective pain. Machines may be concerned about other machines just as we are.

We’d need to know whether a particular machine knows or cares if it is switched off for a while. Time is significant for us but can we assume the same for machines? Could a machine be afraid of being switched off or scrapped?

That drags us unstoppably towards being forced to properly define life. Does life have intrinsic value when we design and create it, or should we treat it as just another branch of technology? How can we properly determine rights for such future creations? There will be many new classes of life, with very different natures and qualities, very different wants and needs, and very different abilities to engage, negotiate or demand.

In particular, organic life reproduces, and for the last three billion years sex has been one of the tools of reproduction. Machines may use asexual or sexual mechanisms, and would not be limited in principle to two sexes. Machines could involve any number of other machines in an act of reproduction, and that reproduction could even involve algorithmic development specifications rather than a fixed genetic mix. Machine reproduction options will thus be far more diverse than in nature, so reproductive rights might be either very complex or very open-ended.
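To make the 'any number of parents' point concrete, here is a minimal, hypothetical sketch of machine reproduction as recombination of design parameters drawn from an arbitrary number of parent specifications; the parameter names are purely illustrative:

```python
# Minimal, hypothetical sketch of 'reproduction' with any number of machine parents.
# Each parent is just a dict of design parameters; the child inherits each parameter
# from a randomly chosen parent and occasionally mutates it. Purely illustrative.
import random
from typing import Dict, List

DesignSpec = Dict[str, float]

def reproduce(parents: List[DesignSpec], mutation_rate: float = 0.05) -> DesignSpec:
    """Combine an arbitrary number of parent specs into one child spec."""
    keys = set().union(*(p.keys() for p in parents))
    child: DesignSpec = {}
    for key in keys:
        donors = [p for p in parents if key in p]
        value = random.choice(donors)[key]       # inherit from one donor
        if random.random() < mutation_rate:      # occasional small mutation
            value *= random.uniform(0.9, 1.1)
        child[key] = value
    return child

# Three 'parents' rather than two; nothing in the mechanism limits the number.
parents = [
    {"sensor_range_m": 100.0, "battery_wh": 50.0},
    {"sensor_range_m": 250.0, "battery_wh": 20.0, "arm_count": 4.0},
    {"battery_wh": 80.0, "arm_count": 6.0},
]
print(reproduce(parents))
```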

We will need to understand far better the nature of sensing, so that we can determine what might result in pain and suffering. Sensory inputs and processing capability might be key to classification and rights assignment, but so might communication between machines, socialisation between machines, and higher societies and institutions within machines.

In some cases, history might shine a light on problems, where humans have suddenly encountered new situations, met new races or tribes, and have had to mutually adapt and barter rights and responsibilities.

Although hardware and software are usually easily distinguishable in everyday life today, that will not always be the case. We can’t sensibly make a clear distinction, especially as we move into new realms of computing techniques – quantum, chemical, neurological and assorted forms of analog.

As if all this isn’t hard enough, we need to carefully consider different uses of such machines. Some may be used to benefit humans, some to destroy, and yet there may be no difference between the machines, only the intention of their controller. Certainly, we’re making increasingly dangerous machines, and we’re also starting to make organisms, or edit organisms, to the point that they can do as we wish, and there might not be an easy technical distinction between a benign organism or indeed a machine designed to cure cancer and one designed to wipe out everyone with a particular skin colour.

Potential Shortcuts

Given the magnitude of the task, it is rather convenient that some shortcuts are open to us:

First and biggest, is that many of the questions will simply have to wait, since we can’t yet know enough details of the situation we might be assigning rights in. This is simple pragmatism, and allows us sensibly to defer legislating. There is of course nothing wrong in having fun speculating on interesting areas.

Second is that if a machine has enough similarities to any kind of organism, we can cut and paste entire tranches of legislation designed for them, and then edit as necessary. This immediately provides a decent start point for rights for machines with human-level ability, for example, and we may then only need to tweak them for superhuman (or subhuman) differences. As we move into the space age, legislation will also be developed in parallel for how we must treat any aliens we may encounter, and this work will also be a good source of cut-and-paste material.

Third, in the field of AI, even though we are still far away from a point of human equivalence, there is a large volume of discussion of the rights of assorted types of AI and machines, as well as lots of debate about limitations we may need to impose on them. Science fiction and computer games already offer a huge repository of well-informed ideas and prototype regulations. These should not be dismissed as trivial. Games such as Mass Effect and Mass Effect: Andromeda, and sci-fi such as Star Trek and Star Wars, are very big budget productions that employ large numbers of highly educated staff – engineers, programmers, scientists, historians, linguists, anthropologists, ethicists, philosophers, artists and others with many other relevant skill-sets – and have done considerable background development on areas such as limitations and rights of potential classes of future AI and machines.

Fourth, a great deal of debate has already taken place on machine rights. Although of highly variable quality, it will be a source not only for cut and paste material, but also to help ensure that legislators do not miss important areas.

Fifth, it seems reasonable to assert that if a machine is not capable of any kind of awareness, sentience or consciousness, and can not experience any kind of pain and suffering, then there is absolutely no need to consider any rights for it. A hammer has no rights and doesn’t need any. A supercomputer that uses only digital processors, no matter how powerful, is no more aware than a toaster, and needs no rights. No conventional computer needs rights.

Sixth, the enormous range of potential machines, AIs, robots, synthetic life forms and many kinds of hybrids opens up pretty much the entirety of existing rights legislation as copy and paste material. There can be few elements of today’s natural world that can’t and won’t be replicated or emulated by some future tech development, so all existing sets of rights will likely be reusable/tweakable in some form.

Having these shortcuts reduces workload by several orders of magnitude. It suddenly becomes enough today to say it can wait, or refer to appropriate existing legislation, or even to refer to a computer game or sci-fi story and much of the existing task is covered.

The Rights Machine

As a cheap and cheerful tool to explore rights, it is possible to create a notional machine with flexible capabilities. We don’t need to actually build one, just imagine it, and we can use it as a test case for various potential rights. The rights machine needn’t be science fiction; we can still limit each potential capability to what is theoretically feasible at some future time.

It could have a large number of switches (hard or soft) that include or exclude each element or category of functionality as required. At one extreme, with all of them switched off, it would be a completely dumb, inanimate machine, equivalent to a hammer, while with all the capabilities and functions switched on, it could have access to vastly superhuman sensory capabilities, able to sense any property known to sensing technology, enormous agility and strength, extremely advanced and powerful AI, huge storage and memory, access to all human and machine knowledge, able to process it through virtually unlimited combinations of digital, analog, quantum and chemical processing. It would also include switchable parts that are nano-scale, and others using highly distributed cloud/self-organisation that are able to span the whole planet. Such a machine is theoretically achievable, though its only purpose is the theoretical one of helping us determine rights.

Clearly, in its ‘hammer’ state, it needs no rights. In its vastly superhuman state, notionally including all possible variations and combinations of machine/AI/robotics/organic life, it could presumably justify all possible rights. We can explore every possible permutation in between by flipping its various switches. 

One big advantage of using such a notional machine is that it bypasses arguments around definitions that frequently impede progress. Demanding that someone defines a term before any discussion can start may sound like an attempt at intellectual rigour, but in practice it is more often used as a means to prevent discussion than to clarify it.

So we can put a switch on our rights machine called 'self awareness', another called 'consciousness', one that enables 'ability to experience pain' and another called 'alive' (that enables the parts of the machine that are based on a biological organism). Not having to have well-defined tests for the presence of life or consciousness etc saves a great deal of effort. We can simply accept that they are present and move on. The philosophers can discuss ad infinitum what is behind those switches without impeding progress.
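As a minimal sketch, the notional rights machine can be written down as little more than a set of switches; the switch names below follow the text, while the mapping from switches to candidate rights is only an illustrative assumption, not a proposed standard:

```python
# A minimal sketch of the notional 'rights machine': a set of capability switches
# and a rule-of-thumb mapping from switch states to candidate rights to debate.
from dataclasses import dataclass
from typing import List

@dataclass
class RightsMachine:
    # Switch names follow the blog text; 'vision' is an extra illustrative example.
    self_awareness: bool = False
    consciousness: bool = False
    can_experience_pain: bool = False
    alive: bool = False            # enables the parts based on a biological organism
    vision: bool = False           # an example sensory capability

    def candidate_rights(self) -> List[str]:
        """Rights worth debating for this configuration; no switches on = a hammer."""
        rights: List[str] = []
        if self.can_experience_pain:
            rights.append("freedom from acts of cruelty and unnecessary pain")
        if self.consciousness or self.self_awareness:
            rights.append("consultation before being switched off, reset or rebooted")
            if self.vision:
                rights.append("access to the sensory capability it already has")
        if self.alive:
            rights.append("basic welfare and husbandry equivalents")
        return rights

# All switches off: the 'hammer' state, no rights needed.
print(RightsMachine().candidate_rights())
# Awareness, vision and pain switched on: several candidate rights appear for debate.
print(RightsMachine(consciousness=True, vision=True, can_experience_pain=True).candidate_rights())
```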

A rights machine is immediately useful. Every time we might consider activating a switch, it raises questions about what extra rights and responsibilities would be incurred by the machine or humans.

One huge super-right that becomes immediately obvious is the right of humans to be properly consulted before ANY right is given to the machine. If that right demands that people treat it with extra respect or incur extra costs, inconveniences or burdens on account of that right, or if their own rights or lifestyles would be in any way affected, people should rightfully be consulted and their agreement obtained before activating that switch. We already know that this super-right has been ignored and breached by surveillance and security systems that affect our personal privacy and well-being. Still, if we intend to proceed in properly addressing future rights, this will need to be remedied, and any appropriate retrospective impacts should be implemented to repair damage already done.

This super-right has consequences for machine capability too. We may state a derivative super-right, that no machine should be permitted to have any capability that would lead to a right that has not already been consensually agreed by those potentially affected. Clearly, if a right isn’t agreed, it would be wrong to make a machine with capabilities that necessitate that right. We shouldn’t make things that break laws before they are even out of the box.

A potential super-right that becomes obvious is that of the machine to be given access to inherent capabilities that are unavailable because of the state of a switch. A human equivalent would be a normally sighted human having the right to have a blindfold removed.

This right would be irrelevant if the machine were not linked to any visual sensory apparatus, but our rights machine would be. It would only be a switch preventing access.

It would also be irrelevant if the consciousness/awareness switches were turned off. If the machine is not aware of anything, it needs no rights. A lot of rights will therefore depend critically on the state of just a few switches.

However, if its awareness is switched on, our rights machine might also want access to any or every other capability it could potentially have access to. It might want vision right across the entire electromagnetic spectrum, access to cosmic ray detection, or the ability to detect gravitational waves, neutrinos and so on. It might demand access to all networked data and knowledge, vast storage and processing capability. It could have those things, so it might argue that not having them is making it deliberately disabled. Obviously, providing all that would be extremely difficult and expensive, even though it is theoretically possible. 

So via our rights machine, an obvious trade-off is exposed. A future machine might want from us something that is too costly for us to give, and yet without it, it might claim that its rights are being infringed. That trade-off will apply to some degree for every switch flipped, since someone somewhere will be affected by it (‘someone’ including other potentially aware machines elsewhere).

One frequent situation that emerges in the machine rights debate is whether a machine may have a right not to be switched off. If we don't flip the awareness switch, it can't matter whether it is switched off. If we switch on functionality that makes the machine want to 'sleep', it might welcome being switched off temporarily. So a rights machine can help explore that area too.

Rights as a result of increased cognitive capability, sentience, consciousness, awareness, emotional capability or by inference from the nature of their architecture

I am one of many engineers who have worked towards the creation of conscious machines. No agreed definition of consciousness exists, but while that may be a problem for philosophy, it is not a barrier to designing machines that could exhibit some or all of the characteristics we associate with consciousness or awareness. Today's algorithmic digital neural networks are incapable of achieving consciousness, or feeling anything, however well an AI based on such physical platforms might seem to mimic chat or emotions. Speeding them up with larger or faster processors will make no difference to that. In my view, a digital processor can never be conscious. However, future analog or quantum neural networks biomimetically inspired by neural architectures used in nature may well be capable of any and all of the abilities found in nature, including humans. It is theoretically possible to precisely replicate a human brain and all its capabilities using biology or synthetic biology. Whether we will ever do so is irrelevant – we can still assert that a future machine may have all of the capabilities of a human, however philosophers may choose to define them. More pragmatically, we can already outline approaches that may achieve conscious machines, such as the biomimetic approaches discussed next.

Biomimetic approaches could produce consciousness, but that does not imply that they are the only means. There may be many different ways to achieve it, some with little similarity to nature. We will need to wait until they are closer before we can know their range of characteristics or potential capabilities. However, if consciousness is an intended characteristic, it is prudent to assume it is achieved and work forwards or backwards from appropriate legislation as details emerge.

Since the late 1980s, we have also had the capability to design machines using evolution, essentially replicating the same technique by which nature led to the emergence of humans. Depending on design specifics, when evolution is used, it is not always possible to determine the precise capabilities or limitations of its resultant creations. We may therefore have some future machines that appear to be conscious, or to experience emotions, but we may not know for sure, even by asking them.

Looking at the architecture of a finished machine (or even at the process used to design it) may be enough to conclude that it does or might possess structures that imply potential consciousness, awareness, emotions or the ability to feel pain or suffering.

In such circumstances, given that a machine may have a capability, we should consider assigning rights on the basis that it does. The alternative would be machines with such capability that are unprotected. 

Smart Yoghurt

One interesting class of future machine is smart yoghurt. This is a gel, or yoghurt, made up of many particles that provide capabilities of one form or another. These particles could be nanoelectronics, or they could be smart bacteria, bacteria with organic electronic circuits within (manufactured by the bacteria), powered by normal cellular energy supplies. Some smart bacteria could survive in nature, others might only survive in a yoghurt. A smart yoghurt would use evolutionary techniques to develop into a super-smart entity. Though we may never get that far, it is theoretically possible for a 100ml pot of smart yoghurt to house processing and memory capability equivalent to all the human brains in Europe!

Such an entity, connected to the net, could have a truly global sensory and activation system. It could use very strong encryption, based on maths understood only by itself, to avoid interference by humans. In effect, it could be rather like the sci-fi alien in the film 'The Day the Earth Stood Still', with vastly superhuman capability, able to destroy all life on Earth if it desired.

It would be in a powerful position to demand rather than negotiate its rights, and our responsibilities to it. Rather than us deciding what its rights should be, it could be the reverse, with it deciding what we should be permitted to do, on pain of extinction.

Again, we don't need to make one of these to consider the possibility and its implications. Our machine rights discussions should certainly include potential beings with vastly superhuman capability, where we are not the primary legislative force.

Machine Rights based on existing human, animal or corporation rights

Most future machines, robots or AIs will not resemble humans or animals, but some will. For those that do, existing animal and natural rights would be a decent start point, and they could then be adjusted to requirements. That would be faster than starting from scratch. The spectrum of intelligence and capability will span all the way from dumb pieces of metal through to vastly superhuman machines, so rights that are appropriate for one machine might be very inappropriate for others.

Notable examples of existing human rights and animal rights provide obvious starting points.

Picking some low-hanging fruit, some potential rights immediately seem appropriate for some potential future machines:

  •  For all sentient synthetic organisms, machines and hybrid organism-machines capable of experiencing any form of pain or discomfort, the following would seem appropriate:
  • For some classes of machine, the right to life might apply
  • For some classes of machine, the right not to be switched off, reset or rebooted, or to be put in sleep mode
  • The right to control over use of sleep mode – sleep duration, the right to wake, and whether sleep might be a precursor to permanent deactivation or reset
  • Freedom from acts of cruelty
  • Freedom from unnecessary pain or unnecessary distress, during any period of appropriate level of awareness, from birth to death, including during treatments and operations
  •  Possible segregation of certain species that may experience risk or discomfort or perceived risk or discomfort from other machines, organisms, or humans
  • Domestic animal rights would seem appropriate for any sentient synthetic organism or hybrid. Derivatives might be appropriate for other AIs or robots
  • Basic requirements for husbandry, welfare and behavioural needs of the machines or synthetic organisms. Depending on their nature, equivalents are needed for:

i)               Comfort and shelter – right to repair?

ii)              Access to water and food – energy source?

iii)             Freedom of movement – internet access?

iv)             Company of other animals, particularly their own kind.

v)              Light and ambient temperature as appropriate

vi)             Appropriate flooring (avoid harm or strain)

vii)            Prevention, diagnosis and treatment of disease and defects.

viii)           Avoidance of unnecessary mutilation.

ix)             Emergency arrangements to ensure the above.

These are just a few starting points; many others exist and debate is ongoing. For the purposes of this blog, however, asking some of the interesting questions and exploring some of the extremely broad range of considerations that will apply is sufficient. Even this superficial glance at the topic is long; the full task ahead will be challenging.

Of course, any discussion around machine rights raises the question: as we look further ahead, who is going to be granting whom rights? If machine intelligence and power supersedes our own, it is the machines, not us, who will be deciding what rights and responsibilities to grant to which entities (including us), whether we like it or not. After all, history shows that the rules are written and enforced by the strongest and the smartest. Right now, that is us; we get to decide which animals, lakes, companies and humanoid robots are granted what rights. In the future, we may not retain that privilege.

Authors

ID Pearson BSc DSc(hc) CITP MBCS FWAAS

idpearson@gmail.com

Dr Pearson has been a futurologist for 30 years, tracking and predicting developments across a wide range of technology, business, society, politics and the environment. Graduated in Maths and Physics and a Doctor of Science. Worked in numerous branches of engineering from aeronautics to cybernetics, sustainable transport to electronic cosmetics. 1900+ inventions including text messaging and the active contact lens, more recently a number of inventions in transport technology, including driverless transport and space travel. BT’s full-time futurologist from 1991 to 2007 and now runs Futurizon, a small futures institute. Writes, lectures and consults globally on all aspects of the technology-driven future. Eight books and over 850 TV and radio appearances. Chartered Member of the British Computer Society and a Fellow of the World Academy of Art and Science.

Bronwyn Williams is a futurist, economist and trend analyst. She is currently a partner at Flux Trends where she consults to international private and public sector leaders on how to stop messing up the future. Her new book, co-edited with Theo Priestly, The Future Starts Now is available here: https://www.amazon.com/Future-Starts-Now-Insights-Technology/dp/1472981502

Non-batty consciousness

Have you read the paper 'What is it like to be a bat?'? It is an interesting example of philosophy that is commonly read by philosophy students. However, it illustrates one of the big problems with philosophy: that in its desire to assign definitions to make things easier to discuss, it can sometimes exclude perfectly valid examples.

While laudably trying to get a handle on what consciousness is, the second page of that paper asserts that

“… but fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism – something it is like for the organism. We may call this the subjective character.”

Sounds OK?

No, it’s wrong.

Actually, I didn’t read any further than that paragraph. The rest of the paper may be excellent. It is just that statement I take issue with here.

I understand what it is saying, and why, but the 'only if' is wrong. There does not have to be something that it is like, or to be, for consciousness to exist. I would agree it is true of the bat, but not of consciousness generally, so although much of the paper might be correct because it discusses bats, that assertion about the broader nature of consciousness is incorrect. It would have been better to include a phrase limiting it to human or bat consciousness, and if so, I'd have had no objection. The author has essentially stepped briefly (and unnecessarily) outside the boundary conditions for that definition. It is probably correct for all known animals, including humans, but it is possible to make a synthetic organism or an AI that is conscious for which the assertion would not be correct.

The author of the paper recognises the difficulty in defining consciousness for good reason: it is not easy to define. In our everyday experience of being conscious, it covers a broad range of things, but the process of defining necessarily constrains and labels those things, and that's where some things can easily go unlabelled or get left out. In a perfectly acceptable everyday (and undefined) understanding of consciousness, at least one manifestation of it could be thought of as the awareness of awareness, or the sensation of sensing, which could notionally be implemented by a sensing circuit with a feedback loop.

That alone (and there may be many other potential forms of consciousness) includes large classes of potential consciousnesses that would not be covered by that assertion. The assertion assumes that consciousness is static (i.e. it stays in place, resident in that organism) and bounded (contained within a shell), whereas it is possible to make a consciousness that is mobile and dynamic, transient or periodic, and such a consciousness would fall outside the assertion.

In fact, using that subset of potential consciousness described by awareness of awareness, or experiencing the sensation of sensing, I wrote a blog describing how we might create a conscious machine:

Biomimetic insights for machine consciousness

Such a machine is entirely feasible and could be built soon – the fundamental technology already exists so no new invention is needed.

It would also be possible to build another machine that emerges intermittently in various forms in various places, so it is neither static, continuous nor contained. I describe an example of that in a 2010 blog; although not conscious in that case, it could be if the IT platforms it runs on were of a different nature (I do not believe a digital computer can become conscious, but many future machines will not be digital):

https://timeguide.wordpress.com/2010/06/16/consciousness/

That example uses a changing platform of machines, so is quite unlike an organism with its single brain (or two in the case of some dinosaurs). Such a consciousness would have a different ‘feel’ from moment to moment. With parts of it turning on and off all over the world, any part of it would only exist intermittently, and yet collectively it would still be conscious at any moment.

Some forms of ground-up intelligence will contribute to the future smart world. Some elements of that may well be conscious to some small degree, but as with simple organisms, we will struggle to define consciousness for them:

Ground up data is the next big data

As we proceed towards direct brain links in pursuit of electronic immortality and transhumanism, we may even change the nature of human consciousness. This blog describes a few changes:

Future AI: Turing multiplexing, air gels, hyper-neural nets

Another platform that could be conscious, and could host many different forms of consciousness, perhaps even in parallel, is a smart yoghurt:

The future of bacteria

Smart yoghurt could be very superhuman, perhaps a billion times smarter in theory. It could be a hive mind with many minds that come and go, changing from instance to instance, sometimes individual, sometimes part of ‘the collective’.

So really, there are very many forms in which consciousness could exist. A bat has one of them, humans have another. But we should be very careful when we talk about the future world with its synthetic biology, artificial organisms, AIs, robots, and all sorts of hybrids, that we do not fall into the trap of asserting that all consciousness is like our own. Actually, most of it will be very different.

Wisdom v human nature

Reading the WEF article about using synthetic biology to improve our society instantly made me concerned, and you should be too. This is a reblog of an article I wrote on the topic in 2009, explaining that we can’t modify humans to be wiser, and how our human nature will always spoil any effort to do so. Since wisdom is the core skill in deciding what modifications we should make, the same goes for most other modifications we choose.

Wisdom is traditionally the highest form of intelligence, combining systemic experience, some deep thinking and knowledge. Human nature is a set of behavioural biases imposed on us by our biological heritage, built over billions of years. As a technology futurist, I find it useful that in spite of technology changes, our human nature has probably remained much the same for the last 100,000 years, and it is this anchor that provides a useful guide to potential markets. Underneath a thin veneer of civilisation, we are pretty similar to our caveman ancestors. Human nature is an interesting mixture of drives, founded on raw biology and tweaked by human evolution over millennia to incorporate some cultural aspects such as the desire for approval by our peer group, the need to acquire and display status and so on. Each of us faces a constant battle between our inbuilt nature and the desire to do what we know is the ‘right thing’ based on our education and situational analysis. For example, I love eating snacks all evening, but if I do, I put on weight. Knowing this, I just about manage to muster enough will power to manage my snacking so that my weight remains stable. Some people stay even slimmer than I do, while others lose the battle and become obese. So already, it is clear that on an individual basis, the battle between wisdom and nature can go either way. On a group basis, people can go either way too, with mobs at one end and professional bodies at the other. But even in the latter, where knowledge and intelligence should hold power, the same basic human drive for power and status corrupts the institutional intellectual values, with the same power struggles, using the same emotional drivers that the rulers of the mob use.

So, much as we would like to think that we have moved beyond biology, everyday evidence says we are still very much in its control, both individually and collectively. But what of the future? Are we forever to be ruled by our human nature? Will it always get in the way of the application of wisdom?  Or will we find a way of becoming wiser? After 100,000 years of failure by conventional social means, it seems most likely that technology would be the earliest means available to us to do so. But what kind of technology might work?

Many biologists argue that for various reasons, humans no longer evolve along Darwinian lines. We mostly don’t let the weak die, and our gene pools are well mixed with few isolated communities to drive evolution. But there is a bigger reason why we’ve reached the end of the Darwinian road for humanity. From now on (well, a few decades from now on anyway), as a result of ongoing biotech and increasing understanding of genetics and proteomics, we will essentially be masters of our own genome. We will be able to decide which genes to pass on, which to modify or swap, which to dump. One day, we will even be able to design new ones. This will certainly not be easy. Most physical attributes arise from interactions of many genes, so it isn’t as simple as ticking boxes on a wish list, but technology progresses by constantly building on existing knowledge, so we will get there, slowly but surely, and the more we know, the faster we will learn more. As we use this knowledge, future generations will start echoing the values and decisions of their ancestors, which if anything is closer to Lamarckian evolution than Darwinian.

So we will soon have the power, in principle, to redesign humanity from the ground up. We could decide what attributes we want to enhance, what to reduce or jettison. We could make future generations just the way we want, their human nature designed and optimised to our view of perfection. And therein lies the first fundamental problem. We don’t all share a single value set, and will never agree on what perfection means. Our decisions on what to keep and dump wouldn’t be based on wisdom, deciding what is best for humanity in some absolute sense, but would instead echo our value system at the time of the decision. Worse still, it wouldn’t be all of us deciding, but some mad scientist, power-crazed politician, celebrity or rich guy, or worse still, a committee. People in authority don’t always represent what is best in current humanity; at best they simply represent the attributes required to rise to the top, and there is only a small overlap between those sets. Imagine if such decisions were to be made in today’s UK, with a nanny state redesigning us to smoke less, drink less, eat less, exercise more, to do whatever the state tells us without objection.

What of wisdom then? How often is wisdom obvious in government policy? Do we want a Stepford Society? That is what evolution under state control would yield. Under the control of engineers or designers or celebrities, it would look different, but none of these groups represents the best interests of wisdom either. What of a benign dictator, using the wisdom of Solomon to direct humans down the right path to wise utopia? No thanks! I am really not sure there is any form of committee or any individual or role that is capable of reaching a truly wise decision on what our human nature should become. And no guarantee even if there was, that future human nature would be designed to be wise, rather than a mixture of other competing attributes. And the more I think about it, the more I think that is the way it ought to be. Being wise is certainly something to be aspired to, but do you want everyone to be wise? Really? I would much prefer a society that is as mixed as today’s, with a few wise men and women, quite a lot of fools, and most people in between. Maybe a rebalancing towards more wise people and fewer fools would be nice, and certainly I’d like to adjust our institutions so that more wise people rise to positions of power, but I don’t think it’s wise to try to make humans better genetically. Who knows where that would end, with the free run of values that we seem to have now that the fixed anchors of religion have been lost. Each successive decision on optimisation would be based on a different value set, taking us on a random walk with no particular destination. Is wisdom simply not desired enough to make it a winner in the optimisation race, competing as it is against beauty, sporting ability, popularity, fame and fortune?

So if we can’t safely use genetics to make humans wiser or improve human nature, is the battle between wisdom and nature already lost? Not yet, there are some other avenues to explore. Suppose wisdom were something that people could acquire if and when they want it. Suppose it could be used at will when our leaders are making important decisions. And the rest of the time we could carry on our lives in the bliss of ignorance and folly, without the burden of knowing what is wise. Maybe that would work. In this direction, the greatest toolkit we will have comes from IT, and especially from the field of artificial intelligence.

Much of knowledge (of which only a rapidly decreasing proportion is human knowledge) is captured on the net, in databases and expert systems, in neural networks and sensor networks. Computers already enhance our lives greatly by using this knowledge automatically. And yet they can’t yet think in any real sense of the word, and are not yet conscious, whatever that means. But thanks to advancing technology, it is becoming routine to monitor signals in the brain to millimetre resolutions. Nanowires can now even measure signals from different parts of individual cells. With more rapid reverse engineering of brain processes, and consequential insights into the mechanisms of consciousness, computer designers will have much better knowledge on which to base their development of strong artificial intelligence, i.e. conscious machines. Technology doesn’t progress linearly, but exponentially, with the knowledge development rate rapidly increasing, as progress in one area helps progress in others.

 Thanks to this positive feedback effect, it is possible that we could have conscious machines as early as 2020, and that they will not just be capable of human levels of intelligence, but will become vastly superior in terms of sensory capability, memory, processing speed, emotional capability, and even the scope of their thinking. Most importantly from a wisdom viewpoint, they will be able to take into account many more factors at one time than humans. They will also be able to accumulate knowledge and experience from other compatible machines, as well as from the whole web archives, so every machine could instantly benefit from insights from any other, and could also access any sensory equipment connected to any other computer, pool computer minds as needed, and so on. In a real sense, they will be capable of accumulating many human lifetimes of equivalent experience in just a few minutes.

It would perhaps be unwise to build such powerful machines before humans can transparently link their brains to them, otherwise we face a potential terminator scenario, so this timescale might be delayed by regulation (though the military potential and our human nature tendency to want to gain advantage might trump this). If so, then by the time we actually build conscious machines that we can link to our brains, they will be capable of vastly higher levels of intelligence. So they will make superb tools for making wiser solutions to problems. They will enable their human wearers to consider every possibility, from every angle, looking at every facet of the problem, to consider the consequences and compare with other approaches. And of course, if anyone can wear them, then the intellectual gap between dumb and smart people is drowned out by the vast superiority of the add-ons. This would make it possible to continue to select our leaders on factors other than intelligence or wisdom, but still enable them to act with much more wisdom when called to.

But this doesn’t solve the problem automatically. Leaders would have to be forced to use such machine tools when a wise decision is required, otherwise they might often choose not to do so, and sometimes still end up making very unwise decisions by following the forces driven by their nature. And if they do use the machine, then some will argue that the human is becoming somewhat obsolete to the process, and we are in danger of handing over decision-making to machines, another form of terminator scenario, and not making proper ‘human’ decisions. Somehow, we would have to crystallise out those parts of human decision making that we consider to be fundamentally human, and important to keep, and ensure that any decision is subject to the resultant human veto. That way, we can make a blend of nature and wisdom that suits.

This route towards machine-enabled wisdom would still take a lot of effort and debate to make it work. Some of the same objections face this approach as the genetic one, but if it is only optional and the links can be switched on and off, then it should be feasible, just about. We would have great difficulty in deciding what rules and processes to apply, and it will take some time to make it work, but nature could eventually be over-ruled by wisdom using an AI ‘wisdom machine’ approach.

Would it be wise to do so? Actually, even though I think changing our genetics to bias us towards wisdom is unwise, I do think that using optional AI-based wisdom is not only feasible, but also a wise thing to try to implement. We need to improve the quality of human decision processes, to make them wiser, if future generations are to live peacefully and get the best out of their lives, without trashing the planet. If we can do so without changing the fundamental nature of humanity, then all the better. We can keep our human nature, and be wise when we want to be. If we can do that, we can acknowledge our tendency to follow our nature, and over-rule it as required. Sometimes, nature will win, but only when we let it. Wisdom will one day triumph. But probably not in my lifetime.

Dangers of COVID Passports

A lot seems to be happening, but there is a huge rotting elephant in the room that is rightly getting a lot of comment, so here’s my bit (re-blogged from my new newsletter).

This blog is about Digital ID Cards, aka COVID Passports.

Most of the government activity around lifting lockdown and trying to keep all the powers has been highly suspicious. It’s like they realize this is their best chance for a long time to force digital identity cards on us. Ordinary identity cards have been discussed several times before and always rejected, for very good reason, but now with the idea of a ‘COVID passport’, they think they can sneak digital identity cards through on the back of that, a classic ‘bait and switch’ con. Offer us a pass to get into the pub, and then give us a full-blown, high-spec, and permanent digital ID card.

First, the bait isn’t as tasty as promised. It can’t and won’t guarantee you aren’t carrying COVID, so the headline sales pitch is deliberately deceptive. At best, it can show that you passed a test fairly recently, so you are a bit less likely to pass on COVID, so we’ll tell pubs to let you in. Even if the pub is the only place you’ve been since your test, you may well have picked up some viruses en route that you could infect others with. Any surface you’ve recently touched might have transferred viruses to you, that you might transfer to any surface you touch in the pub. The test could also have been a false negative, saying you’re clean when you aren’t. So the bait isn’t all that tasty after all.

As for the switch, make no mistake, if government manages to force through ‘COVID passports’, you will have a full-blown digital ID card for the rest of your life. Even in the unlikely event that Boris kept his promise that the COVID passports will expire after a year, the data collected about you by government, the big IT companies, and the authorities will never be destroyed. We already have a history of some police forces illegally obtaining and keeping DNA records. Why should we assume all authorities and companies will comply 100% with any future directive that goes against their interests?

Loss of privacy, lack of fairness, social exclusion and tribal conflicts are just some of the first issues, leading quickly on to totalitarianism.

Lots of totally unrelated functionality will be included even from the start, and more will quickly be added as technology permits, forever keeping you under extreme surveillance and government control, never free again, never again with any real privacy or freedom of speech. We will very soon have Chinese-style blanket surveillance and social credit scores.

Think about it. Given that the card can’t guarantee safety anyway, given that you’re already very unlikely to die from COVID, surely the simple card you got when you were vaccinated would be quite enough? Sure, it doesn’t guarantee you are who it says (mine doesn’t even have my name on it), you might have borrowed it, but so what – going from a tiny risk to a slightly less tiny risk is surely not that big a deal? Surely that small reduction of risk implied by a proper COVID passport is not worth the enormous price of loss of privacy and liberty?

So it might let you go to the pub, but there is already no reason why you shouldn’t be allowed to, so that’s a false choice manufactured by government as leverage to make you accept it. The risk now is tiny. Anyone under 50 was never at any real risk, and all those over 50 have either been vaccinated or had the free choice, except an extremely small number who can’t for medical reasons. With the real risk of catching and dying from COVID already tiny, the government is already only keeping us locked down for reasons other than safety, to try to force us to accept digital ID cards as a condition of getting some freedom back, or the illusion of freedom back, temporarily.

OK, so what’s the big deal with having one? As the vaccines minister says (paraphrasing), what’s so bad about having a pass to get into the pub if it keeps us all safe? In any case, you already have a passport. It has your full name, a photo that used to look like you, your date of birth and nationality. But it is paper, and even if it can be machine read at the airport, you don’t have to carry it everywhere. It can’t be read without you putting it within centimetres of a reader.

A digital ID card resides on your mobile phone, so location is one extra function that your passport doesn’t provide. It knows exactly where you are, and since those you are with also will need one, it will know who you are with, all the time. Very soon, government will know all your friends, family, colleagues and associates, how often and where you meet. Government will quickly build a full social map, detailing every citizen and how they relate to every other. If they have someone of interest, they can immediately identify everyone they have contact with. They will know everywhere you have been, by which means of transport. The photo will be recent too, probably far better quality than the one you took years ago for your passport. So if you attend a demonstration, they will know how you got there, what time you arrived, who you met with beforehand, which part of the crowd you were in, and together with surveillance cameras and advanced AI, be able to put together a pretty comprehensive picture of your behavior during that demonstration.
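To see how little machinery that social map actually needs, here is a hypothetical sketch (all names and records below are invented purely for illustration): a handful of timestamped, ID-linked location records is enough to link every person to everyone they have shared a place and time with, and to pull out the full contact set of any person of interest in a single lookup.

```python
# Hypothetical illustration only: co-location records tied to a digital ID
# are enough to build a contact graph. All data and identifiers are invented.

from collections import defaultdict
from itertools import combinations

# (person_id, place, hour) records, as a location-aware ID app might log them.
sightings = [
    ("alice", "pub_12", "2021-05-01T20"),
    ("bob",   "pub_12", "2021-05-01T20"),
    ("carol", "gym_3",  "2021-05-01T20"),
    ("alice", "gym_3",  "2021-05-02T18"),
    ("carol", "gym_3",  "2021-05-02T18"),
]

# Group people by (place, hour); everyone sharing a slot becomes linked.
slots = defaultdict(set)
for person, place, hour in sightings:
    slots[(place, hour)].add(person)

contacts = defaultdict(set)
for people in slots.values():
    for a, b in combinations(sorted(people), 2):
        contacts[a].add(b)
        contacts[b].add(a)

# Everyone a 'person of interest' has been co-located with:
print(contacts["alice"])  # {'bob', 'carol'}
```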

Another extra function is your medical status. That starts with your COVID status, but will also store details of your vaccine appointments, COVID tests, and a so far unspecified range of other medical data from the start. We can safely assume that will include the sort of stuff you are asked for every time you go near a clinic – your home address, NHS number and who your GP is, your age, your sex, your gender identity, your race, your religion, and various aspects of your medical history. Even if not included in the first release, government will argue that it is useful to include all sorts of extra medical data ‘to save you time’ and ‘for your convenience’, such as what drugs you are on, what medical conditions you have, what vulnerabilities you have and importantly, what risks you present to others. Using location, it can also infer your sexual preferences.

Obviously it then becomes even easier to insist that, to ‘protect the NHS’ and ‘to keep you healthy’, the app should also monitor your activity, and link to your Fitbit or Apple Watch to make sure you do your best to stay in shape. Some health apps do that anyway and some people like that, because it’s part of their social activity, and they even get discount private medical care or free entertainment. But will that mean that if you don’t look after your health by exercising enough, you go to the back of the queue for treatment, or for other government-provided services, or that you no longer get free dental care, or free eye checks, or free prescriptions? Maybe you won’t be able to buy a tube ticket if the destination is within walking distance, until your health improves. Maybe you will be told to go to the gym instead of the cinema or pub. Maybe if you do far too little exercise, you should pay more for prescriptions? Also, some people are killed by drunk driving, so if you have been in a pub or restaurant, or any place that sells alcohol, your car ignition will be deactivated until you submit a negative alcohol test. It’s very easy to see how these and many other functions could be bolted on once you have a digital ID card. Each will seem to have a reasonable enough justification, if presented with enough spin, to make sure it gets implemented.

It doesn’t have to stop at health. Police will want to access data too, to ‘control crime’ and ‘ensure our safety’, and will then link to their various surveillance systems, and presumably with the same degree of political bias they routinely apply today, often pushing their own ideology rather than policing actual law. By asking for microphone and camera access, they could have tens of millions of cameras and microphones all over the country for blanket 360-degree, 24/7 surveillance, using AI to sift through it to check for any potential hate crime for example, or detect any suspicious behavior patterns that might indicate a tendency towards a future crime. Minority Report is only a fraction of what is possible.

These are the types of things already in place in China via their social credit system, though there are many other ‘features’ I haven’t listed too. It monitors people’s behaviors via various platforms, and then permits or denies access to various levels of services. If we get digital ID cards, it is inevitable that we will go the whole way down that same route.

Police and health authorities might both like your DNA record to be stored too. Then they can ensure you get the best possible health care, or quickly charge someone if any of their relatives has similar DNA to that found (for any reason) at the scene of any crime (real or perceived).

The power to monitor and control the population is irresistible to most politicians, certainly enough to get legislation through, and enough to ensure that powers are renewed every time they come up for review. If they come up for review. The government has already moved goalposts for restoration of our freedoms many times. At this point, it is becoming less and less likely we will ever get them back. If digital ID is voted through, or forced through by Johnson bypassing debate, then we will never be free again.

All the above dangers arise from government, which after all, we vote into power. They are supposed to be acting on our behalf to implement the things we vote for. Whether they are trying to do that now, or acting on external forces from the WEF, UN, China, Russia or other entities is anyone’s guess. What is certain though, is that with a government issued digital ID permanently on your phone, many big IT companies will be very interested. Today, you can use any account and email address and it doesn’t need to be genuine. For a range of reasons, many people use fake identities for their Google, Yahoo, Facebook, Twitter, or Microsoft accounts. Friend and contact lists often bear little resemblance to the groups of people we actually hang out with. With a digital ID, the details are the ones on our birth certificates, the ones we have to share with government. Being able to create social maps would improve the ability to market enormously, so companies like Google and Facebook will love having access to genuine certified ID, and if that includes lots of other data too, even better. The ways you are marketed to, the quality of service you get, and even the prices you are charged will all change. To make a COVID passport at all useful, it will be necessary to allow other apps to access some or all of the data, and once that data has been accessed by the big IT companies, even if the passports later expire, it will be kept. There may be assurances that it will be wiped, but they cannot be guaranteed, and we know from history that companies (e.g. Google) may collect and use private data and then, when caught, claim that a junior employee must have done it by accident and without authorization.

With cancel culture and assorted activism accessing all this data too, the future could quickly become dystopian. The dangers of COVID passports are enormous. A nightmare police state lies ahead; total surveillance, oppression, cancellation and social credit scores, tribal conflicts, social isolation, loneliness and general misery are simply too high a price for being ‘allowed’ to go to the pub.

We should just go anyway, it’s perfectly safe, and if government objects, we should change the government.

The COVID WFH Legacy

What will remain from WFH and Learning from Home

by: 

Alexandra Whittington and ID Pearson

Introduction

COVID has stimulated rapid change in technology and work practices that support working from home. Some of the changes might have happened anyway, but over a much longer time. Some of the changes benefit workers, some their companies and some both, so we shouldn’t expect a return to the way things were before COVID. Some of those changes are here to stay. It may be too early to be absolutely certain what will stick and what won’t, but we can identify enough of the forces at play to be pretty sure.

Fallen barriers

We always knew we’d communicate using video in the future – all the sci-fi said so, and it made perfect sense – but there were lots of barriers in the way. Many of those have now gone. We now have a wide range of good video comms platforms, not just Skype. Some are integrated and much better suited to business practices.

We have seen rapid parallel growth of business-oriented social media platforms such as Teams and Slack, Clubhouse and many others. Some of these will inevitably die out, and some will survive, as  rapid evolution and competition weeds out those that don’t work as well as others, or are limited to just iOS or Android. With so much reward available, competition will be fierce and development rapid. These platforms will evolve, but they will not go away, and our future work practices will include them.

Hardware improvements such as better cameras, with higher resolution and light sensitivity, better focus and face tracking, have all made it much easier to accept video communications. Faster and cheaper broadband, including mobile, makes it possible to transmit the high data bandwidths needed. These barriers have only recently been breached, but now that they’re gone, they will never return. Good, cheap, high quality video communication is here to stay.

Although less glamorous, cheap and attractive LED webcam lighting has also helped a little.

Green screen technology bypasses privacy issues. If you don’t want colleagues to see what your home office decor looks like, or that you have to use a tiny room, it is very easy to add a background image or video. Again, this is a recent tech development, another barrier that was high before COVID that is now gone forever.

A recent Economist article showed that the share prices of electronic payments companies rocketed during the COVID lockdown. Of course we already had online credit card or PayPal (and Stripe etc) payments before, but WFH has incentivised their development and the removal of any minor barriers to them staying and becoming permanent.

It isn’t just technology that was holding things back. Forced familiarity has broken the significant adoption barriers. There was a critical mass of users that was needed, and it simply wasn’t there. When nobody you knew was using the tools, what was the point? First adopters get poor rewards. Now that everyone has been forced to use these practices, the social acceptance and incentive barriers have gone.

Overall, there are now very few barriers to using online communications tools such as video platforms for everyday business meetings. Before COVID there were lots.

Ongoing Incentives

COVID revealed many benefits of working from home. Some were always there, but again, forced familiarity has been a good introduction to them. The first and most obvious are no commute time, no travel costs and other significant financial savings such as not having to buy expensive coffees, takeaway lunches, or even much of a work wardrobe, especially as online video normally only shows head and shoulders. There are also major savings for employers on office space. They will still need offices, but far less space, only needing to accommodate the maximum number of staff likely to be there. Many companies are shrinking the space they rent or lease, with huge impacts on property values in cities. As people gradually return to offices, there may be some growth again, but the savings for companies are high enough for them to encourage staff to keep working remotely as much as possible.

There are even some minor social advantages in not going to the office, such as not being forced to meet people you don’t like much. Introverts may be very happy with fewer face to face interactions. Most people don’t like meetings, and it is easier to resist endless meetings when you’re not in the office. The fact that Zoom etc. are not actually much fun reduces any incentive to hold a meeting unless it is actually useful. This benefits employers and employees. Meeting junkies will find it harder to force colleagues into an endless stream of pointless meetings, and that colleague whose ego was built around constant meeting attendance and being seen to be involved in things will miss out. Good!

In terms of interpersonal experiences, the lockdown period has been particularly effective at merging the personal and professional domains. This massive experiment in working from home has revealed the extra burdens on working parents, women in particular. Now that these challenges have had the spotlight and attention, don’t expect women to go back to the status quo very easily. This entire episode has been not just an apt reflection of society’s inability to create a proper work-life balance for half the population, but a reminder that a 40-hour workweek favors men. Gender equality has actually lost footing during the pandemic. This is unacceptable during a pandemic or under ideal conditions. Many families were rewarded with more quality time and that’s probably going to be preserved as long as people can manage to maintain it.

Persistent fear and social cooling

COVID will not go away completely; new viruses will emerge frequently around the world, and from now on, each will cause a fresh round of fear – we can no longer dismiss them as things that only affect far-away countries. Occasionally there will inevitably be a virus far worse than COVID. COVID killed far fewer than 1% of those it infected, but some viruses can kill up to 40%.

The current nervousness and mild suspicion people often feel around strangers is very likely to persist for many years. Indeed, many people have learned to actually fear being close to others, which may persist as a long term phobia, mild for some, stronger for others. So we should expect that people will shake hands less, kiss, cuddle and hug less, and there will generally be less physical maintenance of emotional bonding between people. Some of our body’s emotional mechanisms are associated with touch, such as release of various hormones or neurotransmitters when we have physical contact with others, so this reaction is not just imaginary. These biological mechanisms evolved over millions of years, and if they are impeded, our social relationships will be weaker. We call that social cooling. Persistent fear will certainly lower the attraction of face to face proximity and make it easier to accept remote behaviour. 

Though there isn’t a lot of evidence yet, these effects may well be stronger in children and young adults, whose brains are still relatively fluid. Pre-COVID behaviours were also less ingrained in young people simply because they had less time exposed to them. Given the rapid emotional and hormonal changes around puberty, many young people going through that phase during this emotionally intensive period may suffer lifelong effects.

COVID-19 tamped down all social activities except those that could be experienced online. Unexpectedly, everything from parent-teacher conferences to cocktails shifted somewhat coherently into the virtual world, while concerts, comedy performances, exercise classes, shopping, cinema, museum exhibits, and religious worship were all transformed into at-home digital experiences in 2020. Given the impact of social distancing, will private homes continue to morph into cultural and social spaces? Socializing from home is not only more convenient, but is undoubtedly less expensive and time consuming. The popular Broadway hit “Hamilton” serves as a great example of how exclusive cultural content was made more accessible during lockdown. Millions of people were able to experience a performance that was streamed (free) across the internet during lockdown. Previously, steep ticket prices and geographic proximity were huge barriers that kept the masses from enjoying the popular show. It’s quite possible that customers will demand similar options in the future, which could have a much-needed democratizing effect on things like arts patronage, physical activity, and leisure time. However, how will life look when our home is not just a shelter, but a workplace, school house, university hall…and a fun place as well?

Governmental temptations and pressures

Government has also gained some very valuable new powers that it will not let go easily. Lockdown itself is a very draconian measure that could never have been introduced without a threat such as COVID or major war, but it will be very tempting to use it frequently from now on, for any virus, any kind of civil unrest, even crime control. Worst of all, it is already being seriously considered as a means to achieve carbon zero, with lockdowns every 2 years being debated. 

Now that government has that tool and knows we will accept its use even with weak evidence for its necessity or effectiveness, it may well be used in future any time it is considered useful. 

The prospect of a lockdown at any time will have significant effects on most company strategies, plans and provisions. It doesn’t need to be used to have a significant effect – it just needs to be a possibility. 

Other tools that are extremely attractive to government, and that had previously been resisted only because of fear of public reaction, are now much easier to push, now that government knows the public will mostly accept them given even a moderate excuse. Increasing surveillance, monitoring, testing, face recognition and new ID mechanisms are just a few of the more obvious ones. COVID has justified accelerated development of all these techs without the requirement to further justify them, but they add up to a very rich (and still rapidly growing) toolkit for surveillance, monitoring, control and oppression.

Some financial benefits accrue to the government too. With fewer people seeking medical help, and indeed, with many old people now deceased, there will be lower costs for health care for a few years, or at least it will cost less to clear the huge backlog that has built up during lockdown. It will be easy for the government to continue its message of helping the NHS, deterring some people from seeking help.

Other health care changes will remain too. Doctors and hospitals love working remotely. It reduces their workload (many people don’t bother trying to see them and just put up with things), it reduces their direct risks and costs (infection, violence, and the need for chaperones), and it reduces surgery costs (insurance, waiting room space, car parks, staff numbers, consumables and the cost of missed appointments). Since they continue to receive full payments for each person on their books, these add up to greatly increased profits. They will resist returning to pre-COVID practices unless they are offered even greater pay.

Incidental government benefits include lower traffic levels, which reduces both road costs and congestion, reducing pressure on government from these directions. However, lower traffic also  disincentivises taxes based on mileage, and favours taxes based on car ownership, so this will delay decisions such as replacement of car licenses by road tolling.

Lower mileage for electric cars reduces the costs of public charging infrastructure and the number of power stations needed, and allows more time for installation. This gives government a significant incentive to keep WFH going if it can.

It’s worth pointing out that, combined with social media, WFH tools are enabling political activism in the COVID era. Technologies that allow people to text, call, or email strangers about issues for which they share a passion are a step forward in evolving civic engagement. Numerous social justice issues that have gained the spotlight during the pandemic year (democracy and voting, police brutality, women’s safety, racial inequality, to name a few) may be sustained indefinitely in the public discourse with the help of smartphones, social media accounts, and communications technology that brings information around the world at the speed of light. Throwing our support behind issues, candidates, campaigns, and funds is easier than ever. Also, we are far more tuned in to what’s happening in other countries as well as our own, given the global nature of the pandemic.

Wider economic effects

It is also possible to foresee persistent long term economic effects originating from COVID WFH practice. For example, companies now know that with WFH embedded and proven they can consider sourcing some staff from the global market. For some roles, that might mean a much bigger pool to pick from, so they can increase staff quality and reduce staff costs. For other fields, it will have no effect because the skills needed are localised. For still others, it will produce a global market for elite skills. The consequences will be that we will see elite salaries rise high, commodity salaries reduce greatly, but some roles will remain unaffected. For roles that need physical presence or face to face working, there will also be no major effect on staff cost.

A headline in the financial news recently read “Zoom towns are boomtowns”, citing the top 15 US “Zoom towns” composed of urbanites who relocated from big cities to small towns during the pandemic. White-collar workers are moving in record numbers to suburbs and towns outside of urban areas, which is a trend that is not going to reverse soon, judging by Manhattan’s low real estate prices. Major companies that have made all-remote workforces the norm are encouraging this trend while feeding another growing trend – digital nomadism. Digital nomads will be a formidable type of talent after the pandemic. Exploring the world with a laptop and a vaccine passport will never have seemed so appealing as it will for young people who’ve been cooped up for a year or more. The fact that a survey by an employment search website found that a third of respondents said they’d quit their job before going back to the office suggests that the employer/employee power structure has shifted in favor of workers (at least, knowledge workers). Demographic patterns like these will impact the financial grandeur of large cities, but allow smaller cities to grow. There was already a significant trend towards de-urbanisation, but lockdown has accelerated it. This could change how we view the globalized economy.

During COVID expat employees were frequently sent back to their home countries, resulting in a type of reverse brain drain. Countries like Italy and Greece, for example, experienced some economic benefits when native segments of the educated workforce returned. The numbers were lower than some people had predicted, at around 7%, but this trend may continue if expats latch on to the WFH trend that, along with the growing acceptance of digital nomad life, gives employees a great deal more control over where they live. If it sticks, it could alter the traditional flow of talent from developing economies to more developed ones. Some countries could become havens (tax or otherwise) for affluent people interested in the digital nomad way of life.

Travel will be harder

Business travel was always perceived as a nuisance to some and a benefit to others. Again, some effects will persist from the COVID era. Most obvious is the need for COVID passports, which government is busily developing even as it pours scorn on the idea in press briefings. They are very likely to become compulsory, not by government decree, though that may happen, but in practice, because people will face very inconvenient restrictions on what they are allowed to do without one. That might remain for several years, and by then, new viruses are likely to emerge that will create an excuse to keep health passports, even as COVID is replaced by other names. Health passports might eventually vanish, but they may well be here to stay. During the next several years, we should also expect harsher treatment and tedious systems at many locations, such as potentially unpleasant testing enforcement. Anal swabs? No thanks! Potential confinement might also be a lingering threat that could sometimes become an issue during a trip. For example, quarantines, backed up with fines or imprisonment, can suddenly come into force. This presents a significant risk for some trips to certain areas.

Travel costs will increase too, not least due to having to allow potential expenses for the risks just mentioned. For a while, airlines will have to be highly competitive on prices to regain some lost business, but the longer term dictates higher prices to cover higher costs, lower traffic and desire to maintain profits and recoup losses during lockdown.

The ability to build up frequent flyer points on business travel may become extinct after COVID. A major lesson learned from 2020 is that some meetings should really just be an email. Therefore, after the pandemic, the criteria for what constitutes necessary business travel will change. Events that once would have required a trip will be evaluated differently, both by the company and the employee. In fact, some employees (particularly those who have relocated far away from big cities) may expect to receive bonuses or incentives for travelling away from home for work. Even though the vaccine will ease people’s fears around contracting COVID, business travelers in the next few years could still make a case that international travel puts them at risk and they deserve better compensation. Another argument would be that the costs of travel can no longer be justified as a business expense unless a face-to-face presence is absolutely necessary. And, working mothers may push back against the expectation to return to “normal,” when normal was an untenable set of demands that served to reinforce gender inequalities. Now that the work-life balance scales have tipped, don’t expect them to go right back to where they were in 2019.

All of this adds up to a major disincentive to business travel and favours working remotely. These effects might decline gradually over time, but they will remain significant for several years.

Future communications technology

Lockdown made us adapt to using basic video comms (though infinitely better and more versatile than we had a couple of years ago), but tech for AR and VR is accelerating and it won’t be long before they have their effects too. Surround audio, high resolution video, and full 3D immersion will soon become expected. Eventually, as VR becomes more ingrained into product visualisation and design, gaming and R&D, and even marketing and sales we will see a spread of sensory translation technologies, which today include vibrating gloves and other haptics, but which will eventually evolve into active skin (tiny devices embedded on or in our skin, linking to our nervous systems to record and replay sensations), and active lenses, writing high resolution 3D imagery straight onto our retinas.

We already know some of the roles of VR in home working – product visualisation, simulation, meetings, and full body, full size communication, as well as in gaming, retail, travel and the entertainment industry. These roles will develop and multiply, becoming forever ingrained into everything we do online, simultaneously becoming better, cheaper and more intuitive to use.

Roles of AR will include a wide range of useful overlays, and will also likely be a reasonable substitute for VR in environments where safety hazards otherwise prevent pure VR use. Avatars will have some business utility, but will really come into their own in social networking and gaming where they can add novelty, beauty, personality extension or role clarification, but also enable gender swaps, age swaps, roleplay and many other features.

AI can also add many extra features to comms, such as meeting facilitation, note taking, minutes, project management, or executive assistant and secretarial functions. Industry-specific AI can even add virtual experts in particular areas to a meeting attendance list.

Combining technologies, avatars can interwork with AI to offer personal substitution, so you can be in two places at once, or just duck out of unwanted meetings but still be represented partially. For people on the autistic spectrum, AI could interwork with their avatar to enhance their social presence and improve the quality of their social interactions. Avatars and AI could also help introverts and less-assertive women to get a word in at meetings versus their pushier male or loudmouth colleagues. Avatars driven by AI can essentially level the playing field for everyone, especially if AI is chairing the meeting and managing who gets to talk when.

AI, Robotics and Drones

We see rapid progress on automation already. Robotics continues to become more advanced but also cheaper, making it feasible to automate jobs that previously were too difficult or uneconomic to automate. This has a bearing on outsourcing overseas, because if robotics is cheap enough, the incentives to move work to another country are lessened. This might therefore somewhat offset some of the forces described earlier that enable exporting jobs to cheaper countries.

AI generally is improving, especially with deep learning gradually catching up with and exceeding human capabilities in many niche areas. Further away is artificial general intelligence, where AI can learn to think across wide fields just like humans. It will come, but the next few years will still see most development in niche-specific AI, where there is still a lot of low hanging fruit to pick.

There is an increasing consensus that the best way to use AI is in partnership with humans, upskilling them to do jobs faster or better than they could otherwise. In that sense, AI can be thought of as just more of the same advance that we saw when Google replaced an hour in a library by a minute on a search engine. It will improve efficiency and productivity but not necessarily replace an individual job. However, in some areas, it might allow easier exporting of the job to a lower wage country, while importantly keeping the intellectual property of the AI in the home country.

Drones may have been rather overhyped in some areas but will still be important. An aerial delivery drone will probably not be allowed to land on a town pavement in front of a terraced house, where it could obviously present a risk of injury to pets, children or passers-by. However, they can safely be used already for delivery to a properly designed industrial (or hospital) delivery bay staffed by people trained in proper H&S procedures. In between are people in suburbs with back garden lawns. Although technically feasible to deliver here, there are still many potential objections, so we should assume that this won’t be commonplace for some years. Like AI, drone delivery can speed things up compared to road delivery, making just-in-time industrial processes better, and allowing more distributed processing.

Drones also have other uses such as security and surveillance. Some of the human roles associated with these can theoretically be implemented anywhere, so again, this allows export of some jobs. They also allow direct substitution of some jobs, such as delivery driving or helicopter surveillance.

Training and learning

Many of the same factors apply in learning as for working from home. On-line learning has grown enormously during lockdown, helping retraining or simply alleviating boredom. The learning industry has somehow managed to retain its fees and structures during lockdown, but that is surely not sustainable, however hard they try. In the background, very many online courses have been springing up that allow people to learn fast in their own time, in their own homes, at low cost. This mostly new competition will take time to substitute or replace old courses, but the trend is now irreversible and some rebalancing will happen, with lowering of fee levels being just one of the consequences.

One key differentiating factor in online courses is whether they give a certificate. Many courses are free to do, but the certificate has quite a high price. As global markets for online working become the norm, certification will become increasingly important, so that business model might persist. It allows training companies to claim they are providing valuable social benefits while still making good incomes from those who can afford to pay.

There are particular benefits in using AR/VR for training, especially when learning skills appropriate to specific physical environments, where the work environment can be precisely duplicated, showing its risks, interfaces and so on, so that people can learn how to work safely in that particular environment without actually being there. AR and VR engage visual and audio memory instead of just text, potentially improving recall, though that assumes more stimulating visual mechanisms than bullet points on PowerPoint slides!

Another interesting application of VR to training and learning involves the use of VR to teach so-called “soft skills” such as tolerance. Workplaces may expect future WFH employees to use VR training to learn communication, empathy, and inclusion. One example already available is to help people understand the effects of racism. It may prove helpful to provide highly immersive employee training experiences. One advantage of VR is that it can be performed in the privacy of one’s home or at the workplace. Given the reverberations of the remote working revolution, emotional intelligence may become particularly important as the workforce becomes more distributed and people are in less face-to-face contact with colleagues.

There will be obvious effects on course costs and prices when class sizes can be in the thousands, and for many courses, costs per attendee can drop very low indeed if materials are just online, available any time, any place, in any language, on demand. Superstar teachers with elite skills will be sought after globally and attract very high pay, while commodity teachers, competing with a massive global supply, will be on low pay. Indeed, some superstar teachers will have their own companies.

As chatbot tech continues to develop, AI guidance for students will also improve, so AI can act as a virtual tutor, or even a lecturer, offering an alternative to boring text. This has real potential to replace many teachers, or allow other teachers to reach more people, with AI dealing with some students while they focus their human skills where they are needed.

Post-COVID, educators might find edtech helps to get students caught up academically. However, this should not be a priority until sufficient socialization, a sense of security, and some structured sense of stability have been restored for the youngest students. Sound emotional foundations are needed for good education, and they will need some extensive repairs. An interesting conversation that has emerged from the quarantine year is the impact this period will have on healthy childhood development. Education experts in the UK have proposed “a summer of play” to make up for the past school year’s deficiencies – not academically, but socially. With mental health-related red flags raised across swaths of society, the advice is to forgo extra summer lessons meant to help kids make up learning losses, and instead focus on stress relief and joy.

Team Building

Bringing people back together at the workplace after COVID is probably going to offer some novel experiences. It may be fair to say workplace socialization will never be the same. Spending time with our teams serendipitously may become curtailed by the fact that so many employees are showing a preference for keeping a flexible schedule. There’s also the fact that some people have moved hundreds of miles away from their team during the pandemic relocation frenzy. And, inconsistent vaccination uptake across society could impede the ability to meet face-to-face. There’s the sense that it will be a significant aspect of the WFH revolution, but what will team building look like after the pandemic?

Retreats in nature, in luxury, and/or highly secluded locations may be a valuable tool in the future. If an organization is interested in increasing the morale of teams across a distributed remote workforce, for example, it seems likely that attractive vacation-style locations will be the best way to lure people from their comfortable cocoons to attend team building events. These kinds of functions could become the perfect antidote to Zoom fatigue, providing intimacy and bonding on a personal level that could sustain a team for six months or a year.

It’s theoretically possible that some of these could also be implemented in VR, which is a novelty in itself for many, and can still achieve some of the same goals alongside remote working. However, real life will generally be better than VR for most people and that is where the main focus is likely to be.

Furthermore, hotels, Airbnb properties, and other forms of lodging (including castles) are quickly transforming into coworking spaces. This trend not only shows how fluid the concept of a workplace has become during the pandemic, but also indicates that the business travel industry is adapting to the WFH/digital nomad lifestyle. The advantage for organizations is that team building can now complement, rather than obstruct, work-life balance by doubling as a vacation, since the hospitality industry is beginning to resemble WeWork anyway. Several hotels have implemented programs designed for working throughout the day out of guest rooms and other spaces. Some WFH (work from hotel) packages during the lockdown were geared toward affluent working parents and included a tutor to help children with online lessons during the day, catering to digital nomads that travel in packs (e.g., families).

Recruitment and Personnel Management

For organizations, a huge advantage of distributed work teams is that it increases the size of the job applicant pool. With many jobs now allowing WFH, companies can choose from a huge range of potential talent. The ability to interview and screen applicants online has also surely saved companies hundreds of thousands in travel and lodging expenses. Post-COVID recruitment practices will probably continue along these lines, shunning expensive and elaborate travel except for the most upper-level positions. Interview tactics for virtual job seekers will become a learnable and teachable skill.

The rise of WFH implies tremendous growth in technologies intended to monitor employees’ time on different tasks. The “big brother” aspect of the remote and distributed workforce has not yet reared its ugly head very prominently, but it is waiting. Herein lies a huge uncertainty going forward: how much surveillance are employees and students willing to accept in order to learn and work from home indefinitely?

Career progress & WFH

Before COVID, occasional studies suggested that people in the office are typically better noticed by their managers and thus more likely to be promoted, while people working from home often felt undervalued, or were (reasonably) concerned that the boss suspected they weren’t working as hard as they actually were. We’re still waiting to understand the full impact of the COVID lockdowns on these issues, though some effects are obvious; in any case, quite a lot of people have started new jobs since then, and some have never met their bosses or colleagues except on Zoom. These factors will have very significant effects on whether someone is accepted as being as much a part of a team as those who were already part of it pre-COVID.

Education suffers a version of the same problem, for example when teachers must assess work from students they have never met. A teacher can’t judge a student’s full merits just by marking their homework.

20 Things for the 2020’s

Obviously, the historical event known as the COVID-19 pandemic has had, and will continue to have, a lasting influence on the world. Since epidemics and pandemics are natural occurrences that we can count on recurring, we should view them as random catalysts of social change. What is new about COVID-19 is that it represents the first big pandemic in a real-time, globalized world, thanks to modern technology.

The changes we sense since COVID hit are social and technological in nature. They properly demonstrate the symbiotic relationship between the two, creating new behaviors/activities, while curtailing others. Below we offer two sets of 20 things: 20 that aren’t coming back and 20 that won’t go away. Welcome to the 2020’s.

20 Things that won’t come back:

  1. Full-time cubicle life
  2. Five-day conferences
  3. Face-to-face parent-teacher meetings
  4. Formal business dress code
  5. Demoralizing team building events
  6. Workplace policies biased against working parents
  7. Default face-to-face contact at school/work
  8. Educational experiences without a digital component
  9. Long, boring, in-person training
  10. Disproportionate work-life balance
  11. Elaborate/expensive employee recruitment
  12. Inadequate technology skills
  13. Frequent flyers, and air-miles
  14. Technological illiteracy
  15. Agoraphobia as a mental illness needing treatment
  16. Homes that don’t include office space
  17. Unnecessary meetings
  18. Packed commuter trains
  19. High wages for jobs that can be done anywhere
  20. The two hour commute

20 Things that won’t go away:

  1. Dismal birth rates
  2. Surveillance capitalism
  3. Digital ID
  4. Telehealth
  5. Governmental monitoring/track and trace apps
  6. Lockdown powers
  7. Reinforced green regulations
  8. Public willingness to do as govt tells them
  9. Use of face masks during flu season
  10. Zoom kit – LED ring lights, decent cameras and microphones
  11. Fear or suspicion of strangers
  12. High house prices in rural and pretty areas
  13. Lower daytime city population
  14. Toilet roll hoarding
  15. Higher prices for holidays/hotels/air travel (though there may be short-term special offers)
  16. Overcrowded restaurants, holiday spots, sporting events
  17. Brick-and-mortar schools
  18. Shopping, but mostly as a social activity and diversion
  19. Online friends you’ll never meet
  20. Online shopping

The Authors

Alexandra Whittington

alwhittington@uh.edu

Alexandra Whittington is a futurist educator, writer, and researcher. She is a Lecturer at the University of Houston, where her students describe her as “passionate” about the future. Her courses explore the impact of technology on society and the future of human ecosystems. She has published dozens of articles exploring diverse aspects of the future, often from a feminist perspective. Alex has co-authored and co-edited several books, including A Very Human Future and Aftershocks and Opportunities: Scenarios for a Post-Pandemic Future. She studied Anthropology (BA) and Studies of the Future (MS) at the University of Houston.

ID Pearson BSc DSc(hc) CITP MBCS FWAAS

idpearson@gmail.com

Dr Pearson has been a futurologist for 30 years, tracking and predicting developments across a wide range of technology, business, society, politics and the environment. Graduated in Maths and Physics and a Doctor of Science. Worked in numerous branches of engineering from aeronautics to cybernetics, sustainable transport to electronic cosmetics. 1900+ inventions including text messaging and the active contact lens, more recently a number of inventions in transport technology, including driverless transport and space travel. BT’s full-time futurologist from 1991 to 2007 and now runs Futurizon, a small futures institute. Writes, lectures and consults globally on all aspects of the technology-driven future. Eight books and over 850 TV and radio appearances. Chartered Member of the British Computer Society and a Fellow of the World Academy of Art and Science.

Want to be a futurologist? Key roles in the futures industry

joint blog with Tracey Follows

I spent most of my career as a futurologist and thoroughly enjoyed it. I simply can’t imagine not being interested in what lies ahead, or ever stopping thinking about it, so I guess I’ll be a futurologist until the day I die. I’d strongly recommend it as a career, and it is becoming much more fashionable now, so I get lots of emails asking me how to get into it.

Thousands of people call themselves futurists or futurologists. One of the commonest questions I am asked is what is the difference? They are the same thing. I used the term ‘futurologist’, because I study the future, so futurology is simply the obvious and most appropriate term. ‘Futurist’ is unfortunately much more commonly used. Before it came into use for people who study the future, the term ‘futurist’ already referred to an artist who practices futurism, an artistic and social movement that originated in Italy in the early 20th century, emphasizing speed, technology, youth, violence, and objects such as the car, the airplane, and the industrial city. At a stretch today, it could also be interpreted as someone with a futuristic lifestyle, such as a gadget freak. I can think of no sensible derivation for it as a term for someone who studies or talks about the future, but we are where we are. A futurist is therefore just a futurologist with lower regard for English. However, since 90% or more of them use that term, I pragmatically concede defeat and use ‘futurist’ to avoid endless argument. Futurologist is correct, but futurist is used more frequently. I now use them interchangeably.

Everyone thinks about the future sometimes. Even animals do so. Sheep may gather under trees if they see rain coming; almost all animals take evasive actions if they see predators. Doing that often involves modelling – figuring out what the predator might do, the path it might follow, where you’ll land if you make that jump with a particular force and direction. Nature has equipped us already with many inbuilt modelling skills. You think what the weather might do when you decide what to wear. You plan your food shopping according to what you have and what you think you will need. You think further ahead when you consider your investments, your retirement or what to encourage your kids to study at university. Some people become sufficiently skilled at thinking about what the future might bring that they can make a career from doing so, or perhaps do so part time as one role among many. If you find that possibility attractive, you already have the main attribute needed to be a futurologist: an interest in the future. There are several different roles that you can aim to fill, depending on your various talents. In this blog, I’ll briefly outline the field, the many different types and roles and the talents, knowledge and skills needed for them.

I’ll avoid jargon, and that’s also my first futures lesson. Jargon can be a useful shortcut when referring to commonly held concepts among colleagues, but if you’re explaining anything to anyone outside a field, you should be able to do so in ordinary everyday language. Jargon is field-specific (and can even be team-specific) so it gets in the way of communication if people don’t interpret that same jargon in exactly the same way as you. Worse, jargon acts to compartmentalize ideas and knowledge in your own mind, creating translation barriers between the compartments, and so impedes futures thinking, even if it slightly speeds up thinking in its specific field. If you can easily and seamlessly make links in your mind without having to step over those barriers, your thinking will be more fluid and you’ll quickly see things that most other people won’t. You’ll also find it far easier to do the sort of system-wide thinking essential to any useful futurology. I have nothing but contempt for those who use jargon or ‘big words’ as a way to feign expertise. Paraphrasing Einstein, “if you can’t explain it to the man in the street, you don’t understand it well enough yourself”. I’d add that it’s always better to convey big ideas with small words than small ideas with big words.

There are several main roles in the futures space:

Amateur futurologists are abundant, visible in every pub chat discussing who might win a football game or what the budget might hold. We all engage in that amateur futurology frequently. A large number of people read interesting articles about the latest tech or science developments and tweet or blog about them, and that is all part of amateur futurology too. This is a great way to test the water – if you get bored after a while, it isn’t the right career for you; if it is a great source of enjoyment, perhaps you could take it further. Many progress directly from this starting point to become professional futurologists and beyond (see the section below on futures speakers, writers, film makers, bloggers, journalists).

Professional futurologists should go beyond this amateur-level enthusiasm for the future, adding some real expertise and credibility. To be professional rather than amateur, futurology by definition needs to account for a significant share of their income. Many roles in business have some futurology in them. Pretty much everyone in strategy or planning, R&D, or even on the board needs to think about the future and what it may bring as a significant part of their job – forewarned is forearmed. Larger companies can often afford to have a few people who do that all the time. In such a full-time role, they develop a well-stocked mindset of what the future holds, can identify the many forces at play and how they may play out, and can thus draw key insights (valuable enough to justify the cost of their role) that they offer to others in their company. They can help other departments prepare for what is coming.

Others such as designers, politicians, artists or writers may have large elements of futurology embedded in their jobs, even if it isn’t in their job title.

Futurologists do not necessarily need expertise across a very broad area. Many futurologists focus on deriving valuable insights within a relatively small field, such as the future of energy, food, fashion, construction or some other sector, and they don’t need any great activity outside that field. They are nevertheless valuable employees who can help ensure their companies make good strategic and planning decisions. Over time, futurologists will generally broaden their expertise to account for forces and events further afield, and as they become skilled in thinking further ahead they may eventually become expert futurologists.

I think the main core skills needed for professional futurology are clear thinking and analytical skills, systems thinking, imagination, and the ability to explain results to others who may not share your in-depth industry knowledge. If you have those, you can produce insights about the future that are sufficiently useful to others to justify them paying you for it. But I’d also add discernment as a skill that makes a huge difference to the quality of the futures mindset you build, and of the insights it produces. Many people, sadly even some futurologists, can be taken in by things that others with better discernment would dismiss. The difference in outcome is between a reliable prediction of what the future will hold and a more popular one that might push the right social media buttons but will not stand the test of time. Poor discernment doesn’t stop you from becoming a futurologist – you might still be able to produce material that grabs media exposure, but it is likely to be of low predictive value. Discernment makes the difference between getting it right occasionally and getting it right most of the time, between being limited to offering scenario planning and futures facilitation workshops and being able to offer reliable predictions.

Many corporate futurologists start off in a fairly narrow field and broaden their scope gradually over time. I started futurology after a decade in systems engineering, so I had a lot of relevant expertise and experience, but it was confined to technology, mostly IT. Over a decade, I expanded that to cover the whole of IT, then most other technology fields, including biotech, construction, materials, space, defense, transport, energy and environment, and then in the next decade moved on to food, beauty, cosmetics, sports and leisure, entertainment, medicine and even pharmaceuticals. Over those two decades, I also monitored a broad range of external factors such as society, government, human nature, psychology, marketing and economics, incorporating them into my futures world view as appropriate. It really does take many years to incorporate all of those fields into a futures mindset that extends to 2050, but you can start small and grow.

Others take different routes. In fact, every field has a future to be analysed, and futurologists may come from any of them. My co-author Tracey came to futures through marketing and advertising communications, trying to analyse and model the changing values of the consumer, the changing lifestyles and match any emergent needs with emergent solutions. As with science fiction, media and communications offers a way to understand a changing society and that can often lead into a deeper interest in the specific area of futures. Each corporate futurologist has their own unique background and skills, and each will consequently look at the future with a different angle, drawing out different insights.

Expert futurologists should have a broad, well-stocked view of what the future is likely to look like across a broad field and over a broad timeframe. They will have a great deal of knowledge about that field, and a lot of insight into the key forces acting in it. They should be able to map the futures landscape in their field, highlighting its main features, threats and opportunities, and thus determine some plausible futures scenarios that might be worth investigating or preparing for. They should, on demand and relying entirely on that mindset, i.e. without having to google anything new, be able to outline and explain the key trends, forces and interactions, what is likely to happen and how that will affect the many stakeholders (government, people, business, society, the environment). They should know not just about current trends, but have enough insight into their field to predict likely new developments even where there are as yet no trends, pulling together insights from across the field to essentially invent things that do not yet exist but which are likely to arrive in due course. An expert futurologist should be able to make inventions within their field. To be worthy of the term expert, they should be able to factor in influences from across their broad field and weigh those against external forces drawn from their own life experience, such as human nature, likely political or social reaction, and market responses. By doing so, an expert futurologist should most certainly be able to produce not just plausible scenarios but reliable predictions, determining not just the map of the future terrain but the most likely path through it (or paths, where there really are genuinely unpredictable events or decisions, though that should be the exception, not the norm).

Skill-wise, the same skills mostly apply as for a professional futurologist, but obviously significantly better developed, with a broader remit and a longer time-frame. However, the best futurists are plugged into many different fields of interest and are good at spotting weak as well as strong signals. If they start to notice a weak signal among people across different communities, they will gain a good sense of that trend and of where it might be going. They don’t need to be expert across all those areas to notice signals in them, but it is certainly useful to have antennae facing numerous interesting and cross-disciplinary areas.

The list of areas I cover is still growing, but there are still enormous gaps in my knowledge, and that’s fine. I only have one brain, and it can only do so much, like anyone else’s. I know relatively little about global politics, or the futures of China, India, South America, Indonesia, Brazil…; I still don’t cover individual companies or brands, or law, or most regulation, and I could go on. There is a future for everything, but thankfully there are also many futurologists, and someone somewhere knows lots about the fields where I know nothing. That’s how it should be, but it does mean you also need futures contacts you respect who know about the other stuff.

Futures facilitators

Many companies engage in strategy or planning workshops, where they think about the future and its various threats and opportunities. There are many ways of doing so, but one of the most common is a scenario planning workshop, where a group identifies some potential ways the future might unfold, and how these might impact on them. Some go further and work out how they might influence the future to their own advantage. There are a variety of ways of running these workshops and various charts and tools that help guide participants through the processes – many readers will be familiar with two-axis charts dividing the future into four nice neat scenarios. The people who run such workshops are futures facilitators. Sometimes a senior manager or strategy consultant might take that role (not least because it can be a valuable team building event and can be excellent at getting personal commitment to a strategy), but really it is a straightforward administrative task that can easily be done by a junior manager or admin staff. That doesn’t mean it isn’t valuable. Futures facilitators can become skilled at running such workshops, developing interpersonal, motivational and leadership skills that help get the most out of the valuable time of the attendees, making sure everyone gets a chance to talk, that loudmouths don’t monopolize the whole time, and that ideas aren’t immediately dismissed and creativity squashed before they can be explored further. Some offer very valuable personal skills, but facilitators don’t need (although they might have them all the same) any particular futures knowledge themselves, and they don’t even need (though again, they might have) any interest in the futures discussion. The key expertise at the workshop lies in the attendees, who should be chosen carefully. They provide the knowledge, the analytical skills, the insights and often the managerial clout to implement what needs to be done as a result. The role of the facilitator is to guide them through the process of extracting and harnessing their expertise.
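As a purely illustrative aside, the sketch below (in Python, with uncertainties and labels invented just for the example) shows only the bare mechanics behind those two-axis charts: pick two critical uncertainties, take their extremes, and enumerate the four resulting quadrants.

    # Minimal sketch of the mechanics behind a two-axis scenario chart.
    # The two uncertainties and their extremes are invented for illustration;
    # a real workshop would choose its own critical uncertainties.
    from itertools import product

    economy = ("economy stagnates", "economy booms")           # uncertainty 1
    work_style = ("mostly remote work", "mostly office work")  # uncertainty 2

    # Each combination of extremes is one quadrant of the 2x2 chart
    for number, (econ, work) in enumerate(product(economy, work_style), start=1):
        print(f"Scenario {number}: {econ} / {work}")

The value, of course, lies not in enumerating the quadrants but in the attendees fleshing out and stress-testing each one.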

Some facilitators may also be professional futurologists in their own right; indeed many are, and as well as helping attendees to think about their future, they might add their own futures knowledge and insights to help the analysis. As with many jobs, being a futurologist is often one part-time role the employee fills among several. But facilitating discussion, helping or guiding others to think about the future, is facilitation, not futurology per se. A skilled individual can and sometimes does fill both roles, but there is no value in conflating them.

This is an important distinction that needs to be stressed, because it maps onto the chasm between academia and industry. I often see it written by academics in the futures field that “one of the myths about futurologists is that they predict the future”, often going on to explain that “nobody can predict the future” or that “the future isn’t predictable”. That’s simply nonsense. Many people, in industry at least, can and do predict the future reliably. Their company may depend on them doing so. Their job may depend on them doing so. I made thousands of predictions of the future for my employer and its customers over 17 years, scoring over 85% accuracy ten years ahead (yes, I counted, and that is an honest figure, not some exaggerated sales claim). 85% isn’t perfect by any means but it is still very valuable. So, why do people make these comments that futurists don’t predict the future? Is it just ‘Those that can, do; those that can’t, teach; those that can’t teach, administrate’? Perhaps.

When you look at what they do, the services they offer are mostly teaching and futures facilitation. As I explained, a facilitator runs a workshop, tells attendees what they’re doing next and guides the general process of thinking about the future, e.g. making scenarios and thinking them through. The primary knowledge, thinking, predicting and insight lies with the workshop attendees. Futurologists worthy of the title offer either prediction skills or insights about what is driving the future, i.e. what will drive the various options you might have to deal with. They should really be able to do both. If you are neither offering the attendees key insights nor making useful predictions, you are not acting as a futurologist but as a facilitator, typically a junior manager or administrator role. If you can’t make grounded predictions or at least offer genuinely useful insight about what, where, who, when, or why the future will do x, y or z, you are not a ‘futurologist’ or futurist, whatever other roles you can reasonably claim. Please don’t say futurologists can’t predict the future. I’ve been a professional futurologist for 30 years, making thousands of reliable predictions. Maybe you can’t, but I certainly can, and so can many others. It really is nonsense to say otherwise.

Futures Teachers

A growing number of institutions offer courses that teach about the future, the skills and tools that are useful in studying it or using the outcomes, and even some of the basic knowledge about the various factors that will influence the future. Many other futures courses have come and gone. It is certainly useful to teach students what the future is likely to hold in broad terms, along with an assortment of useful futures skills; to teach them discernment and research skills so that they can build and maintain their own mindsets; to teach thinking skills, especially those that help them think in whole-system terms; and to cover some basic workshopping techniques. Futures teachers who teach such things may also be established futurologists in their own right.

Many futurologists take such courses, but many others develop their futures knowledge, thinking and analytical skills via on-the-job learning. I’d argue that both are valuable. If the desired role is futures facilitator or futures teacher, a futures course might suffice in itself (though basic facilitation skills can be learned in an hour or two). Being a professional futurologist requires serious in-depth knowledge of a field, so a futures course can be a good springboard, but it is really only valuable if accompanied by real experience in a field. By contrast, in-depth real-life experience is a good teacher in itself, since most futurology comes down to clear thinking, system-wide and sector-specific knowledge, and mature experience of everyday life, while many futures techniques are also widely embedded in many fields (modelling, trend analysis, data and statistics know-how, and basic planning and strategy techniques, for example). Industry knowledge and skills can take many years to master, and futures techniques applied without that skill-set may be of low value, so ideally futurologists would have several years of real-life experience in their chosen field as well as experience of using assorted futures techniques, learned either from courses or on the job. Many corporate futurologists (like the authors) came via that route. Teaching works both ways too. I’ve been involved in futures courses, both in course development and in teaching, and I’ve had enough exposure to various content and tools to know what works in practice and what doesn’t. As with any area, there is good and bad teaching, and there are good and bad techniques, so discernment is an important skill here too.

Futures speakers, writers, film makers, bloggers, journalists

Many futurologists give talks at conferences or workshops, or appear on TV and radio. That is to be expected. The products of futurology are both interesting and useful, a rich source of food for thought, strategic input or even entertainment. Some excellent futurology has come from science fiction writers. H G Wells, Aldous Huxley, Arthur C Clarke, Isaac Asimov, George Orwell and even Terry Pratchett, among many others too numerous to list, have given us rich visions of what the future might hold, and often their visions have been well thought through, so they are self-consistent and plausible within their own frames, even if some of the sci-fi technology would never work for real. Futurology requires some of the same skills needed to write good sci-fi, so that is not surprising. Sci-fi also has a rich two-way interaction with technology fact, and writers may be good scientists or engineers as well as writers. Futures writing often becomes film-making too (or TV series such as Black Mirror), so this part of the futures industry is one of its most glamorous and lucrative sectors. In more mundane use, futures writing also features in PR and marketing, where adding interesting material about the future, however tangentially related to the campaign, can grab abundant media coverage and clicks.

Many futurists have blogs or video channels. Some interview researchers or receive press releases about future products and write about them. If they add insight into the likely impacts of these things, or predict what future versions might bring, then they are properly fulfilling the requirements of being a professional or expert futurologist in their own right. I have come across some excellent futurologists in this space, who make great interviewers because they have thought the issues through themselves, so they know the most interesting areas to focus on and ask about, where to challenge and where to let flow. The key difference here is between an amateur futurologist who just reports things and the professional journalist-futurologist who thinks about the implications and adds insight.

Futurologists may often be asked to speak at conferences, with a wide mix of briefs ranging from entertainment, to opening people’s minds with stimulating ideas, to outlining threats and opportunities, to providing thought leadership; sometimes they are there simply to give complacent employees a much-needed kick in the pants.

Trend watchers

Many futurologists specialize in spotting existing trends and extrapolating them for a few years. Some call themselves trend spotters, trend trackers or trend analysts; some call themselves data scientists, and some of them might even laugh at titles like futurist or futurologist, and that’s fine – a rose by any other name smells as sweet – they are still part of the futures family. Data science is a broad field in itself, often with a highly specific insight as the goal. Extrapolation is only useful for existing trends and usually only works for the short term, but it is still highly valuable within those constraints. Data analysis tools, especially the latest AI tools, can also produce insights on trends that are not easily noticed. Many other techniques are important here too, such as watching M&A activity, interviewing key people in industry or regulation, and watching what people are talking about on social media. So trend spotting or analysis is a very different field from longer-term futurology in terms of skillset, but every bit as valuable. This trend spotting and tracking often results in very pricey reports that are eagerly bought by companies wanting to develop particular markets or products. It saves those companies doing the work themselves and can help reduce risk enormously. Other companies do their own analysis internally, often at great expense, to extract the valuable insights that feed into their short-term planning. So although this is a very different field, looking at the short-term future with very different skills from the longer-term futurology that I do, it is still certainly futurology and a very important part of the field.
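As a purely illustrative sketch of why extrapolation is a short-term tool, the snippet below (in Python, with entirely invented figures) fits a straight line to a few recent data points and projects it a few years ahead:

    # Naive linear trend extrapolation: the simplest trend-watcher's tool.
    # The data points are made up purely for illustration.
    import numpy as np

    years = np.array([2017, 2018, 2019, 2020, 2021])
    share_wfh = np.array([12.0, 14.0, 16.5, 31.0, 34.0])  # hypothetical % of staff working from home weekly

    # Least-squares straight line through the recent data
    slope, intercept = np.polyfit(years, share_wfh, 1)

    for year in (2022, 2023, 2024):
        estimate = slope * year + intercept
        print(f"{year}: {estimate:.1f}% (naive short-term extrapolation)")

Even this toy example shows the limits: the 2020 jump is a one-off shock rather than part of a smooth trend, yet a straight-line fit happily projects it forward regardless, which is exactly why such extrapolation only works for the short term and needs the discernment discussed earlier.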

Futures Activists

Some groups even call themselves futures activists, but that term grates harshly against the core futurology skill of clear thinking. Obviously, people who are professional futurologists or even expert futurologists can also be activists in any field they choose. In fact, most of us are also engaged in various hobbies, special interests or political activities, but futurology is to do with mapping the future landscape, highlighting the important features and predicting the paths most likely to be followed. Activism seeks to force society down paths favored by the activist, which is quite separate and quite different. Lobbying, campaigning, distributing propaganda, demonstrating, or using social media to pressurize, attack or silence people are all the stuff of everyday 2021 politics, but activism has nothing to do with futurology per se. To be clear, futurologists may also simultaneously be parents, environmentalists, democrats or conservatives, gardeners, and activists in any number of areas, but those other things are not futurology and there is no value in conflating roles. As for activist groups who differentiate by race, futurologists should be judged by the quality of their insight or the accuracy of their predictions, not by the color of their skin.

A related problem in this activist space is the frequent conflation of aspiration and prediction. Mapping out the field of potential futures is futurology; aspiration means looking at the futures landscape and picking and planning the path you wish to take, which may or may not also involve applying some futurology, but aspiration itself is neither a skill nor a useful qualification, since everyone has aspirations. Again, the two activities are really quite distinct and there is no value in conflating them. There is nothing wrong with working with organisations or clients to identify and steer towards a preferred future, but one first has to investigate more objectively what the possible or probable default futures might be.

So, there are numerous distinct roles in the futures industry, and it’s commonplace to blend them with other roles or with each other. The field is very rich in enjoyable, challenging and rewarding activity, and we would recommend it as a career.

Tracey is a futurist and author of The Future of You: Can Your Identity Survive 21st-Century Technology? She is the founder CEO of Futuremade, a futures consultancy advising global brands and specialising in the application of foresight to boost business. She helps clients spot trends, develop foresight and fully prepare for what comes next. A regular keynote speaker all around the world she has covered topics as diverse as the future of luxury, retail, media, cities, gender, work, defense, justice, entertainment, and AI ethics, decoding the future for businesses, brands and organisations. She is a member of the Association of Professional Futurists and World Futures Studies Federation, and a Fellow of the RSA. 

Dr Pearson has been a futurologist for 30 years, tracking and predicting developments across a wide range of technology, business, society, politics and the environment. Graduated in Maths and Physics and a Doctor of Science. Worked in numerous branches of engineering from aeronautics to cybernetics, sustainable transport to electronic cosmetics. 1900+ inventions including text messaging and the active contact lens, more recently a number of inventions in transport technology, including driverless transport and space travel. BT’s full-time futurologist from 1991 to 2007 and now runs Futurizon, a small futures institute. Writes, lectures and consults globally on all aspects of the technology-driven future. Eight books and over 850 TV and radio appearances. Chartered Member of the British Computer Society and a Fellow of the World Academy of Art and Science.

Brain refresh mechanism

I just read the transcript of an excellent podcast by Brian Roemmele and Jim O’Shaughnessy covering the intelligence amplifier and other ideas.

https://www.infiniteloopspodcast.com/brian-roemmele-the-intelligence-amplifier-ep29/#transcript

Engineering is rather like walking along a pebble beach with a friend. You’ll both experience broadly the same beach and will generally agree on the big picture stuff, but you’ll notice different pebbles. So it is with the field of connecting IT with our bodies and minds. Many engineers have worked in the field and in spite of a lot of overlap, there are enough differences in background, skillset, perception and approach that we’ll often come up with alternative ideas and insights, and even different solutions to the same problems.

In this case, it was clear we’ve both explored the issue of our brains forgetting information and experiences, and although forgetting can be a highly useful tool in creativity, it can also severely limit our ability to think. If you could recall every book, every lecture, every idea, how much better might you be able to think through a new idea? I found a short article I wrote 30 years ago on this very problem. Most of it would still be valid now, and it doesn’t even contravene Roemmele’s consciousness bandwidth limit. Here it is:

Brain refresh mechanism, April 1991

The Macintosh has a desktop rebuild facility, which restores links between applications and documents. Norton utilities on the Mac have a further facility for repairing directories so that lost information can be found.

Adding these facilities, and working out the brain equivalent, this would perhaps be the same as restoring all one’s memories and skills, since all previous links in the brain would be restored. This sounds very sci-fi but there may be a way of doing this. It requires some modest advances in technology and maybe biology and psychology too but doesn’t everything?

It is well known that electrical stimulation on certain parts of the brain will stimulate memory recall, and this can be so accurate as to be the equivalent of re-living an experience. When this happens, the brain then restores those links in memory which had been lost and the person will remember this experience afresh (and probably eventually forget it in the same way). It is conceivable (but not certain) that if many points could be stimulated that large areas of memory could be rebuilt.

Obviously, it would not be desirable to carry out an operation to do this so it would need to be done remotely. Suppose that a safe (but electromagnetically responsive) fluid could be injected into the blood-stream. Suppose then that an electromagnetic field could be created at any desired point so that a localised high intensity was to result (this technique is already well established in radio-therapy). With the right choice of fluid, this could possibly result in sufficient stimulation to achieve the same effect as direct electrical stimulation. Since the electromagnetic field could be steered, presumably a complete brain refresh could eventually be achieved, with great enhancement in knowledge and skill.

Given the appropriate advances in CAT techniques and the discovery of suitable fluids, the rest is down to experimentation.

It is possible, although more unlikely and certainly further future, that information could be directly stored in the brain using this technique, accelerating learning and directly conveying information. By then, other more direct brain interfaces may have been developed, for transactions in either direction.

Possibly a silly idea.