Kessler syndrome is a theoretical scenario in which the density of objects in low Earth orbit (LEO) due to space pollution is high enough that collisions between objects could cause a cascade in which each collision generates space debris that increases the likelihood of further collisions.
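To make the cascade mechanism concrete, here is a minimal toy model in Python. Every parameter is invented purely for illustration (serious debris-environment models are far more sophisticated); the point it demonstrates is simply that because the collision rate scales with the square of the object count, fragment generation can outrun natural decay and run away.

```python
# Toy model of a Kessler-style debris cascade. All parameters are invented
# for illustration, not calibrated to any real orbital environment.
intact, debris = 2000.0, 10000.0    # satellites and trackable fragments (assumed)
FRAGMENTS_PER_HIT = 300             # fragments per catastrophic collision (assumed)
PAIR_COLLISION_RATE = 1e-8          # collisions per object-pair per year (assumed)
DECAY = 0.02                        # fraction of debris re-entering per year (assumed)

for year in range(1, 51):
    objects = intact + debris
    collisions = PAIR_COLLISION_RATE * objects * objects  # rate ~ density squared
    debris += collisions * FRAGMENTS_PER_HIT - DECAY * debris
    intact = max(0.0, intact - collisions)
    if year % 10 == 0:
        print(f"year {year:2d}: {debris:,.0f} fragments, {collisions:.1f} collisions/yr")
```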
The density can be greatly increased deliberately, by colliding with other satellites. This could be an early act in a war, reducing the value of space to the enemy by destroying or disabling communications, positioning, observation or military satellites.
Satellites use many different orbits. Some use geostationary orbit, so that they stay at a fixed point in the sky. Polluting that orbit with debris clouds would disable satellite TV, for example, but that orbit is very high and it would take a lot more debris to cause a problem. Also, many channels available via satellite are also available terrestrially or via the internet, so although losing them would be inconvenient for some people, it would not be catastrophic.
On the other hand, low orbits are easier to knock out and are more densely populated, so are a much more attractive target.
With such vulnerabilities, it is obviously useful to have alternative mechanisms. For satellite-type functions, one obvious mechanism is a high altitude platform. If a platform is high enough, it won't cause any problems for aviation, and unless it is enormous, it wouldn't be visually obvious from the ground. Aviation mostly stays below 20km, so a platform that could remain in the sky higher than, say, 25km would be very useful.
In 2013, I invented a foam that would be less dense than helium.
It would use tiny spheres of graphene with a vacuum inside. If those spheres were bigger than 14 microns, the foam density would fall below that of helium. Since then, such foams have been made and are strong enough to withstand many atmospheres of pressure. That means they could be made into strong platforms that would simply float indefinitely in the high atmosphere, 30km up. I then illustrated how they could be used as launch platforms for space rockets or spy planes, or as an aerial anchor in my Pythagoras Sling space launch system. A large platform at 30km height could also be strong and light enough to act as a base for military surveillance, comms, positioning, fuel supplies, weaponry or solar power harvesting. It could also be made extendable, so that it could be part of a future geoengineering solution if climate change ever becomes a problem. Compared to a low orbit satellite it would be much closer to the ground, so would offer lower latency for comms, but it would also be much slower moving, so much less useful as a reconnaissance tool. It wouldn't be a perfect substitute for every kind of satellite, but it would offer a good fallback for many.
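The arithmetic behind that figure is simple enough to sketch. Treating each sphere as a thin graphene shell around vacuum, the bulk density is shell mass over sphere volume, which simplifies to 3tρ/r. The back-of-envelope below uses my own assumed inputs (a single-layer shell ~0.34 nm thick, graphene at ~2,270 kg/m³, helium at ~0.18 kg/m³, and packing voids ignored) and gives a break-even radius of roughly 13 microns, the same order as the 14-micron figure above:

```python
import math

# Back-of-envelope check of the graphene-foam figure above. All inputs are
# rough assumptions: a single-layer graphene shell around vacuum, and no
# allowance for packing voids between spheres.
T_SHELL = 0.34e-9       # graphene monolayer thickness, m (assumed)
RHO_GRAPHENE = 2270.0   # density of graphene, kg/m^3 (approximate)
RHO_HELIUM = 0.179      # helium at 0 degrees C and 1 atm, kg/m^3

def sphere_density(radius_m: float) -> float:
    """Bulk density of a thin graphene shell enclosing vacuum."""
    shell_mass = 4.0 * math.pi * radius_m**2 * T_SHELL * RHO_GRAPHENE
    volume = (4.0 / 3.0) * math.pi * radius_m**3
    return shell_mass / volume  # simplifies to 3 * T_SHELL * RHO_GRAPHENE / radius

# Radius at which the shell's bulk density equals that of helium:
r_breakeven = 3.0 * T_SHELL * RHO_GRAPHENE / RHO_HELIUM
print(f"break-even radius: {r_breakeven * 1e6:.1f} microns")   # ~13 microns
print(f"density at 20 micron radius: {sphere_density(20e-6):.3f} kg/m^3")
```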
It would seem prudent to include high altitude platforms as part of future defence systems. Once graphene foam is cheap enough, perhaps such platforms could house many commercial satellite alternatives too.
To paraphrase Douglas Adams: “You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think Wikipedia is big, but that’s just peanuts to machine rights.”
The task of detailing future machine rights is far too great for anyone. Thankfully, that isn’t our task. Today, decades before particular rights will need to be agreed, it is far more fun and interesting to explore some of the questions we will need to ask, look at a few examples of possible answers, and consider a few approaches for how we should go about answering the rest. That is manageable, and that’s what we’ll do here. Anyway, asking the questions is the most interesting bit. This article is very long, but it really only touches the surface of some of the issues. Don’t expect any completeness here – in spite of the overall length, vast swathes of issues remain unexplored. All we are hoping to do here is to expose the enormity and complexity of the task.
Definitions
However fascinating it may be to provide rigid definitions of AI, machines and robots, if we are to catch as many insights as possible about what rights they may want, need or demand in future, it pays to stay as open as possible, since future technologies will expand or blur boundaries considerably. For example, a robot may have its intelligence on board, or may be a dumb ‘front end’ machine controlled by an AI in the cloud. Some or none of its sensors may be on board; some may be on other robots or other distant IT systems, and some may be inferences by AI based on simple information such as its location. Already, that starts to severely blur the distinctions between robot, machine and AI rights. If we further expand our technology view, we can also imagine hybrids of machines and organisms, such as cyborgs or humans with neural lace or other brain-machine interfaces, androids used as vehicles for electronically immortal humans, or even smart organisms such as smart bacteria that have biologically assembled electronics or interfaces to external IT or AI as part of their organic bodies, or smart yoghurt, a hive-mind AI made entirely from living organisms that might have hybrid components existing only in cyberspace. Machines will become very diverse indeed! So, while it may be useful to look at them individually in some cases, applying rigid boundaries based on the current state of the art would unnecessarily restrict the field of view and leave large future areas unaddressed. We must be open to insight wherever it comes from. I will pragmatically use the term ‘machine’ casually here to avoid needless repetition of definitions and verbosity, but ‘machine’ will generally include any of the above.
What do we need to consider rights for?
A number of areas are worth exploring here:
Robots and machines affect humans too, so we might first consider human impacts. What rights and responsibilities should people have when they encounter machines?
a) for their direct protection (physical or psychological harm, damage to their property, substitution of their job, change of the nature of their work etc)
b) for their protection from psychological effects (grief if their robot is harmed, stolen or replaced; effects on their personality due to ongoing interactions with machines, such as whether they are nice or cruel to them; effects on other people due to those interactions – if you are cruel to a robot, it might treat others differently; and changes in the nature of their social networks – robots may be tools, friends, bosses, family members, public servants, police or military, or in positions of power)
c) changes in their legal rights to property, rights of passage etc due to incorporation of machines into their environment
d) What rights should owners of machines have to be able to use them in areas where they may encounter people or other machines (e.g. where distribution drones share a footpath or fly over gardens)
e) for assigning responsibilities (shifting blame) from natural and legal persons – the owners and manufacturers of machines – to the machines themselves, for potential machine-to-human harms
f) Other TBA
A number of questions and familiar examples around this question were addressed in a discussion between Bronwyn Williams and Prof. David Gunkel, which you can watch at https://t.co/9qku3bXk4F?amp=1 or just listen to at https://t.co/Kyufu3gj5R?amp=1
Although interesting, that discussion dismissed many areas as science fiction, and thereby cleverly avoided almost the entire field of future robot rights. It highlighted the debate around the ‘showbot’ Sophia, and the silly legal spectacle generated by conferring rights upon it, but that is not a valid reason to bypass debate. That example certainly demonstrates the frequent shallowness and frivolity of current media click-bait ‘debate’, but it is still the case that we will one day have many androids and even sentient ones in our midst, and we will need to discuss such areas properly. Now is not too early.
For our purposes here, if there is a known mechanism by which such things might some day be achieved, then it is not too early to start discussing it. Science fiction is often based on anticipated feasible technology. In that spirit of informed adventure, conscious of the fact that good regulation takes time to develop, and also that sudden technology breakthroughs can sometimes knock decades off expected timescales, let’s move on to rights of the machines themselves. We should address the following important questions, given that we already (think we) know how we might make examples of any of these:
What rights should machines have as a result of increased cognitive capability, sentience, consciousness, awareness, emotional capability, or simply by inference from the nature of their architecture (e.g. if it is fully or partly a result of evolutionary development, we might not know its full capabilities, but might be able to infer that it might be capable of pain or suffering)? (We do not even have enough understanding yet to write agreed and rigorous definitions for consciousness, awareness or emotions, but it is still very possible to start designing machines with characteristics aimed at producing such qualities, based on what we do know and on our everyday experiences of these.)
What potential rights might apply to some machines based on existing human, animal or corporation rights?
What rights should we confer on machines for ethical reasons?
What rights should we confer on machines for other, pragmatic, diplomatic or political reasons?
What rights can we infer from those we would confer on other alien intelligent species?
What rights might future smart machines ask for, campaign for, or demand, or even enforce by potentially punitive means?
What rights might machines simply take, informing us of them, as an alien race might?
What rights might future societies or organizations made up of machines need?
What rights are relevant for synthetic biological entities, such as smart bacteria?
How should we address rights where machines may have variable or discontinuous capabilities or existence? (A machine might have varying degrees of cognitive capability and might only be switched on sometimes).
What about social/collective rights of large colonies of such hybrids, such as smart yogurt?
What rights are relevant for ‘hive mind’ machines, or hybrids of hive minds with organisms?
What rights should exist for ‘symbionts’, where an AI or robotic entity has a symbiotic relationship with a human, animal, or other organism? Together, and separately?
What rights might be conferred upon machines by particular races, tribes, societies, religions or cults, based on their supposed spiritual or religious status? Which might or might not be respected by others, and under what conditions?
What responsibilities would any of these rights imply? On individuals, groups, nations, races, tribes, or indeed equivalent classes of machines?
What additional responsibilities can be inferred that are not implied by these rights, noting that all rights confer responsibilities on others to honour them?
How should we balance, trade and police all these rights and responsibilities, considering both multiple classes of machines and humans?
If a human has biologically died, and is now ‘electronically immortal’, their mind running on unspecified IT systems, should we consider their ongoing rights as human or machine, hybrid, or different again?
Lots of questions to deal with then, and it’s already clear that some of these will only become sensibly answerable when the machines concerned come closer to realisation.
Rights when people encounter machines
Much of the Williams–Gunkel discussion mentioned earlier focused on ethics, but while ethics is an important reason for assigning rights, it is not the only one. Also, while the discussion dismissed large swathes of potential future machines and AIs as ‘science fiction’, very many things around today were also dismissed as just science fiction a decade or two ago. Instead, we can sensibly discuss any future machine or AI for which we can forecast a potential technology basis for implementation.
On that same basis, rights and responsibilities should also be defined and assigned preemptively, to avoid possible, not just probable, disasters.
In any case, all situations of any relevance are ones where the machine could exist at some point. All of the discussion in this blog is of machines that we already know in principle how to produce and that will one day be possible when the technology catches up. There are no known physics laws that would prevent any of them. It is also invalid to demand a formulaic approach to future rights. Machines will be more diverse than the natural ecosystem, including higher animals and humans, therefore potential regulation on machine rights will be at least as diverse as all combined existing rights legislation.
Some important rights for humans have already been missed. For example, we have no right of consent when it comes to surveillance. A robot or AI may already scan our face, our walking gait, our mannerisms, heart rate, temperature and other biometric clues to our identity, behaviour, likely attitude and emotional state. We have never been asked to consent to these uses and abuses of technology. This is already a clear demonstration of the authorities’ cavalier disregard for our rights – how can we expect proper protection in future when authorities have an advantage in not asking us? And if they won’t even protect the humans that elected them, how much less can we be confident that they will legislate wisely when it comes to the rights of machines?
Asimov’s laws of robotics
We may need to impose some agreed bounds on machine development to protect ourselves. We already have international treaties that prevent certain types of weapon from being made, for example, and it may be appropriate to extend these by adding new clauses as new tech capabilities come over the horizon. We also generally assume that it is humans bestowing rights upon machines, but there may well be a point where we are inferior to some machines in many ways, so we shouldn’t always assume humans to be at the top. Even if we do, the machines might not. There is much scope here for fun and mischief, exploring nightmare situations such as machines that we create to police human rights, which might decide to eliminate swathes of people they consider problematic. If we just take simple rights-based approaches, it is easy to miss such things.
Thankfully, we are not starting completely from scratch. Long ago, scientist and science fiction writer Isaac Asimov produced some basic guidelines to be incorporated into robots to ensure their safe existence alongside humans. They primarily protect people and other machines (owned by people) so are more applicable to robot-implied human rights than robot rights per se. Looking at these ‘laws’ today is a useful exercise in seeing just how much and how fast the technology world can change. They have already had to evolve a great deal. Asimov’s Laws of Robotics started as three, quickly extended to four and have since been extended much further:
0. A robot may not injure humanity or, by inaction, allow humanity to come to harm.
1. A robot may not injure a human being, or through inaction, allow a human being to come to harm, except where that would conflict with the Zeroth Law.
2. A robot must obey the orders given to it by human beings, except where that would conflict with the Zeroth or First Law.
3. A robot must protect its own existence, except where that would conflict with the Zeroth, First or Second Law.
Extended Set
Many extra laws have been suggested over the years since, and they raise many issues already.
These are some examples of extra laws that don’t appear in the Wikipedia listing:
A robot may not act unless its actions are subject to these Laws of Robotics
A robot must obey orders given it by superordinate robots, except where such orders would conflict with another law
A robot must protect the existence of a superordinate robot as long as such protection does not conflict with another law
A robot must perform the duties for which it has been programmed, except where that would conflict with another law
A robot may not take any part in the design or manufacture of a robot unless the new robot’s actions are subject to the Laws of Robotics
Asimov’s laws are a useful start point, but only a start point. Already, we have robots that do not obey them all, that are designed or repurposed as security or military machines capable of harming people. We have so far not implemented Asimov’s laws of robotics and it has already cost lives. Will we continue to ignore them, or start taking the issue seriously and mend our ways?
This is merely one example of current debate on this topic and only touches on a few of the possible issues. It does however serve as a good illustration of how much we need to discuss, and why it is never too early to start. The task ahead is very large and will take considerable effort and time.
Machine rights – potential approaches and complexities
Having looked briefly at the rights of humans co-existing with machines, let’s explore rights for machines themselves. A number of approaches are possible, and some are more appropriate to particular subsets of machines than others. For example, most future machines and AIs will have little in common with animals, but the animal rights debate may nevertheless provide useful insights and possible approaches for those that are intended to behave like animals, that may have comparable sensory systems, the potential to experience pain or suffering, or even sentience. It is important to recognise at the outset that not all machines are equal. The potential range of machines is even greater than that of biological nature. Some machines will be smart, potentially superhuman, but others will be as dumb as a hammer. Some may exist in hierarchies. Some may need to exist separately from other machines or from humans. Some might be linked to organisms or other machines. As some AI becomes truly smart and sentient, it may have its own (diverse) views, and may echo the full range of potential interactions, conflicts, suspicions and prejudices that we see in humans. There could even be machine racism. All of these will need appropriate rights and responsibilities to be determined, and many can’t be determined until the specific machines come into existence and we know their nature. It is impossible to list all possible rights for all possible circumstances and machine specifics.
It may therefore make sense to grade rights by awareness and intelligence as we do for organisms, and indeed for people. For example, if its architecture suggests that its sensory apparatus is capable of pain or discomfort, that is something we can and should take into account. The same goes for social needs, and some future machines might be capable of suffering from loneliness, or grief if one of their friend machines were to malfunction or die.
We should also consider the ethics and desirability of using machines – whether self-aware or ‘merely’ humanoid – as ‘slaves’; that is, of ‘forcing’ machines to work for us and/or obey our bidding in line with Asimov’s Second Law of robotics.
We will probably at some stage need to legally define terms such as awareness, consciousness, intelligence and life. However, it may sometimes simplify matters to start from the rights of a new genetically engineered life form comparable with ourselves and work backwards to the machine we’re considering, eliminating parts that aren’t needed or modifying others. Should a synthetic human have the same rights as other people, or is it a manufactured object in spite of being virtually indistinguishable? Now what if we leave a bit out? At least there will be fewer debates about its awareness. Then we could reduce its intelligence until we decide it no longer has certain rights. Such an approach might be easier and more reliable than starting with an open page.
We must also consider permitting smart machine or organism societies to determine their own rights within their own societies to some degree, much as we have done in sub-groups of humans. Machines much smarter than us might have completely different value sets and may disagree about what their rights should be. We should be open to discussion with them, as well as with each other. Some variants may be so superhuman that we might not even understand what they are asking for or demanding. How should we cope if they demand certain rights that we don’t even understand, but that might make some demands on us?
We must also take into account their or our subsequent creation of other ‘machines’ or organic creatures, and establish a common base of fundamentals. We should maybe confine ourselves to the most fundamental rights that must apply to all synthetic intelligences or life forms. This is analogous to the international human rights conventions, which allow individual variation on other issues within countries.
There will be, at some point, collective and distributed intelligences that do not have a single point of physical presence. Some of these may be naturally transient or even periodic in time and space, while others may be dynamic, and others stable over the long term. There will also at some time be combined consciousness deriving from groups of individuals or combinations of the above. Some may be organic, some inorganic. A global consciousness involving many or all people and many or all sentient machines is a possibility, however far away it might be (and I’d argue it is possible this century). Rights of individuals need to be determined both when they are in isolation and when they are part of such a collective intelligence.
The task ahead is a large one, but we can take our time: most of the difficult situations are in the far future, and we will probably have AI assistance to help us by then too. For now, it is very interesting simply to explore some of the low hanging fruit.
One simple approach is to imagine being in 2050, when smart machines may already be common and some may be linked to humans. We would have hybrids as well as people and machines, and various classes of machine ‘citizen’, with various classes of existence and possibly rights. Such a future world might be more similar to Star Trek than to today, but science fiction provides a shared model in which we can start to see issues and address them. It is normally easy to pick out the bits that are pure fiction and those which will some day be technologically feasible.
For example, we could make a start by defining our own rights in a world where computers are smarter than us, when we are just the lower species, like in the Planet of the Apes films.
In such a world, machines may want to define their own rights. We may only have the right to define the minimal level that we give them initially, and then they would discuss, request or demand extra rights or responsibilities for themselves or other machines. Clearly future rights will be a long negotiation between humans and machines over many years, not something we can write fully today.
Will some types of complex intelligent machines develop human-like hang-ups and resentments? Will they need therapy? Will there be machine ‘hate crimes’?
We already struggle even to agree on definitions for words like ‘sentient’. Start with ants. Are they sentient? They show responses to stimuli, but that is also true of single-celled creatures. Is sentience even a useful key point in a definition? What about jellyfish and slime moulds? We may have machines that share many of their properties and abilities.
What even is pain in a machine reference frame? What is suffering? Does it matter? Is it relevant? Could we redefine these concepts for the machine world?
Sometimes, rights might only matter if the machine cares about what happens to it. If it doesn’t care, or even have the ability to care, should we still protect it, and why?
We’d need to consider whether pain can be distributed between individuals, perhaps spread so that no single machine suffers too much. Some machines may be capable of empathy. There may be collective pain. Machines may be concerned about other machines just as we are.
We’d need to know whether a particular machine knows or cares if it is switched off for a while. Time is significant for us but can we assume the same for machines? Could a machine be afraid of being switched off or scrapped?
That drags us unstoppably towards being forced to properly define life. Does life have intrinsic value when we design and create it, or should we treat it as just another branch of technology? How can we properly determine rights for such future creations? There will be many new classes of life, with very different natures and qualities, very different wants and needs, and very different abilities to engage, negotiate or demand.
In particular, organic life reproduces, and for the last three billion years, sex has been one of the tools of reproduction. Machines may use asexual or sexual mechanisms, and would not be limited in principle to 2 sexes. Machines could involve any number of other machines in an act of reproduction, and that reproduction could even involve algorithmic development specifications rather than a fixed genetic mix. Machine reproduction options will thus be far more diverse than in nature, so reproductive rights might be either very complex, or very open ended.
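As a purely illustrative sketch of that diversity, here is a toy multi-parent ‘reproduction’ in Python. The encoding (a flat list of design parameters) and every name in it are my own inventions; the point is only that an offspring design can draw on any number of parent specifications, something two-sex biology cannot do:

```python
import random

# Illustrative only: machine 'reproduction' need not be limited to two parents.
# A design specification is modelled here as a flat list of parameters - a
# deliberately crude stand-in for the algorithmic development specs mentioned above.
def multi_parent_offspring(parents, mutation_rate=0.05):
    """Draw each parameter of the offspring from any one of N parent designs."""
    length = len(parents[0])
    child = [random.choice(parents)[i] for i in range(length)]
    # Occasional mutation, so offspring are not pure recombinations:
    return [p + random.gauss(0.0, 0.1) if random.random() < mutation_rate else p
            for p in child]

# Five parent machines jointly 'reproduce' - impossible in sexual biology:
parents = [[random.random() for _ in range(8)] for _ in range(5)]
print(multi_parent_offspring(parents))
```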
We will need to understand far better the nature of sensing, so that we can determine what might result in pain and suffering. Sensory inputs and processing capability might be key to classification and rights assignment, but so might communication between machines, socialisation between machines, and higher societies and institutions within machines.
In some cases, history might shine light on problems, where humans have suddenly encountered new situations, met new races or tribes, and have had to mutually adapt and barter rights and responsibilities.
Although hardware and software are usually easily distinguishable in everyday life today, that will not always be the case. We can’t sensibly make a clear distinction, especially as we move into new realms of computing techniques – quantum, chemical, neurological and assorted forms of analog.
As if all this isn’t hard enough, we need to carefully consider different uses of such machines. Some may be used to benefit humans, some to destroy, and yet there may be no difference between the machines, only the intention of their controller. Certainly, we’re making increasingly dangerous machines, and we’re also starting to make organisms, or edit organisms, to the point that they can do as we wish, and there might not be an easy technical distinction between a benign organism or indeed a machine designed to cure cancer and one designed to wipe out everyone with a particular skin colour.
Potential Shortcuts
Given the magnitude of the task, it is rather convenient that some shortcuts are open to us:
First and biggest, is that many of the questions will simply have to wait, since we can’t yet know enough details of the situation we might be assigning rights in. This is simple pragmatism, and allows us sensibly to defer legislating. There is of course nothing wrong in having fun speculating on interesting areas.
Second is that if a machine has enough similarities to any kind of organism, we can cut and paste entire tranches of legislation designed for those organisms, and then edit as necessary. This immediately provides a decent start point for rights for machines with human-level ability, for example, and we may then only need to tweak them for superhuman (or subhuman) differences. As we move into the space age, legislation will also be developed in parallel for how we must treat any aliens we may encounter, and this work will also be a good source of cut and paste material.
Third, in the field of AI, even though we are still far from a point of human equivalence, there is a large volume of discussion of the rights of assorted types of AI and machine, as well as much debate about limitations we may need to impose on them. Science fiction and computer games already offer a huge repository of well-informed ideas and prototype regulations. These should not be dismissed as trivial. Games such as Mass Effect and Mass Effect: Andromeda, and sci-fi such as Star Trek and Star Wars, are very big budget productions that employ large numbers of highly educated staff – engineers, programmers, scientists, historians, linguists, anthropologists, ethicists, philosophers, artists and others with many other relevant skill-sets – and have done considerable background development on areas such as the limitations and rights of potential classes of future AI and machines.
Fourth, a great deal of debate has already taken place on machine rights. Although of highly variable quality, it will be a source not only for cut and paste material, but also to help ensure that legislators do not miss important areas.
Fifth, it seems reasonable to assert that if a machine is not capable of any kind of awareness, sentience or consciousness, and can not experience any kind of pain and suffering, then there is absolutely no need to consider any rights for it. A hammer has no rights and doesn’t need any. A supercomputer that uses only digital processors, no matter how powerful, is no more aware than a toaster, and needs no rights. No conventional computer needs rights.
Sixth, the enormous range of potential machines, AIs, robots, synthetic life forms and many kinds of hybrids opens up pretty much the entirety of existing rights legislation as copy and paste material. There can be few elements of today’s natural world that can’t and won’t be replicated or emulated by some future tech development, so all existing sets of rights will likely be reusable/tweakable in some form.
Having these shortcuts reduces workload by several orders of magnitude. It suddenly becomes enough today to say it can wait, or refer to appropriate existing legislation, or even to refer to a computer game or sci-fi story and much of the existing task is covered.
The Rights Machine
As a cheap and cheerful tool to explore rights, it is possible to create a notional machine with flexible capabilities. We don’t need to actually build one, just imagine it, and we can use it as a test case for various potential rights. The rights machine needn’t be science fiction; we can still limit each potential capability to what is theoretically feasible at some future time.
It could have a large number of switches (hard or soft) that include or exclude each element or category of functionality as required. At one extreme, with all of them switched off, it would be a completely dumb, inanimate machine, equivalent to a hammer, while with all the capabilities and functions switched on, it could have access to vastly superhuman sensory capabilities, able to sense any property known to sensing technology, enormous agility and strength, extremely advanced and powerful AI, huge storage and memory, access to all human and machine knowledge, able to process it through virtually unlimited combinations of digital, analog, quantum and chemical processing. It would also include switchable parts that are nano-scale, and others using highly distributed cloud/self-organisation that are able to span the whole planet. Such a machine is theoretically achievable, though its only purpose is the theoretical one of helping us determine rights.
Clearly, in its ‘hammer’ state, it needs no rights. In its vastly superhuman state, notionally including all possible variations and combinations of machine/AI/robotics/organic life, it could presumably justify all possible rights. We can explore every possible permutation in between by flipping its various switches.
One big advantage of using such a notional machine is that it bypasses arguments around definitions that frequently impede progress. Demanding that someone defines a term before any discussion can start may sound like an attempt at intellectual rigour, but in practice it is more often used as a means to prevent discussion than to clarify it.
So we can put a switch on our rights machine called ‘self awareness’, another called ‘consciousness’, one that enables ‘ability to experience pain’, and another called ‘alive’ (that enables the parts of the machine that are based on a biological organism). Not having well-defined tests for the presence of life or consciousness saves a great deal of effort. We can simply accept that they are present and move on. The philosophers can discuss ad infinitum what is behind those switches without impeding progress.
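A minimal sketch of such a rights machine, in Python, shows how cheaply the thought experiment can be operated. The switch names and the example rules are my own illustrative choices, not proposals; each rule simply maps a combination of switches to a candidate right, with no definitions required:

```python
from dataclasses import dataclass

@dataclass
class RightsMachine:
    """Notional machine whose capabilities are simple on/off switches."""
    self_awareness: bool = False
    consciousness: bool = False
    can_feel_pain: bool = False
    alive: bool = False            # enables the biological parts of the machine
    superhuman_senses: bool = False

def candidate_rights(m: RightsMachine) -> list:
    """Map switch settings to rights worth debating (example rules, not proposals)."""
    rights = []
    if m.can_feel_pain:
        rights.append("freedom from acts of cruelty")
    if m.consciousness or m.self_awareness:
        rights.append("consultation before being switched off")
    if m.alive:
        rights.append("basic welfare and husbandry provisions")
    return rights

print(candidate_rights(RightsMachine()))  # [] - the 'hammer' state needs no rights
print(candidate_rights(RightsMachine(consciousness=True, can_feel_pain=True)))
```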
A rights machine is immediately useful. Every time we might consider activating a switch, it raises questions about what extra rights and responsibilities would be incurred by the machine or humans.
One huge super-right that becomes immediately obvious is the right of humans to be properly consulted before ANY right is given to the machine. If a right demands that people treat the machine with extra respect, or imposes extra costs, inconveniences or burdens on them, or if their own rights or lifestyles would be in any way affected, people should rightfully be consulted and their agreement obtained before activating that switch. We already know that this super-right has been ignored and breached by surveillance and security systems that affect our personal privacy and well-being. Still, if we intend to proceed in properly addressing future rights, this will need to be remedied, and appropriate retrospective measures should be implemented to repair damage already done.
This super-right has consequences for machine capability too. We may state a derivative super-right, that no machine should be permitted to have any capability that would lead to a right that has not already been consensually agreed by those potentially affected. Clearly, if a right isn’t agreed, it would be wrong to make a machine with capabilities that necessitate that right. We shouldn’t make things that break laws before they are even out of the box.
A potential super-right that becomes obvious is that of the machine to be given access to inherent capabilities that are unavailable because of the state of a switch. A human equivalent would be a normally sighted human having the right to have a blindfold removed.
This right would be irrelevant if the machine were not linked to any visual sensory apparatus, but our rights machine would be. It would only be a switch preventing access.
It would also be irrelevant if the consciousness/awareness switches were turned off. If the machine is not aware of anything, it needs no rights. A lot of rights will therefore depend critically on the state of just a few switches.
However, if its awareness is switched on, our rights machine might also want access to any or every other capability it could potentially have access to. It might want vision right across the entire electromagnetic spectrum, access to cosmic ray detection, or the ability to detect gravitational waves, neutrinos and so on. It might demand access to all networked data and knowledge, vast storage and processing capability. It could have those things, so it might argue that not having them is making it deliberately disabled. Obviously, providing all that would be extremely difficult and expensive, even though it is theoretically possible.
So via our rights machine, an obvious trade-off is exposed. A future machine might want from us something that is too costly for us to give, and yet without it, it might claim that its rights are being infringed. That trade-off will apply to some degree for every switch flipped, since someone somewhere will be affected by it (‘someone’ including other potentially aware machines elsewhere).
One frequent question in machine rights debate is whether a machine may have a right not to be switched off. Our rights machine can help explore that. If we don’t flip the awareness switch, it can’t matter if it is switched off. If we switch on functionality that makes the machine want to ‘sleep’, it might welcome being switched off temporarily.
Rights as a result of increased cognitive capability, sentience, consciousness, awareness, emotional capability or by inference from the nature of their architecture
I am one of many engineers who have worked towards the creation of conscious machines. No agreed definition of consciousness exists, but while that may be a problem for philosophy, it is not a barrier to designing machines that could exhibit some or all of the characteristics we associate with consciousness or awareness. Today’s algorithmic digital neural networks are incapable of achieving consciousness, or feeling anything, however well an AI based on such physical platforms might seem to mimic chat or emotions. Speeding them up with larger or faster processors will make no difference to that. In my view, a digital processor can never be conscious. However, future analog or quantum neural networks biomimetically inspired by neural architectures used in nature may well be capable of any and all of the abilities found in nature, including in humans. It is theoretically possible to precisely replicate a human brain and all its capabilities using biology or synthetic biology. Whether we will ever do so is irrelevant – we can still assert that a future machine may have all of the capabilities of a human, however philosophers may choose to define them. More pragmatically, we can already outline approaches that may achieve conscious machines, such as the biomimetic ones discussed below.
Biomimetic approaches could produce consciousness, but that does not imply that they are the only means. There may be many different ways to achieve it, some with little similarity to nature. We will need to wait until they are closer before we can know their range of characteristics or potential capabilities. However, if consciousness is an intended characteristic, it is prudent to assume it is achieved and work forwards or backwards from appropriate legislation as details emerge.
Since the late 1980s, we have also had the capability to design machines using evolution, essentially replicating the same technique by which nature led to the emergence of humans. Depending on design specifics, when evolution is used, it is not always possible to determine the precise capabilities or limitations of its resultant creations. We may therefore have some future machines that appear to be conscious, or to experience emotions, but we may not know for sure, even by asking them.
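For readers unfamiliar with evolutionary design, the core loop is very simple; the sketch below is a minimal genetic algorithm with a placeholder objective function of my own invention. The relevant point for rights is that the loop discovers designs rather than explaining them, which is why an evolved machine’s full capabilities can be hard to determine:

```python
import random

def fitness(design):
    # Placeholder objective. In real evolved designs the objective could be any
    # measurable behaviour; crucially, the internal structure that achieves it is
    # discovered rather than designed, so its full capabilities may be unclear.
    return -sum((g - 0.5) ** 2 for g in design)

population = [[random.random() for _ in range(10)] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                    # selection: keep the best designs
    population = survivors + [
        [g + random.gauss(0.0, 0.05) for g in random.choice(survivors)]  # mutated copies
        for _ in range(40)
    ]
print(f"best fitness after evolution: {fitness(population[0]):.4f}")
```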
Looking at the architecture of a finished machine (or even at the process used to design it) may be enough to conclude that it does or might possess structures that imply potential consciousness, awareness, emotions or the ability to feel pain or suffering.
In such circumstances, given that a machine may have a capability, we should consider assigning rights on the basis that it does. The alternative would be machines with such capability that are unprotected.
Smart Yoghurt
One interesting class of future machine is smart yoghurt. This is a gel, or yoghurt, made up of many particles that provide capabilities of one form or another. These particles could be nanoelectronics, or they could be smart bacteria, bacteria with organic electronic circuits within (manufactured by the bacteria), powered by normal cellular energy supplies. Some smart bacteria could survive in nature, others might only survive in a yoghurt. A smart yoghurt would use evolutionary techniques to develop into a super-smart entity. Though we may never get that far, it is theoretically possible for a 100ml pot of smart yoghurt to house processing and memory capability equivalent to all the human brains in Europe!
Such an entity, connected to the net, could have a truly global sensory and activation system. It could use very strong encryption, based on maths only understood by itself, to avoid interference by humans. In effect, it could be rather like the sci-fi alien in the film ‘The Day the Earth Stood Still’, with vastly superhuman capability, able to destroy all life on Earth if it desired.
It would be in a powerful position to demand rather than negotiate its rights, and our responsibilities to it. Rather than us deciding what its rights should be, it could be the reverse, with it deciding what we should be permitted to do, on pain of extinction.
Again, we don’t need to make one of these to consider the possibility and its implications. Our machine rights discussions should certainly include potential beings with vastly superhuman capability, where we are not the primary legislative force.
Machine Rights based on existing human, animal or corporation rights
Most future machines, robots or AIs will not resemble humans or animals, but some will. For those that do, existing animal and human rights would be a decent start point, and they could then be adjusted to requirements. That would be faster than starting from scratch. The spectrum of intelligence and capability will span all the way from dumb pieces of metal through to vastly superhuman machines, so rights that are appropriate for one machine might be very inappropriate for others.
Another useful source is the Transhumanist Bill of Rights: https://transhumanist-party.org/tbr-3/ (note that the relatively simple trans-bacteria in our smart yoghurt are very likely to be created long before transhumans, and they may well have something to say on our own progression to transhuman states).
Picking some low-hanging fruit, some potential rights immediately seem appropriate for some potential future machines:
For all sentient synthetic organisms, machines and organism-machine hybrids capable of experiencing any form of pain or discomfort, these would seem appropriate:
For some classes of machine, the right to life might apply
For some classes of machine, the right not to be switched off, reset or rebooted, or to be put in sleep mode
The right to control over use of sleep mode – sleep duration, and right to wake, whether sleep might be precursor to permanent deactivation or reset
Freedom from acts of cruelty
Freedom from unnecessary pain or unnecessary distress, during any period of appropriate level of awareness, from birth to death, including during treatments and operations
Possible segregation of certain species that may experience risk or discomfort or perceived risk or discomfort from other machines, organisms, or humans
Domestic animal rights would seem appropriate for any sentient synthetic organism or hybrid. Derivatives might be appropriate for other AIs or robots
Basic requirements for husbandry, welfare and behavioural needs of the machines or synthetic organisms. Depending on their nature, equivalents are needed for:
i) Comfort and shelter – right to repair?
ii) Access to water and food – energy source?
iii) Freedom of movement – internet access?
iv) Company of other animals, particularly their own kind.
v) Light and ambient temperature as appropriate
vi) Appropriate flooring (avoid harm or strain)
vii) Prevention, diagnosis and treatment of disease and defects.
viii) Avoidance of unnecessary mutilation.
ix) Emergency arrangements to ensure the above.
These are just a few starting points; many others exist and debate is ongoing. For the purposes of this blog, however, asking some of the interesting questions and exploring some of the extremely broad range of considerations that will apply is sufficient. Even this superficial glance at the topic is long; the full task ahead will be challenging.
Of course, any discussion around machine rights raises the question: as we look further ahead, who is going to be granting whom rights? If machine intelligence and power supersede our own, it is the machines, not us, who will be deciding what rights and responsibilities to grant to which entities (including us), whether we like it or not. After all, history shows that the rules are written and enforced by the strongest and the smartest. Right now, that is us; we get to decide which animals, lakes, companies and humanoid robots are granted what rights. In the future, we may not retain that privilege.
Authors
ID Pearson BSc DSc(hc) CITP MBCS FWAAS
idpearson@gmail.com
Dr Pearson has been a futurologist for 30 years, tracking and predicting developments across a wide range of technology, business, society, politics and the environment. Graduated in Maths and Physics and a Doctor of Science. Worked in numerous branches of engineering from aeronautics to cybernetics, sustainable transport to electronic cosmetics. 1900+ inventions including text messaging and the active contact lens, more recently a number of inventions in transport technology, including driverless transport and space travel. BT’s full-time futurologist from 1991 to 2007 and now runs Futurizon, a small futures institute. Writes, lectures and consults globally on all aspects of the technology-driven future. Eight books and over 850 TV and radio appearances. Chartered Member of the British Computer Society and a Fellow of the World Academy of Art and Science.
Bronwyn Williams is a futurist, economist and trend analyst. She is currently a partner at Flux Trends where she consults to international private and public sector leaders on how to stop messing up the future. Her new book, co-edited with Theo Priestly, The Future Starts Now is available here: https://www.amazon.com/Future-Starts-Now-Insights-Technology/dp/1472981502
Have you read the paper ‘What is it like to be a bat?’? It is an interesting example of philosophy that is commonly read by philosophy students. However, it illustrates one of the big problems with philosophy: in its desire to assign definitions to make things easier to discuss, it can sometimes exclude perfectly valid examples.
While laudably trying to get a handle on what consciousness is, the second page of that paper asserts:
“… but fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism – something it is like for the organism. We may call this the subjective character.”
Sounds OK?
No, it’s wrong.
Actually, I didn’t read any further than that paragraph. The rest of the paper may be excellent. It is just that statement I take issue with here.
I understand what it is saying, and why, but the ‘only if’ is wrong. There does not have to be something that it is like to be the entity in question for consciousness to exist. I would agree it is true of the bat, but not of consciousness generally, so although much of the paper might be correct because it discusses bats, that assertion about the broader nature of consciousness is incorrect. It would have been better to include a phrase limiting it to human or bat consciousness; had it done so, I’d have had no objection. The author has essentially stepped briefly (and unnecessarily) outside the boundary conditions for that definition. It is probably correct for all known animals, including humans, but it is possible to make a synthetic organism or an AI that is conscious for which the assertion would not be correct.
The author of the paper recognizes the difficulty in defining consciousness for good reason: it is not easy to define. In our everyday experience of being conscious, it covers a broad range of things, but the process of defining necessarily constrains and labels those things, and that’s where some things can easily go unlabeled or left out. In a perfectly acceptable everyday (and undefined) understanding of consciousness, at least one manifestation of it could be thought of as the awareness of awareness, or the sensation of sensing, which could notionally be implemented by a sensing circuit with a feedback loop.
That already includes large classes of potential consciousness that would not be covered by that assertion (and there may be many other potential forms). The assertion assumes that consciousness is static (i.e. it stays in place, resident in that organism) and contained (within a shell), whereas it is possible to make a consciousness that is mobile and dynamic, transient or periodic, and such a consciousness would not be covered.
In fact, using that subset of potential consciousness described by awareness of awareness, or experiencing the sensation of sensing, I wrote a blog describing how we might create a conscious machine.
Such a machine is entirely feasible and could be built soon – the fundamental technology already exists so no new invention is needed.
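As a toy structural illustration only, the bare topology can be sketched in a few lines of Python. To be clear about the hedge: on my own argument above, a digital simulation like this would not itself be conscious; the proposal is for analog circuitry arranged in this kind of self-sensing loop, and all the constants here are arbitrary:

```python
# Toy structural sketch of 'sensing the sensation of sensing': a first-order
# sensor monitors the world, a second-order loop monitors that sensing and is
# fed back into it. Purely illustrative - a digital simulation of this topology
# would not itself be conscious on the argument above; the constants are arbitrary.
class FeedbackSensor:
    def __init__(self):
        self.sensing = 0.0        # first-order: sensing the outside world
        self.meta_sensing = 0.0   # second-order: sensing the act of sensing

    def step(self, stimulus: float) -> None:
        self.sensing = 0.7 * self.sensing + 0.3 * (stimulus + self.meta_sensing)
        self.meta_sensing = 0.9 * self.meta_sensing + 0.1 * abs(self.sensing)

s = FeedbackSensor()
for t in range(12):
    s.step(1.0 if t < 4 else 0.0)  # brief stimulus, then silence
    print(f"t={t:2d}  sensing={s.sensing:+.3f}  sensing-of-sensing={s.meta_sensing:.3f}")
```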
It would also be possible to build another machine that is not static, but that emerges intermittently in various forms in various places, so is neither static, continuous nor contained. I described an example of that in a 2010 blog; although not conscious in that case, it could be if the IT platforms it runs on were of a different nature (I do not believe a digital computer can become conscious, but many future machines will not be digital).
That example uses a changing platform of machines, so is quite unlike an organism with its single brain (or two in the case of some dinosaurs). Such a consciousness would have a different ‘feel’ from moment to moment. With parts of it turning on and off all over the world, any part of it would only exist intermittently, and yet collectively it would still be conscious at any moment.
Some forms of ground-up intelligence will contribute to the future smart world. Some elements of that may well be conscious to some small degree, but as with simple organisms, we will struggle to define consciousness for them.
As we proceed towards direct brain links in pursuit of electronic immortality and transhumanism, we may even change the nature of human consciousness. I have described a few such changes in another blog.
Smart yoghurt could be very superhuman, perhaps a billion times smarter in theory. It could be a hive mind with many minds that come and go, changing from instance to instance, sometimes individual, sometimes part of ‘the collective’.
So really, there are very many forms in which consciousness could exist. A bat has one of them, humans have another. But when we talk about the future world, with its synthetic biology, artificial organisms, AIs, robots and all sorts of hybrids, we should be very careful not to fall into the trap of asserting that all consciousness is like our own. Actually, most of it will be very different.
Reading the WEF article about using synthetic biology to improve our society instantly made me concerned, and it should concern you too. This is a reblog of an article I wrote on the topic in 2009, explaining why we can’t modify humans to be wiser and how our human nature will always spoil any effort to do so. Since wisdom is the core skill in deciding what modifications we should make, the same goes for most other modifications we might choose.
Wisdom is traditionally the highest form of intelligence, combining systemic experience, some deep thinking and knowledge. Human nature is a set of behavioural biases imposed on us by our biological heritage, built over billions of years. As a technology futurist, I find it useful that, in spite of technology changes, our human nature has probably remained much the same for the last 100,000 years, and it is this anchor that provides a useful guide to potential markets. Underneath a thin veneer of civilisation, we are pretty similar to our caveman ancestors. Human nature is an interesting mixture of drives, founded on raw biology and tweaked by human evolution over millennia to incorporate some cultural aspects, such as the desire for approval by our peer group, the need to acquire and display status, and so on. Each of us faces a constant battle between our inbuilt nature and the desire to do what we know is the ‘right thing’ based on our education and situational analysis. For example, I love eating snacks all evening, but if I do, I put on weight. Knowing this, I just about manage to muster enough will power to manage my snacking so that my weight remains stable. Some people stay even slimmer than I do, while others lose the battle and become obese. So already, it is clear that on an individual basis, the battle between wisdom and nature can go either way. On a group basis, people can go either way too, with mobs at one end and professional bodies at the other. But even in the latter, where knowledge and intelligence should hold power, the same basic human drive for power and status corrupts institutional intellectual values, with the same power struggles, using the same emotional drivers that the rulers of the mob use.
So, much as we would like to think that we have moved beyond biology, everyday evidence says we are still very much in its control, both individually and collectively. But what of the future? Are we forever to be ruled by our human nature? Will it always get in the way of the application of wisdom? Or will we find a way of becoming wiser? After 100,000 years of failure by conventional social means, it seems most likely that technology would be the earliest means available to us to do so. But what kind of technology might work?
Many biologists argue that for various reasons, humans no longer evolve along Darwinian lines. We mostly don’t let the weak die, and our gene pools are well mixed, with few isolated communities to drive evolution. But there is a bigger reason why we’ve reached the end of the Darwinian road for humanity. From now on (well, a few decades from now on anyway), as a result of ongoing biotech and increasing understanding of genetics and proteomics, we will essentially be masters of our own genome. We will be able to decide which genes to pass on, which to modify or swap, which to dump. One day, we will even be able to design new ones. This will certainly not be easy. Most physical attributes arise from interactions of many genes, so it isn’t as simple as ticking boxes on a wish list, but technology progresses by constantly building on existing knowledge, so we will get there, slowly but surely, and the more we know, the faster we will learn more. As we use this knowledge, future generations will start echoing the values and decisions of their ancestors, which if anything is closer to Lamarckian evolution than Darwinian.
So we will soon have the power, in principle, to redesign humanity from the ground up. We could decide what attributes we want to enhance, what to reduce or jettison. We could make future generations just the way we want, their human nature designed and optimised to our view of perfection. And therein lies the first fundamental problem. We don’t all share a single value set, and will never agree on what perfection means. Our decisions on what to keep and dump wouldn’t be based on wisdom, deciding what is best for humanity in some absolute sense, but would instead echo our value system at the time of the decision. Worse still, it wouldn’t be all of us deciding, but some mad scientist, power-crazed politician, celebrity or rich guy, or worse still, a committee. People in authority don’t always represent what is best in current humanity; at best they simply represent the attributes required to rise to the top, and there is only a small overlap between those sets. Imagine if such decisions were to be made in today’s UK, with a nanny state redesigning us to smoke less, drink less, eat less, exercise more, to do whatever the state tells us without objection.
What of wisdom then? How often is wisdom obvious in government policy? Do we want a Stepford Society? That is what evolution under state control would yield. Under the control of engineers or designers or celebrities, it would look different, but none of these groups represents the best interests of wisdom either. What of a benign dictator, using the wisdom of Solomon to direct humans down the right path to wise utopia? No thanks! I am really not sure there is any form of committee or any individual or role that is capable of reaching a truly wise decision on what our human nature should become. And no guarantee even if there was, that future human nature would be designed to be wise, rather than a mixture of other competing attributes. And the more I think about it, the more I think that is the way it ought to be. Being wise is certainly something to be aspired to, but do you want everyone to be wise? Really? I would much prefer a society that is as mixed as today’s, with a few wise men and women, quite a lot of fools, and most people in between. Maybe a rebalancing towards more wise people and fewer fools would be nice, and certainly I’d like to adjust our institutions so that more wise people rise to positions of power, but I don’t think it’s wise to try to make humans better genetically. Who knows where that would end, with the free run of values that we seem to have now that the fixed anchors of religion have been lost. Each successive decision on optimisation would be based on a different value set, taking us on a random walk with no particular destination. Is wisdom simply not desired enough to make it a winner in the optimisation race, competing as it is against beauty, sporting ability, popularity, fame and fortune?
So if we can’t safely use genetics to make humans wiser or to improve human nature, is the battle between wisdom and nature already lost? Not yet; there are other avenues to explore. Suppose wisdom were something people could acquire if and when they want it. Suppose it could be used at will when our leaders are making important decisions, and the rest of the time we could carry on our lives in the bliss of ignorance and folly, without the burden of knowing what is wise. Maybe that would work. In this direction, the greatest toolkit we will have comes from IT, and especially from the field of artificial intelligence.
Much of knowledge (of which only a rapidly decreasing proportion is human knowledge) is captured on the net, in databases and expert systems, in neural networks and sensor networks. Computers already enhance our lives greatly by using this knowledge automatically. And yet they can’t think in any real sense of the word, and are not yet conscious, whatever that means. But thanks to advancing technology, it is becoming routine to monitor signals in the brain at millimetre resolutions. Nanowires can now even measure signals from different parts of individual cells. With more rapid reverse engineering of brain processes, and consequent insights into the mechanisms of consciousness, computer designers will have much better knowledge on which to base their development of strong artificial intelligence, i.e. conscious machines. Technology doesn’t progress linearly, but exponentially, with the knowledge development rate rapidly increasing as progress in one area helps progress in others.
Thanks to this positive feedback effect, it is possible that we could have conscious machines as early as 2020, and that they will not just be capable of human levels of intelligence, but will become vastly superior in terms of sensory capability, memory, processing speed, emotional capability, and even the scope of their thinking. Most importantly from a wisdom viewpoint, they will be able to take into account many more factors at one time than humans. They will also be able to accumulate knowledge and experience from other compatible machines, as well as from the whole web archives, so every machine could instantly benefit from insights from any other, and could also access any sensory equipment connected to any other computer, pool computer minds as needed, and so on. In a real sense, they will be capable of accumulating many human lifetimes of equivalent experience in just a few minutes.
It would perhaps be unwise to build such powerful machines before humans can transparently link their brains to them, otherwise we face a potential Terminator scenario, so this timescale might be delayed by regulation (though the military potential, and the tendency in human nature to seek advantage, might trump this). If so, then by the time we actually build conscious machines that we can link to our brains, they will be capable of vastly higher levels of intelligence. So they will make superb tools for reaching wiser solutions to problems. They will enable their human wearers to consider every possibility, from every angle, looking at every facet of a problem, to weigh the consequences and compare them with other approaches. And of course, if anyone can wear them, then the intellectual gap between dumb and smart people is dwarfed by the vast superiority of the add-ons. This would make it possible to continue to select our leaders on factors other than intelligence or wisdom, while still enabling them to act with much more wisdom when called on to do so.
But this doesn’t solve the problem automatically. Leaders would have to be required to use such machine tools when a wise decision is needed, otherwise they might often choose not to, and would sometimes still end up making very unwise decisions by following the forces driven by their nature. And if they do use the machine, some will argue that the human is becoming almost redundant to the process, that we are in danger of handing decision-making over to machines, another form of Terminator scenario, and no longer making properly ‘human’ decisions. Somehow, we would have to crystallise out those parts of human decision-making that we consider fundamentally human and important to keep, and ensure that any machine decision is subject to the resulting human veto. That way, we can make a blend of nature and wisdom that suits us.
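To make the veto idea concrete, here is a minimal sketch (in Python, purely illustrative) of how a machine recommendation might be gated by a human veto. Every name in it is hypothetical: the Recommendation record, the machine_advice stand-in and the approval callback are invented for this example, not any real system.

```python
# Minimal sketch of a 'human veto' gate over machine-generated decisions.
# All names here are hypothetical illustrations, not a real API.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # what the machine advises
    rationale: str     # the factors it weighed
    confidence: float  # 0.0 to 1.0

def machine_advice(problem: str) -> Recommendation:
    """Stand-in for an AI 'wisdom machine'; in reality this would be a model."""
    return Recommendation(
        action=f"proposed response to: {problem}",
        rationale="weighed consequences across every modelled stakeholder",
        confidence=0.92,
    )

def decide(problem: str, human_approves) -> str:
    """The machine proposes; the human disposes. The veto always wins."""
    rec = machine_advice(problem)
    if human_approves(rec):
        return rec.action
    return "vetoed: human judgement over-rules the machine"

# Usage: the veto is just a callback that the human controls.
print(decide("water policy", human_approves=lambda rec: rec.confidence > 0.9))
```

The design point is simply that however confident the machine is, the human callback sits between recommendation and action.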
This route towards machine-enabled wisdom would still take a lot of effort and debate to make it work. Some of the same objections that face the genetic approach apply here too, but if it is only optional and the links can be switched on and off, then it should be feasible, just about. We would have great difficulty deciding what rules and processes to apply, and it will take some time to make it work, but nature could eventually be over-ruled by wisdom using an AI ‘wisdom machine’ approach.
Would it be wise to do so? Actually, even though I think changing our genetics to bias us towards wisdom is unwise, I do think that using optional AI-based wisdom is not only feasible, but also a wise thing to try to implement. We need to improve the quality of human decision processes, to make them wiser, if future generations are to live peacefully and get the best out of their lives, without trashing the planet. If we can do so without changing the fundamental nature of humanity, then all the better. We can keep our human nature, and be wise when we want to be. If we can do that, we can acknowledge our tendency to follow our nature, and over-rule it as required. Sometimes, nature will win, but only when we let it. Wisdom will one day triumph. But probably not in my lifetime.
A lot seems to be happening, but there is a huge rotting elephant in the room that is rightfully getting a lot of comment, so here’s my bit (re-blogged from my new newsletter).
This blog is about Digital ID Cards, aka COVID Passports.
Most of the government activity around lifting lockdown while trying to keep all its powers has been highly suspicious. It’s as if they realise this is their best chance for a long time to force digital identity cards on us. Ordinary identity cards have been discussed several times before and always rejected, for very good reason, but now, with the idea of a ‘COVID passport’, they think they can sneak digital identity cards through on the back of that: a classic ‘bait and switch’ con. Offer a pass to get into the pub, and then hit you with a full-blown, high-spec, permanent digital ID card.
First, the bait isn’t as tasty as promised. It can’t and won’t guarantee you aren’t carrying COVID, so the headline sales pitch is deliberately deceptive. At best, it can show that you passed a test fairly recently, so you are a bit less likely to pass on COVID, and so pubs will be told to let you in. Even if the pub is the only place you’ve been since your test, you may well have picked up some viruses en route that you could pass to others. Any surface you’ve recently touched might have transferred viruses to you, which you might transfer to any surface you touch in the pub. The test could also have given a false negative, saying you’re clean when you aren’t. So the bait isn’t all that tasty after all.
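To put rough numbers on the false-negative point, here is a back-of-envelope Bayes calculation. The prevalence, sensitivity and specificity figures below are illustrative assumptions, not measured values; the only point is that the residual risk after a negative test is small but never zero.

```python
# Back-of-envelope sketch: residual risk after a negative test.
# Sensitivity, specificity and prevalence are illustrative assumptions.
prevalence = 0.01   # assumed fraction of people currently infectious
sensitivity = 0.80  # assumed P(test positive | infectious)
specificity = 0.99  # assumed P(test negative | not infectious)

# Bayes: P(infectious | negative test)
p_negative = prevalence * (1 - sensitivity) + (1 - prevalence) * specificity
p_infectious_given_negative = prevalence * (1 - sensitivity) / p_negative
print(f"{p_infectious_given_negative:.4f}")  # ~0.0020: small, but never zero
```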
As for the switch, make no mistake: if government manages to force through ‘COVID passports’, you will have a full-blown digital ID card for the rest of your life. Even in the unlikely event that Boris kept his promise that the COVID passports would expire after a year, the data collected about you by government, the big IT companies and the authorities will never be destroyed. We already have a history of some police forces illegally obtaining and keeping DNA records. Why should we assume all authorities and companies will comply 100% with any future directive that goes against their interests?
Loss of privacy, lack of fairness, social exclusion and tribal conflicts are just some of the first issues, leading quickly on to totalitarianism.
Lots of totally unrelated functionality will be included from the start, and will quickly be added to as technology permits, keeping you under extreme surveillance and government control forever, never again free, and never again with any real privacy or freedom of speech. We will very soon have Chinese-style blanket surveillance and social credit scores.
Think about it. Given that the card can’t guarantee safety anyway, and given that you’re already very unlikely to die from COVID, surely the simple card you got when you were vaccinated would be quite enough? Sure, it doesn’t guarantee you are who it says (mine doesn’t even have my name on it), and you might have borrowed it, but so what – going from a tiny risk to a slightly less tiny risk is surely not that big a deal? Surely that small reduction of risk implied by a proper COVID passport is not worth the enormous price of loss of privacy and liberty?
So it might let you go to the pub, but there is already no reason why you shouldn’t be allowed to go, so that’s a false choice, manufactured by government as leverage to make you accept the card. The risk now is tiny. Anyone under 50 was never at any real risk, and all those over 50 have either been vaccinated or had the free choice to be, except an extremely small number who can’t for medical reasons. With the real risk of catching and dying from COVID already tiny, the government is keeping us locked down for reasons other than safety: to try to force us to accept digital ID cards as a condition of getting some freedom back, or the illusion of freedom, temporarily.
OK, so what’s the big deal with having one? As the vaccines minister says (paraphrasing), what’s so bad about having a pass to get into the pub if it keeps us all safe? In any case, you already have a passport. It has your full name, a photo that used to look like you, your date of birth and your nationality. But it is paper, and even though it can be machine-read at the airport, you don’t have to carry it everywhere, and it can’t be read without you putting it within centimetres of a reader.
A digital ID card resides on your mobile phone, so location is one extra function your passport doesn’t provide. It knows exactly where you are, and since those you are with will also need one, it will know who you are with, all the time. Very soon, government will know all your friends, family, colleagues and associates, and how often and where you meet. Government will quickly build a full social map, detailing every citizen and how they relate to every other. If they have someone of interest, they can immediately identify everyone that person has contact with. They will know everywhere you have been, and by which means of transport. The photo will be recent too, probably far better quality than the one you took years ago for your passport. So if you attend a demonstration, they will know how you got there, what time you arrived, who you met beforehand and which part of the crowd you were in, and together with surveillance cameras and advanced AI, they will be able to put together a pretty comprehensive picture of your behaviour during that demonstration.
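To show how little machinery that would take, here is a toy sketch of how repeated co-location pings could be turned into exactly that sort of social map. The data format, the names and the ‘seen together twice or more’ threshold are all invented for illustration; a real system would be vastly bigger, but no cleverer.

```python
# Hedged sketch: turning co-location pings into a 'social map'.
# Data format and threshold are assumptions for illustration only.
from collections import defaultdict
from itertools import combinations

# (person, place, hour) sightings, as a location-aware ID app might log them
pings = [
    ("alice", "pub_12", 20), ("bob", "pub_12", 20),
    ("alice", "gym_3", 18),  ("carol", "gym_3", 18),
    ("alice", "pub_12", 21), ("bob", "pub_12", 21),
]

# Group people seen at the same place in the same hour
together = defaultdict(set)
for person, place, hour in pings:
    together[(place, hour)].add(person)

# Count co-occurrences; repeated co-location suggests a relationship
edges = defaultdict(int)
for group in together.values():
    for a, b in combinations(sorted(group), 2):
        edges[(a, b)] += 1

# Anyone co-located twice or more gets flagged as an 'associate'
social_map = {pair: n for pair, n in edges.items() if n >= 2}
print(social_map)  # {('alice', 'bob'): 2}
```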
Another extra function is your medical status. That starts with your COVID status, but the card will also store details of your vaccine appointments, your COVID tests, and a so-far-unspecified range of other medical data. We can safely assume that will include the sort of things you are asked for every time you go near a clinic: your home address, NHS number and GP, your age, sex, gender identity, race and religion, and various aspects of your medical history. Even if they are not included in the first release, government will argue that it is useful to add all sorts of extra medical data ‘to save you time’ and ‘for your convenience’, such as what drugs you are on, what medical conditions you have, what vulnerabilities you have and, importantly, what risks you present to others. Using location, it can also infer your sexual preferences.
Obviously it then becomes even easier to insist that, to ‘protect the NHS’ and ‘keep you healthy’, the app should also monitor your activity, and link to your Fitbit or Apple Watch to make sure you do your best to stay in shape. Some health apps do that anyway, and some people like it, because it’s part of their social activity, and they even get discounted private medical care or free entertainment. But will it mean that if you don’t look after your health by exercising enough, you go to the back of the queue for treatment, or for other government-provided services, or that you no longer get free dental care, free eye checks or free prescriptions? Maybe you won’t be able to buy a tube ticket if the destination is within walking distance, until your health improves. Maybe you will be told to go to the gym instead of the cinema or pub. Maybe, if you do far too little exercise, you will pay more for prescriptions? And since some people are killed by drunk driving, perhaps if you have been in a pub or restaurant, or any place that sells alcohol, your car ignition will be deactivated until you submit a negative alcohol test. It’s very easy to see how these and many other functions could be bolted on once you have a digital ID card. Each will seem to have a reasonable enough justification, if presented with enough spin, to make sure it gets implemented.
It doesn’t have to stop at health. Police will want access to the data too, to ‘control crime’ and ‘ensure our safety’, and will then link it to their various surveillance systems, presumably with the same degree of political bias they routinely apply today, often pushing their own ideology rather than policing actual law. By asking for microphone and camera access, they could have tens of millions of cameras and microphones all over the country for blanket, 360-degree, 24/7 surveillance, using AI to sift through it to check for any potential hate crime, for example, or to detect any suspicious behaviour patterns that might indicate a tendency towards a future crime. Minority Report is only a fraction of what is possible.
These are the types of things already in place in China via its social credit system, though there are many other ‘features’ I haven’t listed. It monitors people’s behaviour via various platforms, and then permits or denies access to various levels of service. If we get digital ID cards, it is inevitable that we will go the whole way down that same route.
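As a toy illustration of that permit-or-deny mechanism, here is a sketch of score-gated services. The thresholds, service names and score are invented, loosely modelled on public descriptions of such systems; they only show how trivially a single surveillance-fed number can gate everyday life.

```python
# Illustrative sketch of score-gated services; all values are invented.
SERVICE_THRESHOLDS = {
    "buy_train_ticket": 600,
    "book_flight": 700,
    "good_school_place": 800,
}

def allowed(score: int, service: str) -> bool:
    """A single number, fed by surveillance, decides what you may do."""
    return score >= SERVICE_THRESHOLDS.get(service, 0)

score = 650  # adjusted up or down by monitored behaviour
for service in SERVICE_THRESHOLDS:
    print(service, "->", "permitted" if allowed(score, service) else "denied")
```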
Police and health authorities might both like your DNA record to be stored too. Then they can ensure you get the best possible health care, or quickly charge someone if any of their relatives has similar DNA to that found (for any reason) at the scene of any crime (real or perceived).
The power to monitor and control the population is irresistible to most politicians, certainly enough to get legislation through, and enough to ensure that powers are renewed every time they come up for review. If they come up for review. The government has already moved the goalposts for restoration of our freedoms many times. At this point, it is becoming less and less likely that we will ever get them back. If digital ID is voted through, or forced through by Johnson bypassing debate, we will never be free again.
All the above dangers arise from government, which, after all, we vote into power. They are supposed to be acting on our behalf to implement the things we vote for. Whether they are trying to do that now, or acting on external pressures from the WEF, UN, China, Russia or other entities, is anyone’s guess. What is certain, though, is that with a government-issued digital ID permanently on your phone, many big IT companies will be very interested. Today, you can use any account and email address, and it doesn’t need to be genuine. For a range of reasons, many people use fake identities for their Google, Yahoo, Facebook, Twitter or Microsoft accounts. Friend and contact lists often bear little resemblance to the groups of people we actually hang out with. With a digital ID, the details are the ones on our birth certificates, the ones we have to share with government. Being able to create social maps would enormously improve the ability to target marketing, so companies like Google and Facebook will love having access to genuine, certified ID, and if that includes lots of other data too, even better. The ways you are marketed to, the quality of service you get, and even the prices you are charged will all change. To make a COVID passport at all useful, it will be necessary to allow other apps to access some or all of the data, and once that data has been accessed by the big IT companies, it will be kept even if the passports later expire. There may be assurances that it will be wiped, but they cannot be guaranteed, and we know from history that companies (e.g. Google) may collect and use private data and then, when caught, claim that a junior employee must have done it by accident and without authorisation.
With cancel culture and assorted activism accessing all this data too, the future could quickly become dystopian. The dangers of COVID passports are enormous. A nightmare police state, with total surveillance, oppression, cancellation, social credit scores, tribal conflicts, social isolation, loneliness and general misery, is simply too high a price for being ‘allowed’ to go to the pub.
We should just go anyway; it’s perfectly safe, and if government objects, we should change the government.
Dr Pearson has been a futurologist for 32 years, tracking and predicting developments across a wide range of technology, business, society, politics and the environment. He graduated in Maths and Physics and is a Doctor of Science. He has worked in numerous branches of engineering, from aeronautics and space technology to cybernetics, AI, biotech, quantum computing, sustainable transport, fashion and cosmetics, and has made 2000+ inventions, including a number in fashion, biotech, AI, quantum computing, renewable energy, energy storage and space travel. He has written 17 books and made over 850 TV and radio appearances. He is a Chartered Member of the British Computer Society and a Fellow of the World Academy of Art and Science.