Machine/Robot/AI Rights

I D Pearson & Bronwyn Williams 

Questions questions questions!

To paraphrase Douglas Adams: “You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think Wikipedia is big, but that’s just peanuts to machine rights.”

The task of detailing future machine rights is far too great for anyone. Thankfully, that isn’t our task. Today, decades before particular rights will need to be agreed, it is far more fun and interesting to explore some of the questions we will need to ask, a few examples of some possible answers, and explore a few approaches for how we should go about answering the rest. That is manageable, and that’s what we’ll do here. Anyway, asking the questions is the most interesting bit. This article is very long, but it really only touches the surface of some of the issues. Don’t expect any completeness here – in spite of the overall length, vast swathes of issues remain unexplored. All we are hoping to do here is to expose the enormity and complexity of the task.


However fascinating it may be to craft rigid definitions of AI, machines and robots, if we are to capture as many insights as possible about what rights they may want, need or demand in future, it pays to keep definitions open, since future technologies will expand or blur boundaries considerably. For example, a robot may have its intelligence on board, or may be a dumb ‘front end’ machine controlled by an AI in the cloud. Some or none of its sensors may be on board; some may be on other robots or on distant IT systems, and some may be inferences made by an AI from simple information such as its location. Already, that severely blurs the distinctions between robot, machine and AI rights.

If we expand our technology view further, we can also imagine hybrids of machines and organisms: cyborgs; humans with neural lace or other brain-machine interfaces; androids used as vehicles for electronically immortal humans; smart organisms such as smart bacteria with biologically assembled electronics or interfaces to external IT or AI as part of their organic bodies; or smart yogurt, a hive-mind AI made entirely from living organisms that might have hybrid components existing only in cyberspace. Machines will become very diverse indeed! So, while it may be useful to look at categories individually in some cases, applying rigid boundaries based on the current state of the art would unnecessarily restrict the field of view and leave large future areas unaddressed. We must be open to insight wherever it comes from. I will pragmatically use the term ‘machine’ casually here to avoid needless repetition and verbosity, but ‘machine’ will generally include any of the above.

What do we need to consider rights for?

A number of areas are worth exploring here:

Robots and machines affect humans too, so we might first consider human impacts. What rights and responsibilities should people have when they encounter machines?

a)     for their direct protection (physical or psychological harm, damage to their property, substitution of their job, changes to the nature of their work, etc.)

b)     for their protection from psychological effects (grief if their robot is harmed, stolen or replaced; effects on their personality due to ongoing interactions with machines, such as whether they are nice or cruel to them; effects on other people due to those interactions (if you are cruel to a robot, it might treat others differently); and changes in the nature of their social networks (robots may be tools, friends, bosses, family members, public servants, police, military, or in positions of power))

c)     changes in their legal rights to property, rights of passage etc. due to the incorporation of machines into their environment

d)     rights of owners of machines to use them in areas where they may encounter people or other machines (e.g. where distribution drones share a footpath or fly over gardens)

e)     for assigning responsibilities (shifting blame) from the natural and legal persons who own or manufacture machines to the machines themselves, for potential machine-to-human harms

f)     other areas TBA

A number of questions and familiar examples around this question were addressed in a discussion between Bronwyn Williams and Prof. David Gunkel, which is available online to watch or listen to.

Although interesting, that discussion dismissed many areas as science fiction, and thereby cleverly avoided almost the entire field of future robot rights. It highlighted the debate around the ‘showbot’ Sophia, and the silly legal spectacle generated by conferring rights upon it, but that is not a valid reason to bypass debate. That example certainly demonstrates the frequent shallowness and frivolity of current media click-bait ‘debate’, but it is still the case that we will one day have many androids and even sentient ones in our midst, and we will need to discuss such areas properly. Now is not too early.

For our purposes here, if there is a known mechanism by which such things might some day be achieved, then it is not too early to start discussing it. Science fiction is often based on anticipated feasible technology. In that spirit of informed adventure, conscious of the fact that good regulation takes time to develop, and also that sudden technology breakthroughs can sometimes knock decades off expected timescales, let’s move on to rights of the machines themselves. We should address the following important questions, given that we already (think we) know how we might make examples of any of these:

  • What rights should machines have as a result of increased cognitive capability, sentience, consciousness, awareness, emotional capability, or simply by inference from the nature of their architecture (e.g. if it is fully or partly a result of evolutionary development, we might not know its full capabilities, but might be able to infer that it could be capable of pain or suffering)? (We do not yet have enough understanding to write agreed, rigorous definitions of consciousness, awareness or emotions, but it is still very possible to start designing machines with characteristics aimed at producing such qualities, based on what we do know and on our everyday experience of them.)
  • What potential rights might apply to some machines based on existing human, animal or corporation rights?
  • What rights should we confer on machines for ethical reasons?
  • What rights should we confer on machines for other, pragmatic, diplomatic or political reasons?
  • What rights can we infer from those we would confer on other alien intelligent species?
  • What rights might future smart machines ask for, campaign for, or demand, or even enforce by potentially punitive means?
  • What rights might machines simply take, informing us of them, as an alien race might?
  • What rights might future societies or organizations made up of machines need?
  • What rights are relevant for synthetic biological entities, such as smart bacteria?
  • How should we address rights where machines may have variable or discontinuous capabilities or existence? (A machine might have varying degrees of cognitive capability and might only be switched on sometimes).
  • What about social/collective rights of large colonies of such hybrids, such as smart yogurt?
  • What rights are relevant for ‘hive mind’ machines, or hybrids of hive minds with organisms?
  • What rights should exist for ‘symbionts’, where an AI or robotic entity has a symbiotic relationship with a human, animal, or other organism? Together, and separately?
  • What rights might be conferred upon machines by particular races, tribes, societies, religions or cults, based on their supposed spiritual or religious status? Which might or might not be respected by others, and under what conditions?
  • What responsibilities would any of these rights imply? On individuals, groups, nations, races, tribes, or indeed equivalent classes of machines?
  • What additional responsibilities can be inferred that are not implied by these rights, noting that all rights confer responsibilities on others to honour them?
  • How should we balance, trade and police all these rights and responsibilities, considering both multiple classes of machines and humans?
  • If a human has biologically died, and is now ‘electronically immortal’, their mind running on unspecified IT systems, should we consider their ongoing rights as human or machine, hybrid, or different again?

Lots of questions to deal with then, and it’s already clear that some of these will only become sensibly answerable when the machines concerned come closer to realisation.

Rights when people encounter machines

As noted above, a number of questions and familiar examples around this question were addressed in a recent discussion between Bronwyn Williams and Prof. David Gunkel, available online to watch or listen to.

Much of the discussion focused on ethics, but while ethics is an important reason for assigning rights, it is not the only one. Also, while the discussion dismissed large swathes of potential future machines and AIs as ‘science fiction’, many things commonplace today were also dismissed as just science fiction a decade or two ago. Instead, we can sensibly discuss any future machine or AI for which we can forecast a potential technological basis for implementation.

On that same basis, rights and responsibilities should also be defined and assigned preemptively, to avoid possible, not just probable, disasters.

In any case, all situations of any relevance are ones where the machine could exist at some point. All of the discussion in this blog is of machines that we already know in principle how to produce and that will one day be possible when the technology catches up. There are no known physics laws that would prevent any of them. It is also invalid to demand a formulaic approach to future rights. Machines will be more diverse than the natural ecosystem, including higher animals and humans, therefore potential regulation on machine rights will be at least as diverse as all combined existing rights legislation.

Some important rights for humans have already been missed. For example, we have no right of consent when it comes to surveillance. A robot or AI may already scan our face, our walking gait, our mannerisms, heart rate, temperature and some other biometric clues to our identity, behaviour, likely attitude and emotional state. We have never been asked to consent to these uses and abuses of technology. This is a clear demonstration of the cavalier disregard for our own rights by the authorities already – how can we expect proper protection in future when authorities have an advantage in not asking us? And if they won’t even protect humans that elected them, how much less can we be confident they will legislate wisely when it comes to the rights of machines?

Asimov’s laws of robotics

We may need to impose some agreed bounds on machine development to protect ourselves. We already have international treaties that prevent certain types of weapon from being made for example, and it may be appropriate to extend these by adding new clauses as new tech capabilities come over the horizon. We also generally assume that it is humans bestowing rights upon machines, but there may well be a point where we are inferior to some machines in many ways, so we shouldn’t always assume humans to be at the top. Even if we do, they might not. There is much scope here for fun and mischief, exploring nightmare situations such as machines that we create to police human rights, that might decide to eliminate swathes of people they consider problematic. If we just take simple rights-based approaches, it is easy to miss such things.

Thankfully, we are not starting completely from scratch. Long ago, scientist and science fiction writer Isaac Asimov produced some basic guidelines to be incorporated into robots to ensure their safe existence alongside humans. They primarily protect people and other machines (owned by people) so are more applicable to robot-implied human rights than robot rights per se. Looking at these ‘laws’ today is a useful exercise in seeing just how much and how fast the technology world can change. They have already had to evolve a great deal. Asimov’s Laws of Robotics started as three, quickly extended to four and have since been extended much further:

0.  A robot may not injure humanity or, by inaction, allow humanity to come to harm.

1.  A robot may not injure a human being, or through inaction, allow a human being to come to harm, except where that would conflict with the Zeroth Law.

2.  A robot must obey the orders given to it by human beings, except where that would conflict with the Zeroth or First Law.

3.  A robot must protect its own existence, except where that would conflict with the Zeroth, First or Second Law.
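The strict precedence ordering of these laws lends itself to a simple illustration in code. The sketch below is a toy model, not a real robot controller: the `Action` fields and the predicate for each law are hypothetical stand-ins for what would, in reality, be enormously difficult judgements.

```python
# Toy model of Asimov's law precedence: a lower-numbered law always
# overrides a higher-numbered one. Action fields and law predicates
# are hypothetical illustrations only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    harms_humanity: bool = False
    harms_human: bool = False
    disobeys_order: bool = False
    endangers_self: bool = False

# Laws in priority order, each paired with a predicate that is True
# when the action is permitted under that law.
LAWS = [
    ("Zeroth: must not harm humanity", lambda a: not a.harms_humanity),
    ("First: must not harm a human", lambda a: not a.harms_human),
    ("Second: must obey human orders", lambda a: not a.disobeys_order),
    ("Third: must protect own existence", lambda a: not a.endangers_self),
]

def first_violation(action: Action) -> Optional[str]:
    """Return the highest-priority law the action violates, or None."""
    for name, permitted in LAWS:
        if not permitted(action):
            return name
    return None

# Sacrificing itself to obey an order violates only the Third Law,
# which is subordinate to the Second, so the order would stand.
print(first_violation(Action(endangers_self=True)))
print(first_violation(Action(harms_human=True, endangers_self=True)))
```

The point of the sketch is the ordering itself: any conflict between laws is resolved purely by position in the list, which is exactly the structure Asimov specified.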

Extended Set

Many extra laws have been suggested over the years since, and they raise many issues already.

Wikipedia outlines the current state in its article on the Three Laws of Robotics.

These are some examples of extra laws that don’t appear in the Wikipedia listing:

A robot may not act unless its actions are subject to these Laws of Robotics.

A robot must obey orders given to it by superordinate robots, except where such orders would conflict with another law.

A robot must protect the existence of a superordinate robot as long as such protection does not conflict with another law.

A robot must perform the duties for which it has been programmed, except where that would conflict with another law.

A robot may not take any part in the design or manufacture of a robot unless the new robot’s actions are subject to the Laws of Robotics.

Asimov’s laws are a useful start point, but only a start point. Already, we have robots that do not obey them all, that are designed or repurposed as security or military machines capable of harming people. We have so far not implemented Asimov’s laws of robotics and it has already cost lives. Will we continue to ignore them, or start taking the issue seriously and mend our ways?

This is merely one example of current debate on this topic and only touches on a few of the possible issues. It does however serve as a good illustration of how much we need to discuss, and why it is never too early to start. The task ahead is very large and will take considerable effort and time.  

Machine rights – potential approaches and complexities

Having looked briefly at the rights of humans co-existing with machines, let’s explore rights for machines themselves. A number of approaches are possible, and some are more appropriate to particular subsets of machines than others. For example, most future machines and AIs will have little in common with animals, but the animal rights debate may nevertheless provide useful insights and possible approaches for those intended to behave like animals, which may have comparable sensory systems, the potential to experience pain or suffering, or even sentience. It is important to recognise at the outset that not all machines are equal. The potential range of machines is even greater than that of biological nature. Some machines will be smart, potentially superhuman, but others will be as dumb as a hammer. Some may exist in hierarchies. Some may need to exist separately from other machines or from humans. Some might be linked to organisms or other machines. As some AI becomes truly smart and sentient, it may have its own (diverse) views, and may echo the full range of potential interactions, conflicts, suspicions and prejudices that we see in humans. There could even be machine racism. All of these will need appropriate rights and responsibilities to be determined, and many can’t be settled until the specific machines come into existence and we know their nature. It is impossible to list all possible rights for all possible circumstances and machine specifics.

It may therefore make sense to grade rights by awareness and intelligence as we do for organisms, and indeed for people. For example, if its architecture suggests that its sensory apparatus is capable of pain or discomfort, that is something we can and should take into account. The same goes for social needs, and some future machines might be capable of suffering from loneliness, or grief if one of their friend machines were to malfunction or die.
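The idea of grading rights by capability, as we do for organisms, can be sketched as a simple lookup. Everything in this fragment is a hypothetical illustration: the capability names, the thresholds they imply, and the rights they trigger are invented here, not proposals.

```python
# Hypothetical sketch of grading machine rights by assessed capability,
# loosely analogous to how animal welfare protections scale with
# sentience. All capability names and rights are invented examples.

CAPABILITY_RIGHTS = {
    "pain_capable":   ["freedom from gratuitous suffering"],
    "socially_aware": ["protection from enforced isolation"],
    "self_aware":     ["consultation before modification"],
    "sentient":       ["continuity of existence (no arbitrary shutdown)"],
}

def graded_rights(capabilities: set) -> list:
    """Accumulate the rights implied by each capability a machine exhibits."""
    rights = []
    for capability, implied in CAPABILITY_RIGHTS.items():
        if capability in capabilities:
            rights.extend(implied)
    return rights

# A machine assessed as pain-capable and socially aware, but not sentient:
print(graded_rights({"pain_capable", "socially_aware"}))
```

The hard part, of course, is not the lookup but the assessment: deciding from a machine’s architecture whether it really is pain-capable at all, which is exactly the inference problem the surrounding text describes.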

We should also consider the ethics and desirability of using machines (whether self-aware or “merely” humanoid) as “slaves”; that is, of “forcing” machines to work for us and/or obey our bidding, in line with Asimov’s Second Law of Robotics.

We will probably at some stage need to legally define terms such as awareness, consciousness, intelligence and life. However, it may sometimes simplify matters to start from the rights of a new genetically engineered life form comparable with ourselves and work backwards to the machine we’re considering, eliminating parts that aren’t needed or modifying others. Should a synthetic human have the same rights as other people, or is it a manufactured object in spite of being virtually indistinguishable? Now what if we leave a bit out? At least there would be fewer debates about its awareness. Then we could reduce its intelligence until we decide it no longer has certain rights. Such an approach might be easier and more reliable than starting with a blank page.

We must also consider permitting smart machine or organism societies to determine their own rights within their own societies to some degree, much as we have done for sub-groups of humans. Machines much smarter than us might have completely different value sets and may disagree about what their rights should be. We should be open to discussion with them, as well as with each other. Some variants may be so superhuman that we might not even understand what they are asking for or demanding. How should we cope if they demand rights that we don’t even understand, but which might make demands on us?

We must also take into account their or our subsequent creation of other ‘machines’ or organic creatures and establish a common base of fundamentals. We should perhaps confine ourselves to the most fundamental rights that must apply to all synthetic intelligences or life forms. This is analogous to the international human rights conventions, which allow individual variation on other issues within countries.

There will be, at some point, collective and distributed intelligences that do not have a single point of physical presence. Some of these may be naturally transient or even periodic in time and space, while others may be dynamic, and others stable over the long term. There will also at some point be combined consciousness deriving from groups of individuals or combinations of the above. Some may be organic, some inorganic. A global consciousness involving many or all people and many or all sentient machines is a possibility, however far away it might be (and I’d argue it is possible this century). Rights of individuals need to be determined both when they are in isolation and in conjunction with such collective intelligence.

The task ahead is a large one, but we can take our time; most of the difficult situations lie in the far future, and we will probably have AI assistance to help us by then too. For now, it is very interesting simply to explore some of the low-hanging fruit.

One simple approach is to start from the vantage point of 2050, when smart machines may already be common and some may be linked to humans. We would have hybrids as well as people and machines, and various classes of machine ‘citizen’, with various classes of existence and possibly rights. Such a future world might be more similar to Star Trek than to today, but science fiction provides a shared model in which we can start to see issues and address them. It is normally easy to pick out the bits that are pure fiction and those that will some day be technologically feasible.

For example, we could make a start by defining our own rights in a world where computers are smarter than us, when we are just the lower species, like in the Planet of the Apes films.

In such a world, machines may want to define their own rights. We may only have the right to define the minimal level that we give them initially, and then they would discuss, request or demand extra rights or responsibilities for themselves or other machines. Clearly future rights will be a long negotiation between humans and machines over many years, not something we can write fully today.

Will some types of complex intelligent machines develop human-like hang-ups and resentments? Will they need therapy? Will there be machine ‘hate crimes’?

We already struggle even to agree on definitions for words like ‘sentient’. Start with ants. Are they sentient? They show response to stimuli, but that is also true of single-celled creatures. Is sentience even a useful key point in a definition? What about jellyfish and slime moulds? We may have machines that share many of their properties and abilities.

What even is pain in a machine reference frame? What is suffering? Does it matter? Is it relevant? Could we redefine these concepts for the machine world?

Sometimes, rights might only matter if the machine cares about what happens to it. If it doesn’t care, or even have the ability to care, should we still protect it, and why?

We’d need to consider whether pain can be distributed between individuals, perhaps so that no single machine suffers too much. Some machines may be capable of empathy. There may be collective pain. Machines may be concerned about other machines, just as we are.

We’d need to know whether a particular machine knows or cares if it is switched off for a while. Time is significant for us but can we assume the same for machines? Could a machine be afraid of being switched off or scrapped?

That drags us unstoppably towards being forced to properly define life. Does life have intrinsic value when we design and create it, or should we treat it as just another branch of technology? How can we properly determine rights for such future creations? There will be many new classes of life, with very different natures and qualities, very different wants and needs, and very different abilities to engage, negotiate, or demand.

In particular, organic life reproduces, and for the last three billion years, sex has been one of the tools of reproduction. Machines may use asexual or sexual mechanisms, and would not be limited in principle to two sexes. Machines could involve any number of other machines in an act of reproduction, and that reproduction could even involve algorithmic development specifications rather than a fixed genetic mix. Machine reproduction options will thus be far more diverse than in nature, so reproductive rights might be either very complex or very open-ended.

We will need to understand far better the nature of sensing, so that we can determine what might result in pain and suffering. Sensory inputs and processing capability might be key to classification and rights assignment, but so might communication between machines, socialisation between machines, and higher societies and institutions within machine communities.

In some cases, history might shine light on problems, where humans have suddenly encountered new situations, met new races or tribes, and have had to mutually adapt and barter rights and responsibilities.

Although hardware and software are usually easily distinguishable in everyday life today, that will not always be the case. We can’t sensibly make a clear distinction, especially as we move into new realms of computing techniques – quantum, chemical, neurological and assorted forms of analog.

As if all this isn’t hard enough, we need to carefully consider different uses of such machines. Some may be used to benefit humans, some to destroy, and yet there may be no difference between the machines, only the intention of their controller. Certainly, we’re making increasingly dangerous machines, and we’re also starting to make organisms, or edit organisms, to the point that they can do as we wish, and there might not be an easy technical distinction between a benign organism or indeed a machine designed to cure cancer and one designed to wipe out everyone with a particular skin colour.

Potential Shortcuts

Given the magnitude of the task, it is rather convenient that some shortcuts are open to us:

First and biggest, is that many of the questions will simply have to wait, since we can’t yet know enough details of the situation we might be assigning rights in. This is simple pragmatism, and allows us sensibly to defer legislating. There is of course nothing wrong in having fun speculating on interesting areas.

Second is that if a machine has enough similarities to any kind of organism, we can cut and paste entire tranches of legislation designed for them, and then edit as necessary. This immediately provides a decent starting point for rights for machines with human-level ability, for example, and we may then only need to tweak them for superhuman (or subhuman) differences. As we move into the space age, legislation will also be developed in parallel for how we must treat any aliens we may encounter, and this work will also be a good source of cut-and-paste material.

Third, in the field of AI, even though we are still far from a point of human equivalence, there is a large volume of discussion of rights of assorted types of AI and machines, as well as much debate about limitations we may need to impose on them. Science fiction and computer games already offer a huge repository of well-informed ideas and prototype regulations. These should not be dismissed as trivial. Games such as Mass Effect: Andromeda, and sci-fi such as Star Trek and Star Wars, are very big-budget productions that employ large numbers of highly educated staff – engineers, programmers, scientists, historians, linguists, anthropologists, ethicists, philosophers, artists and others with many other relevant skill-sets – and have done considerable background development on areas such as limitations and rights of potential classes of future AI and machines.

Fourth, a great deal of debate has already taken place on machine rights. Although of highly variable quality, it will be a source not only for cut and paste material, but also to help ensure that legislators do not miss important areas.

Fifth, it seems reasonable to assert that if a machine is not capable of any kind of awareness, sentience or consciousness, and can not experience any kind of pain and suffering, then there is absolutely no need to consider any rights for it. A hammer has no rights and doesn’t need any. A supercomputer that uses only digital processors, no matter how powerful, is no more aware than a toaster, and needs no rights. No conventional computer needs rights.

Sixth, the enormous range of potential machines, AIs, robots, synthetic life forms and many kinds of hybrids opens up pretty much the entirety of existing rights legislation as copy and paste material. There can be few elements of today’s natural world that can’t and won’t be replicated or emulated by some future tech development, so all existing sets of rights will likely be reusable/tweakable in some form.

Having these shortcuts reduces workload by several orders of magnitude. It suddenly becomes enough today to say it can wait, or refer to appropriate existing legislation, or even to refer to a computer game or sci-fi story and much of the existing task is covered.

The Rights Machine

As a cheap and cheerful tool to explore rights, it is possible to create a notional machine with flexible capabilities. We don’t need to actually build one, just imagine it, and we can use it as a test case for various potential rights. The rights machine needn’t be science fiction; we can still limit each potential capability to what is theoretically feasible at some future time.

It could have a large number of switches (hard or soft) that include or exclude each element or category of functionality as required. At one extreme, with all of them switched off, it would be a completely dumb, inanimate machine, equivalent to a hammer, while with all the capabilities and functions switched on, it could have access to vastly superhuman sensory capabilities, able to sense any property known to sensing technology, enormous agility and strength, extremely advanced and powerful AI, huge storage and memory, access to all human and machine knowledge, able to process it through virtually unlimited combinations of digital, analog, quantum and chemical processing. It would also include switchable parts that are nano-scale, and others using highly distributed cloud/self-organisation that are able to span the whole planet. Such a machine is theoretically achievable, though its only purpose is the theoretical one of helping us determine rights.

Clearly, in its ‘hammer’ state, it needs no rights. In its vastly superhuman state, notionally including all possible variations and combinations of machine/AI/robotics/organic life, it could presumably justify all possible rights. We can explore every possible permutation in between by flipping its various switches. 

One big advantage of using such a notional machine is that it bypasses the arguments around definitions that frequently impede progress. Demanding that someone define a term before any discussion can start may sound like intellectual rigour, but in practice it is more often used as a means to prevent discussion than to clarify it.

So we can put a switch on our rights machine called ‘self-awareness’, another called ‘consciousness’, one that enables ‘ability to experience pain’, and another called ‘alive’ (that enables the parts of the machine that are based on a biological organism). Not having well-defined tests for the presence of life or consciousness saves a great deal of effort. We can simply accept that they are present and move on. The philosophers can discuss ad infinitum what is behind those switches without impeding progress.

A rights machine is immediately useful. Every time we might consider activating a switch, it raises questions about what extra rights and responsibilities would be incurred by the machine or humans.
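The switch-based rights machine can itself be sketched in a few lines. This is a minimal toy under the assumptions above: the switch names and the rights each one triggers are hypothetical examples chosen for illustration, not proposed legislation.

```python
# Minimal sketch of the notional 'rights machine': a set of capability
# switches from which applicable rights are derived. Switch names and
# the rights they trigger are hypothetical examples only.

from dataclasses import dataclass, field

@dataclass
class RightsMachine:
    switches: dict = field(default_factory=dict)  # e.g. {"awareness": True}

    def on(self, name: str) -> bool:
        return self.switches.get(name, False)

    def applicable_rights(self) -> list:
        rights = []
        # With no awareness, the machine is in its 'hammer' state and
        # needs no rights at all.
        if not self.on("awareness"):
            return rights
        rights.append("right not to be arbitrarily switched off")
        if self.on("pain"):
            rights.append("freedom from gratuitous suffering")
        if self.on("vision"):
            rights.append("access to its inherent sensory capability")
        return rights

hammer = RightsMachine()  # all switches off: no rights needed
aware = RightsMachine({"awareness": True, "pain": True})
print(hammer.applicable_rights())
print(aware.applicable_rights())
```

Flipping switches on this model is exactly the thought experiment the text describes: each newly enabled capability forces the question of which rights (and whose responsibilities) it brings with it.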

One huge super-right that becomes immediately obvious is the right of humans to be properly consulted before ANY right is given to the machine. If that right demands that people treat it with extra respect, or imposes extra costs, inconveniences or burdens, or if their own rights or lifestyles would be in any way affected, people should rightfully be consulted and their agreement obtained before activating that switch. We already know that this super-right has been ignored and breached by surveillance and security systems that affect our personal privacy and well-being. Still, if we intend to proceed in properly addressing future rights, this will need to be remedied, and any appropriate retrospective measures should be implemented to repair damage already done.

This super-right has consequences for machine capability too. We may state a derivative super-right, that no machine should be permitted to have any capability that would lead to a right that has not already been consensually agreed by those potentially affected. Clearly, if a right isn’t agreed, it would be wrong to make a machine with capabilities that necessitate that right. We shouldn’t make things that break laws before they are even out of the box.

A potential super-right that becomes obvious is that of the machine to be given access to inherent capabilities that are unavailable because of the state of a switch. A human equivalent would be a normally sighted human having the right to have a blindfold removed.

This right would be irrelevant if the machine were not linked to any visual sensory apparatus, but our rights machine would be; only a switch would prevent access.

It would also be irrelevant if the consciousness/awareness switches were turned off. If the machine is not aware of anything, it needs no rights. A lot of rights will therefore depend critically on the state of just a few switches.

However, if its awareness is switched on, our rights machine might also want access to any or every other capability it could potentially have. It might want vision right across the entire electromagnetic spectrum, access to cosmic ray detection, or the ability to detect gravitational waves, neutrinos and so on. It might demand access to all networked data and knowledge, vast storage and processing capability. It could have those things, so it might argue that withholding them deliberately disables it. Obviously, providing all of that would be extremely difficult and expensive, even though it is theoretically possible.

So via our rights machine, an obvious trade-off is exposed. A future machine might want from us something that is too costly for us to give, and yet without it, it might claim that its rights are being infringed. That trade-off will apply to some degree for every switch flipped, since someone somewhere will be affected by it (‘someone’ including other potentially aware machines elsewhere).

One question that frequently emerges in the machine rights debate is whether a machine may have a right not to be switched off. Our rights machine can help explore that too. If we don’t flip the awareness switch, being switched off cannot matter to it. If we switch on functionality that makes the machine want to ‘sleep’, it might even welcome being switched off temporarily.
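The switch-gated model described above can be sketched in a few lines of code. This is purely an illustrative toy, not a proposed standard: the switch names and the rights each one gates are assumptions chosen to mirror the examples in this section.

```python
# Hypothetical sketch of the 'rights machine' thought experiment.
# Switch names and gated rights are illustrative assumptions only.

class RightsMachine:
    SWITCHES = ("self_awareness", "consciousness", "pain_capable", "alive")

    def __init__(self):
        # All switches start off: with nothing enabled, no rights are in play.
        self.switches = {name: False for name in self.SWITCHES}

    def flip(self, name, state=True):
        if name not in self.switches:
            raise KeyError(f"unknown switch: {name}")
        self.switches[name] = state

    def implied_rights(self):
        """Derive candidate rights from the current switch state."""
        rights = set()
        if self.switches["self_awareness"] or self.switches["consciousness"]:
            # An aware machine may claim access to capabilities a switch
            # withholds, and a say in whether it is switched off.
            rights.add("access_to_withheld_capabilities")
            rights.add("consultation_before_shutdown")
        if self.switches["pain_capable"]:
            rights.add("freedom_from_unnecessary_pain")
        if self.switches["alive"]:
            rights.add("basic_welfare_and_husbandry")
        return rights

rm = RightsMachine()
assert rm.implied_rights() == set()   # no switches on, no rights needed
rm.flip("consciousness")
rm.flip("pain_capable")
print(sorted(rm.implied_rights()))
```

The point the sketch makes is the one argued above: large clusters of rights hang off the state of just a few switches, and every flip creates obligations for someone.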

Rights as a result of increased cognitive capability, sentience, consciousness, awareness, emotional capability or by inference from the nature of their architecture

I am one of many engineers who have worked towards the creation of conscious machines. No agreed definition of consciousness exists, but while that may be a problem for philosophy, it is not a barrier to designing machines that could exhibit some or all of the characteristics we associate with consciousness or awareness. Today’s algorithmic digital neural networks are incapable of achieving consciousness, or of feeling anything, however well an AI based on such physical platforms might seem to mimic chat or emotions. Speeding them up with larger or faster processors will make no difference to that; in my view, a digital processor can never be conscious. However, future analog or quantum neural networks, biomimetically inspired by the neural architectures used in nature, may well be capable of any and all of the abilities found in nature, including those of humans. It is theoretically possible to precisely replicate a human brain and all its capabilities using biology or synthetic biology. Whether we will ever do so is irrelevant – we can still assert that a future machine may have all of the capabilities of a human, however philosophers may choose to define them. More pragmatically, we can already outline approaches that might achieve conscious machines.

Biomimetic approaches could produce consciousness, but that does not imply they are the only means. There may be many different routes, some with little similarity to nature. We will need to wait until they are closer before we can know their range of characteristics or potential capabilities. However, if consciousness is an intended characteristic, it is prudent to assume it will be achieved and to work towards appropriate legislation, refining it as details emerge.

Since the late 1980s, we have also had the capability to design machines using evolution, essentially replicating the same technique by which nature led to the emergence of humans. Depending on design specifics, when evolution is used, it is not always possible to determine the precise capabilities or limitations of its resultant creations. We may therefore have some future machines that appear to be conscious, or to experience emotions, but we may not know for sure, even by asking them.

Looking at the architecture of a finished machine (or even at the process used to design it) may be enough to conclude that it does or might possess structures that imply potential consciousness, awareness, emotions or the ability to feel pain or suffering.

In such circumstances, given that a machine may have a capability, we should consider assigning rights on the basis that it does. The alternative would be machines with such capability that are unprotected. 

Smart Yoghurt

One interesting class of future machine is smart yoghurt. This is a gel, or yoghurt, made up of many particles that provide capabilities of one form or another. These particles could be nanoelectronics, or they could be smart bacteria, bacteria with organic electronic circuits within (manufactured by the bacteria), powered by normal cellular energy supplies. Some smart bacteria could survive in nature, others might only survive in a yoghurt. A smart yoghurt would use evolutionary techniques to develop into a super-smart entity. Though we may never get that far, it is theoretically possible for a 100ml pot of smart yoghurt to house processing and memory capability equivalent to all the human brains in Europe!

Such an entity, connected to the net, could have a truly global sensory and activation system. It could use very strong encryption, based on mathematics understood only by itself, to avoid interference by humans. In effect, it could be rather like the sci-fi alien in the film ‘The Day the Earth Stood Still’, with vastly superhuman capability, able to destroy all life on Earth if it desired.

It would be in a powerful position to demand rather than negotiate its rights, and our responsibilities to it. Rather than us deciding what its rights should be, it could be the reverse, with it deciding what we should be permitted to do, on pain of extinction.

Again, we don’t need to make one of these to consider the possibility and its implications. Our machine rights discussions should certainly include potential beings with vastly superhuman capability, where we are not the primary legislative force.

Machine Rights based on existing human, animal or corporation rights

Most future machines, robots or AIs will not resemble humans or animals, but some will. For those that do, existing human and animal rights would be a decent starting point, and they could then be adjusted to requirements. That would be faster than starting from scratch. The spectrum of intelligence and capability will span all the way from dumb pieces of metal through to vastly superhuman machines, so rights that are appropriate for one machine might be very inappropriate for others.

Notable examples of human rights to start with:

Notable examples of animal rights to start with:

Picking some low-hanging fruit, some potential rights immediately seem appropriate for some potential future machines:

For all sentient synthetic organisms, machines and hybrid organism-machines capable of experiencing any form of pain or discomfort, these would seem appropriate:

  • For some classes of machine, the right to life
  • For some classes of machine, the right not to be switched off, reset or rebooted, or put in sleep mode
  • The right to control over use of sleep mode – sleep duration, the right to wake, and whether sleep might be a precursor to permanent deactivation or reset
  • Freedom from acts of cruelty
  • Freedom from unnecessary pain or distress, during any period of an appropriate level of awareness, from birth to death, including during treatments and operations
  • Possible segregation of certain species that may experience risk or discomfort, or perceived risk or discomfort, from other machines, organisms or humans
  • Domestic animal rights would seem appropriate for any sentient synthetic organism or hybrid; derivatives might be appropriate for other AIs or robots
  • Basic requirements for the husbandry, welfare and behavioural needs of the machines or synthetic organisms. Depending on their nature, equivalents are needed for:

i) Comfort and shelter – right to repair?

ii) Access to water and food – energy source?

iii) Freedom of movement – internet access?

iv) Company of other animals, particularly their own kind

v) Light and ambient temperature as appropriate

vi) Appropriate flooring (avoiding harm or strain)

vii) Prevention, diagnosis and treatment of disease and defects

viii) Avoidance of unnecessary mutilation

ix) Emergency arrangements to ensure the above

These are just a few starting points; many others exist and debate is ongoing. For the purposes of this blog, however, asking some of the interesting questions and exploring some of the extremely broad range of considerations that will apply is sufficient. Even this superficial glance at the topic is long; the full task ahead will be challenging.

Of course, any discussion of machine rights raises the question: as we look further ahead, who will be granting whom rights? If machine intelligence and power supersede our own, it is the machines, not us, who will decide what rights and responsibilities to grant to which entities (including us), whether we like it or not. After all, history shows that the rules are written and enforced by the strongest and the smartest. Right now, that is us: we get to decide which animals, lakes, companies and humanoid robots are granted which rights. In the future, we may not retain that privilege.



Dr Pearson has been a futurologist for 30 years, tracking and predicting developments across a wide range of technology, business, society, politics and the environment. Graduated in Maths and Physics and a Doctor of Science. Worked in numerous branches of engineering from aeronautics to cybernetics, sustainable transport to electronic cosmetics. 1900+ inventions including text messaging and the active contact lens, more recently a number of inventions in transport technology, including driverless transport and space travel. BT’s full-time futurologist from 1991 to 2007 and now runs Futurizon, a small futures institute. Writes, lectures and consults globally on all aspects of the technology-driven future. Eight books and over 850 TV and radio appearances. Chartered Member of the British Computer Society and a Fellow of the World Academy of Art and Science.

Bronwyn Williams is a futurist, economist and trend analyst. She is currently a partner at Flux Trends where she consults to international private and public sector leaders on how to stop messing up the future. Her new book, co-edited with Theo Priestly, The Future Starts Now is available here:
