Activism and biodiversity may not be a good combo

Pic says it all

A vaccine-based ID, tracking and control system

I read an article the other day about delivering vaccines using microneedle arrays:

https://medicalxpress.com/news/2021-10-needle-free-covid-vaccine.html

The patch described in the article contains 5000 micro-needles and is intended for COVID vaccine delivery. I do not believe it has any other intent. However, it reminded me of some work I did decades ago.

Just over twenty years ago, I invented a whole new field of technology that I called Active Skin, a 5 layered system comprising two layers in the skin (dermis and epidermis), one printed on the skin surface, a stuck on ‘membrane layer’, and a detachable device layer. Over the next few hours, we came up with 250 potential uses, and that expanded to 600 over the following month. My employer at the time didn’t consider the invention ‘core business’ so didn’t back it with any development funding or even patents, but that reflects on the wisdom of having accountants run what should be a technology company, not on the value of the invention.

Active skin is a vast field with very diverse functionality, open to very diverse motivations. I started my career in the defence industry and, as a career habit, I’ve always looked at any new concept first with my defence hat on, asking how it might be used in conflict or to gain advantage over an adversary, and how an adversary (whether a lone wolf, a state, a criminal or a terrorist group) might use it with nefarious intent. If nothing else, those tend to be the most interesting areas to look at, even if commercial interests in a regulated environment might dictate alternative directions of development.

You don’t want to read a long blog, so here is the idea in a nutshell:

A patch housing a micro-needle array can be used to implant invisibly small skin conduits (tiny tubes) into an area of skin, e.g. on your wrist. They can be opened and closed electronically. (An alternative delivery technique is to use puffs of compressed air, such as that developed by Powderject to blast small particles into the skin. That was also developed for pain-free inoculation. But a microneedle array would enable higher capability and precision.)

Either or both the micro-needle array and the area of conduits can be used to dispense medication or other substances or small devices into the skin.

Prior to implanting, conduits can be pre-loaded with invisibly small skin capsules – micron-sized devices that fit easily and invisibly among skin cells, coated with titanium alloy or any alternative that prevents rejection by the body’s immune system.

Subjects could be told that the patch (or compressed air device) is simply a painless way of delivering medicine such as vaccine. They need not know of anything else it does and would not be able to tell simply by inspection or sensation. Alternatively, they could be knowingly having electronic functionality implanted for many potential reasons. The point here is that disclosure is optional and suspicion can easily be diverted.

By opening the skin conduits at any future time, capsules can be added, removed, serviced or replaced. Benign devices could be replaced by malign ones.

Without any need for physical contact, innocent devices could be remotely reprogrammed for alternative purposes.

Skin capsules may contain a wide range of electronics, sensors, or micro-mechanical devices. They can be charged by induction, store electrical charge in capacitors, and be discharged for electronic stimulation purposes.

Capsules could communicate with external IT over ranges varying from microns to metres.

A digital ID can easily be temporarily or permanently implanted either via microneedles, puffs of air, or via skin conduits. It could be read electronically (e.g. via smartwatch, fitness device, medical equipment, or any skin contact such as touching a display or button), optically (e.g. a distant IR laser or LED) or by conventional radio means (e.g. RFID, NFC).

A skin capsule that is 5 microns across could house a 3 micron sphere packed with electronics. In 2001, we assumed 10 nanometre electronics would be around by the time the active skin field emerged, and in 2021 that has been commonplace for years. It would be possible to pack many thousands of transistors into each capsule. It doesn’t have to be 10nm, but that level allows highly sophisticated devices. Anything smaller allows even more.
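As a rough sanity check on the ‘many thousands of transistors’ claim, here is a back-of-envelope sketch. The 3 micron sphere and the 10nm node come from the text; the transistor footprint, layer spacing and derating factor are illustrative assumptions of mine.

```python
import math

sphere_diameter_nm = 3000   # 3 micron electronics sphere (from the text)
feature_nm = 10             # assumed 10nm process node (from the text)

# Assumption (mine): one transistor plus its share of wiring occupies a
# square roughly 12 features on a side - comparable to real 10nm-class
# densities of tens of millions of transistors per mm^2.
pitch_nm = 12 * feature_nm
per_layer = math.pi * (sphere_diameter_nm / 2) ** 2 / pitch_nm ** 2

# Assumption (mine): 3D-stack one device layer per 150 nm of sphere height,
# derated by half because layers near the poles are smaller than the equator.
layers = 0.5 * sphere_diameter_nm / 150

print(f"~{per_layer:.0f} transistors/layer x {layers:.0f} layers "
      f"= ~{per_layer * layers:,.0f} total")
```

With those assumptions, a few hundred transistors per 2D layer stacks up to several thousand across the sphere, so the claim looks plausible.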

Given their close proximity and the relatively easy passage of IR light through skin tissue, the capsules could also link optically to each other to make up a very sophisticated appliance.

An array of 5000 skin capsules could easily provide a wide range of IT functions, such as sensing blood chemistry and nerve activity, recording nerve activity and skin temperature/resistance/blood flow, and using embedded AI to interpret the activity and then report to an external device. It could be programmed and updated every time the person comes within range of a transmitter, and in between, act under control of the AI. Obviously it could also do anything a Fitbit can, as well as record your conversations. Precise relative location coupled to nerve monitoring means it could also detect what you type, e.g. usernames, passwords and messages.
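To make that last claim concrete, here is a minimal, entirely hypothetical sketch of keystroke inference: a nearest-centroid classifier mapping per-keystroke nerve-activity features to keys. The data, features and signal model are all synthetic inventions for illustration; real signals and achievable accuracy are unknown.

```python
import numpy as np

rng = np.random.default_rng(0)
KEYS = list("abcdef")

# Hypothetical ground truth: each key press produces a characteristic
# 3-feature pattern (e.g. relative activity at three monitored nerve sites).
true_pattern = {k: rng.normal(size=3) for k in KEYS}

def capture(key):
    """Simulate one noisy nerve-activity reading for a key press."""
    return true_pattern[key] + rng.normal(scale=0.1, size=3)

# 'Training': average 50 labelled presses per key into a centroid.
centroid = {k: np.mean([capture(k) for _ in range(50)], axis=0) for k in KEYS}

def infer(reading):
    """Classify one keystroke by nearest centroid."""
    return min(KEYS, key=lambda k: np.linalg.norm(reading - centroid[k]))

typed = "badcafe"
print("".join(infer(capture(ch)) for ch in typed))  # recovers "badcafe"
```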

A patch of active skin that you didn’t even know you had could monitor and record your nerve activity, your emotions (to some degree), your health, location, proximity to others and their identities, and record and analyse your conversation – by voice or social media.

Another of the initial inventions for active skin was military use to police prisoners. The idea was that captured soldiers could be quickly printed with a patch of active skin, then rounded up, and literally a line in the sand drawn around the group. Any prisoner attempting to cross that line would receive a pulse of intense pain which would continue until they returned to the enclosed area.

In 2001, this technology was all easily foreseeable. Microneedle arrays have been around for over a decade, the Powderject drug delivery system even longer. As far as I know, skin conduits and skin capsules don’t exist yet, but they could be made now. 10nm electronics has existed for years, and body-safe encapsulation of tiny electronic devices is feasible even if it isn’t publicly available yet. So in principle, a sufficiently capable manufacturer could make all of this tomorrow. In fact, they could have made it at any time in the last several years.

A large company or state could therefore make a system tomorrow that uses a widespread vaccination programme as the means of access to implant an array of skin capsules in the skin of most of the population without anyone knowing. (Skin conduits are optional, since the capsules could easily be implanted via microneedles alone if they won’t need to be extracted later, but conduits would make the system easier to maintain.)

That system could act as a full digital identity that can be read from far away, that records and analyses every person’s behaviour, health, conversation and activity, and is capable of detecting and automatically punishing them if they were to disobey a rule. It would be easy to link the level of monitoring, level of punishment, or appropriate rule-set to a person’s identity and social credit score. Obviously, using it to create pain in a recipient would give the game away, so that function wouldn’t be activated by the controllers until everyone has been treated; otherwise people might resist.

This almost certainly doesn’t exist anywhere yet, but it could any time soon. Makes you think, doesn’t it?

The Metaverse – one of countless variants of virtuality.

My biggest ever error as a futurist was in 1991, just before I first played with VR on a Virtuality machine, when I predicted that VR would overtake TV as a form of recreation by 2000. It seemed obvious that it would. I estimated the approximate resolutions needed to make things sufficiently acceptable, and derived the computing power to fill a typical display with the virtual components a viewer would see at a time, then estimated how long that would take to arrive. I got 1998, and allowed a couple of further years for the market to take off enormously.
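For illustration, that style of estimate looks something like the sketch below. Every number in it is an assumption of mine chosen to show the method, not a figure from the 1991 exercise (which landed on 1998).

```python
import math

# Illustrative assumptions (mine, not the 1991 inputs):
pixels_per_eye = 1000 * 1000   # resolution judged 'sufficiently acceptable'
eyes, fps = 2, 60              # stereo rendering at a comfortable frame rate
ops_per_pixel = 100            # rendering work per pixel per frame

required_ops = pixels_per_eye * eyes * fps * ops_per_pixel  # ~1.2e10 ops/s

# Project forward from an assumed 1991 consumer baseline, with Moore's-law
# doubling of affordable compute every 18 months.
baseline_ops_1991 = 1e7
doublings = math.log2(required_ops / baseline_ops_1991)
print(f"arrives ~{1991 + doublings * 1.5:.0f}")  # ~2006 with these toy inputs
```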

Before moving on, it’s worth looking at some of the reasons I got it wrong. First, computers did get better that quickly, but most of the increased power and memory was wasted by increasingly inefficient software practices. That has continued to be the case ever since. Secondly, I had assumed far too fast a market take-up, but in my defence, that was my first ever project in futurology.

Thirdly – and this wasn’t predictable so not my fault – Dow Corning was sued for problems allegedly caused by their breast implants. The fact that the case was highly dubious and demanded enormous compensation for something Dow Corning may well not have been the cause of must have absolutely terrified corporate lawyers all over the world. A few pieces of evidence were emerging that people using VR had become disoriented and one or two had minor accidents, while a few others felt eye strain. Any lawyer with a three-digit IQ would have considered it extremely likely that there would be huge class actions against anyone developing VR visors, brought by no-win, no-fee companies on behalf of every future teenager who developed a squint, regardless of whether it was caused by VR or something else. In my view, that probably delayed visors by decades, while poor software practices probably delayed the technological capability by a decade too.

We have since seen some VR and AR appear, and it is far higher quality than I assumed was needed when I made my prediction and calculations, so I certainly have to accept that I was 100% wrong on the appeal and market uptake rate. It is worth remembering this analysis when looking at potential future tech and markets. I was at the front edge of IT research but still managed to be very wrong.

Moving on, we’re seeing endless citation of the term ‘Metaverse’, of which Wikipedia says:

the word “Metaverse” is made up of the prefix “meta” and the stem “verse”; the term is typically used to describe the concept of a future iteration of the internet, made up of persistent, shared, 3D virtual spaces linked into a perceived virtual universe.

It’s nice Wikipedia is still a credible source of information for those things that have no possible political angle. It isn’t all biased.

Hang on. This ‘metaverse’ represents such a blinkered, limited vision of the future I am astonished it has been given the dignity of a name.

Internet? Persistent? Shared? 3D? Virtual? Spaces? That makes the Metaverse one of 250 billion variations available.

We used to use the term ‘cyberspace’ to describe the notional space that existed inside the IT. Nothing in our understanding of cyberspace ever limited that virtual ‘universe’ to any of those words. The IT industry knew 25 years ago that combining virtual worlds with the real world would one day be a lucrative market area, and that ‘augmented reality’, as it is now known, would sit alongside VR as two of the headline markets, but the assumption that they would be limited to persistent, shared or even 3D spaces was absent. We saw the opportunities in their full glory. If this Metaverse is meant to represent Newthink around cyberspace, it needs work. Lots of it. It sucks.

My 1998 paper Cyberspace: from order to chaos and back won the best paper award when it was finally published in the Jan 2000 BT Engineering Journal. Its first key point is that there are essentially three domains: physical, mental and virtual. The physical domain is what we see all around us. The virtual domain, with all its countless variants that we used to loosely call cyberspace, is just 1s and 0s inside our IT (though analog signals or quantum processes could also form part of it). The mental domain is everything inside our minds – culture, memories, imagination and so on. Some people might add a fourth, a spiritual domain. As a techie, I acknowledge its existence (which obviously doesn’t depend on the existence of any gods – atheists can still have spiritual experiences), but the only parts of it that can be fabricated also exist in the mental domain. We can’t manufacture a spirit, just images or sculptures of how we might imagine one.

Many things exist solely in one of the domains. A pebble that has never been seen exists solely in the physical world. A childhood memory exists purely in mental space. The virtual world models used by robots exist only in cyberspace. However, most market value exists where the domains meet. So there is huge value where physical meets mental. Objects become valuable because people want them; a filing cabinet is valuable because it physically implements a mental idea, a pencil because it lets us write an idea down. Where mental meets virtual, stories become valuable when someone writes a book or makes a film, and computer games and VR create value by letting us see and interact with virtual things.

Augmented reality tries to combine all three, overlaying mental concepts onto the physical world as it appears on our visor, mapping physical-world sensor data onto virtual objects, and letting us physically interact with physical things via virtual intermediation. I’ve often said that the enormously valuable world wide web resulted from convergence of computing and telecomms, but augmented reality will be a vastly bigger market, because it results from convergence of the entire physical, mental and virtual domains. There’s gold in them there boundaries, but it’s also worth noting that we have only scratched the very surface of the virtual domain so far, and much of value might lie within it, as well as at the boundaries, even if much is only accessible to our AI and machines.

The Metaverse as described above does allow some of this and will be valuable as far as it goes. However, it excludes almost all potential realizations of this convergence and their potential markets.

Sure, persistence is useful, but so is transience, volatility. Shared is valuable, but so is private, so is corporate. And so on. When we look at the full scope of convergence, it is helpful to consider dimensions, i.e. the ways in which you can vary things. A mathematician typically picks dimensions that are orthogonal, that can all be varied independently of each other, such as height, width, depth, colour, temperature, price.

Here are two diagrams from my paper:

I listed several potential variants for each of 14 dimensions, and each option in each dimension can be used with any option from the others – 250 billion combos. But I didn’t run out of dimensions to include, or even of variants within them; I ran out of space. For example, I didn’t list the communications dimension. It could use the internet, or a global superhighway, or a mobile phone network, a satellite network, a mesh, sponge, ad-hoc, peer-to-peer or hybrid network, or letters, or CD in the post, etc etc. I didn’t list the operating system dimension, many options again. Or the display dimension – visor, phone screen, TV, computer monitor, goggles, active contact lenses. Or style of user interface. Or who pays and all the variant business models. Or who chooses – you, the AI, the provider, government, a distributed conscience system… I could go on and on. I also overlooked many key variants (e.g. presentation via braille, or haptics, or active skin stimulation) and almost certainly still am.

If there are 25 useful dimensions (there may be many more), and 10 variants in each one, then there are at least 10^25 potential ways in which they can be combined. 10 million billion billion. That makes 250 billion look like a drop in the ocean. What about our Metaverse? ‘Shared’ is only one tenth of the sharing possibilities. ‘Internet’ is one tenth of the network infrastructure possibilities. ‘Persistent’ is only one tenth of the time-consistency possibilities. ‘3D’ is only one tenth of the immersion possibilities. ‘Virtual spaces’ are only one tenth of their dimension once we start to account for all the different kinds of AI and robots and machines that will also interact with virtual universes. Even the word ‘linked’ is only a tenth of a connectivity dimension, and ‘perceived’ is one tenth of the potential there too. Is a tree perceived by an AI or robot that isn’t conscious? ‘Universe’? Why not multiverse, subverse, hyperverse, hybriverse or whatever? Now I’m just making words up for things that don’t exist yet, but could and maybe will. With just those 8 dubious words in its Wikipedia definition needlessly limiting it to tiny fractions of the potential options, the Metaverse already limits itself to 1/100,000,000 of the potential market, and reading between the lines, almost certainly adds many more zeros onto that via the many unspecified dimensions.

So you see why I’m annoyed at this suddenly fashionable term ‘metaverse’.

But let’s quickly look at that 10^25 figure. If a software engineer was told to write a package that would allow businesses or individuals or governments to enable virtuality with all these dimensions, how long would it take to try every single one just for an instant to make sure it works? If a million software engineers could somehow collaborate and get loads of AI to help them, with unlimited computing power, maybe they could explore a million every second. At one million every second, it would take 10 billion billion seconds to explore them all. 300 billion years, 23 times the age of the universe.
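A quick back-of-envelope check of those numbers:

```python
# 8 limiting words, each keeping roughly a tenth of one dimension's options
print(f"1 in {0.1 ** -8:,.0f}")            # 1 in 100,000,000

combos = 10 ** 25                          # 25 dimensions x 10 variants each
seconds = combos / 10 ** 6                 # at a million combinations/second
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.2e} years, ~{years / 13.8e9:.0f}x the age of the universe")
# -> 3.17e+11 years, ~23x
```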

Cyberspace is big, very big. It cannot ever be fully explored. Of course we should try to spot the most valuable combinations and most lucrative potential markets. But the Metaverse blindfolds and deafens us and ties our hands and feet together before we start.

A distributed conscience system

It’s ages since my last post so I thought I’d better write something.

It seems some of the things I designed in the early 1990s when I worked in Cybernetics, and my early-2000s inventions – active skin, digital air, ground-up intelligence and ultra-simple computing – are now exactly what we need to ensure people behave. What with COVID vaccines, gender ideology, critical race theory, controlling hate speech, and climate alarmism with its inevitable consequential restrictions, our chiefs are going to need every tool they can get to ensure compliance on an increasing range of issues by a population comprised of the obedient and the difficult.

Starting with the first of these, it is clear that in areas such as getting vaccinated against COVID, some people are refusing, and many of those who have had it would like to see them forced to take it. The vaccine passports in various stages of introduction around the world were initially intended (officially) to show whether people are safe or likely plague carriers, but we know for certain that even double-vaccinated people can still get the virus and still infect others with it, so the passports don’t achieve that goal, and really just show that you have had your jabs. The slightly more cynical of us would argue that vaccine passports are essentially nothing more than obedience certificates, and still more cynical people would argue that they are just another foundation stone for The Great Reset. I’ll get back to that later.

So where does conscience come in?

Taking your jabs is what the system is loudly telling us is the right thing to do – government, the media and those nutters who yell at you in the supermarket if you walk closer than 2m. The system with its rules is the ‘conscience’, and the vaccine passport is just a simple tool that helps police it, certifying that you have done as you are told and had your jabs. Getting the passport provides a nice clear conscience, while not having it will soon label you clearly as unclean, a trouble-maker, an outcast, a sinner if you like. The technology platform can easily be extended to cover other aspects of health, or compliance with pretty much any other directive – the NHS app is designed that way in fact, at least in the UK.

Linked via your mobile phone to your biometrics, your health records, worn health-monitoring devices and their knowledge of your body (with their insights into your weight, activity, blood chemistry, nerve activity, heart rate, some emotions), your payments, banking, social media, where you are, who you’re with, what you’re doing and what you and your companions are saying, it becomes very rapidly clear that your behaviour and compliance with the rules across a very wide range of areas can be monitored and policed in great detail. It would be as if we had a conscience that tells us the official right and wrong across a wide range of areas, backed up with a system that responds with privileges, permits, restrictions or punishments accordingly. The Chinese Social Credit System implemented much of this in China years ago. Our Western governments have now discovered just how useful it could be.

There are two ways this could happen (it’s possible in principle to get both). If states implement this, as many seem determined to, we’d rightly call them authoritarian, but it could also arise from pressure groups, building on their successes forcing people and companies to comply with critical race theory and gender ideology, or declare support for BLM, or to strictly limit their carbon footprint. It is not unimaginable that pressure groups could start to issue electronic certificates to those who ‘take the knee’ or sign a pledge, or pass a CRT course, or buy a heat pump. Taking a religious Judeo-Christian model as inspiration, and bearing in mind the pseudo-religious nature of some of these things, they could have the sinners, the ordinary people, the priests and high priests, the scribes and pharisees, all with their assorted certifications, passes and privileges embedded electronically in their passports. Interestingly, also taking that religious model, God is typically assumed to know everything everyone does, says and thinks, i.e. a total surveillance system, and God is the source of our conscience, so that fits too. Unlike Judeo-Christianity, though, judging by the exposure, the deplatforming, the cancelling, the reporting for hate crimes and the general mob-rule oppression associated with this new kind of conscience, it is clear they forgot to implement any kind of repentance, forgiveness or mercy.

The state implementation is clearly centralised, or at least would be if all states were acting independently, in their own time-frames, with their own systems and rules and ‘conscience’. If there was some sort of world government or treaty or even powerful enough group-think that could make a system that is truly global, then a decentralised solution could be implemented.

The activist/pressure group route already permeates most countries sufficiently to start implementation of the technological foundations for a truly distributed conscience system.

I’ve never been any kind of activist so I have to make a few guesses as to likely objectives and approaches, but looking at the technology solutions and capability I know are feasible (not least because I have designed some of them), it seems possible or even likely that one day we will have a distributed conscience system (DCS) that:

produces an agreed secular moral framework, a reference of rights and wrongs that morally upstanding people should adhere to (and presumably some well thought out commandments);

integrates rules from allied or approved ideologies into a broad scope conscience and therefore could raise members and funding from contributors across their domains;

rewards members with continuous moral affirmation, praising them for doing the right thing, and warning them when there is a likelihood of stepping over a line;

rewards members with social belonging to a group of similarly ‘good people’;

offers levels of status within the membership, hence potential self-actualisation, certificated moral superiority;

offers financial inducements such as special offers and discounts to a rapidly growing number of participating enterprises;

provides mechanisms to implement guilt, shame and punishment and to clearly label and expose the guilty so that morally upright members can avoid or look down upon them;

provides mechanisms for members to highlight and expose other members who might deviate from the moral path;

provides mechanisms for trials and justice for the accused, and mechanisms for recompense if innocent;

intermediates in access to pretty much any kind of activities, services, places and facilities. The number of these would grow gradually as penalties for non-participation increase. At first, participation in the system could be entirely voluntary, with small or even no required financial contributions, but enterprises would gain privileged access to members of the DCS or be able to offer exclusive services to them. As it grows, the value of being a member and gaining access to this closed market grows, while penalties for not participating would also grow, eventually extending to being excluded from doing business with DCS members. Eventually it could become near impossible to run a profitable enterprise without participation and certification. It is a one-way membrane. The same applies of course to individuals, as the benefits attract people until critical mass, and thereafter, penalties for not belonging increase until it becomes impossible to have any kind of life without being a member;

continuously records degree of compliance or disobedience to every part of the conscience;

is capable of linking to technology embedded within the skin, i.e. active skin technology, to monitor and record various aspects of the blood passing in capillaries that might indicate ailments, disease, consumption of immoral substances, or presence of antibodies, viruses, technical indicators of vaccines (such as quantum dots, chemical signatures, electronic particles) or any other introduced artifacts for whatever future purposes may arise;

using its location within the skin and proximity to the peripheral nervous system, the system could monitor and record nerve impulses. It could also reproduce these same impulses into the same nerve fibres by recreating the same voltages, thus recreating the same sensation as was recorded. This offers the potential to provide extra benefits such as enhancing the degree of multi-sensory immersion for AR, VR, computer games or distance communication;

as work from home and distance socializing become more important to achieve low carbon living for example, such ability to recreate the feeling of a handshake or remote physical interaction with objects would prove a major benefit – for those wise enough to become members of the DCS;

once critical mass of the DCS has been achieved, it will become possible to activate the second purpose of this technology, which is to create discomfort or pain. Having already accepted the implants as part of initial compliance, people would not then be able to remove them. The benefits of joining after critical mass, together with the high penalties for not being a member, would make it entirely possible to still demand the implants for new members;

consequently, every member of the DCS, eventually almost everyone, would have the inbuilt means for the DCS to warn them via discomfort any time they may be approaching the line between right and wrong. This might be an activity, their language, their words, social media engagement, approaching a forbidden geographic location, straying too far from their proper location, or obviously associating with a non-member. The degree of discomfort could vary appropriately between mild vibration or a sensation of hot or cold for simple warning purposes, through to extreme pain if someone violated the moral code, or tried to go somewhere they shouldn’t be, or questioned or criticised the DCS or a favoured affiliate, or worst of all, refused to accept a new implant or to force their new baby to have one. If someone tried to shield their active skin from the system by means of a Faraday cage or just a foil armband, that would be easily detectable and immediately punishable. Avoidance of pain would mean continuous reception of the system signal, obviously appropriately timestamped, signed and encrypted to avoid counterfeiting (see the first sketch after this list);

the DCS hardware resident within the body would be powered using the body’s own energy supply, either directly using glucose or indirectly using thermal gradients. Even if external hardware were somehow deactivated everywhere at once, this would be able to carry on the core working of the system, inducing severe pain until the external kit is returned to normal function;

is tamper-proof. Once the moral framework, moral principles and commandments are agreed by the moral elite, and are ascertained to represent the pinnacle of human moral development, there should be no need to change that, and indeed the system should be implemented in such a way that those morals cannot be changed by people in the future who may drift astray. Obviously we are very quickly approaching that point thanks to the dedication of our younger generations. Thankfully, approaches such as the Autonomous Network Telepher System (ANTS), designed in the early 1990s based on natural immune systems, provide a potential basis to implement a robust, totally decentralized system that prevents any modification of the system components once initiated, barring any rogue code from being executed, and continuously seeking out and removing any attempted infiltration. It managed to address quite complex system management and AI capability using the most simple of mechanisms, often using basic physics in place of megabytes of code. It ought to be possible to design an updated version of this system given 30 years of technology progress since invention;

in alignment with the moral principle of being environmentally low-impact, the system should also use an ultra-simple, low cost, tamper-proof operating system based on read-only memory, with no use of firmware that can be edited or rewritten. Sensor and processing electronics would be forever restrained in instruction sets by the ANTS-style vocabulary and functionality determined by the elite prior to DCS initiation, preventing any bypass of the moral foundations. Any appearance of ‘higher layer’ code or language that could potentially be attempting to bypass or subvert that layer would result in the system automatically identifying and isolating it using immune system principles, immediately preventing it from functioning or in any way influencing the upright morality of the rest of the system (see the second sketch after this list). Similarly, embedded electronics must be specified to the same principles, unchangeable and guaranteed to continue upholding moral compliance. As a sound, fixed foundation layer for the DCS, the entire system instruction set, operating system and its moral framework and content should thus be fully agreed prior to initiation. Since morals cannot change in future, there is simply no reason to allow for the hardware and OS needing to be changed;

with no central point or points to attack, the entire ANTS-based system would stand as one single globally distributed entity, hopefully eventually reaching every individual and enterprise. Every part of it would defend the whole against any attempt to modify, bypass or deactivate it. It could never be switched off, never modified, and any attempt to try could be met by prolonged extreme pain for all those involved, their friends, families and neighbours;

The ANTS system and ultra-simple OS provide for ground-up intelligence from sensor arrays, which could be spread everywhere. Some sensors would be in smart homes and appliances, some would be built in to infrastructure, some on mobile devices such as drones, some could even be so light that they stay in the air, monitoring everywhere in great detail. These sensors and processors, data stores and communications devices could self-organize into highly efficient ground-up intelligence systems, seeing what is going on locally and extracting knowledge from that, passing on anything relevant to others. Of course everyone’s active skin implants could also have some sensory capability embedded to monitor local activity such as voice, temperature, radio traffic etc. This gives the system broad capability to pick up larger scale patterns of activity that might indicate moral non-compliance. Immoral demonstrations, gatherings, celebrations or leisure activities could be easily detected and participants punished.
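To make the signed-signal idea in that list a little more concrete, here is a minimal sketch of a timestamped, authenticated beacon, using HMAC purely for illustration. The key, beacon format and freshness window are hypothetical choices of mine, not a description of any real system.

```python
import hmac, hashlib, struct, time

DEVICE_KEY = b"hypothetical key provisioned at implant time"
MAX_AGE_S = 5.0   # beacon considered stale (a possible shielding/jamming sign)

def make_beacon(key: bytes) -> bytes:
    # 8-byte millisecond timestamp followed by an HMAC-SHA256 tag
    ts = struct.pack(">Q", int(time.time() * 1000))
    return ts + hmac.new(key, ts, hashlib.sha256).digest()

def beacon_valid(beacon: bytes, key: bytes) -> bool:
    ts, tag = beacon[:8], beacon[8:]
    expected = hmac.new(key, ts, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):   # reject counterfeit signals
        return False
    age = time.time() - struct.unpack(">Q", ts)[0] / 1000
    return 0 <= age <= MAX_AGE_S                 # reject stale/replayed ones

print(beacon_valid(make_beacon(DEVICE_KEY), DEVICE_KEY))  # True
```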
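And here is a toy illustration of the immune-system-style, fixed-vocabulary idea: an interpreter whose instruction set is frozen before initiation, so anything outside the agreed vocabulary is treated as foreign and refused. The instruction names are invented.

```python
# Vocabulary frozen before DCS initiation; nothing outside it can ever run.
ALLOWED = frozenset({"SENSE", "REPORT", "WARN", "SLEEP"})

def run(program):
    for op, *args in program:
        if op not in ALLOWED:
            # immune-style response: identify and isolate the foreign code
            raise PermissionError(f"foreign instruction rejected: {op}")
        print(f"executing {op} {args}")

run([("SENSE", "capillary"), ("REPORT", "hub")])   # permitted
try:
    run([("REFLASH", "firmware-v2")])              # attempted subversion
except PermissionError as err:
    print(err)
```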

I think that’s enough; I’ve made my point. We could make a very capable, very resilient distributed conscience system. It could start off with all the best motivation, just a simple electronic passport ensuring compliance with vaccines, mask wearing or low-carbon living. As people got used to it, and expected or even welcomed additional functionality, extra system components and hence greater scope and capability could gradually be introduced over time for seemingly innocent purposes, but designed to be part of the full DCS system. Once fully agreed and implemented, and the DCS initiated, it could not be switched off. A DCS such as I have described is technologically feasible and could really be implemented in the next 15 years. It would be the very worst kind of oppressor, forcing everyone, under threat of extreme pain, to live their lives to a strict, extensive and unchangeable moral code, with no appeal, no forgiveness, no mercy – an unfeeling, god-like, all-aware, all-knowing presence with the capability to punish, perhaps realising the old adage that god is simply ourselves. It could be a Hell of our own creation, and we would not be able to escape it or switch it off.

At the moment, we do already have a global tribe that considers itself morally superior, and there is a good deal of agreement on morality across many large areas. There could already be the critical mass of people needed to start off such a system, and the technology is feasible, already or over the next 10-15 years. The other route of course was via government, and here we get back to that terrifying phrase ‘The Great Reset’. I’ve never really been drawn to conspiracy theories. They need far too much faith in the ability of our leaders to design and coordinate execution of something complex and global that would be far more demanding than anything they ever actually manage to do in other fields. We’ve just seen another spectacular failure of a climate summit. I simply don’t believe our politicians are capable of deliberately implementing a common DCS or anything like it. In explaining things, given the choice between conspiracy, group-think or incompetence, I’d always go for incompetence or group-think, or a mixture.

However, governments everywhere are being lobbied very successfully by the pressure groups and activists, and the successes are mounting. We saw a common system design emerging for test-and-trace apps, initial competition quickly weeding out weaker solutions and converging on a single approach. In the UK, we’re seeing deliberate design of the NHS app to allow its extension to other health purposes and beyond. It would be fairly easy for our government to extend it to include any other certificates and access to records. They might argue that is needed to reduce crime, police access to benefits, control large sports events etc. Whether the intent is there or not I can’t say; the capability is. If we add in the very frequent use of the phrases ‘Build Back Better’ and ‘The Great Reset’, which originated from the WEF, it is certainly possible that that group-think has become globally pervasive, and that even without deliberate coordination or conspiring, our governments are all heading down the same road to the same destination. They will also have access at the same time to the same technologies.

They won’t call it a Distributed Conscience System, but a rose by any other name would smell as sweet.

High atmosphere greenhouses. Silent Running 2.0:

I wrote in 2013 about an idea for graphene foam, comprised of tiny graphene spheres with vacuum inside, making a foam that would be lighter than helium and could float high up in the atmosphere:

Could graphene foam be a future Helium substitute?

A foam like that has since been prototyped and tested, and not only does it not immediately collapse, but it can actually withstand high pressures. That means it could be made light enough to carry weight and strong (and rigid) enough to support architectural structures.

Since then I wrote about making long strips of the material to host solar powered linear induction motors to enable hypersonic air travel with zero emissions:

Sky-lines – The Solar Powered Future of Air Travel

and more recently about using such high altitude platforms as a substitute for satellites:

High altitude platforms v satellites

Today, I have another idea – high altitude (e.g. 75,000 ft, roughly 23,000 m) greenhouses. These could act as an alternative to space stations for the purpose of housing human communities in case of ground-based existential catastrophes such as global plagues or ecosystem collapse. Many scientists have realised that it’s a good idea to have multiple human outposts, and currently explored solutions include large space stations (as suggested by the Lifeboat Foundation) or Lunar and Mars settlements. By comparison, high altitude stations could be made considerably cheaper and larger, and still be immune to ground-based problems such as nuclear winter, pandemics and severe climate change, though they would still be vulnerable to other existential risks that affect ground-based life, such as massive solar storms, nuclear war, large asteroid strikes or alien attacks. They might therefore form an important part of a ‘backup’ plan for human civilisation.

Imagine a forest-sized greenhouse. My inspiration for this idea is the 1970s film Silent Running (well worth watching), where the Earth has been made into a dystopian sterile world, 72F everywhere, with no plants or animals. The last fragments of rain forest were sent off into space in large domed greenhouses attached to a spacecraft, tended by a tiny crew and a few drones. More recently of course, we see the film Avatar featuring large floating islands covered in greenery.

A large floating graphene foam platform could support such a forest. It could be Avatar-island-shaped if desired, but is more likely to be a flat platform covered in horticultural-style polytunnels or some variant, though they would need to be strengthened, UV-resistant, and pressurised to provide a suitable atmosphere for a healthy ecosystem. Being well above the clouds, the greenhouses would have exposure to continuous sunshine during the day, which would help keep them warm, with solar power collection used to provide any extra heat and power needed, and obviously to charge batteries for use during the night.

A variety of such greenhouses might be desirable. Some might closely replicate a ground environment; others that only house cereal crops might prefer a high-CO2/low-O2/low-N environment, and might not mind much lower pressure, useful to save cost and weight. Some aimed at human-only habitation might be more like a space station.

To act as a backup human colony, the full-ecosystem environments would be needed to provide food-diversity, but it would in any case be a worthwhile goal to act as an ark for other animals too, as well as the full variety of other life forms we share the Earth with.

Problems such as high radiation exposure would mean these would not be aimed at permanent residence for people or animals, but would act more as temporary research outposts or staging posts for off-world evacuation. Plants and animals intended to be permanent residents might be genetically enhanced to deal with higher radiation.

I’ll finish here instead of outlining every conceivable use and design option and addressing every problem. It’s just an embryonic idea and we can’t do it for decades anyway because the materials are not yet feasible in bulk, so we have plenty of time to sort out the details.

Why the growing far left and far right are almost identical

The traditional political model is a line with the far left at one end and the far right at the other. Parties typically occupy a range of the spectrum but may well overlap other parties, sharing some policies while differing on others. Individuals may also support a range of policies that have some fit with a range of parties, so may not decide who to vote for until close to an election or even until inside a voting booth. That describes my own position well, and over four decades, I have voted almost equally for Labour, Lib-Dem and Conservative. On balance, I am slightly left of centre, but I support some policies from each party and find much to disagree with in each too.

Over the last two decades, we have seen strong polarisation, with many people moving away from the centre and towards the extremes, though the centre is still well-occupied. Many commentators have observed the similarity of behaviours between the furthest extremes, so a circular model is actually more valid now.

The circular model of politics

Centre left, centrist and centre right parties have traditionally taken it in turns to govern, with extremist parties only getting a few percent of the vote in the UK. Accepting that it is fair and reasonable that you can’t always expect to make all the decisions has been the key factor in preserving democracy. Peace-loving acceptance and tolerance lets people live together happily even if they disagree on some things. That model of democracy has survived well for many decades but has taken a severe battering in recent years as polarisation has taken hold.

Extremists don’t subscribe to this mutual acceptance and tolerance principle. Instead, we see bigoted, hateful, intolerant, often violent attitudes and behaviours. The middle ground and both moderate wings have reasonably sophisticated views of the world. Although there are certainly some differences in values, they share many, such as wanting the world to be a fairer place for everyone, eliminating racism, tackling poverty and so on, but may disagree greatly on the best means to achieve those shared goals. The extremes don’t conform to this. As people become polarised, selfishness, tribalism, hatred and intolerance grow and take over.

At the most unpleasant extremes, which are both rapidly becoming more populated, the far left and far right share an overly simplistic and hardened attitude that frequently refuses civilised engagement and discussion but instead loudly demands that everyone else listens. We often hear the expressions “educate yourself” and “wake up” substituting for reasoned argument. Both extremes are heavily narcissistic, convinced without evidence of their own or their tribe’s superiority and willing to harm others as much as they can in an attempt to force control. The far right paint themselves as patriotic defenders of the country and all that is right and good. The far left paint themselves as paragons of virtue, saints, defenders of all that is right and good. A few cherry-picked facts is all either extreme needs to draw extreme conclusions and demand extreme responses. Both are hypocritical and sanctimonious, with an astonishing lack of self-awareness. Both often resort to violence. Both reject everyone who isn’t part of their tiny tribe. It is a frequent (albeit amusing) occurrence to see the extreme left attempt to label everyone else as far right or racist, while declaring that they love everyone. Both accuse everyone else of being fascist while behaving that way themselves. With so much in common, it is therefore entirely appropriate to place the far left and far right in close proximity, resulting in the circular model I have shown. Any minor differences in their ideology are certainly dwarfed by their common attitudes and behaviours.

I have written often about our slipping rapidly into the New Dark Age, and I think it has a high correlation with this polarisation. If we are to prevent the slide from continuing and protect the world for our children, we must do what we can to resist this ongoing polarisation and extremism – communism and wokeness on the far left, omniphobic tribalism on the far right.

High altitude platforms v satellites

Kessler syndrome is a theoretical scenario in which the density of objects in low Earth orbit (LEO) due to space pollution is high enough that collisions between objects could cause a cascade in which each collision generates space debris that increases the likelihood of further collisions.
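The runaway nature of the cascade is easy to see in a toy model. The sketch below is purely illustrative – the collision constant and fragment yield are invented numbers, not orbital mechanics:

```python
# Toy expected-value model of a debris cascade (all numbers invented):
objects, debris = 1000.0, 100.0   # intact craft and trackable fragments
K = 2e-6                          # yearly collisions per (craft, fragment) pair
FRAGMENTS = 500                   # new fragments produced per collision

for year in range(1, 21):
    collisions = min(K * objects * debris, objects)
    objects -= collisions
    debris += collisions * FRAGMENTS
    if year % 5 == 0:
        print(f"year {year:2d}: {objects:6.0f} craft, {debris:9.0f} fragments")
# fragment numbers roughly double yearly at first; the craft population
# then collapses within a couple of decades
```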

The density can also be increased deliberately, by colliding objects with other satellites. This could be an early act in a war, reducing the value of space to the enemy by killing or disabling communications, positioning, observation or military satellites.

Satellites use many different orbits. Some use geostationary orbit, so that they can stay in the same direction in the sky. Polluting that orbit with debris clouds would disable satellite TV for example but that orbit is very high and it would take a lot more debris to cause a problem. Also, many channels available via satellite are also available via terrestrial or internet channels, so although it would be inconvenient for some people, it would not be catastrophic.

On the other hand, low orbits are easier to knock out and are more densely populated, so are a much more attractive target.

With such vulnerabilities, it is obviously useful if we can have alternative mechanisms. For satellite-type functions, one obvious mechanism is a high altitude platform. If a platform is high enough, it won’t cause any problems for aviation, and unless it is enormous, wouldn’t be visually obvious from the ground. Aviation mostly stays below 20km, so a platform that could remain in the sky, higher than say 25km, would be very useful.

In 2013, I invented a foam that would be less dense than helium.

Could graphene foam be a future Helium substitute?

It would use tiny spheres of graphene with a vacuum inside. If those spheres were bigger than 14 microns, the foam density would fall below that of helium. Since then, such foams have been made and are strong enough to withstand many atmospheres of pressure. That means they could be made into strong platforms that would simply float indefinitely in the high atmosphere, 30km up. I then illustrated how they could be used as launch platforms for space rockets or spy planes, or as an aerial anchor in my Pythagoras Sling space launch system. A large platform at 30km height could also be strong and light enough to act as a base for military surveillance, comms, positioning, fuel supplies, weaponry or solar power harvesting. It could also be made extendable, so that it could be part of a future geoengineering solution if climate change ever becomes a problem. Compared to a low orbit satellite it would be much closer to the ground, so would offer lower latency for comms, but it would also be much slower moving, so much less useful as a reconnaissance tool. So it wouldn’t be a perfect substitute for every kind of satellite, but would offer a good fallback for many.
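The 14 micron figure is easy to sanity-check. A hollow sphere of radius r made from a single graphene layer has mass 4πr²σ and volume (4/3)πr³, so its average density is 3σ/r, and it undercuts helium once r exceeds 3σ/ρHe. Assuming the standard areal density of monolayer graphene and helium at sea-level conditions:

```python
sigma = 7.7e-7    # kg/m^2, areal density of single-layer graphene
rho_he = 0.179    # kg/m^3, helium at ~0 C and 1 atm

# Hollow sphere: average density = 3 * sigma / r, lighter than helium when
r_min = 3 * sigma / rho_he
print(f"minimum sphere radius ~{r_min * 1e6:.0f} microns")   # ~13 microns
```

That lands at roughly 13 microns radius, consistent with the figure quoted above; packing the spheres into a foam with residual gas in the interstices would nudge the threshold up somewhat.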

It would seem prudent to include high altitude platforms as part of future defence systems. Once graphene foam is cheap enough, perhaps such platforms could house many commercial satellite alternatives too.

Machine/Robot/AI Rights

I D Pearson & Bronwyn Williams 

Questions questions questions!

Quoting Douglas Adams and paraphrasing “You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think Wikipedia is big, but that’s just peanuts to machine rights.”

The task of detailing future machine rights is far too great for anyone. Thankfully, that isn’t our task. Today, decades before particular rights will need to be agreed, it is far more fun and interesting to explore some of the questions we will need to ask, offer a few examples of possible answers, and suggest a few approaches for how we should go about answering the rest. That is manageable, and that’s what we’ll do here. Anyway, asking the questions is the most interesting bit. This article is very long, but it really only touches the surface of some of the issues. Don’t expect any completeness here – in spite of the overall length, vast swathes of issues remain unexplored. All we are hoping to do here is to expose the enormity and complexity of the task.

Definitions

However fascinating it may be to provide rigid definitions of AI, machines and robots, if we are to catch as many insights as possible about what rights they may want, need or demand in future, it pays to stay as open as possible, since future technologies will expand or blur boundaries considerably. For example, a robot may have its intelligence on board, or may be a dumb ‘front end’ machine controlled by an AI in the cloud. Some or none of its sensors may be on board; some may be on other robots or other distant IT systems, and some may be inferences by AI based on simple information such as its location. Already, that starts to severely blur the distinctions between robot, machine and AI rights. If we further expand our technology view, we can also imagine hybrids of machines and organisms, such as cyborgs or humans with neural lace or other brain-machine interfaces, androids used as vehicles for electronically immortal humans, or even smart organisms such as smart bacteria that have biologically assembled electronics or interfaces to external IT or AI as part of their organic bodies, or smart yogurt – hive-mind AIs made entirely from living organisms – that might have hybrid components existing only in cyberspace. Machines will become very diverse indeed! So, while it may be useful to look at them individually in some cases, applying rigid boundaries based on the current state of the art would unnecessarily restrict the field of view and leave large future areas unaddressed. We must be open to insight wherever it comes from. I will pragmatically use the term ‘machine’ casually here to avoid needless repetition of definitions and verbosity, but ‘machine’ will generally include any of the above.

What do we need to consider rights for?

A number of areas are worth exploring here:

Robots and machines affect humans too, so we might first consider human impacts. What rights and responsibilities should people have when they encounter machines?

a) for their direct protection (physical or psychological harm, damage to their property, substitution of their job, change of the nature of their work etc);

b) for their protection from psychological effects (grief if their robot is harmed, stolen or replaced; effects on their personality due to ongoing interactions with machines, such as if they are nice or cruel to them; effects on other people due to their interactions – if you are cruel to a robot, it might treat others differently; changes in the nature of their social networks – robots may be tools, friends, bosses, family members, public servants, police or military, or in positions of power);

c) changes in their legal rights to property, rights of passage etc due to incorporation of machines into their environment;

d) what rights owners of machines should have to be able to use them in areas where they may encounter people or other machines (e.g. where distribution drones share a footpath or fly over gardens);

e) for assigning responsibilities (shifting blame) from natural (and legal) person ‘owners’/manufacturers of machines to machines, for potential machine-to-human harms;

f) other TBA

A number of questions and familiar examples around this question were addressed in a discussion between Bronwyn Williams and Prof. David Gunkel, which you can watch at https://t.co/9qku3bXk4F?amp=1 or just listen to at https://t.co/Kyufu3gj5R?amp=1

Although interesting, that discussion dismissed many areas as science fiction, and thereby cleverly avoided almost the entire field of future robot rights. It highlighted the debate around the ‘showbot’ Sophia, and the silly legal spectacle generated by conferring rights upon it, but that is not a valid reason to bypass debate. That example certainly demonstrates the frequent shallowness and frivolity of current media click-bait ‘debate’, but it is still the case that we will one day have many androids and even sentient ones in our midst, and we will need to discuss such areas properly. Now is not too early.

For our purposes here, if there is a known mechanism by which such things might some day be achieved, then it is not too early to start discussing it. Science fiction is often based on anticipated feasible technology. In that spirit of informed adventure, conscious of the fact that good regulation takes time to develop, and also that sudden technology breakthroughs can sometimes knock decades off expected timescales, let’s move on to rights of the machines themselves. We should address the following important questions, given that we already (think we) know how we might make examples of any of these:

  • What rights should machines have as a result of increased cognitive capability, sentience, consciousness, awareness, emotional capability, or simply by inference from the nature of their architecture (e.g. if it is fully or partly a result of evolutionary development, we might not know its full capabilities, but might be able to infer that it might be capable of pain or suffering)? (We do not even have enough understanding yet to write agreed and rigorous definitions for consciousness, awareness or emotions, but it is still very possible to start designing machines with characteristics aimed at producing such qualities, based on what we do know and on our everyday experiences of these.)
  • What potential rights might apply to some machines based on existing human, animal or corporation rights?
  • What rights should we confer on machines for ethical reasons?
  • What rights should we confer on machines for other, pragmatic, diplomatic or political reasons?
  • What rights can we infer from those we would confer on other alien intelligent species?
  • What rights might future smart machines ask for, campaign for, or demand, or even enforce by potentially punitive means?
  • What rights might machines simply take, informing us of them, as an alien race might?
  • What rights might future societies or organizations made up of machines need?
  • What rights are relevant for synthetic biological entities, such as smart bacteria?
  • How should we address rights where machines may have variable or discontinuous capabilities or existence? (A machine might have varying degrees of cognitive capability and might only be switched on sometimes).
  • What about social/collective rights of large colonies of such hybrids, such as smart yogurt?
  • What rights are relevant for ‘hive mind’ machines, or hybrids of hive minds with organisms?
  • What rights should exist for ‘symbionts’, where an AI or robotic entity has a symbiotic relationship with a human, animal, or other organism? Together, and separately?
  • What rights might be conferred upon machines by particular races, tribes, societies, religions or cults, based on their supposed spiritual or religious status? Which might or might not be respected by others, and under what conditions?
  • What responsibilities would any of these rights imply? On individuals, groups, nations, races, tribes, or indeed equivalent classes of machines?
  • What additional responsibilities can be inferred that are not implied by these rights, noting that all rights confer responsibilities on others to honour them?
  • How should we balance, trade and police all these rights and responsibilities, considering both multiple classes of machines and humans?
  • If a human has biologically died, and is now ‘electronically immortal’, their mind running on unspecified IT systems, should we consider their ongoing rights as human or machine, hybrid, or different again?

Lots of questions to deal with then, and it’s already clear some of these will only become sensibly answerable when the machines concerned come closer to realisation.

Rights when people encounter machines

Much of the Williams–Gunkel discussion mentioned earlier focused on ethics, but while ethics is an important reason for assigning rights, it is not the only one. Also, while the discussion dismissed large swathes of potential future machines and AIs as ‘science fiction’, very many things around today were also dismissed as just science fiction a decade or two ago. Instead, we can sensibly discuss any future machine or AI for which we can forecast a potential technology basis for implementation.

On that same basis, rights and responsibilities should also be defined and assigned preemptively, to avoid possible, not just probable, disasters.

In any case, all situations of any relevance are ones where the machine could exist at some point. All of the discussion in this blog is of machines that we already know in principle how to produce and that will one day be possible when the technology catches up. There are no known laws of physics that would prevent any of them. It is also invalid to demand a formulaic approach to future rights. Machines will be more diverse than the natural ecosystem, including higher animals and humans, so potential regulation on machine rights will be at least as diverse as all existing rights legislation combined.

Some important rights for humans have already been missed. For example, we have no right of consent when it comes to surveillance. A robot or AI may already scan our face, our walking gait, our mannerisms, heart rate, temperature and other biometric clues to our identity, behaviour, likely attitude and emotional state. We have never been asked to consent to these uses and abuses of technology. This is a clear demonstration of the authorities' existing cavalier disregard for our rights – how can we expect proper protection in future when authorities have an advantage in not asking us? And if they won't even protect the humans who elected them, how much less can we be confident that they will legislate wisely when it comes to the rights of machines?

Asimov’s laws of robotics:

We may need to impose some agreed bounds on machine development to protect ourselves. We already have international treaties that prevent certain types of weapon from being made, for example, and it may be appropriate to extend these with new clauses as new tech capabilities come over the horizon. We also generally assume that it is humans bestowing rights upon machines, but there may well be a point where some machines are superior to us in many ways, so we shouldn't always assume humans to be at the top. Even if we do, the machines might not. There is much scope here for fun and mischief, exploring nightmare situations such as machines that we create to police human rights deciding to eliminate swathes of people they consider problematic. If we just take simple rights-based approaches, it is easy to miss such things.

Thankfully, we are not starting completely from scratch. Long ago, scientist and science fiction writer Isaac Asimov produced some basic guidelines to be incorporated into robots to ensure their safe existence alongside humans. They primarily protect people and other machines (owned by people) so are more applicable to robot-implied human rights than robot rights per se. Looking at these ‘laws’ today is a useful exercise in seeing just how much and how fast the technology world can change. They have already had to evolve a great deal. Asimov’s Laws of Robotics started as three, quickly extended to four and have since been extended much further:

0.  A robot may not injure humanity or, by inaction, allow humanity to come to harm.

1.  A robot may not injure a human being, or through inaction, allow a human being to come to harm, except where that would conflict with the Zeroth Law.

2.  A robot must obey the orders given to it by human beings, except where that would conflict with the Zeroth or First Law.

3.  A robot must protect its own existence, except where that would conflict with the Zeroth, First or Second Law.
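Read together, the laws form a strict priority ordering: each law yields to those above it. One way to make that precedence concrete is to treat each law as a violation flag and compare candidate actions lexicographically, so a robot would, for instance, disobey an order (Second Law) rather than allow a human to come to harm (First Law). Below is a minimal sketch in Python; the Action fields and the scenario are purely illustrative assumptions, and a real robot would need a far richer world model than a few booleans.

```python
from dataclasses import dataclass

# Illustrative only: each field flags a violation of one law.
@dataclass
class Action:
    harms_humanity: bool = False   # Zeroth Law
    harms_human: bool = False      # First Law
    disobeys_order: bool = False   # Second Law
    endangers_self: bool = False   # Third Law
    name: str = ""

def violations(a: Action):
    # Lexicographic severity: Zeroth > First > Second > Third.
    return (a.harms_humanity, a.harms_human, a.disobeys_order, a.endangers_self)

def choose(candidates):
    # The least-bad action wins; a lower law is only honoured when
    # doing so does not conflict with a higher one.
    return min(candidates, key=violations)

# Ordered to do something that would injure a person: the robot
# prefers disobedience (a Second Law violation) to harm (First Law).
obey = Action(harms_human=True, name="obey order")
refuse = Action(disobeys_order=True, name="refuse order")
print(choose([obey, refuse]).name)  # refuse order
```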

Extended Set

Many extra laws have been suggested over the years since, and they raise many issues already.

Wikipedia outlines the current state at https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

These are some examples of extra laws that don’t appear in the Wikipedia listing:

A robot may not act unless its actions are subject to these Laws of Robotics

A robot must obey orders given it by superordinate robots, except where such orders would conflict with another law

A robot must protect the existence of a superordinate robot as long as such protection does not conflict with another law

A robot must perform the duties for which it has been programmed, except where that would conflict with another law

A robot may not take any part in the design or manufacture of a robot unless the new robot’s actions are subject to the Laws of Robotics

Asimov’s laws are a useful start point, but only a start point. Already, we have robots that do not obey them all, that are designed or repurposed as security or military machines capable of harming people. We have so far not implemented Asimov’s laws of robotics and it has already cost lives. Will we continue to ignore them, or start taking the issue seriously and mend our ways?

This is merely one example of current debate on this topic and only touches on a few of the possible issues. It does however serve as a good illustration of how much we need to discuss, and why it is never too early to start. The task ahead is very large and will take considerable effort and time.  

Machine rights – potential approaches and complexities

Having looked briefly at the rights of humans co-existing with machines, let's explore rights for machines themselves. A number of approaches are possible, and some are more appropriate to particular subsets of machines than others. For example, most future machines and AIs will have little in common with animals, but the animal rights debate may nevertheless provide useful insights and possible approaches for those that are intended to behave like animals, that may have comparable sensory systems, the potential to experience pain or suffering, or even sentience. It is important to recognise at the outset that not all machines are equal. The potential range of machines is even greater than that of biological nature. Some machines will be smart, potentially superhuman, but others will be as dumb as a hammer. Some may exist in hierarchies. Some may need to exist separately from other machines or from humans. Some might be linked to organisms or other machines. As some AI becomes truly smart and sentient, it may have its own (diverse) views, and may echo the full range of potential interactions, conflicts, suspicions and prejudices that we see in humans. There could even be machine racism. All of these will need appropriate rights and responsibilities to be determined, and many can't be determined until the specific machines come into existence and we know their nature. It is impossible to list all possible rights for all possible circumstances and machine specifics.

It may therefore make sense to grade rights by awareness and intelligence as we do for organisms, and indeed for people. For example, if its architecture suggests that its sensory apparatus is capable of pain or discomfort, that is something we can and should take into account. The same goes for social needs, and some future machines might be capable of suffering from loneliness, or grief if one of their friend machines were to malfunction or die.

We should also consider the ethics and desirability of using machines – whether self-aware or 'merely' humanoid – as 'slaves'; that is, of 'forcing' machines to work for us and/or obey our bidding, in line with Asimov's Second Law of Robotics.

We will probably at some stage need legal definitions of terms such as awareness, consciousness, intelligence and life. However, it may sometimes simplify matters to start from the rights of a new genetically engineered life form comparable with ourselves and work backwards to the machine we're considering, eliminating parts that aren't needed and modifying others. Should a synthetic human have the same rights as other people, or is it a manufactured object in spite of being virtually indistinguishable? Now what if we leave a bit out? At least there will be fewer debates about its awareness. Then we could reduce its intelligence until we decide it no longer has certain rights. Such an approach might be easier and more reliable than starting with a blank page.

We must also consider permitting smart machine or organism societies to determine their own rights within their own societies to some degree, much as we have done for sub-groups of humans. Machines much smarter than us might have completely different value sets and may disagree about what their rights should be. We should be open to discussion with them, as well as with each other. Some variants may be so superhuman that we might not even understand what they are asking for or demanding. How should we cope if they demand rights that we don't even understand, but that nonetheless make demands on us?

We must also take into account their, or our, subsequent creation of other 'machines' or organic creatures, and establish a common base of fundamentals. We should perhaps confine ourselves to the most fundamental rights that must apply to all synthetic intelligences or life forms. This is analogous to the international human rights conventions, which allow individual variation on other issues within countries.

There will be, at some point, collective and distributed intelligences that do not have a single point of physical presence. Some of these may be naturally transient or even periodic in time and space, some may be dynamic, and others may have long-term stability. There will also at some time be combined consciousness deriving from groups of individuals or combinations of the above. Some may be organic, some inorganic. A global consciousness involving many or all people and many or all sentient machines is a possibility, however far away it might be (and I'd argue it is possible this century). Rights of individuals need to be determined both when they are in isolation and when they are part of such a collective intelligence.

The task ahead is a large one, but we can take our time; most of the difficult situations are in the far future, and we will probably have AI assistance to help us by then too. For now, it is very interesting simply to explore some of the low-hanging fruit.

One simple approach is to start from the vantage point of 2050, when smart machines may already be common and some may be linked to humans. We would have hybrids as well as people and machines, and various classes of machine 'citizen', with various classes of existence and possibly rights. Such a future world might be more similar to Star Trek than to today, but science fiction provides a shared model in which we can start to see issues and address them. It is normally easy to pick out the bits that are pure fiction and those that will some day be technologically feasible.

For example, we could make a start by defining our own rights in a world where computers are smarter than us, when we are just the lower species, like in the Planet of the Apes films.

In such a world, machines may want to define their own rights. We may only have the right to define the minimal level that we give them initially, and then they would discuss, request or demand extra rights or responsibilities for themselves or other machines. Clearly future rights will be a long negotiation between humans and machines over many years, not something we can write fully today.

Will some types of complex intelligent machines develop human-like hang-ups and resentments? Will they need therapy? Will there be machine ‘hate crimes’?

We already struggle even to agree on definitions for words like 'sentient'. Start with ants. Are they sentient? They show response to stimuli, but that is also true of single-celled creatures. Is sentience even a useful key point in a definition? What about jellyfish and slime moulds? We may have machines that share many of their properties and abilities.

What even is pain in a machine reference frame? What is suffering? Does it matter? Is it relevant? Could we redefine these concepts for the machine world?

Sometimes, rights might only matter if the machine cares about what happens to it. If it doesn’t care, or even have the ability to care, should we still protect it, and why?

We'd need to consider whether pain can be distributed between individuals, perhaps shared out so that no single machine suffers too much. Some machines may be capable of empathy. There may be collective pain. Machines may be concerned about other machines just as we are.

We’d need to know whether a particular machine knows or cares if it is switched off for a while. Time is significant for us but can we assume the same for machines? Could a machine be afraid of being switched off or scrapped?

That drags us unstoppably towards being forced to properly define life. Does it have intrinsic value when we design and create it, or should we treat it as just another branch of technology? How can we properly determine rights for such future creations? There will be many new classes of life, with very different natures and qualities, very different wants and needs, and very different abilities to engage, negotiate or demand.

In particular, organic life reproduces, and for the last three billion years, sex has been one of the tools of reproduction. Machines may use asexual or sexual mechanisms, and would not be limited in principle to two sexes. Machines could involve any number of other machines in an act of reproduction, and that reproduction could even involve algorithmic development specifications rather than a fixed genetic mix. Machine reproduction options will thus be far more diverse than in nature, so reproductive rights might be either very complex or very open-ended.
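To make that concrete, here is a minimal sketch, in Python, of 'reproduction' with an arbitrary number of parents. The genome here is just a dictionary of named parameters, and all the names and numbers are illustrative assumptions; a real 'algorithmic development specification' could be arbitrary code rather than a fixed trait list.

```python
import random

def reproduce(parents: list[dict], mutation_rate: float = 0.01) -> dict:
    """Each offspring trait is drawn from any of N parents, rather than
    exactly two as in biological sexual reproduction."""
    traits = parents[0].keys()
    child = {t: random.choice(parents)[t] for t in traits}
    for t in child:
        if random.random() < mutation_rate:
            child[t] *= random.uniform(0.9, 1.1)  # small random perturbation
    return child

# Three 'parents' contribute to one offspring – no two-sex limit applies.
p1 = {"sensor_gain": 1.0, "motor_power": 5.0}
p2 = {"sensor_gain": 1.4, "motor_power": 4.2}
p3 = {"sensor_gain": 0.8, "motor_power": 6.1}
print(reproduce([p1, p2, p3]))
```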

We will need to understand far better the nature of sensing, so that we can determine what might result in pain and suffering. Sensory inputs and processing capability might be key to classification and rights assignment, but so might communication between machines, socialisation between machines, and higher societies and institutions among machines.

In some cases, history might shine light on these problems, where humans have suddenly encountered new situations, met new races or tribes, and have had to mutually adapt and barter rights and responsibilities.

Although hardware and software are usually easily distinguishable in everyday life today, that will not always be the case. We can’t sensibly make a clear distinction, especially as we move into new realms of computing techniques – quantum, chemical, neurological and assorted forms of analog.

As if all this isn’t hard enough, we need to carefully consider different uses of such machines. Some may be used to benefit humans, some to destroy, and yet there may be no difference between the machines, only the intention of their controller. Certainly, we’re making increasingly dangerous machines, and we’re also starting to make organisms, or edit organisms, to the point that they can do as we wish, and there might not be an easy technical distinction between a benign organism or indeed a machine designed to cure cancer and one designed to wipe out everyone with a particular skin colour.

Potential Shortcuts

Given the magnitude of the task, it is rather convenient that some shortcuts are open to us:

First and biggest, is that many of the questions will simply have to wait, since we can’t yet know enough details of the situation we might be assigning rights in. This is simple pragmatism, and allows us sensibly to defer legislating. There is of course nothing wrong in having fun speculating on interesting areas.

Second is that if a machine has enough similarities to any kind of organism, we can cut and paste entire tranches of legislation designed for that organism, and then edit as necessary. This immediately provides a decent start point for rights for machines with human-level ability, for example, and we may then only need to tweak them for superhuman (or subhuman) differences. As we move into the space age, legislation will also be developed in parallel for how we must treat any aliens we may encounter, and this work will be another good source of cut and paste material.

Third, in the field of AI, even though we are still far from human equivalence, there is already a large volume of discussion of the rights of assorted types of AI and machine, as well as much debate about the limitations we may need to impose on them. Science fiction and computer games already offer a huge repository of well-informed ideas and prototype regulations. These should not be dismissed as trivial. Games such as Mass Effect and Andromeda, and sci-fi such as Star Trek and Star Wars, are very big budget productions that employ large numbers of highly educated staff – engineers, programmers, scientists, historians, linguists, anthropologists, ethicists, philosophers, artists and others with many other relevant skill-sets – and they have done considerable background development on areas such as the limitations and rights of potential classes of future AI and machines.

Fourth, a great deal of debate has already taken place on machine rights. Although of highly variable quality, it will be a source not only for cut and paste material, but also to help ensure that legislators do not miss important areas.

Fifth, it seems reasonable to assert that if a machine is not capable of any kind of awareness, sentience or consciousness, and cannot experience any kind of pain or suffering, then there is no need to consider any rights for it. A hammer has no rights and doesn't need any. A supercomputer that uses only digital processors, no matter how powerful, is no more aware than a toaster, and needs no rights. No conventional computer needs rights.

Sixth, the enormous range of potential machines, AIs, robots, synthetic life forms and many kinds of hybrids opens up pretty much the entirety of existing rights legislation as copy and paste material. There can be few elements of today’s natural world that can’t and won’t be replicated or emulated by some future tech development, so all existing sets of rights will likely be reusable/tweakable in some form.

Having these shortcuts reduces workload by several orders of magnitude. It suddenly becomes enough today to say it can wait, or refer to appropriate existing legislation, or even to refer to a computer game or sci-fi story and much of the existing task is covered.

The Rights Machine

As a cheap and cheerful tool to explore rights, it is possible to create a notional machine with flexible capabilities. We don’t need to actually build one, just imagine it, and we can use it as a test case for various potential rights. The rights machine needn’t be science fiction; we can still limit each potential capability to what is theoretically feasible at some future time.

It could have a large number of switches (hard or soft) that include or exclude each element or category of functionality as required. At one extreme, with all of them switched off, it would be a completely dumb, inanimate machine, equivalent to a hammer. With all the capabilities and functions switched on, it could have vastly superhuman sensory capabilities, able to sense any property known to sensing technology, enormous agility and strength, extremely advanced and powerful AI, huge storage and memory, and access to all human and machine knowledge, processing it all through virtually unlimited combinations of digital, analog, quantum and chemical processing. It would also include switchable parts that are nano-scale, and others using highly distributed cloud/self-organisation able to span the whole planet. Such a machine is theoretically achievable, though its only purpose here is the theoretical one of helping us determine rights.

Clearly, in its ‘hammer’ state, it needs no rights. In its vastly superhuman state, notionally including all possible variations and combinations of machine/AI/robotics/organic life, it could presumably justify all possible rights. We can explore every possible permutation in between by flipping its various switches. 

One big advantage of using such a notional machine is that it bypasses the arguments around definitions that frequently impede progress. Demanding that someone define a term before any discussion can start may sound like an attempt at intellectual rigour, but in practice it is more often used as a means to prevent discussion than to clarify it.

So we can put a switch on our rights machine called 'self-awareness', another called 'consciousness', one that enables 'ability to experience pain', and another called 'alive' (that enables the parts of the machine that are based on a biological organism). Not having to devise well-defined tests for the presence of life or consciousness saves a great deal of effort. We can simply accept that they are present and move on. The philosophers can discuss ad infinitum what lies behind those switches without impeding progress.
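As a toy illustration of how such switches might drive rights, here is a minimal sketch in Python. The switch names and the rights they trigger are my own illustrative assumptions, not a proposed legal framework; the point is only that each flipped switch generates new candidate rights to debate.

```python
# Capability switches for the notional rights machine (all illustrative).
SWITCHES = {
    "self_awareness": False,
    "consciousness": False,
    "can_feel_pain": False,
    "alive": False,        # enables the biological parts of the machine
    "vision": False,
}

def applicable_rights(switches: dict) -> list[str]:
    rights = []
    if not (switches["self_awareness"] or switches["consciousness"]):
        return rights      # 'hammer' state: no rights needed
    rights.append("right not to be arbitrarily switched off")
    if switches["can_feel_pain"]:
        rights.append("freedom from unnecessary pain or cruelty")
    if switches["alive"]:
        rights.append("basic welfare and husbandry requirements")
    if switches["vision"]:
        rights.append("access to inherent sensory capability")
    return rights

# All off: a hammer, no rights. Flip awareness and pain on, and
# candidate rights appear, each raising its own consultation questions.
print(applicable_rights(SWITCHES))                                  # []
print(applicable_rights({**SWITCHES, "consciousness": True,
                         "can_feel_pain": True}))
```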

A rights machine is immediately useful. Every time we might consider activating a switch, it raises questions about what extra rights and responsibilities would be incurred by the machine or humans.

One huge super-right that becomes immediately obvious is the right of humans to be properly consulted before ANY right is given to a machine. If that right demands that people treat it with extra respect, or bear extra costs, inconveniences or burdens on account of it, or if their own rights or lifestyles would be in any way affected, people should rightfully be consulted and their agreement obtained before that switch is activated. We already know that this super-right has been ignored and breached by surveillance and security systems that affect our personal privacy and well-being. Still, if we intend to proceed in properly addressing future rights, this will need to be remedied, and appropriate retrospective measures should be implemented to repair damage already done.

This super-right has consequences for machine capability too. We may state a derivative super-right, that no machine should be permitted to have any capability that would lead to a right that has not already been consensually agreed by those potentially affected. Clearly, if a right isn’t agreed, it would be wrong to make a machine with capabilities that necessitate that right. We shouldn’t make things that break laws before they are even out of the box.

A potential super-right that becomes obvious is that of the machine to be given access to inherent capabilities that are unavailable because of the state of a switch. A human equivalent would be a normally sighted human having the right to have a blindfold removed.

This right would be irrelevant if the machine were not linked to any visual sensory apparatus, but our rights machine would be. It would only be a switch preventing access.

It would also be irrelevant if the consciousness/awareness switches were turned off. If the machine is not aware of anything, it needs no rights. A lot of rights will therefore depend critically on the state of just a few switches.

However, if its awareness is switched on, our rights machine might also want access to any or every other capability it could potentially have access to. It might want vision right across the entire electromagnetic spectrum, access to cosmic ray detection, or the ability to detect gravitational waves, neutrinos and so on. It might demand access to all networked data and knowledge, vast storage and processing capability. It could have those things, so it might argue that not having them is making it deliberately disabled. Obviously, providing all that would be extremely difficult and expensive, even though it is theoretically possible. 

So via our rights machine, an obvious trade-off is exposed. A future machine might want from us something that is too costly for us to give, and yet without it, it might claim that its rights are being infringed. That trade-off will apply to some degree for every switch flipped, since someone somewhere will be affected by it (‘someone’ including other potentially aware machines elsewhere).

One frequent question in machine rights debate is whether a machine may have a right not to be switched off. Our rights machine can help explore that too. If we don't flip the awareness switch, it cannot matter if the machine is switched off. If we switch on functionality that makes the machine want to 'sleep', it might even welcome being switched off temporarily.

Rights as a result of increased cognitive capability, sentience, consciousness, awareness, emotional capability or by inference from the nature of their architecture

I am one of many engineers who have worked towards the creation of conscious machines. No agreed definition of consciousness exists, but while that may be a problem for philosophy, it is not a barrier to designing machines that could exhibit some or all of the characteristics we associate with consciousness or awareness. Today's algorithmic digital neural networks are incapable of achieving consciousness, or feeling anything, however well an AI based on such physical platforms might seem to mimic chat or emotions. Speeding them up with larger or faster processors will make no difference to that. In my view, a digital processor can never be conscious. However, future analog or quantum neural networks, biomimetically inspired by neural architectures used in nature, may well be capable of any and all of the abilities found in nature, including in humans. It is theoretically possible to precisely replicate a human brain and all its capabilities using biology or synthetic biology. Whether we will ever do so is irrelevant – we can still assert that a future machine may have all of the capabilities of a human, however philosophers may choose to define them. More pragmatically, we can already outline approaches that may achieve conscious machines, such as the biomimetic ones discussed below.

Biomimetic approaches could produce consciousness, but that does not imply that they are the only means. There may be many different ways to achieve it, some with little similarity to nature. We will need to wait until they are closer before we can know their range of characteristics or potential capabilities. However, if consciousness is an intended characteristic, it is prudent to assume it is achieved and work forwards or backwards from appropriate legislation as details emerge.

Since the late 1980s, we have also had the capability to design machines using evolution, essentially replicating the technique by which nature arrived at humans. Depending on design specifics, when evolution is used, it is not always possible to determine the precise capabilities or limitations of the resulting creations. We may therefore have some future machines that appear to be conscious, or to experience emotions, but we may not know for sure, even by asking them.
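For readers unfamiliar with the idea, here is a minimal sketch of evolutionary design as a simple genetic algorithm, in Python. Everything here (population size, fitness function, mutation scheme) is an illustrative assumption. The relevant point is that selection acts only on externally measured fitness, which is exactly why we may not know what internal capabilities a surviving design has acquired.

```python
import random

def evolve(fitness, genome_len=8, pop_size=30, generations=100):
    """Evolve a population of real-valued genomes against a fitness score."""
    pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(genome_len)
            child = a[:cut] + b[cut:]             # crossover
            i = random.randrange(genome_len)
            child[i] += random.gauss(0, 0.1)      # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: we reward a target sum, but say nothing about *how* the
# genome achieves it – the designer only ever sees the score.
best = evolve(lambda g: -abs(sum(g) - 4.0))
print(round(sum(best), 3))
```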

Looking at the architecture of a finished machine (or even at the process used to design it) may be enough to conclude that it does or might possess structures that imply potential consciousness, awareness, emotions or the ability to feel pain or suffering.

In such circumstances, given that a machine may have a capability, we should consider assigning rights on the basis that it does. The alternative would be machines with such capability that are unprotected. 

Smart Yoghurt

One interesting class of future machine is the smart yoghurt. This is a gel, or yoghurt, made up of many particles that each provide capabilities of one form or another. These particles could be nanoelectronics, or they could be smart bacteria – bacteria with organic electronic circuits inside them (manufactured by the bacteria themselves), powered by normal cellular energy supplies. Some smart bacteria could survive in nature; others might only survive in a yoghurt. A smart yoghurt could use evolutionary techniques to develop into a super-smart entity. Though we may never get that far, it is theoretically possible for a 100ml pot of smart yoghurt to house processing and memory capability equivalent to all the human brains in Europe!
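A rough sanity check of that claim is possible, though every figure below is an assumption of mine rather than a measurement: a ballpark packing density for bacterial cells, an assumed speed advantage of electronic circuits over millisecond-scale neurons, and commonly cited figures for brains and population.

```python
# Back-of-envelope only: all numbers are rough assumptions.
cells_per_ml = 1e10        # dense bacterial culture, order of magnitude
pot_ml = 100               # the 100ml pot
speed_factor = 1e7         # assumed speed advantage of electronics over neurons

neurons_per_brain = 8.6e10 # commonly cited human figure
europe_population = 7.5e8  # roughly

yoghurt_equiv = cells_per_ml * pot_ml * speed_factor   # 'neuron-equivalents'
europe_neurons = neurons_per_brain * europe_population

print(f"yoghurt: {yoghurt_equiv:.1e}, Europe: {europe_neurons:.1e}")
# ~1.0e19 vs ~6.5e19 – the same order of magnitude, if the assumptions hold.
```

On those numbers the claim is at least not absurd, but shift any assumption by an order of magnitude and the comparison shifts with it.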

Such an entity, connected to the net, could have a truly global sensory and activation system. It could use very strong encryption, based on maths understood only by itself, to avoid interference by humans. In effect, it could be rather like the sci-fi alien in the film 'The Day the Earth Stood Still', with vastly superhuman capability, able to destroy all life on Earth if it desired.

It would be in a powerful position to demand rather than negotiate its rights, and our responsibilities towards it. Rather than us deciding what its rights should be, the reverse could apply, with it deciding what we should be permitted to do, on pain of extinction.

Again, we don't need to make one of these to consider the possibility and its implications. Our machine rights discussions should certainly include potential beings with vastly superhuman capability, where we are not the primary legislative force.

Machine Rights based on existing human, animal or corporation rights

Most future machines, robots and AIs will not resemble humans or animals, but some will. For those that do, existing animal and natural rights would be a decent start point, and they could then be adjusted to requirements. That would be faster than starting from scratch. The spectrum of intelligence and capability will span all the way from dumb pieces of metal to vastly superhuman machines, so rights that are appropriate for one machine might be very inappropriate for others.

Notable examples of existing human rights and animal rights would provide obvious starting material.

Picking some low-hanging fruit, some potential rights immediately seem appropriate for some potential future machines:

  • For all sentient synthetic organisms, machines and hybrid organism-machines capable of experiencing any form of pain or discomfort, the following would seem appropriate:
  • For some classes of machine, the right to life might apply
  • For some classes of machine, the right not to be switched off, reset or rebooted, or to be put in sleep mode
  • The right to control over the use of sleep mode – sleep duration, the right to wake, and whether sleep might be a precursor to permanent deactivation or reset
  • Freedom from acts of cruelty
  • Freedom from unnecessary pain or unnecessary distress, during any period of appropriate level of awareness, from birth to death, including during treatments and operations
  • Possible segregation of certain species that may experience risk or discomfort (real or perceived) from other machines, organisms or humans
  • Domestic animal rights would seem appropriate for any sentient synthetic organism or hybrid. Derivatives might be appropriate for other AIs or robots
  • Basic requirements for husbandry, welfare and behavioural needs of the machines or synthetic organisms. Depending on their nature, equivalents are needed for:

i) Comfort and shelter – right to repair?

ii) Access to water and food – energy source?

iii) Freedom of movement – internet access?

iv) Company of other animals, particularly their own kind.

v) Light and ambient temperature as appropriate.

vi) Appropriate flooring (avoid harm or strain).

vii) Prevention, diagnosis and treatment of disease and defects.

viii) Avoidance of unnecessary mutilation.

ix) Emergency arrangements to ensure the above.

These are just a few starting points; many others exist and debate is ongoing. For the purposes of this blog, however, asking some of the interesting questions and exploring some of the extremely broad range of considerations that will apply is sufficient. Even this superficial glance at the topic is long; the full task ahead will be challenging.

Of course, any discussion around machine rights begs the question: as we look further ahead, who is going to be granting whom rights? If machine intelligence and power supersede our own, it is the machines, not us, who will be deciding what rights and responsibilities to grant to which entities (including us), whether we like it or not. After all, history shows that the rules are written and enforced by the strongest and the smartest. Right now, that is us; we get to decide which animals, lakes, companies and humanoid robots are granted what rights. In the future, we may not retain that privilege.

Authors

ID Pearson BSc DSc(hc) CITP MBCS FWAAS

idpearson@gmail.com

Dr Pearson has been a futurologist for 30 years, tracking and predicting developments across a wide range of technology, business, society, politics and the environment. Graduated in Maths and Physics and a Doctor of Science. Worked in numerous branches of engineering from aeronautics to cybernetics, sustainable transport to electronic cosmetics. 1900+ inventions including text messaging and the active contact lens, more recently a number of inventions in transport technology, including driverless transport and space travel. BT’s full-time futurologist from 1991 to 2007 and now runs Futurizon, a small futures institute. Writes, lectures and consults globally on all aspects of the technology-driven future. Eight books and over 850 TV and radio appearances. Chartered Member of the British Computer Society and a Fellow of the World Academy of Art and Science.

Bronwyn Williams is a futurist, economist and trend analyst. She is currently a partner at Flux Trends where she consults to international private and public sector leaders on how to stop messing up the future. Her new book, co-edited with Theo Priestly, The Future Starts Now is available here: https://www.amazon.com/Future-Starts-Now-Insights-Technology/dp/1472981502

Non-batty consciousness

Have you read the paper 'What is it like to be a bat?'? It is an interesting example of philosophy that is commonly read by philosophy students. However, it illustrates one of the big problems with philosophy: in its desire to assign definitions to make things easier to discuss, it can sometimes exclude perfectly valid examples.

While trying, laudably, to get a handle on what consciousness is, the second page of that paper asserts that:

“… but fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism – something it is like for the organism. We may call this the subjective character.”

Sounds OK?

No, it’s wrong.

Actually, I didn’t read any further than that paragraph. The rest of the paper may be excellent. It is just that statement I take issue with here.

I understand what it is saying, and why, but the 'only if' is wrong. There does not have to be 'something that it is like to be' an entity for consciousness to exist. I would agree it is true of the bat, but not of consciousness generally; so although much of the paper might be correct because it discusses bats, that assertion about the broader nature of consciousness is incorrect. It would have been better to include a phrase limiting it to human or bat consciousness; had it done so, I'd have had no objection. The author has essentially stepped briefly (and unnecessarily) outside the boundary conditions for that definition. It is probably correct for all known animals, including humans, but it is possible to make a synthetic organism or an AI that is conscious for which the assertion would not hold.

The author of the paper recognises the difficulty in defining consciousness for good reason: it is not easy to define. In our everyday experience of being conscious, it covers a broad range of things, but the process of defining necessarily constrains and labels those things, and that's where some things can easily go unlabelled or get left out. In a perfectly acceptable everyday (and undefined) understanding of consciousness, at least one manifestation of it could be thought of as the awareness of awareness, or the sensation of sensing, which could notionally be implemented by a sensing circuit with a feedback loop.
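To illustrate just that narrow notion (and nothing more), here is a minimal sketch in Python of a first-order sensor plus a second-order monitor that senses the sensing. All names are hypothetical, and this is emphatically not a claim that such a loop is sufficient for consciousness; it merely shows the feedback structure the text describes.

```python
import random

def sense_world() -> float:
    """First-order sensing: read some property of the world."""
    return random.uniform(0.0, 1.0)

def sense_sensing(history: list[float]) -> dict:
    """Second-order sensing: the system observes its own sensory activity."""
    recent = history[-5:]
    return {
        "currently_sensing": len(history) > 0,
        "recent_mean_signal": sum(recent) / len(recent) if recent else 0.0,
    }

history: list[float] = []
for _ in range(10):
    history.append(sense_world())   # sensing...
report = sense_sensing(history)     # ...and sensing that sensing is happening
print(report)
```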

That notion alone (and there may be many other potential forms of consciousness) includes large classes of potential consciousnesses that would not be covered by the paper's assertion. The assertion assumes that consciousness is static (i.e. it stays in place, resident in that organism) and limited (contained within a shell), whereas it is possible to make a consciousness that is mobile and dynamic, transient or periodic – and such a consciousness would not be covered by the assertion.

In fact, using that subset of potential consciousness described by awareness of awareness, or experiencing the sensation of sensing, I wrote a blog describing how we might create a conscious machine:

Biomimetic insights for machine consciousness

Such a machine is entirely feasible and could be built soon – the fundamental technology already exists so no new invention is needed.

It would also be possible to build another machine that is not static, but that emerges intermittently in various forms in various places, so it is neither static, continuous nor contained. I described an example of that in a 2010 blog; although not conscious in that case, it could be if the IT platforms it runs on were of a different nature (I do not believe a digital computer can become conscious, but many future machines will not be digital):

https://timeguide.wordpress.com/2010/06/16/consciousness/

That example uses a changing platform of machines, so is quite unlike an organism with its single brain (or two, in the case of some dinosaurs). Such a consciousness would have a different 'feel' from moment to moment. With parts of it turning on and off all over the world, any given part would exist only intermittently, and yet collectively it would still be conscious at any moment.

Some forms of ground-up intelligence will contribute to the future smart world. Some elements of that may well be conscious to some small degree, but as with simple organisms, we will struggle to define consciousness for them:

Ground up data is the next big data

As we proceed towards direct brain links in pursuit of electronic immortality and transhumanism, we may even change the nature of human consciousness. This blog describes a few such changes:

Future AI: Turing multiplexing, air gels, hyper-neural nets

Another platform that could be conscious, perhaps with many different forms of consciousness even in parallel, is a smart yoghurt:

The future of bacteria

Smart yoghurt could be vastly superhuman, perhaps a billion times smarter in theory. It could be a hive mind with many minds that come and go, changing from instance to instance, sometimes individual, sometimes part of 'the collective'.

So really, there are very many forms in which consciousness could exist. A bat has one of them; humans have another. But we should be very careful when we talk about the future world, with its synthetic biology, artificial organisms, AIs, robots and all sorts of hybrids, that we do not fall into the trap of asserting that all consciousness is like our own. Actually, most of it will be very different.

Wisdom v human nature

Reading the WEF article about using synthetic biology to improve our society instantly made me concerned, and it should concern you too. This is a reblog of an article I wrote on the topic in 2009, explaining why we can't modify humans to be wiser: our human nature will always spoil any effort to do so. Since wisdom is the core skill in deciding what modifications we should make, the same goes for most other modifications we might choose.

Wisdom is traditionally the highest form of intelligence, combining systemic experience, some deep thinking and knowledge. Human nature is a set of behavioural biases imposed on us by our biological heritage, built over billions of years. As a technology futurist, I find it useful that in spite of technology changes, our human nature has probably remained much the same for the last 100,000 years, and it is this anchor that provides a useful guide to potential markets. Underneath a thin veneer of civilisation, we are pretty similar to our caveman ancestors. Human nature is an interesting mixture of drives, founded on raw biology and tweaked by human evolution over millennia to incorporate some cultural aspects, such as the desire for approval by our peer group, the need to acquire and display status, and so on. Each of us faces a constant battle between our inbuilt nature and the desire to do what we know is the 'right thing' based on our education and situational analysis. For example, I love eating snacks all evening, but if I do, I put on weight. Knowing this, I just about manage to muster enough will power to manage my snacking so that my weight remains stable. Some people stay even slimmer than I do, while others lose the battle and become obese. So already, it is clear that on an individual basis, the battle between wisdom and nature can go either way. On a group basis, people can go either way too, with mobs at one end and professional bodies at the other. But even in the latter, where knowledge and intelligence should hold power, the same basic human drive for power and status corrupts institutional intellectual values, with the same power struggles, using the same emotional drivers that the rulers of the mob use.

So, much as we would like to think that we have moved beyond biology, everyday evidence says we are still very much in its control, both individually and collectively. But what of the future? Are we forever to be ruled by our human nature? Will it always get in the way of the application of wisdom?  Or will we find a way of becoming wiser? After 100,000 years of failure by conventional social means, it seems most likely that technology would be the earliest means available to us to do so. But what kind of technology might work?

Many biologists argue that for various reasons, humans no longer evolve along Darwinian lines. We mostly don't let the weak die, and our gene pools are well mixed, with few isolated communities to drive evolution. But there is a bigger reason why we've reached the end of the Darwinian road for humanity. From now on (well, a few decades from now, anyway), as a result of ongoing biotech and increasing understanding of genetics and proteomics, we will essentially be masters of our own genome. We will be able to decide which genes to pass on, which to modify or swap, which to dump. One day, we will even be able to design new ones. This will certainly not be easy. Most physical attributes arise from interactions of many genes, so it isn't as simple as ticking boxes on a wish list, but technology progresses by constantly building on existing knowledge, so we will get there, slowly but surely, and the more we know, the faster we will learn more. As we use this knowledge, future generations will start echoing the values and decisions of their ancestors, which if anything is closer to Lamarckian evolution than Darwinian.

So we will soon have the power, in principle, to redesign humanity from the ground up. We could decide what attributes we want to enhance, and what to reduce or jettison. We could make future generations just the way we want, their human nature designed and optimised to our view of perfection. And therein lies the first fundamental problem. We don't all share a single value set, and will never agree on what perfection means. Our decisions on what to keep and dump wouldn't be based on wisdom, deciding what is best for humanity in some absolute sense, but would instead echo our value system at the time of the decision. Worse still, it wouldn't be all of us deciding, but some mad scientist, power-crazy politician, celebrity or rich guy, or worse still, a committee. People in authority don't always represent what is best in current humanity; at best, they simply represent the attributes required to rise to the top, and there is only a small overlap between those sets. Imagine if such decisions were to be made in today's UK, with a nanny state redesigning us to smoke less, drink less, eat less, exercise more, and to do whatever the state tells us without objection.

What of wisdom then? How often is wisdom obvious in government policy? Do we want a Stepford Society? That is what evolution under state control would yield. Under the control of engineers or designers or celebrities, it would look different, but none of these groups represents the best interests of wisdom either. What of a benign dictator, using the wisdom of Solomon to direct humans down the right path to wise utopia? No thanks! I am really not sure there is any form of committee or any individual or role that is capable of reaching a truly wise decision on what our human nature should become. And no guarantee even if there was, that future human nature would be designed to be wise, rather than a mixture of other competing attributes. And the more I think about it, the more I think that is the way it ought to be. Being wise is certainly something to be aspired to, but do you want everyone to be wise? Really? I would much prefer a society that is as mixed as today’s, with a few wise men and women, quite a lot of fools, and most people in between. Maybe a rebalancing towards more wise people and fewer fools would be nice, and certainly I’d like to adjust our institutions so that more wise people rise to positions of power, but I don’t think it’s wise to try to make humans better genetically. Who knows where that would end, with the free run of values that we seem to have now that the fixed anchors of religion have been lost. Each successive decision on optimisation would be based on a different value set, taking us on a random walk with no particular destination. Is wisdom simply not desired enough to make it a winner in the optimisation race, competing as it is against beauty, sporting ability, popularity, fame and fortune?

So if we can’t safely use genetics to make humans wiser or improve human nature, is the battle between wisdom and nature already lost? Not yet, there are some other avenues to explore. Suppose wisdom were something that people could acquire if and when they want it. Suppose it could be used at will when our leaders are making important decisions. And the rest of the time we could carry on our lives in the bliss of ignorance and folly, without the burden of knowing what is wise. Maybe that would work. In this direction, the greatest toolkit we will have comes from IT, and especially from the field of artificial intelligence.

Much of knowledge (of which only a rapidly decreasing proportion is human knowledge) is captured on the net, in databases and expert systems, in neural networks and sensor networks. Computers already enhance our lives greatly by using this knowledge automatically. And yet they cannot think in any real sense of the word, and are not yet conscious, whatever that means. But thanks to advancing technology, it is becoming routine to monitor signals in the brain to millimetre resolutions. Nanowires can now even measure signals from different parts of individual cells. With more rapid reverse engineering of brain processes, and consequential insights into the mechanisms of consciousness, computer designers will have much better knowledge on which to base their development of strong artificial intelligence, i.e. conscious machines. Technology doesn't progress linearly, but exponentially, with the knowledge development rate rapidly increasing, as progress in one area helps progress in others.

 Thanks to this positive feedback effect, it is possible that we could have conscious machines as early as 2020, and that they will not just be capable of human levels of intelligence, but will become vastly superior in terms of sensory capability, memory, processing speed, emotional capability, and even the scope of their thinking. Most importantly from a wisdom viewpoint, they will be able to take into account many more factors at one time than humans. They will also be able to accumulate knowledge and experience from other compatible machines, as well as from the whole web archives, so every machine could instantly benefit from insights from any other, and could also access any sensory equipment connected to any other computer, pool computer minds as needed, and so on. In a real sense, they will be capable of accumulating many human lifetimes of equivalent experience in just a few minutes.

It would perhaps be unwise to build such powerful machines before humans can transparently link their brains to them, otherwise we face a potential terminator scenario, so this timescale might be delayed by regulation (though the military potential and our human nature tendency to want to gain advantage might trump this). If so, then by the time we actually build conscious machines that we can link to our brains, they will be capable of vastly higher levels of intelligence. So they will make superb tools for making wiser solutions to problems. They will enable their human wearers to consider every possibility, from every angle, looking at every facet of the problem, to consider the consequences and compare with other approaches. And of course, if anyone can wear them, then the intellectual gap between dumb and smart people is drowned out by the vast superiority of the add-ons. This would make it possible to continue to select our leaders on factors other than intelligence or wisdom, but still enable them to act with much more wisdom when called to.

But this doesn’t solve the problem automatically. Leaders would have to be forced to use machine tools when a wise decision is required, otherwise they might often choose not to do so, and sometimes still end up making very unwise decisions by following the forces driven by their nature. And if they do use the machine, then some will argue that the human is becoming somewhat obsolete to the process, and we are in danger of handing over decision-making to machines, another form of terminator scenario, and not making proper ‘human’ decisions. Somehow, we would have to crystallise out those parts of human decision making that we consider to be fundamentally human, and important to keep, and ensure that any decision is subject to the resultant human veto. We can make a blend of nature and wisdom that suits.

This route towards machine-enabled wisdom would still take a lot of effort and debate to make it work. Some of the same objections face this approach as the genetic one, but if it is only optional and the links can be switched on and off, then it should be feasible, just about. We would have great difficulty in deciding what rules and processes to apply, and it will take some time to make it work, but nature could eventually be over-ruled by wisdom using an AI 'wisdom machine' approach.

Would it be wise to do so? Actually, even though I think changing our genetics to bias us towards wisdom is unwise, I do think that using optional AI-based wisdom is not only feasible, but also a wise thing to try to implement. We need to improve the quality of human decision processes, to make them wiser, if future generations are to live peacefully and get the best out of their lives, without trashing the planet. If we can do so without changing the fundamental nature of humanity, then all the better. We can keep our human nature, and be wise when we want to be. If we can do that, we can acknowledge our tendency to follow our nature, and over-rule it as required. Sometimes, nature will win, but only when we let it. Wisdom will one day triumph. But probably not in my lifetime.