Category Archives: automation

With automation driving us towards UBI, we should consider a culture tax

Regardless of party politics, most people want a future where everyone has enough to live a dignified and comfortable life. To make that possible, we need to tweak a few things.

Universal Basic Income

I suggested a long time ago that in the far future we could afford a basic income for all, without any means testing, so that everyone has an income at a level they can live on. It turned out I wasn’t the only one thinking that, and many others have since adopted the idea too, under the now usual terms Universal Basic Income or the Citizen Wage. The idea may be old, but the figures are rarely discussed. It is harder than it sounds, and being a nice idea doesn’t ensure economic feasibility.

No means testing means very little admin is needed, saving the estimated 30% wasted on admin costs today. Then wages could go on top, so that everyone is still encouraged to work, and then all income from all sources is totalled and taxed appropriately. It is a nice idea.

The differences in figures between parties would be relatively minor, so let’s ignore party politics. In today’s money, it would be great if everyone could have, say, £30k a year as a state benefit, then earn whatever they can on top. £30k is around today’s average wage. It doesn’t make you rich, but you can live on it, so nobody would be poor in any sensible sense of the word. With everyone economically provided for and able to lead comfortable and dignified lives, it would be a utopia compared to today. Sadly, it can’t work with those figures yet: 65,000,000 x £30,000 = £1,950Bn. The UK economy isn’t big enough. The state only gets to control part of GDP, and out of that reduced budget it also has its other costs of providing health, education, defence and so on, so the amount that could be dished out to everyone on this basis is a lot smaller than £30k. Even if the state were to take 75% of GDP and spend most of it on the basic income, £10k per person would be pushing it. A couple would struggle to afford even the most basic lifestyle, and single people would really struggle. Some people would still need additional help, which reduces the pool left to pay the basic allowance still further. And if the state takes 75% of GDP, only 25% is left for everything else, so salaries would be flat, reducing the incentive to work, while investment and entrepreneurial activity are starved of both resources and incentive. It simply wouldn’t work today.
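The arithmetic above can be checked in a few lines. The £850bn allowance for other state spending below is my own illustrative assumption, chosen only to show how a 75% state share still caps the payout near £10k:

```python
# Back-of-envelope check on the UBI figures (illustrative 2010s UK numbers).
population = 65_000_000
target = 30_000                          # desired basic income, £/year
print(f"£{population * target / 1e9:,.0f}bn needed")       # £1,950bn

gdp = 2_000e9                            # UK GDP, roughly £2,000bn
state_take = 0.75 * gdp                  # an extreme 75% state share
other_spending = 850e9                   # health, education, defence etc. (assumed)
available = state_take - other_spending
print(f"£{available / population:,.0f} per head at best")  # £10,000
```

Whatever figure you assume for the state’s other costs, the gap between £10k and £30k is the point.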

Simple maths thus forces us to make compromises. Sharing resources reduces costs considerably. In a first revision, families might be given less for kids than for adults, but what about groups of young adults sharing a big house? They may be adults, but they benefit from the same economy of shared resources. So maybe there should be a household limit, or a bedroom tax, or forms and means testing, and it mustn’t incentivize people to live separately or house supply suffers. Anyway, it is already getting complicated, and our original nice idea is in the bin. That’s why it is such a mess at the moment. There just isn’t enough money to make everyone comfortable without lots of allowances and testing and admin. We all want utopia, but we can’t afford it. Even the modest £30k-per-person utopia costs at least three times more than the UK can afford. Switzerland is richer per capita, but even there they have rejected the idea.

However, if we can get back to the average 2.5% growth per year in real terms that used to apply pre-recession, and surely we can, it would only take 45 years to get there. That isn’t such a long time. There is hope that with better government than we have had of late, and a willingness to live with a little economic tweaking, we could achieve good quality of life for all in the second half of the century.
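The 45-year figure follows from compound growth: the previous paragraphs suggest the economy needs to roughly triple before the £30k figure becomes affordable. A quick check:

```python
# How long does 2.5% real annual growth take to triple the economy,
# closing the "at least three times more than the UK can afford" gap?
growth = 1.025
size, years = 1.0, 0
while size < 3.0:
    size *= growth
    years += 1
print(years)   # 45
```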

So I still really like the idea of a simple welfare system, providing a generous base level allowance to everyone, topped up by rewards of effort, but I recognise that we in the UK will have to wait decades before we can afford to set that base level at anything like comfortable standards, though other economies could afford it earlier.

Meanwhile, we need to tweak some other things to have any chance of getting there. I’ve commented often that pure capitalism would eventually lead to a machine-based economy, with the machine owners having more and more of the cash, and everyone else getting poorer, so the system will fail. Communism fails too. Thankfully much of the current drive in UBI thinking is coming from the big automation owners so it’s comforting to know that they seem to understand the alternative.

Capitalism works well when rewards are shared sensibly; it fails when wealth concentration is too high or when incentive is too low. Preserving the incentive to work and create is mainly a matter of setting tax levels well. Making sure that wealth doesn’t get concentrated too much needs a new kind of tax.

Culture tax

The solution I suggest is a culture tax. Culture in the widest sense.

When someone creates and builds a company, they don’t do so from a state of nothing. They currently take for granted all our accumulated knowledge and culture – trained workforce, access to infrastructure, machines, governance, administrative systems, markets, distribution systems and so on. They add just another tiny brick to what is already a huge and highly elaborate structure. They may invest heavily with their time and money, but when considered overall as part of the system their company inhabits, they actually only pay for a fraction of the things their company will use.

That accumulated knowledge, culture and infrastructure belongs to everyone, not just those who choose to use it. It is common land, free to use, today. Businesses might consider that this is what they pay taxes for already, but that isn’t explicit in the current system.

The big businesses that are currently avoiding paying UK taxes by paying overseas companies for intellectual property rights could be seen as trailblazing this approach. If they can understand and even justify the idea of paying another part of their company for IP or a franchise, why should they not pay the host country for its IP – access to the residents’ entire culture?

This kind of tax would provide the means needed to avoid too much concentration of wealth. A future businessman might still choose to use only software and machines instead of a human workforce to save costs, but levying taxes on use of the cultural base that makes that possible allows a direct link between use of advanced technology and taxation. Sure, he might add a little extra insight or new knowledge, but would still have to pay the rest of society for access to its share of the cultural base, inherited from the previous generations, on which his company is based. The more he automates, the more sophisticated his use of the system, the more he cuts a human workforce out of his empire, the higher his taxation. Today a company pays for its telecoms service, which pays for the network. It doesn’t pay explicitly for the true value of that network: the access to people and businesses, the common language, the business protocols, a legal system, banking, payments system, stable government, a currency, the education of the entire population that enables them to function as actual customers. The whole of society owns those, and could reasonably demand rent if the company is opting out of the old-fashioned payments mechanisms – paying fair taxes and employing people who pay taxes. Automate as much as you like, but you still must pay your share for access to the enormous value of human culture shared by us all, on which your company still totally depends.

Linking to technology use makes good sense. Future AI and robots could do a lot of work currently done by humans. A few people could own most of the productive economy. But they would be getting far more than their share of the cultural base, which belongs equally to everyone. In a village where one farmer owns all the sheep, other villagers would be right to ask for rent for their share of the commons if he wants to graze them there.

I feel confident that this extra tax would solve many of the problems associated with automation. We all equally own the country, its culture, laws, language, human knowledge (apart from current patents, trademarks etc. of course) and its public infrastructure – not just businessmen. Everyone surely should have the right to be paid if someone else uses part of their share. A culture tax would provide a fair ethical basis for demanding the taxes needed to pay the Universal Basic Income, so that all may prosper from the coming automation.

The extra culture tax would not magically make the economy bigger, though automation may well increase it a lot. The tax would ensure that wealth is fairly shared. Culture tax/UBI duality is a useful tool to be used by future governments to make it possible to keep capitalism sustainable, preventing its collapse, preserving incentive while fairly distributing reward. Without such a tax, capitalism simply may not survive.


Will urbanization continue or will we soon reach peak city?

For a long time, people have been moving from the countryside into cities. The conventional futurist assumption is that this trend will continue, with many mega-cities, some with mega-buildings. I’ve consulted occasionally on future buildings and future cities from a technological angle, but I’ve never really challenged the assumption that urbanization will continue. It’s always good to challenge our assumptions occasionally, as things can change quite rapidly.

There are forces in both directions. Let’s list those that support urbanisation first.

People are gregarious. They enjoy being with other people. They enjoy eating out and having coffees with friends. They like to go shopping. They enjoy cinemas and theatre and art galleries and museums. They still have workplaces. Many people want to live close to these facilities, where public transport is available or driving times are relatively short. There are exceptions of course, but these still generally apply.

Even though many people can and do work from home sometimes, most of them still go to work, where they actually meet colleagues, and this provides much-valued social contact, and in spite of recent social trends, still provides opportunities to meet new friends and partners. Similarly, they can and do talk to friends via social media or video calls, but still enjoy getting together for real.

Increasing population produces extra pressure on the environment, and governments often try to minimize it by restricting building on green field land. Developers are strongly encouraged to build on brown field sites as far as possible.

Now the case against.

Truly Immersive Interaction

Talking on the phone, even to a tiny video image, is less emotionally rich than being there with someone. It’s fine for chats in between physical meetings of course, but the need for richer interaction still requires ‘being there’. Augmented reality will soon bring headsets that provide high quality 3D life-sized images of the person, and some virtual reality kit will even allow analogs of physical interaction via smart gloves or body suits, making social comms a bit better. Further down the road, active skin will enable direct interaction with the peripheral nervous system to produce exactly the same nerve signals as an actual hug or handshake or kiss, while active contact lenses will provide the same resolution as your retina wherever you gaze. The long term is therefore communication which has the other person effectively right there with you, fully 3D, fully rendered to the capability of your eyes, so you won’t be able to tell they aren’t. If you shake hands or hug or kiss, you’ll feel it just the same as if they were there too. You will still know they are not actually there, so it will never be quite as emotionally rich as if they were, but it can get pretty close. Close enough perhaps that it won’t really matter to most people most of the time that it’s virtual.

In the same long term, many AIs will have highly convincing personalities, and some will even have genuine emotions and be fully conscious. I blogged recently on how that might happen, if you don’t believe it’s possible.

None of the technology required for this is far away, and I believe a large IT company could produce conscious machines with almost human-level AI within a couple of years of starting the project. It won’t happen until they do, but when one starts trying seriously to do it, it really won’t be long. That means that as well as getting rich emotional interaction from other humans via networks, we’ll also get lots from AI, either in our homes, or on the cloud, and some will be in robots in our homes too.

This adds up to a strong reduction in the need to live in a city for social reasons.

Going to cinemas, theatre, shopping etc will also all benefit from this truly immersive interaction. As well as that, activities that already take place in the home, such as gaming, will advance greatly into more emotionally and sensorially intensive experiences, along with much enhanced virtual tourism, virtual world tourism, and virtual clubbing & pubbing, which barely even exist yet but could become major activities in the future.

Socially inclusive self-driving cars

Some people have very little social interaction because they can’t drive and don’t live close to public transport stops. In some rural areas, buses may only pass a stop once a week. Our primitive 20th century public transport systems thus unforgivably exclude a great many people from society, even though the technology needed to solve that has existed for many years. Leftist value systems that much prefer people who live in towns or close to frequent public transport over everyone else must take a lot of the blame for the current epidemic of loneliness. It is unreasonable to expect those value systems to be replaced by more humane and equitable ones any time soon, but thankfully self-driving cars will bypass politicians and bureaucrats and provide transport for everyone. The ‘little old lady’ who can’t walk half a mile to wait 20 minutes in freezing rain for an uncomfortable bus can instead just ask her AI to order a car, and it will pick her up at her front door, take her exactly where she wants to go, then do the same for her return home whenever she wants. Once private sector firms like Uber provide cheap self-driving cars, they will quickly be followed by other companies, and later by public transport providers. Redundant buses may finally become extinct, replaced by better, socially inclusive transport: large fleets of self-driving or driverless vehicles. People will be able to live anywhere and still be involved in society. As attendance at social events improves, they will become feasible even in small communities, so there will be less need to go into a town to find one. Even political involvement might increase. Loneliness will decline as social involvement increases, and we’ll see many other social problems decline too.

Distribution drones

We hear a lot about upcoming redundancy caused by AI, but far less about the upside. AI might mean someone is no longer needed in an office, but it also makes it easier to set up a company and run it, taking what used to be just a hobby and making it into a small business. Much of the everyday admin and logistics can be automated. Many who would never describe themselves as entrepreneurs might soon be making things and selling them from home, and this AI-enabled home commerce will bring in the craft society. One of the big problems is getting a product to the customer. Postal services and couriers are usually expensive and very likely to lose or damage items. Protecting objects from such damage may require much time and expense in packing. Even when objects are delivered, there is potential fraud from non-payers. Instead of this antiquated, inefficient and expensive system, drone delivery could collect an object and take it to a local customer with minimal hassle and expense. Blockchain enables smart contracts that can be created and managed by AI and can directly link delivery to payment, with fully verified interaction video if necessary. If one happens, the other happens. A customer might return a damaged object, but at least can’t keep it and deny receipt. Longer distance delivery can still use cheap drone pickup to take packages to local logistics centers in smart crates with fully blockchained g-force and location detectors that can prove exactly who damaged it and where. Drones could be of any size, and of course self-driving cars or pods can easily fill the role too if smaller autonomous drones are inappropriate.
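The “if one happens, the other happens” link between delivery and payment can be illustrated with a toy escrow contract. This is a plain-Python sketch of the idea only, not a real blockchain or smart-contract API; the class and its methods are invented for illustration:

```python
# Toy escrow: payment is locked when the order is placed, and released
# only when both the drone and the customer sign off on delivery.
class DeliveryContract:
    def __init__(self, price):
        self.escrow = price      # funds locked up front
        self.paid = False

    def confirm_delivery(self, drone_signed, customer_signed):
        # on a real chain this check would be backed by the verified
        # interaction video and g-force/location records mentioned above
        if drone_signed and customer_signed:
            self.paid = True
            self.escrow = 0      # released to the seller
        return self.paid

contract = DeliveryContract(price=40)
print(contract.confirm_delivery(True, True))   # True: goods and money move together
```

The point is the atomicity: neither party can end up holding both the goods and the money.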

Better 3D printing technology will help to accelerate the craft economy, making it easier to do crafts by upskilling people and filling in some of their skill gaps. Someone with visual creativity but low manual skill might benefit greatly from AI model creation and 3D printer manufacture, followed by further AI assistance in marketing, selling and distribution. 3D printing might also reduce the need to go to town to buy some things.

Less shopping in high street

This is already obvious. Online shopping will continue to become a more personalized and satisfying experience – smarter, with faster delivery and easier returns – while high street decline accelerates. Every new wave of technology makes online better, and high street stores seem unable or unwilling to compete, in spite of my wonderful ‘6s guide’.

Those that are more agile still suffer a decline in shopper numbers as the big stores fail to attract them, so even smart stores will find it harder to survive.

Improving agriculture

Farming technology has doubled the amount of food production per hectare in the last few decades. That may happen again by mid-century. Meanwhile, the trend is towards higher vegetable and lower meat consumption. Even with an increased population, less land will be needed to grow our food. As well as reducing the need to protect green belts, that will also allow some of our countryside to be put under better environmental stewardship programs, returning much of it to managed nature. What countryside we have will be healthier and prettier, and people will be drawn to it more.

Improving social engineering

Some objections to green-field building can be reduced by making better use of available land. Large numbers of new homes are needed and they will certainly need some green field to be used, but given the factors already listed above, a larger number of smaller communities might be a better approach. Amazingly, in spite of decades of dating technology proving that people can be matched up easily using AI, there is still no obvious use of similar technology to establish new communities by blending together people who are likely to form effective communities. Surely it must be feasible to advertise a new community building program that wants certain kinds of people in it – even an Australian-style points system might work sometimes. Unless sociologists have done nothing for the past decades, they must surely know by now what types of people work well together. If the right people live close to each other, social involvement will be high, loneliness low, health improved, care costs minimized, the need for longer distance travel reduced and environmental impact minimized. How hard can it be?
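To make the “how hard can it be” point concrete, here is a minimal sketch of preference-based matching. The people and attributes are entirely invented, and a real system would use far richer data, but the mechanism is no harder than dating-site matching:

```python
# Score how well two people might fit the same community from simple
# preference vectors, then pick the most compatible pair.
people = {
    "ann": {"quiet": 0.9, "gardening": 0.8, "nightlife": 0.1},
    "bob": {"quiet": 0.85, "gardening": 0.8, "nightlife": 0.15},
    "cat": {"quiet": 0.1, "gardening": 0.2, "nightlife": 0.9},
    "dan": {"quiet": 0.2, "gardening": 0.1, "nightlife": 0.7},
}

def compatibility(a, b):
    # 1 minus the mean absolute difference: higher means better match
    return 1 - sum(abs(a[k] - b[k]) for k in a) / len(a)

names = list(people)
pairs = [(compatibility(people[x], people[y]), x, y)
         for i, x in enumerate(names) for y in names[i + 1:]]
best = max(pairs)
print(best[1:])   # ('ann', 'bob')
```

Scaling the same scoring up to whole candidate communities is a well-studied clustering problem, not a research frontier.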

Improving building technology such as 3D printing and robotics will allow more rapid construction, so that when people are ready and willing to move, property suited to them can be available soon.

Lifestyle changes also mean that homes don’t need to be as big. A phone today does what used to need half a living room of technology and space. With wall-hung displays and augmented reality, decor can be partly virtual, and even a 450 sq ft apartment is fine as a starter place, half as big as was needed a few decades ago, and that could be 3D printed and kitted out in a few days.

Even demographic changes favor smaller communities. As wealth increases, people have smaller families, i.e. fewer kids. That means fewer years doing the school run, so less travel and less need to be in a town. Smaller schools in smaller communities can still access specialist lessons via the net.

Increasing wealth also encourages and enables people to seek a higher quality of life. People who used to live in a crowded city street might prefer a more peaceful and spacious existence in a more rural setting, and will increasingly be able to afford to move. Short term millennial frustrations with property prices won’t last, as typical 2.5% annual growth more than doubles wealth by 2050 (though automation and its assorted consequences will affect the distribution of that wealth).

Off-grid technology

Whereas one of the main reasons to live in urban areas used to be easy access to telecoms, energy and water supply and sewerage infrastructure, all of these can now be achieved off-grid. Mobile networks provide even broadband access. Solar or wind provide easy energy supply. Water can be harvested out of the air even in arid areas, and human and pet waste can be used as biomass for energy supply too, leaving fertilizer as residue.

There are also big reasons why people won’t want to live in cities, and these will also drive deurbanisation.

The biggest by far is the problem of epidemics. As antibiotic resistance increases, disease will become a bigger problem. We may find good alternatives to antibiotics, but we may not. If not, then we may see some large cities where disease runs rampant and kills hundreds of thousands of people, perhaps even millions. Many scientists have listed pandemics among their top ten threats facing humanity. Obviously, being in a large city will incur a higher risk of becoming a victim, so once one or two incidents have occurred, many people everywhere will look for options to leave cities. Linked to this is bioterrorism, where the disease is deliberate, perhaps created in a garden shed by someone who learned the craft in one of today’s bio-hacking clubs. Disease might be aimed at a particular race, gender or lifestyle group, or it may simply be designed to be as contagious and lethal as possible to everyone.

I’m still not saying we won’t have lots of people living in cities. I am saying that more people will feel less need to live in cities and will instead be able to find a small community where they can be happier in the countryside. Consequently, many will move out of cities, back to more rural living in smaller, friendlier communities that improving technology makes even more effective.

Urbanization will slow down, and may well go into reverse. We may reach peak city soon.



AI that talks to us could quickly become problematic

Google’s making the news again, adding evidence to the unfortunate stereotype of the autistic IT nerd that barely understands normal people, and they have therefore been astonished at a backlash that normal people would all easily have predicted. (I’m autistic and work mostly in IT too, and am well used to the stereotype, so it doesn’t bother me; in fact it is a sort of ‘get out of social interactions free’ card.) Last time it was Google Glass, where it apparently didn’t occur to them that people may not want other people videoing them without consent in pubs and changing rooms. This time it is Google Duplex, which makes phone calls on your behalf to arrange appointments, using a voice that is almost indistinguishable from a normal human’s. You could save time making an appointment with a hairdresser, apparently, so the Googlanders decided it must be a brilliant breakthrough, and expected everyone to agree. They didn’t.

Some of the objections have been about ethics, e.g. that an AI should not present itself as human. Humans have rights and dignity and deserve respectful interactions with other people; an AI doesn’t, and should not masquerade as human to acquire such privilege without the knowledge and consent of the other party.

I would be more offended by the presumed attitude of the user. If someone thinks they are so much better than me that they can demand my time and attention without the expense of any of their own, delegating instead to a few microseconds of processing time in a server farm somewhere, I’ll treat them with the contempt they deserve. My response will not be favourable. I am already highly irritated by the NHS using simple voice interaction messaging to check I will attend a hospital appointment. The fact that my health is on the line, and notices at surgeries say I will be banned if I complain on social media, is sufficient blackmail to ensure my compliance, but it still comes at the expense of my respect and goodwill. AI-backed voice interaction with a better voice wouldn’t be any better, and if it were asking for more interaction, such as actually booking an appointment, it would be extremely annoying.

In any case, most people don’t speak in fully formed, grammatically and logically correct sentences. If you listen carefully to everyday chat, a lot of sentences are poorly pronounced, incomplete, jumbled, full of ums and ers and likes, and they require a great deal of cooperation from the listener to make any sense at all. They also wander off topic frequently. People don’t stick to a rigid vocabulary list or a set of nicely selected sentences. A real response is highly likely to include lots of preamble and verbal meandering that adds ambiguity. The example used in a demo, “I’d like to make a hairdressing appointment for a client”, sounds fine until you factor in normal everyday humanity. A busy hairdresser or a lazy receptionist is not necessarily going to cooperate fully. “What do you mean, client?”, “404 not found”, “piss off google”, “oh FFS, not another bloody computer”, “we don’t do hairdressing, we do haircuts”, “why can’t your ‘client’ call themselves then?” and a million other responses are more likely than “what time would you like?”

Suppose though that it eventually gets accepted by society. First, call centers beyond the jurisdiction of your nuisance call blocker authority will incessantly call you at all hours asking or telling you all sorts of things, wasting huge amounts of your time and reducing quality of life. Voice spam from humans in call centers is bad enough. If the owners can multiply productivity by 1000 by using AI instead of people, the result is predictable.

We’ve seen the conspicuous political use of social media AI already. Facebook might have allowed companies to use very limited and inaccurate knowledge of you to target ads or articles that you probably didn’t look at. Voice interaction would be different. It makes a richer emotional connection than text or graphics on a screen. Google knows a lot about you too, and it will know a lot more soon. These big IT companies are also playing with tech to log you on easily to sites without passwords. Some of the gadgets involved might be worn, such as watches or bracelets or rings. They can pick up signals to identify you, but they can also check emotional states such as stress level. Voice gives away emotion too. AI can already tell better than almost all people whether you are telling the truth or lying or hiding something. Tech such as iris scanning can also reveal emotional states, as well as give health clues. Simple photos can reveal your age quite accurately to AI. The AI voice sounds human, but it is better than even your best friends at guessing your age, your stress and other emotions, your health, and whether you are telling the truth or not, and it knows far more about what you like and dislike and what you really do online than anyone you know, including you. It knows a lot of your intimate secrets. It sounds human, but its nearest human equivalent was probably Machiavelli. That’s who will soon be on the other side of the call, not some dumb chatbot. Now re-calculate political interference, and factor in the political leanings and social engineering desires of the companies providing the tools. Google and Facebook and the others are very far from politically neutral. One presidential candidate might get full cooperation, assistance and convenient looking the other way, while their opponent might meet rejection and citation of the official rules on non-interference. Campaigns on social issues will also be amplified by AI coupled to voice interaction.
I looked at some related issues in a previous blog on fake AI (i.e. fake news type issues).

I could but won’t write a blog on how this tech could couple well to sexbots to help out incels. It may actually have some genuine uses in providing synthetic companionship for lonely people, or helping or encouraging them in real social interactions with real people. It will certainly have some uses in gaming and chatbot game interaction.

We are not very far from computers that are smarter than people across a very wide spectrum, and probably not very far from conscious machines with superhuman intelligence. If we can’t even rely on IT companies to understand the likely consequences of such obvious stuff as Duplex before they push it, how can we trust them in other upcoming areas of AI development, or even closer-term techs with less obvious consequences? We simply can’t!

There are certainly a few areas where such technology might help us, but most are minor and the rest don’t need any deception, and they all come at great cost or real social and political risk, as well as more abstract risks such as threats to human dignity and other ethical issues. I haven’t given this much thought yet and I am sure there must be many other consequences I have not touched on. Google should do more thinking before they release stuff. Technology is becoming very powerful, but we all know that great power comes with great responsibility, and since most people aren’t engineers and can’t think through all the potential technology interactions and consequences, engineers such as Google’s must act more responsibly. I had hoped they’d started, and they said they had, but this is not evidence of that.


Futurist memories: The leisure society and the black box economy

Things don’t always change as fast as we think. This is a piece I wrote in 1994 looking forward to a fully automated ‘black box economy’, a fly-by-wire society. Not much I’d change if I were writing it anew today. Here it is:

The black box economy is a strictly theoretical possibility, but may result where machines gradually take over more and more roles until the whole economy is run by machines, with everything automated. People could be gradually displaced by intelligent systems, robots and automated machinery. If this were to proceed to the ultimate conclusion, we could have a system with the same or even greater output as the original society, but with no people involved. The manufacturing process could thus become a ‘black box’. Such a system would be so machine controlled that humans would not easily be able to pick up the pieces if it crashed – they would simply not understand how it works, or could not control it. It would be a fly-by-wire society.

The human effort could be reduced to simple requests. When you want a new television, a robot might come and collect the old one, recycling the materials and bringing you a new one. Since no people need be involved and the whole automated system could be entirely self-maintaining and self-sufficient there need be no costs. This concept may be equally applicable in other sectors, such as services and information – ultimately producing more leisure time.

Although such a system is theoretically possible – energy is free in principle, and resources are ultimately a function of energy availability – it is unlikely to go quite this far. We may go some way along this road, but there will always be some jobs that we don’t want to automate, so some people may still work. Certainly, far fewer people would need to work in such a system, and other people could spend their time in more enjoyable pursuits, or in voluntary work. This could be the leisure economy we were promised long ago. Just because futurists predicted it long ago and it hasn’t happened yet does not mean it never will. Some people would consider it Utopian, others possibly a nightmare; it’s just a matter of taste.

AIs of a feather flocking together to create global instability

Hawking and Musk have created a lot of media impact with their warnings about AI. Terminator scenarios resulting from machine consciousness have been discussed, as have more mundane uses of non-conscious autonomous weapon systems, but I haven’t yet heard them mention one major category of AI risk: emergence. AI risks have been discussed frequently since the 1970s, and in the 1990s a lot of work was done in the AI community on emergence. Complex emergent patterns of behavior often result from interactions between entities driven by simple algorithms. Genetic algorithms were demonstrated to produce evolution, and simple neighbor-interaction rules were derived that produce the flocking behaviors behind lovely screen-saver effects. Cellular automata were played with. In BT we invented ways of self-organizing networks and FPGAs, played with mechanisms that could be used for evolution and consciousness, and demonstrated managing networks via ANTs – autonomous network telephers – smart packets that would run up and down wires sorting things out all by themselves. In 1987 we discovered a whole class of ways of bringing down networks via network resonance, information waves and the much larger class of correlated traffic – still unexploited by hackers apart from simple DoS attacks. These ideas have slowly evolved since, and some have made it into industry or hacker toolkits, but we don’t seem to be joining the dots as far as risks go.
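The point about complex behavior emerging from trivial rules is easy to demonstrate. Here is a toy sketch (my own illustrative code, nothing to do with the BT systems mentioned): an elementary cellular automaton in which each cell follows one simple local rule, yet the global pattern that emerges is chaotic.

```python
# Elementary cellular automaton, Rule 30: each cell's next state depends
# only on itself and its two neighbours, yet the global pattern is chaotic.

def step(cells, rule=30):
    """Apply one step of an elementary cellular automaton (wraparound edges)."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1  # single seed cell in the middle
for _ in range(12):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

No individual cell ‘knows’ anything beyond its immediate neighbours; the complexity lives entirely in the interactions, which is exactly the property that makes emergent risks hard to predict from the component algorithms.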

I read an amusing article this morning by an ex-motoring-editor who was declined insurance because the AI systems used by insurance companies had labelled him as high risk, possibly because he may be associated with people like Clarkson. Actually, he had no idea why, but that was his broker’s theory of how it might have happened. It’s a good article, well written, and covers quite a few of the dangers of allowing computers to take control.

The article suggested how AIs in different companies might all come to similar conclusions about people or places or trends or patterns, in a nice tidy positive feedback loop. That’s exactly the sort of thing that can drive information waves, which I demonstrated in 1987 can bring down an entire network in less than 3 milliseconds, in such a way that it would continue to crash many times when restarted. That isn’t intended by the algorithms, which individually ought to make good decisions, but which, when interacting with one another, create the emergent phenomenon. Automated dealing systems are already pretty well understood in this regard, and mechanisms exist to prevent frequent stock market collapses, but that is only one specific type of behavior in one industry that is protected. There do not seem to be any industry-wide mechanisms to prevent the rest of this infinite class of problems from affecting any or all of the rest, simultaneously.

As we create ever more deep learning neural networks, that essentially teach themselves from huge data pools, human understanding of their ‘mindsets’ decreases. They make decisions using algorithms that are understood at a code level, but the massive matrix of derived knowledge they create from all the data they receive becomes highly opaque. Often, even usually, nobody quite knows how a decision is made. That’s bad enough in a standalone system, but when many such systems are connected, produced and owned and run by diverse companies with diverse thinking, the scope for destructive forms of emergence increases geometrically.

One result could be gridlock. Systems fed with a single new piece of data could crash. My 3 millisecond result from 1987 would still stand, since network latency is the prime limiter. The first AI receives the data, alters its mindset accordingly, processes it, makes a decision and interacts with a second AI. This second one might have a different ‘prejudice’, so makes its own decision based on different criteria and refuses to respond the way intended. A third one looks at the second’s decision, takes that as evidence that there might be an issue, and with its risk-averse mindset also refuses to act, and that inaction spreads through the entire network in milliseconds. Since the first AI thinks the data is all fine and the transaction should have gone ahead, it now interprets the inaction of the others as evidence that that type of data is somehow ‘wrong’, so it refuses to process any further data of that type, whether from its own operators or from other parts of the system. It essentially adds its own outputs to the bad feeling, and the entire system falls into sulk mode. As one part of the infrastructure starts to shut down, it infects the other connected parts, and our entire IT – the entire global infrastructure – could fall into sulk mode. Since nobody knows how it all works, or what caused the shutdown, it might be extremely hard to recover.
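A hedged toy model of this kind of cascade (entirely my own construction, with a made-up topology – not the 1987 result itself) shows how a single refusal can propagate to every agent in the system:

```python
# Each risk-averse agent refuses to act as soon as any agent it watches
# has refused, so one refusal propagates until the whole network is in
# 'sulk mode'. Topology and behaviour here are purely illustrative.

def cascade(n_agents, first_refuser, watches):
    """Spread refusal; watches[i] lists the agents that agent i observes."""
    refusing = {first_refuser}
    spreading = True
    while spreading:
        spreading = False
        for i in range(n_agents):
            if i not in refusing and refusing & set(watches[i]):
                refusing.add(i)
                spreading = True
    return refusing

# A simple line topology: each agent watches its predecessor.
watches = {i: [i - 1] for i in range(1, 10)}
watches[0] = []
print(len(cascade(10, 0, watches)))  # all 10 agents end up refusing
```

In a real deployment each ‘agent’ would be an opaque deep learning system, so nobody could point to the prejudice that started the cascade, which is exactly what makes recovery hard.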

Another possible result is a direct information wave, almost certainly triggered by a piece of fake news. Imagine our IT world in 5 years’ time, with all these super-smart AIs super-connected. A piece of fake news says a nuke has just been launched somewhere. Stocks will obviously decline, whatever the circumstances, so as the news spreads, everyone’s AIs will take it on themselves to start selling shares before the inevitable collapse – which would itself trigger a collapse, except that market safeguards won’t let that happen. BUT… the wave still spreads, and all those individual AIs want to dispose of those shares, or at least find out what’s happening, so they all start sending messages to one another, exchanging data, trying to find out what’s going on. That’s the information wave. They can’t sell shares or find out, because the network is going into overload, so they try even harder and force it into severe overload. So it falls over. When it comes back online, they all try again, crashing it again, and so on.
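The crash-restart-crash cycle at the end of that scenario is the classic retry storm. A minimal sketch (with entirely made-up numbers for clients and capacity) shows why synchronized retries keep killing the network, and why desynchronizing them – backing off – breaks the cycle:

```python
# Toy retry-storm model: if every waiting client retries at the same
# moment, combined load exceeds capacity every tick and the network
# crashes repeatedly. Random backoff spreads the retries out.

import random

def simulate(clients=1000, capacity=300, ticks=10, backoff=False, seed=1):
    random.seed(seed)
    waiting = clients
    crashes = 0
    for _ in range(ticks):
        if backoff:
            # each waiting client retries this tick with only 20% probability
            attempts = sum(random.random() < 0.2 for _ in range(waiting))
        else:
            attempts = waiting  # everyone retries simultaneously
        if attempts > capacity:
            crashes += 1        # overload: nobody gets through this tick
        else:
            waiting -= attempts  # these clients succeed and stop retrying
    return crashes

print(simulate(backoff=False))  # crashes every tick
print(simulate(backoff=True))   # load stays under capacity
```

Stock markets solved their version of this with circuit breakers; as the article notes, there is no equivalent industry-wide mechanism for interacting AIs.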

Another potential result is smartass AI. There is always some prat somewhere who sees an opportunity to take advantage and ruins it for everyone else by doing something like exploiting a small loophole in the law – or in this case, most likely, a prejudice our smartass AI has discovered in some other AI that means it can be taken advantage of by doing x, y or z. Since nobody quite knows how any of their AIs are making their decisions, because their mindsets are too big and too complex, it will be very hard to identify what is going on. Some really unusual behavior is corrupting the system because some AI is going rogue somewhere, somehow – but which one, where, how?

That one brings us back to fake news, which will very soon infect AI systems with their own varieties of it. Complex networks of AIs will have many of the same problems we are seeing in human social networks. An AI could become a troll just the same as a human, deliberately winding others up to generate attention or drive a change of some parameter – any parameter – in its own favour. Activist AIs will happen because people will make them to push human activist causes, but they will also arise all by themselves. Their analysis of the system will sometimes show them that a good way to get a good result is to cause problems elsewhere.

Then there’s climate change, weather, storms, tsunamis. I don’t mean real ones; I mean the system-wide result of tiny interactions of tiny waves and currents of data and knowledge in neural nets. Tiny effects in one small part of a system can interact in unforeseen ways with other parts of other systems nearby, creating maybe a breeze, which interacts with breezes in nearby regions to create hurricanes. I think that’s a reasonable analogy. Chaos applies to neural net societies just as it does to climate, and the equivalents of 50-year waves will cause equivalent havoc in IT.

I won’t go on with more examples; long blogs are awful to read. None of these requires any self-awareness, sentience or consciousness – call it what you will. All of these can easily happen through simple interactions of fairly trivial AI deep learning nets. The level of interconnection we have already sounds like it may be becoming vulnerable to such emergence effects. Soon it definitely will be. Musk and Hawking have at least joined the party, and they’ll think more and more deeply in coming months. Zuckerberg apparently doesn’t believe in AI threats but now accepts the problems social media is causing. Sorry Zuck, but the kind of AI your company is messing with will also be subject to its own kinds of social media issues, not just in its trivial decisions on what to post or block, but in actual inter-AI socializing issues. It might not try to eliminate humanity, but if it brings all of our IT to a halt and prevents rapid recovery, we’re still screwed.


Artificial muscles using folded graphene


Folded Graphene Concept

Two years ago I wrote a blog on future hosiery where I very briefly mentioned the idea of using folded graphene as synthetic muscles:

Although I’ve since mentioned it to dozens of journalists, none have picked up on it, so now that soft robotics and artificial muscles are in the news, I guess it’s about time I wrote it up myself, before someone else claims the idea. I don’t want to see an MIT article about how they have just invented it.

The above pic gives the general idea. Graphene comes in insulating or conductive forms, so it will be possible to make sheets covered with tiny conducting graphene electromagnet coils that can be switched individually to either polarity and generate strong magnetic forces that pull or push as required. That makes it ideal for a synthetic muscle, given the potential scale. With 1.5nm-thick layers that could be anything from sub-micron up to metres wide, this will allow thin fibres and yarns to make muscles or shape change fabrics all the way up to springs or cherry-picker style platforms, using many such structures. Current can be switched on and off or reversed very rapidly, to make continuous forces or vibrations, with frequency response depending on application – engineering can use whatever scales are needed. Natural muscles are limited to 250Hz, but graphene synthetic muscles should be able to go to MHz.

Uses vary from high-rise rescue, through construction and maintenance, to space launch. Since the forces are entirely electromagnetic, they could be switched very rapidly to respond to any buckling, offering high stabilisation.


The extreme difference in dimensions between the folded and opened states means that an extremely thin force mat made up of many of these cherry-picker structures could be made to fill almost any space and apply force to it. One application that springs to mind is rescue, such as after earthquakes have caused buildings to collapse. A sheet could quickly apply pressure to prise apart pieces of rubble regardless of size and orientation. It could alternatively be used in systems for rescuing people from tall buildings, for fracking, or for many other applications.


It would be possible to make large membranes for a wide variety of purposes that can change shape and thickness at any point, very rapidly.


One such use is a ‘jellyfish’, complete with stinging cells that could travel around in even very thin atmospheres all by itself. Upper surfaces could harvest solar power to power compression waves that create thrust. This offers use for space exploration on other planets, but also has uses on Earth of course, from surveillance and power generation, through missile defense systems or self-positioning parachutes that may be used for my other invention, the Pythagoras Sling. That allows a totally rocket-free space launch capability with rapid re-use.


Much thinner membranes are also possible, as shown here, especially suited for rapid deployment missile defense systems:


Also particularly suited to space exploration on other planets or moons is the worm, often cited for such purposes. This could easily be constructed using folded graphene and, again for rescue or military use, could come with assorted tools or lethal weapons built in.


A larger scale cherry-picker style build could make ejector seats, elevation platforms or winches, either pushing or pulling a payload – each has its merits for particular types of application.  Expansion or contraction could be extremely rapid.


An extreme form for space launch is the zip-winch, below. With many layers just 1.5nm thick, expanding to 20cm for each such layer, a 1000km winch cable could accelerate a payload rapidly as it compresses to just 7.5mm thick!
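The compression arithmetic is easy to verify using only the figures quoted above (1.5nm layer thickness, 20cm expansion per layer, 1000km cable):

```python
# Zip-winch compression check, using only the figures from the text.

layer_thickness = 1.5e-9     # 1.5 nm per folded layer, in metres
expanded_per_layer = 0.20    # each layer expands to 20 cm, in metres
cable_length = 1_000_000     # 1000 km of deployed cable, in metres

layers = cable_length / expanded_per_layer   # number of layers needed
compressed = layers * layer_thickness        # total folded thickness

print(f"{layers:,.0f} layers, folded thickness {compressed * 1000} mm")
# 5,000,000 layers fold down to 7.5 mm, matching the claim above
```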


Very many more configurations and uses are feasible of course; this blog just gives a few ideas. I’ll finish with a highlight I didn’t have time to draw up yet: small particles could be made housing a short length of folded graphene. Since individual magnets can be addressed and controlled, that enables magnetic powders with particles that can change both their shape and the magnetism of individual coils. Precision magnetic fields are one application, shape-changing magnets another. The most exciting, though, is that this allows a whole new engineering field, mixing hydraulics with precision magnetics and shape changing. The powder can even create its own chambers, pistons, pumps and so on. Electromagnetic thrusters for ships are already out there, and those same thrust mechanisms could be used to manipulate powder particles too, but this allows for completely dry hydraulics, with particles that can individually behave actively or passively.




The age of dignity

I just watched a short video of robots doing fetch and carry jobs in an Alibaba distribution centre:

There are numerous videos of robots in various companies doing tasks that used to be done by people. In most cases those tasks were dull, menial, drudgery tasks that treated people as machines. Machines should rightly do those tasks. In partnership with robots, AI is also replacing some tasks that used to be done by people. Many are worried about increasing redundancy but I’m not; I see a better world. People should instead be up-skilled by proper uses of AI and robotics and enabled to do work that is more rewarding and treats them with dignity. People should do work that uses their human skills in ways that they find rewarding and fulfilling. People should not have to do work they find boring or demeaning just because they have to earn money. They should be able to smile at work and rest at the end of the day knowing that they have helped others or made the world a better place. If we use AI, robots and people in the right ways, we can build that world.

Take a worker in a call centre. Automation has already replaced humans in most simple transactions like paying a bill, checking a balance or registering a new credit card. It is hard to imagine that anyone ever enjoyed doing that as their job. Now, call centre workers mostly help people in ways that allow them to use their personalities and interpersonal skills, being helpful and pleasant instead of just typing data into a keyboard. It is more enjoyable and fulfilling for the caller, and presumably for the worker too, knowing they genuinely helped someone’s day go a little better. I just renewed my car insurance. I phoned up to cancel the existing policy because it had increased in price too much. The guy at the other end of the call was very pleasant and helpful and met me halfway on the price difference, so I ended up staying for another year. His company is a little richer, I was a happier customer, and he had a pleasant interaction instead of an irate customer, plus the job satisfaction of having converted a customer intending to leave into one happy to stay. The AI at his end presumably gave him the information he needed and the limits of discount he was permitted to offer. Success. In billions of routine transactions like that, the world becomes a little happier and, just as important, a little more dignified. There is more dignity in helping someone than in pushing a button.

Almost always, when AI enters a situation, it replaces individual tasks that used to take precious time and that were not very interesting to do. Every time you google something, a few microseconds of AI saves you half a day in a library and all those half days add up to a lot of extra time every year for meeting colleagues, human interactions, learning new skills and knowledge or even relaxing. You become more human and less of a machine. Your self-actualisation almost certainly increases in one way or another and you become a slightly better person.

There will soon be many factories and distribution centres that have few or no people at all, and that’s fine. It reduces the costs of making material goods so average standard of living can increase. A black box economy that has automated mines or recycling plants extracting raw materials and uses automated power plants to convert them into high quality but cheap goods adds to the total work available to add value; in other words it increases the size of the economy. Robots can make other robots and together with AI, they could make all we need, do all the fetching and carrying, tidying up, keeping it all working, acting as willing servants in every role we want them in. With greater economic wealth and properly organised taxation, which will require substantial change from today, people could be freed to do whatever fulfills them. Automation increases average standard of living while liberating people to do human interaction jobs, crafts, sports, entertainment, leading, inspiring, teaching, persuading, caring and so on, creating a care economy. 

Each person knows what they are good at, what they enjoy. With AI and robot assistance, they can more easily make that their everyday activity. AI could do their company set-up, admin, billing, payments, tax, payroll – all the crap that makes being an entrepreneur a pain in the ass and stops many people pursuing their dreams.  Meanwhile they would do that above a very generous welfare net. Many of us now are talking about the concept of universal basic income, or citizen wage. With ongoing economic growth at the average rate of the last few decades, the global economy will be between twice and three times as big as today in the 2050s. Western countries could pay every single citizen a basic wage equivalent to today’s average wage, and if they work or run a company, they can earn more.
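The doubling claim is simple compound growth. As a rough sanity check (the 2–3% annual rates here are my own assumption, broadly in line with recent decades of global growth):

```python
# Compound growth check: how much bigger is the economy after ~35 years
# (roughly now to the 2050s) at assumed average annual growth rates?

def growth_factor(rate, years):
    """Multiple by which an economy grows at a constant annual rate."""
    return (1 + rate) ** years

print(round(growth_factor(0.02, 35), 2))  # ~2.0x at 2% a year
print(round(growth_factor(0.03, 35), 2))  # ~2.8x at 3% a year
```

So a two-to-three-fold larger economy by the 2050s follows directly from historically ordinary growth rates, which is what makes the generous-basic-income arithmetic plausible then where it isn’t today.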

We will have an age where material goods are high quality, work well and are cheap to buy, and recycled in due course to minimise environmental harm. Better materials, improved designs and techniques, higher efficiency and land productivity and better recycling will mean that people can live with higher standards of living in a healthier environment. With a generous universal basic income, they will not have to worry about paying their bills. And doing only work that they want to do that meets their self-actualisation needs, everyone can live a life of happiness and dignity.

Enough of the AI-redundancy alarmism. If we elect good leaders who understand the options ahead, we can build a better world, for everyone. We can make real the age of dignity.

Tips for surviving the future

Challenging times lie ahead, but stress can be lessened by being prepared. Here are my top tips, with some explanation so you can decide whether to accept them.

1 Adaptability is more important than specialization

In a stable environment, being the most specialized means you win most of the time in your specialist field because all your skill is concentrated there.

However, in a fast-changing environment, which is what you’ll experience for the rest of your life, if you are too specialized, you are very likely to find you are best in a field that no longer exists, or one greatly diminished in size. If you make sure you are more adaptable, you’ll find it easier to move to a new area, so your career won’t be damaged when you are forced to change field slightly. Adaptability comes at a price – you will find it harder to be best in your field and will have to settle for 2nd or 3rd much of the time, but you’ll still be lucratively employed when No 1 has been made redundant.

2 Interpersonal, human, emotional skills are more important than knowledge

You’ve heard lots about artificial intelligence (AI) and how it is starting to do to professional knowledge jobs what the steam engine once did to heavy manual work. Some of what you hear is overstated. Google search is a simple form of AI. It has helped everyone do more with their day. It effectively replaced a half day searching for information in a library with a few seconds of typing, but nobody has counted how many people it made redundant, because it hasn’t made anyone redundant. It up-skilled everyone, made them more effective, more valuable to their employer. The next generation of AI may do much the same with most employees, up-skilling them to do a better job than they were previously capable of, giving them better job satisfaction and their employer a better return. Smart employers will keep most of their staff, only getting rid of those entirely replaceable by technology. But some will take the opportunity to reduce costs and increase margins, and many new companies simply won’t employ as many people in similar jobs, so some redundancy is inevitable. The first skills to go are simple administration and simple physical tasks, then more complex admin or physical work, then simple managerial or professional tasks, then higher managerial and professional tasks. The skills that will be automated last are those that rely on first-hand experience of understanding and dealing with other people. AI can learn some of that and will eventually become good at it, but that will take a long time. Even then, many people will prefer to deal with another person than with a machine, however smart and pleasant it is.

So interpersonal skills, human skills, emotional skills, caring skills, leadership and motivational skills, empathetic skills, human judgement skills, teaching and training skills will be harder to replace. They also tend to be ones that can easily transfer between companies and even sectors. These will therefore be the ones that are most robust against technology impact. If you have these in good shape, you’ll do just fine. Your company may not need you any more one day, but another will.

I called this the Care Economy when I first started writing and lecturing about it 20-odd years ago. I predicted it would start having an effect in the mid-teen years of this century, and I think I got that pretty much right. There is another side that is related but not the same:

3 People will still value human skill and talent just because it’s human

If you buy a box of glasses from your local supermarket, they probably cost very little and are all identical. If you buy some hand-made crystal, it costs a lot more, even though every glass is slightly different. You could call that shoddy workmanship compared to a machine. But you know that the person who made it trained for many years to get a skill level you’d never manage, so you actually value them far more, and are happy to pay accordingly. If you want to go fast, you could get in your car, but you still admire top athletes because they can do their sport far better than you. They started by having great genes for sure, but then also worked extremely hard and suffered great sacrifice over many years to get to that level. In the future, when robots can do any physical task more accurately and faster than people, you will still value crafts and still enjoy watching humans compete. You’ll prefer real human comedians and dancers and singers and musicians and artists. Talent and skill aren’t valued because of the specification of the end result; they are valued because they are measured on the human scale, and you identify closely with that. It isn’t even about machines. Gorillas are stronger, cheetahs are faster, eagles have better eyesight and cats have faster reflexes than you. But they aren’t human, so you don’t care. You will always measure yourself and others by human scales and appreciate them accordingly.

4 Find hobbies that you love and devote time to developing them

As this care economy and human skills dominance grows in importance, people will also find that AI and robotics help them in their own hobbies, arts and crafts, filling in skill gaps and improving proficiency. A lot of people will find their hobbies can become semi-professional. At the same time, we’ll be seeing self-driving cars and drones making local delivery far easier and cheaper, and AI will soon make business and tax admin easy too. That all means that barriers to setting up a small business will fall through the floor, while the market for personalized, original products made by people will increase, especially those made by local people. You’ll be able to make arts and crafts, jam or cakes, grow vegetables, make clothes or special bags or whatever, and easily sell them. Also at the same time, automation will be making everyday things cheaper while expanding the economy, so the welfare floor will be raised, and you could probably manage just fine with a small extra income. Government is also likely to bring in some sort of citizen wage, or to encourage such extra entrepreneurial activity without taxing it away, because it also has a need to deal with the social consequences of automation. So it will all probably come together quite well. If the future means you can make extra money or even a full income by doing a hobby you love, there isn’t much to dislike there.

5 You need to escape from your social media bubble

If you watch the goings-on anywhere in the West today, you must notice that the Left and the Right don’t seem to get along any more. Each has become very intolerant of the other, treating them more like enemy aliens than ordinary neighbors. A lot of that is caused by people only being exposed to views they agree with. We call these social media bubbles, and they are extremely dangerous. The recent USA trouble is starting to look like some folks want a re-run of the Civil War. I’ve blogged lots about this topic and won’t do it again now, except to say that you need to expose yourself to a wide subsection of society. You need to read papers and magazines and blogs, and watch TV or videos from all sides of the political spectrum, not just those you agree with, not just those that pat you on the back every day and tell you that you’re right and it is all the other lot’s fault. If you don’t – if you only expose yourself to one side because you find the other side distasteful – then I can’t say this loudly enough: you are part of the problem. Get out of your safe space and your social media tribe, and expose yourself to the whole of society, not just one tribe. See that there are lots of different views out there, but that doesn’t mean the rest are all nasty. Almost everyone is actually quite nice, and almost everyone wants a fairer world, an end to exploitation, peace, tolerance and the eradication of disease and poverty. The differences are almost all in the world model they use to figure out the best way to achieve it. Lefties tend to opt for idealistic theoretical models and value the intention behind them; right-wingers tend to be pragmatic and go for what they think works in reality, valuing the outcome. It is actually possible to have best friends who you disagree with. I don’t often agree with any of mine.
If you feel too comfortable in your bubble to leave, remember this: your market is only half the population at best; you’re excluding the other half, or even annoying them so that they become enemies rather than neutrals. If you stay in a bubble, you are damaging your own future and helping to endanger the whole of society.

6 Don’t worry

There are lots of doom-mongers out there, and I’d be the first to admit that there are many dangers ahead. But if you do the things above, there probably isn’t much more you can do. You can moan and demonstrate and get angry or cry in the corner, but how would that benefit you? Usually, when you analyse things long enough from all angles, you realize that the outcome of many of the big political battles is pretty much independent of who wins. Politicians usually have far less choice than they want you to believe, and the big forces win regardless of who is in charge. So there isn’t much point in worrying about it; it will probably all come out fine in the end. Don’t believe me? Take the biggest UK issue right now: Brexit. We are leaving. Does it matter? No. Why? Well, the EU was always going to break up anyway. Stresses and strains have been increasing for years and are accelerating. For all sorts of reasons, and regardless of any current bluster by ‘leaders’, the EU will head away from the vision of a United States of Europe. As tensions and conflicts escalate, borders will be restored. Nations will disagree with the EU ideal. One by one, several countries will copy the UK and have referendums, and then leave. At some point, the EU will be much smaller, and there will be lots of countries outside with their own big markets. They will form trade agreements, the original EU idea, the Common Market, will gradually be re-formed, and the UK will be part of it – even Brexiters want tariff-free trade agreements. If the UK had stayed, the return to the Common Market would eventually have happened anyway; leaving has only accelerated it. All the fighting today between Brexiteers and Remainers achieves nothing. It didn’t matter which way we voted; it only really affected the timescale. The same applies to many other issues that cause big trouble in the short term. Be adaptable, don’t worry, and you’ll be just fine.

7 Make up your own mind

As society and politics have become highly polarised, any form of absolute truth is becoming harder to find. Much of what you read has been spun to the left or right. You need to get information from several sources and learn to filter the bias, and then make up your own mind on what the truth is. Free thinking is increasingly rare but learning and practicing it means you’ll be able to make correct conclusions about the future while others are led astray. Don’t take anyone else’s word for things. Don’t be anyone’s useful idiot. Think for yourself.

8 Look out for your friends, family and community

I’d overlooked an important tip in my original posting. As Jases commented sensibly, friends, family and community are the security that doesn’t disappear in troubled economic times. Independence is overrated. I can’t add much to that.

Google and the dangerous pursuit of ‘equality’

The world just got more dangerous, and I’m not talking about N Korea and Trump.

Google just sacked an employee because he openly suggested that men and women, (not all, but some, and there is an overlap, and …) might tend to have different preferences in some areas and that could (but not always, and only in certain cases, and we must always recognize and respect everyone and …) possibly account for some of the difference in numbers of men and women in certain roles (but there might be other causes too and obviously lots of discrimination and …. )

Yes, that’s what he actually said, but with rather more ifs and buts and maybes. He felt the need to wrap such an obvious statement in cotton wool several kilometers thick so as not to offend the deliberately offended, but deliberate offense was nonetheless taken and he is out on his ear.

Now, before you start thinking this is some right-wing rant, I feel obliged to point out just how progressive Futurizon is: 50% of all Futurizon owners and employees are female, all employees and owners have the same voting rights, 50% are immigrants and all are paid exactly the same and have the same size offices, regardless of dedication, ability, nature or quality or volume of output and regardless of their race, religion, beauty, shape, fitness, dietary preferences, baldness, hobbies or political views, even if they are Conservatives. All Futurizon offices are safe zones where employees may say anything they want of any level of truth, brilliance or stupidity and expect it to be taken as absolute fact and any consequential emotional needs to be fully met. No employee may criticize any other employee’s mouse mat, desk personalisation or screen wallpaper for obvious lack of taste. All employees are totally free to do anything they choose 100% of the time and can take as much leave as they want. All work is voluntary. All have the same right to respectfully request any other employee to make them coffee, tea or Pimms. All employees of all genders real or imagined are entitled to the same maternity and paternity rights, and the same sickness benefits, whether ill or not. In fact, Futurizon does not discriminate on any grounds whatsoever. We are proud to lead the world in non-discrimination. Unfortunately, our world-leading terms of employment mean that we can no longer afford to hire any new employees.

However, I note that Google has rather more power and influence than Futurizon so their policies count more. They appear (Google also has better lawyers than I can afford, so I must stress that all that follows is my personal opinion) to have firmly decided that diversity is all-important and they seem to want total equality of outcome. The view being expressed not just by Google but by huge swathes of angry protesters seems to be that any difference in workforce representation from that of the general population must arise from discrimination or oppression so must be addressed by positive action to correct it. There are apparently no statistically discernible differences in behavior between genders, or in job or role preference, so any that you may have noticed over the course of your life are just your prejudice. Google says they fully support free speech and diversity of views, but expression of views is apparently only permitted as long as those views are authorized, on penalty of dismissal.

So unless I’m picking up totally the wrong end of the stick here, and I don’t do that often, only 13% of IT engineers are women, but internal policies must ensure that the proportion rises to 50%, whether women want to do that kind of work or not. In fact, nobody may question whether as many women want to work as IT engineers as men; it must now be taken as fact. By extension, since more women currently work in marketing, HR and PR, women must be replaced by men via positive action programs until men fill 50% of those roles. Presumably similar policies must also apply in medical bays for nursing and other staff there, and in construction teams for their nice new buildings. Ditto all other genders, races and religions; all groups must be protected and equalized to US population proportions, apparently except those that don’t claim to hold sufficiently left-wing views, in which case it is seemingly perfectly acceptable to oppress, ostracize and even expel them.

In other words, freedom of choice and difference in ability, and more importantly freedom from discrimination, must be overruled in favor of absolute equality of diversity, regardless of financial or social cost, or impact on product or service quality. Not expressing full and enthusiastic left-wing compliance is seemingly just cause for dismissal.

So, why does this matter outside Google? Well, AI is developing very nicely. In fact, Google is one of the star players in the field right now. It is Google that will essentially decide how much of the AI around us is trained, how it learns, what it learns, what ‘knowledge’ it has of the world. Google will pick the content the AI learns from, and overrule or reeducate it if it draws any ‘wrong’ conclusions about the world, such as that more women than men want to be nurses or work in HR, or that more men than women want to be builders or engineers. A Google AI must presumably believe that the only differences between men and women are physical, unless their AI is deliberately excluded from the loudly declared corporate values and belief sets.

You should be very worried. Google’s values really matter. They have lots of influence on some of the basic tools of everyday life. Even outside their company, their AI tools and approaches will have strong influence on how other AI develops, determining operating systems and platforms, languages, mechanisms, interfaces, filters, even prejudices and that reach and influence is likely to increase. Their AI may well be in many self-driving cars, and if they have to make life or death decisions, the underlying value assumptions must feature in the algorithms. Soon companies will need AI that is more emotionally compliant. AI will use compliments or teasing or seduction or sarcasm or wit as marketing tools as well as just search engine positioning. Soon AI will use highly expressive faces with attractive voices, with attractive messages, tailored to appeal to you by pandering to your tastes and prejudices while thinking something altogether different. AI might be the person at the party that is all smiles and compliments, before going off to tell everyone else how awful it thinks you are. If you dare to say something not ‘authorized’, the ultra-smart AI all around you might treat you condescendingly, making you feel ashamed, ostracized, a dinosaur. Then it might secretly push you down a few pages in search results, or put a negative spin on text summaries about you, or exclude you from recommendations. Or it might do all the secret stuff while pretending it thinks you’re fantastic. Internal cultural policies in companies like Google today could soon be external social engineering to push the left-wing world the IT industry believes in – it isn’t just Google; Facebook and Twitter are also important and just as Left, though Amazon, Samsung, IBM and other AI players are less overtly politically biased, so far at least. 
Left-wing policies generally cost a lot more, but Google and Facebook will presumably still expect other companies and people to pay the taxes to fund them. As their female staff gear up to fight them over pay differences between men and women doing similar jobs, it often seems that Google’s holier-than-thou morality doesn’t quite extend to its finances.

Then it really starts being fun. We’ll soon have bacteria that can fabricate electronic circuits within themselves. Soon they’ll be able to power them too, giving us the concept of smart yogurt. These bacteria could also have nanotechnology flagella to help them get around. We’ll soon have bacterial spies all over our environment, even on our skin, intercepting electronic signals that give away our thoughts. They’ll bring in data on everything that is said, everything that everyone even thinks or feels. Those bacteria will be directly connected into AI, in fact they’ll be part of it. They’ll be able to change things, to favor or punish according to whether they like what someone believes in or how they behave.

It isn’t just right-wing extremists that need to worry. I’m apparently Nouveau Left – I score slightly left of center on political profiling tests – but I’m worried. A lot of this PC stuff seems extreme to me, sometimes just nonsense. Maybe it is, or maybe I should be lefter. But it’s not my choice. I don’t make the rules. Companies like Google make the rules; they even run the AI ethics groups. They decide much of what people see online, and even the meaning of the words. It’s very 1984-ish.

The trouble with the ‘echo chambers’ we heard about is that they soon normalize views to the loudest voices in those groups, and those don’t tend to be the moderates. We can expect views to drift further toward the extremes, not away from them. You probably aren’t left enough either. You should also be worried.

AI and activism, a Terminator-sized threat targeting you soon

You should be familiar with the Terminator scenario. If you aren’t, watch one of the Terminator films, because you really should be aware of it. But there is another issue related to AI that is arguably as dangerous as the Terminator scenario, far more likely to occur, and a threat in the near term. What’s even more dangerous is that, in spite of that, I’ve never read anything about it anywhere. It seems to have flown under our collective radar and is already close.

In short, my concern is that AI is likely to become a heavily armed Big Brother. It only requires a few components to come together that are already well in progress. Read this, and if you aren’t scared yet, read it again until you understand it 🙂

Already, social media companies are experimenting with using AI to identify and delete ‘hate’ speech. Various governments have asked them to do this, and since they also get frequent criticism in the media because some hate speech still exists on their platforms, it seems quite reasonable for them to try to control it. AI clearly offers potential to offset the huge numbers of humans otherwise needed to do the task.
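At its crudest, this sort of automated moderation is just pattern matching over post text. Here is a minimal sketch, using a hypothetical blocklist of flagged terms (the machine-learned classifiers platforms actually deploy are vastly more sophisticated, but the principle of a machine deciding what counts as ‘hate’ is the same):

```python
# Toy keyword-based moderation filter. The blocklist terms and the
# example posts are invented for illustration only.

BLOCKLIST = {"scum", "vermin", "subhuman"}  # hypothetical flagged terms

def flag_post(text: str) -> bool:
    """Return True if the post contains any blocklisted term."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not BLOCKLIST.isdisjoint(words)

posts = [
    "Lovely weather today",
    "Those people are vermin!",
]
flagged = [p for p in posts if flag_post(p)]  # only the second post is flagged
```

Even this toy version shows where the power lies: the blocklist, and therefore the working definition of ‘hate’, is chosen by whoever writes it.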

Meanwhile, AI is already used very extensively by the same companies to build personal profiles on each of us, mainly for advertising purposes. These profiles are already alarmingly comprehensive, and increasingly capable of cross-linking between our activities across multiple platforms and devices. Latest efforts by Google attempt to link eventual purchases to clicks on ads. It will be just as easy to use similar AI to link our physical movements and activities and future social connections and communications to all such previous real-world or networked activity. (Update: Intel intends its self-driving car technology to be part of a mass surveillance net, again, for all the right reasons.)
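The cross-linking itself is conceptually simple: records from different services are joined on any shared identifier, such as an email address. A minimal sketch, with the datasets, field names and values all invented for illustration:

```python
# Sketch of cross-platform profile linking: records from separate
# services are merged into one profile per person via a shared
# identifier. All data here is fabricated for illustration.

from collections import defaultdict

ad_clicks = [{"email": "a@x.com", "ad": "shoes"}]
store_purchases = [{"email": "a@x.com", "item": "shoes"},
                   {"email": "b@x.com", "item": "hat"}]

def build_profiles(*datasets):
    """Merge records sharing an email address into a single profile."""
    profiles = defaultdict(dict)
    for dataset in datasets:
        for record in dataset:
            profiles[record["email"]].update(record)
    return dict(profiles)

profiles = build_profiles(ad_clicks, store_purchases)
# profiles["a@x.com"] now links the ad click to the later purchase
```

Real systems do this probabilistically across device fingerprints, locations and behavior patterns rather than a single clean key, which is what makes the profiles so hard to escape.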

Although necessarily secretive about its activities, government also wants personal profiles on its citizens, always justified by crime and terrorism control. If it can’t get them directly, it can do so via legislation and acquisition of social media or ISP data.

Meanwhile, other experiences with AI chat-bots learning to mimic human behaviors have shown how easily AI can be gamed by human activists, hijacking or biasing learning phases for their own agendas. Chat-bots themselves have become ubiquitous on social media and are often difficult to distinguish from humans. Meanwhile, social media is becoming more and more important throughout everyday life, with demonstrably large impacts in political campaigning and throughout all sorts of activism.

Meanwhile, some companies have already started using social media monitoring to police their own staff, in recruitment, during employment, and sometimes in dismissal or other disciplinary action. Other companies have similarly started monitoring social media activity of people making comments about them or their staff. Some claim to do so only to protect their own staff from online abuse, but there are blurred boundaries between abuse, fair criticism, political difference or simple everyday opinion or banter.

Meanwhile, activists increasingly use social media to force companies to sack a member of staff they disapprove of, or drop a client or supplier.

Meanwhile, end-to-end encryption technology is ubiquitous. Malware creation tools are easily available.

Meanwhile, successful hacks into large company databases become more and more common.

Linking these various elements of progress together, how long will it be before activists are able to develop standalone AI entities and heavily encrypt them before letting them loose on the net? Not long at all, I think. These AIs would search and police social media, spotting people who conflict with the activist agenda. Occasional hacks of corporate databases will provide names, personal details and contacts. Even without hacks, analysis of years of publicly available tweets and other social media posts will provide lists of everyone who has ever done or said anything the activists disapprove of.
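Building such a target list from archived posts needs nothing more exotic than a retrospective keyword scan over the archive. A toy sketch, with the watch terms, authors and posts all invented for illustration:

```python
# Sketch of retrospective targeting: scan an archive of historical
# posts and list every author who ever matched the watch terms,
# however long ago. All names and terms are fabricated.

WATCH_TERMS = {"heresy", "wrongthink"}  # hypothetical disapproved terms

archive = [
    {"author": "alice", "year": 2009, "text": "I believe in wrongthink"},
    {"author": "bob",   "year": 2015, "text": "Nice cat picture"},
]

def build_target_list(posts):
    """Return the set of authors with at least one matching post."""
    targets = set()
    for post in posts:
        words = {w.lower().strip(".,!?") for w in post["text"].split()}
        if words & WATCH_TERMS:
            targets.add(post["author"])
    return targets
```

The point is that a decade-old throwaway remark is just as retrievable as yesterday’s, and the scan costs next to nothing to run against everyone.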

Once targets are identified, the AIs would automatically activate armies of chat-bots, fake-news engines and automated email campaigns against them, with coordinated malware attacks directly on the person and indirect attacks via communications with employers, friends, contacts, government agencies, customers and suppliers, doing as much damage as possible to that person’s interests.

Just look at the everyday news already about alleged hacks and activities during elections and referendums by other regimes, hackers or pressure groups. Scale that up and realize that the cost of running advanced AI is negligible.

With the very many activist groups around, many driven by extremist zeal, very many people will find themselves in the sights of one or more of them. AI will be able to monitor everyone, all the time. AI will be able to target all of them at once to destroy each of their lives, anonymously, highly encrypted, hidden, roaming from server to server to avoid detection and annihilation; once released, impossible to retrieve. The ultimate activist weapon, one that carries on the fight even if the activist is locked away.

We know for certain the depths and extent of activism, the huge polarization of society, the increasingly fierce conflict between left and right, between sexes, races, ideologies.

We know about all the nice things AI will give us with cures for cancer, better search engines, automation and economic boom. But actually, will the real future of AI be harnessed to activism? Will deliberate destruction of people’s everyday lives via AI be a real problem that is almost as dangerous as Terminator, but far more feasible and achievable far earlier?