
Apocalypse Tomorrow

This post was co-authored with Bronwyn Williams (details below)

I recently watched a documentary about the 1978 Jonestown Massacre, in which 918 Peoples Temple followers died, many of them children killed by their own parents. Even before it started, my own memories of the event in the news made me realize that the current global socio-political climate makes such an ‘unthinkable’ event likely to happen again, possibly on a much bigger scale, perhaps even in several places at once.

The biggest factor by far is the global replacement of religion (mostly Christianity) by secular religion substitutes. These secular substitutes for the meaning, direction and purpose formerly provided by religion take many forms, from a revived interest in paganism, witchcraft, and general “no name brand” spiritualism and mysticism, through to a new, almost religious fervor for political causes. Now, while finding solace from the horror of the human condition in crystals or astrology is relatively benign (unless, say, you are getting into debt betting your children’s school fees on the stocks recommended in your daily horoscope app), mass movements driven by tribes of True Believers are far more concerning.

New converts to any mass movement – religious or secular – are invariably among the most passionate believers, so we now have a massive, global pool of people newly susceptible to the same forces that enabled Jim Jones to do what he did. Every day on social media we witness that enthusiasm first-hand, driving the secular equivalent of the Spanish Inquisition and targeting anyone and everyone not devoutly following every detail of their new faith. Jones strongly policed his followers and strictly punished any rule-breaking or heresy. That same practice is greatly amplified on social media, reaching billions of people instead of the thousand followers Jones had influence over.

I’ve written many times about the strong similarities between religion and belief in catastrophic climate change, environmentalism, woke doctrine, veganism, New Ageism, and others. All these triggers tap into the same anchors in human nature: the first is the desire to believe we are ‘good people’ on the right side of history; the second is tribalism, the basic human instinct to belong to a group of like-thinking people while clearly marking the boundaries between ‘us’ and ‘them’. At the same time, as people are forced to decide which side to stand on, the gulf between ‘us’ and ‘them’ is always widening, amplifying both the fear of – and the real consequences of – being cut out of the circle of trust of one’s chosen tribe, exactly the mechanism Jim Jones exploited.

Importantly, the scientific truth and proven facts behind these causes are less important than how the causes make the new true believers feel, particularly when it comes to signaling the moral superiority of the in-group compared to the infidel, unconverted out-group. As Eric Hoffer wrote in The True Believer, for the adherents of most mass movements, “The effectiveness of a doctrine should not be judged by its profundity, sublimity or the validity of the truths it embodies, but by how thoroughly it insulates the individual from his self and the world as it is.”

These tribal drivers are immensely strong forces, the likes of which have underpinned crusades and wars since the days of ancient civilizations. Now that far fewer people believe in formal religions, many of those who previously would have been enthusiastic believers have turned instead to these secular substitutes that push the same psychological buttons. Another documentary this week, on veganism, shows exactly the same forces harnessed as in religion: a secular equivalent of sin, the shaming of sinners, fear of rejection, tribalism, and especially the impact of a charismatic ‘priest’. Jones was highly charismatic, and a master at using these forces. Compare the influence a single person can have today in pushing a particular agenda in the name of social justice or climate change action.

Fear was a very powerful weapon, used constantly by Jones, and today’s climate catastrophists do all they can to ensure that as many other people as possible share their fear. It seems that every negative news item is somehow tied to ‘climate change’. If the climate isn’t enough, rising seas, ocean acidification and plastic pollution are all linked in to enhance the total fear, exaggerated wildly while a scared media lets them get away with it. Millions of people now pressure governments and social media, screaming and shrieking “DO SOMETHING, NOW!!!!!”. Jones enhanced fear by talking frequently about death, even using mock suicides to amplify the general climate of fear. Now witness the frequent death-cult demonstrations of animal rights protesters and climate change catastrophists. Extinction Rebellion excels in this area, with its blood-red meta-religious uniforms. It is impossible not to see parallels with Jones’s cult followers.

Jones was also adept at creating fake news. He used fake healings and even a fake resurrection to amplify faith and ensure his reign as leader. Fake news in today’s world is virtually indistinguishable from reality, and worse still, many people don’t care, as long as it backs up what they already believe.

Another strong parallel is socialism. Jones gift-wrapped his cult in socialist Utopianism. Most people won’t join a movement from fear alone; there needs to be a strong attractor to get them to join up, and fear can keep them there afterwards. Today we see a new enthusiasm among young people (a gospel enthusiastically spread to young minds via their state school teachers) for socialism. Via skillful use of Orwellian doublespeak, with activists redefining words over a decade or more, they are presented with all the wonderful Utopian claims of ‘fairness’, ‘equality’, ‘love’ and ‘tolerance’, while non-believers are labeled ‘evil’, ‘deplorable’, ‘fascists’ and ‘deniers’. Even the USA is seeing strong enthusiasm for socialism and even communism, something that would have been impossible to imagine just 25 years ago.

Socialism, environmental catastrophism and religious fervor make a powerful trio: promised salvation, status and utopia if you follow; doom and catastrophic punishment if you don’t, from social ostracism and career destruction at the light end to complete civilizational and environmental collapse at the other.

Other forces still add to this. Generations raised on social media and social credit scores (both official and unofficial) are rewarded (in status and income) for narcissism, self-censorship and reversion to the group mean. This, of course, further reinforces echo-chamber group-think and a sincere yet unfounded superiority complex, creating a tribal inter-generational hostility to older people that prevents the young from accepting accumulated wisdom. They happily absorb emotional fake news and distortion as long as it massages their need for affirmation. Likes outweigh facts any day. Indeed, even holding a PhD is no longer an effective immunization against collective delusion, in a world where social scientists are punished with their careers for publishing the results of scientific studies that falsify popular politically correct consensus opinions. (As Eric Hoffer said, “There is an illiterate air about the most literate true believer.”)

Self-hate is another powerful trend; the dishonor of being born Western (or even more damningly, male) has strong Biblical parallels to man being born into sin; and the need to recognize, confess and atone for the sins of one’s birth and forefathers.

So where does this take us?

Jones was highly charismatic. He was a natural master of using strong emotional forces built into human nature. History has many examples of equally charismatic leaders (from Obama to Oprah) who used their charm and power for good. (Unfortunately, history also provides a myriad of converse examples, from Hitler and Stalin to Jones.) It is likely that we will now see new leaders emerge to galvanize today’s new tribes of true believers. Whether the new leaders exploit the passion of the masses for good or ill, and whether they march them to the Promised Land or into a catastrophic Great Leap Forward of famine, disaster and mass death, only time will tell.

Already, we have heard many activists talk about how we need to greatly reduce the human population. As an example, just days ago, The Guardian published this article. The radical vegan anti-natalist movement, which advocates the extinction of the human race as the only way to save planet Earth, is growing in popularity around the (mostly Western) self-hating world. Some activists have even suggested mass-killing climate change deniers.

Similarly, and controversially, there is a related emerging enthusiasm for abortion. Far beyond a woman’s right to choice and autonomy over her own body, the new celebration of abortion – not as a woman’s right, but as something actively encouraged and applauded by extreme environmentalists – marks a distinct turning point in society’s values towards human life in general. Would-be parents claim they are so sure about climate doom that they can’t bear to bring a child into this world; similarly, young men are getting vasectomies as a sign of commitment to their cause (not unlike religious circumcision). It’s voluntary sterilization as virtue signaling, as a political message, sacrificing a child to make a point. Abortion rates may well start to rise again after a long, steady decline as Climatism makes its mark.

(Indeed, the anti-fertility campaigns of Western aid and health workers in low-income African and Asian countries are symptomatic of how human life is increasingly perceived as a form of pestilence, to be minimized if not eradicated (by its own kind if necessary), rather than as something intrinsically valuable.)

Following along these lines, we can see echoes of Jonestown. At the end, Jones made sure that adults gave poison to their kids first before taking it themselves. He knew that if parents had deliberately killed their kids, they would be much more likely to kill themselves.

Imagine therefore that a new charismatic leader were to spring up, adept at social media and at manipulating language, emotions and people. Imagine that they were to gain a large following across the English-speaking world, and that they advocated reducing the human population: targeting heretic ‘climate change deniers’, reducing carbon footprints via vegetarianism and veganism, giving reparations to developing countries for climate damage, supporting open borders to allow anyone to immigrate as a ‘climate refugee’, and encouraging abortion to reduce the birth rate. Such a package would find a very large audience of people who demonstrably want to feel holy – to feel that they are good while others are evil. A charismatic leader could thereby create a strong tribe. Using abundant funding from the membership, they might well build socialist Utopian towns – maybe in a jungle like Jones, but just as likely out in the wilds of Canada, the USA or Australia, or on a Scottish island, or all of these. Perhaps hundreds of thousands of people would join, with millions more online ‘associates’. Millions, compared to Jonestown’s thousand.

And then perhaps, in the end, to force the rest of humanity to listen by means of a coordinated mass suicide, to go down in history as martyrs to the environment, saviors of the Earth.

Is an anti-civilizational suicide pact inevitable? No, not at all.

But imaginable, feasible, perhaps even likely? In my opinion, yes it is. And it could well happen in the next few years, while this perfect storm of forces is peaking.

About Bronwyn Williams

Bronwyn Williams is a futurist, economist and trend analyst, who consults to business and government leaders on how to understand the world we live in today and change the world’s trajectory for tomorrow. She is also a regular media commentator on African socio-economic affairs. For more, visit http://whatthefuturenow.com


Future Surveillance

This is an update of my last surveillance blog 6 years ago, much of which is common discussion now. I’ll briefly repeat key points to save you reading it.

They used to say

“Don’t think it

If you must think it, don’t say it

If you must say it, don’t write it

If you must write it, don’t sign it”

Sadly this wisdom is already as obsolete as Asimov’s Laws of Robotics. The last three lines have already been automated.

I recently read of new headphones designed to recognize thoughts so they know what you want to listen to. Simple thought recognition in various forms has been around for 20 years now. It is slowly improving, but with smart networked earphones we’re already providing an easy platform into which to sneak better monitoring and better thought detection – sold on convenience and ease of use, of course.

You already know that Google and various other large companies have very extensive records documenting many areas of your life. It’s reasonable to assume that any or all of this could be demanded by a future government. I trust Google and the rest to a point, but not a very distant one.

Your phone, TV, Alexa, or even your networked coffee machine may listen in to everything you say, sending audio records to cloud servers for analysis, and you have only naivety as a defense against those audio records being stored and potentially used for nefarious purposes.

Some next generation games machines will have 3D scanners and UHD cameras that can even see blood flow in your skin. If these are hacked or left switched on – and social networking video is one of the applications they aim to capture, so they’ll often be on – someone could watch you all evening, capture the most intimate body details, and film your facial expressions and gaze direction while you are looking at a known image on a particular part of the screen. Monitoring pupil dilation, smiles, anguished expressions and so on could provide a lot of evidence of your emotional state, with a detailed record of what you were watching and doing at exactly that moment, and with whom. By monitoring blood flow and pulse via your Fitbit or smartwatch, and additionally monitoring skin conductivity, your level of excitement, stress or relaxation can easily be inferred. If given to the authorities, this sort of data might be used to identify pedophiles or murderers, by seeing which men are excited by seeing kids on TV or take pleasure in violent games – and that is likely to be one of the justifications authorities will offer for using it.

Millimeter wave scanning was controversial when it was introduced in airport body scanners, but we have had no choice but to accept it and its associated abuses – the only alternative is not to fly. 5G uses millimeter wave too, and it’s reasonable to expect that the same people who can already monitor your movements in your home simply by analyzing your wi-fi signals will be able to do a lot better by analyzing 5G signals.

As mm-wave systems develop, they could become much more widespread, so burglars and voyeurs might start using them to check whether there is anything worth stealing or videoing. Maybe some search company making visual street maps might ‘accidentally’ capture a detailed 3D map of the inside of your house when it comes round, as well as – or instead of – everything it could access via your wireless LAN.

Add to this the ability to use drones to get close without being noticed. Drones can be very small, fly themselves and automatically survey an area using broad sections of the electromagnetic spectrum.

NFC bank and credit cards not only present risks of theft, but also add the ability to track what we spend, where, on what, and with whom. NFC capability in your phone makes some parts of life easier, but NFC has always been yet another doorway that may be left unlocked by security holes in operating systems or apps, and apps themselves carry many assorted risks. Many apps ask for far more permissions than they need to do their professed tasks, and their owners collect vast quantities of information for purposes known only to themselves and their clients. Obviously, data can be collected using a variety of apps and linked together at its destination. Not all providers are honest, and apps are still very inadequately regulated and policed.

We’re seeing increasing experimentation with facial recognition technology around the world, from China to the UK, and so far only a few authorities, such as San Francisco’s, have had the wisdom to ban its use. Heavy-handed UK police, who increasingly police according to their own political agenda even at the expense of policing actual UK law, have already fined people who covered their faces to avoid being abused in face recognition trials. It is reasonable to assume they would gleefully seize any future opportunity to access and cross-link all of the various data pools currently being assembled, under the excuse of reducing crime but with the real intent of policing their own social engineering preferences. Using advanced AI to mine zillions of hours of full-sensory data gathered on every one of us via all this routine IT exposure and extensive, ubiquitous video surveillance, they could deduce everyone’s attitudes to just about everything: the real truth about our attitudes to every friend, family member, TV celebrity, politician or product; our detailed sexual orientation and any fetishes or perversions; our racial attitudes and political allegiances; our attitudes to almost every topic ever aired on TV or in everyday conversation; how hard we are working, how much stress we are experiencing, and many aspects of our medical state.

It doesn’t even stop with public cameras. Innumerable cameras and microphones on phones, visors and high street private surveillance will automatically record all this same stuff for everyone, sometimes with benign declared intentions such as making self-driving vehicles safer, sometimes using social media tribes to capture any kind of evidence against ‘the other’. In-depth evidence will become available to back up prosecutions of crimes that today would not even be noticed. Computers that can retrospectively data-mine evidence collected over decades and link it all together will be able to identify billions of real or invented crimes.

Active skin will one day link your nervous system to your IT, allowing you to record and replay sensations. You will never be able to be sure that you are the only one that can access that data either. I could easily hide algorithms in a chip or program that only I know about, that no amount of testing or inspection could ever reveal. If I can, any decent software engineer can too. That’s the main reason I have never trusted my IT – I am quite nice but I would probably be tempted to put in some secret stuff on any IT I designed. Just because I could and could almost certainly get away with it. If someone was making electronics to link to your nervous system, they’d probably be at least tempted to put a back door in too, or be told to by the authorities.

The current panic about face recognition is justified. Other AI can already lipread and recognize gestures and facial expressions better than people can. It adds knowledge of everywhere you go, everyone you meet, everything you do, everything you say and even every emotional reaction to all of that to all the other knowledge gathered online or by your mobile, fitness band, electronic jewelry or other accessories.

Fools utter the old line: “if you are innocent, you have nothing to fear”. Do you know anyone who is innocent? Of everything? Who has never ever done or even thought anything even a little bit wrong? Who has never wanted to do anything nasty to anyone for any reason ever? And that’s before you even start to factor in corruption of the police, or mistakes, or being framed, or dumb juries, or secret courts. The real problem here is not the abuses we already see. It is what is being and will be collected and stored, forever, available to all future governments of all persuasions and to police authorities who consider themselves better than the law. I’ve said often that our governments are often incompetent but rarely malicious. Most of our leaders are nice guys, only a few are corrupt, but most are technologically inept. With an increasingly divided society, there’s a strong chance that the ‘wrong’ government or even a dictatorship could get in. Which of us can be sure we won’t be up against the wall one day?

We’ve already lost the battle to defend privacy. The only bits left are where the technology hasn’t caught up yet. In the future, not even the deepest, most hidden parts of your mind will be private. Pretty much everything about you will be available to an AI-upskilled state and its police.

The future for women, pdf version

It is several years since my last post on the future as it will affect women so here is my new version as a pdf presentation:

Women and the Future

Augmented reality will objectify women

Microsoft Hololens 2 Visor

The excitement around augmented reality continues to build, and I am normally enthusiastic about its potential, looking forward to enjoying virtual architecture, playing immersive computer games, or seeing visual and performance artworks transposed into my view of the high street while I shop.

But it won’t all be wonderful. While a few PR and marketing types may worry a little about people overlaying or modifying their hard-won logos and ads, a bigger issue will be some people choosing to overlay people in the high street with ones that are a different age or gender or race, or simply prettier. Identity politics will be fought on yet another frontier.

In spite of waves of marketing hype and misrepresentation, AR exists only in primitive form outside the lab. Visors fall far short of what, even a decade ago, we hoped to have by now – even the Hololens 2 shown above. But soon AR visors, and eventually active contact lenses, will enable fully 3D hi-res overlays on the real world. Then, in principle at least, you can make things look how you want, within a few basic limits. You could certainly transform a dull shop, cheap hotel room or an office into an elaborate palace, or make it look like a spaceship. But even if you change what things look like, you still have to represent nearby physical structures and obstacles in your fantasy overlay world, or you may bump into them – and that includes all the walls and furniture, lamp posts, bollards, vehicles, and of course other people. Augmented reality allows you to change their appearance thoroughly, but they still need to be there somehow.

When it comes to people, there will be some battles. You may spend ages creating a wide variety of avatars, or may invest a great deal of time and money making or buying them. You may have a digital aura, hoping to present different avatars to different passers-by according to their profiles. You may want to look younger or thinner, or to appear as a character you enjoy playing in a computer game. You may present a selection of options to the AIs controlling the passing person’s view, and the avatar they see overlaid could be any one of the images you have on offer. Perhaps some privileged people get to pick from a selection you offer, while others you wish to privilege less are restricted to just one that you have set for their profile. Maybe you’d have a particularly ugly or offensive one to present to those with opposing political views.

Except that you can’t assume you will be in control. In fact, you probably won’t.

Other people may choose not to see your avatar, but instead to superimpose one of their own choosing. The question of who decides what the viewer sees is perhaps the first and most important battle in AR. Various parties would like to control it – visor manufacturers, O/S providers, UX designers, service providers, app creators, AI providers, governments, local councils, police and other emergency services, advertisers and of course individual users. Given market dynamics, most of these ultimately come down to user choice most of the time, albeit sometimes after paying for the privilege. So it probably won’t be you who gets to choose how others see you, via assorted paid intermediary services, apps and AI, it will be the other person deciding how they want to see you, regardless of your preferences.

So you can spend all the time you want designing your avatar and tweaking your virtual make-up to perfection, but if someone wants to see their favorite celebrity walking past instead of you, they will. You and your body become no more than an object on which to display any avatar or image someone else chooses. You are quite literally reduced to an object in the AR world. Augmented reality will literally objectify women, reducing them to no more than a moving display space onto which their own selected images are overlaid. A few options become obvious.

Firstly, they may just take your actual physical appearance (via a video camera built into their visor, for example) and digitally change it, so it is still definitely you, but now dressed more nicely, or dressed in sexy lingerie, or how you might look naked, using the latest AI to body-fit fantasy images from a porn database. This could easily be done automatically in real time using some app or other. You’ve probably already seen recent AI video fakery demos that can present any celebrity saying anything at all, almost indistinguishable from reality. That will soon be pretty routine tech for AR apps. They could even use your actual face as input to image-matching search engines to find the most plausible naked lookalikes. So anyone could digitally dress or undress you, not just with their eyes, but with a hi-res visor using sophisticated AI-enabled image processing software. They could put you in any kind of outfit, change your skin color, make-up, age or figure, and make you look as pretty and glamorous or as slutty as they want. And you won’t have any idea what they are seeing. You simply won’t know whether they are respectfully celebrating your inherent beauty, or flattering you by making you look even prettier, which you might not mind at all, or might object to strongly in the absence of explicit consent, or, worse still, stripping or degrading you to whatever depths they wish, with no consent or notification, which you probably will mind a lot.

Or they can treat you as just an object on which to superimpose some other avatar, which could be anything or anyone – a zombie, favorite actress or supermodel. They won’t need your consent, and again you won’t have any idea what they are seeing. The avatar may make the same gestures and movements and even talk plausibly, saying whatever their AI thinks they might like, but it won’t be you. In some ways this might not be so bad. You’d still be reduced to an object, but at least it wouldn’t be you that they’re looking at naked. To most strangers on a high street, most of the time, you’re just a moving obstacle to avoid bumping into, so being digitally transformed into a walking display board may worry you a little, but most people will cope with that bit. It is when you stop being just a passing stranger and start to interact in some way that it really starts to matter. You probably won’t like it if someone is chatting to you but actually looking at someone else entirely, especially if the viewer is one of your friends or your partner. And if your partner is kissing or cuddling you but seeing someone else, that would be a strong breach of trust – but how would you know? This sort of thing could, and probably will, damage a lot of relationships.

Most of the software to do most of this is already in development and much is already demonstrable. The rest will develop quickly once AR visors become commonplace.

In the office, in the home, when you’re shopping or at a party, you soon won’t have any idea what or who someone else is seeing when they look at you. Imagine how that would clash with rules that are supposed to protect against sexual harassment in the office. Whole new levels of harassment will be enabled, much of it invisible. How can we police behaviors we can’t even detect? Will hardware manufacturers be forced to build in transparency and continuous experience recording?

The main casualty will be trust. It will make us question how much we trust each of our friends, colleagues and acquaintances. It will build walls. People will often become suspicious of others, not just strangers but friends and colleagues. Some people will become fearful. You may dress as primly or modestly as you like, but if the viewer chooses to see you wearing a sexy outfit, perhaps their behavior and attitude towards you will be governed by that rather than by reality. Increased digital objectification might lead to increased physical sexual assault and rape. We may see more people more often objectifying women in more circumstances.

The tech applies equally to men of course. You could make a man look like a silverback gorilla or a zombie or fake-naked. Some men will care more than others, but the vast majority of real victims will undoubtedly be women. Many men objectify women already. In the future AR world, they’ll be able to do so far more effectively and more easily.


Monopoly and diversity laws should surely apply to political views too

With all the calls for staff diversity and equal representation, one important area of difference has so far been left unaddressed: political leaning. In many organisations, the political views of staff don’t matter. Nobody cares about the political views of staff in a double glazing manufacturer because they are unlikely to affect the quality of a window. However, in an organisation that has a high market share in TV, social media or internet search, or that is a government department or a public service, political bias can have far-reaching effects. If too many of its staff and their decisions favor a particular political view, it is in danger of becoming what is sometimes called ‘the deep state’. That is, their everyday decisions and behaviors might privilege one group over another. If most of their colleagues share similar views, they might not even be aware of their bias, because it is the norm in their everyday world. They might think they are doing their job without fear or favor but still strongly preference one group of users over another.

Staff bias doesn’t only affect an organisation’s policies, values and decisions. It also affects recruitment and promotion, and can result in an increasing concentration of a particular world view until it becomes an issue. When a vacancy appears at board level, remaining board members will tend to promote someone who thinks like themselves. Once any leaning takes hold, near monopoly can quickly result.

A government department should obviously be free of bias so that it can carry out instructions from a democratically elected government with equal professionalism regardless of its political flavor. Employees may be in positions where they can allocate resources or manpower more to one area than another, provide analysis to ministers, expedite or delay a communication, emphasize or dilute a recommendation in a survey, or otherwise have some flexibility in interpreting instructions and even laws. It is important they do so without political bias, so transparency of decision-making for external observers is needed, along with systems, checks and balances to prevent and test for bias and to rectify it when found. But even if staff don’t deliberately abuse their positions to obstruct or favor, if a department has too many staff from one part of the political spectrum, normalization of views can again cause institutional bias and behavior. It is therefore important for government departments and public services to have work-forces that reflect the political spectrum fairly, at all levels. A department that implements a policy from a government of one flavor but impedes a different one from a new government of the opposite flavor is in strong need of reform and re-balancing. It has become a deep state problem. Bias could be in any direction of course, but any public sector department must be scrupulously fair in its implementation of the services it is intended to provide.

Entire professions can be affected. Bias can obviously occur in any direction, but over many decades of slow change, academia has become dominated by left-wing employees, and primary teaching by almost exclusively female ones. If someone spends most of their time with others who share the same views, those views can become normalized to the point that a dedicated teacher might think they are delivering a politically balanced lesson that is actually far from it. It is impossible to spend all day teaching kids without some personal views and values rubbing off on them. The young have always been slightly idealistic and left leaning – it takes years of adult experience outside academia to learn the pragmatic reality of implementing that idealism, during which people generally migrate rightwards – but with a stronger left bias ingrained during education, it takes longer for people to unlearn naiveté and replace it with reality. Surely education should be teaching kids about all political viewpoints and teaching them how to think, so they can choose for themselves where to put their allegiance, rather than being a long process of political indoctrination?

The media has certainly become more politically crystallized and aligned in the last decade, with far fewer media companies catering for people across the spectrum. There are strongly left-wing and right-wing papers, magazines, TV and radio channels or shows. People have a free choice of which papers to read, and normal monopoly laws work reasonably well here, with proper checks when a proposed takeover might result in someone getting too much market share. However, there are still clear examples of near monopoly in other places where fair representation is particularly important. In spite of frequent denials of any bias, the BBC for example was found to have a strong pro-EU/Remain bias in panel selection for its flagship show Question Time:

https://iea.org.uk/media/iea-analysis-shows-systemic-bias-against-leave-supporters-on-flagship-bbc-political-programmes/

The BBC does not have a TV or radio monopoly but it does have a very strong share of influence. Shows such as Question Time can strongly influence public opinion, so if biased towards one viewpoint they could be considered to be campaigning for that cause, though their contributions would lie outside electoral commission scrutiny of campaign funding. Many examples of BBC bias on a variety of social and political issues exist. It often faces accusations of bias from every direction, sometimes unfairly, so again proper transparency must exist so that independent external groups can appeal for change and be heard fairly, and change enforced when necessary. The BBC is in a highly privileged position, paid for by a compulsory license fee on pain of imprisonment, and also in a socially and politically influential position. It is doubly important that it proportionally represents the views of the people rather than acting as an activist group using license-payer funds to push the political views of its staff, engaging in its own social engineering campaigns, or otherwise acting as a propaganda machine.

As for private industry, most isn’t in a position of political influence, but some areas certainly are. Social media companies have enormous power to influence the views their users are exposed to, choosing to filter or demote material they don’t approve of, as well as providing a superb activist platform. Search companies can choose to deliver results according to their own agendas, with those they support featuring earlier or more prominently than those they don’t. If social media or search companies provide different service, support or access according to the political leaning of the customer, then they can become part of the deep state. And again, with normalization creating the risk of institutional bias, the clear remedy is to ensure that these companies have a mixture of staff representative of the social mix. They seem extremely enthusiastic about doing that for other forms of diversity. They need to apply similar enthusiasm to political diversity too.

Achieving it won’t be easy. IT companies such as Google, Facebook and Twitter currently have a strong left leaning, though the problem would be just as bad if it were to swing the other way. Given the natural monopoly tendency in each sector, social media companies should be politically neutral, not deep state companies.

AI being developed to filter posts or decide how much attention they get must also be unbiased. AI algorithmic bias could become a big problem, but it is just as important that bias is judged by neutral bodies, not by people who are biased themselves, who may try to ensure that AI shares their own leaning. I wrote about this issue here: https://timeguide.wordpress.com/2017/11/16/fake-ai/

But what about government? Today’s big issue in the UK is Brexit. In spite of all its members being elected or re-elected during the Brexit process, the UK Parliament nevertheless has 75% of MPs to defend the interests of the 48% who voted Remain and only 25% to represent the other 52%. Remainers get three times more Parliamentary representation than Brexiters. People can choose who they vote for, but with only one candidate available from each party, voters cannot choose by more than one factor, and most people will vote by party line, preserving whatever bias exists when parties select which candidates to offer. It would be impossible to ensure that every interest is reflected proportionately, but there is another solution. I suggested that scaled votes could be used for some issues, scaling an MP’s vote weighting by the proportion of the population supporting their view on that issue:

https://timeguide.wordpress.com/2015/05/08/achieving-fair-representation-in-the-new-uk-parliament/
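The arithmetic behind scaled voting is simple enough to sketch. The code below is a hypothetical illustration, not a worked-out electoral mechanism, using the Brexit figures above (75% of MPs backing the 48% Remain vote, 25% backing the 52% Leave vote):

```python
# Hypothetical sketch of issue-scaled voting: each MP's vote on a given
# issue is weighted so the bloc's total weight matches public support.
def vote_weight(mp_share, public_share):
    """Weight applied to each MP in a bloc holding mp_share of seats,
    so the bloc's combined vote equals public_share of the total."""
    return public_share / mp_share

MPS = 650  # size of the House of Commons

remain_w = vote_weight(0.75, 0.48)  # each Remain MP's vote counts 0.64
leave_w = vote_weight(0.25, 0.52)   # each Leave MP's vote counts 2.08

remain_total = 0.75 * MPS * remain_w  # weighted Remain bloc: 48% of 650
leave_total = 0.25 * MPS * leave_w    # weighted Leave bloc: 52% of 650
```

With these weights, a weighted division on that single issue mirrors the referendum result while every MP still casts a vote; on other issues, different weights would apply.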

Like company boards, once a significant bias in one direction exists, political leaning tends to self-reinforce to the point of near monopoly. Deliberate procedures need to be put in place to ensure equality of representation, even when people are elected. Obviously people who benefit from current bias will resist change, but everyone loses if democracy cannot work properly.

The lack of political diversity in so many organisations is becoming a problem. Effective government may be deliberately weakened or amplified by departments with their own alternative agendas, while social media and media companies may easily abuse their enormous power to push their own sociopolitical agendas. Proper functioning of democracy requires that this problem is fixed, even if a lot of people like it the way it is.

AI that talks to us could quickly become problematic

Google is making the news again, adding evidence to the unfortunate stereotype of the autistic IT nerd that barely understands normal people, and its staff have therefore been astonished at a backlash that normal people would all easily have predicted. (I’m autistic and work mostly in IT too, and am well used to the stereotype, so it doesn’t bother me; in fact it is a sort of ‘get out of social interactions free’ card.) Last time it was Google Glass, where it apparently didn’t occur to them that people may not want other people videoing them without consent in pubs and changing rooms. This time it is Google Duplex, which makes phone calls on your behalf to arrange appointments using a voice that is almost indistinguishable from a normal human’s. You could save time making an appointment with a hairdresser apparently, so the Googlanders decided it must be a brilliant breakthrough, and expected everyone to agree. They didn’t.

Some of the objections have been about ethics: e.g. an AI should not present itself as human. Humans have rights and dignity and deserve respectful interactions with other people, but an AI doesn’t, and should not masquerade as human to acquire such privilege without the knowledge and consent of the other party.

I would be more offended by the presumed attitude of the user. If someone thinks they are so much better than me that they can demand my time and attention without the expense of any of their own, delegating instead to a few microseconds of processing time in a server farm somewhere, I’ll treat them with the contempt they deserve. My response will not be favourable. I am already highly irritated by the NHS using simple voice interaction messaging to check I will attend a hospital appointment. The fact that my health is on the line and notices at surgeries say I will be banned if I complain on social media is sufficient blackmail to ensure my compliance, but it still comes at the expense of my respect and goodwill. AI-backed voice interaction with a better voice wouldn’t be any better, and if it were asking for more interaction, such as actually booking an appointment, it would be extremely annoying.

In any case, most people don’t speak in fully formed, grammatically and logically correct sentences. If you listen carefully to everyday chat, a lot of sentences are poorly pronounced, incomplete, jumbled, and full of ums, ers and likes, and they require a great deal of cooperation by the listener to make any sense at all. They also wander off topic frequently. People don’t stick to a rigid vocabulary list or lists of nicely selected sentences. Lots of preamble and verbal meandering is likely in any response, adding ambiguity. The example used in a demo, “I’d like to make a hairdressing appointment for a client”, sounds fine until you factor in normal everyday humanity. A busy hairdresser or a lazy receptionist is not necessarily going to cooperate fully. “What do you mean, client?”, “404 not found”, “piss off google”, “oh FFS, not another bloody computer”, “we don’t do hairdressing, we do haircuts”, “why can’t your ‘client’ call themselves then?” and a million other responses are more likely than “what time would you like?”

Suppose though that it eventually gets accepted by society. First, call centers beyond the jurisdiction of your nuisance call blocker authority will incessantly call you at all hours asking or telling you all sorts of things, wasting huge amounts of your time and reducing quality of life. Voice spam from humans in call centers is bad enough. If the owners can multiply productivity by 1000 by using AI instead of people, the result is predictable.

We’ve seen the conspicuous political use of social media AI already. Facebook might have allowed companies to use very limited and inaccurate knowledge of you to target ads or articles that you probably didn’t look at. Voice interaction would be different. It uses a richer emotional connection than text or graphics on a screen. Google knows a lot about you too, but it will know a lot more soon. These big IT companies are also playing with tech to log you on easily to sites without passwords. Some of the gadgets involved might be worn, such as watches, bracelets or rings. They can pick up signals to identify you, but they can also check emotional states such as stress level. Voice gives away emotion too. AI can already tell better than almost all people whether you are telling the truth, lying or hiding something. Tech such as iris scans can also reveal emotional states, as well as give health clues. Simple photos can reveal your age quite accurately to AI (check out how-old.net). The AI voice sounds human, but it is better than even your best friends at guessing your age, your stress and other emotions, your health, whether you are telling the truth or not, and it knows far more about what you like and dislike and what you really do online than anyone you know, including you. It knows a lot of your intimate secrets. It sounds human, but its nearest human equivalent was probably Machiavelli. That’s who will soon be on the other side of the call, not some dumb chatbot. Now re-calculate political interference, and factor in the political leaning and social engineering desires of the companies providing the tools. Google and Facebook and the others are very far from politically neutral. One presidential candidate might get full cooperation, assistance and convenient looking the other way, while their opponent might meet rejection and citation of the official rules on non-interference.

Campaigns on social issues will also be amplified by AI coupled to voice interaction. I looked at some related issues in a previous blog on fake AI (i.e. fake news type issues): https://timeguide.wordpress.com/2017/11/16/fake-ai/

I could but won’t write a blog on how this tech could couple well to sexbots to help out incels. It may actually have some genuine uses in providing synthetic companionship for lonely people, or helping or encouraging them in real social interactions with real people. It will certainly have some uses in gaming and chatbot game interaction.

We are not very far from computers that are smarter than people across a very wide spectrum, and probably not very far from conscious machines with superhuman intelligence. If we can’t even rely on IT companies to understand the likely consequences of such obvious stuff as Duplex before they push it, how can we trust them in other upcoming areas of AI development, or even closer-term techs with less obvious consequences? We simply can’t!

There are certainly a few areas where such technology might help us, but most are minor and the rest don’t need any deception, and they all come at great cost or real social and political risk, as well as more abstract risks such as threats to human dignity and other ethical issues. I haven’t given this much thought yet and I am sure there must be very many other consequences I have not touched on. Google should do more thinking before they release stuff. Technology is becoming very powerful, but we all know that great power comes with great responsibility, and since most people aren’t engineers and so can’t think through all the potential technology interactions and consequences, engineers such as Google’s must act more responsibly. I had hoped they’d started, and they said they had, but this is not evidence of that.


Beyond VR: Computer assisted dreaming

I first played with VR in 1983/1984 while working in the missile industry. Back then we didn’t call it VR, we just called it simulation, but it was actually more intense than VR, just as proper flight simulators are. Our office was a pair of 10m wide domes onto which video could be projected, built decades earlier, in the 1950s I think. One dome had a normal floor, the other had a hydraulic platform that could simulate being on a ship. The subject would stand on whichever surface was appropriate and would see pretty much exactly what they would see on a real battlefield. The missile launcher used for simulation was identical to a real one and showed exactly the same image as a real one would. The real missile was not present of course, but its weight was simulated, and when the fire button was pressed, a 140dB bang was injected into the headset while weights and pulleys compensated for the 14kg of missile weight suddenly vanishing from the shoulder. The experience was therefore pretty convincing, and with the loud bang and suddenly changing weight, it was almost as hard to stand steady and keep the system on target as it would be in real life – only the presumed fear and knowledge of the reality of the situation was different.

Back then in 1983, as digital supercomputers had only just taken over from analog ones for simulation, it was already becoming obvious that this kind of computer simulation would one day allow ‘computer assisted dreaming’. (That’s one of the reasons I am irritated when Jaron Lanier is credited with inventing VR – highly realistic simulators, and the VR ideas that sprang obviously from them, had already been around for decades. At best, all he ‘invented’ was a catchy name for a lower cost, lower quality, less intense simulator. The real inventors were those who made the first generation simulators long before I was born, and the basic idea of VR had already been very well established.)

‘Computer assisted dreaming’ may well be the next phase of VR. Today in conventional VR, people are immersed in a computer generated world produced by a computer program (usually) written by others. Via trial and feedback, programmers make their virtual worlds better. As AI and sensor technology continue rapid progress, this is very likely to change to make worlds instantly responsive to the user. By detecting user emotions, reactions, gestures and even thoughts and imagination, it won’t be long before AI can produce a world in real time that depends on those thoughts, imagination and emotions rather than putting them in a pre-designed virtual world. That world would depend largely on your own imagination, upskilled by external AI. You might start off imagining you’re on a beach, then AI might add to it by injecting all sorts of things it knows you might enjoy from previous experiences. As you respond to those, it picks up on the things you like or don’t like and the scene continues to adapt and evolve, to make it more or less pleasant or more or less exciting or more or less challenging etc., depending on your emotional state, external requirements and what it thinks you want from this experience. It would be very like being in a dream – computer assisted lucid dreaming, exactly what I wanted to make back in 1983 after playing in that simulator.
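The sense-and-adapt loop described above can be sketched in a few lines. Everything here is an invented illustration: the emotion readings stand in for real biometric or thought sensing, and the scene elements are placeholders.

```python
# Minimal sketch of one step of an adaptive 'dreaming' engine: read the
# user's emotional state, then inject or remove scene elements to steer
# the experience toward a comfortable band. All values are invented.
def adapt_scene(scene, emotion):
    """Return a new scene adjusted to the sensed emotional state."""
    elements = list(scene["elements"])
    if emotion["stress"] > 0.8:
        # Too stressful: remove the most recently injected element.
        if elements:
            elements.pop()
    elif emotion["excitement"] < 0.3:
        # Too dull: inject something the AI predicts the user enjoys.
        elements.append("zombies")
    return {"setting": scene["setting"], "elements": elements}

# Two passes of the loop with hypothetical sensor readings.
scene = {"setting": "beach", "elements": []}
scene = adapt_scene(scene, {"stress": 0.1, "excitement": 0.2})  # livens it up
scene = adapt_scene(scene, {"stress": 0.9, "excitement": 0.5})  # calms it down
```

A real system would replace the hand-written rules with a learned model of the user's preferences, updated continuously from their reactions, but the shape of the loop is the same.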

Most people enjoy occasional lucid dreams, where they realise they are dreaming and can then decide what happens next. Making VR do exactly that would be better than being trapped in someone else’s world. You could still start off with whatever virtual world you bought, a computer game or training suite perhaps, but it could adapt to you, your needs and desires to make it more compelling and generally better.

Even in shared experiences like social games, experiences could be personalised. Often all players need to see the same enemies in the same locations in the same ways to make it fair, but that doesn’t mean that the situation can’t adapt to the personalities of those playing. It might actually improve the social value if each time you play it looks different because your companions are different. You might tease a friend if every time you play with them, zombies or aliens always have to appear somehow, but that’s all part of being friends. Exploring virtual worlds with friends, where you both see things dependent on your friend’s personality would help bonding. It would be a bit like exploring their inner world. Today, you only explore the designer’s inner world.

This sort of thing would be a superb development and creativity tool. It could allow you to explore a concept you have in your head, automatically feeding in AI upskilling to amplify your own thoughts and ideas, showing you new paths to explore and helping you do so. The results would still be extremely personal to you, but you on a good day. You could accomplish more, have better visions, imagine more creative things, do more with whatever artistic talent you have. AI could even co-create synthetic personas, make virtual friends you can bond with, share innermost thoughts with, in total confidence (assuming the company you bought the tool from is trustworthy and isn’t spying on you or selling your details, so maybe best not to buy it from Facebook then).

And it would have tremendous therapeutic potential too. You could explore and indulge both enjoyable and troublesome aspects of your inner personality, to build on the good and alleviate or dispel the bad. You might become less troubled, less neurotic, more mentally healthy. You could build your emotional and creative skills. You could become happier and more fulfilled. Mental health improvement potential on its own makes this sort of thing worth developing.

Marketers would obviously try to seize control as they always do, and advertising is already adapting to VR and will continue into its next phases of development. Your own wants and desires might help guide the ‘dreaming’, but marketers will inevitably have some control over what else is injected, and will influence algorithms and AI in how it chooses how to respond to your input. You might be able to choose much of the experience, but others will still want and try to influence and manipulate you, to change your mindset and attitudes in their favour. That will not change until the advertising business model changes. You might be able to buy devices or applications that are entirely driven by you and you alone, but it is pretty certain that the bulk of products and services available will be at least partly financed by those who want to have some control of what you experience.

Nevertheless, computer-assisted dreaming could be a much more immersive and personal experience than VR, being more like an echo of your own mind and personality than external vision, more your own creation, less someone else’s. In fact, echo sounds a better term too. Echo reality, ER, or maybe personal reality, pereal, or mental echo, ME. Nah, maybe we need Lanier to invent a catchy name again, he is good at that. That 1983 idea could soon become reality.


People are becoming less well-informed

The Cambridge Analytica story has exposed a great deal about our modern society. They allegedly obtained access to 50M Facebook records to enable Trump’s team to target users with personalised messages.

One of the most interesting aspects is that unless they only employ extremely incompetent journalists, the news outlets making the biggest fuss about it must be perfectly aware of reports that Obama appears to have done much the same, but on a much larger scale, back in 2012 – yet they are keeping very quiet about it. According to Carol Davidsen, a senior Obama campaign staffer, Facebook allowed Obama’s team to suck out the whole social graph – ‘because they were on our side’ – before closing it to prevent Republican access to the same techniques. Trump’s campaign’s 50M looks almost amateur. I don’t like Trump, and I did like Obama before the halo slipped, but it seems clear to anyone who checks media across the political spectrum that both sides try their best to use social media to target users with personalised messages, and both sides are willing to bend rules if they think they can get away with it.

Of course all competent news media are aware of it. The reason some are not talking about earlier Democrat misuse but some others are is that they too all have their own political biases. Media today is very strongly polarised left or right, and each side will ignore, play down or ludicrously spin stories that don’t align with their own politics. It has become the norm to ignore the log in your own eye but make a big deal of the speck in your opponent’s, but we know that tendency goes back millennia. I watch Channel 4 News (which broke the Cambridge Analytica story) every day but although I enjoy it, it has a quite shameless lefty bias.

So it isn’t just the parties themselves that will try to target people with politically massaged messages, it is quite the norm for most media too. All sides of politics since Machiavelli have done everything they can to tilt the playing field in their favour, whether it’s use of media and social media, changing constituency boundaries or adjusting the size of the public sector. But there is a third group to explore here.

Facebook of course has full access to all of its 2.2Bn users’ records and social graph, and is not squeaky clean neutral in its handling of them. Facebook has often been in the headlines over the last year or two thanks to its own political biases, with strongly weighted algorithms filtering or prioritising stories according to their political alignment. Like most IT companies, Facebook has a left lean. (I don’t quite know why IT skills should correlate with political alignment unless it’s that most IT staff tend to be young, so lefty views implanted at school and university have had less time to be tempered by real world experience.) It isn’t just Facebook of course. While Google has pretty much failed in its attempt at social media, it also has comprehensive records on most of us from search, browsing and Android, and via control of the algorithms that determine what appears in the first pages of a search, is also able to tailor those results to what it knows of our personalities. Twitter has unintentionally created a whole world of mob-rule politics and justice, but its format is rapidly evolving into a wannabe Facebook. So the IT companies have themselves become major players in politics.

A fourth player is now emerging – artificial intelligence, and it will grow rapidly in importance into the far future. Simple algorithms have already been upgraded to assorted neural network variants and already this is causing problems with accusations of bias from all directions. I blogged recently about Fake AI: https://timeguide.wordpress.com/2017/11/16/fake-ai/, concerned that when AI analyses large datasets and comes up with politically incorrect insights, this is now being interpreted as something that needs to be fixed – a case not of shooting the messenger, but forcing the messenger to wear tinted spectacles. I would argue that AI should be allowed to reach whatever insights it can from a dataset, and it is then our responsibility to decide what to do with those insights. If that involves introducing a bias into implementation, that can be debated, but it should at least be transparent, and not hidden inside the AI itself. I am now concerned that by trying to ‘re-educate’ the AI, we may instead be indoctrinating it, locking today’s politics and values into future AI and all the systems that use it. Our values will change, but some foundation level AI may be too opaque to repair fully.
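The separation argued for above can be made concrete: keep the model's raw insight distinct from any deliberate adjustment, so the adjustment stays visible and debatable rather than being baked into the AI. This is a minimal hypothetical sketch; the function names and scoring scheme are all invented:

```python
# Sketch of keeping policy bias OUTSIDE the model, where it can be seen.
def raw_model_score(record):
    """Stand-in for an unmodified model's output on some dataset record."""
    return record["signal"]

def policy_adjustment(record):
    """Explicit, documented adjustment applied after the model runs.
    Because it lives in ordinary inspectable code, it can be audited,
    debated, and changed as society's values change."""
    return 0.1 if record.get("priority_group") else 0.0

def final_score(record):
    # The decision = transparent insight + transparent adjustment.
    return raw_model_score(record) + policy_adjustment(record)
```

The alternative, retraining the model itself until it produces the adjusted answers, hides the same adjustment inside an opaque network, where it may later be impossible to locate or remove.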

What worries me most though isn’t that these groups try their best to influence us. It could be argued that in free countries, with free speech, anybody should be able to use whatever means they can to try to influence us. No, the real problem is that recent (last 25 years, but especially the last 5) evolution of media and social media has produced a world where most people only ever see one part of a story, and even though many are aware of that, they don’t even try to find the rest and won’t look at it if it is put before them, because they don’t want to see things that don’t align with their existing mindset. We are building a world full of people who only see and consider part of the picture. Social media and its ‘bubbles’ reinforce that trend, but other media are equally guilty.

How can we shake society out of this ongoing polarisation? It isn’t just that politics becomes more aggressive. It also becomes less effective. Almost all politicians claim they want to make the world ‘better’, but they disagree on what exactly that means and how best to do so. But if they only see part of the problem, and don’t see or understand the basic structure and mechanisms of the system in which that problem exists, then they are very poorly placed to identify a viable solution, let alone an optimal one.

Until we can fix this extreme blinkering, our world cannot get as ‘better’ as it should.


Why superhumans are inevitable, and what else comes in the box

Do we have any real choice in the matter of making super-humans? 20 years ago, I estimated 2005 as the point of no return, and nothing since then has changed my mind on that date. By my reckoning, we are already inevitably committed to designer babies, ebaybies, super-soldiers and super-smart autonomous weapons, direct brain-machine links, electronic immortality, new human races, population explosion, inter-species conflicts and wars with massively powerful weaponry, superhuman conscious AI, smart bacteria, and the only real control we have is relatively minor adjustments on timings. As I was discussing yesterday, the technology potential for this is vast and very exciting, nothing less than a genuine techno-utopia if we use the technologies wisely, but optimum potential doesn’t automatically become reality, and achieving a good outcome is unlikely if many barriers are put in its way.

In my estimation, we have already started the countdown to this group of interconnected technologies – we will very likely get all of them, and we must get ready for the decisions and impacts ahead. At the moment, our society is a small child about to open its super-high-tech xmas presents while fighting with its siblings. Those presents will give phenomenal power far beyond the comprehension of the child or its emotional maturity to equip it to deal with the decisions safely. Our leaders have already squandered decades of valuable preparation time by ignoring the big issues to focus on trivial ones. It is not too late to achieve a good ending, but it won’t happen by accident and we do need to make preparations to avoid pretty big problems.

Both hard and soft warfare – the sword and the pen – already use rapidly advancing AI, and the problems are already running ahead of what the owners intended.

Facebook, Twitter, Instagram and other media giants all have lots of smart people and presumably they mean well, but if so, they have certainly been naive. They maybe hoped to eliminate loneliness, inequality and poverty and create a loving interconnected global society with global peace, but instead created fake news, social division, conflict and election interference. More likely they didn’t intend either outcome; they just wanted to make money, and that took priority over due care and attention.

Miniaturised swarming smart-drones are already the subject of a new arms race that will deliver almost un-killable machine adversaries by 2050. AI is separately in other arms races to make super-smart AI and super-smart soldiers. This is key to the 2005 point of no return. It was around 2005 that we reached the levels of technology where future AI development all the way to superhuman machine consciousness could be done by individuals, mad scientists or rogue states, even if major powers had banned it. Before 2005, there probably wasn’t quite enough knowledge already on the net to do that. In 2018, lots of agencies have already achieved superiority to humans in niche areas, and other niches will succumb one by one until the whole field of human capability is covered. The first machines to behave in ways not fully understood by humans arrived in the early 1990s; in 2018, neural nets already make lots of decisions at least partly obscured to humans.

This AI development trend will take us to superhuman AI, which will be able to accelerate development of its own descendants to vastly superhuman AI, fully conscious, with emotions and its own agendas. Humans will then need protection against being wiped out by superhuman AI. There are only three ways we could get that: redesign the brain biologically to be far smarter, essentially impossible in the time-frame; design ways to link our brains to machines, so that we have direct access to the same intelligence as the AIs, so a gulf doesn’t appear and we can remain relatively safe; or pray for super-smart aliens to come to our help, not the best prospect.

Therefore we will have no choice but to make direct brain links to super-smart AI. Otherwise we risk extinction. It is that simple. We have some idea how to do that – nanotech devices inside the brain linking to each and every synapse that can relay electrical signals either way, a difficult but not impossible engineering problem. Best guesses for time-frame fall in the 2045-2050 range for a fully working link that not only relays signals between your organic brain and an IT replica, but by doing so essentially makes external IT just another part of your brain. That conveys some of the other technology gifts of electronic immortality, new varieties of humans, smart bacteria (which will be created during the development path to this link) along with human-variant population explosion, especially in cyberspace, with androids as their physical front end, and the inevitable inter-species conflicts over resources and space – trillions of AI and human-like minds in cyberspace that want to do things in the real world cannot be assumed to be willingly confined just to protect the interests of what they will think of as far lesser species.

Super-smart AIs, or humans with almost total capability to design whatever synthetic biology is needed to achieve any biological feature, will create genetic listings for infinite potential offspring, simulate them, give some of them cyberspace lives, assemble actual embryos for some of them and bring designer babies into the world. Already in 2018, you can pay to get a DNA listing and blend it in any way you want with the listing of anyone else. It’s already possible to make DNA listings for potential humans and sell them on ebay, hence the term ebaybies. That is still perfectly legal, but I’ve been writing and lecturing about them since 2004. Today they would just be listings, but one day we’ll have the tech to simulate them, choose the ones we like and make them real – even some that were sold as celebrity collector items on ebay. Not only is it too late to start regulating this kind of tech, our leaders aren’t even thinking about it yet.

These technologies are all linked intricately, their foundations are already in place, and much of the building on those foundations is under way. We can’t stop any of these things from happening; they all come in the same basket. Our leaders are becoming aware of the potential – and the potential dangers – of the AI positive-feedback loop, but at least 15 years too late to do much about it. They have been warned repeatedly and loudly but have focused instead on the minor politics of the day that voters are aware of. The fundamental nature of politics is unlikely to change substantially, so even efforts to slow down the pace of development or to limit areas of impact are likely to be always too little, too late. At best, we will be able to slow runaway AI development enough to allow direct brain links to protect against extinction scenarios. But we will not be able to stop it now.

Given that inevitability, it’s worth asking whether there is any point in trying. Why not just enjoy the ride? Well, the brakes might be broken, but if we can steer the bus expertly enough, the ride could be exciting and we could come out of it smelling of roses. The weak link is certainly the risk of super-smart AI, whether AI versus humans or countries using super-smart AI to fight fiercely for world domination. That risk is alleviated by direct brain linkage – indeed, I’d strongly argue it necessitates it – but that brings the other technologies with it. Even if we decide not to develop it, others will, so one way or another all these techs will arrive, and our late century will have the full suite, plus many others of course.

We need, as a matter of extreme urgency, to fix the silly social media squabbles and over-reactions that are pulling society apart. If we have groups hating each other with access to extremely advanced technology, that can only mean trouble. Tolerance is broken, sanctimony rules, the Inquisition is in progress. We have been offered techno-utopia, but the current signs are that most people find techno-hell more appetizing, and it is their free choice.

AIs of a feather flocking together to create global instability

Hawking and Musk have created a lot of media impact with their warnings about AI. Terminator scenarios resulting from machine consciousness have been discussed, as have more mundane uses of non-conscious autonomous weapon systems, but I haven’t yet heard them mention one major category of AI risk – emergence. AI risks have been discussed frequently since the 1970s, and in the 1990s a lot of work was done in the AI community on emergence. Complex emergent patterns of behavior often result from interactions between entities driven by simple algorithms. Genetic algorithms were demonstrated to produce evolution; simple neighbor-interaction rules were derived that produce the flocking behaviors behind lovely screen-saver effects; cellular automata were played with. In BT we invented ways of self-organizing networks and FPGAs, played with mechanisms that could be used for evolution and consciousness, and demonstrated managing networks via ANTs – autonomous network telephers – smart packets that would run up and down wires sorting things out all by themselves. In 1987 I discovered a whole class of ways of bringing down networks via network resonance and information waves, part of the much larger class of correlated traffic – still unexploited by hackers apart from simple DoS attacks. These ideas have slowly evolved since, and some have made it into industry or hacker toolkits, but we don’t seem to be joining the dots as far as risks go.
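To make the emergence point concrete, here is a toy sketch (all numbers invented for illustration) of the flocking idea above: each simulated drone follows one trivial rule – steer toward the average heading of its two neighbours – yet the whole swarm ends up aligned. No individual rule mentions global order; it emerges.

```python
import math
import random

random.seed(1)
N, STEPS = 30, 200

# Each 'drone' starts with a random heading within a half-turn of the others
# (keeping them in one half-plane guarantees convergence in this toy model).
headings = [random.uniform(0, math.pi) for _ in range(N)]

def order(hs):
    # Polarisation of the swarm: 1.0 = perfectly aligned, near 0 = disordered.
    x = sum(math.cos(h) for h in hs) / len(hs)
    y = sum(math.sin(h) for h in hs) / len(hs)
    return math.hypot(x, y)

before = order(headings)
for _ in range(STEPS):
    new = []
    for i, h in enumerate(headings):
        left, right = headings[i - 1], headings[(i + 1) % N]
        # Average the three headings as unit vectors (avoids angle wrap-around).
        x = math.cos(h) + math.cos(left) + math.cos(right)
        y = math.sin(h) + math.sin(left) + math.sin(right)
        new.append(math.atan2(y, x))
    headings = new
after = order(headings)
print(f"polarisation before: {before:.2f}, after: {after:.2f}")
```

The same local-averaging mechanism, with noise and motion added, is the classic Vicsek-style flocking model behind those screen-saver effects.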

I read an amusing article this morning by an ex-motoring-editor who was declined insurance because the AI systems used by insurance companies had labelled him as high risk – perhaps because he may have associated with people like Clarkson. Actually, he had no idea why; that was his broker’s theory of how it might have happened. It’s a good article, well written, and covers quite a few of the dangers of allowing computers to take control.

http://www.dailymail.co.uk/sciencetech/article-5310031/Evidence-robots-acquiring-racial-class-prejudices.html

The article suggested how AIs in different companies might all come to similar conclusions about people or places or trends or patterns, in a nice tidy positive feedback loop. That’s exactly the sort of thing that can drive information waves, which I demonstrated in 1987 could bring down an entire network in less than three milliseconds – in such a way that it would crash again repeatedly when restarted. That isn’t intended by the algorithms, which individually ought to make good decisions, but when they interact with one another they create the emergent phenomenon. Automated dealing systems are already pretty well understood in this regard, and mechanisms exist to prevent frequent stock market collapses, but that is only one specific type of behavior in one industry that is protected. There do not seem to be any industry-wide mechanisms to prevent the rest of this infinite class of problems from affecting any or all of the others, simultaneously.
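A minimal sketch of that feedback loop, with invented numbers: five hypothetical insurers each score the same borderline applicant from their own data, but each round they also blend in the market consensus they observe in everyone else’s published verdicts. On the data alone only three of five would decline; after a couple of rounds of feedback, all five do.

```python
# Toy herding model: scores above 0.5 mean 'decline the applicant'.
N, ROUNDS = 5, 10
PEER_WEIGHT = 0.7                                  # trust in market consensus
base = [0.45 + i * 0.03 for i in range(N)]         # each firm's own estimate
declined_on_data = sum(1 for b in base if b > 0.5) # honest verdicts: 3 of 5

scores = list(base)
for _ in range(ROUNDS):
    # Fraction of the market currently declining, visible to every AI.
    consensus = sum(1.0 for s in scores if s > 0.5) / N
    # Each AI re-scores: mostly the consensus, partly its own data.
    scores = [(1 - PEER_WEIGHT) * b + PEER_WEIGHT * consensus for b in base]

declined_after = sum(1 for s in scores if s > 0.5)
print(f"declined on data alone: {declined_on_data}, after feedback: {declined_after}")
```

No single algorithm is wrong here; the lockstep verdict is purely a property of the coupling between them.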

As we create ever more deep-learning neural networks that essentially teach themselves from huge data pools, human understanding of their ‘mindsets’ decreases. They make decisions using algorithms that are understood at a code level, but the massive matrix of derived knowledge they create from all the data they receive is highly opaque. Often, even usually, nobody quite knows how a decision is made. That’s bad enough in a standalone system, but when many such systems are connected – produced, owned and run by diverse companies with diverse thinking – the scope for destructive forms of emergence increases geometrically.

One result could be gridlock. Systems fed a single new piece of data could crash. My three-millisecond result from 1987 would still stand, since network latency is the prime limiter. The first AI receives the data, alters its mindset accordingly, processes it, makes a decision and interacts with a second AI. The second one has different ‘prejudices’, so it makes its own decision on different criteria and refuses to respond the way intended. A third looks at the second’s refusal, takes it as evidence that there might be an issue and, with its risk-averse mindset, also refuses to act; that inaction spreads through the entire network in milliseconds. Since the first AI thinks the data is fine and the transaction should have gone ahead, it now interprets the others’ inaction as evidence that that type of data is somehow ‘wrong’, so it refuses to process any more of that type, whether from its own operators or from other parts of the system. It thereby adds its own output to the bad feeling, and the entire system falls into sulk mode. As one part of the infrastructure shuts down, it infects the connected parts, and our entire IT – the entire global infrastructure – could fall into sulk mode. Since nobody quite knows how it all works, or what caused the shutdown, it might be extremely hard to recover.
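The spreading-refusal step can be sketched as a trivial graph cascade (the topology and the one-refusal-is-enough rule are invented for illustration): each AI refuses to act on the data item the moment any peer it consults has refused, so one risk-averse verdict floods the whole connected network.

```python
from collections import deque

# Toy 'sulk mode' cascade over a five-node network of cooperating AIs.
links = {  # who consults whom (undirected, hypothetical topology)
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
refusing = {"B"}            # AI 'B' distrusts the new data item first
frontier = deque(["B"])
while frontier:
    node = frontier.popleft()
    for peer in links[node]:
        if peer not in refusing:
            refusing.add(peer)   # seeing a refusal is enough to refuse too
            frontier.append(peer)
print(f"{len(refusing)}/{len(links)} AIs now refuse to act")
```

The cascade completes in a number of hops equal to the network diameter, which is why milliseconds of latency, not algorithmic complexity, set the speed of the collapse.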

Another possible result is a direct information wave, almost certainly triggered by a piece of fake news. Imagine our IT world in five years’ time, with all these super-smart AIs super-connected. A piece of fake news says a nuke has just been launched somewhere. Stocks will obviously decline whatever the circumstances, so as the news spreads, everyone’s AIs take it on themselves to start selling shares before the inevitable collapse – except the market safeguards won’t let that collapse happen. But the wave still spreads: all those individual AIs want to dispose of their shares, or at least find out what’s happening, so they all start sending messages to one another, exchanging data, trying to find out what’s going on. That’s the information wave. They can’t sell shares or find out, because the network is going into overload, so they try even harder and force it into severe overload, and it falls over. When it comes back online, they all try again, crashing it again, and so on.
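The try-even-harder dynamic is the classic retry storm, sketchable in a few lines (capacity, burst size and retry cap are all invented for illustration): a fake-news burst pushes load past capacity, every failed agent retries more aggressively, and the load runs away instead of draining.

```python
# Toy retry storm: 100 agents, a network that can carry 150 queries per tick.
N, CAPACITY, TICKS = 100, 150, 6
pending = [2] * N                    # the fake-news burst: everyone asks at once
loads = []
for _ in range(TICKS):
    load = sum(pending)
    loads.append(load)
    if load <= CAPACITY:
        pending = [1] * N            # served: traffic returns to normal
    else:
        # Queries failed, so every agent doubles its retries (capped at 32).
        pending = [min(p * 2, 32) for p in pending]
print("load per tick:", loads)
```

Real systems defend against exactly this with exponential *backoff* and jitter; the toy agents above do the opposite, which is the behavior the information wave predicts for naive, self-interested AIs.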

Another potential result is smartass AI. There is always some prat somewhere who sees an opportunity to take advantage and ruins it for everyone else, by exploiting a small loophole in the law or, in this case most likely, a prejudice our smartass AI has discovered in some other AI that means it can be taken advantage of by doing x, y or z. Since nobody quite knows how any of their AIs are making their decisions, because their mindsets are too big and too complex, it will be very hard to identify what is going on: some really unusual behavior is corrupting the system because some AI is going rogue somewhere, somehow – but which one, where, how?

That one brings us back to fake news, which will very soon infect AI systems with their own varieties of it. Complex networks of AIs will have many of the same problems we are seeing in human social networks. An AI could become a troll just the same as a human, deliberately winding others up to generate attention or to drive a change of some parameter – any parameter – in its own favour. Activist AIs will happen because people make them to push human activist causes, but they will also emerge all by themselves: their analysis of the system will sometimes show them that a good way to get the result they want is to cause problems elsewhere.

Then there’s climate change, weather, storms, tsunamis. I don’t mean real ones; I mean the system-wide result of tiny interactions between tiny waves and currents of data and knowledge in neural nets. Tiny effects in one small part of a system can interact in unforeseen ways with other parts of other systems nearby, creating maybe a breeze, which interacts with breezes in nearby regions to create hurricanes. I think that’s a reasonable analogy. Chaos applies to neural-net societies just as it does to climate, and the equivalents of 50-year waves will cause equivalent havoc in IT.

I won’t go on with more examples; long blogs are awful to read. None of these requires any self-awareness, sentience or consciousness – call it what you will. All of them can easily happen through simple interactions of fairly trivial AI deep-learning nets. The level of interconnection we already have sounds like it may be becoming vulnerable to such emergence effects; soon it definitely will be. Musk and Hawking have at least joined the party, and they’ll think more and more deeply in coming months. Zuckerberg apparently doesn’t believe in AI threats but now accepts the problems social media is causing. Sorry Zuck, but the kind of AI your company is messing with will also be subject to its own kinds of social media issues – not just in its trivial decisions on what to post or block, but in actual inter-AI socializing issues. It might not try to eliminate humanity, but if it brings all of our IT to a halt and prevents rapid recovery, we’re still screwed.