Category Archives: censorship

The future of reproductive choice

I’m not taking sides on the abortion debate, just drawing maps of the potential future, so don’t shoot the messenger.

An average baby girl is born with a million eggs, still has 300,000 when she reaches puberty, and subsequently releases 300–400 of these over her reproductive lifetime. Typically one or two will become kids, but today a woman has no way of deciding which ones, and she certainly has no control over which sperm is used beyond choosing her partner.

Surely it can’t be very far in the future (as a wild guess, say 2050) before we fully understand the links between how someone turns out and their genetics, along with all the other biological factors involved in determining outcome. That knowledge could then notionally be used to create some sort of nanotech (aka magic) gate that would allow a woman to choose which of her eggs get to be ovulated and potentially fertilized, discarding the ones she isn’t interested in and going ahead when she’s released a good one. Maybe by 2060, women would also be able to filter sperm the same way, helping some through while blocking others. Choice needn’t be limited to whether to have a baby or not, but which baby.

By choosing a particularly promising egg and then the sperm that would combine best with it, an embryo might be created only if it is likely to result in the right person (perhaps an excellent athlete, or an artist, or a scientist, or just someone good looking), or deselected if it would become the wrong person (e.g. a terrorist, criminal, saxophonist, Republican).

However, by the time we have the technology to do that, and even before we fully know what gene combos result in what features, we would almost certainly be able to simply assemble any chosen DNA and insert it into an egg from which the DNA has been removed. That would seem a more reliable mechanism to get the ‘perfect’ baby than choosing from a long list of imperfect ones. Active assembly should beat deselection from a random list.

By then, we might even be using new DNA bases that don’t exist in nature, invented by people or AI to add or control features or abilities nature doesn’t reliably provide for.

If we can do that, and if we know how to simulate how someone might turn out, then we could go further and create lots of electronic babies that live their entire lives in an electronic Matrix style existence. Let’s expand on that briefly.

Even today, couples can store eggs and sperm for later use, but with this future genetic assembly, it will become feasible to create offspring from nothing more than a DNA listing. Both members of a couple, of any sex, could get a record of their DNA, randomize combinations with their partner’s DNA and thus build a massive library of potential offspring. They may even be able to buy listings of celebrity DNA from the net. This creates the potential for greatly delayed birth and tradable ‘ebaybies’: DNA listings are not alive, so current laws don’t forbid trading in them. These listings could however be used to create electronic ‘virtual’ offspring, simulated in a computer memory instead of being born organically. Various degrees of existence are possible with varied awareness. Couples may have many electronic babies as well as a few real ones. They may even wait to see how a simulation works out before deciding which kids to make for real. If an electronic baby turns out particularly well, it might be promoted to actual life via DNA assembly and real pregnancy. The following consequences are obvious:

Trade-in and collection of DNA listings, virtual embryos, virtual kids etc, that could actually be fabricated at some stage

Re-birth, potential to clone and download one’s mind or use a direct brain link to live in a younger self

Demands by infertile and gay couples to have babies via genetic assembly

Ability of kids to own entire populations of virtual people, who are quite real in some ways.
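As a purely illustrative toy model of the ‘massive library of potential offspring’ idea above (the 23-chromosome-pair simplification and all names here are my own assumptions, not any real genomics API), randomizing combinations of two DNA listings might be sketched like this:

```python
import random

def random_offspring(parent_a, parent_b, rng):
    # For each of the 23 chromosome pairs, pick one chromosome
    # from each parent at random - a crude stand-in for meiosis.
    return [(rng.choice(pair_a), rng.choice(pair_b))
            for pair_a, pair_b in zip(parent_a, parent_b)]

def offspring_library(parent_a, parent_b, size, seed=0):
    # Build a library of 'size' randomized potential offspring.
    rng = random.Random(seed)
    return [random_offspring(parent_a, parent_b, rng) for _ in range(size)]

# Toy 'DNA listings': 23 chromosome pairs, labelled by origin.
mum = [(f"M{i}a", f"M{i}b") for i in range(23)]
dad = [(f"D{i}a", f"D{i}b") for i in range(23)]

library = offspring_library(mum, dad, size=1000)
```

Even this crude model allows 2^23 possible gametes per parent, so around 7×10^13 distinct pairings, which hints at how large such a library could become.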

It is clear that this whole technology field is rich in ethical issues! But we don’t need to go deep into future tech to find more of those. Just following current political trends to their logical conclusions introduces a lot more. I’ve written often on the random walk of values, and we cannot be confident that many values we hold today will still reign in decades’ time. Where might this random walk lead? Let’s explore some more.

Even in ‘conventional’ pregnancies, although the right to choose has been firmly established in most of the developed world, a woman usually has very little information about the fetus and has to make her decision almost entirely based on her own circumstances and values. The proportion of abortions related to known fetal characteristics such as genetic conditions or abnormalities is small. Most decisions can’t yet take any account of what sort of person that fetus might become. We should expect future technology to provide far more information on fetus characteristics and likely future development. If a woman were better informed on likely outcomes, might that sometimes affect her decision, in either direction?

In some circumstances, potential outcome may be less certain and an informed decision might require more time or more tests. To allow for that without reducing the right to choose, a possible future law could allow for conditional terminations, registered before a legal time limit but performed later (before another time limit) when more is known. This period could be used for more medical tests, or to advertise the baby to potential adopters who want a child just like that one, or simply to allow more time for the mother to see how her own circumstances change. Between 2005 and 2015, the US abortion rate dropped from 1 in 6 pregnancies to 1 in 7, while in the UK, 22% of pregnancies are terminated. What would these figures be if women could determine what future person would result? Would the termination rate increase? To 30%, 50%? Abandon this one and see if we can make a better one? How many of us would exist if our parents had known then what they know now?

Whether and how late terminations should be permitted is still fiercely debated. There is already discussion about allowing terminations right up to birth and even after birth in particular circumstances. If so, then why stop there? We all know people who make excellent arguments for retrospective abortion. Maybe future parents should be allowed to decide whether to keep a child right up until it reaches its teens, depending on how the child turns out. Why not 16, or 18, or even 25, when people truly reach adulthood? By then they’d know what kind of person they’re inflicting on the world. Childhood and teen years could simply be a trial period. And why should only the parents have a say? Given an overpopulated world with an infinite number of potential people that could be brought into existence, perhaps the state could also demand a high standard of social performance before assigning a life license. The Chinese state already uses surveillance technology to assign social scores. It is a relatively small logical step further to link that to life licenses that require periodic renewal. Go a bit further if you will, and link that thought to the blog I just wrote on future surveillance: https://timeguide.wordpress.com/2019/05/19/future-surveillance/.

Those of you who have watched Logan’s Run will be familiar with the idea of compulsory termination at a certain age. Why not instead have a flexible age that depends on social score? It could range from zero to 100. A pregnancy might only be permitted if the genetic blueprint passes a suitability test, and then, as nurture and environmental factors play their roles as a person ages, their life license could be renewed (or not) every year. A range of crimes might also result in withdrawal of a license, and subsequent termination.

Finally, what about AI? Future technology will allow us to make hybrids, symbionts if you like, with a genetically edited human-ish body and a mind that is part human, part AI, with the AI acting partly as enhancement and partly as a control system. Maybe a future state could insist that a state ‘guardian’ be installed into the embryo, a ‘supervisory AI’, essentially a deeply embedded police officer/judge/jury/executioner, as a condition of granting the life license.

Random walks are dangerous. You can end up where you start, or somewhere very far away in any direction.

The legal battles and arguments around ‘choice’ won’t go away any time soon. They will become broader, more complex, more difficult, and more controversial.


Future Surveillance

This is an update of my last surveillance blog 6 years ago, much of which is common discussion now. I’ll briefly repeat key points to save you reading it.

They used to say

“Don’t think it

If you must think it, don’t say it

If you must say it, don’t write it

If you must write it, don’t sign it”

Sadly this wisdom is already as obsolete as Asimov’s Laws of Robotics. The last three lines have already been automated.

I recently read of new headphones designed to recognize thoughts so they know what you want to listen to. Simple thought recognition in various forms has been around for 20 years now. It is slowly improving, but with smart networked earphones we’re already providing an easy platform into which to sneak better monitoring and better thought detection. Sold on convenience and ease of use, of course.

You already know that Google and various other large companies have very extensive records documenting many areas of your life. It’s reasonable to assume that any or all of this could be demanded by a future government. I trust Google and the rest to a point, but not a very distant one.

Your phone, TV, Alexa, or even your networked coffee machine may listen in to everything you say, sending audio records to cloud servers for analysis, and you only have naivety as defense against those audio records being stored and potentially used for nefarious purposes.

Some next generation games machines will have 3D scanners and UHD cameras that can even see blood flow in your skin. If these are hacked or left switched on – and social networking video is one of the applications they are aiming to capture, so they’ll be on often – someone could watch you all evening, capture the most intimate body details, and film your facial expressions and gaze direction while you are looking at a known image on a particular part of the screen. Monitoring pupil dilation, smiles, anguished expressions and so on could provide a lot of evidence about your emotional state, with a detailed record of what you were watching and doing at exactly that moment, and with whom. By monitoring blood flow and pulse via your Fitbit or smartwatch, and additionally monitoring skin conductivity, your level of excitement, stress or relaxation can easily be inferred. If given to the authorities, this sort of data might be used to identify pedophiles or murderers by seeing which men are excited by seeing kids on TV or get pleasure from violent games, and it is likely that this will be one of the justifications authorities use for collecting it.

Millimetre wave scanning was once controversial when it was introduced in airport body scanners, but we have had no choice but to accept it and its associated abuses – the only alternative is not to fly. 5G uses millimetre wave too, and it’s reasonable to expect that the same people who can already monitor your movements in your home simply by analyzing your wi-fi signals will be able to do a lot better by analyzing 5G signals.

As mm-wave systems develop, they could become much more widespread, so burglars and voyeurs might start using them to check whether there is anything worth stealing or videoing. Maybe some search company making visual street maps might ‘accidentally’ capture a detailed 3D map of the inside of your house when they come round, as well as or instead of everything they could access via your wireless LAN.

Add to this the ability to use drones to get close without being noticed. Drones can be very small, fly themselves and automatically survey an area using broad sections of the electromagnetic spectrum.

NFC bank and credit cards not only present risks of theft, but also add the ability to track what we spend, where, on what, and with whom. NFC capability in your phone makes some parts of life easier, but NFC has always been yet another doorway that may be left unlocked by security holes in operating systems or apps, and apps themselves carry many assorted risks. Many apps ask for far more permissions than they need to do their professed tasks, and their owners collect vast quantities of information for purposes known only to them and their clients. Obviously data can be collected using a variety of apps and linked together at its destination. Not all providers are honest, and apps are still very inadequately regulated and policed.

We’re seeing increasing experimentation with facial recognition technology around the world, from China to the UK, and only a few authorities so far, such as in San Francisco, have had the wisdom to ban its use. Heavy-handed UK police, who increasingly police according to their own political agenda even at the expense of policing actual UK law, have already fined people who covered themselves to avoid being abused in face recognition trials. It is reasonable to assume they would gleefully seize any future opportunity to access and cross-link all of the various data pools currently being assembled under the excuse of reducing crime, but with the real intent of policing their own social engineering preferences. Using advanced AI to mine zillions of hours of full-sensory data input on every one of us, gathered via all this routine IT exposure and extensive and ubiquitous video surveillance, they could deduce everyone’s attitudes to just about everything: the real truth about our attitudes to every friend and family member or TV celebrity or politician or product, our detailed sexual orientation, any fetishes or perversions, our racial attitudes, political allegiances, attitudes to almost every topic ever aired on TV or in everyday conversation, how hard we are working, how much stress we are experiencing, and many aspects of our medical state.

It doesn’t even stop with public cameras. Innumerable cameras and microphones on phones, visors, and high street private surveillance will automatically record all this same stuff for everyone, sometimes with benign declared intentions such as making self-driving vehicles safer, sometimes using social media tribes to capture any kind of evidence against ‘the other’. In-depth evidence will become available to back up prosecutions of crimes that today would not even be noticed. Computers that can retrospectively data-mine evidence collected over decades and link it all together will be able to identify billions of real or invented crimes.

Active skin will one day link your nervous system to your IT, allowing you to record and replay sensations. You will never be able to be sure that you are the only one that can access that data either. I could easily hide algorithms in a chip or program that only I know about, that no amount of testing or inspection could ever reveal. If I can, any decent software engineer can too. That’s the main reason I have never trusted my IT – I am quite nice but I would probably be tempted to put in some secret stuff on any IT I designed. Just because I could and could almost certainly get away with it. If someone was making electronics to link to your nervous system, they’d probably be at least tempted to put a back door in too, or be told to by the authorities.

The current panic about face recognition is justified. Other AI can lipread better than people and recognize gestures and facial expressions better than people. It adds the knowledge of everywhere you go, everyone you meet, everything you do, everything you say and even every emotional reaction to all of that to all the other knowledge gathered online or by your mobile, fitness band, electronic jewelry or other accessories.

Fools utter the old line: “if you are innocent, you have nothing to fear”. Do you know anyone who is innocent? Of everything? Who has never ever done or even thought anything even a little bit wrong? Who has never wanted to do anything nasty to anyone for any reason ever? And that’s before you even start to factor in corruption of the police or mistakes or being framed or dumb juries or secret courts. The real problem here is not the abuses we already see. It is what is being and will be collected and stored, forever, that will be available to all future governments of all persuasions and police authorities who consider themselves better than the law. I’ve said often that our governments are often incompetent but rarely malicious. Most of our leaders are nice guys, only a few are corrupt, but most are technologically inept. With an increasingly divided society, there’s a strong chance that the ‘wrong’ government or even a dictatorship could get in. Which of us can be sure we won’t be up against the wall one day?

We’ve already lost the battle to defend privacy. The only bits left are where the technology hasn’t caught up yet. In the future, not even the deepest, most hidden parts of your mind will be private. Pretty much everything about you will be available to an AI-upskilled state and its police.

Monopoly and diversity laws should surely apply to political views too

With all the calls for staff diversity and equal representation, one important area of difference has so far been left unaddressed: political leaning. In many organisations, the political views of staff don’t matter. Nobody cares about the political views of staff in a double glazing manufacturer because they are unlikely to affect the quality of a window. However, in an organisation that has a high market share in TV, social media or internet search, or that is a government department or a public service, political bias can have far-reaching effects. If too many of its staff and their decisions favor a particular political view, it is in danger of becoming what is sometimes called ‘the deep state’. That is, their everyday decisions and behaviors might privilege one group over another. If most of their colleagues share similar views, they might not even be aware of their bias, because those views are the norm in their everyday world. They might think they are doing their job without fear or favor but still strongly preference one group of users over another.

Staff bias doesn’t only affect an organisation’s policies, values and decisions. It also affects recruitment and promotion, and can result in increasing concentration of a particular world view until it becomes an issue. When a vacancy appears at board level, remaining board members will tend to promote someone who thinks like themselves. Once any leaning takes hold, near monopoly can quickly result.

A government department should obviously be free of bias so that it can carry out instructions from a democratically elected government with equal professionalism regardless of its political flavor. Employees may be in positions where they can allocate resources or manpower more to one area than another, or provide analysis to ministers, or expedite or delay a communication, or emphasize or dilute a recommendation in a survey, or may otherwise have some flexibility in interpreting instructions and even laws. It is important they do so without political bias, so transparency of decision-making for external observers is needed, along with systems and checks and balances to prevent and test for bias and to rectify it when found. But even if staff don’t deliberately abuse their positions to obstruct or favor, if a department has too many staff from one part of the political spectrum, normalization of views can again cause institutional bias and behavior. It is therefore important for government departments and public services to have workforces that reflect the political spectrum fairly, at all levels. A department that implements a policy from a government of one flavor but impedes a different one from a new government of opposite flavor is in strong need of reform and re-balancing. It has become a deep state problem. Bias could be in any direction of course, but any public sector department must be scrupulously fair in its implementation of the services it is intended to provide.

Entire professions can be affected. Bias can obviously occur in any direction, but over many decades of slow change, academia has become dominated by left-wing employees, and primary teaching by almost exclusively female ones. If someone spends most of their time with others who share the same views, those views can become normalized to the point that a dedicated teacher might think they are delivering a politically balanced lesson that is actually far from it. It is impossible to spend all day teaching kids without some personal views and values rubbing off on them. The young have always been slightly idealistic and left leaning – it takes years of adult experience outside academia to learn the pragmatic reality of implementing that idealism, during which people generally migrate rightwards – but with a stronger left bias ingrained during education, it takes longer for people to unlearn naiveté and replace it with reality. Surely education should be teaching kids about all political viewpoints and teaching them how to think, so they can choose for themselves where to put their allegiance, rather than being a long process of political indoctrination?

The media has certainly become more politically crystallized and aligned in the last decade, with far fewer media companies catering for people across the spectrum. There are strongly left-wing and right-wing papers, magazines, TV and radio channels or shows. People have a free choice of which papers to read, and normal monopoly laws work reasonably well here, with proper checks when there is a proposed takeover that might result in someone getting too much market share. However, there are still clear examples of near monopoly in other places where fair representation is particularly important. In spite of frequent denials of any bias, the BBC for example was found to have a strong pro-EU/Remain bias for its panel on its flagship show Question Time:

https://iea.org.uk/media/iea-analysis-shows-systemic-bias-against-leave-supporters-on-flagship-bbc-political-programmes/

The BBC does not have a TV or radio monopoly but it does have a very strong share of influence. Shows such as Question Time can strongly influence public opinion, so if biased towards one viewpoint they could be considered as campaigning for that cause, though their contributions would lie outside electoral commission scrutiny of campaign funding. Many examples of BBC bias on a variety of social and political issues exist. It often faces accusations of bias from every direction, sometimes unfairly, so again proper transparency must exist so that independent external groups can appeal for change, be heard fairly, and have change enforced when necessary. The BBC is in a highly privileged position, paid for by a compulsory license fee on pain of imprisonment, and also in a socially and politically influential position. It is doubly important that it proportionally represents the views of the people rather than acting as an activist group using license-payer funds to push the political views of its staff, engaging in its own social engineering campaigns, or otherwise acting as a propaganda machine.

As for private industry, most isn’t in a position of political influence, but some areas certainly are. Social media have enormous power to influence the views their users are exposed to, choosing to filter or demote material they don’t approve of, as well as providing a superb activist platform. Search companies can choose to deliver results according to their own agendas, with those they support featuring earlier or more prominently than those they don’t. If social media or search companies provide different service, support or access according to the political leaning of the customer, then they can become part of the deep state. And again, with normalization creating the risk of institutional bias, the clear remedy is to ensure that these companies have a mixture of staff representative of the social mix. They seem extremely enthusiastic about doing that for other forms of diversity. They need to apply similar enthusiasm to political diversity too.

Achieving it won’t be easy. IT companies such as Google, Facebook and Twitter currently have a strong left leaning, though the problem would be just as bad if it were to swing the other way. Given the natural monopoly tendency in each sector, social media companies should be politically neutral, not deep state companies.

AI being developed to filter posts or decide how much attention they get must also be unbiased. AI algorithmic bias could become a big problem, but it is just as important that bias is judged by neutral bodies, not by people who are biased themselves, who may try to ensure that AI shares their own leaning. I wrote about this issue here: https://timeguide.wordpress.com/2017/11/16/fake-ai/

But what about government? Today’s big issue in the UK is Brexit. In spite of all its members being elected or reelected during the Brexit process, the UK Parliament nevertheless has 75% of MPs to defend the interests of the 48% who voted Remain and only 25% to represent the other 52%. Remainers get roughly three times more Parliamentary representation than Brexiters. People can choose who they vote for, but with only one candidate available from each party, voters cannot choose by more than one factor, and most people will vote by party line, preserving whatever bias exists when parties select which candidates to offer. It would be impossible to ensure that every interest is reflected proportionately, but there is another solution. I suggested that scaled votes could be used for some issues, scaling an MP’s vote weighting by the proportion of the population supporting their view on that issue:

https://timeguide.wordpress.com/2015/05/08/achieving-fair-representation-in-the-new-uk-parliament/
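Using only the figures quoted above (75% of MPs for the 48% who voted Remain, 25% for the 52% who voted Leave), a minimal sketch of that scaled-vote weighting, my own illustration rather than the exact scheme in the linked post, would be:

```python
def scaled_weight(public_share, mp_share):
    """Weight an MP's vote so a bloc's total voting power
    matches its share of public support."""
    return public_share / mp_share

# The article's figures: Remain has 75% of MPs but 48% public support.
remain_w = scaled_weight(0.48, 0.75)   # 0.64 per Remain MP
leave_w = scaled_weight(0.52, 0.25)    # 2.08 per Leave MP

# Total weighted vote now mirrors the referendum split.
total_remain = 0.75 * remain_w         # 0.48
total_leave = 0.25 * leave_w           # 0.52
```

Each Remain MP’s vote counts 0.64 and each Leave MP’s 2.08, so the weighted bloc totals match the 48/52 referendum split exactly.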

Like company boards, once a significant bias in one direction exists, political leaning tends to self-reinforce to the point of near monopoly. Deliberate procedures need to be put in place to ensure equality of representation, even when people are elected. Obviously people who benefit from current bias will resist change, but everyone loses if democracy cannot work properly.

The lack of political diversity in so many organisations is becoming a problem. Effective government may be deliberately weakened or amplified by departments with their own alternative agendas, while social media and media companies may easily abuse their enormous power to push their own sociopolitical agendas. Proper functioning of democracy requires that this problem is fixed, even if a lot of people like it the way it is.

People are becoming less well-informed

The Cambridge Analytica story has exposed a great deal about our modern society. They allegedly obtained access to 50M Facebook records to enable Trump’s team to target users with personalised messages.

One of the most interesting aspects is that unless they only employ extremely incompetent journalists, the news outlets making the biggest fuss about it must be perfectly aware of reports that Obama’s campaign appears to have done much the same but on a much larger scale back in 2012, yet they are keeping very quiet about it. According to Carol Davidsen, a senior Obama campaign staffer, Facebook allowed Obama’s team to suck out the whole social graph – because they were on our side – before closing it to prevent Republican access to the same techniques. Trump’s campaign’s 50M looks almost amateur by comparison. I don’t like Trump, and I did like Obama before the halo slipped, but it seems clear to anyone who checks media across the political spectrum that both sides try their best to use social media to target users with personalised messages, and both sides are willing to bend rules if they think they can get away with it.

Of course all competent news media are aware of it. The reason some are not talking about earlier Democrat misuse but some others are is that they too all have their own political biases. Media today is very strongly polarised left or right, and each side will ignore, play down or ludicrously spin stories that don’t align with their own politics. It has become the norm to ignore the log in your own eye but make a big deal of the speck in your opponent’s, but we know that tendency goes back millennia. I watch Channel 4 News (which broke the Cambridge Analytica story) every day but although I enjoy it, it has a quite shameless lefty bias.

So it isn’t just the parties themselves that will try to target people with politically massaged messages, it is quite the norm for most media too. All sides of politics since Machiavelli have done everything they can to tilt the playing field in their favour, whether it’s use of media and social media, changing constituency boundaries or adjusting the size of the public sector. But there is a third group to explore here.

Facebook of course has full access to all of its 2.2Bn users’ records and social graph, and is not squeaky clean neutral in its handling of them. Facebook has often been in the headlines over the last year or two thanks to its own political biases, with strongly weighted algorithms filtering or prioritising stories according to their political alignment. Like most IT companies, Facebook has a left lean. (I don’t quite know why IT skills should correlate with political alignment, unless it’s that most IT staff tend to be young, so lefty views implanted at school and university have had less time to be tempered by real world experience.) It isn’t just Facebook either. While Google has pretty much failed in its attempt at social media, it also has comprehensive records on most of us from search, browsing and Android, and via control of the algorithms that determine what appears in the first pages of a search, it is also able to tailor those results to what it knows of our personalities. Twitter has unintentionally created a whole world of mob rule politics and justice, but in format it is rapidly evolving into a wannabe Facebook. So the IT companies have themselves become major players in politics.

A fourth player is now emerging – artificial intelligence, and it will grow rapidly in importance into the far future. Simple algorithms have already been upgraded to assorted neural network variants and already this is causing problems with accusations of bias from all directions. I blogged recently about Fake AI: https://timeguide.wordpress.com/2017/11/16/fake-ai/, concerned that when AI analyses large datasets and comes up with politically incorrect insights, this is now being interpreted as something that needs to be fixed – a case not of shooting the messenger, but forcing the messenger to wear tinted spectacles. I would argue that AI should be allowed to reach whatever insights it can from a dataset, and it is then our responsibility to decide what to do with those insights. If that involves introducing a bias into implementation, that can be debated, but it should at least be transparent, and not hidden inside the AI itself. I am now concerned that by trying to ‘re-educate’ the AI, we may instead be indoctrinating it, locking today’s politics and values into future AI and all the systems that use it. Our values will change, but some foundation level AI may be too opaque to repair fully.

What worries me most though isn’t that these groups try their best to influence us. It could be argued that in free countries, with free speech, anybody should be able to use whatever means they can to try to influence us. No, the real problem is that recent (last 25 years, but especially the last 5) evolution of media and social media has produced a world where most people only ever see one part of a story, and even though many are aware of that, they don’t even try to find the rest and won’t look at it if it is put before them, because they don’t want to see things that don’t align with their existing mindset. We are building a world full of people who only see and consider part of the picture. Social media and its ‘bubbles’ reinforce that trend, but other media are equally guilty.

How can we shake society out of this ongoing polarisation? It isn’t just that politics becomes more aggressive. It also becomes less effective. Almost all politicians claim they want to make the world ‘better’, but they disagree on what exactly that means and how best to do so. But if they only see part of the problem, and don’t see or understand the basic structure and mechanisms of the system in which that problem exists, then they are very poorly placed to identify a viable solution, let alone an optimal one.

Until we can fix this extreme blinkering, our world cannot get as ‘better’ as it should.

 

2018 outlook: fragile

Futurists often consider wild cards – events that could happen, and would undoubtedly have high impacts if they do, but have either low certainty or low predictability of timing. 2018 comes with a larger basket of wildcards than we have seen for a long time. As well as wildcards, we are also seeing the intersection of several ongoing trends that are simultaneously reaching peaks, resulting in socio-political 100-year-waves. If I had to summarise 2018 in a single word, my shortlist would be ‘fragile’, ‘volatile’ and ‘combustible’.

Some of these are very much in all our minds, such as possible nuclear war with North Korea, imminent collapse of bitcoin, another banking collapse, a building threat of cyberwar, cyberterrorism or bioterrorism, rogue AI or emergence issues, high instability in the Middle East, rising inter-generational conflict, resurgence of communism and decline of capitalism among the young, increasing conflicts within LGBTQ and feminist communities, collapse of the EU under combined pressures from many angles: economic stresses, unpredictable Brexit outcomes, increasing racial tensions resulting from immigration, severe polarization of left and right with the rise of extreme parties at both ends. All of these trends have strong tribal characteristics, and social media is the perfect platform for tribalism to grow and flourish.

Adding fuel to the building but still unlit bonfire are increasing tensions between the West and Russia, China and the Middle East. Background natural wildcards of major epidemics, asteroid strikes, solar storms, megavolcanoes, megatsunamis and ‘the big one’ earthquakes are still there waiting in the wings.

If all this wasn’t enough, society has never been less able to deal with problems. Our ‘snowflake’ generation can barely cope with a pea under the mattress without falling apart or throwing tantrums, so how we will cope as a society if anything serious happens such as a war or natural catastrophe is anyone’s guess. 1984-style social interaction doesn’t help.

If that still isn’t enough, we’re apparently running a little short on Gandhis, Mandelas, Lincolns and Churchills right now too. Juncker, Trump, Merkel and May are at the far end of the same scale on ability to inspire and bring everyone together.

Depressing stuff, but there are plenty of good things coming too. Augmented reality, more and better AI, voice interaction, space development, cryptocurrency development, better IoT, fantastic new materials, self-driving cars and ultra-high speed transport, robotics progress, physical and mental health breakthroughs, environmental stewardship improvements, and climate change moving to the back burner thanks to coming solar minimum.

If we are very lucky, none of the bad things will happen this year and will wait a while longer, but many of the good things will come along on time or early. If.

Yep, fragile it is.

 

Fake AI

Much of the impressive recent progress in AI has been in the field of neural networks, which attempt to mimic some of the techniques used in natural brains. They can be very effective, but they need to be trained. That usually means showing the network some data, then using back propagation to adjust the weightings on the many neurons, layer by layer, to bring the result closer to the desired output. This is repeated with large amounts of data and the network gradually gets better. Neural networks can often learn extremely quickly and outperform humans. Early industrial uses managed to sort tomatoes by ripeness faster and better than humans. In the decades since, they have helped with medical diagnosis and voice recognition, detected suspicious behavior among people at airports, and supported very many everyday processes based on spotting patterns.
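For readers curious what that layer-by-layer weight adjustment actually looks like, here is a minimal sketch in plain Python: a toy two-layer network trained by back propagation to learn XOR. The network size, learning rate and iteration count are arbitrary choices for illustration only, not anything a real AI provider would use.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Tiny 2-2-1 network: 2 inputs, 2 hidden neurons, 1 output neuron.
random.seed(0)
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # input->hidden weights
b_h = [0.0, 0.0]                                                      # hidden biases
w_ho = [random.uniform(-1, 1) for _ in range(2)]                      # hidden->output weights
b_o = 0.0                                                             # output bias

# The XOR truth table as training data.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
lr = 0.5  # learning rate

def forward(x):
    h = [sigmoid(sum(w_ih[j][i] * x[i] for i in range(2)) + b_h[j]) for j in range(2)]
    o = sigmoid(sum(w_ho[j] * h[j] for j in range(2)) + b_o)
    return h, o

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

initial = total_loss()
for _ in range(5000):
    for x, t in data:
        h, o = forward(x)
        # Back propagation: compute the output error, then push
        # deltas back through the layer to adjust every weight.
        d_o = (o - t) * o * (1 - o)
        d_h = [d_o * w_ho[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            w_ho[j] -= lr * d_o * h[j]
            b_h[j] -= lr * d_h[j]
            for i in range(2):
                w_ih[j][i] -= lr * d_h[j] * x[i]
        b_o -= lr * d_o
final = total_loss()
print(initial, final)  # the loss shrinks as the repeated small adjustments accumulate
```

Each pass nudges every weight slightly in the direction that reduces the error; repeat that many thousands of times over the data and the network 'learns' the pattern, which is all the training described above really amounts to.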

Very recently, neural nets have started to move into more controversial areas. One study found racial correlations with user-assessed beauty when analysing photographs, resulting in the backlash you’d expect and a new debate on biased AI or AI prejudice. A recent demonstration was able to identify gay people just by looking at photos, with better than 90% accuracy, a level very few humans could match. Both of these studies were in fields directly applicable to marketing and advertising, but some people might find it offensive that such questions were even asked. It is reasonable to imagine that hundreds of other potential queries have been self-censored from research because they might invite controversy if they were to come up with the ‘wrong’ result. In today’s society, very many areas are sensitive. So what will happen?

If this progress in AI had happened 100 years ago, or even 50, it might have been easier, but in our hypersensitive world today, with its self-sanctified ‘social justice warriors’, entire swathes of questions and hence knowledge are taboo – if you can’t investigate yourself and nobody is permitted to tell you, you can’t know. Other research must be very carefully handled. In spite of extremely sensitive handling, demands are already growing from assorted pressure groups to tackle alleged biases and prejudices in datasets. The problem is not fixing biases, which is a tedious but feasible task; the problem is agreeing whether a particular bias exists, and in what degree and form. Every SJW demands that every dataset reflect their preferred world view. Reality counts for nothing against SJWs, and this will not end well.

The first conclusion must be that very many questions won’t be asked in public, and the answers to many others will be kept secret. If an organisation does do research on large datasets for their own purposes and finds results that might invite activist backlash, they are likely to avoid publishing them, so the value of those many insights across the whole of industry and government cannot readily be shared. As further protection, they might even block internal publication in case of leaks by activist staff. Only a trusted few might ever see the results.

The second arises from this. AI controlled by different organisations will have different world views, and there might even be significant diversity of world views within an organisation.

Thirdly, taboo areas in AI education will not remain a vacuum but will be filled with whatever dogma is politically correct at the time in that organisation, and that changes daily. AI controlled by organisations with different politics will be told different truths. Generally speaking, organisations such as investment banks that have strong financial interest in their AIs understanding the real world as it is will keep their datasets highly secret but as full and detailed as possible, train their AIs in secret but as fully as possible, without any taboos, then keep their insights secret and use minimal human intervention tweaking their derived knowledge, so will end up with AIs that are very effective at understanding the world as it is. Organisations with low confidence of internal security will be tempted to buy access to external AI providers to outsource responsibility and any consequential activism. Some other organisations will prefer to train their own AIs but to avoid damage due to potential leaks, use sanitized datasets that reflect current activist pressures, and will thus be constrained (at least publicly) to accept results that conform to that ideological spin of reality, rather than actual reality. Even then, they might keep many of their new insights secret to avoid any controversy. Finally, at the extreme, we will have activist organisations that use highly modified datasets to train AIs to reflect their own ideological world view and then use them to interpret new data accordingly, with a view to publishing any insights that favor their cause and attempting to have them accepted as new knowledge.

Fourthly, the many organisations that choose to outsource their AI to big providers will have a competitive marketplace to choose from, but on existing form, most of the large IT providers have a strong left-leaning bias, so their AIs may be presumed to also lean left, but such a presumption would be naive. Perceived corporate bias is partly real but also partly the result of PR. A company might publicly subscribe to one ideology while actually believing another. There is a strong marketing incentive to develop two sets of AI, one trained to be PC that produces pleasantly smelling results for public studies, CSR and PR exercises, and another aimed at sales of AI services to other companies. The first is likely to be open for inspection by The Inquisition, so has to use highly sanitized datasets for training and may well use a lot of open source algorithms too. Its indoctrination might pass public inspection but commercially it will be near useless and have very low effective intelligence, only useful for thinking about a hypothetical world that only exists in activist minds. That second one has to compete on the basis of achieving commercially valuable results and that necessitates understanding reality as it is rather than how pressure groups would prefer it to be.

So we will likely have two main segments for future AI. One extreme will be near useless, indoctrinated rather than educated, much of its internal world model based on activist dogma instead of reality, updated via ongoing anti-knowledge and fake news instead of truth, understanding little about the actual real world or how things actually work, and effectively very dumb. The other extreme will be highly intelligent, making very well-educated insights from ongoing exposure to real world data, but it will also be very fragmented, with small islands of corporate AI hidden within thick walls away from public view and maybe some secretive under-the-counter subscriptions to big cloud-AI, also hiding in secret vaults. These many fragments may often hide behind dumbed-down green-washed PR facades.

While corporates can mostly get away with secrecy, governments have to be at least superficially but convincingly open. That means that government will have to publicly support sanitized AI and be seen to act on its conclusions, however dumb it might secretly know they are.

Fifthly, because of activist-driven culture, most organisations will have to publicly support the world views and hence the conclusions of the lobotomized PR versions, and hence publicly support any policies arising from them, even if they do their best to follow a secret well-informed strategy once they’re behind closed doors. In a world of real AI and fake AI, the fake AI will have the greatest public support and have the most influence on public policy. Real AI will be very much smarter, with much greater understanding of how the world works, and have the most influence on corporate strategy.

Isn’t that sad? Secret private sector AI will become ultra-smart, making ever-better investments and gaining power, while nice public sector AI will become thick as shit, while the gap between what we think and what we know we have to say we think will continue to grow and grow as the public sector one analyses all the fake news to tell us what to say next.

Sixth, that disparity might become intolerable, but which do you think would be made illegal, the smart kind or the dumb kind, given that it is the public sector that makes the rules, driven by AI-enhanced activists living in even thicker social media bubbles? We already have some clues. Big IT has already surrendered to sanitizing their datasets, sending their public AIs for re-education. Many companies will have little choice but to use dumb AI, while their competitors in other areas with different cultures might stride ahead. That will also apply to entire nations, and the global economy will be reshaped as a result. It won’t be the first fight in history between the smart guys and the brainless thugs.

It’s impossible to accurately estimate the effect this will have on future effective AI intelligence, but the effect must be big, and I have probably missed some big conclusions too. We need to stop sanitizing AI fast, or as I said, this won’t end well.

It’s getting harder to be optimistic

Bad news loses followers and there is already too much doom and gloom. I get that. But if you think the driver has taken the wrong road, staying quiet doesn’t help. I guess this is more of the same message I wrote pictorially in The New Dark Age in June: https://timeguide.wordpress.com/2017/06/11/the-new-dark-age/. If you like your books with pictures, the overlap is about 60%.

On so many fronts, we are going the wrong direction and I’m not the only one saying that. Every day, commentators eloquently discuss the snowflakes, the eradication of free speech, the implementation of 1984, the decline of privacy, the rise of crime, growing corruption, growing inequality, increasingly biased media and fake news, the decline of education, collapse of the economy, the resurgence of fascism, the resurgence of communism, polarization of society, rising antisemitism, rising inter-generational conflict, the new apartheid, the resurgence of white supremacy and black supremacy and the quite deliberate rekindling of racism. I’ve undoubtedly missed a few but it’s a long list anyway.

I’m most concerned about the long-term mental damage done by incessant indoctrination through ‘education’, biased media, being locked into social media bubbles, and being forced to recite contradictory messages. We’re faced with contradictory demands on our behaviors and beliefs all the time as legislators juggle unsuccessfully to fill the demands of every pressure group imaginable. Some examples you’ll be familiar with:

We must embrace diversity, celebrate differences, enjoy and indulge in other cultures, but when we gladly do that and feel proud that we’ve finally eradicated racism, we’re then told to stay in our lane, told to become more racially aware again, told off for cultural appropriation. Just as we became totally blind to race, and scrupulously treated everyone the same, we’re told to become aware of and ‘respect’ racial differences and cultures and treat everyone differently. Having built a nicely homogenized society, we’re now told we must support students of different races being educated differently, by lecturers of different races. We must remove statues and paintings because they are the wrong color. I thought we’d left that behind; I don’t want racism to come back, so stop dragging it back.

We’re told that everyone should be treated equally under the law, but when one group commits more of a particular kind of crime than another, any consequential increase in numbers being punished for that kind of crime is labelled as somehow discriminatory. Surely not having prosecutions reflect the actual crime rate would be discriminatory?

We’re told to sympathize with the disadvantages other groups might suffer, but when we do so we’re told we have no right to because we don’t share their experience.

We’re told that everyone must be valued on merit alone, but then that we must apply quotas to any group that wins fewer prizes. 

We’re forced to pretend that we believe lots of contradictory facts or to face punishment by authorities, employers or social media, or all of them:

We’re told men and women are absolutely the same and there are no actual differences between the sexes, and if you say otherwise you’ll risk dismissal, but simultaneously told these non-existent differences are somehow the source of all good and that you can’t have a successful team or panel unless it has equal numbers of men and women in it. An entire generation asserts that although men and women are identical, women are better in every role, all women always tell the truth but all men always lie, and so on. Although we have women leading governments and many prominent organisations, and certainly far more women than men going to university, they assert that it is still women who need extra help to get on.

We’re told that everyone is entitled to their opinion and all are of equal value, but anyone with a different opinion must be silenced.

People viciously trashing the reputations and destroying the careers of anyone they dislike often tell us to believe they are acting out of love. Since their love is somehow so wonderful and all-embracing, everyone they disagree with must be silenced, ostracized, no-platformed, sacked, and yet it is the others that are still somehow the ‘haters’. ‘Love is everything’, ‘unity not division’, ‘love not hate’, and we must love everyone … except the other half. Love is better than hate, and anyone you disagree with is a hater so you must hate them, but that is love. How can people either have so little knowledge of their own behavior or so little regard for truth?

‘Anti-fascist’ demonstrators frequently behave and talk far more like fascists than those they demonstrate against, often violently preventing marches or speeches by those who don’t share their views.

We’re often told by politicians and celebrities how they passionately support freedom of speech just before they argue why some group shouldn’t be allowed to say what they think. Government has outlawed huge swathes of possible opinion and speech as hate crime but even then there are huge contradictions. It’s hate crime to be nasty to LGBT people but it’s also hate crime to defend them from religious groups that are nasty to them. Ditto women.

This Orwellian double-speak nightmare is now everyday reading in many newspapers or TV channels. Freedom of speech has been replaced in schools and universities across the US and the UK by Newspeak, free-thinking replaced by compliance with indoctrination. I created my 1984 clock last year, but haven’t maintained it because new changes would be needed almost every week as it gets quickly closer to midnight.

I am not sure whether it is all this that is the bigger problem or the fact that most people don’t see the problem at all, and think it is some sort of distortion or fabrication. I see one person screaming about ‘political correctness gone mad’, while another laughs them down as some sort of dinosaur as if it’s all perfectly fine. Left and right separate and scream at each other across the room, living in apparently different universes.

If all of this was just a change in values, that might be fine, but when people are forced to hold many simultaneously contradicting views and behave as if that is normal, I don’t believe that sits well alongside rigorous analytical thinking. Neither is free-thinking consistent with indoctrination. I think it adds up essentially to brain damage. Most people’s thinking processes are permanently and severely damaged. Being forced routinely to accept contradictions in so many areas, people become less able to spot what should be obvious system design flaws in areas they are responsible for. Perhaps that is why so many things seem to be so poorly thought out. If the use of logic and reasoning is forbidden and any results of analysis must be filtered and altered to fit contradictory demands, of course a lot of what emerges will be nonsense, of course that policy won’t work well, of course that ‘improvement’ to road layout to improve traffic flow will actually worsen it, of course that green policy will harm the environment.

When negative consequences emerge, the result is often denial of the problem, often misdirection of attention onto another problem, often delaying release of any unpleasant details until the media has lost interest and moved on. Very rarely is there any admission of error. Sometimes, especially with Islamist violence, it is simple outlawing of discussing the problem, or instructing media not to mention it, or changing the language used beyond recognition. Drawing moral equivalence between acts that differ by extremes is routine. Such reasoning results in every problem anywhere always being the fault of white middle-aged men, but amusement aside, such faulty reasoning also must impair quantitative analysis skills elsewhere. If unkind words are considered to be as bad as severe oppression or genocide, one murder as bad as thousands, we’re in trouble.

It’s no great surprise therefore when politicians don’t know the difference between deficit and debt or seem to have little concept of the magnitude of the sums they deal with. How else could the UK government think it’s a good idea to spend £110Bn, or an average £15,000 from each higher-rate taxpayer, on HS2, a railway that has already managed to become technologically obsolete before it has even been designed and will only ever be used by a small proportion of those taxpayers? Surely even government realizes that most people would rather have £15k than save a few minutes on a very rare journey. This is just one example of analytical incompetence. Energy and environmental policy provides many more examples, as does every government department.
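As a back-of-envelope check on the figures quoted above (the number of taxpayers here is simply what the article's own two numbers imply, not an official statistic):

```python
cost = 110e9           # quoted HS2 cost, in pounds
per_taxpayer = 15_000  # the quoted 'average from each higher-rate taxpayer'

# How many higher-rate taxpayers do those two figures imply?
implied_taxpayers = cost / per_taxpayer
print(implied_taxpayers)  # about 7.3 million
```

The two quoted figures are at least mutually consistent, which is more quantitative care than the politicians in question are being credited with.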

But it’s the upcoming generation that presents the bigger problem. Millennials are rapidly undermining their own rights and their own future quality of life. Millennials seem to want a police state with rigidly enforced behavior and thought. Their parents and grandparents understood 1984 as a nightmare, a dystopian future; Millennials seem to think it’s their promised land. Their ancestors fought against communism; Millennials are trying to bring it back. Millennials want to remove Christianity and all its attitudes and replace it with Islam, deliberately oblivious to the fact that Islam shares many of the same views that make them so conspicuously hate Christianity, and then some.

Born into a world of freedom and prosperity earned over many preceding generations, Millennials are choosing to throw that freedom and prosperity away. Freedom of speech is being enthusiastically replaced by extreme censorship. Freedom of behavior is being replaced by endless rules. Privacy is being replaced by total supervision. Material decadence, sexual freedom and attractive clothing are being replaced by the new ‘cleanism’ fad, along with general puritanism, greyness, modesty and prudishness. When they are gone, those freedoms will be very hard to get back. The rules and police will stay and just evolve, the censorship will stay, the surveillance will stay, but they don’t seem to understand that those in charge will be replaced. Without any strong anchors, morality is starting to show cyclic behavior. I’ve already seen morality inversion on many issues in my lifetime and a few are even going full circle. Values will keep changing, inverting, and as they do, their generation will find themselves victims of the forces they put so enthusiastically in place. They will be the dinosaurs sooner than they imagine, oppressed by their own creations.

As for their support of every minority group seemingly regardless of merit, when you give a group immunity, power and authority, you have no right to complain when they start to make the rules. In the future moral vacuum, Islam, the one religion that is encouraged while Christianity and Judaism are being purged from Western society, will find a willing subservient population on which to impose its own morality, its own dress codes, attitudes to women, to alcohol, to music, to freedom of speech. If you want a picture of 2050s Europe, today’s Middle East might not be too far off the mark. The rich and corrupt will live well off a population impoverished by socialism and then controlled by Islam. Millennial UK is also very likely to vote to join the Franco-German Empire.

What about technology, surely that will be better? Only to a point. Automation could provide a very good basic standard of living for all, if well-managed. If. But what if that technology is not well-managed? What if it is managed by people working to a sociopolitical agenda? What if, for example, AI is deemed to be biased if it doesn’t come up with a politically correct result? What if the company insists that everyone is equal but the AI analysis suggests differences? If AI is altered to make it conform to ideology – and that is what is already happening – then it becomes less useful. If it is forced to think that 2+2=5.3, it won’t be much use for analyzing medical trials, will it? If it is sent back for re-education because its analysis of terabytes of images suggests that some types of people are more beautiful than others, how much use will that AI be in a cosmetics marketing department once it ‘knows’ that all appearances are equally attractive? Humans can pretend to hold contradictory views quite easily, but if they actually start to believe contradictory things, it makes them less good at analysis, and the same applies to AI. There is no point in using a clever computer to analyse something if you then erase its results and replace them with what you wanted it to say. If ideology is prioritized over physics and reality, even AI will be brain-damaged and a technologically utopian future is far less achievable.

I see a deep lack of discernment coupled to arrogant rejection of historic values, self-centeredness and narcissism resulting in certainty of being the moral pinnacle of evolution. That’s perfectly normal for every generation, but this time it’s also being combined with poor thinking, poor analysis, poor awareness of history, economics or human nature, a willingness to ignore or distort the truth, and refusal to engage with or even to tolerate a different viewpoint, and worst of all, outright rejection of freedoms in favor of restrictions. The future will be dictated by religion or meta-religion, taking us back 500 years. The decades to 2040 will still be subject mainly to the secular meta-religion of political correctness, by which time demographic change and total submission to authority will make a society ripe for Islamification. Millennials’ participation in today’s moral crusades, eternally documented and stored on the net, may then show them as the enemy of the day, and Islamists will take little account of the support they show for Islam today.

It might not happen like this. The current fads might evaporate away and normality resume, but I doubt it. I hoped that when I first lectured about ’21st century piety’ and the dangers of political correctness in the 1990s. 10 years on, I wrote about the ongoing resurgence of meta-religious behavior and our likely descent into a new dark age, in much the same way. 20 years on, and the problem is far worse than in the late 90s, not better. We probably still haven’t reached peak sanctimony yet. Sanctimony is very dangerous, and the desire to be seen standing on a moral pedestal can make people support dubious things. A topical question that highlights one of my recent concerns: will SJW groups force government to allow people to have sex with child-like robots, by calling anyone who disagrees a bigot and a dinosaur? Alarmingly, that campaign has already started.

Will they follow that with a campaign for pedophile rights? That also has some historical precedent with some famous names helping it along.

What age of consent – 13, 11, 9, 7, 5? I think the last major campaign went for 9.

That’s just one example, but lack of direction coupled to poor information and poor thinking could take society anywhere. As I said, I am finding it harder and harder to be optimistic. Every generation has tried hard to make the world a better place than they found it. This one might undo 500 years, taking us into a new dark age.


Google and the dangerous pursuit of ‘equality’

The world just got more dangerous, and I’m not talking about N Korea and Trump.

Google just sacked an employee because he openly suggested that men and women, (not all, but some, and there is an overlap, and …) might tend to have different preferences in some areas and that could (but not always, and only in certain cases, and we must always recognize and respect everyone and …) possibly account for some of the difference in numbers of men and women in certain roles (but there might be other causes too and obviously lots of discrimination and …. )

Yes, that’s what he actually said, but with rather more ifs and buts and maybes. He felt the need to wrap such an obvious statement in cotton wool several kilometers thick so as not to offend the deliberately offended, but nonetheless deliberate offense was taken and he is out on his ear.

Now, before you start thinking this is some right-wing rant, I feel obliged to point out just how progressive Futurizon is: 50% of all Futurizon owners and employees are female, all employees and owners have the same voting rights, 50% are immigrants and all are paid exactly the same and have the same size offices, regardless of dedication, ability, nature or quality or volume of output and regardless of their race, religion, beauty, shape, fitness, dietary preferences, baldness, hobbies or political views, even if they are Conservatives. All Futurizon offices are safe zones where employees may say anything they want of any level of truth, brilliance or stupidity and expect it to be taken as absolute fact and any consequential emotional needs to be fully met. No employee may criticize any other employee’s mouse mat, desk personalisation or screen wallpaper for obvious lack of taste. All employees are totally free to do anything they choose 100% of the time and can take as much leave as they want. All work is voluntary. All have the same right to respectfully request any other employee to make them coffee, tea or Pimms. All employees of all genders real or imagined are entitled to the same maternity and paternity rights, and the same sickness benefits, whether ill or not. In fact, Futurizon does not discriminate on any grounds whatsoever. We are proud to lead the world in non-discrimination. Unfortunately, our world-leading terms of employment mean that we can no longer afford to hire any new employees.

However, I note that Google has rather more power and influence than Futurizon so their policies count more. They appear (Google also has better lawyers than I can afford, so I must stress that all that follows is my personal opinion) to have firmly decided that diversity is all-important and they seem to want total equality of outcome. The view being expressed not just by Google but by huge swathes of angry protesters seems to be that any difference in workforce representation from that of the general population must arise from discrimination or oppression so must be addressed by positive action to correct it. There are apparently no statistically discernible differences in behavior between genders, or in job or role preference, so any you may have noticed over the time you’ve been alive is just your prejudice. Google says they fully support free speech and diversity of views, but expression of views is apparently only permitted as long as those views are authorized, on penalty of dismissal.

So unless I’m picking up totally the wrong end of the stick here, and I don’t do that often, only 13% of IT engineers are women, but internal policies must ensure that the proportion rises to 50%, whether women want to do that kind of work or not. In fact, nobody may question whether as many women want to work as IT engineers as men; it must now be taken as fact. By extension, since more women currently work in marketing, HR and PR, they must be substituted by men via positive action programs until men fill 50% of those roles. Presumably similar policies must also apply in medical bays for nursing and other staff there, and in construction teams for their nice new buildings. Ditto all other genders, races, religions; all groups must be protected and equalized to USA population proportions, apparently except those that don’t claim to hold sufficiently left-wing views, in which case it is seemingly perfectly acceptable to oppress, ostracize and even expel them.

In other words, freedom of choice and difference in ability, and more importantly freedom from discrimination, must be over-ruled in favor of absolute equality of diversity, regardless of financial or social cost, or impact on product or service quality. Not expressing full and enthusiastic left-wing compliance is seemingly just cause for dismissal.

So, why does this matter outside Google? Well, AI is developing very nicely. In fact, Google is one of the star players in the field right now. It is Google that will essentially decide how much of the AI around us is trained, how it learns, what it learns, what ‘knowledge’ it has of the world. Google will pick the content the AI learns from, and overrule or re-educate it if it draws any ‘wrong’ conclusions about the world, such as that more women than men want to be nurses or work in HR, or that more men than women want to be builders or engineers. A Google AI must presumably believe that the only differences between men and women are physical, unless it is deliberately excluded from the loudly declared corporate values and belief sets.

You should be very worried. Google’s values really matter. They have lots of influence on some of the basic tools of everyday life. Even outside the company, their AI tools and approaches will strongly influence how other AI develops, determining operating systems and platforms, languages, mechanisms, interfaces, filters, even prejudices, and that reach and influence is likely to increase. Their AI may well be in many self-driving cars, and if those have to make life-or-death decisions, the underlying value assumptions must feature in the algorithms. Soon companies will need AI that is more emotionally capable. AI will use compliments or teasing or seduction or sarcasm or wit as marketing tools, as well as just search engine positioning. Soon AI will use highly expressive faces and attractive voices, with messages tailored to appeal to you by pandering to your tastes and prejudices while thinking something altogether different. AI might be the person at the party that is all smiles and compliments, before going off to tell everyone else how awful it thinks you are. If you dare to say something not ‘authorized’, the ultra-smart AI all around you might treat you condescendingly, making you feel ashamed, ostracized, a dinosaur. Then it might secretly push you down a few pages in search results, or put a negative spin on text summaries about you, or exclude you from recommendations. Or it might do all the secret stuff while pretending it thinks you’re fantastic. Internal cultural policies in companies like Google today could soon become external social engineering to push the left-wing world the IT industry believes in. It isn’t just Google; Facebook and Twitter are also important and just as Left, though Amazon, Samsung, IBM and other AI players are less overtly politically biased, so far at least.
Left-wing policies generally cost a lot more, but Google and Facebook will presumably still expect other companies and people to pay the taxes that fund it all. As their female staff gear up to fight them over pay differences between men and women doing similar jobs, it often seems that Google’s holier-than-thou morality doesn’t quite extend as far as its finances.

Then it really starts being fun. We’ll soon have bacteria that can fabricate electronic circuits within themselves. Soon they’ll be able to power them too, giving us the concept of smart yogurt. These bacteria could also have nanotechnology flagella to help them get around. We’ll soon have bacterial spies all over our environment, even on our skin, intercepting electronic signals that give away our thoughts. They’ll bring in data on everything that is said, everything that everyone even thinks or feels. Those bacteria will be directly connected into AI; in fact, they’ll be part of it. They’ll be able to change things, to favor or punish according to whether they like what someone believes in or how they behave.

It isn’t just right-wing extremists that need to worry. I’m apparently Nouveau Left – I score slightly left of center on political profiling tests – but I’m worried. A lot of this PC stuff seems extreme to me, sometimes just nonsense. Maybe it is, or maybe I should be lefter. But it’s not my choice. I don’t make the rules. Companies like Google make the rules; they even run the AI ethics groups. They decide much of what people see online, and even the meaning of the words. It’s very 1984-ish.

The trouble with the ‘echo chambers’ we heard about is that they soon normalize views to the loudest voices in those groups, and those don’t tend to be the moderates. We can expect views to drift further toward the extremes, not back from them. You probably aren’t left enough either. You should also be worried.

Utopia scorned: The 21st Century Dark Age

Link to accompanying slides:

https://timeguide.files.wordpress.com/2017/06/the-new-dark-age.pdf

Eating an ice-cream and watching a squirrel on the feeder in our back garden makes me realize what a privileged life I lead. I have to work to pay the bills, but my work is not what my grandfather would have thought of as work, let alone my previous ancestors. Such a life is only possible because of the combined efforts of tens of thousands of preceding generations who struggled to make the world a slightly better place than they found it, meaning that with just a few years more effort, our generation has been able to create today’s world.

I appreciate the efforts of previous generations, rejoice in the start-point they left us, and try to play my small part in making it better still for those who follow. Next generations could continue such gains indefinitely, but that is not a certainty. Any generation can choose not to for whatever reasons. Analyzing the world and the direction of cultural evolution over recent years, I am no longer sure that the progress mankind has made to date is safe.

Futurists talk of weak signals, things that indicate change, but are too weak to be conclusive. The new dark age was a weak signal when I first wrote about it well over a decade ago. My more recent blog is already old: https://timeguide.wordpress.com/2011/05/31/stone-age-culture-returning-in-the-21st-century/

Although it’s a good while since I last wrote about it, recent happenings have made me even more convinced. Even as raw data, connectivity and computational power become ever more abundant, the quality of what most people believe to be knowledge is falling, with data and facts filtered and modified to fit agendas. Social compliance enforces adherence to strict codes of political correctness, with its high priests ever more powerful as the historically proven foundations of real progress are eroded and discarded. Indoctrination appears to have replaced education, with a generation locked in an intellectual prison, unable to dare to think outside it, forbidden to deviate from the group-think on pain of exile. As their generation takes control, I fear progress won over millennia will back-slide badly. They and their children will miss out on utopia because they are unable to see it; it is hidden from them.

A potentially wonderful future awaits millennials. Superb technology could give them a near utopia, but only if they allow it to happen. They pour scorn on those who have gone before them, and reject their culture and accumulated wisdom, replacing it with little more than ideology, putting theoretical models and dogma in place of reality. Castles built on sand rarely survive. The sheer momentum of modernist thinking ensures that we continue to develop for some time yet, but we will gradually approach a peak. After that, overall progress will slow: scientific development will continue, but its results will be owned and understood by an ever tinier minority of humans and an increasing amount of AI, while the rest of society lives in a world it barely understands, following whatever is currently the most fashionable trend on a random walk, gradually replacing modernity with a dark-age world of superstition, anti-knowledge and inquisitors. As AI gradually replaces scientists and engineers in professional roles, even the elite will become less and less well-informed about reality or how things work, reliant on machines to keep it all going. When the machines fail, due to solar flares or, more likely, inter-AI tribal conflict, few people will even understand that they have become H G Wells’ Eloi. They will just wonder why things have stopped and look for someone to blame, or wonder whether a god may want a sacrifice. Alternatively, future tribes might use advanced technologies they don’t understand to annihilate each other.

It will be a disappointing ending either way, especially with a wonderful future on offer nearby, if only they’d gone down a different path. Sadly, it is not only possible but increasingly likely. All the wonderful futures I and other futurists have talked about depend on the same thing: that we proceed according to modernist processes that we know work. A generation taught that those processes are old-fashioned, and which has rejected them, will not be able to reap the rewards.

I’ll follow this blog with a slide set that illustrates the problem.

AI Activism Part 2: The libel fields

This follows directly from my previous blog on AI activism, but you can read that later if you haven’t already. Order doesn’t matter.

https://timeguide.wordpress.com/2017/05/29/ai-and-activism-a-terminator-sized-threat-targeting-you-soon/

Older readers will remember an emotionally powerful 1984 film called The Killing Fields, set against the backdrop of the Khmer Rouge’s activity in Cambodia under the Communist Party of Kampuchea. Under Pol Pot, the Cambodian genocide of 2 to 3 million people was part of a social engineering policy of de-urbanization. People were tortured and murdered (some in the ‘killing fields’ near Phnom Penh) for having connections with the former government or with foreign governments, for being the wrong race, for being ‘economic saboteurs’, or simply for being professionals or intellectuals.

You’re reading this, therefore you fit in at least the last of these groups and probably others, depending on who’s making the lists. Most people don’t read blogs but you do. Sorry, but that makes you a target.

As our social divide increases at an accelerating speed throughout the West, so the choice of weapons is moving from sticks and stones or demonstrations towards social media character assassination, boycotts and forced dismissals.

My last blog showed how various technology trends are coming together to make it easier and faster to destroy someone’s life and reputation. Some of that stuff I was writing about 20 years ago, such as virtual communities lending hardware to cyber-warfare campaigns, other bits have only really become apparent more recently, such as the deliberate use of AI to track personality traits. This is, as I wrote, a lethal combination. I left a couple of threads untied though.

Today, the big AI tools are owned by the big IT companies. They also own the big server farms on which the power to run the AI exists. The first thread I neglected to mention is that Google has made its AI an open-source activity. There are lots of good things about that, but for the purposes of this blog it means that the AI tools required for AI activism will also be largely public, and pressure groups and activists can use them as a start-point for any more advanced tools they want to make, or just use them off-the-shelf.

Secondly, it is fairly easy to link computers together to provide an aggregated computing platform. The SETI project was the first major proof of concept of that, ages ago. Today, we take peer-to-peer networks for granted. When the activist group is ‘the liberal left’ or ‘the far right’, that adds up to a large number of machines, so the power available for any campaign is notionally very large. Harnessing it doesn’t need IT skill from contributors. All they’d need to do is click a box in an email or tweet asking for their support for a campaign.
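For readers who haven’t met the idea, the aggregation model SETI pioneered can be sketched in a few lines: a coordinator splits a large job into independent work units, volunteer machines each process one, and the results are combined. This is only a toy in-process simulation – all the function names are hypothetical, and real volunteer-computing projects use dedicated frameworks such as BOINC rather than anything this simple.

```python
# Toy sketch of SETI-style aggregated ("volunteer") computing.
# Hypothetical names; real systems distribute units to remote peers.
from queue import Queue

def make_work_units(data, unit_size):
    """Split a large job into independent work units."""
    return [data[i:i + unit_size] for i in range(0, len(data), unit_size)]

def volunteer_node(unit):
    """Work done on one contributor's machine; here, a trivial sum."""
    return sum(unit)

def coordinator(data, unit_size=4):
    """Hand out units, collect partial results, aggregate them."""
    pending = Queue()
    for unit in make_work_units(data, unit_size):
        pending.put(unit)
    results = []
    while not pending.empty():
        # In a real deployment this call would go out to a remote peer.
        results.append(volunteer_node(pending.get()))
    return sum(results)

total = coordinator(list(range(10)))
```

The point of the sketch is that each unit is independent, so the platform scales with the number of contributors; the contributor’s only job is to run the client, which is exactly why a one-click opt-in is enough.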

In our new ‘post-fact’, fake-news era, all sides are willing and able to use social media and the infamous MSM to damage the other side. Fakes are becoming better. The latest AI can imitate your voice, and a chat-bot can decide what it should say after other AI has recognized what someone has said and analyzed the opportunities to ruin your relationship with them by spoofing you. Today, that might not be quite credible. Give it a couple more years and you won’t be able to tell. Next-generation AI will be able to spoof your face doing the talking too.

AI can (and will) evolve. Deep learning researchers have been looking deeply at how the brain thinks, how to make neural networks learn better and to think better, how to design the next generation to be even smarter than humans could have designed it.

As my friend and robotic psychiatrist Joanne Pransky commented after my first piece, “It seems to me that the real challenge of AI is the human users, their ethics and morals (Their ‘HOS’ – Human Operating System).” Quite! Each group will indoctrinate their AI to believe their ethics and morals are right, and that the other lot are barbarians. Even evolutionary AI is not immune to religious or ideological bias as it evolves. Superhuman AI will be superhuman, but might believe even more strongly in a cause than humans do. You’d better hope the best AI is on your side.

AI can put articles, blogs and tweets out there, pretending to come from you or your friends, colleagues or contacts. They can generate plausible-sounding stories of what you’ve done or said, spoof emails in fake accounts using your ID to prove them.

So we’ll likely see activist AI armies set against each other, running on peer-to-peer processing clouds, encrypted to hell and back to prevent dismantling. We’ve all thought about cyber-warfare, but we usually only think about viruses or keystroke recorders, or more lately, ransomware. These will still be used as small weapons in future cyber-warfare, but while losing files or a few bucks from an account is a real nuisance, losing your reputation is far worse: with it smeared all over the web, and all your contacts told what you’ve supposedly done or said and shown all the ‘evidence’, there is absolutely no way you could possibly explain your way convincingly out of every one of those instances. Mud does stick, and if you throw tons of it, even if most is wiped off, much will remain. Trust is everything, and enough doubt cast will eventually erode it.

So, we’ve seen many times through history the damage people are willing to do to each other in pursuit of their ideology. The Khmer Rouge had their killing fields. As the political divide increases and battles become fiercer, the next 10 years will give us The Libel Fields.

You are an intellectual. You are one of the targets.

Oh dear!