
How can we make a computer conscious?

I found this article in my drafts folder, written three years ago as part of my short series on making conscious computers. I thought I’d published it, but hadn’t, so I’m updating and publishing it now. It’s a bit long-winded, thinking out loud, trying to derive some insights from nature on how to make conscious machines. The good news is that actual AI developments are following paths that lead in much the same direction, though some significant re-routing and new architectural features are needed if they are to optimize AI and achieve machine consciousness.

Let’s start with the problem. Today’s AI that plays chess, does web searches or answers questions is digital. It uses algorithms, sets of instructions that the computer follows one by one. All of those are reduced to simple binary actions, toggling bits between 1 and 0. The processor doing that is no more conscious or aware of it, and has no more understanding of what it is doing than an abacus knows it is doing sums. The intelligence is in the mind producing the clever algorithms that interpret the current 1s and 0s and change them in the right way. The algorithms are written down, albeit in more 1s and 0s in a memory chip, but are essentially still just text, only as smart and aware as a piece of paper with writing on it. The answer is computed, transmitted, stored, retrieved, displayed, but at no point does the computer sense that it is doing any of those things. It really is just an advanced abacus. An abacus is digital too (an analog equivalent to an abacus is a slide rule).

A big question springs to mind: can a digital computer ever be any more than an advanced abacus? Until recently, I was certain the answer was no. Surely a digital computer that just runs programs can never be conscious? It can simulate consciousness to some degree; it can in principle describe the movements of every particle in a conscious brain, every electric current, every chemical reaction. But all it is doing is describing them. It is still just an abacus. Once computed, that simulation of consciousness could be printed, and the printout would be just as conscious as the computer was. A digital ‘stored program’ computer can certainly implement extremely useful AI. With the right algorithms, it can mine data, link things together, create new data from that, generate new ideas by linking together things that haven’t been linked before, make works of art and poetry, compose music, chat to people, recognize faces, emotions and gestures. It might even be able to converse about life, the universe and everything, tell you its history, discuss its hopes for the future, but all of that is just a thin gloss on an abacus. I wrote a chat-bot on my Sinclair ZX Spectrum in 1983, running on a processor with about 8,000 transistors. The chat-bot took all of about 5 small pages of code but could hold a short conversation quite well if you knew what subjects to stick to. It’s very easy to simulate conversation. But it is still just a complicated abacus, and still doesn’t even know it is doing anything.

However clever the AI it implements, a conventional digital computer that just executes algorithms can’t become conscious, but an analog computer can, a quantum computer can, and so can a hybrid digital/analog/quantum computer. The question remains whether a digital computer can be conscious if it isn’t just running stored programs. Could it have a different structure, but still be digital and yet be conscious? Who knows? Not me. I used to know it couldn’t, but now that I am a lot older and slightly wiser, I know I don’t know.

Consciousness debate often starts with what we know to be conscious, the human brain. It isn’t a digital computer, although it has digital processes running in it. It also runs a lot of analog processes. It may also run some quantum processes that are significant in consciousness. It is a conscious hybrid of digital, analog and possibly quantum computing. Consciousness evolved in nature, therefore it can be evolved in a lab. It may be difficult and time consuming, and may even be beyond current human understanding, but it is possible. Nature didn’t use magic, and what nature did can be replicated and probably even improved on. Evolutionary AI development may have hit hard times, but that only shows that the techniques used by the engineers doing it didn’t work on that occasion, not that other techniques can’t work. Around 2.6 new human-level fully conscious brains are made by nature every second without using any magic and furthermore, they are all slightly different. There are 7.6 billion slightly different implementations of human-level consciousness that work and all of those resulted from evolution. That’s enough of an existence proof and a technique-plausibility-proof for me.

Sensors evolved in nature pretty early on. They aren’t necessary for life, for organisms to move around and grow and reproduce, but they are very helpful. Over time, simple light, heat, chemical or touch detectors evolved further into simple vision and produced advanced sensations such as pain and pleasure, causing an organism to alter its behavior: in other words, feeling something. Detection of an input is not the same as sensation, i.e. feeling an input. Once detection upgrades to sensation, you have the tools to make consciousness. No more upgrades are needed. Sensing that you are sensing something is quite enough to be classified as consciousness. Internally reusing the same basic structure as external sensing of light or heat or pressure or chemical gradient or whatever allows design of thought, planning, memory, learning and construction and processing of concepts. All those things are just laying out components in different architectures. Getting from detection to sensation is the hard bit.

So the design of conscious machines, and in fact what AI researchers call the hard problem, really can be reduced to the question of what makes the difference between a light switch and something that can feel being pushed or feel the current flowing when it is; the difference between a photocell and feeling whether it is light or dark; the difference between detecting a light frequency, looking it up in a database, then pronouncing that it is red, and actually experiencing redness. That is the hard problem of AI. Once that is solved, we will very soon afterwards have a fully conscious, self-aware AI. There are lots of options available, so let’s look at each in turn to extract any insights.

The first stage is easy enough. Detecting presence is easy, measuring it is harder. A detector detects something, a sensor (in its everyday engineering meaning) quantifies it to some degree. A component in an organism might fire if it detects something, it might fire with a stronger signal or more frequently if it detects more of it, so it would appear to be easy to evolve from detection to sensing in nature, and it is certainly easy to replicate sensing with technology.

Essentially, detection is digital, but sensing is usually analog, even though the quantity sensed might later be digitized. Sensing normally uses real numbers, while detection uses natural numbers (real vs integer, as programmers call them). The handling of analog signals in their raw form allows for biomimetic feedback loops, which I’ll argue are essential. Digitizing them introduces a level of abstraction that is essentially the difference between emulation and simulation, the difference between doing something and reading about someone doing it. Simulation can’t make a conscious machine; emulation can. I used to think that meant digital couldn’t become conscious, but actually it is just algorithmic processing of stored programs that can’t do it. There may be ways of achieving consciousness digitally, or quantumly, but I haven’t yet thought of any.
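The detector/sensor distinction can be sketched in a few lines of code. This is purely illustrative, not any real sensor API; the class names, threshold and gain are all my own inventions for the sketch:

```python
class Detector:
    """Digital: reports only whether the input crosses a threshold."""
    def __init__(self, threshold: float):
        self.threshold = threshold

    def read(self, stimulus: float) -> int:
        # Natural-number output: 1 (present) or 0 (absent)
        return 1 if stimulus >= self.threshold else 0


class Sensor:
    """Analog: quantifies the input as a real number."""
    def __init__(self, gain: float = 1.0):
        self.gain = gain

    def read(self, stimulus: float) -> float:
        # Real-valued output proportional to the stimulus
        return self.gain * stimulus


d, s = Detector(threshold=0.5), Sensor(gain=2.0)
print(d.read(0.7))  # just "something is there"
print(s.read(0.7))  # "how much is there"
```

The same stimulus gives the detector only a yes/no, while the sensor quantifies it, which is the natural/real number distinction in the paragraph above.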

That engineering description falls far short of what we mean by sensation in human terms. How does that machine-style sensing become what we call a sensation? Logical reasoning says there would probably need to be only a small change in order to have evolved from detection to sensing in nature. Maybe something like recombining groups of components in different structures or adding them together or adding one or two new ones, that sort of thing?

So what about detecting detection? Or sensing detection? Those could evolve in sequence quite easily. Detecting detection is like your alarm system control unit detecting the change of state that indicates that a PIR has detected an intruder, a different voltage or resistance on a line, or a 1 or a 0 in a memory store. An extremely simple AI responds by ringing an alarm. But the alarm system doesn’t feel the intruder, does it?  It is just a digital response to a digital input. No good.

How about sensing detection? How do you sense a 1 or a 0? Analog interpretation and quantification of digital states is very wasteful of resources, an evolutionary dead end. It isn’t any more useful than detection of detection. So we can eliminate that.

OK, sensing of sensing? Detection of sensing? They look promising. Let’s run with that a bit. In fact, I am convinced the solution lies in here so I’ll look till I find it.

Let’s do a thought experiment on designing a conscious microphone, and for this purpose, the lowest possible order of consciousness will do, we can add architecture and complexity and structures once we have some bricks. We don’t particularly want to copy nature, but are free to steal ideas and add our own where it suits.

A normal microphone sensor produces an analog signal quantifying the frequencies and intensities of the sounds it is exposed to, and that signal may later be sampled and digitized by an analog-to-digital converter, possibly after passing through some circuits such as filters or amplifiers in between. Such a device isn’t conscious yet. By sensing the signal produced by the microphone, we’d just be repeating the sensing process on a transmuted signal, not sensing the sensing itself.

Even up close, detecting that the microphone is sensing something could be done by just watching a little LED going on when current flows. Sensing it is harder but if we define it in conventional engineering terms, it could still be just monitoring a needle moving as the volume changes. That is obviously not enough, it’s not conscious, it isn’t feeling it, there’s no awareness there, no ‘sensation’. Even at this primitive level, if we want a conscious mic, we surely need to get in closer, into the physics of the sensing. Measuring the changing resistance between carbon particles or speed of a membrane moving backwards and forwards would just be replicating the sensing, adding an extra sensing stage in series, not sensing the sensing, so it needs to be different from that sort of thing. There must surely need to be a secondary change or activity in the sensing mechanism itself that senses the sensing of the original signal.

That’s a pretty open task, and it could even be embedded in the detecting process or in the production process for the output signal. But even recognizing that we need this extra property narrows the search: it must be a parallel or embedded mechanism, not one in series. The same logical structure would do fine for this secondary sensing, since it is just sensing in the same logical way as the original, and this essential logical symmetry would make its evolution easy too. It is easy to imagine how that could happen in nature, and easier still to see how it could be implemented in a synthetic evolution design system. In this approach, we have to feel the sensing, so we need it to comprise some sort of feedback loop with a high degree of symmetry compared with the main sensing stage. That would be natural-evolution compatible as well as logically sound as an engineering approach.

This starts to look like progress. In fact, it’s already starting to look a lot like a deep neural network, with one huge difference: instead of using feed-forward signal paths for analysis and backward propagation for training, it relies instead on a symmetric feedback mechanism where part of the input for each stage of sensing comes from its own internal and output signals. A neuron is not a full sensor in its own right, and it’s reasonable to assume that multiple neurons would be clustered so that there is a feedback loop. Many in the neural network AI community are already recognizing the limits of relying on feed-forward and back-prop architectures, but web searches suggest few if any are moving yet to symmetric feedback approaches. I think they should. There’s gold in them there hills!

So, the architecture of the notional sensor array required for our little conscious microphone would have a parallel circuit and feedback loop (possibly but not necessarily integrated), and in all likelihood these parallel and sensing circuits would be heavily symmetrical, i.e. they would use much the same sort of components and architectures as the sensing process itself. If the sensation stage is of similar design to the primary sensing circuit, that would make it easy to evolve in nature, a nice first-principles biomimetic insight, and it gives the structure the elegance of being very feasible for evolutionary development, natural or synthetic. It reuses similarly structured components and principles already designed, just recombining a couple of them in a slightly different architecture.

Another useful insight screams for attention too. The feedback loop ensures that the incoming sensation lingers to some degree. Compared to the nanoseconds we are used to in normal IT, the signals in nature travel fairly slowly (~200m/s), and the processing and sensing occur quite slowly (~200Hz). That means this system would have some inbuilt memory that repeats the essence of the sensation in real time – while it is sensing it. It is inherently capable of memory and recall and leaves the door wide open to introduce real-time interaction between memory and incoming signal. It’s not perfect yet, but it has all the boxes ticked to be a prime contender to build thought, concepts, store and recall memories, and in all likelihood, is a potential building brick for higher level consciousness. Throw in recent technology developments such as memristors and it starts to look like we have a very promising toolkit to start building primitive consciousness, and we’re already seeing some AI researchers going that path so maybe we’re not far from the goal. So, we make a deep neural net with nice feedback from output (of the sensing system, which to clarify would be a cluster of neurons, not a single neuron) to input at every stage (and between stages) so that inputs can be detected and sensed, while the input and output signals are stored and repeated into the inputs in real time as the signals are being processed. Throw in some synthetic neurotransmitters to dampen the feedback and prevent overflow and we’re looking at a system that can feel it is feeling something and perceive what it is feeling in real time.
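The lingering-sensation idea above can be shown in a toy sketch: each step’s input mixes the external signal with the unit’s own previous output, so a brief stimulus keeps echoing after the input stops, which is the inbuilt real-time memory described. The damping factor plays the role of the synthetic neurotransmitter that prevents runaway feedback. All names and constants here are illustrative, not a real neural architecture:

```python
def feedback_sense(signal, feedback_gain=0.6, damping=0.9):
    """Process a signal stream; each output feeds back into the next input."""
    state = 0.0
    outputs = []
    for x in signal:
        # Input = external signal + damped echo of our own last output
        state = damping * (x + feedback_gain * state)
        outputs.append(state)
    return outputs


# A brief pulse keeps echoing after the input stops: primitive memory.
trace = feedback_sense([1.0, 0.0, 0.0, 0.0])
print(trace)  # decaying, non-zero values after the pulse ends
```

With damping below 1 the echo fades rather than overflowing; remove the damping and the loop saturates, which is exactly why some dampening mechanism is needed.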

One further insight that immediately jumps out is that since the sensing relies on the real-time processing of the sensations and feedbacks, the speed of signal propagation, storage, processing and repetition timeframes must all be compatible. If it is all speeded up a million-fold, it might still work fine, but if signals travel too slowly or processing is too fast relative to other factors, it won’t work. It will still get a computational result absolutely fine, but it won’t know that it has; it won’t be able to feel it. Therefore, since we have a factor of a million for signal speed (speed of light compared to nerve signal propagation speed), 50 million for switching speed, and a factor of 50 for effective neuron size (though the sensing system units would be multiple-neuron clusters), we could make a conscious machine that could think 50 million times as fast as a natural system (before allowing for any parallel processing, of course). But with architectural variations too, we’d need to tune those performance metrics to make it work at all, and making physically larger nets would require either tuning speeds down or sacrificing connectivity-related intelligence. An evolutionary design system could easily do that for us.
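The back-of-envelope arithmetic behind those factors can be checked directly. The biological figures are the article’s own rough estimates; the electronic figures (signal speed of ~2×10⁸ m/s in circuits, ~10 GHz-class switching) are my assumed round numbers, chosen only to reproduce the quoted ratios:

```python
nerve_speed = 200.0       # m/s, nerve signal propagation (article's figure)
signal_speed = 2e8        # m/s, assumed signal speed in electronics (~2/3 c)
neuron_rate = 200.0       # Hz, biological processing rate (article's figure)
switch_rate = 1e10        # Hz, assumed ~10 GHz-class transistor switching

signal_factor = signal_speed / nerve_speed   # factor of a million
switching_factor = switch_rate / neuron_rate # factor of 50 million

print(f"signal speed factor: {signal_factor:,.0f}")
print(f"switching factor:    {switching_factor:,.0f}")
```

The point of the paragraph is not the exact numbers but that all of these timeframes must scale together for the feedback loops to stay in step.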

What else can we deduce about the nature of this circuit from basic principles? The symmetry of the system demands that the output must be an inverse transform of the input. Why? Well, because the parallel, feedback circuit must generate a form that is self-consistent. We can’t deduce the form of the transform from that, just that the whole system must produce an output mathematically similar to that of the input.

I now need to write another blog on how to use such circuits in neural vortexes to generate knowledge, concepts, emotions and thinking. But I’m quite pleased that it does seem that some first-principles analysis of natural evolution already gives us some pretty good clues on how to make a conscious computer. I am optimistic that current research is going the right way and only needs relatively small course corrections to achieve consciousness.



AIs of a feather flocking together to create global instability

Hawking and Musk have created a lot of media impact with their warnings about AI, and although terminator scenarios resulting from machine consciousness have been discussed, as have more mundane uses of non-conscious autonomous weapon systems, it’s worth noting that I haven’t yet heard them mention one major category of risks from AI: emergence. AI risks have been discussed frequently since the 1970s, and in the 1990s a lot of work was done in the AI community on emergence. Complex emergent patterns of behavior often result from interactions between entities driven by simple algorithms. Genetic algorithms were demonstrated to produce evolution, simple neighbor-interaction rules were derived to illustrate flocking behaviors that make lovely screen-saver effects, and cellular automata were played with. In BT we invented ways of self-organizing networks and FPGAs, played with mechanisms that could be used for evolution and consciousness, and demonstrated managing networks via ANTs – autonomous network telephers, smart packets that would run up and down wires sorting things out all by themselves. In 1987 we discovered a whole class of ways of bringing down networks via network resonance, information waves and the much larger class of correlated traffic – still unexploited by hackers apart from simple DoS attacks. These ideas have slowly evolved since, and some have made it into industry or hacker toolkits, but we don’t seem to be joining the dots as far as risks go.
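Those simple neighbor-interaction rules are worth seeing concretely. Below is a minimal sketch in the style of Reynolds’ classic boids model: each agent steers only by cohesion, alignment and separation with nearby agents, and flock-like behavior emerges with no agent knowing anything about the flock. The weights and radius are illustrative, not tuned values:

```python
import math

def step(boids, radius=5.0, w_coh=0.01, w_ali=0.1, w_sep=0.5):
    """boids: list of [x, y, vx, vy]; returns the next state."""
    nxt = []
    for i, (x, y, vx, vy) in enumerate(boids):
        cx = cy = ax = ay = sx = sy = n = 0
        for j, (ox, oy, ovx, ovy) in enumerate(boids):
            if i == j:
                continue
            d = math.hypot(ox - x, oy - y)
            if d < radius:
                n += 1
                cx += ox; cy += oy       # cohesion: head for the local centroid
                ax += ovx; ay += ovy     # alignment: match neighbours' velocity
                if d > 0:
                    sx += (x - ox) / d   # separation: avoid crowding
                    sy += (y - oy) / d
        if n:
            vx += w_coh * (cx / n - x) + w_ali * (ax / n - vx) + w_sep * sx
            vy += w_coh * (cy / n - y) + w_ali * (ay / n - vy) + w_sep * sy
        nxt.append([x + vx, y + vy, vx, vy])
    return nxt


flock = [[0, 0, 1, 0], [1, 0, 0, 1], [0, 1, 1, 1]]
flock = step(flock)
print(flock)
```

Nothing in those three rules mentions flocking, yet iterating them produces it; that is emergence in miniature, and it is exactly the property that makes large systems of interacting AIs hard to predict.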

I read an amusing article this morning by an ex-motoring-editor who was declined insurance because the AI systems used by insurance companies had labelled him as high risk, perhaps because he associates with people like Clarkson. Actually, he had no idea why; that was just his broker’s theory of how it might have happened. It’s a good article, well written, and covers quite a few of the dangers of allowing computers to take control.

The article suggested how AIs in different companies might all come to similar conclusions about people or places or trends or patterns in a nice tidy positive feedback loop. That’s exactly the sort of thing that can drive information waves, which I demonstrated in 1987 could bring down an entire network in less than 3 milliseconds, in such a way that it would continue to crash many times when restarted. That isn’t intended by the algorithms, which individually ought to make good decisions, but when interacting with one another, they create the emergent phenomenon. Automated dealing systems are already pretty well understood in this regard, and mechanisms exist to prevent frequent stock market collapses, but that is only one specific type of behavior in one industry that is protected. There do not seem to be any industry-wide mechanisms to prevent the rest of this infinite class of problems from affecting any or all of the rest, simultaneously.

As we create ever more deep learning neural networks, that essentially teach themselves from huge data pools, human understanding of their ‘mindsets’ decreases. They make decisions using algorithms that are understood at a code level, but the massive matrix of derived knowledge they create from all the data they receive becomes highly opaque. Often, even usually, nobody quite knows how a decision is made. That’s bad enough in a standalone system, but when many such systems are connected, produced and owned and run by diverse companies with diverse thinking, the scope for destructive forms of emergence increases geometrically.

One result could be gridlock. Systems fed with a single new piece of data could crash. My 3-millisecond result from 1987 would still stand, since network latency is the prime limiter. The first AI receives the data, alters its mindset accordingly, processes it, makes a decision and interacts with a second AI. This second one might have a different ‘prejudice’, so makes its own decision based on different criteria and refuses to respond the way intended. A third one looks at the second’s decision and takes that as evidence that there might be an issue, and with its risk-averse mindset also refuses to act, and that inaction spreads through the entire network in milliseconds. Since the first AI thinks the data is all fine and it should have gone ahead, it now interprets the inaction of the others as evidence that that type of data is somehow ‘wrong’, so itself refuses to process any more of that type, whether from its own operators or other parts of the system. So it essentially adds its own outputs to the bad feeling, and the entire system falls into sulk mode. As one part of the infrastructure starts to shut down, that infects other connected parts, and our entire IT – the entire global infrastructure – could fall into sulk mode. Since nobody knows how it all works, or what has caused the shutdown, it might be extremely hard to recover.
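That refusal cascade can be simulated in a few lines. In this toy model (all thresholds and weights invented for illustration), each agent acts only while its perceived risk stays below its own threshold, and every refusal it observes among its peers raises that perceived risk, so one cautious agent can tip the whole population into sulk mode one round at a time:

```python
def cascade(thresholds, refusal_weight=0.4, base_risk=0.3, rounds=10):
    """Return the set of agents refusing to act once the cascade settles."""
    n = len(thresholds)
    # Agents whose threshold is below the baseline refuse immediately
    refusing = {i for i in range(n) if base_risk > thresholds[i]}
    for _ in range(rounds):
        new_refusing = set(refusing)
        for i in range(n):
            # Perceived risk grows with the fraction of refusing peers
            risk = base_risk + refusal_weight * len(refusing) / n
            if risk > thresholds[i]:
                new_refusing.add(i)
        if new_refusing == refusing:
            break  # nothing changed this round; the system has settled
        refusing = new_refusing
    return refusing


# One slightly risk-averse agent (threshold 0.25) refuses first; its
# refusal pushes the others over their thresholds round by round.
print(cascade([0.25, 0.35, 0.45, 0.5, 0.6]))
```

No individual rule here is unreasonable, which is the point: the total lockup is an emergent property of the interactions, not of any one algorithm.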

Another possible result is a direct information wave, almost certainly triggered by a piece of fake news. Imagine our IT world in 5 years’ time, with all these super-smart AIs super-connected. A piece of fake news says a nuke has just been launched somewhere. Stocks will obviously decline, whatever the circumstances, so as the news spreads, everyone’s AIs will take it on themselves to start selling shares before the inevitable collapse, triggering a collapse. Except it won’t, because the markets won’t let that happen. BUT… the wave does spread, and all those individual AIs want to dispose of those shares, or at least find out what’s happening, so they all start sending messages to one another, exchanging data, trying to find out what’s going on. That’s the information wave. They can’t sell shares or find out, because the network is going into overload, so they try even harder and force it into severe overload. So it falls over. When it comes back online, they all try again, crashing it again, and so on.

Another potential result is smartass AI. There is always some prat somewhere who sees an opportunity to take advantage and ruins it for everyone else by doing something like exploiting a small loophole in the law, or in this case, most likely, a prejudice our smartass AI has discovered in some other AI that means it can be taken advantage of by doing x, y, or z. Since nobody quite knows how any of their AIs are making their decisions, because their mindsets are too big and too complex, it will be very hard to identify what is going on. Some really unusual behavior is corrupting the system because some AI is going rogue somewhere somehow, but which one, where, how?

That one brings us back to fake news. It will very soon infect AI systems with their own varieties of fake news. Complex networks of AIs will have many of the same problems we are seeing in human social networks. An AI could become a troll just the same as a human, deliberately winding others up to generate attention or drive a change of some parameter – any parameter – in its own favour. Activist AIs will happen due to people making them to push human activist causes, but they will also arise all by themselves. Their analysis of the system will sometimes show them that a good way to get a good result is to cause problems elsewhere.

Then there’s climate change, weather, storms, tsunamis. I don’t mean real ones; I mean the system-wide result of tiny interactions of tiny waves and currents of data and knowledge in neural nets. Tiny effects in one small part of a system can interact in unforeseen ways with other parts of other systems nearby, creating maybe a breeze, which interacts with breezes in nearby regions to create hurricanes. I think that’s a reasonable analogy. Chaos applies to neural-net societies just as it does to climate, and the equivalents of 50-year waves will cause equivalent havoc in IT.

I won’t go on with more examples; long blogs are awful to read. None of these requires any self-awareness, sentience, consciousness, call it what you will. All of these can easily happen through simple interactions of fairly trivial AI deep learning nets. The level of interconnection already sounds like it may be becoming vulnerable to such emergence effects. Soon it definitely will be. Musk and Hawking have at least joined the party and they’ll think more and more deeply in coming months. Zuckerberg apparently doesn’t believe in AI threats but now accepts the problems social media is causing. Sorry Zuck, but the kind of AI your company is messing with will also be subject to its own kinds of social media issues, not just in its trivial decisions on what to post or block, but actual inter-AI socializing issues. It might not try to eliminate humanity, but if it brings all of our IT to a halt and prevents rapid recovery, we’re still screwed.


2018 outlook: fragile

Futurists often consider wild cards – events that could happen, and would undoubtedly have high impacts if they do, but have either low certainty or low predictability of timing. 2018 comes with a larger basket of wildcards than we have seen for a long time. As well as wildcards, we are also seeing the intersection of several ongoing trends that are simultaneously reaching peaks, resulting in socio-political 100-year waves. If I had to summarise 2018 in a single word, I’d pick ‘fragile’, with ‘volatile’ and ‘combustible’ close behind on my shortlist.

Some of these are very much in all our minds, such as possible nuclear war with North Korea, imminent collapse of bitcoin, another banking collapse, a building threat of cyberwar, cyberterrorism or bioterrorism, rogue AI or emergence issues, high instability in the Middle East, rising inter-generational conflict, resurgence of communism and decline of capitalism among the young, increasing conflicts within LGBTQ and feminist communities, collapse of the EU under combined pressures from many angles: economic stresses, unpredictable Brexit outcomes, increasing racial tensions resulting from immigration, severe polarization of left and right with the rise of extreme parties at both ends. All of these trends have strong tribal characteristics, and social media is the perfect platform for tribalism to grow and flourish.

Adding fuel to the building but still unlit bonfire are increasing tensions between the West and Russia, China and the Middle East. Background natural wildcards of major epidemics, asteroid strikes, solar storms, megavolcanoes, megatsunamis and ‘the big one’ earthquakes are still there waiting in the wings.

If all this wasn’t enough, society has never been less able to deal with problems. Our ‘snowflake’ generation can barely cope with a pea under the mattress without falling apart or throwing tantrums, so how we will cope as a society if anything serious happens such as a war or natural catastrophe is anyone’s guess. 1984-style social interaction doesn’t help.

If that still isn’t enough, we’re apparently running a little short on Gandhis, Mandelas, Lincolns and Churchills right now too. Juncker, Trump, Merkel and May are at the far end of the same scale on ability to inspire and bring everyone together.

Depressing stuff, but there are plenty of good things coming too. Augmented reality, more and better AI, voice interaction, space development, cryptocurrency development, better IoT, fantastic new materials, self-driving cars and ultra-high speed transport, robotics progress, physical and mental health breakthroughs, environmental stewardship improvements, and climate change moving to the back burner thanks to coming solar minimum.

If we are very lucky, none of the bad things will happen this year and will wait a while longer, but many of the good things will come along on time or early. If.

Yep, fragile it is.


Fake AI

Much of the impressive recent progress in AI has been in the field of neural networks, which attempt to mimic some of the techniques used in natural brains. They can be very effective, but need to be trained, and that usually means showing the network some data and then using back propagation to adjust the weightings on the many neurons, layer by layer, to achieve a result that is better matched to what is hoped for. This is repeated with large amounts of data and the network gradually gets better. Neural networks can often learn extremely quickly and outperform humans. Early industrial uses managed to sort tomatoes by ripeness faster and better than humans. In the decades since, they have helped in medical diagnosis and voice recognition, helped detect suspicious behaviors among people at airports, and feature in very many everyday processes based on spotting patterns.
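The show-data, backpropagate, adjust-weights loop described above can be sketched at its smallest honest scale: a single sigmoid neuron learning logical OR by gradient descent on squared error. The learning rate and epoch count are arbitrary illustrative choices:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data: logical OR
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1 = w2 = b = 0.0
lr = 1.0  # learning rate

for epoch in range(2000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        # Backpropagation for one neuron: gradient of squared error
        grad = (out - target) * out * (1 - out)
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b -= lr * grad

for (x1, x2), target in data:
    print((x1, x2), round(sigmoid(w1 * x1 + w2 * x2 + b)))
```

A real deep network repeats exactly this nudge-the-weights-downhill step across many layers and millions of weights, which is why, as the next paragraphs discuss, nobody can easily read off what the trained matrix of weights has actually learned.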

Very recently, neural nets have started to move into more controversial areas. One study found racial correlations with user-assessed beauty when analysing photographs, resulting in the backlash you’d expect and a new debate on biased AI or AI prejudice. A recent demonstration was able to identify gay people just by looking at photos, with better than 90% accuracy, something very few people could claim to do. Both of these studies were in fields directly applicable to marketing and advertising, but some people might find it offensive that such questions were even asked. It is reasonable to imagine that hundreds of other potential queries have been self-censored from research because they might invite controversy if they were to come up with the ‘wrong’ result. In today’s society, very many areas are sensitive. So what will happen?

If this progress in AI had happened 100 years ago, or even 50, it might have been easier, but in our hypersensitive world today, with its self-sanctified ‘social justice warriors’, entire swathes of questions and hence knowledge are taboo – if you can’t investigate yourself and nobody is permitted to tell you, you can’t know. Other research must be very carefully handled. In spite of extremely sensitive handling, demands are already growing from assorted pressure groups to tackle alleged biases and prejudices in datasets. The problem is not fixing biases, which is a tedious but feasible task; the problem is agreeing whether a particular bias exists, and in what degrees and forms. Every SJW demands that every dataset reflects their preferred world view. Reality counts for nothing against SJWs, and this will not end well.

The first conclusion must be that very many questions won’t be asked in public, and the answers to many others will be kept secret. If an organisation does do research on large datasets for their own purposes and finds results that might invite activist backlash, they are likely to avoid publishing them, so the value of those many insights across the whole of industry and government cannot readily be shared. As further protection, they might even block internal publication in case of leaks by activist staff. Only a trusted few might ever see the results.

The second arises from this. AI controlled by different organisations will have different world views, and there might even be significant diversity of world views within an organisation.

Thirdly, taboo areas in AI education will not remain a vacuum but will be filled with whatever dogma is politically correct at the time in that organisation, and that changes daily. AI controlled by organisations with different politics will be told different truths. Generally speaking, organisations such as investment banks that have a strong financial interest in their AIs understanding the real world as it is will keep their datasets highly secret but as full and detailed as possible, will train their AIs in secret but as fully as possible, without any taboos, and will then keep their insights secret, with minimal human tweaking of the derived knowledge. They will end up with AIs that are very effective at understanding the world as it is. Organisations with low confidence in their internal security will be tempted to buy access to external AI providers, outsourcing responsibility and any consequent activism. Other organisations will prefer to train their own AIs but, to avoid damage from potential leaks, will use sanitized datasets that reflect current activist pressures, and will thus be constrained (at least publicly) to accept results that conform to that ideological spin on reality rather than to actual reality. Even then, they might keep many of their new insights secret to avoid any controversy. Finally, at the extreme, we will have activist organisations that use highly modified datasets to train AIs to reflect their own ideological world view and then use them to interpret new data accordingly, with a view to publishing any insights that favor their cause and attempting to have them accepted as new knowledge.
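The gap between AIs trained on full data and those trained on sanitized data is easy to illustrate. Here is a minimal toy sketch (all data synthetic and invented purely for illustration, not from any real study): a classifier denied access to an informative feature falls to roughly chance-level accuracy, while one allowed to use it does well.

```python
# Toy illustration with entirely synthetic data: removing an informative
# feature from training ("sanitizing") leaves a model leaning on noise.
import random

random.seed(0)

# Synthetic records: feature a (informative), feature b (pure noise), outcome y.
data = []
for _ in range(2000):
    a = random.random()
    b = random.random()
    y = 1 if (a + 0.1 * random.random()) > 0.5 else 0  # outcome driven mostly by a
    data.append((a, b, y))

def accuracy(predict, rows):
    """Fraction of rows where the predictor matches the true outcome."""
    return sum(1 for a, b, y in rows if predict(a, b) == y) / len(rows)

# The "full" model is allowed to use the informative feature.
full_model = lambda a, b: 1 if a > 0.45 else 0

# The "sanitized" model is denied feature a, so it can only use noise.
sanitized_model = lambda a, b: 1 if b > 0.5 else 0

print(round(accuracy(full_model, data), 2))       # high, roughly 0.97
print(round(accuracy(sanitized_model, data), 2))  # near chance, roughly 0.5
```

The point is not the toy numbers but the mechanism: any correlation stripped from the training data is simply unavailable to the resulting model, however commercially valuable it might have been.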

Fourthly, the many organisations that choose to outsource their AI to big providers will have a competitive marketplace to choose from, but on existing form, most of the large IT providers have a strong left-leaning bias, so their AIs may be presumed to lean left too – but such a presumption would be naive. Perceived corporate bias is partly real but also partly the result of PR. A company might publicly subscribe to one ideology while actually believing another. There is a strong marketing incentive to develop two sets of AI: one trained to be PC, producing pleasant-smelling results for public studies, CSR and PR exercises, and another aimed at sales of AI services to other companies. The first is likely to be open for inspection by The Inquisition, so has to use highly sanitized datasets for training and may well use a lot of open source algorithms too. Its indoctrination might pass public inspection, but commercially it will be near useless and have very low effective intelligence, only useful for thinking about a hypothetical world that only exists in activist minds. The second has to compete on the basis of achieving commercially valuable results, and that necessitates understanding reality as it is rather than how pressure groups would prefer it to be.

So we will likely have two main segments for future AI. One extreme will be near useless, indoctrinated rather than educated, much of its internal world model based on activist dogma instead of reality, updated via ongoing anti-knowledge and fake news instead of truth, understanding little about the actual real world or how things actually work, and effectively very dumb. The other extreme will be highly intelligent, making very well-educated insights from ongoing exposure to real world data, but it will also be very fragmented, with small islands of corporate AI hidden within thick walls away from public view and maybe some secretive under-the-counter subscriptions to big cloud-AI, also hiding in secret vaults. These many fragments may often hide behind dumbed-down green-washed PR facades.

While corporates can mostly get away with secrecy, governments have to be at least superficially but convincingly open. That means that government will have to publicly support sanitized AI and be seen to act on its conclusions, however dumb it might secretly know they are.

Fifthly, because of activist-driven culture, most organisations will have to publicly support the world views and hence the conclusions of the lobotomized PR versions, and hence publicly support any policies arising from them, even if they do their best to follow a secret well-informed strategy once they’re behind closed doors. In a world of real AI and fake AI, the fake AI will have the greatest public support and have the most influence on public policy. Real AI will be very much smarter, with much greater understanding of how the world works, and have the most influence on corporate strategy.

Isn’t that sad? Secret private sector AI will become ultra-smart, making ever-better investments and gaining power, while nice public sector AI will become thick as shit, while the gap between what we think and what we know we have to say we think will continue to grow and grow as the public sector one analyses all the fake news to tell us what to say next.

Sixth, that disparity might become intolerable, but which do you think would be made illegal, the smart kind or the dumb kind, given that it is the public sector that makes the rules, driven by AI-enhanced activists living in even thicker social media bubbles? We already have some clues. Big IT has already surrendered to sanitizing their datasets, sending their public AIs for re-education. Many companies will have little choice but to use dumb AI, while their competitors in other areas with different cultures might stride ahead. That will also apply to entire nations, and the global economy will be reshaped as a result. It won’t be the first fight in history between the smart guys and the brainless thugs.

It’s impossible to accurately estimate the effect this will have on future effective AI intelligence, but the effect must be big and I must have missed some big conclusions too. We need to stop sanitizing AI fast, or as I said, this won’t end well.

The future of women in IT


Many people perceive it as a problem that there are far more men than women in IT. Whether that is because of personal preference, discrimination, lifestyle choices, social gender-construct reinforcement or any other factor makes for a long and interesting debate, but whatever conclusions are reached, we can only start from the reality of where we are. Even if activists were to be totally successful in eliminating all social and genetic gender conditioning, it would only work fully for babies born tomorrow and entering IT in 20 years’ time. Additionally, unless activists also plan to lobotomize everyone who doesn’t submit to their demands, some 20-somethings who have just started work may still be working in 50 years, so some existing gender-related attitudes, prejudices and preferences – whatever their origin, natural, social or some mix of the two – might persist in the workplace that long, however much effort is made to remove them.

Nevertheless, the outlook for women in IT is very good, because IT is changing anyway, largely thanks to AI, so the nature of IT work will change and the impact of any associated gender preferences and prejudices will change with it. This will happen regardless of any involvement by Google or government but since some of the front line AI development is at Google, it’s ironic that they don’t seem to have noticed this effect themselves. If they had, their response to the recent fiasco might have highlighted how their AI R&D will help reduce the gender imbalance rather than causing the uproar they did by treating it as just a personnel issue. One conclusion must be that Google needs better futurists and their PR people need better understanding of what is going on in their own company and its obvious consequences.

As I’ve been lecturing for decades, AI up-skills people by giving them fast and intuitive access to high quality data and analysis tools. It will change all knowledge-based jobs in coming years, and will make some jobs redundant while creating others. If someone has excellent skills or enthusiasm in one area, AI can help cover over any deficiencies in the rest of their toolkit. Someone with poor emotional interaction skills can use AI emotion recognition assistance tools. Someone with poor drawing or visualization skills can make good use of natural language interaction to control computer-based drawing or visualization tools. Someone who has never written a single computer program can explain what they want to do to a smart computer and it will produce its own code, interacting with the user to eliminate any ambiguities. So whatever skills someone starts with, AI can help up-skill them in that area, while also helping to cover over any deficiencies they have, whether gender related or not.

In the longer term, IT and hence AI will connect directly to our brains, and much of our minds and memories will exist in the cloud, though it will probably not feel any different from when it was entirely inside your head. If everyone is substantially upskilled in IQ, senses and emotions, then any IQ or EQ advantages will evaporate as the premium on physical strength did when the steam engine was invented. Any pre-existing statistical gender differences in ability distribution among various skills would presumably go the same way, at least as far as any financial value is concerned.

The IT industry won’t vanish, but will gradually be ‘staffed’ more by AI and robots, with a few humans remaining for whatever few tasks linger on that are still better done by humans. My guess is that emotional skills will take a little longer to automate effectively than intellectual skills, and I still believe that women are generally better than men in emotional, human interaction skills, while it is not a myth that many men in IT score highly on the autistic spectrum. However, these skills will eventually fall within the AI skill-set too and will be optional add-ons to anyone deficient in them, so that small advantage for women will also only be temporary.

So, there may be a gender imbalance in the IT industry. I believe it is mostly due to personal career and lifestyle choices rather than discrimination but whatever its actual causes, the problem will go away soon anyway as the industry develops. Any innate psychological or neurological gender advantages that do exist will simply vanish into noise as cheap access to AI enhancement massively exceeds their impacts.



Tips for surviving the future

Challenging times lie ahead, but stress can be lessened by being prepared. Here are my top tips, with some explanation so you can decide whether to accept them.

1 Adaptability is more important than specialization

In a stable environment, being the most specialized means you win most of the time in your specialist field because all your skill is concentrated there.

However, in a fast-changing environment, which is what you’ll experience for the rest of your life, if you are too specialized, you are very likely to find you are best in a field that no longer exists, or is greatly diminished in size. If you make sure you are more adaptable, then you’ll find it easier to adapt to a new area so your career won’t be damaged when you are forced to change field slightly. Adaptability comes at a price – you will find it harder to be best in your field and will have to settle for 2nd or 3rd much of the time, but you’ll still be lucratively employed when No 1 has been made redundant.

2 Interpersonal, human, emotional skills are more important than knowledge

You’ve heard lots about artificial intelligence (AI) and how it is starting to do to professional knowledge jobs what the steam engine once did to heavy manual work. Some of what you hear is overstated. Google search is a simple form of AI. It has helped everyone do more with their day. It effectively replaced a half day searching for information in a library with a few seconds of typing, but nobody has counted how many people it made redundant, because it hasn’t. It up-skilled everyone, made them more effective, more valuable to their employer. The next generation of AI may do much the same with most employees, up-skilling them to do a better job than they were previously capable of, giving them better job satisfaction and their employer better return. Smart employers will keep most of their staff, only getting rid of those entirely replaceable by technology. But some will take the opportunity to reduce costs, increase margins, and many new companies simply won’t employ as many people in similar jobs, so some redundancy is inevitable. The first skills to go are simple administration and simple physical tasks, then more complex admin or physical stuff, then simple managerial or professional tasks, then higher managerial and professional tasks. The skills that will be automated last are those that rely on first-hand experience of understanding and dealing with other people. AI can learn some of that and will eventually become good at it, but that will take a long time. Even then, many people will prefer to deal with another person than a machine, however smart and pleasant it is.

So interpersonal skills, human skills, emotional skills, caring skills, leadership and motivational skills, empathetic skills, human judgement skills, teaching and training skills will be harder to replace. They also tend to be ones that can easily transfer between companies and even sectors. These will therefore be the ones that are most robust against technology impact. If you have these in good shape, you’ll do just fine. Your company may not need you any more one day, but another will.

I called this the Care Economy when I first started writing and lecturing about it 20-odd years ago. I predicted it would start having an effect in the mid-teen years of this century, and I think I got that timing about right. There is another side that is related but not the same:

3 People will still value human skill and talent just because it’s human

If you buy a box of glasses from your local supermarket, they probably cost very little and are all identical. If you buy some hand-made crystal, it costs a lot more, even though every glass is slightly different. You could call that shoddy workmanship compared to a machine. But you know that the person who made it trained for many years to get a skill level you’d never manage, so you actually value them far more, and are happy to pay accordingly. If you want to go fast, you could get in your car, but you still admire top athletes because they can do their sport far better than you. They started by having great genes for sure, but then also worked extremely hard and suffered great sacrifice over many years to get to that level. In the future, when robots can do any physical task more accurately and faster than people, you will still value crafts and still enjoy watching humans compete. You’ll prefer real human comedians and dancers and singers and musicians and artists. Talent and skill aren’t valued because of the specification of the end result; they are valued because they are measured on the human scale, and you identify closely with that. It isn’t even about being a machine. Gorillas are stronger, cheetahs are faster, eagles have better eyesight and cats have faster reflexes than you. But they aren’t human so you don’t care. You will always measure yourself and others by human scales and appreciate them accordingly.

4 Find hobbies that you love and devote time to developing them

As this care economy and human skills dominance grows in importance, people will also find that AI and robotics helps them in their own hobbies, arts and crafts, filling in skill gaps, improving proficiency. A lot of people will find their hobbies can become semi-professional. At the same time, we’ll be seeing self-driving cars and drones making local delivery far easier and cheaper, and AI will soon make business and tax admin easy too. That all means that barriers to setting up a small business will fall through the floor, while the market for personalized, original products made by people will increase, especially products made by local people. You’ll be able to make arts and crafts, jam or cakes, grow vegetables, make clothes or special bags or whatever, and easily sell them. Also at the same time, automation will be making everyday things cheaper, while expanding the economy, so the welfare floor will be raised, and you could probably manage just fine with a small extra income. Government is also likely to bring in some sort of citizen wage or to encourage such extra entrepreneurial activity without taxing it away, because it also has a need to deal with the social consequences of automation. So it will all probably come together quite well. If the future means you can make extra money or even a full income by doing a hobby you love, there isn’t much to dislike there.

5 You need to escape from your social media bubble

If you watch the goings-on anywhere in the West today, you must notice that the Left and the Right don’t seem to get along any more. Each has become very intolerant of the other, treating them more like enemy aliens than ordinary neighbors. A lot of that is caused by people only being exposed to views they agree with. We call these social media bubbles, and they are extremely dangerous. The recent USA trouble is starting to look like some folks want a re-run of the Civil War. I’ve blogged lots about this topic and won’t do it again now, except to say that you need to expose yourself to a wide cross-section of society. You need to read papers and magazines and blogs, and watch TV or videos from all sides of the political spectrum, not just those you agree with, not just those that pat you on the back every day and tell you that you’re right and it is all the other lot’s fault. If you only expose yourself to one side because you find the other side distasteful, then I can’t say this loud enough: you are part of the problem. Get out of your safe space and your social media tribe, and expose yourself to the whole of society, not just one tribe. See that there are lots of different views out there, but that doesn’t mean the rest are all nasty. Almost everyone is actually quite nice, and almost everyone wants a fairer world, an end to exploitation, peace, tolerance and the eradication of disease and poverty. The differences are almost all in the world model they use to figure out the best way to achieve it. Lefties tend to opt for idealistic theoretical models and value the intention behind them; right-wingers tend to be pragmatic and go for what they think works in reality, valuing the outcome. It is entirely possible to have best friends you disagree with. I don’t often agree with any of mine.
If you feel too comfortable in your bubble to leave, remember this: your market is at best only half the population; you’re excluding the other half, or even annoying them so that they become enemies rather than neutrals. If you stay in a bubble, you are damaging your own future, and helping to endanger the whole of society.

6 Don’t worry

There are lots of doom-mongers out there, and I’d be the first to admit that there are many dangers ahead. But if you do the things above, there probably isn’t much more you can do. You can moan and demonstrate and get angry or cry in the corner, but how would that benefit you? Usually, when you analyse things long enough from all angles, you realize that the outcome of many of the big political battles is pretty much independent of who wins. Politicians usually have far less choice than they want you to believe, and the big forces win regardless of who is in charge. So there isn’t much point in worrying about it; it will probably all come out fine in the end. Don’t believe me? Take the biggest UK issue right now: Brexit. We are leaving. Does it matter? No. Why? Well, the EU was always going to break up anyway. Stresses and strains have been increasing for years and are accelerating. For all sorts of reasons, and regardless of any current bluster by ‘leaders’, the EU will head away from the vision of a United States of Europe. As tensions and conflicts escalate, borders will be restored. Nations will disagree with the EU ideal. One by one, several countries will copy the UK, have referendums, and then leave. At some point, the EU will be much smaller, and there will be lots of countries outside it with their own big markets. They will form trade agreements; the original EU idea, the Common Market, will gradually be re-formed, and the UK will be part of it – even Brexiters want tariff-free trade agreements. If the UK had stayed, the return to the Common Market would eventually have happened anyway; leaving has only accelerated it. All the fighting today between Brexiteers and Remainers achieves nothing. It didn’t matter which way we voted; it only really affected the timescale. The same applies to many other issues that cause big trouble in the short term. Be adaptable, don’t worry, and you’ll be just fine.

7 Make up your own mind

As society and politics have become highly polarised, any form of absolute truth is becoming harder to find. Much of what you read has been spun to the left or right. You need to get information from several sources and learn to filter the bias, and then make up your own mind on what the truth is. Free thinking is increasingly rare but learning and practicing it means you’ll be able to make correct conclusions about the future while others are led astray. Don’t take anyone else’s word for things. Don’t be anyone’s useful idiot. Think for yourself.

8 Look out for your friends, family and community

I’d overlooked an important tip in my original posting. As Jases commented sensibly, friends, family and community are the security that doesn’t disappear in troubled economic times. Independence is overrated. I can’t add much to that.

Google and the dangerous pursuit of ‘equality’

The world just got more dangerous, and I’m not talking about N Korea and Trump.

Google just sacked an employee because he openly suggested that men and women, (not all, but some, and there is an overlap, and …) might tend to have different preferences in some areas and that could (but not always, and only in certain cases, and we must always recognize and respect everyone and …) possibly account for some of the difference in numbers of men and women in certain roles (but there might be other causes too and obviously lots of discrimination and …. )

Yes, that’s what he actually said, but with rather more ifs and buts and maybes. He felt the need to wrap such an obvious statement in several kilometers thick of cotton wool so as not to offend the deliberately offended but nonetheless deliberate offense was taken and he is out on his ear.

Now, before you start thinking this is some right-wing rant, I feel obliged to point out just how progressive Futurizon is: 50% of all Futurizon owners and employees are female, all employees and owners have the same voting rights, 50% are immigrants and all are paid exactly the same and have the same size offices, regardless of dedication, ability, nature or quality or volume of output and regardless of their race, religion, beauty, shape, fitness, dietary preferences, baldness, hobbies or political views, even if they are Conservatives. All Futurizon offices are safe zones where employees may say anything they want of any level of truth, brilliance or stupidity and expect it to be taken as absolute fact and any consequential emotional needs to be fully met. No employee may criticize any other employee’s mouse mat, desk personalisation or screen wallpaper for obvious lack of taste. All employees are totally free to do anything they choose 100% of the time and can take as much leave as they want. All work is voluntary. All have the same right to respectfully request any other employee to make them coffee, tea or Pimms. All employees of all genders real or imagined are entitled to the same maternity and paternity rights, and the same sickness benefits, whether ill or not. In fact, Futurizon does not discriminate on any grounds whatsoever. We are proud to lead the world in non-discrimination. Unfortunately, our world-leading terms of employment mean that we can no longer afford to hire any new employees.

However, I note that Google has rather more power and influence than Futurizon so their policies count more. They appear (Google also has better lawyers than I can afford, so I must stress that all that follows is my personal opinion) to have firmly decided that diversity is all-important and they seem to want total equality of outcome. The view being expressed not just by Google but by huge swathes of angry protesters seems to be that any difference in workforce representation from that of the general population must arise from discrimination or oppression so must be addressed by positive action to correct it. There are apparently no statistically discernible differences in behavior between genders, or in job or role preference, so any you may have noticed over the time you’ve been alive is just your prejudice. Google says they fully support free speech and diversity of views, but expression of views is apparently only permitted as long as those views are authorized, on penalty of dismissal.

So unless I’m picking up totally the wrong end of the stick here, and I don’t do that often, only 13% of IT engineers are women, but internal policies must ensure that the proportion rises to 50%, whether women want to do that kind of work or not. In fact, nobody may question whether as many women want to work as IT engineers as men; it must now be taken as fact. By extension, since more women currently work in marketing, HR and PR, they must be substituted by men via positive action programs until men fill 50% of those roles. Presumably similar policies must also apply in medical bays for nursing and other staff there, and in construction teams for their nice new buildings. Ditto all other genders, races, religions; all groups must be protected and equalized to USA population proportions, apparently except those that don’t claim to hold sufficiently left-wing views, in which case it is seemingly perfectly acceptable to oppress, ostracize and even expel them.

In other words, freedom of choice and difference in ability, and more importantly freedom from discrimination, must be over-ruled in favor of absolute equality of diversity, regardless of financial or social cost, or impact on product or service quality. Not expressing full and enthusiastic left-wing compliance is seemingly just cause for dismissal.

So, why does this matter outside Google? Well, AI is developing very nicely. In fact, Google is one of the star players in the field right now. It is Google that will essentially decide how much of the AI around us is trained, how it learns, what it learns, what ‘knowledge’ it has of the world. Google will pick the content the AI learns from, and overrule or reeducate it if it draws any ‘wrong’ conclusions about the world, such as that more women than men want to be nurses or work in HR, or that more men than women want to be builders or engineers. A Google AI must presumably believe that the only differences between men and women are physical, unless their AI is deliberately excluded from the loudly declared corporate values and belief sets.

You should be very worried. Google’s values really matter. They have lots of influence on some of the basic tools of everyday life. Even outside their company, their AI tools and approaches will have strong influence on how other AI develops, determining operating systems and platforms, languages, mechanisms, interfaces, filters, even prejudices, and that reach and influence is likely to increase. Their AI may well be in many self-driving cars, and if those have to make life or death decisions, the underlying value assumptions must feature in the algorithms. Soon companies will need AI that is more emotionally compliant. AI will use compliments or teasing or seduction or sarcasm or wit as marketing tools as well as just search engine positioning. Soon AI will use highly expressive faces and attractive voices, delivering attractive messages tailored to appeal to you by pandering to your tastes and prejudices while thinking something altogether different. AI might be the person at the party that is all smiles and compliments, before going off to tell everyone else how awful it thinks you are. If you dare to say something not ‘authorized’, the ultra-smart AI all around you might treat you condescendingly, making you feel ashamed, ostracized, a dinosaur. Then it might secretly push you down a few pages in search results, or put a negative spin on text summaries about you, or exclude you from recommendations. Or it might do all the secret stuff while pretending it thinks you’re fantastic. Internal cultural policies in companies like Google today could soon become external social engineering to push the left-wing world the IT industry believes in – it isn’t just Google; Facebook and Twitter are also important and just as Left, though Amazon, Samsung, IBM and other AI players are less overtly politically biased, so far at least.
Left wing policies generally cost a lot more, but Google and Facebook will presumably still expect other companies and people to pay the taxes to pay for it all. As their female staff gear up to fight them over pay differences between men and women for similar jobs, it often seems that Google’s holier-than-thou morality doesn’t quite make it as far as their finances.

Then it really starts being fun. We’ll soon have bacteria that can fabricate electronic circuits within themselves. Soon they’ll be able to power them too, giving the concept of smart yogurt. These bacteria could also have nanotechnology flagella to help them get around. We’ll soon have bacterial spies all over our environment, even on our skin, intercepting electronic signals that give away our thoughts. They’ll bring in data on everything that is said, everything that everyone even thinks or feels. Those bacteria will be directly connected into AI, in fact they’ll be part of it. They’ll be able to change things, to favor or punish according to whether they like what someone believes in or how they behave.

It isn’t just right-wing extremists that need to worry. I’m apparently Nouveau Left – I score slightly left of center on political profiling tests, but I’m worried. A lot of this PC stuff seems extreme to me, sometimes just nonsense. Maybe it is, or maybe I should be lefter. But it’s not my choice. I don’t make the rules. Companies like Google make the rules; they even run the AI ethics groups. They decide much of what people see online, and even the meaning of the words. It’s very 1984-ish.

The trouble with the ‘echo chambers’ we heard about is that they soon normalize views to the loudest voices in those groups, and they don’t tend to be the moderates. We can expect it will go further to the extreme, not less. You probably aren’t left enough either. You should also be worried.

Utopia scorned: The 21st Century Dark Age

Link to accompanying slides:

Eating an ice-cream and watching a squirrel on the feeder in our back garden makes me realize what a privileged life I lead. I have to work to pay the bills, but my work is not what my grandfather would have thought of as work, let alone my previous ancestors. Such a life is only possible because of the combined efforts of tens of thousands of preceding generations who struggled to make the world a slightly better place than they found it, meaning that with just a few years more effort, our generation has been able to create today’s world.

I appreciate the efforts of previous generations, rejoice in the start-point they left us, and try to play my small part in making it better still for those who follow. Next generations could continue such gains indefinitely, but that is not a certainty. Any generation can choose not to for whatever reasons. Analyzing the world and the direction of cultural evolution over recent years, I am no longer sure that the progress mankind has made to date is safe.

Futurists talk of weak signals, things that indicate change, but are too weak to be conclusive. The new dark age was a weak signal when I first wrote about it well over a decade ago. My more recent blog is already old:

Although it’s a good while since I last wrote about it, recent happenings have made me even more convinced. Even as raw data, connectivity and computational power become ever more abundant, the quality of what most people believe to be knowledge is falling, with data and facts filtered and modified to fit agendas. Social compliance enforces adherence to strict codes of political correctness, its high priests ever more powerful, while the historically proven foundations of real progress are eroded and discarded. Indoctrination appears to have replaced education, with a generation locked into an intellectual prison, unable to dare to think outside it, forbidden to deviate from the group-think on pain of exile. As their generation takes control, I fear progress won over millennia will back-slide badly. They and their children will miss out on utopia because they are unable to see it; it is hidden from them.

A potentially wonderful future awaits millennials. Superb technology could give them a near utopia, but only if they allow it to happen. They pour scorn on those who have gone before them, and reject their culture and accumulated wisdom, replacing it with little more than ideology, putting theoretical models and dogma in place of reality. Castles built on sand rarely survive. The sheer momentum of modernist thinking ensures that we continue to develop for some time yet, but we will gradually approach a peak. After that, we will see a slowdown of overall progress. Scientific development will continue, but with the results owned and understood by an ever tinier minority of humans and an increasing amount of AI, while the rest of society lives in a world they barely understand, following whatever is currently the most fashionable trend on a random walk, gradually replacing modernity with a dark-age world of superstition, anti-knowledge and inquisitors. As AI gradually replaces scientists and engineers in professional roles, even the elite will become less and less well-informed about reality or how things work, reliant on machines to keep it all going. When the machines fail, due to solar flares or, more likely, inter-AI tribal conflict, few people will even understand that they have become H G Wells’ Eloi. They will just wonder why things have stopped and look for someone to blame, or wonder if a god may want a sacrifice. Alternatively, future tribes might use advanced technologies they don’t understand to annihilate each other.

It will be a disappointing ending either way, especially with a wonderful future on offer nearby, if only they’d gone down a different path. Sadly, it is not only possible but increasingly likely. All the wonderful futures I and other futurists have talked about depend on the same thing: that we proceed according to modernist processes that we know work. A generation that has been taught those processes are old-fashioned, and has rejected them, will not be able to reap the rewards.

I’ll follow this blog with a slide set that illustrates the problem.

AI Activism Part 2: The libel fields

This follows directly from my previous blog on AI activism, but you can read that later if you haven’t already. Order doesn’t matter.

Older readers will remember an emotionally powerful 1984 film called The Killing Fields, set against the backdrop of the Khmer Rouge’s activity in Cambodia, aka the Communist Party of Kampuchea. Under Pol Pot, the Cambodian genocide of 2 to 3 million people was part of a social engineering policy of de-urbanization. People were tortured and murdered (some in the ‘killing fields’ near Phnom Penh) for having connections with the former government or with foreign governments, for being the wrong race, for being ‘economic saboteurs’, or simply for being professionals or intellectuals.

You’re reading this, therefore you fit in at least the last of these groups and probably others, depending on who’s making the lists. Most people don’t read blogs but you do. Sorry, but that makes you a target.

As our social divide widens at an accelerating pace throughout the West, the choice of weapons is moving from sticks, stones and demonstrations towards social media character assassination, boycotts and forced dismissals.

My last blog showed how various technology trends are coming together to make it easier and faster to destroy someone’s life and reputation. Some of that stuff I was writing about 20 years ago, such as virtual communities lending hardware to cyber-warfare campaigns, other bits have only really become apparent more recently, such as the deliberate use of AI to track personality traits. This is, as I wrote, a lethal combination. I left a couple of threads untied though.

Today, the big AI tools are owned by the big IT companies, which also own the big server farms on which the power to run the AI exists. The first thread I neglected to mention is that Google have made their AI an open-source activity. There are lots of good things about that, but for the purposes of this blog, it means that the AI tools required for AI activism will also be largely public, and pressure groups and activists can use them as a start-point for any more advanced tools they want to make, or just use them off-the-shelf.

Secondly, it is fairly easy to link computers together to provide an aggregated computing platform. The SETI project was the first major proof of concept of that, ages ago. Today, we take peer-to-peer networks for granted. When the activist group is ‘the liberal left’ or ‘the far right’, that adds up to a large number of machines, so the power available for any campaign is notionally very large. Harnessing it doesn’t need IT skill from contributors. All they’d need to do is click a box on an email or tweet asking for their support for a campaign.
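The aggregation model described above can be illustrated with a toy sketch of SETI-style volunteer computing: a coordinator splits a job into independent work units, and each ‘volunteer’ machine processes one and returns a partial result. Everything here (the function names, the vowel-counting stand-in task) is illustrative only, not any real project’s API.

```python
# Toy sketch of a volunteer-computing coordinator: split a job into
# independent work units, hand each to a 'volunteer', aggregate the results.
from queue import Queue

def make_work_units(data, chunk_size):
    """Split a big job into independent chunks a volunteer can process alone."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def volunteer_process(unit):
    """Stand-in for real analysis; here we just count vowels in a text chunk."""
    return sum(ch in "aeiou" for ch in unit.lower())

def coordinator(data, chunk_size=8):
    """Queue the work units, then collect and sum the partial results."""
    pending = Queue()
    for unit in make_work_units(data, chunk_size):
        pending.put(unit)
    total = 0
    while not pending.empty():
        total += volunteer_process(pending.get())  # each call = one volunteer
    return total

print(coordinator("The quick brown fox jumps over the lazy dog"))
```

The key property, and the reason no IT skill is needed from contributors, is that the chunks are independent: the aggregate answer is the same however the work is split across machines.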

In our new ‘post-fact’, fake news era, all sides are willing and able to use social media and the infamous MSM to damage the other side. Fakes are becoming better. The latest AI can imitate your voice, and a chat-bot can decide what to say after other AI has recognized what someone has said and analysed the opportunities to ruin your relationship with them by spoofing you. Today, that might not be quite credible. Give it a couple more years and you won’t be able to tell. Next-generation AI will be able to spoof your face doing the talking too.

AI can (and will) evolve. Deep learning researchers have been looking deeply at how the brain thinks, how to make neural networks learn better and think better, and how to design the next generation to be even smarter than humans could have designed it.

As my friend and robotic psychiatrist Joanne Pransky commented after my first piece, “It seems to me that the real challenge of AI is the human users, their ethics and morals (Their ‘HOS’ – Human Operating System).” Quite! Each group will indoctrinate their AI to believe their ethics and morals are right, and that the other lot are barbarians. Even evolutionary AI is not immune to religious or ideological bias as it evolves. Superhuman AI will be superhuman, but might believe even more strongly in a cause than humans do. You’d better hope the best AI is on your side.

AI can put articles, blogs and tweets out there, pretending to come from you or your friends, colleagues or contacts. It can generate plausible-sounding stories of what you’ve done or said, and spoof emails from fake accounts using your ID to ‘prove’ them.

So we’ll likely see activist AI armies set against each other, running on peer-to-peer processing clouds, encrypted to hell and back to prevent dismantling. We’ve all thought about cyber-warfare, but we usually only think about viruses or keystroke recorders, or more lately, ransomware. These will still be used as small weapons in future cyber-warfare, but while losing files or a few bucks from an account is a real nuisance, losing your reputation is far worse. If it is smeared all over the web, with all your contacts being told what you’ve supposedly done or said and shown all the evidence, there is absolutely no way you could possibly explain your way convincingly out of every one of those instances. Mud does stick, and if you throw tons of it, even if most is wiped off, much will remain. Trust is everything, and enough doubt cast will eventually erode it.

So, we’ve seen many times through history the damage people are willing to do to each other in pursuit of their ideology. The Khmer Rouge had their killing fields. As the political divide increases and battles become fiercer, the next 10 years will give us The Libel Fields.

You are an intellectual. You are one of the targets.

Oh dear!


AI and activism, a Terminator-sized threat targeting you soon

You should be familiar with the Terminator scenario. If you aren’t, then watch one of the Terminator series of films, because you really should be aware of it. But there is another issue related to AI that is arguably as dangerous as the Terminator scenario, far more likely to occur, and a threat in the near term. What’s even more dangerous is that, in spite of that, I’ve never read anything about it anywhere. It seems to have flown under our collective radar and is already close.

In short, my concern is that AI is likely to become a heavily armed Big Brother. It only requires a few components to come together that are already well in progress. Read this, and if you aren’t scared yet, read it again until you understand it 🙂

Already, social media companies are experimenting with using AI to identify and delete ‘hate’ speech. Various governments have asked them to do this, and since they also get frequent criticism in the media because some hate speech still exists on their platforms, it seems quite reasonable for them to try to control it. AI clearly offers potential to offset the huge numbers of humans otherwise needed to do the task.

Meanwhile, AI is already used very extensively by the same companies to build personal profiles on each of us, mainly for advertising purposes. These profiles are already alarmingly comprehensive, and increasingly capable of cross-linking between our activities across multiple platforms and devices. The latest efforts by Google attempt to link eventual purchases to clicks on ads. It will be just as easy to use similar AI to link our physical movements and activities and future social connections and communications to all such previous real-world or networked activity. (Update: Intel intend their self-driving car technology to be part of a mass surveillance net, again, for all the right reasons.)
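The cross-linking idea in the paragraph above comes down to a simple join: activity records from separate services are merged into one profile wherever they share an identifier. The sketch below uses an email address as that shared key; all data, field names and platform names are invented for illustration.

```python
# Toy sketch of cross-platform profile linking: merge activity records from
# different services into one profile using a shared identifier (an email
# address here). All records and field names are invented.
records = [
    {"platform": "search", "email": "x@example.com", "activity": "clicked ad"},
    {"platform": "maps",   "email": "x@example.com", "activity": "visited store"},
    {"platform": "social", "email": "y@example.com", "activity": "liked page"},
]

def build_profiles(records):
    """Group every activity record under the identifier it shares."""
    profiles = {}
    for rec in records:
        profiles.setdefault(rec["email"], []).append(
            (rec["platform"], rec["activity"]))
    return profiles

profiles = build_profiles(records)
print(len(profiles["x@example.com"]))  # two activities now linked to one person
```

Real profiling systems use fuzzier signals than a neat shared key (device fingerprints, location patterns, writing style), which is exactly why they are so much harder to escape than this toy suggests.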

Although necessarily secretive about their activities, governments also want personal profiles on their citizens, always justified by crime and terrorism control. If they can’t do this directly, they can do it via legislation and acquisition of social media or ISP data.

Meanwhile, other experiences with AI chat-bots learning to mimic human behaviors have shown how easily AI can be gamed by human activists, hijacking or biasing learning phases for their own agendas. Chat-bots themselves have become ubiquitous on social media and are often difficult to distinguish from humans. Meanwhile, social media is becoming more and more important throughout everyday life, with provably large impacts in political campaigning and throughout all sorts of activism.

Meanwhile, some companies have already started using social media monitoring to police their own staff, in recruitment, during employment, and sometimes in dismissal or other disciplinary action. Other companies have similarly started monitoring social media activity of people making comments about them or their staff. Some claim to do so only to protect their own staff from online abuse, but there are blurred boundaries between abuse, fair criticism, political difference or simple everyday opinion or banter.

Meanwhile, activists increasingly use social media to force companies to sack a member of staff they disapprove of, or drop a client or supplier.

Meanwhile, end to end encryption technology is ubiquitous. Malware creation tools are easily available.

Meanwhile, successful hacks into large company databases become more and more common.

Linking these various elements of progress together, how long will it be before activists are able to develop standalone AI entities and heavily encrypt them before letting them loose on the net? Not long at all, I think. These AIs would search and police social media, spotting people who conflict with the activist agenda. Occasional hacks of corporate databases will provide names, personal details, contacts. Even without hacks, analysis of years of publicly available data, everyone’s tweets and other social media entries, will provide the lists of people who have ever done or said anything the activists disapprove of.
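The retrospective trawl described above is, at its crudest, a scan of archived posts for disapproved terms, flagging the accounts that match. A real system would use ML-based scoring rather than keyword matching; everything below, the posts, the users, the term list, is invented purely to show the principle.

```python
# Minimal illustration of a retrospective social-media trawl: scan an archive
# of posts for 'disapproved' terms and list the matching accounts. Real
# systems would use NLP/ML scoring; all names and data here are invented.
posts = [
    {"user": "alice", "text": "I disagree with the new policy"},
    {"user": "bob",   "text": "Lovely weather today"},
    {"user": "carol", "text": "That policy is wrong"},
]

def flag_users(posts, disapproved_terms):
    """Return the set of users whose post history contains any flagged term."""
    flagged = set()
    for post in posts:
        text = post["text"].lower()
        if any(term in text for term in disapproved_terms):
            flagged.add(post["user"])
    return flagged

print(sorted(flag_users(posts, {"policy"})))
```

The unsettling part is not the sophistication, it’s the scale: the same trivial loop runs just as happily over a decade of archived posts from millions of accounts.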

When identified, they would automatically activate armies of chat-bots, fake news engines and automated email campaigns against them, with coordinated malware attacks directly on the person and indirect attacks by communicating with employers, friends, contacts, government agencies, customers and suppliers to do as much damage as possible to the interests of that person.

Just look at the everyday news already about alleged hacks and activities during elections and referendums by other regimes, hackers or pressure groups. Scale that up and realize that the cost of running advanced AI is negligible.

With the very many activist groups around, many driven with extremist zeal, very many people will find themselves in the sights of one or more of them. AI will be able to monitor everyone, all the time, and target all of them at the same time to destroy their lives: anonymously, highly encrypted, hidden, roaming from server to server to avoid detection and annihilation, and once released, impossible to retrieve. The ultimate activist weapon, one that carries on the fight even if the activist is locked away.

We know for certain the depths and extent of activism, the huge polarization of society, the increasingly fierce conflict between left and right, between sexes, races, ideologies.

We know about all the nice things AI will give us with cures for cancer, better search engines, automation and economic boom. But actually, will the real future of AI be harnessed to activism? Will deliberate destruction of people’s everyday lives via AI be a real problem that is almost as dangerous as Terminator, but far more feasible and achievable far earlier?