
The rise of Dr Furlough, Evil Super-Villain

Too early for an April Fool blog, but hopefully this might lighten your day a bit.

I had the enormous pleasure this morning of interviewing the up-and-coming Super-Villain Dr Furlough about her new plans to destroy the world after being scorned by the UK Government’s highly selective support policy. It seems that Hell hath no fury like a Super-Villain scorned, and Dr Furlough leaves no doubt that she blames incompetent government response for the magnitude of the current crisis:


“By late January, it should have been obvious to everyone that this would quickly grow to become a major problem unless immediate action was taken to prevent people bringing the virus into the country. Flights from infected areas should have been stopped immediately, anyone who might have been in contact with the virus should have been forcibly quarantined, and everyone found infected should have had their contacts traced and also quarantined. This would have been disruptive and expensive, but a tiny fraction of the problem we now face. Not to do so was to give the virus the freedom to spread and infect widely until it became a severe problem. Very few need have died and the economy need not now be trashed; instead we face the full enormous cost of that early refusal to act.”

“With all non-essential travel now blocked,” Dr Furlough explained, “many people have had their incomes totally wiped out, not through any fault of their own but by the government’s incompetence in handling the coronavirus.” Although most of them have been promised state support, many haven’t, and have, as Dr Furlough puts it, ‘been thrown under a bus’. While salaried people who can’t work are given 80% of their wages, and those with their own businesses will eventually receive 80% of their average earnings up to £2,500/month whether they are still working or not, the two million who chose to run their small businesses by setting up limited companies will qualify for 80% of only the often small fraction of income they pay themselves as basic salary, and not of the bulk of their income most take via dividends once their yearly profits are clearer. Consequently, many will have immediately dropped from comfortable incomes to 80% of minimum wage. Many others who have already lost their jobs have been thrown onto universal credit. The future high taxes will have to be paid by everyone, whether they received support or were abandoned. Instead of treating everyone equally, the state has thus created a seething mass of deep resentment. Dr Furlough seems determined to have her evil revenge.


With her previous income obliterated, and scorned by the state support system, the ever self-reliant Dr Furlough decided to “screw the state” and forge a new career as a James-Bond-style Super-Villain; she complained that it was long overdue for a female Super-Villain to take that role anyway. I asked her about her evil plans and, like all traditional Super-Villains, she was all too eager to tell. So, to quote her verbatim:

“My Super-Evil Plan 1: Tap into the global climate alarmist market to crowd-fund GM creation of a super-virus, based on COVID-19. More contagious, more lethal, and generally more evil. This will reduce world population, reduce CO2 emissions and improve the environment. It will crash the global economy and make them all pay. As a bonus, it will ensure the rise of evil regimes where I can prosper.”

She continued: “My Evil Super-Plan 2: To invent a whole pile of super-weapons and sell the designs to all the nasty regimes, dictators, XR and other assorted doomsday cults, pressure groups, religious nutters and mad scientists. Then to sell ongoing evil consultancy services while deferring VAT payments.”


“Muhuahuahua!” she cackled, evilly.

“My Super-Plan 3: To link AI and bacteria to make adaptive super-diseases. Each bacterium can be genetically enhanced to include bioluminescent photonic interconnects linked to cloud AI with reciprocal optogenetic niche adaptation. With bacteria clouds acting as distributed sensor nets for an emergent conscious transbacteria population, my new bacteria will be able to infect any organism and adapt to any immune system response, ensuring the host’s demise and my glorious revenge.”


By now, Dr Furlough was clearly losing it. Having heard enough anyway, I asked The Evil Dr Furlough if there was no alternative to destroying the world and life as we know it.

“Well, I suppose I could just live off my savings and sit it all out,” she said.

 

AI could use killer drone swarms to attack people while taking out networks

In 1987 I discovered a whole class of security attacks that could knock out networks, which I called correlated traffic attacks: attacks that create particular patterns of data packet arrivals from particular sources at particular times or intervals. We simulated two examples to verify the problem. One example was protocol resonance. I demonstrated that it was possible to push a system into a gross overload state with a single call, by spacing its packets at precise intervals. Their arrival caused a strong resonance in the bandwidth allocation algorithms, and the result was that network capacity was instantaneously reduced by around 70%.

The other example was information waves, whereby a single piece of information appearing at a particular point could knock out a network through its interaction with particular apps on mobile devices. The assumption was financially relevant data that would trigger AI on the devices to start requesting voluminous data, producing a highly correlated wave of responses, using up bandwidth and throwing the network into overload, very likely crashing it as rarely used software was initiated. When calls couldn’t get through, the devices would wait until the network recovered, then all simultaneously detect recovery and simultaneously try again, killing the net again, and again, until people were asked to turn their devices off and on again, thereby bringing randomness back into the system. Both of these examples could knock out certain kinds of networks, but they are just two of an infinite set of possibilities in the correlated traffic attack class.
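
To make the information-wave mechanism concrete, here is a minimal toy simulation. All the numbers are invented for illustration (10,000 devices, capacity for 3,000 requests per tick); it is a sketch of the principle, not a model of any real network. With perfectly synchronized retries the network crashes on every recovery; a little random back-off, the code equivalent of asking people to turn devices off and on again, lets it drain:

```python
import random

def simulate(num_devices=10_000, capacity=3_000, ticks=50, jitter=0):
    """Toy model of an 'information wave' retry storm.

    Every device tries to fetch the same piece of data. If attempts in a
    tick exceed capacity, the network 'crashes' for that tick and every
    attempting device schedules a retry. With jitter=0 the retries stay
    perfectly correlated and the network crashes over and over; random
    jitter de-correlates them and lets the backlog drain.
    """
    next_attempt = [0] * num_devices   # tick at which each device next tries
    done = [False] * num_devices
    crashes = 0
    for t in range(ticks):
        attempts = [i for i in range(num_devices)
                    if not done[i] and next_attempt[i] == t]
        if len(attempts) > capacity:
            crashes += 1
            for i in attempts:         # everyone backs off, then retries
                next_attempt[i] = t + 2 + (random.randint(0, jitter) if jitter else 0)
        else:
            for i in attempts:
                done[i] = True
    return crashes, sum(done)

print("no jitter:   crashes=%d, served=%d" % simulate(jitter=0))
print("with jitter: crashes=%d, served=%d" % simulate(jitter=20))
```

With zero jitter the toy net crashes 25 times in 50 ticks and serves nobody; with jitter it crashes once and then serves everyone.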

Adversarial AI pits one AI against another, trying things at random or making small modifications until a particular goal is achieved, such as a second AI accepting a doctored image as genuine. It would be possible, though I don’t believe it has been achieved yet, to use the technique to simulate a wide range of correlated traffic situations, to see which ones achieve network resonance or overload, or trigger particular desired responses from network management or control systems, via interactions with the network and its protocols, with commonly resident apps on mobile devices, or with computer operating systems.
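
A sketch of what such a search might look like, assuming nothing more than a toy model of an adaptive bandwidth allocator with a one-tick reaction lag; the ‘adversary’ simply tries random traffic patterns and keeps whichever defeats the allocator most often. Real protocols are vastly more complex, but the trial-and-error principle is the same:

```python
import random

def overload_fraction(pattern, ticks=200):
    """Toy adaptive bandwidth allocator with a one-tick reaction lag.

    Capacity is steered towards a smoothed estimate of recent demand.
    Smooth traffic is tracked easily, but traffic oscillating at the
    control loop's own reaction period always arrives while capacity is
    low - a crude stand-in for protocol resonance. Returns the fraction
    of ticks spent in overload.
    """
    cap, overloads = 5.0, 0
    for t in range(ticks):
        load = pattern[t % len(pattern)]
        if load > cap + 1:            # small headroom before overload
            overloads += 1
        cap = 0.2 * cap + 0.8 * load  # allocator tracks recent demand
    return overloads / ticks

# Adversarial search: try random on/off traffic patterns and keep the
# most damaging one found.
best, best_score = None, -1.0
for _ in range(2000):
    p = [random.choice([0, 10]) for _ in range(8)]
    score = overload_fraction(p)
    if score > best_score:
        best, best_score = p, score

print("steady load of 5:    overload fraction", overload_fraction([5]))
print("worst pattern found:", best, "overload fraction", best_score)
```

The steady load never overloads, while the search reliably converges on an alternating burst/silence pattern, the same average traffic as the steady case, that overloads the allocator on roughly half of all ticks.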

Activists and researchers are already well aware that adversarial AI can be used to find vulnerabilities in face recognition systems and thereby prevent recognition, or to deceive autonomous car AI into seeing fantasy objects or not seeing real ones. As Noel Sharkey, the robotics expert, has been tweeting today, it will be possible to use adversarial AI to corrupt recognition systems used by killer drones, potentially causing them to attack their controllers or innocents instead of their intended targets. I have to agree with him. But linking that corruption to the whole extended field of correlated traffic attacks greatly extends the range of mechanisms that can be used. It will be possible to exploit highly obscured interactions between network physical architecture, protocols and operating systems, network management, app interactions, and the entire sensor/IoT ecosystem, as well as software and AI systems using it. It is impossible to check all possible interactions, so no absolute defence is possible, but adversarial AI with enough compute power could randomly explore across these multiple dimensions, stumble across regions of vulnerability and drill down until grand vulnerabilities are found.
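
The recognition half of this can be shown with a deliberately trivial stand-in: a linear ‘recognizer’ whose internals the attacker never sees, only a confidence score (which many real APIs expose). Small random tweaks are kept whenever they lower confidence, until the verdict flips. This is a sketch of the principle only; real targets are deep networks and real attacks are far more refined:

```python
import random

WEIGHTS = [0.8, -0.4, 0.5, 0.3]            # hidden from the attacker

def confidence(x):
    """The black-box recognizer: positive score means 'target recognized'."""
    return sum(w * v for w, v in zip(WEIGHTS, x))

x = [1.0, 0.2, 0.6, 0.1]                    # a sample the recognizer accepts
queries = 0
while confidence(x) > 0:
    i = random.randrange(len(x))
    tweak = random.uniform(-0.02, 0.02)     # keep each change tiny
    trial = x[:i] + [x[i] + tweak] + x[i+1:]
    if confidence(trial) < confidence(x):   # keep tweaks that help
        x = trial
    queries += 1

print(f"recognition defeated after {queries} queries: {[round(v, 3) for v in x]}")
```

A few hundred queries of pure trial and error suffice here; against real systems the same loop just needs far more compute.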

This could further be linked to apps used as highly invisible Trojans, offering high attractiveness to users with no apparent side effects, quietly gathering data to help identify potential targets, and simply waiting for a particular situation or command before signalling to the attacking system.

A future activist or terrorist group or rogue state could use such tools to make a multidimensional attack. It could initiate an attack, using its own apps to identify and locate targets, control large swarms of killer drones or robots to attack them, simultaneously executing a cyberattack that knocks out selected parts of the network, crashing or killing computers and infrastructure. The vast bulk of this could be developed, tested and refined offline, using simulation and adversarial AI approaches to discover vulnerabilities and optimise exploits.

There is already debate about killer drones, mainly whether we should permit them and in what circumstances, but activists and rogue states won’t care about rules. Millions of engineers are technically able to build such things and some are not on your side. It is reasonable to expect that freely available AI tools will be used in such ways, using their intelligence to design, refine, initiate and control attacks using killer drones, robots and self-driving cars to harm us, while corrupting systems and infrastructure that protect us.

Worrying, especially since the capability is arriving just as everyone is starting to consider civil war.

 

 

Spiders in Space

A while back I read an interesting article about how small spiders get into the air to disperse, even when there is no wind:

Spiders go ballooning on electric fields: https://phys.org/news/2018-07-spiders-ballooning-electric-fields.html

If you don’t want to read it, the key point is that they use the electric fields in the air to provide enough force to drag them into the air. It gave me an idea. Why not use that same technique to get into space?

There is electric air potential right up to the very top of the atmosphere, but electric fields permeate space too. The field only provides a weak force, but it is enough to lift a 25mg spider using the electrostatic force on a few threads from its spinnerets.

25mg isn’t very heavy, but then the threads are only designed to lift the spider. Longer threads could generate higher forces, and lots of longer threads working together could generate significant forces. I’m not thinking of using this to launch space ships though. All I want for this purpose is to lift a few grams and that sounds feasible.
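
The arithmetic behind ‘that sounds feasible’, using the 25mg figure from the article and a hypothetical few-gram payload (the linear scaling is my assumption, not a demonstrated result):

```python
g = 9.81                    # m/s^2, gravitational acceleration
spider = 25e-6              # kg: the ~25 mg ballooning spider
payload = 5e-3              # kg: hypothetical few-gram cyber-spider

thread_force = spider * g   # force a few silk threads demonstrably supply
print(f"force on a 25 mg spider: {thread_force * 1e6:.0f} uN")
print(f"scaling factor needed for a 5 g payload: {payload / spider:.0f}x")
```

Roughly 245 microNewtons lifts the spider, so a 5g cyber-spider needs about 200 times that force, hence the interest in lots of longer threads working together.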

If we can arrange for a synthetic ‘cyber-spider’ to eject long graphene threads in the right directions, and to wind them back in when appropriate, our cyber-spider could harness these electric forces to crawl slowly into space, and then maintain altitude. It won’t need to stay in exactly the same place, but could simply use the changing fields and forces to stay within a reasonably small region. It won’t have used any fuel or rockets to get there or stay there, and now that it is in space, even if it isn’t very high, it could be quite useful, even though it weighs only a few grams.

Suppose our invisibly small cyber-spider sits near the orbit of a particular piece of space junk. The space junk moves fast, and may well be much larger than our spider in terms of mass, but if a few threads of graphene silk were to be in its path, our spider could effectively ensnare it, causing an immediate drop in speed due to Newtonian sharing of momentum (the spider has to be accelerated to the same speed as the junk from a standstill, so even though it is much lighter, that would still cause a significant drop in junk speed), and then use its threads as a mechanism for electromagnetic drag, causing the junk to slowly lose more speed and fall out of orbit. That might compete well as a cheap mechanism for cleaning up space junk.

Some organic spiders can kill a man with a single bite, and space spiders could do much the same, albeit via a somewhat different process. Instead of junk, our spider could meander into collision course with an astronaut doing a space walk. A few grams isn’t much, but a stationary cyber-spider placed in the way of a rapidly moving human would have much the same effect as a very high speed rifle shot.
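
Rough numbers, assuming a 5g cyber-spider and a 1kg piece of junk at typical low Earth orbit speed, suggest the rifle comparison is if anything an understatement (a high-powered rifle round carries a few kJ):

```python
v_orbit  = 7800.0     # m/s: typical low-Earth-orbit speed
m_junk   = 1.0        # kg: assumed piece of space junk
m_spider = 0.005      # kg: few-gram cyber-spider, initially at rest

# Inelastic capture: momentum is conserved when the threads ensnare the junk.
v_after = m_junk * v_orbit / (m_junk + m_spider)
print(f"speed lost to capture alone: {v_orbit - v_after:.1f} m/s")

# Head-on impact instead: kinetic energy delivered at closing speed.
energy = 0.5 * m_spider * v_orbit ** 2
print(f"impact energy of 5 g at orbital speed: {energy / 1000:.0f} kJ")
```

Capturing a 1kg piece of junk costs it nearly 40m/s immediately, before any electromagnetic drag, and a 5g impactor at orbital speed delivers around 150kJ.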

The target could just as easily be a satellite. The impact location could be picked to hit a particular part of the satellite to do most damage, or to create many fragments, and if enough fragments are created – well, we’ve all watched Gravity and know what high speed fragments of destroyed satellites can do.

The spider doesn’t even need to get itself into a precise position. If it has many threads going off in various directions, it can quickly withdraw some of them to create a Newtonian reaction that moves its center of mass rapidly into a path. It might sit many meters away from the desired impact position, waiting until the last second to jump in front of the astronaut, satellite or space junk.

What concerns me with this is that the weapon potential lends itself to low-budget garden-shed outfits, even lone terrorists. It wouldn’t need rockets, or massively expensive equipment. It doesn’t need rapid deployment either, since, being invisible, it could migrate to its required location over days, weeks or months. A large number of them could be invisibly deployed from a back garden, ready for use at any time, waiting for the command before simultaneously wiping out hundreds of satellites. It only needs a very small amount of IT attached to some sort of filament spinneret. A few years ago I worked out how to spin graphene filaments at 100m/s:

Spiderman-style silk thrower

If I can do it, others can too, and there are probably many ways to do this other than mine.

If you aren’t Spiderman, and can accept lower specs, you could make a basic graphene silk thrower and associated IT that fits in the few-gram weight budget.

There are many ways to cause havoc in space. Spiders have been sci-fi horror material for decades. Soon space spiders could be quite real.

 

 

Why superhumans are inevitable, and what else comes in the box

Do we have any real choice in the matter of making superhumans? 20 years ago, I estimated 2005 as the point of no return, and nothing since then has changed my mind on that date. By my reckoning, we are already inevitably committed to designer babies, ebaybies, super-soldiers and super-smart autonomous weapons, direct brain-machine links, electronic immortality, new human races, population explosion, inter-species conflicts and wars with massively powerful weaponry, superhuman conscious AI, smart bacteria, and the only real control we have is relatively minor adjustments on timings. As I was discussing yesterday, the technology potential for this is vast and very exciting, nothing less than a genuine techno-utopia if we use the technologies wisely, but optimum potential doesn’t automatically become reality, and achieving a good outcome is unlikely if many barriers are put in its way.

In my estimation, we have already started the countdown to this group of interconnected technologies – we will very likely get all of them, and we must get ready for the decisions and impacts ahead. At the moment, our society is a small child about to open its super-high-tech xmas presents while fighting with its siblings. Those presents will give phenomenal power far beyond the comprehension of the child or its emotional maturity to equip it to deal with the decisions safely. Our leaders have already squandered decades of valuable preparation time by ignoring the big issues to focus on trivial ones. It is not too late to achieve a good ending, but it won’t happen by accident and we do need to make preparations to avoid pretty big problems.

Both hard and soft warfare – the sword and the pen – already use rapidly advancing AI, and the problems are already running ahead of what their owners intended.

Facebook, Twitter, Instagram and other media giants all have lots of smart people and presumably they mean well, but if so, they have certainly been naive. Maybe they hoped to eliminate loneliness, inequality and poverty and create a loving interconnected global society with global peace, but instead they created fake news, social division, conflict and election interference. More likely they didn’t intend either outcome; they just wanted to make money, and that took priority over due care and attention.

Miniaturised swarming smart-drones are already the subject of a new arms race that will deliver almost un-killable machine adversaries by 2050. Separately, AI is in other arms races to make super-smart AI and super-smart soldiers. This is key to the 2005 point of no return. It was around 2005 that we reached the level of technology where future AI development all the way to superhuman machine consciousness could be done by individuals, mad scientists or rogue states, even if major powers had banned it. Before 2005, there probably wasn’t quite enough knowledge on the net to do that. In 2018, lots of agencies have already achieved superiority to humans in niche areas, and other niches will succumb one by one until the whole field of human capability is covered. The first machines to behave in ways not fully understood by humans arrived in the early 1990s; in 2018, neural nets already make lots of decisions at least partly obscured to humans.

This AI development trend will take us to superhuman AI, which will be able to accelerate development of its own descendants to vastly superhuman AI, fully conscious, with emotions and its own agendas. Humans will then need protection against being wiped out by superhuman AI. There are only three ways we could do that: redesign the brain biologically to be far smarter, which is essentially impossible in the time-frame; design ways to link our brains to machines, so that we have direct access to the same intelligence as the AIs, so a gulf doesn’t appear and we can remain relatively safe; or pray for super-smart aliens to come to our help, not the best prospect.

Therefore we will have no choice but to make direct brain links to super-smart AI. Otherwise we risk extinction. It is that simple. We have some idea how to do that – nanotech devices inside the brain linking to each and every synapse that can relay electrical signals either way, a difficult but not impossible engineering problem. Best guesses for the time-frame fall in the 2045-2050 range for a fully working link that not only relays signals between your organic brain and an IT replica, but by doing so essentially makes external IT just another part of your brain. That conveys some of the other technology gifts: electronic immortality, new varieties of humans, and smart bacteria (which will be created during the development path to this link), along with human-variant population explosion, especially in cyberspace, with androids as their physical front end, and the inevitable inter-species conflicts over resources and space. Trillions of AI and human-like minds in cyberspace that want to do things in the real world cannot be assumed to be willingly confined just to protect the interests of what they will think of as far lesser species.

Super-smart AI, or humans with almost total capability to design whatever synthetic biology is needed to achieve any biological feature, will create genetic listings for infinite potential offspring, simulate them, give some of them cyberspace lives, assemble actual embryos for some of them and bring us designer babies. Already in 2018, you can pay to get a DNA listing, and blend it in any way you want with the listing of anyone else. It’s already possible to make DNA listings for potential humans and sell them on ebay, hence the term ebaybies. That is perfectly legal, still, but I’ve been writing and lecturing about them since 2004. Today they would just be listings, but we’ll one day have the tech to simulate them, choose ones we like and make them real, even some that were sold as celebrity collector items on ebay. Not only is it too late to start regulating this kind of tech, our leaders aren’t even thinking about it yet.

These technologies are all linked intricately, and their foundations are already in place, with much of the building on those foundations under way. We can’t stop any of these things from happening, they will all come in the same basket. Our leaders are becoming aware of the potential and the potential dangers of the AI positive feedback loop, but at least 15 years too late to do much about it. They have been warned repeatedly and loudly but have focused instead on the minor politics of the day that voters are aware of. The fundamental nature of politics is unlikely to change substantially, so even efforts to slow down the pace of development or to limit areas of impact are likely to be always too little too late. At best, we will be able to slow runaway AI development enough to allow direct brain links to protect against extinction scenarios. But we will not be able to stop it now.

Given inevitability, it’s worth questioning whether there is even any point in trying. Why not just enjoy the ride? Well, the brakes might be broken, but if we can steer the bus expertly enough, it could be exciting and we could come out of it smelling of roses. The weak link is certainly the risk of super-smart AI, whether AI versus humans or countries using super-smart AI to fight fiercely for world domination. That risk is alleviated by direct brain linkage, and I’d strongly argue it necessitates it, but that brings the other technologies. Even if we decide not to develop it, others will, so one way or another, all these techs will arrive, and our late-century future will have this full suite of techs, plus many others of course.

We need as a matter of extreme urgency to fix these silly social media squabbles and over-reactions that are pulling society apart. If we have groups hating each other with access to extremely advanced technology, that can only mean trouble. Tolerance is broken, sanctimony rules, the Inquisition is in progress. We have been offered techno-utopia, but current signs are that most people think techno-hell looks more appetizing and it is their free choice.

AIs of a feather flocking together to create global instability

Hawking and Musk have created a lot of media impact with their warnings about AI, so although terminator scenarios resulting from machine consciousness have been discussed, as have more mundane uses of non-conscious autonomous weapon systems, it’s worth noting that I haven’t yet heard them mention one major category of risks from AI – emergence. AI risks have been discussed frequently since the 1970s, and in the 1990s a lot of work was done in the AI community on emergence. Complex emergent patterns of behavior often result from interactions between entities driven by simple algorithms. Genetic algorithms were demonstrated to produce evolution, and simple neighbor-interaction rules were derived to illustrate the flocking behaviors that make lovely screen saver effects. Cellular automata were played with. In BT we invented ways of self-organizing networks and FPGAs, played with mechanisms that could be used for evolution and consciousness, and demonstrated managing networks via ANTs – autonomous network telephers – smart packets that would run up and down wires sorting things out all by themselves. In 1987 I discovered a whole class of ways of bringing down networks via network resonance and information waves, part of the much larger class of correlated traffic attacks – still unexploited by hackers apart from simple DoS attacks. These ideas have slowly evolved since, and some have made it into industry or hacker toolkits, but we don’t seem to be joining the dots as far as risks go.
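
As a reminder of how little machinery emergence needs, here is a complete elementary cellular automaton – Wolfram’s Rule 110, which is even Turing-complete – in a dozen lines. Each cell follows one trivial rule involving only itself and its two neighbors, yet the global pattern is endlessly intricate:

```python
# Rule 110: each cell's next state is looked up from the 3-bit neighborhood
# (left, self, right). One trivial local rule, complex global behavior.
RULE = 110
cells = [0] * 79 + [1]        # start with a single live cell

for _ in range(30):
    print("".join("#" if c else "." for c in cells))
    cells = [(RULE >> (4 * l + 2 * c + r)) & 1
             for l, c, r in zip([0] + cells[:-1], cells, cells[1:] + [0])]
```

Nothing in those lines hints at the structures that scroll past when it runs, and that is exactly the problem with emergence in large systems of interacting algorithms: the code tells you almost nothing about the collective outcome.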

I read an amusing article this morning by an ex-motoring-editor who was declined insurance because the AI systems used by insurance companies had labelled him as high risk, perhaps because he associated with people like Clarkson. Actually, he had no idea why; that was just his broker’s theory of how it might have happened. It’s a good article, well written, and covers quite a few of the dangers of allowing computers to take control.

http://www.dailymail.co.uk/sciencetech/article-5310031/Evidence-robots-acquiring-racial-class-prejudices.html

The article suggested how AIs in different companies might all come to similar conclusions about people or places or trends or patterns in a nice tidy positive feedback loop. That’s exactly the sort of thing that can drive information waves, which I demonstrated in 1987 can bring down an entire network in less than 3 milliseconds, in such a way that it would continue to crash many times when restarted. That isn’t intended by the algorithms, which individually ought to make good decisions, but which, interacting with one another, create the emergent phenomenon. Automated dealing systems are already pretty well understood in this regard, and mechanisms exist to prevent frequent stock market collapses, but that is only one specific type of behavior in one industry that is protected. There do not seem to be any industry-wide mechanisms to prevent the rest of this infinite class of problems from affecting any or all of the rest, simultaneously.

As we create ever more deep learning neural networks, that essentially teach themselves from huge data pools, human understanding of their ‘mindsets’ decreases. They make decisions using algorithms that are understood at a code level, but the massive matrix of derived knowledge they create from all the data they receive becomes highly opaque. Often, even usually, nobody quite knows how a decision is made. That’s bad enough in a standalone system, but when many such systems are connected, produced and owned and run by diverse companies with diverse thinking, the scope for destructive forms of emergence increases geometrically.

One result could be gridlock. Systems fed with a single new piece of data could crash. My 3 millisecond result from 1987 would still stand, since network latency is the prime limiter. The first AI receives the data, alters its mindset accordingly, processes it, makes a decision and interacts with a second AI. This second one might have a different ‘prejudice’, so makes its own decision based on different criteria, and refuses to respond the way intended. A third one looks at the second’s decision, takes that as evidence that there might be an issue, and with its risk-averse mindset also refuses to act, and that inaction spreads through the entire network in milliseconds. Since the first AI thinks the data is all fine and it should have gone ahead, it now interprets the inaction of the others as evidence that that type of data is somehow ‘wrong’, so itself refuses to process any more of that type, whether from its own operators or other parts of the system. It essentially adds its own outputs to the bad feeling, and the entire system falls into sulk mode. As one part of the infrastructure starts to shut down, it infects other connected parts, and our entire IT – entire global infrastructure – could fall into sulk mode. Since nobody knows how it all works, or what has caused the shutdown, it might be extremely hard to recover.
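
The sulk-mode cascade is easy to caricature in code. In this toy sketch (all parameters invented), each of 100 systems follows one locally sensible rule – refuse to act if anything you monitor has refused – and a single arbitrary refusal flips nearly the whole network:

```python
import random

random.seed(4)
N = 100
# Each system watches 5 randomly chosen others.
watches = {i: random.sample([j for j in range(N) if j != i], 5)
           for i in range(N)}
refusing = {0}                 # one AI arbitrarily rejects a data item

changed = True
while changed:                 # propagate caution until nothing changes
    changed = False
    for i in range(N):
        if i not in refusing and any(j in refusing for j in watches[i]):
            refusing.add(i)    # risk-aversion: if they refused, so do I
            changed = True

print(f"{len(refusing)} of {N} systems now refusing to act")
```

Every individual rule is defensible; the collective result is gridlock, and nothing in any one system’s logs explains why.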

Another possible result is a direct information wave, almost certainly triggered by a piece of fake news. Imagine our IT world in 5 years’ time, with all these super-smart AIs super-connected. A piece of fake news says a nuke has just been launched somewhere. Stocks will obviously decline, whatever the circumstances, so as the news spreads, everyone’s AIs will take it on themselves to start selling shares before the inevitable collapse, triggering a collapse – except it won’t, because the markets won’t let that happen. BUT… the wave does spread, and all those individual AIs want to dispose of those shares, or at least find out what’s happening, so they all start sending messages to one another, exchanging data, trying to find out what’s going on. That’s the information wave. They can’t sell shares or find out, because the network is going into overload, so they try even harder and force it into severe overload. So it falls over. When it comes back online, they all try again, crashing it again, and so on.

Another potential result is smartass AI. There is always some prat somewhere who sees an opportunity to take advantage and ruins it for everyone else by doing something like exploiting a small loophole in the law – or in this case, most likely, a prejudice our smartass AI has discovered in some other AI that means it can be taken advantage of by doing x, y or z. Since nobody quite knows how any of their AIs are making their decisions, because their mindsets are too big and too complex, it will be very hard to identify what is going on. Some really unusual behavior is corrupting the system because some AI is going rogue somewhere somehow, but which one, where, how?

That one brings us back to fake news. That will very soon infect AI systems with their own varieties of fake news. Complex networks of AIs will have many of the same problems we are seeing in human social networks. An AI could become a troll just the same as a human, deliberately winding others up to generate attention or drive a change of some parameter – any parameter – in its own favour. Activist AIs will happen due to people making them to push human activist causes, but they will also arise all by themselves. Their analysis of the system will sometimes show them that a good way to get a good result is to cause problems elsewhere.

Then there’s climate change, weather, storms, tsunamis. I don’t mean real ones; I mean the system-wide result of tiny interactions of tiny waves and currents of data and knowledge in neural nets. Tiny effects in one small part of a system can interact in unforeseen ways with other parts of other systems nearby, creating maybe a breeze, which interacts with breezes in nearby regions to create hurricanes. I think that’s a reasonable analogy. Chaos applies to neural net societies just as it does to climate, and the equivalents of 50-year waves will cause equivalent havoc in IT.

I won’t go on with more examples; long blogs are awful to read. None of these requires any self-awareness, sentience or consciousness, call it what you will. All of them can easily happen through simple interactions of fairly trivial AI deep learning nets. The level of interconnection already sounds like it may be becoming vulnerable to such emergence effects. Soon it definitely will be. Musk and Hawking have at least joined the party, and they’ll think more and more deeply in coming months. Zuckerberg apparently doesn’t believe in AI threats but now accepts the problems social media is causing. Sorry Zuck, but the kind of AI your company is messing with will also be subject to its own kinds of social media issues, not just in its trivial decisions on what to post or block, but in actual inter-AI socializing issues. It might not try to eliminate humanity, but if it brings all of our IT to a halt and prevents rapid recovery, we’re still screwed.

 

2018 outlook: fragile

Futurists often consider wild cards – events that could happen, and would undoubtedly have high impacts if they do, but have either low certainty or low predictability of timing. 2018 comes with a larger basket of wildcards than we have seen for a long time. As well as wildcards, we are also seeing the intersection of several ongoing trends that are simultaneously reaching peaks, resulting in socio-political 100-year-waves. If I had to summarise 2018 in a single word, I’d pick ‘fragile’, ‘volatile’ and ‘combustible’ as my shortlist.

Some of these are very much in all our minds, such as possible nuclear war with North Korea, imminent collapse of bitcoin, another banking collapse, a building threat of cyberwar, cyberterrorism or bioterrorism, rogue AI or emergence issues, high instability in the Middle East, rising inter-generational conflict, resurgence of communism and decline of capitalism among the young, increasing conflicts within LGBTQ and feminist communities, collapse of the EU under combined pressures from many angles: economic stresses, unpredictable Brexit outcomes, increasing racial tensions resulting from immigration, severe polarization of left and right with the rise of extreme parties at both ends. All of these trends have strong tribal characteristics, and social media is the perfect platform for tribalism to grow and flourish.

Adding fuel to the building but still unlit bonfire are increasing tensions between the West and Russia, China and the Middle East. Background natural wildcards of major epidemics, asteroid strikes, solar storms, megavolcanoes, megatsunamis and ‘the big one’ earthquakes are still there waiting in the wings.

If all this wasn’t enough, society has never been less able to deal with problems. Our ‘snowflake’ generation can barely cope with a pea under the mattress without falling apart or throwing tantrums, so how we will cope as a society if anything serious happens such as a war or natural catastrophe is anyone’s guess. 1984-style social interaction doesn’t help.

If that still isn’t enough, we’re apparently running a little short on Gandhis, Mandelas, Lincolns and Churchills right now too. Juncker, Trump, Merkel and May are at the far end of the same scale on ability to inspire and bring everyone together.

Depressing stuff, but there are plenty of good things coming too. Augmented reality, more and better AI, voice interaction, space development, cryptocurrency development, better IoT, fantastic new materials, self-driving cars and ultra-high speed transport, robotics progress, physical and mental health breakthroughs, environmental stewardship improvements, and climate change moving to the back burner thanks to coming solar minimum.

If we are very lucky, none of the bad things will happen this year and will wait a while longer, but many of the good things will come along on time or early. If.

Yep, fragile it is.

 

AI Activism Part 2: The libel fields

This follows directly from my previous blog on AI activism, but you can read that later if you haven’t already. Order doesn’t matter.

https://timeguide.wordpress.com/2017/05/29/ai-and-activism-a-terminator-sized-threat-targeting-you-soon/

Older readers will remember an emotionally powerful 1984 film called The Killing Fields, set against the backdrop of the Khmer Rouge’s activity in Cambodia, aka the Communist Party of Kampuchea. Under Pol Pot, the Cambodian genocide of 2 to 3 million people was part of a social engineering policy of de-urbanization. People were tortured and murdered (some in the ‘killing fields’ near Phnom Penh) for having connections with the former government or foreign governments, for being the wrong race, for being ‘economic saboteurs’, or simply for being professionals or intellectuals.

You’re reading this, therefore you fit in at least the last of these groups and probably others, depending on who’s making the lists. Most people don’t read blogs but you do. Sorry, but that makes you a target.

As our social divide increases at an accelerating speed throughout the West, so the choice of weapons is moving from sticks and stones or demonstrations towards social media character assassination, boycotts and forced dismissals.

My last blog showed how various technology trends are coming together to make it easier and faster to destroy someone’s life and reputation. Some of that stuff I was writing about 20 years ago, such as virtual communities lending hardware to cyber-warfare campaigns, other bits have only really become apparent more recently, such as the deliberate use of AI to track personality traits. This is, as I wrote, a lethal combination. I left a couple of threads untied though.

Today, the big AI tools are owned by the big IT companies. They also own the big server farms on which the power to run the AI exists. The first thread I neglected to mention is that Google have made their AI an open source activity. There are lots of good things about that, but for the purposes of this blog, it means that the AI tools required for AI activism will also be largely public, and pressure groups and activists can use them as a start-point for any more advanced tools they want to make, or just use them off-the-shelf.

Secondly, it is fairly easy to link computers together to provide an aggregated computing platform. The SETI project was the first major proof of concept of that, ages ago. Today, we take peer-to-peer networks for granted. When the activist group is ‘the liberal left’ or ‘the far right’, that adds up to a large number of machines, so the power available for any campaign is notionally very large. Harnessing it doesn’t need IT skill from contributors. All they’d need to do is click a box on an email or tweet asking for their support for a campaign.

In our new ‘post-fact’, fake news era, all sides are willing and able to use social media and the infamous MSM to damage the other side. Fakes are becoming better. The latest AI can imitate your voice, and a chat-bot can decide what it should say after other AI has recognized what someone has said and analysed the opportunities to ruin your relationship with them by spoofing you. Today, that might not be quite credible. Give it a couple more years and you won’t be able to tell. Next generation AI will be able to spoof your face doing the talking too.

AI can (and will) evolve. Deep learning researchers have been looking closely at how the brain thinks, at how to make neural networks learn better and think better, and at how to design the next generation to be even smarter than humans could have designed it.

As my friend and robotic psychiatrist Joanne Pransky commented after my first piece, “It seems to me that the real challenge of AI is the human users, their ethics and morals (Their ‘HOS’ – Human Operating System).” Quite! Each group will indoctrinate their AI to believe their ethics and morals are right, and that the other lot are barbarians. Even evolutionary AI is not immune to religious or ideological bias as it evolves. Superhuman AI will be superhuman, but might believe even more strongly in a cause than humans do. You’d better hope the best AI is on your side.

AI can put articles, blogs and tweets out there, pretending to come from you or your friends, colleagues or contacts. It can generate plausible-sounding stories of what you’ve done or said, and spoof emails in fake accounts using your ID to ‘prove’ them.

So we’ll likely see activist AI armies set against each other, running on peer-to-peer processing clouds, encrypted to hell and back to prevent dismantling. We’ve all thought about cyber-warfare, but we usually only think about viruses or keystroke recorders, or more lately, ransomware. These will still be used as small weapons in future cyber-warfare, but while losing files or a few bucks from an account is a real nuisance, losing your reputation is far worse: with it smeared all over the web, with all your contacts being told what you’ve done or said and shown all the evidence, there is absolutely no way you could possibly explain your way convincingly out of every one of those instances. Mud does stick, and if you throw tons of it, even if most is wiped off, much will remain. Trust is everything, and enough doubt cast will eventually erode it.

So, we’ve seen many times through history the damage people are willing to do to each other in pursuit of their ideology. The Khmer Rouge had their killing fields. As the political divide increases and battles become fiercer, the next 10 years will give us The Libel Fields.

You are an intellectual. You are one of the targets.

Oh dear!

 

AI and activism, a Terminator-sized threat targeting you soon

You should be familiar with the Terminator scenario. If you aren’t then you should watch one of the Terminator series of films because you really should be aware of it. But there is another issue related to AI that is arguably as dangerous as the Terminator scenario, far more likely to occur and is a threat in the near term. What’s even more dangerous is that in spite of that, I’ve never read anything about it anywhere yet. It seems to have flown under our collective radar and is already close.

In short, my concern is that AI is likely to become a heavily armed Big Brother. It only requires a few components to come together that are already well in progress. Read this, and if you aren’t scared yet, read it again until you understand it 🙂

Already, social media companies are experimenting with using AI to identify and delete ‘hate’ speech. Various governments have asked them to do this, and since they also get frequent criticism in the media because some hate speech still exists on their platforms, it seems quite reasonable for them to try to control it. AI clearly offers potential to offset the huge numbers of humans otherwise needed to do the task.

Meanwhile, AI is already used very extensively by the same companies to build personal profiles on each of us, mainly for advertising purposes. These profiles are already alarmingly comprehensive, and increasingly capable of cross-linking between our activities across multiple platforms and devices. Latest efforts by Google attempt to link eventual purchases to clicks on ads. It will be just as easy to use similar AI to link our physical movements and activities and future social connections and communications to all such previous real world or networked activity. (Update: Intel intend their self-driving car technology to be part of a mass surveillance net, again, for all the right reasons: http://www.dailymail.co.uk/sciencetech/article-4564480/Self-driving-cars-double-security-cameras.html)

Although necessarily secretive about their activities, government also wants personal profiles on its citizens, always justified by crime and terrorism control. If they can’t do this directly, they can do it via legislation and acquisition of social media or ISP data.

Meanwhile, other experiences with AI chat-bots learning to mimic human behaviors have shown how easily AI can be gamed by human activists, hijacking or biasing learning phases for their own agendas. Chat-bots themselves have become ubiquitous on social media and are often difficult to distinguish from humans. Meanwhile, social media is becoming more and more important throughout everyday life, with provably large impacts in political campaigning and throughout all sorts of activism.

Meanwhile, some companies have already started using social media monitoring to police their own staff, in recruitment, during employment, and sometimes in dismissal or other disciplinary action. Other companies have similarly started monitoring social media activity of people making comments about them or their staff. Some claim to do so only to protect their own staff from online abuse, but there are blurred boundaries between abuse, fair criticism, political difference or simple everyday opinion or banter.

Meanwhile, activists increasingly use social media to force companies to sack a member of staff they disapprove of, or drop a client or supplier.

Meanwhile, end to end encryption technology is ubiquitous. Malware creation tools are easily available.

Meanwhile, successful hacks into large company databases become more and more common.

Linking these various elements of progress together, how long will it be before activists are able to develop standalone AI entities and heavily encrypt them before letting them loose on the net? Not long at all I think.  These AIs would search and police social media, spotting people who conflict with the activist agenda. Occasional hacks of corporate databases will provide names, personal details, contacts. Even without hacks, analysis of publicly available data going back years of everyone’s tweets and other social media entries will provide the lists of people who have ever done or said anything the activists disapprove of.

When identified, they would automatically activate armies of chat-bots, fake news engines and automated email campaigns against them, with coordinated malware attacks directly on the person and indirect attacks by communicating with employers, friends, contacts, government agencies, customers and suppliers to do as much damage as possible to the interests of that person.

Just look at the everyday news already about alleged hacks and activities during elections and referendums by other regimes, hackers or pressure groups. Scale that up and realize that the cost of running advanced AI is negligible.

With the very many activist groups around, many driven with extremist zeal, very many people will find themselves in the sights of one or more activist groups. AI will be able to monitor everyone, all the time.  AI will be able to target each of them at the same time to destroy each of their lives, anonymously, highly encrypted, hidden, roaming from server to server to avoid detection and annihilation, once released, impossible to retrieve. The ultimate activist weapon, that carries on the fight even if the activist is locked away.

We know for certain the depths and extent of activism, the huge polarization of society, the increasingly fierce conflict between left and right, between sexes, races, ideologies.

We know about all the nice things AI will give us with cures for cancer, better search engines, automation and economic boom. But actually, will the real future of AI be harnessed to activism? Will deliberate destruction of people’s everyday lives via AI be a real problem that is almost as dangerous as Terminator, but far more feasible and achievable far earlier?

AI presents a new route to attack corporate value

As AI increases in corporate, social, economic and political importance, it is becoming a big target for activists, and I think there are too many vulnerabilities. I think we should be seeing a lot more articles than we are about what developers are doing to guard against deliberate misdirection or corruption, and there is already far too much enthusiasm for making AI open source, thereby giving mischief-makers the means to identify weaknesses.

I’ve written hundreds of times about AI and believe it will be a benefit to humanity if we develop it carefully. Current AI systems are not vulnerable to the terminator scenario, so we don’t have to worry about that happening yet. AI can’t yet go rogue and decide to wipe out humans by itself, though future AI could, so we’ll soon need to take care with every step.

AI can be used in multiple ways by humans to attack systems.

First and most obvious, it can be used to enhance malware such as trojans or viruses, or to optimize denial-of-service attacks. AI-enhanced security systems already battle against adaptive malware, and AI can probe systems in complex ways to find vulnerabilities that would take longer to discover via manual inspection. As well as attacking operating systems, AI can also attack other AI by providing inputs that bias its learning and decision-making – giving the AI ‘fake news’, to use current terminology. We don’t know the full extent of secret military AI.

Computer malware will grow in scope to address AI systems to undermine corporate value or political campaigns.

A new route to attacking corporate AI, and hence the value in the company that depends on it, is already starting to appear though. As companies such as Google try out AI-driven cars and others try out pavement/sidewalk delivery drones, mischievous people are already developing devious ways to misdirect or confuse them. Kids will soon have such activity as a hobby. Deliberate deception of AI is much easier when people know how it works, and although it’s nice for AI companies to put their AI stuff out there into the open source markets for others to use to build theirs, that does rather steer future systems towards a mono-culture of vulnerability types. A trick that works against one future AI in one industry might well be adaptable to another use in another industry with a little devious imagination. Let’s take an example.

If someone builds a robot to deliberately step in front of a self-driving car every time it starts moving again, that might bring traffic to a halt, but police could quickly confiscate the robot, and they are expensive, a strong deterrent even if the pranksters are hiding and can’t be found. Cardboard cutouts might be cheaper though, even ones with hinged arms to look a little more lifelike. A social media orchestrated campaign against a company using such cars might involve thousands of people across a country or city deliberately waiting until the worst time to step out into a road when one of their vehicles comes along, thereby creating a sort of denial of service attack with that company seen as the cause of massive inconvenience for everyone. Corporate value would obviously suffer, and it might not always be very easy to circumvent such campaigns.

Similarly, the wheeled delivery drones we’ve been told to expect delivering packages any time soon will also have cameras to allow them to avoid bumping into objects or little old ladies or other people, or cats or dogs or cardboard cutouts or carefully crafted miniature tank traps or diversions or small roadblocks that people and pets can easily step over but drones can’t, built by the local kids from a few twigs or cardboard from a design that has gone viral that day. A few campaigns like that, with the cold pizzas or missing packages that result, could severely damage corporate value.

AI behind websites might also be similarly defeated. An early experiment in making a Twitter chat-bot that learns how to tweet by itself was quickly encouraged by mischief-makers to start tweeting offensively. If people have some idea how an AI is making its decisions, they will attempt to corrupt or distort it to their own ends. If it is heavily reliant on open source AI, then many of its decision processes will be known well enough for activists to develop appropriate corruption tactics. It’s not too early to predict that the proposed AI-based attempts by Facebook and Twitter to identify and defeat ‘fake news’ will fall right into the hands of people already working out how to use them to smear opposition campaigns with such labels.

It will be a sort of arms race of course, but I don’t think we’re seeing enough about this in the media. There is a great deal of hype about the various AI capabilities, a lot of doom-mongering about job cuts (and a lot of reasonable warnings about job cuts too) but very little about the fight back against AI systems by attacking them on their own ground using their own weaknesses.

That looks to me awfully like there isn’t enough awareness of how easily they can be defeated by deliberate mischief or activism, and I expect to see some red faces and corporate account damage as a result.

PS

This article appeared yesterday that also talks about the bias I mentioned: https://techcrunch.com/2016/12/10/5-unexpected-sources-of-bias-in-artificial-intelligence/

Since I wrote this blog, I was asked via LinkedIn to clarify why I said that open source AI systems would carry more security risk. Here is my response:

I wasn’t intending to heap fuel on a dying debate (though since the current debate looks the same as in the early 1990s, it is dying slowly). I like and use open source too. I should have explained my reasoning better to facilitate open source checking: in regular (algorithmic) code, programming error rates should be similar, so increasing the number of people checking should cancel out the risk from more contributors, and there should be no a priori difference between open and closed source. However:

In deep learning, obscurity reappears via neural net weightings being less intuitive to humans. That provides a tempting hiding place.

AI foundations are vulnerable to group-think, where team members share similar world models. These prejudices will affect the nature of open source (OS) and closed source (CS) code alike and result in AI with inherent and subtle judgment biases, which will be less easy to spot than bugs and more visible to people with alternative world models. Those people are more likely to exist in an OS pool than a CS pool, and more likely to be opponents, so will not share their results.

Deep learning may show the equivalent of political leanings (or masculine and feminine traits). As well as encouraging group-think, that also distorts the distribution of biases, and therefore the cancelling out of errors can no longer be assumed.

Human factors in defeating security often work better than exploiting software bugs. Some of the deep learning AI is designed to mimic humans as well as possible in thinking and in interfacing. I suspect that might also make them more vulnerable to meta-human-factor attacks. Again, exposure to different and diverse cultures will show a non-uniform distribution of error/bias spotting/disclosure/exploitation.

Deep learning will become harder for humans to understand as it develops and becomes more machine dependent. That will amplify the above weaknesses. Think of optical illusions that greatly distort human perception and think of similar in advanced AI deep learning. Errors or biases that are discovered will become more valuable to an opponent since they are less likely to be spotted by others, increasing their black market exploitation risk.

I have not been a programmer for over 20 years and am no security expert so my reasoning may be defective, but at least now you know what my reasoning was and can therefore spot errors in it.

Interesting times

The US Presidential election was a tough choice between an awful candidate and a terrible one, but that is hardly new, is it? There was no good outcome on offer, no Gandhi or Mandela to choose, but you know what, life will go on, it’s not the end of the world.

The nation that elected Reagan and W will survive and prosper, WW3 has been postponed, as has 1984, the environment will benefit, some rogue states are very pissed off, US cultural decay has been slowed and the UK has just jumped past the EU in trade negotiations. A great many downtrodden people suddenly feel they have some hope and a great many sanctimonious egos have been pricked. The MSM and social media hysteria will carry on for months, but actually, it could have been a bit worse. Hillary could have won.

I don’t like Trump, he seems to me to be another egotistical buffoon with a double digit IQ. It’s not great that he will be in charge, but it wouldn’t have been great if Clinton had won either – she was no angel or genius and the best she had to offer was continued stagnation, division, sanctimony and decline. Trump can’t be a dictator though, and there will be plenty of smart people around him who understand the world far better than him and will advise him, while both houses will act as a secure defense against the worst ideas getting through. On the other hand, with a Republican majority in both houses, he will be able to push through those policies that do hold water. So there will be changes, but only changes that appeal to enough elected representatives, so panic isn’t justified, even if shock and terror are understandable in the circumstances.

Let’s take a glass half full view of the new situation, while acknowledging that there are a few bits of cork in the wine too.

Many people who don’t live on the coasts have felt disenfranchised by government in recent terms. In some of the states in between, nearly two thirds of people voted for someone they feel finally gives them hope. Hope is a powerful emotion; it can energize and reinvigorate people who have felt left out. Don’t underestimate the potential that brings for economic growth if harnessed well.

Sure, there are also those who have been terrified by media that have endlessly portrayed Trump as some sort of nouveau Hitler who will try to evict or oppress every black, Latino, Hispanic or Middle Eastern person. He is very likely to try to limit future economic migration and to put more checks on who enters from jihadic regions, but it is plain silly to expect that he would be able to go further than that even if he wanted to, and there is actually no evidence that he even wants to. Minorities will become far less scared as they discover that their lives will carry on much as before, and nobody tries to make them leave or lock them up. I doubt that any policies will actually target minorities negatively except to restrict immigration to those who bring more benefits than threats.

Russia is happy that he has won. That is a good thing. The cold war just became less cold, the Satan missiles will be stood down, the chance of a nuclear war just dropped significantly and all our life expectancies just increased. Russians will feel a lot less scared and Putin will be less of a problem. Don’t forget how the situation between Russia and the USA improved during the term of Reagan, one of the thickest people ever to be POTUS but with the right kind of personality. Obama’s Nobel peace prize will be remembered as one of the biggest misjudgments in history. Hillary’s and Obama’s foreign policies have made the world a great deal more dangerous over the last eight years, and Hillary would have made Russia even more edgy, the chance of extinction significant, Iran even more empowered, the refugee crisis even greater, and social stress due to migration amplified. In a choice of two evils, Trump’s version is by far the safer.

1984 has come a great deal closer to reality over the last eight years too. Politically correct sanctimony has taken the place of religion and a Spanish Inquisition has oppressed anyone who doesn’t acknowledge and worship the New Truth. I’ve written plenty on 1984 before and won’t repeat it all here, but consider how the mainstream media has handled this election, amplifying every Trump fault while whitewashing Clinton’s. Unbiased is not a word I could use of today’s MSM; one-sidedness and severe distortion of the truth would be much more appropriate descriptions. Trump made some very sexist remarks, but the media made far more of those than Bill’s actual use of the Oval Office. Hillary didn’t leave Bill over that, so how can she be quite so upset at a sexist remark by someone else? The stench of sanctimony has penetrated every area of the electoral campaign, and indeed every area of values debate in recent years. Is being sexist really as bad as being corrupt or putting personal gain ahead of national interests? Accusations of Clinton corruption and mishandling of highly classified information were invariably approached as if exposing them was a greater crime than the acts themselves. I never saw any proper exploration of these in the MSM away from right-wing outlets such as Breitbart. Social media such as Facebook, Twitter and even Google have also been highly polluted by this sanctimony that greatly distorts the data and views people are exposed to, filtering articles and views that don’t comply with their value sets, creating bubbles of groupthink, amplifying tribal forces and increasing division, forcing thick wedges between left and right. The anger between the left and right tribes has become dangerous over the last terms. Hillary might have said she wants unity and that we’re stronger together, that it is Hillary love versus Trump hate, but the evidence points elsewhere, with those who didn’t agree with her apparently being odious, intolerable racists and uneducated, moronic bigots. A PC 1984 is already close and would have become rapidly closer in a Hillary term.

The social media backlash is already fierce, and the anti-Trump protests will be many and often. Sanctimony is a very powerful emotion and it will not go away any time soon. Every policy decision will be met by self-righteous indignation. The split between the holy, progressive, evolved, civilized left and the deplorable, contemptible, ignorant, uneducated, bigoted, omniphobic, Neanderthal right will grow, but it would have grown under Hillary too. California is sanctimony HQ and has oft mentioned that it would like to consider independence again. That day just came closer. I’ve been of the half-baked view that a dual democracy would actually be a better idea, with people sharing the same geography under different governance, and that would be more likely to disperse inter-tribe conflict, but an independent California might get better support in the real world.

The environment will benefit now too. Hillary would have backed more of the same CO2 panic measures such as carbon offset schemes that damage the environment by draining peat bogs and felling forests to plant palm oil plantations, displacing powerless tribes to make space, converting food crops into biofuel and inflating food prices beyond the ability of the world’s poor to pay, planting wind turbines that kill birds and bats and cause bogs to dry out, actually increasing CO2 output. Very many ‘green’ ideas actually harm the environment and the poor. Very few actually work as intended. Without a doubt, the environment will be better off without the greens in control. Environmental science has been polluted so badly that it has severely damaged the reputation of science as a whole over the last few years. New York is not under water, the polar ice caps have not vanished yet, a billion people have not actually been forced from their homes by the sea. Much of the latest science suggests we may well be seeing a prolonged period of cooling from 2020 due to strong reduction in solar activity combined with long period ocean cycles. Severely damaging the economy, increasing prices and taxes, and harming poor people disproportionately to solve a problem that actually isn’t anywhere near as bad as the alarmists have suggested, that has been postponed a few decades, and that will be made irrelevant after that by new technology emerging over those decades, is really not a good idea, especially if those natural cycles make the opposite trend more of an issue during that period. Again, we’d be far better off without any of that anti-CO2 policy.

Iran is upset by the Trump victory. That’s good. Iran was becoming rather too enthusiastic about its newfound power in the region. It would be a far greater threat with the nukes it would make in coming years thanks to Obama and Clinton. Another route to WW3 may well just have started to close. Hamas will feel less enthusiastic too. Different policy in that whole unstable region is needed, ongoing stupidity is not. Preventing an influx of jihadists hiding in migrant flows seems a better strategy than inviting more in by reckless virtue signalling. Those in need can still be helped, refugee camps can still offer protection. American kids have more chance now to sleep safely in their beds rather than become victims of jihad. Cultural conflicts between Islamic migrants that refuse to integrate and Americans with Western values will obviously be lower if there are fewer migrants too.

Finally, the UK will benefit too. Instead of a President determined to make sure the UK ‘goes to the back of the queue’ in trade negotiations, we will have one who is more likely to treat the UK well than the EU. Trump recognizes the bond between the UK and the USA far better than Clinton.

So, it ain’t all bad. Sure, you’ve got a buffoon for President, but you’ve had that before and you survived just fine. We nearly got Boris as our PM, so we almost know how you feel. It could have been worse and really, with all your checks and balances, I don’t think it will be all that bad.

The glass is half full, with a few bits of cork.