Category Archives: security

Enhanced cellular blockchain

I thought there was a need for a cellular blockchain variant, and for a more sustainable alternative to cryptocurrencies like Bitcoin that depend on unsustainable proof-of-work. So I designed one and gave it the temporary project name Grapevine. I like biomimetics, which I used for both the blockchain itself and its derivative management/application/currency/SW distribution layer. The ANTs were my invention in 1993 when I was with BT, along with Chris Winter. BT never did anything with it, and I believe MIT later published some notes on the idea too. ANTs are an ideal companion to blockchain, and together they could be the basis of some very secure IT systems.

The following has not been thoroughly checked so may contain serious flaws, but hopefully contains some useful ideas to push the field a little in the right direction. Hint: if you can’t read the smaller print, hold the control key and use the mouse scroll wheel to zoom.

With thanks to my good friend Prof Nick Colosimo for letting me bounce the ideas off him.

AI that talks to us could quickly become problematic

Google’s making the news again, adding evidence to the unfortunate stereotype of the autistic IT nerd that barely understands normal people, and they have therefore been astonished at a backlash that normal people would all easily have predicted. (I’m autistic and work mostly in IT too, and am well used to the stereotype, so it doesn’t bother me; in fact it is a sort of ‘get out of social interactions free’ card.) Last time it was Google Glass, where it apparently didn’t occur to them that people may not want other people videoing them without consent in pubs and changing rooms. This time it is Google Duplex, which makes phone calls on your behalf to arrange appointments, using a voice that is almost indistinguishable from a human’s. You could save time making an appointment with a hairdresser apparently, so the Googlanders decided it must be a brilliant breakthrough, and expected everyone to agree. They didn’t.

Some of the objections have been about ethics, e.g. that an AI should not present itself as human. Humans have rights and dignity and deserve respectful interactions with other people, but an AI doesn’t, and it should not masquerade as human to acquire such privilege without the other party’s knowledge and consent.

I would be more offended by the presumed attitude of the user. If someone thinks they are so much better than me that they can demand my time and attention without the expense of any of their own, delegating instead to a few microseconds of processing time in a server farm somewhere, I’ll treat them with the contempt they deserve. My response will not be favourable. I am already highly irritated by the NHS using simple voice-interaction messaging to check I will attend a hospital appointment. The fact that my health is on the line and notices at surgeries say I will be banned if I complain on social media is sufficient blackmail to ensure my compliance, but it still comes at the expense of my respect and goodwill. AI-backed voice interaction with a better voice wouldn’t be any better, and if it asked for more interaction, such as actually booking an appointment, it would be extremely annoying.

In any case, most people don’t speak in fully formed, grammatically and logically correct sentences. If you listen carefully to everyday chat, a lot of sentences are poorly pronounced, incomplete, jumbled, full of ums and ers and likes, and they require a great deal of cooperation by the listener to make any sense at all. They also wander off topic frequently. People don’t stick to a rigid vocabulary list or lists of nicely selected sentences. Responses are likely to include lots of preamble and verbal meandering, which adds ambiguity. The example used in a demo, “I’d like to make a hairdressing appointment for a client”, sounds fine until you factor in normal everyday humanity. A busy hairdresser or a lazy receptionist is not necessarily going to cooperate fully. “What do you mean, client?”, “404 not found”, “piss off google”, “oh FFS, not another bloody computer”, “we don’t do hairdressing, we do haircuts”, “why can’t your ‘client’ call themselves then?” and a million other responses are more likely than “what time would you like?”

Suppose though that it eventually gets accepted by society. First, call centers beyond the jurisdiction of your nuisance call blocker authority will incessantly call you at all hours asking or telling you all sorts of things, wasting huge amounts of your time and reducing quality of life. Voice spam from humans in call centers is bad enough. If the owners can multiply productivity by 1000 by using AI instead of people, the result is predictable.

We’ve seen the conspicuous political use of social media AI already. Facebook might have allowed companies to use very limited and inaccurate knowledge of you to target ads or articles that you probably didn’t look at. Voice interaction would be different. It uses a richer emotional connection than text or graphics on a screen. Google knows a lot about you too, but it will know a lot more soon. These big IT companies are also playing with tech to log you on easily to sites without passwords. Some of the gadgets involved might be worn, such as watches, bracelets or rings. They can pick up signals to identify you, but they can also check emotional states such as stress level. Voice gives away emotion too. AI can already tell better than almost all people whether you are telling the truth, lying or hiding something. Tech such as iris scans can also reveal emotional states, as well as give health clues. Simple photos can reveal your age quite accurately to AI (check out how-old.net).

The AI voice sounds human, but it is better than even your best friends at guessing your age, your stress and other emotions, your health, and whether you are telling the truth or not, and it knows far more about what you like and dislike and what you really do online than anyone you know, including you. It knows a lot of your intimate secrets. It sounds human, but its nearest human equivalent was probably Machiavelli. That’s who will soon be on the other side of the call, not some dumb chatbot. Now re-calculate political interference, and factor in the political leanings and social engineering desires of the companies providing the tools. Google and Facebook and the others are very far from politically neutral. One presidential candidate might get full cooperation, assistance and convenient looking the other way, while their opponent might meet rejection and citation of the official rules on non-interference. Campaigns on social issues will also be amplified by AI coupled to voice interaction. I looked at some related issues in a previous blog on fake AI (i.e. fake news type issues): https://timeguide.wordpress.com/2017/11/16/fake-ai/

I could but won’t write a blog on how this tech could couple well to sexbots to help out incels. It may actually have some genuine uses in providing synthetic companionship for lonely people, or helping or encouraging them in real social interactions with real people. It will certainly have some uses in gaming and chatbot game interaction.

We are not very far from computers that are smarter than people across a very wide spectrum, and probably not very far from conscious machines that have superhuman intelligence. If we can’t even rely on IT companies to understand the likely consequences of such obvious stuff as Duplex before they push it, how can we trust them in other upcoming areas of AI development, or even closer-term techs with less obvious consequences? We simply can’t!

There are certainly a few areas where such technology might help us, but most are minor and the rest don’t need any deception, and they all come at great financial cost or real social and political risk, as well as more abstract risks such as threats to human dignity and other ethical issues. I haven’t given this much thought yet and I am sure there must be very many other consequences I have not touched on. Google should do more thinking before they release stuff. Technology is becoming very powerful, but we all know that great power comes with great responsibility, and since most people aren’t engineers and so can’t think through all the potential technology interactions and consequences, engineers such as Google’s must act more responsibly. I had hoped they’d started, and they said they had, but this is not evidence of that.

 

Why superhumans are inevitable, and what else comes in the box

Do we have any real choice in the matter of making super-humans? 20 years ago, I estimated 2005 as the point of no return, and nothing since then has changed my mind on that date. By my reckoning, we are already inevitably committed to designer babies, ebaybies, super-soldiers and super-smart autonomous weapons, direct brain-machine links, electronic immortality, new human races, population explosion, inter-species conflicts and wars with massively powerful weaponry, superhuman conscious AI, smart bacteria, and the only real control we have is relatively minor adjustments on timings. As I was discussing yesterday, the technology potential for this is vast and very exciting, nothing less than a genuine techno-utopia if we use the technologies wisely, but optimum potential doesn’t automatically become reality, and achieving a good outcome is unlikely if many barriers are put in its way.

In my estimation, we have already started the countdown to this group of interconnected technologies – we will very likely get all of them, and we must get ready for the decisions and impacts ahead. At the moment, our society is a small child about to open its super-high-tech xmas presents while fighting with its siblings. Those presents will give phenomenal power far beyond the comprehension of the child or its emotional maturity to equip it to deal with the decisions safely. Our leaders have already squandered decades of valuable preparation time by ignoring the big issues to focus on trivial ones. It is not too late to achieve a good ending, but it won’t happen by accident and we do need to make preparations to avoid pretty big problems.

Both hard and soft warfare – the sword and the pen – already use rapidly advancing AI, and the problems are already running ahead of what their owners intended.

Facebook, Twitter, Instagram and other media giants all have lots of smart people, and presumably they mean well, but if so, they have certainly been naive. They maybe hoped to eliminate loneliness, inequality and poverty and create a loving, interconnected global society with global peace, but instead created fake news, social division, conflict and election interference. More likely they didn’t intend either outcome; they just wanted to make money, and that took priority over due care and attention.

Miniaturised swarming smart-drones are already the subject of a new arms race that will deliver almost un-killable machine adversaries by 2050. AI is separately in other arms races to make super-smart AI and super-smart soldiers. This is key to the 2005 point of no return. It was around 2005 that we reached the level of technology where future AI development all the way to superhuman machine consciousness could be done by individuals, mad scientists or rogue states, even if the major powers had banned it. Before 2005, there probably wasn’t quite enough knowledge on the net to do that. In 2018, lots of agencies have already achieved superiority to humans in niche areas, and other niches will succumb one by one until the whole field of human capability is covered. The first machines to behave in ways not fully understood by humans arrived in the early 1990s; in 2018, neural nets already make lots of decisions at least partly obscured to humans.

This AI development trend will take us to superhuman AI, which will be able to accelerate development of its own descendants to vastly superhuman AI, fully conscious, with emotions and its own agendas. Humans will then need protection against being wiped out by superhuman AI. There are only three ways we could get that: redesign the brain biologically to be far smarter, which is essentially impossible in the time-frame; design ways to link our brains to machines, so that we have direct access to the same intelligence as the AIs, so that a gulf doesn’t appear and we can remain relatively safe; or pray for super-smart aliens to come to our help, not the best prospect.

Therefore we will have no choice but to make direct brain links to super-smart AI. Otherwise we risk extinction. It is that simple. We have some idea how to do it: nanotech devices inside the brain linking to each and every synapse, relaying electrical signals either way – a difficult but not impossible engineering problem. Best guesses for the time-frame fall in the 2045-2050 range for a fully working link that not only relays signals between your organic brain and an IT replica, but by doing so essentially makes external IT just another part of your brain. That conveys some of the other technology gifts: electronic immortality; new varieties of humans; smart bacteria (which will be created during the development path to this link); human-variant population explosion, especially in cyberspace, with androids as their physical front end; and the inevitable inter-species conflicts over resources and space. Trillions of AI and human-like minds in cyberspace that want to do things in the real world cannot be assumed to be willingly confined just to protect the interests of what they will think of as far lesser species.

Super-smart AI, or humans with almost total capability to design whatever synthetic biology is needed to achieve any biological feature, will create genetic listings for infinite potential offspring, simulate them, give some of them cyberspace lives, assemble actual embryos for some of them and produce designer babies. Already in 2018, you can pay to get a DNA listing, and blend it in any way you want with the listing of anyone else. It’s already possible to make DNA listings for potential humans and sell them on ebay, hence the term ebaybies. That is perfectly legal, still, but I’ve been writing and lecturing about them since 2004. Today they would just be listings, but we’ll one day have the tech to simulate them, choose ones we like and make them real, even some that were sold as celebrity collector items on ebay. It’s not only too late to start regulating this kind of tech, our leaders aren’t even thinking about it yet.

These technologies are all linked intricately, and their foundations are already in place, with much of the building on those foundations under way. We can’t stop any of these things from happening; they will all come in the same basket. Our leaders are becoming aware of the potential, and the dangers, of the AI positive feedback loop, but at least 15 years too late to do much about it. They have been warned repeatedly and loudly but have focused instead on the minor politics of the day that voters are aware of. The fundamental nature of politics is unlikely to change substantially, so even efforts to slow the pace of development or to limit areas of impact are likely to be always too little, too late. At best, we will be able to slow runaway AI development enough to allow direct brain links to protect against extinction scenarios. But we will not be able to stop it now.

Given inevitability, it’s worth questioning whether there is even any point in trying. Why not just enjoy the ride? Well, the brakes might be broken, but if we can steer the bus expertly enough, it could be exciting and we could come out of it smelling of roses. The weak link is certainly the risk of super-smart AI, whether AI versus humans or countries using super-smart AI to fight fiercely for world domination. That risk is alleviated by direct brain linkage, and I’d strongly argue it necessitates it, but that brings the other technologies. Even if we decide not to develop it, others will, so one way or another, all these techs will arrive, and our late century will have the full suite, plus many others of course.

We need as a matter of extreme urgency to fix these silly social media squabbles and over-reactions that are pulling society apart. If we have groups hating each other with access to extremely advanced technology, that can only mean trouble. Tolerance is broken, sanctimony rules, the Inquisition is in progress. We have been offered techno-utopia, but current signs are that most people think techno-hell looks more appetizing and it is their free choice.

AIs of a feather flocking together to create global instability

Hawking and Musk have created a lot of media impact with their warnings about AI, so although terminator scenarios resulting from machine consciousness have been discussed, as has the more mundane use of non-conscious autonomous weapon systems, it’s worth noting that I haven’t yet heard them mention one major category of risks from AI – emergence. AI risks have been discussed frequently since the 1970s, and in the 1990s a lot of work was done in the AI community on emergence. Complex emergent patterns of behavior often result from interactions between entities driven by simple algorithms. Genetic algorithms were demonstrated to produce evolution, and simple neighbor-interaction rules were derived to illustrate flocking behaviors that make lovely screen-saver effects. Cellular automata were played with. In BT we invented ways of self-organizing networks and FPGAs, played with mechanisms that could be used for evolution and consciousness, and demonstrated managing networks via ANTs – autonomous network telephers – using smart packets that would run up and down wires sorting things out all by themselves. In 1987 I discovered a whole class of ways of bringing down networks via network resonance, information waves and their much larger class of correlated traffic – still unexploited by hackers apart from simple DoS attacks. These ideas have slowly evolved since, and some have made it into industry or hacker toolkits, but we don’t seem to be joining the dots as far as risks go.
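
To see how little machinery emergence needs, here is a minimal sketch of the classic ‘boids’-style flocking rules (Reynolds’ cohesion, alignment and separation). It is purely illustrative – the parameters are my invention, and it is not the BT-era code mentioned above:

```python
# Minimal 'boids'-style flocking: coordinated group motion emerging
# from three simple neighbour rules. Illustrative sketch only, with
# invented parameters.
import numpy as np

N, STEPS, RADIUS = 50, 200, 10.0
rng = np.random.default_rng(42)
pos = rng.uniform(0, 100, (N, 2))   # agent positions
vel = rng.uniform(-1, 1, (N, 2))    # agent velocities

def step(pos, vel):
    new_vel = vel.copy()
    for i in range(N):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (dist < RADIUS) & (dist > 0)
        if not nbrs.any():
            continue
        # Rule 1: cohesion -- steer toward neighbours' centre of mass
        cohesion = pos[nbrs].mean(axis=0) - pos[i]
        # Rule 2: alignment -- match neighbours' average velocity
        alignment = vel[nbrs].mean(axis=0) - vel[i]
        # Rule 3: separation -- back away from very close neighbours
        close = nbrs & (dist < RADIUS / 3)
        separation = (pos[i] - pos[close]).sum(axis=0) if close.any() else 0.0
        new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.05 * separation
    # cap speed so the simulation stays stable
    speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
    new_vel *= np.minimum(1.0, 2.0 / np.maximum(speed, 1e-9))
    return pos + new_vel, new_vel

for _ in range(STEPS):
    pos, vel = step(pos, vel)
print("velocity alignment:", np.linalg.norm(vel.mean(axis=0)))
```

Run it and a random cloud of agents gradually organizes into coherent drifting groups; nothing in the three rules mentions ‘flock’ at all. That is the essence of emergence.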

I read an amusing article this morning by an ex-motoring editor who was declined insurance because the AI systems used by insurance companies had labelled him as high risk, possibly because he had associated with people like Clarkson. Actually, he had no idea why, but that was his broker’s theory of how it might have happened. It’s a good article, well written, and covers quite a few of the dangers of allowing computers to take control.

http://www.dailymail.co.uk/sciencetech/article-5310031/Evidence-robots-acquiring-racial-class-prejudices.html

The article suggested how AIs in different companies might all come to similar conclusions about people or places or trends or patterns in a nice tidy positive feedback loop. That’s exactly the sort of thing that can drive information waves, which I demonstrated in 1987 can bring down an entire network in less than 3 milliseconds, in such a way that it would continue to crash many times when restarted. That isn’t intended by the algorithms, which individually ought to make good decisions, but which, when interacting with one another, create the emergent phenomenon. Automated dealing systems are already pretty well understood in this regard and mechanisms prevent frequent stock market collapses, but that is only one specific type of behavior in one industry that is protected. There do not seem to be any industry-wide mechanisms to prevent the rest of this infinite class of problems from affecting any or all of the rest, simultaneously.

As we create ever more deep learning neural networks, that essentially teach themselves from huge data pools, human understanding of their ‘mindsets’ decreases. They make decisions using algorithms that are understood at a code level, but the massive matrix of derived knowledge they create from all the data they receive becomes highly opaque. Often, even usually, nobody quite knows how a decision is made. That’s bad enough in a standalone system, but when many such systems are connected, produced and owned and run by diverse companies with diverse thinking, the scope for destructive forms of emergence increases geometrically.

One result could be gridlock. Systems fed with a single new piece of data could crash. My 3 millisecond result from 1987 would still stand, since network latency is the prime limiter. The first AI receives the data, alters its mindset accordingly, processes it, makes a decision and interacts with a second AI. This second one might have a different ‘prejudice’, so makes its own decision based on different criteria and refuses to respond the way intended. A third one looks at the second’s decision, takes that as evidence that there might be an issue, and with its risk-averse mindset also refuses to act, and that inaction spreads through the entire network in milliseconds. Since the first AI thinks the data is all fine and the others should have gone ahead, it now interprets their inaction as evidence that that type of data is somehow ‘wrong’, so itself refuses to process any more of that type, whether from its own operators or other parts of the system. It essentially adds its own outputs to the bad feeling, and the entire system falls into sulk mode. As one part of the infrastructure starts to shut down, it infects other connected parts, and our entire IT – entire global infrastructure – could fall into sulk mode. Since nobody knows how it all works, or what caused the shutdown, it might be extremely hard to recover.
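
The contagion step is easy to caricature in code. Below is a toy model of the sulk-mode cascade, assuming (purely my invention, for illustration) a thousand risk-averse agents that each watch ten random peers and treat any watched peer’s refusal as evidence of trouble:

```python
# Toy model of the 'sulk mode' cascade: risk-averse agents that treat
# any watched peer's refusal as evidence of trouble. All numbers are
# invented for illustration.
import random

N_AGENTS, WATCHED = 1000, 10
random.seed(1)
watching = {i: random.sample(range(N_AGENTS), WATCHED) for i in range(N_AGENTS)}
refusing = {0}                 # one agent rejects a single piece of data

rounds, changed = 0, True
while changed:
    changed, rounds = False, rounds + 1
    for agent in range(N_AGENTS):
        if agent in refusing:
            continue
        if any(peer in refusing for peer in watching[agent]):
            refusing.add(agent)    # caution is contagious
            changed = True

print(f"{len(refusing)}/{N_AGENTS} agents refusing after {rounds} rounds")
```

One seeded refusal is enough: within a few sweeps almost the whole population is refusing, and nothing in any individual agent’s rule is faulty.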

Another possible result is a direct information wave, almost certainly triggered by a piece of fake news. Imagine our IT world in 5 years’ time, with all these super-smart AIs super-connected. A piece of fake news says a nuke has just been launched somewhere. Stocks will obviously decline, whatever the circumstances, so as the news spreads, everyone’s AIs will take it on themselves to start selling shares before the inevitable collapse, triggering that very collapse – except it won’t, because the markets won’t let that happen. BUT… the wave does spread, and all those individual AIs want to dispose of those shares, or at least find out what’s happening, so they all start sending messages to one another, exchanging data, trying to find out what’s going on. That’s the information wave. They can’t sell shares or find out, because the network is going into overload, so they try even harder and force it into severe overload. So it falls over. When it comes back online, they all try again, crashing it again, and so on.

Another potential result is smartass AI. There is always some prat somewhere who sees an opportunity to take advantage and ruins it for everyone else by doing something like exploiting a small loophole in the law – or in this case, most likely, a prejudice our smartass AI has discovered in some other AI that means it can be taken advantage of by doing x, y or z. Since nobody quite knows how any of their AIs are making their decisions, because their mindsets are too big and too complex, it will be very hard to identify what is going on. Some really unusual behavior is corrupting the system because some AI is going rogue somewhere somehow, but which one, where, how?

That one brings us back to fake news. AI systems will very soon be infected by their own varieties of it. Complex networks of AIs will have many of the same problems we are seeing in human social networks. An AI could become a troll just the same as a human, deliberately winding others up to generate attention or drive a change of some parameter – any parameter – in its own favour. Activist AIs will happen because people will make them to push human activist causes, but they will also do it all by themselves. Their analysis of the system will sometimes show them that a good way to get a good result is to cause problems elsewhere.

Then there’s climate change, weather, storms, tsunamis. I don’t mean real ones; I mean the system-wide result of tiny interactions of tiny waves and currents of data and knowledge in neural nets. Tiny effects in one small part of a system can interact in unforeseen ways with other parts of other systems nearby, creating maybe a breeze, which interacts with breezes in nearby regions to create hurricanes. I think that’s a reasonable analogy. Chaos applies to neural-net societies just as it does to climate, and the equivalents of 50-year waves will cause equivalent havoc in IT.

I won’t go on with more examples; long blogs are awful to read. None of these requires any self-awareness, sentience or consciousness, call it what you will. All of them can easily happen through simple interactions of fairly trivial AI deep-learning nets. The level of interconnection sounds like it may already be becoming vulnerable to such emergence effects. Soon it definitely will be. Musk and Hawking have at least joined the party, and they’ll think more and more deeply in coming months. Zuckerberg apparently doesn’t believe in AI threats but now accepts the problems social media is causing. Sorry Zuck, but the kind of AI your company is messing with will also be subject to its own kinds of social media issues, not just in its trivial decisions on what to post or block, but in actual inter-AI socializing issues. It might not try to eliminate humanity, but if it brings all of our IT to a halt and prevents rapid recovery, we’re still screwed.

 

2018 outlook: fragile

Futurists often consider wild cards – events that could happen, and would undoubtedly have high impacts if they do, but have either low certainty or low predictability of timing. 2018 comes with a larger basket of wildcards than we have seen for a long time. As well as wildcards, we are also seeing several ongoing trends simultaneously reaching peaks, resulting in socio-political 100-year waves. If I had to summarise 2018 in a single word, I’d pick ‘fragile’, with ‘volatile’ and ‘combustible’ on my shortlist.

Some of these are very much in all our minds, such as possible nuclear war with North Korea, imminent collapse of bitcoin, another banking collapse, a building threat of cyberwar, cyberterrorism or bioterrorism, rogue AI or emergence issues, high instability in the Middle East, rising inter-generational conflict, resurgence of communism and decline of capitalism among the young, increasing conflicts within LGBTQ and feminist communities, collapse of the EU under combined pressures from many angles: economic stresses, unpredictable Brexit outcomes, increasing racial tensions resulting from immigration, severe polarization of left and right with the rise of extreme parties at both ends. All of these trends have strong tribal characteristics, and social media is the perfect platform for tribalism to grow and flourish.

Adding fuel to the building but still unlit bonfire are increasing tensions between the West and Russia, China and the Middle East. Background natural wildcards – major epidemics, asteroid strikes, solar storms, megavolcanoes, megatsunamis and ‘the big one’ earthquakes – are still there waiting in the wings.

If all this wasn’t enough, society has never been less able to deal with problems. Our ‘snowflake’ generation can barely cope with a pea under the mattress without falling apart or throwing tantrums, so how we will cope as a society if anything serious happens such as a war or natural catastrophe is anyone’s guess. 1984-style social interaction doesn’t help.

If that still isn’t enough, we’re apparently running a little short on Gandhis, Mandelas, Lincolns and Churchills right now too. Juncker, Trump, Merkel and May are at the other end of that scale on ability to inspire and bring everyone together.

Depressing stuff, but there are plenty of good things coming too. Augmented reality, more and better AI, voice interaction, space development, cryptocurrency development, better IoT, fantastic new materials, self-driving cars and ultra-high speed transport, robotics progress, physical and mental health breakthroughs, environmental stewardship improvements, and climate change moving to the back burner thanks to coming solar minimum.

If we are very lucky, none of the bad things will happen this year and will wait a while longer, but many of the good things will come along on time or early. If.

Yep, fragile it is.

 

Mega-buildings could become cultural bubbles

My regular readers, both of them in fact, will know I am often concerned about the dangerous growth of social media bubbles. By mid-century, thanks to upcoming materials, some cities will have a few buildings over 1km tall, possibly 10km (and a spaceport or two up to 30km high). These would be major buildings, and could create a similar problem.

A 1km building could have 200 floors, and with 100m-square floors, 200 hectares of space. Assuming half is residential space and the other half is shops, offices or services, that equates to roughly 11,000 luxury apartments (90 sq m each) or 40,000 basic flats (25 sq m each). Each such building could therefore be equivalent to a small town, with maybe 50,000 inhabitants. A 10km-high mega-building, with a larger 250m side, would have over 60 times more space, housing up to 300,000 people and all they need day-to-day, essentially a city.
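
Here is that back-of-envelope arithmetic as a runnable sketch, assuming 5m per floor and the unit sizes above (assumptions for illustration, not survey data):

```python
# Back-of-envelope floorspace arithmetic for the towers above.
# Floor height and unit sizes are assumptions, not measurements.
FLOOR_HEIGHT = 5.0   # metres per floor (assumed)

def capacity(height_m, side_m, residential_fraction=0.5):
    floors = int(height_m / FLOOR_HEIGHT)
    total_m2 = floors * side_m ** 2
    residential_m2 = total_m2 * residential_fraction
    return floors, total_m2, residential_m2

for name, height, side in [("1 km tower", 1_000, 100),
                           ("10 km mega-building", 10_000, 250)]:
    floors, total, resi = capacity(height, side)
    print(f"{name}: {floors} floors, {total / 10_000:,.0f} ha of floorspace,")
    print(f"  ~{resi / 90:,.0f} luxury apartments (90 m2) "
          f"or ~{resi / 25:,.0f} basic flats (25 m2)")
```

The 10km case works out at about 62 times the floorspace of the 1km tower; the 300,000 figure assumes far more generous space per resident, as befits a whole city rather than a single residential tower.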

Construction could be interesting. My thoughts are that a 10km building could be extruded from the ground using high pressure 3D printing, rather than assembled with cranes. Each floor could be fully fitted out while it is still near ground level, its apartments sold and populated, even as the building grows upward. That keeps construction costs and cash flow manageable.

My concern is that although we will have the technology to build such buildings in the 2040s, I’m not aware of much discussion about how cultures would evolve in such places, at least not outside of sci-fi (like Judge Dredd or Blade Runner). I rather hope we wouldn’t just build them first and try to solve social problems later. We really ought to have some sort of plans to make them work.

In a 100m side building, entire floors or groups of floors would likely be allocated to particular functions – residential, shopping, restaurants, businesses etc. Grouping functions sensibly reduces the total travel needed. In larger buildings, it is easier to have local shops mixed with apartments for everyday essentials, with larger malls elsewhere.

People could live almost entirely in the building, rarely needing to leave, and many might well do just that, essentially becoming institutionalized. I think these buildings will feel very different from small towns. In small towns, people still travel a lot to other places, and a feeling of geographic isolation doesn’t emerge. In a huge tower block of similar population and facilities, I don’t think people would leave as often, and many would stay inside. Everything they need would be close by and might soon feel safe and familiar, while the external world might seem more distant, scarier. Institutionalization might not take long – a month or two of becoming used to the convenience of staying nearby while watching news of horrors going on elsewhere. Once people break the habit of leaving the building, it could become easier to find reasons not to leave it in future.

Power structures would soon evolve – local politics would happen, criminal gangs would emerge, people would soon learn of good and bad zones. It’s possible that people might become tribal, their building and their tribe competing for external resources and funding with tribes in other mega-buildings, and there might be conflict. Knowing they are physically detached, the same bravery we see on social media today to attack total strangers just because they hold different views might emerge. There might be cyber-wars, drone wars, IoT wars between buildings.

I’m not claiming to be a social anthropologist. I have no real idea how these buildings will work, and perhaps my fears are unjustified. But even I can see some potential problems just based on what we see today, magnified for the same reasons problems get magnified on social media. Feelings of safety and anonymity can lead to some very nasty tribal behaviors. Managing diversity of opinion among people moving in would be a significant challenge; maintaining it might be near impossible. With the sort of rapid polarization we’ve already seen thanks to social media bubbles, physically contained communities would surely see those same forces magnified every day.

Building a 10km mega-building will become feasible in the 2040s, and increased urban populations will make them an attractive option for planners. Managing them and making them work socially might be a much bigger challenge.

 

Google and the dangerous pursuit of ‘equality’

The world just got more dangerous, and I’m not talking about N Korea and Trump.

Google just sacked an employee because he openly suggested that men and women (not all, but some, and there is an overlap, and …) might tend to have different preferences in some areas, and that could (but not always, and only in certain cases, and we must always recognize and respect everyone, and …) possibly account for some of the difference in numbers of men and women in certain roles (but there might be other causes too, and obviously lots of discrimination, and …).

Yes, that’s what he actually said, but with rather more ifs and buts and maybes. He felt the need to wrap such an obvious statement in cotton wool several kilometers thick so as not to offend the deliberately offended, but deliberate offense was taken nonetheless, and he is out on his ear.

Now, before you start thinking this is some right-wing rant, I feel obliged to point out just how progressive Futurizon is: 50% of all Futurizon owners and employees are female, all employees and owners have the same voting rights, 50% are immigrants and all are paid exactly the same and have the same size offices, regardless of dedication, ability, nature or quality or volume of output and regardless of their race, religion, beauty, shape, fitness, dietary preferences, baldness, hobbies or political views, even if they are Conservatives. All Futurizon offices are safe zones where employees may say anything they want of any level of truth, brilliance or stupidity and expect it to be taken as absolute fact and any consequential emotional needs to be fully met. No employee may criticize any other employee’s mouse mat, desk personalisation or screen wallpaper for obvious lack of taste. All employees are totally free to do anything they choose 100% of the time and can take as much leave as they want. All work is voluntary. All have the same right to respectfully request any other employee to make them coffee, tea or Pimms. All employees of all genders real or imagined are entitled to the same maternity and paternity rights, and the same sickness benefits, whether ill or not. In fact, Futurizon does not discriminate on any grounds whatsoever. We are proud to lead the world in non-discrimination. Unfortunately, our world-leading terms of employment mean that we can no longer afford to hire any new employees.

However, I note that Google has rather more power and influence than Futurizon so their policies count more. They appear (Google also has better lawyers than I can afford, so I must stress that all that follows is my personal opinion) to have firmly decided that diversity is all-important and they seem to want total equality of outcome. The view being expressed not just by Google but by huge swathes of angry protesters seems to be that any difference in workforce representation from that of the general population must arise from discrimination or oppression so must be addressed by positive action to correct it. There are apparently no statistically discernible differences in behavior between genders, or in job or role preference, so any you may have noticed over the time you’ve been alive is just your prejudice. Google says they fully support free speech and diversity of views, but expression of views is apparently only permitted as long as those views are authorized, on penalty of dismissal.

So unless I’m picking up totally the wrong end of the stick here, and I don’t do that often, only 13% of IT engineers are women, but internal policies must ensure that the proportion rises to 50%, whether women want to do that kind of work or not. In fact, nobody may question whether as many women want to work as IT engineers as men; it must now be taken as fact. By extension, since more women currently work in marketing, HR and PR, they must be substituted by men via positive action programs until men fill 50% of those roles. Presumably similar policies must also apply in medical bays for nursing and other staff there, and in construction teams for their nice new buildings. Ditto all other genders, races, religions; all groups must be protected and equalized to USA population proportions, apparently except those that don’t claim to hold sufficiently left-wing views, in which case it is seemingly perfectly acceptable to oppress, ostracize and even expel them.

In other words, freedom of choice and difference in ability, and more importantly freedom from discrimination, must be over-ruled in favor of absolute equality of diversity, regardless of financial or social cost, or impact on product or service quality. Not expressing full and enthusiastic left-wing compliance is seemingly just cause for dismissal.

So, why does this matter outside Google? Well, AI is developing very nicely. In fact, Google is one of the star players in the field right now. It is Google that will essentially decide how much of the AI around us is trained, how it learns, what it learns, what ‘knowledge’ it has of the world. Google will pick the content the AI learns from, and overrule or reeducate it if it draws any ‘wrong’ conclusions about the world, such as that more women than men want to be nurses or work in HR, or that more men than women want to be builders or engineers. A Google AI must presumably believe that the only differences between men and women are physical, unless their AI is deliberately excluded from the loudly declared corporate values and belief sets.

You should be very worried. Google’s values really matter. They have lots of influence on some of the basic tools of everyday life. Even outside their company, their AI tools and approaches will have a strong influence on how other AI develops, determining operating systems and platforms, languages, mechanisms, interfaces, filters, even prejudices, and that reach and influence is likely to increase. Their AI may well be in many self-driving cars, and if those have to make life-or-death decisions, the underlying value assumptions must feature in the algorithms.

Soon companies will need AI that is more emotionally compliant. AI will use compliments or teasing or seduction or sarcasm or wit as marketing tools as well as just search engine positioning. Soon AI will use highly expressive faces with attractive voices and attractive messages, tailored to appeal to you by pandering to your tastes and prejudices while thinking something altogether different. AI might be the person at the party that is all smiles and compliments, before going off to tell everyone else how awful it thinks you are. If you dare to say something not ‘authorized’, the ultra-smart AI all around you might treat you condescendingly, making you feel ashamed, ostracized, a dinosaur. Then it might secretly push you down a few pages in search results, or put a negative spin on text summaries about you, or exclude you from recommendations. Or it might do all the secret stuff while pretending it thinks you’re fantastic.

Internal cultural policies in companies like Google today could soon become external social engineering to push the left-wing world the IT industry believes in. It isn’t just Google; Facebook and Twitter are also important and just as Left, though Amazon, Samsung, IBM and other AI players are less overtly politically biased, so far at least. Left-wing policies generally cost a lot more, but Google and Facebook will presumably still expect other companies and people to pay the taxes to pay for it all. As their female staff gear up to fight them over pay differences between men and women for similar jobs, it often seems that Google’s holier-than-thou morality doesn’t quite make it as far as their finances.

Then it really starts being fun. We’ll soon have bacteria that can fabricate electronic circuits within themselves. Soon they’ll be able to power them too, giving the concept of smart yogurt. These bacteria could also have nanotechnology flagella to help them get around. We’ll soon have bacterial spies all over our environment, even on our skin, intercepting electronic signals that give away our thoughts. They’ll bring in data on everything that is said, everything that everyone even thinks or feels. Those bacteria will be directly connected into AI, in fact they’ll be part of it. They’ll be able to change things, to favor or punish according to whether they like what someone believes in or how they behave.

It isn’t just right-wing extremists that need to worry. I’m apparently Nouveau Left – I score slightly left of center on political profiling tests – but I’m worried. A lot of this PC stuff seems extreme to me, sometimes just nonsense. Maybe it is, or maybe I should be lefter. But it’s not my choice. I don’t make the rules. Companies like Google make the rules; they even run the AI ethics groups. They decide much of what people see online, and even the meaning of the words. It’s very 1984-ish.

The trouble with the ‘echo chambers’ we hear about is that they soon normalize views to the loudest voices in those groups, and those don’t tend to be the moderates. We can expect views to move further toward the extremes, not back from them. You probably aren’t left enough either. You should also be worried.

AI Activism Part 2: The libel fields

This follows directly from my previous blog on AI activism, but you can read that later if you haven’t already. Order doesn’t matter.

https://timeguide.wordpress.com/2017/05/29/ai-and-activism-a-terminator-sized-threat-targeting-you-soon/

Older readers will remember an emotionally powerful 1984 film called The Killing Fields, set against the backdrop of the Khmer Rouge’s activity in Cambodia, aka the Communist Party of Kampuchea. Under Pol Pot, the Cambodian genocide of 2 to 3 million people was part of a social engineering policy of de-urbanization. People were tortured and murdered (some in the ‘killing fields’ near Phnom Penh) for having connections with the former government or with foreign governments, for being the wrong race, for being ‘economic saboteurs’, or simply for being professionals or intellectuals.

You’re reading this, therefore you fit in at least the last of these groups and probably others, depending on who’s making the lists. Most people don’t read blogs but you do. Sorry, but that makes you a target.

As our social divide increases at an accelerating speed throughout the West, so the choice of weapons is moving from sticks and stones or demonstrations towards social media character assassination, boycotts and forced dismissals.

My last blog showed how various technology trends are coming together to make it easier and faster to destroy someone’s life and reputation. Some of that stuff I was writing about 20 years ago, such as virtual communities lending hardware to cyber-warfare campaigns, other bits have only really become apparent more recently, such as the deliberate use of AI to track personality traits. This is, as I wrote, a lethal combination. I left a couple of threads untied though.

Today, the big AI tools are owned by the big IT companies. They also own the big server farms on which the power to run the AI exists. The first thread I neglected to mention is that Google has made much of its AI open source. There are lots of good things about that, but for the purposes of this blog, it means that the AI tools required for AI activism will also be largely public, and pressure groups and activists can use them as a start-point for any more advanced tools they want to make, or just use them off the shelf.

Secondly, it is fairly easy to link computers together to provide an aggregated computing platform. The SETI project was the first major proof of concept of that, ages ago. Today, we take peer-to-peer networks for granted. When the activist group is ‘the liberal left’ or ‘the far right’, that adds up to a large number of machines, so the power available for any campaign is notionally very large. Harnessing it doesn’t need IT skill from contributors. All they’d need to do is click a box on an email or tweet asking for their support for a campaign.
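
The mechanics are simple enough to sketch. Here is a minimal SETI-style pattern – a coordinator splits a job into independent work units, and volunteered machines fetch, process and return them. The names, numbers and the hash standing in for ‘work’ are all invented for illustration:

```python
# Minimal sketch of SETI-style aggregated computing: a coordinator
# splits a job into independent work units; volunteer machines pull,
# process and return them. Everything here is invented for illustration.
import hashlib
import queue
import threading

work_queue: "queue.Queue[tuple[int, bytes]]" = queue.Queue()
results: dict[int, str] = {}

def publish_work(data: bytes, unit_size: int = 8) -> None:
    # Split the job into small, independent chunks (the 'work units').
    for i in range(0, len(data), unit_size):
        work_queue.put((i, data[i:i + unit_size]))

def volunteer() -> None:
    # One contributed machine: pull units until none remain.
    while True:
        try:
            unit_id, chunk = work_queue.get_nowait()
        except queue.Empty:
            return
        results[unit_id] = hashlib.sha256(chunk).hexdigest()  # stand-in work

publish_work(b"a large dataset carved up across many donated machines")
workers = [threading.Thread(target=volunteer) for _ in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(f"{len(results)} work units completed by {len(workers)} volunteers")
```

Real volunteer-computing platforms add result verification and scheduling on top, but the core pattern is no more than this, which is why a click-to-join campaign needs nothing from contributors.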

In our new ‘post-fact’, fake news era, all sides are willing and able to use social media and the infamous MSM to damage the other side. Fakes are becoming better. Latest AI can imitate your voice, a chat-bot can decide what it should say after other AI has recognized what someone has said and analysed the opportunities to ruin your relationship with them by spoofing you. Today, that might not be quite credible. Give it a couple more years and you won’t be able to tell. Next generation AI will be able to spoof your face doing the talking too.

AI can (and will) evolve. Deep learning researchers have been looking deeply at how the brain thinks, how to make neural networks learn better and to think better, how to design the next generation to be even smarter than humans could have designed it.

As my friend and robotic psychiatrist Joanne Pransky commented after my first piece, “It seems to me that the real challenge of AI is the human users, their ethics and morals (Their ‘HOS’ – Human Operating System).” Quite! Each group will indoctrinate their AI to believe their ethics and morals are right, and that the other lot are barbarians. Even evolutionary AI is not immune to religious or ideological bias as it evolves. Superhuman AI will be superhuman, but might believe even more strongly in a cause than humans do. You’d better hope the best AI is on your side.

AI can put articles, blogs and tweets out there, pretending to come from you or your friends, colleagues or contacts. They can generate plausible-sounding stories of what you’ve done or said, spoof emails in fake accounts using your ID to prove them.

So we’ll likely see activist AI armies set against each other, running on peer-to-peer processing clouds, encrypted to hell and back to prevent dismantling. We’ve all thought about cyber-warfare, but we usually only think about viruses or keystroke recorders, or more lately, ransomware. These will still be used as small weapons in future cyber-warfare, but while losing files or a few bucks from an account is a real nuisance, losing your reputation is far worse: with it smeared all over the web, and all your contacts told what you’ve supposedly done or said and shown all the evidence, there is absolutely no way you could possibly explain your way convincingly out of every one of those instances. Mud does stick, and if you throw tons of it, even if most is wiped off, much will remain. Trust is everything, and enough doubt cast will eventually erode it.

So, we’ve seen many times through history the damage people are willing to do to each other in pursuit of their ideology. The Khmer Rouge had their killing fields. As the political divide increases and battles become fiercer, the next 10 years will give us The Libel Fields.

You are an intellectual. You are one of the targets.

Oh dear!

 

AI and activism, a Terminator-sized threat targeting you soon

You should be familiar with the Terminator scenario. If you aren’t then you should watch one of the Terminator series of films because you really should be aware of it. But there is another issue related to AI that is arguably as dangerous as the Terminator scenario, far more likely to occur and is a threat in the near term. What’s even more dangerous is that in spite of that, I’ve never read anything about it anywhere yet. It seems to have flown under our collective radar and is already close.

In short, my concern is that AI is likely to become a heavily armed Big Brother. It only requires a few components to come together that are already well in progress. Read this, and if you aren’t scared yet, read it again until you understand it 🙂

Already, social media companies are experimenting with using AI to identify and delete ‘hate’ speech. Various governments have asked them to do this, and since they also get frequent criticism in the media because some hate speech still exists on their platforms, it seems quite reasonable for them to try to control it. AI clearly offers potential to offset the huge numbers of humans otherwise needed to do the task.

Meanwhile, AI is already used very extensively by the same companies to build personal profiles on each of us, mainly for advertising purposes. These profiles are already alarmingly comprehensive, and increasingly capable of cross-linking between our activities across multiple platforms and devices. Latest efforts by Google attempt to link eventual purchases to clicks on ads. It will be just as easy to use similar AI to link our physical movements and activities and future social connections and communications to all such previous real world or networked activity. (Update: Intel intend their self-driving car technology to be part of a mass surveillance net, again, for all the right reasons: http://www.dailymail.co.uk/sciencetech/article-4564480/Self-driving-cars-double-security-cameras.html)

Although necessarily secretive about their activities, government also wants personal profiles on its citizens, always justified by crime and terrorism control. If they can’t do this directly, they can do it via legislation and acquisition of social media or ISP data.

Meanwhile, other experiences with AI chat-bots learning to mimic human behaviors have shown how easily AI can be gamed by human activists, hijacking or biasing learning phases for their own agendas. Chat-bots themselves have become ubiquitous on social media and are often difficult to distinguish from humans. Meanwhile, social media is becoming more and more important throughout everyday life, with provably large impacts in political campaigning and throughout all sorts of activism.

Meanwhile, some companies have already started using social media monitoring to police their own staff, in recruitment, during employment, and sometimes in dismissal or other disciplinary action. Other companies have similarly started monitoring social media activity of people making comments about them or their staff. Some claim to do so only to protect their own staff from online abuse, but there are blurred boundaries between abuse, fair criticism, political difference or simple everyday opinion or banter.

Meanwhile, activists increasingly use social media to force companies to sack a member of staff they disapprove of, or drop a client or supplier.

Meanwhile, end to end encryption technology is ubiquitous. Malware creation tools are easily available.

Meanwhile, successful hacks into large company databases become more and more common.

Linking these various elements of progress together, how long will it be before activists are able to develop standalone AI entities and heavily encrypt them before letting them loose on the net? Not long at all, I think. These AIs would search and police social media, spotting people who conflict with the activist agenda. Occasional hacks of corporate databases will provide names, personal details, contacts. Even without hacks, analysis of publicly available data going back years – everyone’s tweets and other social media entries – will provide the lists of people who have ever done or said anything the activists disapprove of.

When identified, they would automatically activate armies of chat-bots, fake news engines and automated email campaigns against them, with coordinated malware attacks directly on the person, and indirect attacks via communications with employers, friends, contacts, government agencies, customers and suppliers to do as much damage as possible to the interests of that person.

Just look at the everyday news already about alleged hacks and activities during elections and referendums by other regimes, hackers or pressure groups. Scale that up and realize that the cost of running advanced AI is negligible.

With the very many activist groups around, many driven with extremist zeal, very many people will find themselves in the sights of one or more of them. AI will be able to monitor everyone, all the time, and to target all of them at once to destroy each of their lives – anonymously, highly encrypted, hidden, roaming from server to server to avoid detection and annihilation, and once released, impossible to retrieve. The ultimate activist weapon, one that carries on the fight even if the activist is locked away.

We know for certain the depths and extent of activism, the huge polarization of society, the increasingly fierce conflict between left and right, between sexes, races, ideologies.

We know about all the nice things AI will give us with cures for cancer, better search engines, automation and economic boom. But actually, will the real future of AI be harnessed to activism? Will deliberate destruction of people’s everyday lives via AI be a real problem that is almost as dangerous as Terminator, but far more feasible and achievable far earlier?

Future sex, gender and relationships: how close can you get?

Using robots for gender play

I recently gave a public talk at the British Academy about future sex, gender and relationships, asking the question “How close can you get?”, considering particularly the impact of robots. The above slide is an example. People will one day (between 2050 and 2065, depending on their budget) be able to use an android body as their own, or even swap bodies with another person. Some will do so to be young again, many will do so to swap gender. Lots will do both. I often enjoy playing as a woman in computer games, so why not ‘come back’ and live all over again as a woman for real? Except I’ll be 90 in 2050.

The British Academy kindly uploaded the audio track from my talk at

If you want to see the full presentation, here is the PowerPoint file as a pdf:

sex-and-robots-british-academy

I guess it is theoretically possible to listen to the audio while reading the presentation. Most of the slides are fairly self-explanatory anyway.

Needless to say, the copyright of the presentation belongs to me, so please don’t reproduce it without permission.

Enjoy.