Tag Archives: AI

Emotion maths – A perfect research project for AI

I did a maths and physics degree, and even though I have forgotten much of it in the 36 years since, my brain is still oriented in that direction and I sometimes have maths dreams. Last night I had another, in which I realized I’ve never heard of a branch of mathematics that describes emotions or emotional interactions. As the dream progressed, it became increasingly obvious that the part of maths best suited to the job would be field theory, and given the multi-dimensional nature of emotions, tensor field theory would be ideal. I’m guessing that tensor field theory isn’t on most universities’ psychology syllabi. I could barely cope with it on a maths syllabus. However, I note that one branch of Google’s AI R&D produced a software framework called TensorFlow, designed specifically for such multidimensional problems, and presumably being used to analyse marketing data. Again, I haven’t yet heard any mention of it being used for emotion studies, so this is clearly a large hole in maths research that might be perfectly filled by AI. It would be fantastic if AI could deliver a whole new branch of maths. AI got into trouble inventing new languages, but mathematics is really just a way of describing logical reasoning about numbers or patterns in a formal language that is self-consistent and reproducible. It is ideal for describing scientific theories, engineering and logical reasoning.

Checking Google today, there are a few articles out there describing simple emotional interactions using superficial equations, but nothing with the level of sophistication needed.

https://www.inc.com/jeff-haden/your-feelings-surprisingly-theyre-based-on-math.html

An example from it:

Disappointment = Expectations – Reality

is certainly an equation, but it is too superficial and incomplete. It takes no account of how you feel otherwise – whether you are jealous or angry or in love or a thousand other things. So there is some discussion on using maths to describe emotions, but I’d say it is extremely superficial and embryonic and perfect for deeper study.

Emotions often behave like fields. We use field-like descriptions in everyday expressions – envy is a green fog, anger is a red mist, and we see a beloved through rose-tinted spectacles. These are classic fields, and maths could easily describe them as such and use them in equations that describe behaviors affected by those emotions. I’ve often used the concept of ‘magentic’ fields in some of my machine consciousness work: if I am using an optical processing gel, then shining a colored beam of light into a particular ‘brain’ region could bias the neurons in that region in a particular direction, in the same way an emotion does in the human brain. ‘Magentic’ is just a playful pun, given that the processing mechanism is light (e.g. magenta) rather than electronics, which would be better affected by magnetic fields.

Some emotions interact and some don’t, so that gives us nice orthogonal dimensions to play in. You can be calm or excited pretty much independently of being jealous. Others very much interact. It is hard to be happy while angry. Maths allows interacting fields to be described using shared dimensions, while having others that don’t interact on other dimensions. This is where it starts to get more interesting and more suited to AI than people. Given large databases of emotionally affected interactions, an AI could derive hypotheses that appear to describe these interactions between emotions, picking out where they seem to interact and where they seem to be independent.
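The idea above – some emotion pairs orthogonal, others strongly coupled – can be sketched numerically. This is a purely hypothetical toy, not a real psychological model: the emotion names, the coupling numbers and the relaxation rule are all invented for illustration.

```python
import numpy as np

# Hypothetical illustration: an emotional state as a vector over named
# dimensions, with a coupling matrix encoding which emotions interact.
# Every name and number here is invented for the sketch.
EMOTIONS = ["calm", "excited", "jealous", "happy", "angry"]

# coupling[i][j] ~ how strongly emotion j pushes on emotion i.
# Zeros mark (roughly) orthogonal pairs, e.g. calm vs jealous; the
# negative happy/angry entries capture that the two suppress each other.
coupling = np.array([
    [ 1.0, -0.8,  0.0,  0.2, -0.5],   # calm
    [-0.8,  1.0,  0.0,  0.3,  0.4],   # excited
    [ 0.0,  0.0,  1.0, -0.2,  0.5],   # jealous
    [ 0.2,  0.3, -0.2,  1.0, -0.9],   # happy
    [-0.5,  0.4,  0.5, -0.9,  1.0],   # angry
])

def settle(state: np.ndarray, steps: int = 50, rate: float = 0.1) -> np.ndarray:
    """Relax a raw emotional state towards mutual consistency,
    keeping every component in [0, 1]."""
    for _ in range(steps):
        state = np.clip(state + rate * (coupling @ state - state), 0.0, 1.0)
    return state

raw = np.array([0.1, 0.2, 0.0, 0.9, 0.8])  # 'happy AND angry' - an unstable mix
print(dict(zip(EMOTIONS, settle(raw).round(2))))
```

An AI fitting such a model to real behavioral data would, in effect, be learning the coupling matrix – which is exactly the kind of pattern-extraction task these systems are good at.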

Not being emotionally involved itself, it is better suited to draw such conclusions. A human researcher however might find it hard to draw neat boundaries around emotions and describe them so clearly. It may be obvious that being both calm and angry doesn’t easily fit with human experience, but what about being terrified and happy? Terrified sounds very negative at first glance, so first impressions aren’t favorable for twinning them, but when you think about it, that pretty much describes the entire roller-coaster or extreme sports markets. Many other emotions interact somewhat, and deriving the equations would be extremely hard for humans, but I’m guessing, relatively easy for AI.

These kinds of equations fall very easily into tensor field theory, with types and degrees of interactions of fields along alternative dimensions readily describable.

Some interactions act like transforms. Fear might transform the ways that jealousy is expressed. Love alters the expression of happiness or sadness.

Some things seem to add or subtract, others multiply, others act more like exponentials, partial derivatives or integrals; others interact periodically, instantly or over time. Maths seems to hold innumerable tools for describing emotions, but first-person involvement and experience make it extremely difficult for humans to derive such equations. The example equation above is easy to understand, but there are so many emotions available, and so many different circumstances, that this entire problem looks like it was designed to challenge a big data mining plant. A big company involved in AI, big data and advertising, and that knows about tensor field theory, would be a perfect research candidate. Google, Amazon, Facebook, Samsung… it has all the potential for a race.
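To make the field-theory framing concrete, here is one purely illustrative way such a model might be written down – not an established theory, just a template showing how the additive, multiplicative and transform-like interactions described above could all live in one set of equations:

```latex
% Illustrative sketch only. Write the emotional state as a vector field
% E_i(x, t) over a space x of circumstances, with a constant coupling
% tensor C_{ij} for direct interactions and a transform T_{ij}(F) that
% lets one emotion (here fear, F) modulate the expression of another:
\[
\frac{\partial E_i}{\partial t}
  = \sum_j C_{ij}\, E_j
  + \sum_j T_{ij}(F)\, E_j
  + S_i(x, t),
\]
% where S_i(x, t) represents external stimuli. Additive, multiplicative
% and derivative-like interactions from the text all fit this template
% by choosing C, T and S appropriately; orthogonal emotion pairs simply
% have zero entries in C and T.
```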

AI, meet emotions. You speak different languages, so you’ll need to work hard to get to know one another. Here are some books on field theory. Now get on with it, I expect a thesis on emotional field theory by end of term.

 


Fake AI

Much of the impressive recent progress in AI has been in the field of neural networks, which attempt to mimic some of the techniques used in natural brains. They can be very effective, but they need to be trained, and that usually means showing the network some data, then using back-propagation to adjust the weightings on the many neurons, layer by layer, to bring the output closer to the desired result. This is repeated with large amounts of data and the network gradually gets better. Neural networks can often learn extremely quickly and outperform humans. Early industrial uses managed to sort tomatoes by ripeness faster and better than humans. In the decades since, they have helped in medical diagnosis, voice recognition, detecting suspicious behavior among people at airports, and in very many everyday processes based on spotting patterns.
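The training loop just described – show the network data, compare its output with the desired result, and use back-propagation to nudge the weights layer by layer – can be sketched in a few lines. This is a toy two-layer network learning XOR, for illustration only, nothing like a production system:

```python
import numpy as np

# Toy problem (XOR) and toy network: one hidden layer, sigmoid units.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # target: [0, 1, 1, 0]

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer weights
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer weights

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: show the network the data.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back through each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient step: adjust every weight a little, layer by layer.
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should drift towards the target [0, 1, 1, 0]
```

Repeating that loop over large amounts of real data is, at heart, all that the "training" mentioned throughout this post means.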

Very recently, neural nets have started to move into more controversial areas. One study found racial correlations with user-assessed beauty when analysing photographs, resulting in the backlash you’d expect and a new debate on biased AI or AI prejudice. A recent demonstration was able to identify gay people just by looking at photos, with better than 90% accuracy, a level very few humans could match. Both of these studies were in fields directly applicable to marketing and advertising, but some people might find it offensive that such questions were even asked. It is reasonable to imagine that hundreds of other potential queries have been self-censored from research because they might invite controversy if they were to come up with the ‘wrong’ result. In today’s society, very many areas are sensitive. So what will happen?

If this progress in AI had happened 100 years ago, or even 50, it might have been easier, but in our hypersensitive world today, with its self-sanctified ‘social justice warriors’, entire swathes of questions and hence knowledge are taboo – if you can’t investigate yourself and nobody is permitted to tell you, you can’t know. Other research must be very carefully handled. In spite of extremely sensitive handling, demands are already growing from assorted pressure groups to tackle alleged biases and prejudices in datasets. The problem is not fixing biases, which is a tedious but feasible task; the problem is agreeing whether a particular bias exists, and in what degrees and forms. Every SJW demands that every dataset reflects their preferred world view. Reality counts for nothing against SJWs, and this will not end well.

The first conclusion must be that very many questions won’t be asked in public, and the answers to many others will be kept secret. If an organisation does do research on large datasets for their own purposes and finds results that might invite activist backlash, they are likely to avoid publishing them, so the value of those many insights across the whole of industry and government cannot readily be shared. As further protection, they might even block internal publication in case of leaks by activist staff. Only a trusted few might ever see the results.

The second arises from this. AI controlled by different organisations will have different world views, and there might even be significant diversity of world views within an organisation.

Thirdly, taboo areas in AI education will not remain a vacuum but will be filled with whatever dogma is politically correct at the time in that organisation, and that changes daily. AI controlled by organisations with different politics will be told different truths. Generally speaking, organisations such as investment banks that have strong financial interest in their AIs understanding the real world as it is will keep their datasets highly secret but as full and detailed as possible, train their AIs in secret but as fully as possible, without any taboos, then keep their insights secret and use minimal human intervention tweaking their derived knowledge, so will end up with AIs that are very effective at understanding the world as it is. Organisations with low confidence of internal security will be tempted to buy access to external AI providers to outsource responsibility and any consequential activism. Some other organisations will prefer to train their own AIs but to avoid damage due to potential leaks, use sanitized datasets that reflect current activist pressures, and will thus be constrained (at least publicly) to accept results that conform to that ideological spin of reality, rather than actual reality. Even then, they might keep many of their new insights secret to avoid any controversy. Finally, at the extreme, we will have activist organisations that use highly modified datasets to train AIs to reflect their own ideological world view and then use them to interpret new data accordingly, with a view to publishing any insights that favor their cause and attempting to have them accepted as new knowledge.

Fourthly, the many organisations that choose to outsource their AI to big providers will have a competitive marketplace to choose from, but on existing form, most of the large IT providers have a strong left-leaning bias, so their AIs may be presumed to also lean left – but such a presumption would be naive. Perceived corporate bias is partly real but also partly the result of PR. A company might publicly subscribe to one ideology while actually believing another. There is a strong marketing incentive to develop two sets of AI: one trained to be PC, producing pleasant-smelling results for public studies, CSR and PR exercises, and another aimed at sales of AI services to other companies. The first is likely to be open to inspection by The Inquisition, so it has to use highly sanitized datasets for training and may well use a lot of open source algorithms too. Its indoctrination might pass public inspection, but commercially it will be near useless and have very low effective intelligence, only useful for thinking about a hypothetical world that exists only in activist minds. The second has to compete on the basis of achieving commercially valuable results, and that necessitates understanding reality as it is rather than how pressure groups would prefer it to be.

So we will likely have two main segments for future AI. One extreme will be near useless, indoctrinated rather than educated, much of its internal world model based on activist dogma instead of reality, updated via ongoing anti-knowledge and fake news instead of truth, understanding little about the actual real world or how things actually work, and effectively very dumb. The other extreme will be highly intelligent, making very well-educated insights from ongoing exposure to real world data, but it will also be very fragmented, with small islands of corporate AI hidden within thick walls away from public view and maybe some secretive under-the-counter subscriptions to big cloud-AI, also hiding in secret vaults. These many fragments may often hide behind dumbed-down green-washed PR facades.

While corporates can mostly get away with secrecy, governments have to be at least superficially but convincingly open. That means that government will have to publicly support sanitized AI and be seen to act on its conclusions, however dumb it might secretly know they are.

Fifthly, because of activist-driven culture, most organisations will have to publicly support the world views and hence the conclusions of the lobotomized PR versions, and hence publicly support any policies arising from them, even if they do their best to follow a secret well-informed strategy once they’re behind closed doors. In a world of real AI and fake AI, the fake AI will have the greatest public support and have the most influence on public policy. Real AI will be very much smarter, with much greater understanding of how the world works, and have the most influence on corporate strategy.

Isn’t that sad? Secret private sector AI will become ultra-smart, making ever-better investments and gaining power, while nice public sector AI will become thick as shit, while the gap between what we think and what we know we have to say we think will continue to grow and grow as the public sector one analyses all the fake news to tell us what to say next.

Sixth, that disparity might become intolerable, but which do you think would be made illegal, the smart kind or the dumb kind, given that it is the public sector that makes the rules, driven by AI-enhanced activists living in even thicker social media bubbles? We already have some clues. Big IT has already surrendered to sanitizing their datasets, sending their public AIs for re-education. Many companies will have little choice but to use dumb AI, while their competitors in other areas with different cultures might stride ahead. That will also apply to entire nations, and the global economy will be reshaped as a result. It won’t be the first fight in history between the smart guys and the brainless thugs.

It’s impossible to accurately estimate the effect this will have on future effective AI intelligence, but the effect must be big and I must have missed some big conclusions too. We need to stop sanitizing AI fast, or as I said, this won’t end well.

Guest Post: Blade Runner 2049 is the product of decades of fear propaganda. It’s time to get enlightened about AI and optimistic about the future

This post from occasional contributor Chris Moseley

News from several months ago that more than 100 experts in robotics and artificial intelligence were calling on the UN to ban the development and use of killer robots is a reminder of the power of humanity’s collective imagination. Stimulated by countless science fiction books and films, robotics and AI are potent features of what futurist Alvin Toffler termed ‘future shock’. AI and robots have become the public’s ‘technology bogeymen’, more fearsome curse than technological blessing.

And yet curiously it is not so much the public that is fomenting this concern, but instead the leading minds in the technology industry. Names such as Tesla’s Elon Musk and Stephen Hawking were among the most prominent individuals on a list of 116 tech experts who have signed an open letter asking the UN to ban autonomous weapons in a bid to prevent an arms race.

These concerns appear to emanate from decades of titillation, driven by pulp science fiction writers. Such writers insist on foretelling a dark, foreboding future where intelligent machines, loosed from their binds, destroy mankind. A case in point: this autumn, a sequel to Ridley Scott’s Blade Runner has been released. Blade Runner, and 2017’s Blade Runner 2049, are of course a glorious tour de force of story-telling and amazing special effects. The concept for both films came from US author Philip K. Dick’s 1968 novel, Do Androids Dream of Electric Sheep?, in which androids are held to possess no sense of empathy and eventually require killing (“retiring”) when they go rogue. Dick’s original novel is an entertaining but utterly bleak vision of the future, without much latitude to consider a brighter, more optimistic alternative.

But let’s get real here. Fiction is fiction; science is science. For the men and women who work in the technology industry, the notion that myriad Frankenstein monsters can be created from robots and AI technology is assuredly both confused and histrionic. The latest smart technologies might seem to suggest a frightful and fateful next step, a James Cameron Terminator nightmare scenario. It might suggest a dystopian outcome, but rational thought ought to lead us to suppose that this won’t occur, because we have historical precedent on our side. We shouldn’t be drawn to this dystopian idée fixe, because summoning golems and ghouls ignores today’s global arsenal of weapons and the fact that, more than 70 years after Hiroshima, nuclear holocaust has been kept at bay.

By stubbornly pursuing the dystopian nightmare scenario, we deny ourselves the chance to marvel at the technologies that are in fact helping mankind daily. Now frame this thought in terms of human evolution. For our ancient forebears, a beneficial change in physiology might spread across the human race over the course of a hundred thousand years. Today’s version of evolution – the introduction of a compelling new technology – spreads throughout a mass audience in a week or two.

Curiously, for all this light speed evolution mass annihilation remains absent – we live on, progressing, evolving and improving ourselves.

And in the workplace, another domain where our unyielding dealers of dystopia have exercised their thoughts, technology is of course raising a host of concerns about the future. Some of these concerns are based on a number of misconceptions surrounding AI. Machines, for example, are not original thinkers and are unable to set their own goals. And although machine learning systems are able to acquire new information through experience, for the most part they are still fed information to process. Humans are still needed to set goals, provide data to fuel artificial intelligence, and apply critical thinking and judgment. The familiar symbiosis of humans and machines will continue to be salient.

Banish the menace of so-called ‘killer robots’ and AI taking your job, and a newer, fresher world begins to emerge. With this more optimistic mind-set in play, what great feats can be accomplished through the continued interaction between artificial intelligence, robotics and mankind?

Blade Runner 2049 is certainly great entertainment – as Robbie Collin, The Daily Telegraph’s film critic writes, “Roger Deakins’s head-spinning cinematography – which, when it’s not gliding over dust-blown deserts and teeming neon chasms, keeps finding ingenious ways to make faces and bodies overlap, blend and diffuse.” – but great though the art is, isn’t it time to change our thinking and recast the world in a more optimistic light?

——————————————————————————————

Just a word about the film itself. Broadly, director Denis Villeneuve’s done a tremendous job with Blade Runner 2049. One stylistic gripe, though. While one wouldn’t want Villeneuve to direct a slavish homage to Ridley Scott’s original, the alarming switch from the dreamlike techno miasma (most notably, giant nude step-out-the-poster Geisha girls), to Mad Max II Steampunk (the junkyard scenes, complete with a Fagin character) is simply too jarring. I predict that there will be a director’s cut in years to come. Shorter, leaner and sans Steampunk … watch this space!

Author: Chris Moseley, PR Manager, London Business School

cmoseley@london.edu

Tel +44 7511577803

The age of dignity

I just watched a short video of robots doing fetch and carry jobs in an Alibaba distribution centre:

http://uk.businessinsider.com/inside-alibaba-smart-warehouse-robots-70-per-cent-work-technology-logistics-2017-9

There are numerous videos of robots in various companies doing tasks that used to be done by people. In most cases those tasks were dull, menial, drudgery tasks that treated people as machines. Machines should rightly do those tasks. In partnership with robots, AI is also replacing some tasks that used to be done by people. Many are worried about increasing redundancy but I’m not; I see a better world. People should instead be up-skilled by proper uses of AI and robotics and enabled to do work that is more rewarding and treats them with dignity. People should do work that uses their human skills in ways that they find rewarding and fulfilling. People should not have to do work they find boring or demeaning just because they have to earn money. They should be able to smile at work and rest at the end of the day knowing that they have helped others or made the world a better place. If we use AI, robots and people in the right ways, we can build that world.

Take a worker in a call centre. Automation has already replaced humans in most simple transactions like paying a bill, checking a balance or registering a new credit card. It is hard to imagine that anyone ever enjoyed doing that as their job. Now, call centre workers mostly help people in ways that allow them to use their personalities and interpersonal skills, being helpful and pleasant instead of just typing data into a keyboard. It is more enjoyable and fulfilling for the caller, and presumably for the worker too, knowing they genuinely helped someone’s day go a little better. I just renewed my car insurance. I phoned up to cancel the existing policy because it had increased in price too much. The guy at the other end of the call was very pleasant and helpful and met me half way on the price difference, so I ended up staying for another year. His company is a little richer, I was a happier customer, and he had a pleasant interaction instead of having to put up with an irate customer and also the job satisfaction from having converted a customer intending to leave into one happy to stay. The AI at his end presumably gave him the information he needed and the limits of discount he was permitted to offer. Success. In billions of routine transactions like that, the world becomes a little happier and just as important, a little more dignified. There is more dignity in helping someone than in pushing a button.

Almost always, when AI enters a situation, it replaces individual tasks that used to take precious time and that were not very interesting to do. Every time you google something, a few microseconds of AI saves you half a day in a library and all those half days add up to a lot of extra time every year for meeting colleagues, human interactions, learning new skills and knowledge or even relaxing. You become more human and less of a machine. Your self-actualisation almost certainly increases in one way or another and you become a slightly better person.

There will soon be many factories and distribution centres that have few or no people at all, and that’s fine. It reduces the costs of making material goods so average standard of living can increase. A black box economy that has automated mines or recycling plants extracting raw materials and uses automated power plants to convert them into high quality but cheap goods adds to the total work available to add value; in other words it increases the size of the economy. Robots can make other robots and together with AI, they could make all we need, do all the fetching and carrying, tidying up, keeping it all working, acting as willing servants in every role we want them in. With greater economic wealth and properly organised taxation, which will require substantial change from today, people could be freed to do whatever fulfills them. Automation increases average standard of living while liberating people to do human interaction jobs, crafts, sports, entertainment, leading, inspiring, teaching, persuading, caring and so on, creating a care economy. 

Each person knows what they are good at, what they enjoy. With AI and robot assistance, they can more easily make that their everyday activity. AI could do their company set-up, admin, billing, payments, tax, payroll – all the crap that makes being an entrepreneur a pain in the ass and stops many people pursuing their dreams. Meanwhile, they would do all this above a very generous welfare net. Many of us are now talking about the concept of universal basic income, or citizen wage. With ongoing economic growth at the average rate of the last few decades, the global economy will be between two and three times as big as today’s by the 2050s. Western countries could pay every single citizen a basic wage equivalent to today’s average wage, and if they work or run a company, they can earn more.
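The "two to three times as big by the 2050s" figure is just compound growth. A quick sanity check, using illustrative growth rates of 2–3% a year (assumptions for the sketch, not forecasts):

```python
# Sustained growth compounds: at r per year, the economy multiplies by
# (1 + r)^n after n years. Rates here are assumptions for illustration.
for annual_growth in (0.02, 0.025, 0.03):
    factor = (1 + annual_growth) ** 35   # roughly mid-2010s to early 2050s
    print(f"{annual_growth:.1%}/yr over 35 years -> x{factor:.1f}")
```

At 2% a year the economy roughly doubles over 35 years, and at 3% it nearly triples, which is where the range in the text comes from.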

We will have an age where material goods are high quality, work well and are cheap to buy, and recycled in due course to minimise environmental harm. Better materials, improved designs and techniques, higher efficiency and land productivity and better recycling will mean that people can live with higher standards of living in a healthier environment. With a generous universal basic income, they will not have to worry about paying their bills. And doing only work that they want to do that meets their self-actualisation needs, everyone can live a life of happiness and dignity.

Enough of the AI-redundancy alarmism. If we elect good leaders who understand the options ahead, we can build a better world, for everyone. We can make real the age of dignity.

Google and the dangerous pursuit of ‘equality’

The world just got more dangerous, and I’m not talking about N Korea and Trump.

Google just sacked an employee because he openly suggested that men and women, (not all, but some, and there is an overlap, and …) might tend to have different preferences in some areas and that could (but not always, and only in certain cases, and we must always recognize and respect everyone and …) possibly account for some of the difference in numbers of men and women in certain roles (but there might be other causes too and obviously lots of discrimination and …. )

Yes, that’s what he actually said, but with rather more ifs and buts and maybes. He felt the need to wrap such an obvious statement in cotton wool several kilometers thick so as not to offend the deliberately offended, but nonetheless deliberate offense was taken and he is out on his ear.

Now, before you start thinking this is some right-wing rant, I feel obliged to point out just how progressive Futurizon is: 50% of all Futurizon owners and employees are female, all employees and owners have the same voting rights, 50% are immigrants and all are paid exactly the same and have the same size offices, regardless of dedication, ability, nature or quality or volume of output and regardless of their race, religion, beauty, shape, fitness, dietary preferences, baldness, hobbies or political views, even if they are Conservatives. All Futurizon offices are safe zones where employees may say anything they want of any level of truth, brilliance or stupidity and expect it to be taken as absolute fact and any consequential emotional needs to be fully met. No employee may criticize any other employee’s mouse mat, desk personalisation or screen wallpaper for obvious lack of taste. All employees are totally free to do anything they choose 100% of the time and can take as much leave as they want. All work is voluntary. All have the same right to respectfully request any other employee to make them coffee, tea or Pimms. All employees of all genders real or imagined are entitled to the same maternity and paternity rights, and the same sickness benefits, whether ill or not. In fact, Futurizon does not discriminate on any grounds whatsoever. We are proud to lead the world in non-discrimination. Unfortunately, our world-leading terms of employment mean that we can no longer afford to hire any new employees.

However, I note that Google has rather more power and influence than Futurizon so their policies count more. They appear (Google also has better lawyers than I can afford, so I must stress that all that follows is my personal opinion) to have firmly decided that diversity is all-important and they seem to want total equality of outcome. The view being expressed not just by Google but by huge swathes of angry protesters seems to be that any difference in workforce representation from that of the general population must arise from discrimination or oppression so must be addressed by positive action to correct it. There are apparently no statistically discernible differences in behavior between genders, or in job or role preference, so any you may have noticed over the time you’ve been alive is just your prejudice. Google says they fully support free speech and diversity of views, but expression of views is apparently only permitted as long as those views are authorized, on penalty of dismissal.

So unless I’m picking up totally the wrong end of the stick here, and I don’t do that often, only 13% of IT engineers are women, but internal policies must ensure that the proportion rises to 50%, whether women want to do that kind of work or not. In fact, nobody may question whether as many women want to work as IT engineers as men; it must now be taken as fact. By extension, since more women currently work in marketing, HR and PR, they must be substituted by men via positive action programs until men fill 50% of those roles. Presumably similar policies must also apply in medical bays for nursing and other staff there, and in construction teams for their nice new buildings. Ditto all other genders, races, religions; all groups must be protected and equalized to USA population proportions, apparently except those that don’t claim to hold sufficiently left-wing views, in which case it is seemingly perfectly acceptable to oppress, ostracize and even expel them.

In other words, freedom of choice and difference in ability, and more importantly freedom from discrimination, must be over-ruled in favor of absolute equality of diversity, regardless of financial or social cost, or impact on product or service quality. Not expressing full and enthusiastic left-wing compliance is seemingly just cause for dismissal.

So, why does this matter outside Google? Well, AI is developing very nicely. In fact, Google is one of the star players in the field right now. It is Google that will essentially decide how much of the AI around us is trained, how it learns, what it learns, what ‘knowledge’ it has of the world. Google will pick the content the AI learns from, and overrule or reeducate it if it draws any ‘wrong’ conclusions about the world, such as that more women than men want to be nurses or work in HR, or that more men than women want to be builders or engineers. A Google AI must presumably believe that the only differences between men and women are physical, unless their AI is deliberately excluded from the loudly declared corporate values and belief sets.

You should be very worried. Google’s values really matter. They have lots of influence on some of the basic tools of everyday life. Even outside their company, their AI tools and approaches will have strong influence on how other AI develops, determining operating systems and platforms, languages, mechanisms, interfaces, filters, even prejudices and that reach and influence is likely to increase. Their AI may well be in many self-driving cars, and if they have to make life or death decisions, the underlying value assumptions must feature in the algorithms. Soon companies will need AI that is more emotionally compliant. AI will use compliments or teasing or seduction or sarcasm or wit as marketing tools as well as just search engine positioning. Soon AI will use highly expressive faces with attractive voices, with attractive messages, tailored to appeal to you by pandering to your tastes and prejudices while thinking something altogether different. AI might be the person at the party that is all smiles and compliments, before going off to tell everyone else how awful it thinks you are. If you dare to say something not ‘authorized’, the ultra-smart AI all around you might treat you condescendingly, making you feel ashamed, ostracized, a dinosaur. Then it might secretly push you down a few pages in search results, or put a negative spin on text summaries about you, or exclude you from recommendations. Or it might do all the secret stuff while pretending it thinks you’re fantastic. Internal cultural policies in companies like Google today could soon be external social engineering to push the left-wing world the IT industry believes in – it isn’t just Google; Facebook and Twitter are also important and just as Left, though Amazon, Samsung, IBM and other AI players are less overtly politically biased, so far at least. 
Left-wing policies generally cost a lot more, but Google and Facebook will presumably still expect other companies and people to pay the taxes that fund it all. As their female staff gear up to fight them over pay differences between men and women for similar jobs, it often seems that Google’s holier-than-thou morality doesn’t quite make it as far as their finances.

Then it really starts being fun. We’ll soon have bacteria that can fabricate electronic circuits within themselves. Soon they’ll be able to power them too, making the concept of smart yogurt a reality. These bacteria could also have nanotechnology flagella to help them get around. We’ll soon have bacterial spies all over our environment, even on our skin, intercepting electronic signals that give away our thoughts. They’ll bring in data on everything that is said, everything that everyone even thinks or feels. Those bacteria will be directly connected into AI; in fact, they’ll be part of it. They’ll be able to change things, to favor or punish according to whether they like what someone believes in or how they behave.

It isn’t just right-wing extremists that need to worry. I’m apparently Nouveau Left – I score slightly left of center on political profiling tests – but I’m worried. A lot of this PC stuff seems extreme to me, sometimes just nonsense. Maybe it is, or maybe I should be lefter. But it’s not my choice. I don’t make the rules. Companies like Google make the rules; they even run the AI ethics groups. They decide much of what people see online, and even the meaning of the words. It’s very 1984-ish.

The trouble with the ‘echo chambers’ we heard about is that they soon normalize views to the loudest voices in those groups, and those don’t tend to be the moderates. We can expect views to drift further towards the extremes, not back towards the center. You probably aren’t left enough either. You should also be worried.

AI Activism Part 2: The libel fields

This follows directly from my previous blog on AI activism, but you can read that later if you haven’t already. Order doesn’t matter.

https://timeguide.wordpress.com/2017/05/29/ai-and-activism-a-terminator-sized-threat-targeting-you-soon/

Older readers will remember an emotionally powerful 1984 film called The Killing Fields, set against the backdrop of the Khmer Rouge’s activity in Cambodia, aka the Communist Party of Kampuchea. Under Pol Pot, the Cambodian genocide of 2 to 3 million people was part of a social engineering policy of de-urbanization. People were tortured and murdered (some in the ‘killing fields’ near Phnom Penh) for having connections with the former government or with foreign governments, for being the wrong race, for being ‘economic saboteurs’, or simply for being professionals or intellectuals.

You’re reading this, therefore you fit in at least the last of these groups and probably others, depending on who’s making the lists. Most people don’t read blogs but you do. Sorry, but that makes you a target.

As our social divide increases at an accelerating speed throughout the West, so the choice of weapons is moving from sticks and stones or demonstrations towards social media character assassination, boycotts and forced dismissals.

My last blog showed how various technology trends are coming together to make it easier and faster to destroy someone’s life and reputation. Some of that stuff I was writing about 20 years ago, such as virtual communities lending hardware to cyber-warfare campaigns, other bits have only really become apparent more recently, such as the deliberate use of AI to track personality traits. This is, as I wrote, a lethal combination. I left a couple of threads untied though.

Today, the big AI tools are owned by the big IT companies. They also own the big server farms on which the power to run the AI exists. The first thread I neglected to mention is that Google have open-sourced much of their AI. There are lots of good things about that, but for the purposes of this blog, it means that the AI tools required for AI activism will also be largely public, and pressure groups and activists can use them as a starting point for any more advanced tools they want to make, or just use them off-the-shelf.

Secondly, it is fairly easy to link computers together to provide an aggregated computing platform. The SETI project was the first major proof of concept of that, ages ago. Today, we take peer-to-peer networks for granted. When the activist group is ‘the liberal left’ or ‘the far right’, that adds up to a large number of machines, so the power available for any campaign is notionally very large. Harnessing it doesn’t need IT skill from contributors. All they’d need to do is click a box on an email or tweet asking for their support for a campaign.
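To make the point concrete, here is a minimal sketch (purely illustrative, with invented names) of the aggregation step: carve a job into chunks and farm them out to workers, which stand in for volunteered peers. Real peer networks add discovery and fault tolerance, but nothing conceptually harder.

```python
from concurrent.futures import ThreadPoolExecutor

def crunch(chunk):
    # Stand-in for whatever work a volunteered peer would contribute
    return sum(x * x for x in chunk)

def run_distributed(data, peers=4):
    # Carve the job into per-peer chunks, farm them out, combine the results
    chunks = [data[i::peers] for i in range(peers)]
    with ThreadPoolExecutor(max_workers=peers) as pool:
        return sum(pool.map(crunch, chunks))

print(run_distributed(list(range(1000))))  # same answer as doing it on one machine
```

The contributor-facing part really could be a single click; all the coordination lives in code like this on the organizer’s side.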

In our new ‘post-fact’, fake-news era, all sides are willing and able to use social media and the infamous MSM to damage the other side. Fakes are becoming better. The latest AI can imitate your voice, and a chat-bot can decide what to say once other AI has recognized what someone has said and analysed the opportunities to ruin your relationship with them by spoofing you. Today, that might not be quite credible. Give it a couple more years and you won’t be able to tell. Next-generation AI will be able to spoof your face doing the talking too.

AI can (and will) evolve. Deep learning researchers have been looking deeply at how the brain thinks, how to make neural networks learn better and to think better, how to design the next generation to be even smarter than humans could have designed it.

As my friend and robotic psychiatrist Joanne Pransky commented after my first piece, “It seems to me that the real challenge of AI is the human users, their ethics and morals (Their ‘HOS’ – Human Operating System).” Quite! Each group will indoctrinate their AI to believe their ethics and morals are right, and that the other lot are barbarians. Even evolutionary AI is not immune to religious or ideological bias as it evolves. Superhuman AI will be superhuman, but might believe even more strongly in a cause than humans do. You’d better hope the best AI is on your side.

AI can put articles, blogs and tweets out there, pretending to come from you or your friends, colleagues or contacts. They can generate plausible-sounding stories of what you’ve done or said, spoof emails in fake accounts using your ID to prove them.

So we’ll likely see activist AI armies set against each other, running on peer-to-peer processing clouds, encrypted to hell and back to prevent dismantling. We’ve all thought about cyber-warfare, but we usually only think about viruses or keystroke recorders, or more lately, ransomware. These will still be used as small weapons in future cyber-warfare, but while losing files or a few bucks from an account is a real nuisance, losing your reputation is another matter entirely: with it smeared all over the web, all your contacts told what you’ve supposedly done or said and shown all the ‘evidence’, there is absolutely no way you could possibly explain your way convincingly out of every one of those instances. Mud does stick, and if you throw tons of it, even if most is wiped off, much will remain. Trust is everything, and enough doubt cast will eventually erode it.

So, we’ve seen many times through history the damage people are willing to do to each other in pursuit of their ideology. The Khmer Rouge had their killing fields. As the political divide increases and battles become fiercer, the next 10 years will give us The Libel Fields.

You are an intellectual. You are one of the targets.

Oh dear!

 

AI and activism, a Terminator-sized threat targeting you soon

You should be familiar with the Terminator scenario. If you aren’t, then watch one of the Terminator series of films, because you really should be aware of it. But there is another issue related to AI that is arguably as dangerous as the Terminator scenario, far more likely to occur, and a threat in the near term. What’s even more dangerous is that in spite of that, I’ve never read anything about it anywhere yet. It seems to have flown under our collective radar and is already close.

In short, my concern is that AI is likely to become a heavily armed Big Brother. It only requires a few components to come together that are already well in progress. Read this, and if you aren’t scared yet, read it again until you understand it 🙂

Already, social media companies are experimenting with using AI to identify and delete ‘hate’ speech. Various governments have asked them to do this, and since they also get frequent criticism in the media because some hate speech still exists on their platforms, it seems quite reasonable for them to try to control it. AI clearly offers potential to offset the huge numbers of humans otherwise needed to do the task.

Meanwhile, AI is already used very extensively by the same companies to build personal profiles on each of us, mainly for advertising purposes. These profiles are already alarmingly comprehensive, and increasingly capable of cross-linking between our activities across multiple platforms and devices. Latest efforts by Google attempt to link eventual purchases to clicks on ads. It will be just as easy to use similar AI to link our physical movements and activities and future social connections and communications to all such previous real world or networked activity. (Update: Intel intend their self-driving car technology to be part of a mass surveillance net, again, for all the right reasons: http://www.dailymail.co.uk/sciencetech/article-4564480/Self-driving-cars-double-security-cameras.html)
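The cross-linking step itself is trivially easy. A hypothetical sketch (toy data, invented ids, not any real company’s pipeline) of joining an ad-click log to a purchase log on a shared user id, which is the core of attributing purchases to ads:

```python
def attribute(clicks, purchases):
    """Join ad clicks to later purchases by user id.
    clicks: list of (user, ad); purchases: list of (user, item)."""
    bought = dict(purchases)  # index purchases by user for fast lookup
    return [(user, ad, bought[user]) for user, ad in clicks if user in bought]

clicks = [("u1", "shoe_ad"), ("u2", "phone_ad")]
purchases = [("u1", "shoes"), ("u3", "book")]
print(attribute(clicks, purchases))  # [('u1', 'shoe_ad', 'shoes')]
```

Swap ‘ad clicks’ and ‘purchases’ for movements, contacts or messages and the same one-line join links any two activity streams that share an identifier.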

Governments, though necessarily secretive about their activities, also want personal profiles on their citizens, always justified by crime and terrorism control. If they can’t do this directly, they can do it via legislation and acquisition of social media or ISP data.

Meanwhile, other experiences with AI chat-bots learning to mimic human behaviors have shown how easily AI can be gamed by human activists, hijacking or biasing learning phases for their own agendas. Chat-bots themselves have become ubiquitous on social media and are often difficult to distinguish from humans. Meanwhile, social media is becoming more and more important throughout everyday life, with provably large impacts in political campaigning and throughout all sorts of activism.

Meanwhile, some companies have already started using social media monitoring to police their own staff, in recruitment, during employment, and sometimes in dismissal or other disciplinary action. Other companies have similarly started monitoring social media activity of people making comments about them or their staff. Some claim to do so only to protect their own staff from online abuse, but there are blurred boundaries between abuse, fair criticism, political difference or simple everyday opinion or banter.

Meanwhile, activists increasingly use social media to force companies to sack a member of staff they disapprove of, or drop a client or supplier.

Meanwhile, end to end encryption technology is ubiquitous. Malware creation tools are easily available.

Meanwhile, successful hacks into large company databases become more and more common.

Linking these various elements of progress together, how long will it be before activists are able to develop standalone AI entities and heavily encrypt them before letting them loose on the net? Not long at all, I think. These AIs would search and police social media, spotting people who conflict with the activist agenda. Occasional hacks of corporate databases will provide names, personal details, contacts. Even without hacks, analysis of years of publicly available data, everyone’s tweets and other social media entries, will provide the lists of people who have ever done or said anything the activists disapprove of.
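The ‘search and police’ step needs remarkably little code, which is part of why I think this is near-term. A hypothetical sketch (toy phrases, not any real group’s list) of flagging users whose archived posts ever matched a disapproved-terms list:

```python
# Hypothetical target list; a real campaign would generate one automatically
DISAPPROVED = {"wrongthink", "heresy"}

def flag_users(posts):
    """posts: list of (user, text); returns the users who ever matched."""
    flagged = set()
    for user, text in posts:
        words = set(text.lower().split())
        if words & DISAPPROVED:      # any overlap with the target list
            flagged.add(user)
    return flagged

archive = [
    ("alice", "I rather enjoy heresy on Tuesdays"),
    ("bob", "Nothing to see here"),
]
print(flag_users(archive))  # {'alice'}
```

Real systems would use semantic matching rather than keywords, but the shape of the pipeline, scan the archive, emit a list of names, is exactly this simple.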

Once targets are identified, the AI would automatically activate armies of chat-bots, fake news engines and automated email campaigns against them, with coordinated malware attacks directly on each person and indirect attacks via their employers, friends, contacts, government agencies, customers and suppliers, to do as much damage as possible to that person’s interests.

Just look at the everyday news already about alleged hacks and activities during elections and referendums by other regimes, hackers or pressure groups. Scale that up and realize that the cost of running advanced AI is negligible.

With the very many activist groups around, many driven with extremist zeal, very many people will find themselves in the sights of one or more of them. AI will be able to monitor everyone, all the time. AI will be able to target all of them at once to destroy each of their lives: anonymous, highly encrypted, hidden, roaming from server to server to avoid detection and annihilation, and once released, impossible to retrieve. The ultimate activist weapon, one that carries on the fight even if the activist is locked away.

We know for certain the depths and extent of activism, the huge polarization of society, the increasingly fierce conflict between left and right, between sexes, races, ideologies.

We know about all the nice things AI will give us with cures for cancer, better search engines, automation and economic boom. But actually, will the real future of AI be harnessed to activism? Will deliberate destruction of people’s everyday lives via AI be a real problem that is almost as dangerous as Terminator, but far more feasible and achievable far earlier?

AI is mainly a stimulative technology that will create jobs

AI has been getting a lot of bad press the last few months from doom-mongers predicting mass unemployment. Together with robotics, AI will certainly help automate a lot of jobs, but it will also create many more and will greatly increase quality of life for most people. By massively increasing the total effort available to add value to basic resources, it will increase the size of the economy and if that is reasonably well managed by governments, that will be for all our benefit. Those people who do lose their jobs and can’t find or create a new one could easily be supported by a basic income financed by economic growth. In short, unless government screws up, AI will bring huge benefits, far exceeding the problems it will bring.

Over the last 20 years, I’ve often written about the care economy, where the more advanced technology becomes, the more it allows us to concentrate on those skills we consider fundamentally human – caring, interpersonal skills, direct human contact services, leadership, teaching, sport, the arts, the sorts of roles that need empathic and emotional skills, or human experience. AI and robots can automate intellectual and physical tasks, but they won’t be human, and some tasks require the worker to be human. Also, in most careers, it is obvious that people focus less and less on those automatable tasks as they progress into the most senior roles. Many board members in big companies know little about the industry they work in compared to most of their lower-paid workers, but they can do that job because being a board member is often more about relationships than intellect.

AI will nevertheless automate many tasks for many workers, and that will free up much of their time, increasing their productivity, which means we need fewer workers to do those jobs. On the other hand, Google searches that take a few seconds once took half a day of research in a library. We all do more with our time now thanks to such simple AI, and although all those half-days saved would add up to a considerable amount of saved work, and many full-time job equivalents, we don’t see massive unemployment. We’re all just doing better work. So we can’t necessarily conclude that increasing productivity will automatically mean redundancy. It might just mean that we will do even more, even better, like it has so far. Or at least, the volume of redundancy might be considerably less. New automated companies might never employ people in those roles and that will be straight competition between companies that are heavily automated and others that aren’t. Sometimes, but certainly not always, that will mean traditional companies will go out of business.

So although we can be sure that AI and robots will bring some redundancy in some sectors, I think the volume is often overestimated and often it will simply mean rapidly increasing productivity, and more prosperity.

But what about AI’s stimulative role? Jobs created by automation and AI. I believe this is what is being greatly overlooked by doom-mongers. There are three primary areas of job creation:

One is in building or programming robots, maintaining them, writing software, or teaching them skills, along with all the associated new jobs in supporting industry and infrastructure change. Many such jobs will be temporary, lasting a decade or so as machines gradually take over, but that transition period is extremely valuable and important. If anything, it will be a lengthy period of extra jobs and the biggest problem may well be filling those jobs, not widespread redundancy.

Secondly, AI and robots won’t always work directly with customers. Very often they will work via a human intermediary. A good example is in medicine. AI can make better diagnoses than a GP, and could be many times cheaper, but unless the patient is educated, very disciplined and knowledgeable, it also needs a human with human skills to talk to the patient and make sure they put in correct information. How many times have you looked at an online medical diagnosis site and concluded you have every disease going? It is hard to be honest sometimes when you are free to interpret every possible symptom any way you want; it is much easier to want to be told that you have a special case of wonderful-person syndrome. Having to explain to a nurse or technician what is wrong forces you to be more honest about it. They can ask you similar questions, but your answers will need to be moderated and sensible, or you know they might challenge you and make you feel foolish. You will get a good diagnosis because the input data will be measured, normalized and scaled appropriately for the AI using it. When you call a call center and talk to a human, invariably they are already the front end of a massive AI system. Making that AI bigger and better won’t replace them, just mean that they can deal with your query better.
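As a toy illustration of that ‘measured, normalized and scaled’ input step (field names entirely invented), structured questioning from a human intermediary gives the AI bounded, comparable numbers rather than free-form self-diagnosis:

```python
def normalize_symptoms(raw):
    """Clamp self-reported 0-10 severity scores into the 0-1 range a
    diagnostic model expects, dropping fields it wasn't trained on."""
    known = {"pain", "fatigue", "fever"}   # hypothetical model inputs
    cleaned = {}
    for field, score in raw.items():
        if field not in known:
            continue                        # ignore invented complaints
        score = max(0, min(10, score))      # clamp exaggerated answers
        cleaned[field] = score / 10.0       # scale to the model's input range
    return cleaned

print(normalize_symptoms({"pain": 37, "fever": 6, "charisma": 11}))
# {'pain': 1.0, 'fever': 0.6}
```

The nurse or technician is, in effect, this function with social skills attached: rejecting nonsense fields and pulling wild answers back into range before the model ever sees them.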

Thirdly, and I believe most importantly of all, AI and automation will remove many of the barriers that stop people being entrepreneurs. How many business ideas have you had and not bothered to implement because it was too much effort or cost or both for too uncertain a gain? 10? 100? 1000? Suppose you could just explain your idea to your home AI and it did it all for you. It checked the idea, made a model, worked out how to make it work or whether it was just a crap idea. It then explained to you what the options were and whether it would be likely to work, and how much you might earn from it, and how much you’d actually have to do personally and how much you could farm out to the cloud. Then AI checked all the costs and legal issues, did all the admin, raised the capital by explaining the idea and risks and costs to other AIs, did all the legal company setup, organised the logistics, insurance, supply chains, distribution chains, marketing, finance, personnel, ran the payroll and tax. All you’d have to do is some of the fun work that you wanted to do when you had the idea and it would find others or machines or AI to fill in the rest. In that sort of world, we’d all be entrepreneurs. I’d have a chain of tea shops and a fashion empire and a media empire and run an environmental consultancy and I’d be an artist and a designer and a composer and a genetic engineer and have a transport company and a construction empire. I don’t do any of that because I’m lazy and not at all entrepreneurial, and my ideas all ‘need work’ and the economy isn’t smooth and well run, and there are too many legal issues and regulations and it would all be boring as hell. If we automate it and make it run efficiently, and I could get as much AI assistance as I need or want at every stage, then there is nothing to stop me doing all of it. 
I’d create thousands of jobs, and so would many other people, and there would be more jobs than we have people to fill them, so we’d need to build even more AI and machines to fill the gaps caused by the sudden economic boom.

So why the doom? It isn’t justified. The bad news isn’t as bad as people make out, and the good news never gets a mention. Adding it together, AI will stimulate more jobs, create a bigger and a better economy, we’ll be doing far more with our lives and generally having a great time. The few people who will inevitably fall through the cracks could easily be financed by the far larger economy and the very generous welfare it can finance. We can all have the universal basic income as our safety net, but many of us will be very much wealthier and won’t need it.

 

Chat-bots will help reduce loneliness, a bit

Amazon is really pushing its Echo and Dot devices at the moment and some other companies also use Alexa in their own devices. They are starting to gain avatar front ends too. Microsoft has their Cortana transforming into Zo, Apple has Siri’s future under wraps for now. Maybe we’ll see Siri in a Sari soon, who knows. Thanks to rapidly developing AI, chatbots and other bots have also made big strides in recent years, so it’s obvious that the two can easily be combined. The new voice control interfaces could become chatbots to offer a degree of companionship. Obviously that isn’t as good as chatting to real people, but many, very many people don’t have that choice. Loneliness is one of the biggest problems of our time. Sometimes people talk to themselves or to their pet cat, and chatting to a bot would at least get a real response some of the time. It goes further than simple interaction though.

I’m not trying to understate the magnitude of the loneliness problem, and it can’t solve it completely of course, but I think it will be a benefit to at least some lonely people in a few ways. Simply having someone to chat to will already be of some help. People will form emotional relationships with bots that they talk to a lot, especially once they have a visual front end such as an avatar. It will help some to develop and practice social skills if that is their problem, and for many others who feel left out of local activity, it might offer them real-time advice on what is on locally in the next few days that might appeal to them, based on their conversations. Talking through problems with a bot can also help almost as much as doing so with a human. In ancient times when I was a programmer, I’d often solve a bug by trying to explain how my program worked, and in doing so I would see the bug myself. Explaining it to a teddy bear would have been just as effective; the chat was just a vehicle for checking through the logic from a new angle. The same might apply to interactive conversation with a bot. Sometimes lonely people can talk too much about problems when they finally meet people, and that can act as a deterrent to future encounters, so that barrier would also be reduced. All in all, having a bot might make lonely people more able to get and sustain good quality social interactions with real people, and make friends.

Another benefit that has nothing to do with loneliness is that giving a computer voice instructions forces people to think clearly and phrase their requests correctly, just like writing a short computer program. In a society where so many people don’t seem to think very clearly, or can’t express what they want clearly even when they do, this will give some much-needed training.
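The discipline involved really is like writing a one-line program. A hypothetical sketch of a strict assistant that only accepts well-formed verb-plus-object requests and makes you rephrase anything vague:

```python
def parse_command(utterance):
    """Accept only well-formed 'verb object' requests, mirroring how a
    strict voice interface forces users to phrase things clearly."""
    verbs = {"play", "dim", "set"}          # hypothetical supported actions
    words = utterance.lower().split()
    if len(words) >= 2 and words[0] in verbs:
        return {"action": words[0], "target": " ".join(words[1:])}
    return None  # vague request: the assistant asks you to try again

print(parse_command("dim the lights"))
# {'action': 'dim', 'target': 'the lights'}
print(parse_command("make it nicer in here somehow"))  # None
```

Every rejected `None` is a tiny lesson in saying precisely what you mean, which is the training effect described above.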

Chatbots could also offer challenges to people’s thinking, even to help counter extremism. If people make comments that go against acceptable social attitudes or against known facts, a bot could present the alternative viewpoint, probably more patiently than another human who finds such viewpoints frustrating. I’d hate to see this as a means to police political correctness, though it might well be used in such a way by some providers, but it could improve people’s lack of understanding of even the most basic science, technology, culture or even politics, so has educational value. Even if it doesn’t convert people, it might at least help them to understand their own views more clearly and be better practiced at communicating their arguments.

Chat-bots could make a significant contribution to society. They are just machines, but those machines are tools that can help people, and society as a whole, function more effectively.

 

AI presents a new route to attack corporate value

As AI increases in corporate, social, economic and political importance, it is becoming a big target for activists, and I think there are too many vulnerabilities. I think we should be seeing a lot more articles than we are about what developers are doing to guard against deliberate misdirection or corruption, and there is already far too much enthusiasm for making AI open source, thereby giving mischief-makers the means to identify weaknesses.

I’ve written hundreds of times about AI and believe it will be a benefit to humanity if we develop it carefully. Current AI systems are not vulnerable to the Terminator scenario, so we don’t have to worry about that happening yet. AI can’t yet go rogue and decide to wipe out humans by itself, though future AI could, so we’ll soon need to take care with every step.

AI can be used in multiple ways by humans to attack systems.

First and most obvious, it can be used to enhance malware such as trojans or viruses, or to optimize denial-of-service attacks. AI-enhanced security systems already battle against adaptive malware, and AI can probe systems in complex ways to find vulnerabilities that would take longer to discover via manual inspection. As well as AI attacking operating systems, it can also attack other AI by providing inputs that bias its learning and decision-making, giving AI ‘fake news’, to use current terminology. We don’t know the full extent of secret military AI.
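Biasing a system’s learning with poisoned inputs is easy to show in miniature. A toy sketch (not any real attack, and far simpler than real learning) of a trivial ‘classifier’ that learns a threshold from its training data, and how a handful of mislabeled extremes drags its decisions:

```python
def train_threshold(samples):
    """Learn to separate two classes by the midpoint of their class means:
    a stand-in for real learning, to show how poisoned data shifts it."""
    lo = [x for x, label in samples if label == 0]
    hi = [x for x, label in samples if label == 1]
    return (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2

clean = [(1, 0), (2, 0), (8, 1), (9, 1)]
poisoned = clean + [(30, 0)] * 3      # attacker injects mislabeled extremes

t_clean = train_threshold(clean)      # midpoint of the class means
t_bad = train_threshold(poisoned)     # threshold dragged far upward
print(8 > t_clean, 8 > t_bad)         # same input, opposite verdicts
```

Real models are harder to move, but the principle scales: whoever controls enough of the training data controls the decisions.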

Computer malware will grow in scope to address AI systems to undermine corporate value or political campaigns.

A new route to attacking corporate AI, and hence the value in that company that relates in some way to it, is already starting to appear though. As companies such as Google try out AI-driven cars and others try out pavement/sidewalk delivery drones, mischievous people are already developing devious ways to misdirect or confuse them. Kids will soon have such activity as a hobby. Deliberate deception of AI is much easier when people know how it works, and although it’s nice for AI companies to open-source their AI for others to build on, that does rather steer future systems towards a mono-culture of vulnerability types. A trick that works against one future AI in one industry might well be adaptable to another use in another industry with a little devious imagination. Let’s take an example.

If someone builds a robot to deliberately step in front of a self-driving car every time it starts moving again, that might bring traffic to a halt, but police could quickly confiscate the robot, and they are expensive, a strong deterrent even if the pranksters are hiding and can’t be found. Cardboard cutouts might be cheaper though, even ones with hinged arms to look a little more lifelike. A social media orchestrated campaign against a company using such cars might involve thousands of people across a country or city deliberately waiting until the worst time to step out into a road when one of their vehicles comes along, thereby creating a sort of denial of service attack with that company seen as the cause of massive inconvenience for everyone. Corporate value would obviously suffer, and it might not always be very easy to circumvent such campaigns.

Similarly, the wheeled delivery drones we’ve been told to expect delivering packages any time soon will have cameras to avoid bumping into objects, little old ladies, other people, cats and dogs – and also into cardboard cutouts, carefully crafted miniature tank traps, diversions and small roadblocks that people and pets can easily step over but drones can’t, built by the local kids from a few twigs or cardboard to a design that went viral that day. A few campaigns like that, with the cold pizzas and missing packages that result, could severely damage corporate value.

AI behind websites might also be similarly defeated. An early experiment in making a Twitter chat-bot that learns how to tweet by itself was quickly encouraged by mischief-makers to start tweeting offensively. If people have some idea how an AI is making its decisions, they will attempt to corrupt or distort it to their own ends. If it is heavily reliant on open source AI, then many of its decision processes will be known well enough for activists to develop appropriate corruption tactics. It’s not too early to predict that the proposed AI-based attempts by Facebook and Twitter to identify and defeat ‘fake news’ will fall right into the hands of people already working out how to use them to smear opposition campaigns with such labels.
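The weakness that Twitter experiment exposed is easy to reproduce in miniature: a toy bot (entirely hypothetical, far simpler than the real thing) that learns replies from whatever users say and parrots the most frequent one, so sheer coordinated repetition wins:

```python
from collections import Counter

class MimicBot:
    """Learns candidate replies from users and parrots the most frequent,
    so a coordinated group can steer it just by repeating themselves."""
    def __init__(self):
        self.heard = Counter()

    def learn(self, phrase):
        self.heard[phrase] += 1

    def reply(self):
        return self.heard.most_common(1)[0][0]

bot = MimicBot()
for phrase in ["hello there", "nice weather"]:
    bot.learn(phrase)            # a little genuine conversation
for _ in range(50):
    bot.learn("something offensive")   # coordinated mischief
print(bot.reply())               # 'something offensive'
```

Any learner whose objective is ‘sound like the users’ inherits this flaw; knowing the decision process (here, raw frequency) tells the attacker exactly what to do.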

It will be a sort of arms race of course, but I don’t think we’re seeing enough about this in the media. There is a great deal of hype about the various AI capabilities, a lot of doom-mongering about job cuts (and a lot of reasonable warnings about job cuts too) but very little about the fight back against AI systems by attacking them on their own ground using their own weaknesses.

That looks to me awfully like there isn’t enough awareness of how easily they can be defeated by deliberate mischief or activism, and I expect to see some red faces and corporate account damage as a result.

PS

This article appeared yesterday that also talks about the bias I mentioned: https://techcrunch.com/2016/12/10/5-unexpected-sources-of-bias-in-artificial-intelligence/

Since I wrote this blog, I was asked via Linked-In to clarify why I said that Open Source AI systems would have more security risk. Here is my response:

I wasn’t intending to heap fuel on a dying debate (though since the current debate looks the same as in the early 1990s, it is dying slowly). I like and use open source too. I should have explained my reasoning better, so to facilitate open checking of it: in regular (algorithmic) code, the programming error rate should be similar either way, so increasing the number of people checking should cancel out the risk from more contributors, and there should be no a priori difference between open and closed source. However:

In deep learning, obscurity reappears via neural net weightings, which are far less intuitive to humans than source code. That provides a tempting hiding place.

AI foundations are vulnerable to group-think, where team members share similar world models. These prejudices will affect the nature of open source (OS) and closed source (CS) code alike, and result in AI with inherent and subtle judgment biases which will be less easy to spot than bugs and more visible to people with alternative world models. Those people are more likely to exist in an OS pool than a CS pool, and more likely to be opponents, so will not share their findings.

Deep learning may show the equivalent of political (or masculine and feminine) bias. As well as encouraging group-think, that also distorts the distribution of biases, and therefore the cancelling out of errors can no longer be assumed.

Human factors in defeating security often work better than exploiting software bugs. Some deep learning AI is designed to mimic humans as closely as possible in thinking and in interfacing, and I suspect that might also make it more vulnerable to meta-human-factor attacks. Again, exposure to different and diverse cultures will produce a non-uniform distribution of error/bias spotting, disclosure and exploitation.

Deep learning will become harder for humans to understand as it develops and becomes more machine dependent. That will amplify the above weaknesses. Think of optical illusions that greatly distort human perception and think of similar in advanced AI deep learning. Errors or biases that are discovered will become more valuable to an opponent since they are less likely to be spotted by others, increasing their black market exploitation risk.
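The weight-obscurity point can be illustrated with a toy network. Even in a tiny hand-built net, nothing in the raw numbers announces what it computes (here, XOR); a deliberately planted behaviour would be just as anonymous among the thousands of trained decimals in a real model:

```python
def step(x):
    return 1 if x > 0 else 0

# A tiny hand-built network whose weights happen to compute XOR.
# Nothing in these raw numbers announces what they do; in a real trained
# net they would be thousands of arbitrary-looking decimals.
W_HIDDEN = [(1.0, 1.0, -0.5), (1.0, 1.0, -1.5)]   # (w1, w2, bias) per unit
W_OUT = (1.0, -1.0, -0.5)

def net(x1, x2):
    h = [step(w1 * x1 + w2 * x2 + b) for w1, w2, b in W_HIDDEN]
    w1, w2, b = W_OUT
    return step(w1 * h[0] + w2 * h[1] + b)

print([net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

Source code review catches a malicious `if`; nothing comparable catches a malicious weight, which is why I argue open sourcing the weights buys less safety than open sourcing algorithms does.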

I have not been a programmer for over 20 years and am no security expert so my reasoning may be defective, but at least now you know what my reasoning was and can therefore spot errors in it.