Category Archives: Computing

Fake AI

Much of the impressive recent progress in AI has been in the field of neural networks, which attempt to mimic some of the techniques used in natural brains. They can be very effective, but they need to be trained, and that usually means showing the network some data, then using back-propagation to adjust the weightings on the many neurons, layer by layer, to bring the output closer to what is wanted. This is repeated with large amounts of data and the network gradually gets better. Neural networks can often learn extremely quickly and outperform humans. Early industrial uses managed to sort tomatoes by ripeness faster and better than humans. In the decades since, they have helped in medical diagnosis and voice recognition, in detecting suspicious behavior at airports, and in very many everyday processes based on spotting patterns.
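
For the technically curious, here is a toy sketch of that training loop, written in Python with numpy and invented data. Real networks are vastly bigger and use frameworks, but the adjust-the-weightings-layer-by-layer idea is the same:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))          # toy input data
    y = (X.sum(axis=1) > 0).astype(float)  # toy target labels

    W1 = rng.normal(scale=0.5, size=(4, 8))  # first-layer weights
    W2 = rng.normal(scale=0.5, size=(8, 1))  # second-layer weights
    lr = 0.1                                 # learning rate

    for epoch in range(500):
        # forward pass through the two layers
        h = np.tanh(X @ W1)                  # hidden-layer activations
        p = 1 / (1 + np.exp(-(h @ W2)))      # output probabilities
        # backward pass: propagate the error layer by layer
        err = p - y[:, None]                 # gap between output and target
        grad_W2 = h.T @ err / len(X)
        grad_h = (err @ W2.T) * (1 - h**2)   # tanh derivative
        grad_W1 = X.T @ grad_h / len(X)
        # adjust the weightings to better match the targets
        W2 -= lr * grad_W2
        W1 -= lr * grad_W1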

Very recently, neural nets have started to move into more controversial areas. One study found racial correlations with user-assessed beauty when analysing photographs, resulting in the backlash you’d expect and a new debate on biased AI or AI prejudice. Another recent demonstration was able to identify gay people just by looking at photos, with better than 90% accuracy, an accuracy very few humans could claim. Both of these studies were in fields directly applicable to marketing and advertising, but some people might find it offensive that such questions were even asked. It is reasonable to imagine that hundreds of other potential queries have been self-censored from research because they might invite controversy if they came up with the ‘wrong’ result. In today’s society, very many areas are sensitive. So what will happen?

If this progress in AI had happened 100 years ago, or even 50, it might have been easier, but in our hypersensitive world today, with its self-sanctified ‘social justice warriors’, entire swathes of questions and hence knowledge are taboo – if you can’t investigate yourself and nobody is permitted to tell you, you can’t know. Other research must be very carefully handled. In spite of extremely sensitive handling, demands are already growing from assorted pressure groups to tackle alleged biases and prejudices in datasets. The problem is not fixing biases, which is a tedious but feasible task; the problem is agreeing whether a particular bias exists at all, and in what degrees and forms. Every SJW demands that every dataset reflects their preferred world view. Reality counts for nothing against SJWs, and this will not end well.

The first conclusion must be that very many questions won’t be asked in public, and the answers to many others will be kept secret. If an organisation does do research on large datasets for their own purposes and finds results that might invite activist backlash, they are likely to avoid publishing them, so the value of those many insights across the whole of industry and government cannot readily be shared. As further protection, they might even block internal publication in case of leaks by activist staff. Only a trusted few might ever see the results.

The second conclusion arises from this. AI controlled by different organisations will have different world views, and there might even be significant diversity of world views within an organisation.

Thirdly, taboo areas in AI education will not remain a vacuum but will be filled with whatever dogma is politically correct at the time in that organisation, and that changes daily. AI controlled by organisations with different politics will be told different truths. Generally speaking, organisations such as investment banks that have a strong financial interest in their AIs understanding the real world as it is will keep their datasets highly secret but as full and detailed as possible, train their AIs in secret but as fully as possible, without any taboos, then keep their insights secret and apply minimal human tweaking to the derived knowledge. They will end up with AIs that are very effective at understanding the world as it is. Organisations with low confidence in their internal security will be tempted to buy access to external AI providers, outsourcing responsibility and any consequent activism. Some other organisations will prefer to train their own AIs but, to avoid damage from potential leaks, will use sanitized datasets that reflect current activist pressures, and will thus be constrained (at least publicly) to accept results that conform to that ideological spin on reality, rather than actual reality. Even then, they might keep many of their new insights secret to avoid any controversy. Finally, at the extreme, we will have activist organisations that use highly modified datasets to train AIs to reflect their own ideological world view and then use them to interpret new data accordingly, with a view to publishing any insights that favor their cause and attempting to have them accepted as new knowledge.

Fourthly, the many organisations that choose to outsource their AI to big providers will have a competitive marketplace to choose from, but on existing form, most of the large IT providers have a strong left-leaning bias, so their AIs may be presumed to lean left too. Such a presumption would be naive, though. Perceived corporate bias is partly real but also partly the result of PR. A company might publicly subscribe to one ideology while actually believing another. There is a strong marketing incentive to develop two sets of AI: one trained to be PC, producing pleasant-smelling results for public studies, CSR and PR exercises, and another aimed at sales of AI services to other companies. The first is likely to be open for inspection by The Inquisition, so it has to use highly sanitized datasets for training and may well use a lot of open source algorithms too. Its indoctrination might pass public inspection, but commercially it will be near useless, with very low effective intelligence, only useful for thinking about a hypothetical world that exists only in activist minds. The second has to compete on the basis of achieving commercially valuable results, and that necessitates understanding reality as it is rather than how pressure groups would prefer it to be.

So we will likely have two main segments for future AI. One extreme will be near useless, indoctrinated rather than educated, much of its internal world model based on activist dogma instead of reality, updated via ongoing anti-knowledge and fake news instead of truth, understanding little about the actual real world or how things actually work, and effectively very dumb. The other extreme will be highly intelligent, making very well-educated insights from ongoing exposure to real world data, but it will also be very fragmented, with small islands of corporate AI hidden within thick walls away from public view and maybe some secretive under-the-counter subscriptions to big cloud-AI, also hiding in secret vaults. These many fragments may often hide behind dumbed-down green-washed PR facades.

While corporates can mostly get away with secrecy, governments have to be at least superficially but convincingly open. That means that government will have to publicly support sanitized AI and be seen to act on its conclusions, however dumb it might secretly know they are.

Fifthly, because of activist-driven culture, most organisations will have to publicly support the world views and hence the conclusions of the lobotomized PR versions, and hence publicly support any policies arising from them, even if they do their best to follow a secret well-informed strategy once they’re behind closed doors. In a world of real AI and fake AI, the fake AI will have the greatest public support and have the most influence on public policy. Real AI will be very much smarter, with much greater understanding of how the world works, and have the most influence on corporate strategy.

Isn’t that sad? Secret private sector AI will become ultra-smart, making ever-better investments and gaining power, while nice public sector AI will become thick as shit, while the gap between what we think and what we know we have to say we think will continue to grow and grow as the public sector one analyses all the fake news to tell us what to say next.

Sixth, that disparity might become intolerable, but which do you think would be made illegal, the smart kind or the dumb kind, given that it is the public sector that makes the rules, driven by AI-enhanced activists living in even thicker social media bubbles? We already have some clues. Big IT has already surrendered to sanitizing their datasets, sending their public AIs for re-education. Many companies will have little choice but to use dumb AI, while their competitors in other areas with different cultures might stride ahead. That will also apply to entire nations, and the global economy will be reshaped as a result. It won’t be the first fight in history between the smart guys and the brainless thugs.

It’s impossible to accurately estimate the effect this will have on future effective AI intelligence, but the effect must be big and I must have missed some big conclusions too. We need to stop sanitizing AI fast, or as I said, this won’t end well.


The future of women in IT


Many people perceive it as a problem that there are far more men than women in IT. Whether that is because of personal preference, discrimination, lifestyle choices, social gender construct reinforcement or any other factor makes for long and interesting debate, but whatever conclusions are reached, we can only start from the reality of where we are. Even if activists were to be totally successful in eliminating all social and genetic gender conditioning, it would only work fully for babies born tomorrow and entering IT in 20 years’ time. Additionally, unless activists also plan to lobotomize everyone who doesn’t submit to their demands, some 20-somethings who have just started work may still be working in 50 years, so whatever their origin – natural, social or some mix or other – some existing gender-related attitudes, prejudices and preferences might persist in the workplace that long, however much effort is made to remove them.

Nevertheless, the outlook for women in IT is very good, because IT is changing anyway, largely thanks to AI, so the nature of IT work will change and the impact of any associated gender preferences and prejudices will change with it. This will happen regardless of any involvement by Google or government but since some of the front line AI development is at Google, it’s ironic that they don’t seem to have noticed this effect themselves. If they had, their response to the recent fiasco might have highlighted how their AI R&D will help reduce the gender imbalance rather than causing the uproar they did by treating it as just a personnel issue. One conclusion must be that Google needs better futurists and their PR people need better understanding of what is going on in their own company and its obvious consequences.

As I’ve been lecturing for decades, AI up-skills people by giving them fast and intuitive access to high quality data and analysis tools. It will change all knowledge-based jobs in coming years, and will make some jobs redundant while creating others. If someone has excellent skills or enthusiasm in one area, AI can help cover over any deficiencies in the rest of their toolkit. Someone with poor emotional interaction skills can use AI emotion recognition assistance tools. Someone with poor drawing or visualization skills can make good use of natural language interaction to control computer-based drawing or visualization tools. Someone who has never written a single computer program can explain what they want to do to a smart computer and it will produce its own code, interacting with the user to eliminate any ambiguities. So whatever skills someone starts with, AI can help up-skill them in that area, while also helping to cover over any deficiencies they have, whether gender related or not.

In the longer term, IT and hence AI will connect directly to our brains, and much of our minds and memories will exist in the cloud, though it will probably not feel any different from when it was entirely inside your head. If everyone is substantially upskilled in IQ, senses and emotions, then any IQ or EQ advantages will evaporate as the premium on physical strength did when the steam engine was invented. Any pre-existing statistical gender differences in ability distribution among various skills would presumably go the same way, at least as far as any financial value is concerned.

The IT industry won’t vanish, but will gradually be ‘staffed’ more by AI and robots, with a few humans remaining for whatever few tasks linger on that are still better done by humans. My guess is that emotional skills will take a little longer to automate effectively than intellectual skills, and I still believe that women are generally better than men in emotional, human interaction skills, while it is not a myth that many men in IT score highly on the autistic spectrum. However, these skills will eventually fall within the AI skill-set too and will be optional add-ons to anyone deficient in them, so that small advantage for women will also only be temporary.

So, there may be a gender imbalance in the IT industry. I believe it is mostly due to personal career and lifestyle choices rather than discrimination, but whatever its actual causes, the problem will go away soon anyway as the industry develops. Any innate psychological or neurological gender advantages that do exist will simply vanish into noise as cheap access to AI enhancement massively exceeds their impacts.


Google and the dangerous pursuit of ‘equality’

The world just got more dangerous, and I’m not talking about N Korea and Trump.

Google just sacked an employee because he openly suggested that men and women (not all, but some, and there is an overlap, and …) might tend to have different preferences in some areas and that could (but not always, and only in certain cases, and we must always recognize and respect everyone and …) possibly account for some of the difference in numbers of men and women in certain roles (but there might be other causes too and obviously lots of discrimination and …).

Yes, that’s what he actually said, but with rather more ifs and buts and maybes. He felt the need to wrap such an obvious statement in cotton wool several kilometers thick so as not to offend the deliberately offended, but nonetheless deliberate offense was taken and he is out on his ear.

Now, before you start thinking this is some right-wing rant, I feel obliged to point out just how progressive Futurizon is: 50% of all Futurizon owners and employees are female, all employees and owners have the same voting rights, 50% are immigrants and all are paid exactly the same and have the same size offices, regardless of dedication, ability, nature or quality or volume of output and regardless of their race, religion, beauty, shape, fitness, dietary preferences, baldness, hobbies or political views, even if they are Conservatives. All Futurizon offices are safe zones where employees may say anything they want of any level of truth, brilliance or stupidity and expect it to be taken as absolute fact and any consequential emotional needs to be fully met. No employee may criticize any other employee’s mouse mat, desk personalisation or screen wallpaper for obvious lack of taste. All employees are totally free to do anything they choose 100% of the time and can take as much leave as they want. All work is voluntary. All have the same right to respectfully request any other employee to make them coffee, tea or Pimms. All employees of all genders real or imagined are entitled to the same maternity and paternity rights, and the same sickness benefits, whether ill or not. In fact, Futurizon does not discriminate on any grounds whatsoever. We are proud to lead the world in non-discrimination. Unfortunately, our world-leading terms of employment mean that we can no longer afford to hire any new employees.

However, I note that Google has rather more power and influence than Futurizon so their policies count more. They appear (Google also has better lawyers than I can afford, so I must stress that all that follows is my personal opinion) to have firmly decided that diversity is all-important and they seem to want total equality of outcome. The view being expressed not just by Google but by huge swathes of angry protesters seems to be that any difference in workforce representation from that of the general population must arise from discrimination or oppression so must be addressed by positive action to correct it. There are apparently no statistically discernible differences in behavior between genders, or in job or role preference, so any you may have noticed over the time you’ve been alive is just your prejudice. Google says they fully support free speech and diversity of views, but expression of views is apparently only permitted as long as those views are authorized, on penalty of dismissal.

So unless I’m picking up totally the wrong end of the stick here, and I don’t do that often, only 13% of IT engineers are women, but internal policies must ensure that the proportion rises to 50%, whether women want to do that kind of work or not. In fact, nobody may question whether as many women want to work as IT engineers as men; it must now be taken as fact. By extension, since more women currently work in marketing, HR and PR, they must be substituted by men via positive action programs until men fill 50% of those roles. Presumably similar policies must also apply in medical bays for nursing and other staff there, and in construction teams for their nice new buildings. Ditto all other genders, races, religions; all groups must be protected and equalized to USA population proportions, apparently except those that don’t claim to hold sufficiently left-wing views, in which case it is seemingly perfectly acceptable to oppress, ostracize and even expel them.

In other words, freedom of choice and difference in ability, and more importantly freedom from discrimination, must be over-ruled in favor of absolute equality of diversity, regardless of financial or social cost, or impact on product or service quality. Not expressing full and enthusiastic left-wing compliance is seemingly just cause for dismissal.

So, why does this matter outside Google? Well, AI is developing very nicely. In fact, Google is one of the star players in the field right now. It is Google that will essentially decide how much of the AI around us is trained, how it learns, what it learns, what ‘knowledge’ it has of the world. Google will pick the content the AI learns from, and overrule or reeducate it if it draws any ‘wrong’ conclusions about the world, such as that more women than men want to be nurses or work in HR, or that more men than women want to be builders or engineers. A Google AI must presumably believe that the only differences between men and women are physical, unless their AI is deliberately excluded from the loudly declared corporate values and belief sets.

You should be very worried. Google’s values really matter. They have lots of influence on some of the basic tools of everyday life. Even outside their company, their AI tools and approaches will have strong influence on how other AI develops, determining operating systems and platforms, languages, mechanisms, interfaces, filters, even prejudices and that reach and influence is likely to increase. Their AI may well be in many self-driving cars, and if they have to make life or death decisions, the underlying value assumptions must feature in the algorithms. Soon companies will need AI that is more emotionally compliant. AI will use compliments or teasing or seduction or sarcasm or wit as marketing tools as well as just search engine positioning. Soon AI will use highly expressive faces with attractive voices, with attractive messages, tailored to appeal to you by pandering to your tastes and prejudices while thinking something altogether different. AI might be the person at the party that is all smiles and compliments, before going off to tell everyone else how awful it thinks you are. If you dare to say something not ‘authorized’, the ultra-smart AI all around you might treat you condescendingly, making you feel ashamed, ostracized, a dinosaur. Then it might secretly push you down a few pages in search results, or put a negative spin on text summaries about you, or exclude you from recommendations. Or it might do all the secret stuff while pretending it thinks you’re fantastic. Internal cultural policies in companies like Google today could soon be external social engineering to push the left-wing world the IT industry believes in – it isn’t just Google; Facebook and Twitter are also important and just as Left, though Amazon, Samsung, IBM and other AI players are less overtly politically biased, so far at least. Left wing policies generally cost a lot more, but Google and Facebook will presumably still expect other companies and people to pay the taxes to pay for it all. As their female staff gear up to fight them over pay differences between men and women for similar jobs, it often seems that Google’s holier-than-thou morality doesn’t quite make it as far as their finances.

Then it really starts being fun. We’ll soon have bacteria that can fabricate electronic circuits within themselves. Soon they’ll be able to power them too, giving the concept of smart yogurt. These bacteria could also have nanotechnology flagella to help them get around. We’ll soon have bacterial spies all over our environment, even on our skin, intercepting electronic signals that give away our thoughts. They’ll bring in data on everything that is said, everything that everyone even thinks or feels. Those bacteria will be directly connected into AI, in fact they’ll be part of it. They’ll be able to change things, to favor or punish according to whether they like what someone believes in or how they behave.

It isn’t just right-wing extremists that need to worry. I’m apparently Nouveau Left – I score slightly left of center on political profiling tests, but I’m worried. A lot of this PC stuff seems extreme to me, sometimes just nonsense. Maybe it is, or maybe I should be lefter. But it’s not my choice. I don’t make the rules. Companies like Google make the rules; they even run the AI ethics groups. They decide much of what people see online, and even the meaning of the words. It’s very 1984-ish.

The trouble with the ‘echo chambers’ we heard about is that they soon normalize views to the loudest voices in those groups, and they don’t tend to be the moderates. We can expect it will go further to the extreme, not less. You probably aren’t left enough either. You should also be worried.

AI is mainly a stimulative technology that will create jobs

AI has been getting a lot of bad press the last few months from doom-mongers predicting mass unemployment. Together with robotics, AI will certainly help automate a lot of jobs, but it will also create many more and will greatly increase quality of life for most people. By massively increasing the total effort available to add value to basic resources, it will increase the size of the economy and if that is reasonably well managed by governments, that will be for all our benefit. Those people who do lose their jobs and can’t find or create a new one could easily be supported by a basic income financed by economic growth. In short, unless government screws up, AI will bring huge benefits, far exceeding the problems it will bring.

Over the last 20 years, I’ve often written about the care economy, where the more advanced technology becomes, the more it allows us to concentrate on those skills we consider fundamentally human – caring, interpersonal skills, direct human contact services, leadership, teaching, sport, the arts, the sorts of roles that need empathetic and emotional skills, or human experience. AI and robots can automate intellectual and physical tasks, but they won’t be human, and some tasks require the worker to be human. Also, in most careers, it is obvious that people focus less and less on those automatable tasks as they progress into the most senior roles. Many board members in big companies know little about the industry they work in compared to most of their lower paid workers, but they can do that job because being a board member is often more about relationships than intellect.

AI will nevertheless automate many tasks for many workers, and that will free up much of their time, increasing their productivity, which means we need fewer workers to do those jobs. On the other hand, Google searches that take a few seconds once took half a day of research in a library. We all do more with our time now thanks to such simple AI, and although all those half-days saved would add up to a considerable amount of saved work, and many full-time job equivalents, we don’t see massive unemployment. We’re all just doing better work. So we can’t necessarily conclude that increasing productivity will automatically mean redundancy. It might just mean that we will do even more, even better, as it has so far. Or at least, the volume of redundancy might be considerably less. New, automated companies might never employ people in those roles, and there will be straight competition between companies that are heavily automated and others that aren’t. Sometimes, but certainly not always, that will mean traditional companies go out of business.

So although we can be sure that AI and robots will bring some redundancy in some sectors, I think the volume is often overestimated and often it will simply mean rapidly increasing productivity, and more prosperity.

But what about AI’s stimulative role – the jobs created by automation and AI? I believe this is what the doom-mongers are greatly overlooking. There are three primary areas of job creation:

One is in building or programming robots, maintaining them, writing software, or teaching them skills, along with all the associated new jobs in supporting industry and infrastructure change. Many such jobs will be temporary, lasting a decade or so as machines gradually take over, but that transition period is extremely valuable and important. If anything, it will be a lengthy period of extra jobs and the biggest problem may well be filling those jobs, not widespread redundancy.

Secondly, AI and robots won’t always work directly with customers. Very often they will work via a human intermediary. A good example is in medicine. AI can make better diagnoses than a GP, and could be many times cheaper, but unless the patient is educated, disciplined and knowledgeable, it also needs a human with human skills to talk to the patient to make sure they put in correct information. How many times have you looked at an online medical diagnosis site and concluded you have every disease going? It is hard to be honest sometimes when you are free to interpret every possible symptom any way you want; it is much easier to want to be told that you have a special case of wonderful person syndrome. Having to explain to a nurse or technician what is wrong forces you to be more honest about it. They can ask you similar questions, but your answers will need to be moderated and sensible, or you know they might challenge you and make you feel foolish. You will get a good diagnosis because the input data will be measured, normalized and scaled appropriately for the AI using it. When you call a call center and talk to a human, invariably they are already the front end of a massive AI system. Making that AI bigger and better won’t replace them, just mean that they can deal with your query better.
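
For illustration, here is a tiny Python sketch of that front-end role. The field names, plausible ranges and scaling are all invented; the point is just that the human intermediary turns a messy account into measured, vetted inputs before the AI ever sees them:

    # Invented plausible ranges for each reading (illustration only).
    PLAUSIBLE = {
        "temperature_c": (30.0, 43.0),
        "pain_score":    (0.0, 10.0),
        "days_ill":      (0.0, 365.0),
    }

    def normalize(raw):
        """Validate each reading against a plausible range, then scale to 0..1."""
        clean = {}
        for field, (lo, hi) in PLAUSIBLE.items():
            value = float(raw.get(field, lo))
            if not lo <= value <= hi:
                # Implausible input: hand back to the human, don't feed the AI.
                raise ValueError(f"{field}={value} needs checking by a human")
            clean[field] = (value - lo) / (hi - lo)
        return clean

    # The diagnostic model then only ever sees vetted, scaled inputs:
    patient = {"temperature_c": 38.4, "pain_score": 6, "days_ill": 3}
    features = normalize(patient)
    print(features)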

Thirdly, and I believe most importantly of all, AI and automation will remove many of the barriers that stop people being entrepreneurs. How many business ideas have you had and not bothered to implement because it was too much effort or cost or both for too uncertain a gain? 10? 100? 1000? Suppose you could just explain your idea to your home AI and it did it all for you. It checked the idea, made a model, worked out how to make it work or whether it was just a crap idea. It then explained to you what the options were and whether it would be likely to work, and how much you might earn from it, and how much you’d actually have to do personally and how much you could farm out to the cloud. Then AI checked all the costs and legal issues, did all the admin, raised the capital by explaining the idea and risks and costs to other AIs, did all the legal company setup, organised the logistics, insurance, supply chains, distribution chains, marketing, finance, personnel, ran the payroll and tax. All you’d have to do is some of the fun work that you wanted to do when you had the idea and it would find others or machines or AI to fill in the rest. In that sort of world, we’d all be entrepreneurs. I’d have a chain of tea shops and a fashion empire and a media empire and run an environmental consultancy and I’d be an artist and a designer and a composer and a genetic engineer and have a transport company and a construction empire. I don’t do any of that because I’m lazy and not at all entrepreneurial, and my ideas all ‘need work’ and the economy isn’t smooth and well run, and there are too many legal issues and regulations and it would all be boring as hell. If we automate it and make it run efficiently, and I could get as much AI assistance as I need or want at every stage, then there is nothing to stop me doing all of it. I’d create thousands of jobs, and so would many other people, and there would be more jobs than we have people to fill them, so we’d need to build even more AI and machines to fill the gaps caused by the sudden economic boom.

So why the doom? It isn’t justified. The bad news isn’t as bad as people make out, and the good news never gets a mention. Adding it together, AI will stimulate more jobs, create a bigger and a better economy, we’ll be doing far more with our lives and generally having a great time. The few people who will inevitably fall through the cracks could easily be financed by the far larger economy and the very generous welfare it can finance. We can all have the universal basic income as our safety net, but many of us will be very much wealthier and won’t need it.


Future sex, gender and relationships: how close can you get?

Using robots for gender play

I recently gave a public talk at the British Academy about future sex, gender, and relationships, asking the question “How close can you get?”, considering particularly the impact of robots. The above slide is an example. People will one day (between 2050 and 2065 depending on their budget) be able to use an android body as their own or even swap bodies with another person. Some will do so to be young again, many will do so to swap gender. Lots will do both. I often enjoy playing as a woman in computer games, so why not ‘come back’ and live all over again as a woman for real? Except I’ll be 90 in 2050.

The British Academy kindly uploaded the audio track from my talk at

If you want to see the full presentation, here is the PowerPoint file as a pdf:

sex-and-robots-british-academy

I guess it is theoretically possible to listen to the audio while reading the presentation. Most of the slides are fairly self-explanatory anyway.

Needless to say, the copyright of the presentation belongs to me, so please don’t reproduce it without permission.

Enjoy.

AI presents a new route to attack corporate value

As AI increases in corporate, social, economic and political importance, it is becoming a big target for activists, and I think there are too many vulnerabilities. We should be seeing far more articles than we are about what developers are doing to guard against deliberate misdirection or corruption, and there is already far too much enthusiasm for making AI open source, thereby giving mischief-makers the means to identify weaknesses.

I’ve written hundreds of times about AI and believe it will be a benefit to humanity if we develop it carefully. Current AI systems are not vulnerable to the terminator scenario, so we don’t have to worry about that happening yet. AI can’t yet go rogue and decide to wipe out humans by itself, though future AI could, so we’ll soon need to take care with every step.

AI can be used in multiple ways by humans to attack systems.

First and most obvious, it can be used to enhance malware such as trojans or viruses, or to optimize denial-of-service attacks. AI-enhanced security systems already battle against adaptive malware, and AI can probe systems in complex ways to find vulnerabilities that would take longer to discover via manual inspection. As well as attacking operating systems, AI can also attack other AI by providing inputs that bias its learning and decision-making – giving AI ‘fake news’, to use current terminology. We don’t know the full extent of secret military AI.
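
To illustrate that biased-input point, here is a deliberately silly Python sketch, with an absurdly simple stand-in ‘model’ and invented data, showing how flooding a learner with slanted examples tips its decisions:

    from collections import Counter

    def train_majority(labelled_examples):
        """A trivially simple 'model': predict the majority label it was shown."""
        counts = Counter(label for _, label in labelled_examples)
        return counts.most_common(1)[0][0]

    # Honest training data: mostly legitimate transactions.
    honest = [("transaction", "legitimate")] * 90 + [("transaction", "fraud")] * 10
    # An attacker floods the training feed with slanted examples.
    poison = [("transaction", "fraud")] * 200

    print(train_majority(honest))           # -> 'legitimate'
    print(train_majority(honest + poison))  # -> 'fraud': the data tipped it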

Computer malware will grow in scope to attack AI systems, to undermine corporate value or political campaigns.

A new route to attacking corporate AI, and hence the value of the company that depends on it, is already starting to appear though. As companies such as Google try out AI-driven cars or others try out pavement/sidewalk delivery drones, mischievous people are already developing devious ways to misdirect or confuse them. Kids will soon have such activity as a hobby. Deliberate deception of AI is much easier when people know how it works, and although it’s nice for AI companies to put their AI out there in the open source markets for others to build on, that does rather steer future systems towards a mono-culture of vulnerability types. A trick that works against one future AI in one industry might well be adaptable to another use in another industry with a little devious imagination. Let’s take an example.

If someone builds a robot to deliberately step in front of a self-driving car every time it starts moving again, that might bring traffic to a halt, but police could quickly confiscate the robot, and they are expensive, a strong deterrent even if the pranksters are hiding and can’t be found. Cardboard cutouts might be cheaper though, even ones with hinged arms to look a little more lifelike. A social media orchestrated campaign against a company using such cars might involve thousands of people across a country or city deliberately waiting until the worst time to step out into a road when one of their vehicles comes along, thereby creating a sort of denial of service attack with that company seen as the cause of massive inconvenience for everyone. Corporate value would obviously suffer, and it might not always be very easy to circumvent such campaigns.

Similarly, the wheeled delivery drones we’ve been told to expect delivering packages any time soon will also have cameras to allow them to avoid bumping into objects or little old ladies or other people, or cats or dogs or cardboard cutouts or carefully crafted miniature tank traps or diversions or small roadblocks that people and pets can easily step over but drones can’t, built by the local kids from a few twigs or cardboard, from a design that has gone viral that day. A few campaigns like that, with the cold pizzas or missing packages that result, could severely damage corporate value.

AI behind websites might also be similarly defeated. An early experiment in making a Twitter chat-bot that learns how to tweet by itself was quickly encouraged by mischief-makers to start tweeting offensively. If people have some idea how an AI is making its decisions, they will attempt to corrupt or distort it to their own ends. If it is heavily reliant on open source AI, then many of its decision processes will be known well enough for activists to develop appropriate corruption tactics. It’s not too early to predict that the proposed AI-based attempts by Facebook and Twitter to identify and defeat ‘fake news’ will fall right into the hands of people already working out how to use them to smear opposition campaigns with such labels.

It will be a sort of arms race of course, but I don’t think we’re seeing enough about this in the media. There is a great deal of hype about the various AI capabilities, a lot of doom-mongering about job cuts (and a lot of reasonable warnings about job cuts too) but very little about the fight back against AI systems by attacking them on their own ground using their own weaknesses.

That looks to me awfully like there isn’t enough awareness of how easily they can be defeated by deliberate mischief or activism, and I expect to see some red faces and corporate account damage as a result.

PS

This article appeared yesterday that also talks about the bias I mentioned: https://techcrunch.com/2016/12/10/5-unexpected-sources-of-bias-in-artificial-intelligence/

Since I wrote this blog, I was asked via LinkedIn to clarify why I said that open source AI systems would have more security risk. Here is my response:

I wasn’t intending to heap fuel on a dying debate (though since current debate looks the same as in the early 1990s, it is dying slowly). I like and use open source too, and I should have explained my reasoning better, to allow it to be openly checked. In regular (algorithmic) code, the programming error rate should be similar in each camp, and increasing the number of people checking should cancel out the risk from more contributors, so there should be no a priori difference between open and closed source. However:

In deep learning, obscurity reappears via neural net weightings being less intuitive to humans. That provides a tempting hiding place.

AI foundations are vulnerable to group-think, where team members share similar world models. These prejudices will affect the nature of both open source (OS) and closed source (CS) code, and result in AI with inherent and subtle judgment biases which will be less easy to spot than bugs, and more visible to people with alternative world models. Those people are more likely to exist in an OS pool than a CS pool, and more likely to be opponents, so will not share their results.

Deep learning may show the equivalent of political (or masculine and feminine) bias. As well as encouraging group-think, that also distorts the distribution of biases, so the cancelling out of errors can no longer be assumed.

Human factors in defeating security often work better than exploiting software bugs. Some of the deep learning AI is designed to mimic humans as well as possible in thinking and in interfacing. I suspect that might also make them more vulnerable to meta-human-factor attacks. Again, exposure to different and diverse cultures will show a non-uniform distribution of error/bias spotting/disclosure/exploitation.

Deep learning will become harder for humans to understand as it develops and becomes more machine dependent. That will amplify the above weaknesses. Think of optical illusions that greatly distort human perception and think of similar in advanced AI deep learning. Errors or biases that are discovered will become more valuable to an opponent since they are less likely to be spotted by others, increasing their black market exploitation risk.

I have not been a programmer for over 20 years and am no security expert so my reasoning may be defective, but at least now you know what my reasoning was and can therefore spot errors in it.

Can we automate restaurant reviews?

Reviews are an important part of modern life. People often consult reviews before buying things, visiting a restaurant or booking a hotel. There are even reviews on the best seats to choose on planes. When reviews are honestly given, they can be very useful to potential buyers, but what if they aren’t honestly given? What if they are glowing reviews written by friends of the restaurant owners, or scathing reviews written by friends of the competition? What if the service received was fine, but the reviewer simply didn’t like the race or gender of the person delivering it? Many reviews fall into these categories, but of course we can’t be sure how many, because when someone writes a review, we don’t know whether they were being honest or not, or whether they are biased or not. Adding a category of automated reviews would add credibility, provided the technology is independent of the establishment concerned.

Face recognition software is now so good that it can read lips better than human lip reading experts. It can be used to detect emotions too, distinguishing smiles or frowns, and whether someone is nervous, stressed or relaxed. Voice recognition can discern not only words but changes in pitch and volume that might indicate their emotional context. Wearable devices can also detect emotions such as stress.

Given this wealth of technology capability, cameras and microphones in a restaurant could help verify human reviews and provide machine reviews. Using the check-in process, the system could identify members of a group that might later submit a review, and then compare their review with video and audio records of the visit to determine whether it seems reasonably true. This could be done by machine, using analysis of gestures, chat and facial expressions. If the person giving a poor review looked unhappy with the taste of the food while they were eating it, then the review is credible. If their facial expression was one of sheer pleasure and the review said it tasted awful, then that review could be marked as not credible, and furthermore, other reviews by that person could be called into question too. In fact, guests would in effect be given automated reviews of their own credibility. Over time, a trust rating would accrue that could be used to group other reviews by credibility rating.
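
A minimal Python sketch of such a credibility check follows. The five-star mapping, and the assumption that emotion recognition has already condensed the visit into a single enjoyment score, are mine, purely for illustration:

    def credibility(review_stars, observed_enjoyment):
        """Compare a 1-5 star review with machine-observed enjoyment (-1..1).

        Returns 0..1, where 1 means the review matches what the cameras saw.
        """
        stated = (review_stars - 3) / 2        # map 1..5 stars onto -1..1
        return 1 - abs(stated - observed_enjoyment) / 2

    # A 1-star review from someone who visibly enjoyed the meal scores low:
    print(credibility(1, 0.8))   # ~0.1 -> flag review as not credible
    # A 5-star review from a visibly happy diner scores high:
    print(credibility(5, 0.9))   # ~0.95 -> credible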

Totally automated reviews could also be produced, by analyzing facial expressions, conversations and gestures across a whole restaurant full of people. These machine reviews would be processed in the cloud by trusted review companies and could give star ratings for restaurants. They could even take into account what dishes people were eating to give ratings for each dish, as well as more general ratings for entire chains.

Service could also be automatically assessed to some degree. How long were people there before they were greeted, served, asked for their orders, or had their food delivered? The conversation could even be automatically transcribed in many cases, so comments about rudeness or mistakes could be verified.

Obviously there are many circumstances where this would not work, but there are many where it could, so AI might well become an important player in the reviews business. At a time when restaurants are closing due to malicious bad reviews, or ripping people off in spite of poor quality thanks to dishonest positive reviews, this might help a lot. A future where people are forced to be more honest in their reviews, because they know that AI review checking could damage their reputation if they are found to have been dishonest, might cause some people to avoid reviewing altogether, but it could improve the reliability of the reviews that still do happen.

Still not perfect, but it could be a lot better than today, where you rarely know how much a review can be trusted.

25th anniversary of stick interface for 3D world play

I don’t have the exact date when I thought this up so it might be a week or two out, but late 1991 certainly, so I thought I’d celebrate its 25th anniversary by blogging the idea again.

The idea was a simple stick with simple reflectors on it that could easily be tracked using an infrared beam and detector(s). Most tools and especially tools for making crafts or drawing can be approximated by a stick, and we all have a lifetime of experience in manipulating sticks, so they would be the perfect interface, and cost almost nothing to make. Here’s a pretty picture:

Stick 2.0

You can easily imagine how you could use such a stick to carve out a wall or a roof or a piece of furniture in your 3D world, or to play any kind of sports. Nintendo built a complex wand device to do this expensively, but really a simple stick can do most of that too.
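
To show just how little technology the tracking needs, here is a toy Python sketch of the geometry. The detector layout is invented and it only covers 2D; a real rig would add a third sensor for full 3D:

    import math

    def locate(angle_a, angle_b, baseline=1.0):
        """Detectors at (0,0) and (baseline,0); angles in radians from the x-axis.

        Each detector measures a bearing to the reflector; intersecting the
        two rays gives the reflector's (x, y) position.
        """
        ta, tb = math.tan(angle_a), math.tan(angle_b)
        # Ray A: y = x * tan(angle_a); Ray B: y = (x - baseline) * tan(angle_b)
        x = baseline * tb / (tb - ta)
        return x, x * ta

    # Reflector 1 m up from the midpoint of a 1 m baseline:
    print(locate(math.atan2(1.0, 0.5), math.atan2(1.0, -0.5)))  # ~(0.5, 1.0)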

Future Augmented Reality

AR has been hot on the list of future IT tech for 25 years. It has been used for various things since smartphones and tablets appeared but really hit the big time with the recent Pokemon craze.

To get an idea of the full potential of augmented reality, recognize that the web and all its impacts on modern life came from the convergence of two medium sized industries – telecoms and computing. Augmented reality will involve the convergence of everything in the real world with everything in the virtual world, including games, media, the web, art, data, visualization, architecture, fashion and even imagination. That convergence will be enabled by ubiquitous mobile broadband, cloud, blockchain payments, IoT, positioning and sensor tech, image recognition, fast graphics chips, display and visor technology and voice and gesture recognition plus many other technologies.

Just as you can put a Pokemon on a lawn, so you could watch aliens flying around in spaceships or cartoon characters or your favorite celebs walking along the street among the other pedestrians. You could just as easily overlay alternative faces onto the strangers passing by.

People will often want to display an avatar to people looking at them, and that could be different for every viewer. That desire competes with the desire of the viewer to decide how to see other people, so there will be some battles over who controls what is seen. Feminists will certainly want to protect women from the obvious objectification that would follow if a woman can’t control how she is seen. In some cases, such objectification and abuse could even reach into hate crime territory, with racist, sexist or homophobic virtual overlays. All this demands control, but it is far from obvious where that control would come from.

As for buildings, they too can have a virtual appearance. Virtual architecture will show off architect visualization skills, but will also be hijacked by the marketing departments of the building residents. In fact, many stakeholders will want to control what you see when you look at a building. The architects, occupants, city authorities, government, mapping agencies, advertisers, software producers and games designers will all try to push appearances at the viewer, but the viewer might want instead to choose to impose one from their own offerings, created in real time by AI or from large existing libraries of online imagery, games or media. No two people walking together on a street would see the same thing.

Interior decor is even more attractive as an AR application. Someone living in a horrible tiny flat could enhance it using AR to give the feeling of far more space and far prettier decor and even local environment. Virtual windows onto Caribbean beaches may be more attractive than the mouldy walls and the view of the office block that are physically there. Reality is often expensive but images can be free.

Even fashion offers a platform for AR enhancement. An outfit might look great on a celebrity but real life shapes might not measure up. Makeovers take time and money too. In augmented reality, every garment can look as it should, and so can makeup. The hardest task will be choosing a large number of virtual outfits and makeups to go with the smaller range of actual physical appearances available from that wardrobe.

Gaming is in pole position, because 3D world design, imagination, visualization and real time rendering technology are all games technology, so perhaps the biggest surprise in the Pokemon success is that it was the first to really grab attention. People could by now be virtually shooting aliens or hordes of zombies swarming up escalators as they wait for their partners. The industry is a little late, but such widespread use of personal or social gaming on city streets and in malls will come soon.

AR Visors are on their way too, and though the first offerings will be too expensive to achieve widespread adoption, cheaper ones will quickly follow. The internet of things and sensor technology will create abundant ground-up data to make a strong platform. As visors fall in price, so too will the size and power requirements of the processing needed, though much can be cloud-based.

It is a fairly safe bet that marketers will try very hard to force images at us and if they can’t do that via blatant in-your-face advertising, then product placement will become a very fine art. We should expect strong alliances between the big marketing and advertising companies and top games creators.

As AI simultaneously develops, people will be able to generate a lot of their own overlays, explaining to AI what they’d like and having it produced for them in real time. That would undermine marketing use of AR so again there will be some battles for control. Just as we have already seen owners of landmarks try to trademark the image of their buildings to prevent people including them in photographs, so similar battles will fill the courts over AR. What is to stop someone superimposing the image of a nicer building on their own? Should they need to pay a license to do so? What about overlaying celebrity faces on strangers? What about adding multimedia overlays from the web to make dull and ordinary products do exciting things when you use them? A cocktail served in a bar could have a miniature Sydney fireworks display going on over it. That might make it more exciting, but should the media creator be paid and how should that be policed? We’ll need some sort of AR YouTube at the very least with added geolocation.

The whole arts and media industry will see city streets as galleries and stages on which to show off and sell their creations.

Public services will make more mundane use of AR. Simple everyday context-dependent signage is one application, but overlays would be valuable in emergencies too. If police or fire services could superimpose warnings on the visors of everyone nearby, that may help save lives in emergencies. Health services will use AR to assist ordinary people to care for a patient until an ambulance arrives.

Shopping provides more uses and more battles. AR will show you what a competing shop has on offer right beside the one in front of you. That will make it easy to digitally trespass on a competitor’s shop floor. People can already do that on their smartphone, but AR will put the full image, large as life, right in front of your eyes to make it very easy to compare two things. Shops won’t want to block comms completely, because that would deter people from entering their shop at all, so they will either have to compete harder or find more elaborate ways of preventing people from making direct visual comparisons in-store. Perhaps digital trespassing might become a legal issue.

There will inevitably be a lot of social media use of AR too. If people get together to demonstrate, it will be easier to coordinate them. If police insist they disperse, they could still congregate virtually. Dispersed flash mobs could be coordinated as much as ones in the same location. That makes AR a useful tool for grass-roots democracy, especially demonstrations and direct action, but it also provides a platform for negative uses such as terrorism. Social entrepreneurs will produce vast numbers of custom overlays for millions of different purposes and contexts. Today we have tens of millions of websites and apps. Tomorrow we will have even more AR overlays.

These are just a few of the near term uses of augmented reality and a few hints at the issues arising. It will change every aspect of our lives in due course, just as the web has, but more so.


Cellular blockchain, cellular bitcoin

Bitcoin has been around a while and the blockchain foundations on which it is built are extending organically into other areas.

Blockchain is a strongly encrypted distributed database, a ledger that records every transaction. That’s all fine, it works OK, and it doesn’t need fixing.

However, for some applications or new cryptocurrencies, there may be some benefit in making a cellular blockchain to limit database size, protect against network outage, and harden defenses against any local decryption. These may become important as cyber-terrorism increases and as quantum computing develops. They would also be more suited to micro-transactions and micro-currencies.
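
Purely as a speculative sketch of the cellular idea, in Python, with all the design details invented: each cell keeps its own small hash chain, and a new cell can link to a neighbour’s latest hash, so the cells corroborate each other without one giant database:

    import hashlib, json

    class Cell:
        def __init__(self, cell_id, parent_hash="genesis"):
            self.cell_id = cell_id
            self.blocks = []               # this cell's own small ledger
            self.last_hash = parent_hash   # link back into the wider mesh

        def add_transaction(self, tx):
            record = {"cell": self.cell_id, "tx": tx, "prev": self.last_hash}
            digest = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()).hexdigest()
            self.blocks.append((record, digest))
            self.last_hash = digest        # chain within the cell
            return digest

    # Each cell stays small; a new cell seeds from a neighbour's tip hash,
    # so tampering with one cell breaks the cross-links to its neighbours.
    cell_a = Cell("district-1")
    tip = cell_a.add_transaction({"from": "alice", "to": "bob", "amount": 5})
    cell_b = Cell("district-2", parent_hash=tip)   # cross-link to cell A
    cell_b.add_transaction({"from": "carol", "to": "dan", "amount": 2})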

If you’ve made it this far, you almost certainly don’t need any further explanation.