This week, the fastest supercomputer broke a world record for AI, applying machine learning to climate research.
I guess most readers thought this was a great thing; after all, we need to solve climate change. That wasn’t my thought. The first thing my boss told me when I used a computer for the first time was: “shit in, shit out”. I don’t remember his name, but I remember that concise lesson every time I read about climate models. If either the model or the data is garbage, or both, the output will also be garbage.
So my first thought on reading about this new record was: will they let the AI work everything out for itself using all the raw, unadjusted data available about the environment? That would mean all the astrophysics data about every kind of solar activity; human agricultural and industrial activity; air travel; every unadjusted measurement of, or proxy for, surface, sea and air temperatures ever collected; and any empirical evidence for any corrections that might be needed on such data, in any direction. Will they then let it make its own deductions, form its own models of how it might all be connected, and watch eagerly as it makes predictions?
Or will they just feed in their own models, CO2 blinkering, prejudices and group-think, adjusted datasets, data omissions and general distortions of the historical record, into biased models already indoctrinated with climate change dogma, so that it reconfirms the doom-and-gloom forecasts we’re so used to hearing and maximizes their chances of continued grants? If they do that, the AI might as well be a cardboard box with a pre-written article stuck on it. Shit in, shit out.
It’s obvious that the speed and capability of the supercomputer is of secondary importance compared with who controls the AI, its access to data, and its freedom to draw its own conclusions.
(Read my blog on Fake AI: https://timeguide.wordpress.com/2017/11/16/fake-ai/)
You may recall that a week or two ago, IBM released a new face database to try to address bias in AI face recognition systems. Many other kinds of data could carry biases, for all sorts of reasons. At face value, reducing bias is a good thing, but what exactly do we mean by that? Who decides what is biased and what is real? Many potential AI uses are sensitive, such as identifying criminals or distinguishing traits that correlate with gender, sexuality, race, religion, or indeed any discernible difference. Are all deductions by the AI permissible, or are huge swathes of possible deductions off-limits because they might be politically unacceptable? Who controls the AI? Why? With what aims?
Many people have some degree of influence on AI: those who provide funding and equipment, the theoreticians, those who design the hardware, those who design the learning and training mechanisms, those who supply the data, those who censor or adjust the data before letting the AI see it, those who design the interfaces, those who interpret and translate the results, and those who decide which results are permissible, how to spin them, and whether to publish them.
People are often impressed when a big, powerful computer outputs the results of massive amounts of processing. Those outputs may be used to shape public opinion and government policy, to change laws, to alter the balance of power in society, to create and destroy empires. AI will eventually make or influence most decisions of any consequence.
As AI techniques become more powerful, running on faster and better computers, we must always remember that golden rule: shit in, shit out. And we must always be suspicious of those who might have reason to influence the outcome.
Because who controls AI, controls the world.