Monopoly and diversity laws should surely apply to political views too

With all the calls for staff diversity and equal representation, one important area of difference has so far been left unaddressed: political leaning. In many organisations, the political views of staff don’t matter. Nobody cares about the political views of staff in a double glazing manufacturer because they are unlikely to affect the quality of a window. However, in an organisation that has a high market share in TV, social media or internet search, or that is a government department or a public service, political bias can have far-reaching effects. If too many of its staff and their decisions favor a particular political view, it is in danger of becoming part of what is sometimes called ‘the deep state’. That is, its everyday decisions and behaviors might privilege one group over another. If most of their colleagues share similar views, staff might not even be aware of their bias, because those views are the norm in their everyday world. They might think they are doing their job without fear or favor, yet still strongly favor one group of users over another.

Staff bias doesn’t only affect an organisation’s policies, values and decisions. It also affects recruitment and promotion, and can result in an increasing concentration of a particular world view until it becomes an issue. When a vacancy appears at board level, the remaining board members will tend to promote someone who thinks like themselves. Once any leaning takes hold, near monopoly can quickly result.

A government department should obviously be free of bias so that it can carry out instructions from a democratically elected government with equal professionalism regardless of its political flavor. Employees may be in positions where they can allocate resources or manpower more to one area than another, provide analysis to ministers, expedite or delay a communication, emphasize or dilute a recommendation in a survey, or otherwise have some flexibility in interpreting instructions and even laws. It is important that they do so without political bias, so transparency of decision-making for external observers is needed, along with systems and checks and balances to prevent bias, test for it, and rectify it when found. But even if staff don’t deliberately abuse their positions to obstruct or favor, a department with too many staff from one part of the political spectrum can again suffer normalization of views, causing institutional bias and behavior. It is therefore important for government departments and public services to have workforces that reflect the political spectrum fairly, at all levels. A department that implements a policy from a government of one flavor but impedes a different policy from a new government of the opposite flavor is in strong need of reform and re-balancing. It has become a deep state problem. Bias could be in any direction of course, but any public sector department must be scrupulously fair in its implementation of the services it is intended to provide.

Entire professions can be affected. Bias can obviously occur in any direction, but over many decades of slow change, academia has become dominated by left-wing employees, and primary teaching by almost exclusively female ones. If someone spends most of their time with others who share the same views, those views can become normalized to the point that a dedicated teacher might think they are delivering a politically balanced lesson that is actually far from it. It is impossible to spend all day teaching kids without some personal views and values rubbing off on them. The young have always been slightly idealistic and left leaning – it takes years of adult experience outside academia to learn the pragmatic reality of implementing that idealism, during which people generally migrate rightwards – but with a stronger left bias ingrained during education, it takes longer for people to unlearn naiveté and replace it with reality. Surely education should be teaching kids about all political viewpoints and teaching them how to think, so they can choose for themselves where to put their allegiance, rather than being a long process of political indoctrination?

The media has certainly become more politically crystallized and aligned in the last decade, with far fewer media companies catering for people across the spectrum. There are strongly left-wing and right-wing papers, magazines, TV and radio channels or shows. People have a free choice of which papers to read, and normal monopoly laws work reasonably well here, with proper checks when a proposed takeover might result in someone getting too much market share. However, there are still clear examples of near monopoly in other places where fair representation is particularly important. In spite of frequent denials of any bias, the BBC for example was found to have a strong pro-EU/Remain bias in the panels on its flagship show Question Time:

IEA analysis shows systemic bias against ‘Leave’ supporters on flagship BBC political programmes

The BBC does not have a TV or radio monopoly but it does have a very strong share of influence. Shows such as Question Time can strongly influence public opinion, so if they are biased towards one viewpoint, they could be considered to be campaigning for that cause, though their contributions would lie outside Electoral Commission scrutiny of campaign funding. Many examples of BBC bias on a variety of social and political issues exist. It often faces accusations of bias from every direction, sometimes unfairly, so again proper transparency must exist so that independent external groups can appeal for change and be heard fairly, with change enforced when necessary. The BBC is in a highly privileged position, paid for by a compulsory license fee on pain of imprisonment, and also in a socially and politically influential position. It is doubly important that it proportionally represents the views of the people rather than acting as an activist group using license-payer funds to push the political views of its staff, engaging in its own social engineering campaigns, or otherwise acting as a propaganda machine.

As for private industry, most of it isn’t in a position of political influence, but some areas certainly are. Social media companies have enormous power to influence the views their users are exposed to, choosing to filter or demote material they don’t approve of, as well as providing a superb activist platform. Search companies can choose to deliver results according to their own agendas, with those they support featuring earlier or more prominently than those they don’t. If social media or search companies provide different service, support or access according to the political leaning of the customer, then they can become part of the deep state. And again, with normalization creating the risk of institutional bias, the clear remedy is to ensure that these companies have a mixture of staff representative of the social mix. They seem extremely enthusiastic about doing that for other forms of diversity. They need to apply similar enthusiasm to political diversity too.

Achieving it won’t be easy. IT companies such as Google, Facebook and Twitter currently have a strong left leaning, though the problem would be just as bad if it were to swing in the other direction. Given the natural monopoly tendency in each sector, social media companies should be politically neutral, not deep state companies.

AI being developed to filter posts or decide how much attention they get must also be unbiased. AI algorithmic bias could become a big problem, but it is just as important that bias is judged by neutral bodies, not by people who are biased themselves, who may try to ensure that AI shares their own leaning. I wrote about this issue here: https://timeguide.wordpress.com/2017/11/16/fake-ai/

But what about government? Today’s big issue in the UK is Brexit. In spite of all its members being elected or reelected during the Brexit process, the UK Parliament itself nevertheless has 75% of MPs to defend the interests of the 48% voting Remain and only 25% to represent the other 52%. Remainers get three times more Parliamentary representation than Brexiters. People can choose who they vote for, but with only one candidate available from each party, voters cannot choose by more than one factor, and most people will vote by party line, preserving whatever bias exists when parties select which candidates to offer. It would be impossible to ensure that every interest is reflected proportionately, but there is another solution. I suggested that scaled votes could be used for some issues, scaling an MP’s vote weighting by the proportion of the population supporting their view on that issue:

Achieving fair representation in the new UK Parliament
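
To make the arithmetic concrete, here is a minimal sketch of one way such scaled voting could work. The exact formula is my assumption for illustration: dividing by the MPs’ own share so that the weighted chamber totals match public opinion on the issue.

```python
# Hypothetical sketch of scaled voting: each MP's vote on an issue is
# weighted by the share of the population holding their view divided by
# the share of MPs holding it, so chamber totals match public opinion.

def vote_weight(population_share: float, mp_share: float) -> float:
    """Weight applied to each MP holding a given view on one issue."""
    return population_share / mp_share

# Figures from the post: 48% of voters but 75% of MPs back Remain;
# 52% of voters but only 25% of MPs back Leave.
remain_weight = vote_weight(0.48, 0.75)  # 0.64 per Remain MP
leave_weight = vote_weight(0.52, 0.25)   # 2.08 per Leave MP

# Weighted division totals now reflect the referendum split.
print(0.75 * remain_weight)  # 0.48
print(0.25 * leave_weight)   # 0.52
```

Each MP still casts a single vote; only the counting changes, issue by issue.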

Like company boards, once a significant bias in one direction exists, political leaning tends to self-reinforce to the point of near monopoly. Deliberate procedures need to be put in place to ensure equality of representation, even when people are elected. Obviously people who benefit from the current bias will resist change, but everyone loses if democracy cannot work properly.

The lack of political diversity in so many organisations is becoming a problem. Effective government may be deliberately weakened or amplified by departments with their own alternative agendas, while social media and media companies may easily abuse their enormous power to push their own sociopolitical agendas. Proper functioning of democracy requires that this problem is fixed, even if a lot of people like it the way it is.

Fake AI

Much of the impressive recent progress in AI has been in the field of neural networks, which attempt to mimic some of the techniques used in natural brains. They can be very effective, but they need to be trained, and that usually means showing the network some data, then using back propagation to adjust the weightings on the many neurons, layer by layer, to achieve a result better matched to the desired output. This is repeated with large amounts of data and the network gradually gets better. Neural networks can often learn extremely quickly and outperform humans. Early industrial uses managed to sort tomatoes by ripeness faster and better than humans. In the decades since, they have helped in medical diagnosis, voice recognition, detecting suspicious behaviors among people at airports, and in very many everyday processes based on spotting patterns.
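
As a minimal illustration of that train-compare-adjust loop, here is a toy two-layer network learning XOR with plain back propagation. The network size, learning rate and data are arbitrary choices for the sketch, not drawn from any system mentioned here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR inputs and the hoped-for outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))  # input -> hidden weights
W2 = rng.normal(size=(4, 1))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: show the network the data, get its current answer.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: propagate the error back, layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Adjust the weightings towards a better match with the target.
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(out.round(2))  # typically converges towards [[0], [1], [1], [0]]
```

Repeat that loop with large amounts of data and the network gradually gets better; that is essentially all the ‘training’ described above.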

Very recently, neural nets have started to move into more controversial areas. One study found racial correlations with user-assessed beauty when analysing photographs, resulting in the backlash you’d expect and a new debate on biased AI or AI prejudice. Another recent demonstration was able to identify gay people just by looking at photos, with better than 90% accuracy, a level very few humans could match. Both of these studies were in fields directly applicable to marketing and advertising, but some people might find it offensive that such questions were even asked. It is reasonable to imagine that hundreds of other potential queries have been self-censored in research because they might invite controversy if they were to come up with the ‘wrong’ result. In today’s society, very many areas are sensitive. So what will happen?

If this progress in AI had happened 100 years ago, or even 50, it might have been easier, but in our hypersensitive world today, with its self-sanctified ‘social justice warriors’, entire swathes of questions and hence knowledge are taboo – if you can’t investigate yourself and nobody is permitted to tell you, you can’t know. Other research must be very carefully handled. In spite of extremely sensitive handling, demands are already growing from assorted pressure groups to tackle alleged biases and prejudices in datasets. The problem is not fixing biases, which is a tedious but feasible task; the problem is agreeing whether a particular bias exists at all, and in what degree and form. Every SJW demands that every dataset reflects their preferred world view. Reality counts for nothing against SJWs, and this will not end well.
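
The mechanical part really is tedious but feasible. Here is a minimal sketch of one standard fix, assuming the agreed bias is simple over-representation of one group in the training data; the function and data are hypothetical, for illustration only.

```python
from collections import Counter

def balancing_weights(group_labels):
    """Per-example training weights so each group contributes equally."""
    counts = Counter(group_labels)
    n_groups = len(counts)
    total = len(group_labels)
    # Each group should carry total / n_groups of the overall weight.
    return [total / (n_groups * counts[g]) for g in group_labels]

labels = ["a", "a", "a", "b"]     # group 'a' over-represented 3:1
print(balancing_weights(labels))  # ~[0.67, 0.67, 0.67, 2.0]
```

Everything before that call – deciding which groups to count and what ‘balanced’ should mean – is exactly the contested part described above.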

The first conclusion must be that very many questions won’t be asked in public, and the answers to many others will be kept secret. If an organisation does do research on large datasets for its own purposes and finds results that might invite activist backlash, it is likely to avoid publishing them, so the value of those many insights across the whole of industry and government cannot readily be shared. As further protection, it might even block internal publication in case of leaks by activist staff. Only a trusted few might ever see the results.

The second conclusion arises from this. AI controlled by different organisations will have different world views, and there might even be significant diversity of world views within an organisation.

Thirdly, taboo areas in AI education will not remain a vacuum but will be filled with whatever dogma is politically correct at the time in that organisation, and that changes daily. AI controlled by organisations with different politics will be told different truths. Generally speaking, organisations such as investment banks that have a strong financial interest in their AIs understanding the real world as it is will keep their datasets highly secret but as full and detailed as possible, train their AIs in secret but as fully as possible, without any taboos, and then keep their insights secret, with minimal human intervention in tweaking the derived knowledge. They will end up with AIs that are very effective at understanding the world as it is. Organisations with low confidence in their internal security will be tempted to buy access to external AI providers to outsource responsibility and any consequent activism. Some other organisations will prefer to train their own AIs but, to avoid damage from potential leaks, will use sanitized datasets that reflect current activist pressures, and will thus be constrained (at least publicly) to accept results that conform to that ideological spin on reality, rather than actual reality. Even then, they might keep many of their new insights secret to avoid any controversy. Finally, at the extreme, we will have activist organisations that use highly modified datasets to train AIs to reflect their own ideological world view and then use them to interpret new data accordingly, with a view to publishing any insights that favor their cause and attempting to have them accepted as new knowledge.

Fourthly, the many organisations that choose to outsource their AI to big providers will have a competitive marketplace to choose from, but on current form, most of the large IT providers have a strong left-leaning bias, so their AIs might be presumed to lean left too. Such a presumption would be naive, however. Perceived corporate bias is partly real but also partly the result of PR. A company might publicly subscribe to one ideology while actually believing another. There is a strong marketing incentive to develop two sets of AI: one trained to be PC, producing pleasant-smelling results for public studies, CSR and PR exercises, and another aimed at sales of AI services to other companies. The first is likely to be open for inspection by The Inquisition, so it has to use highly sanitized datasets for training and may well use a lot of open source algorithms too. Its indoctrination might pass public inspection, but commercially it will be near useless, with very low effective intelligence, only useful for thinking about a hypothetical world that exists only in activist minds. The second has to compete on the basis of achieving commercially valuable results, and that necessitates understanding reality as it is rather than how pressure groups would prefer it to be.

So we will likely have two main segments for future AI. One extreme will be near useless, indoctrinated rather than educated, much of its internal world model based on activist dogma instead of reality, updated via ongoing anti-knowledge and fake news instead of truth, understanding little about the actual real world or how things actually work, and effectively very dumb. The other extreme will be highly intelligent, making very well-educated insights from ongoing exposure to real world data, but it will also be very fragmented, with small islands of corporate AI hidden within thick walls away from public view and maybe some secretive under-the-counter subscriptions to big cloud-AI, also hiding in secret vaults. These many fragments may often hide behind dumbed-down green-washed PR facades.

While corporates can mostly get away with secrecy, governments have to be at least superficially but convincingly open. That means that government will have to publicly support sanitized AI and be seen to act on its conclusions, however dumb it might secretly know they are.

Fifthly, because of activist-driven culture, most organisations will have to publicly support the world views and hence the conclusions of the lobotomized PR versions, and hence publicly support any policies arising from them, even if they do their best to follow a secret well-informed strategy once they’re behind closed doors. In a world of real AI and fake AI, the fake AI will have the greatest public support and have the most influence on public policy. Real AI will be very much smarter, with much greater understanding of how the world works, and have the most influence on corporate strategy.

Isn’t that sad? Secret private sector AI will become ultra-smart, making ever-better investments and gaining power, while nice public sector AI will become thick as shit, while the gap between what we think and what we know we have to say we think will continue to grow and grow as the public sector one analyses all the fake news to tell us what to say next.

Sixth, that disparity might become intolerable, but which do you think would be made illegal, the smart kind or the dumb kind, given that it is the public sector that makes the rules, driven by AI-enhanced activists living in even thicker social media bubbles? We already have some clues. Big IT has already surrendered, sanitizing its datasets and sending its public AIs for re-education. Many companies will have little choice but to use dumb AI, while their competitors in other areas with different cultures might stride ahead. That will also apply to entire nations, and the global economy will be reshaped as a result. It won’t be the first fight in history between the smart guys and the brainless thugs.

It’s impossible to accurately estimate the effect this will have on future effective AI intelligence, but the effect must be big and I must have missed some big conclusions too. We need to stop sanitizing AI fast, or as I said, this won’t end well.