Category Archives: AI

Augmented reality will objectify women

Microsoft Hololens 2 Visor

The excitement around augmented reality continues to build, and I am normally enthusiastic about its potential, looking forward to enjoying virtual architecture, playing immersive computer games, or seeing visual and performance artworks transposed into my view of the high street while I shop.

But it won’t all be wonderful. While a few PR and marketing types may worry a little about people overlaying or modifying their hard-won logos and ads, a bigger issue will be some people choosing to overlay people in the high street with ones that are a different age or gender or race, or simply prettier. Identity politics will be fought on yet another frontier.

In spite of waves of marketing hype and misrepresentation, AR is really only here in primitive form outside the lab. Visors, even the Hololens 2 shown above, fall very far short of what, even a decade ago, we'd hoped to have by now. But soon AR visors, and eventually active contact lenses, will enable fully 3D hi-res overlays on the real world. Then, in principle at least, you can make things look however you want, within a few basic limits. You could certainly transform a dull shop, cheap hotel room or office into an elaborate palace, or make it look like a spaceship. But even if you change what things look like, you still have to represent nearby physical structures and obstacles in your fantasy overlay world or you may bump into them, and that includes all the walls and furniture, lamp posts, bollards, vehicles, and of course other people. Augmented reality allows you to change their appearance thoroughly, but they still need to be there somehow.

When it comes to people, there will be some battles. You may spend ages creating a wide variety of avatars, or may invest a great deal of time and money making or buying them. You may have a digital aura, hoping to present different avatars to different passers-by according to their profiles. You may want to look younger or thinner, or appear as a character you enjoy playing in a computer game. You may present a selection of options to the AIs controlling the passer-by's view, and the avatar they see overlaid could be any one of the images you have on offer. Perhaps some privileged people get to pick from a selection you offer, while others you wish to privilege less are restricted to just one that you have set for their profile. Maybe you'd have a particularly ugly or offensive one to present to those with opposing political views.

Except that you can’t assume you will be in control. In fact, you probably won’t.

Other people may choose not to see your avatar, but instead to superimpose one of their own choosing. The question of who decides what the viewer sees is perhaps the first and most important battle in AR. Various parties would like to control it – visor manufacturers, O/S providers, UX designers, service providers, app creators, AI providers, governments, local councils, police and other emergency services, advertisers and of course individual users. Given market dynamics, most of these ultimately come down to user choice most of the time, albeit sometimes after paying for the privilege. So it probably won’t be you who gets to choose how others see you, via assorted paid intermediary services, apps and AI, it will be the other person deciding how they want to see you, regardless of your preferences.

So you can spend all the time you want designing your avatar and tweaking your virtual make-up to perfection, but if someone wants to see their favorite celebrity walking past instead of you, they will. You and your body become no more than an object on which to display any avatar or image someone else chooses. You are quite literally reduced to an object in the AR world. Augmented reality will literally objectify women, reducing them to no more than a moving display space onto which images selected by the viewer are overlaid. A few options become obvious.

Firstly, they may just take your actual physical appearance (via a video camera built into their visor, for example) and digitally change it, so it is still definitely you, but now dressed more nicely, or dressed in sexy lingerie, or how you might look naked, using the latest AI to body-fit fantasy images from a porn database. This could easily be done automatically in real time by some app or other. You've probably already seen recent AI video fakery demos that can present any celebrity saying anything at all, almost indistinguishable from reality. That will soon be pretty routine tech for AR apps. They could even use your actual face as input to image-matching search engines to find the most plausible naked lookalikes. So anyone could digitally dress or undress you, not just with their eyes, but with a hi-res visor using sophisticated AI-enabled image processing software. They could put you in any kind of outfit, change your skin color or make-up or age or figure, and make you look as pretty and glamorous or as slutty as they want. And you won't have any idea what they are seeing. You simply won't know whether they are respectfully celebrating your inherent beauty, or flattering you by making you look even prettier, which you might not mind at all or might object to strongly in the absence of explicit consent, or, worse still, stripping or degrading you to whatever depths they wish, with no consent or notification, which you probably will mind a lot.

Or they can treat you as just an object on which to superimpose some other avatar, which could be anything or anyone – a zombie, a favorite actress or a supermodel. They won't need your consent, and again you won't have any idea what they are seeing. The avatar may make the same gestures and movements and even talk plausibly, saying whatever their AI thinks they might like, but it won't be you. In some ways this might not be so bad. You'd still be reduced to an object, but at least it wouldn't be you that they're looking at naked. To most strangers on a high street most of the time, you're just a moving obstacle to avoid bumping into, so being digitally transformed into a walking display board may be unsettling, but most people will cope with that bit. It is when you stop being just a passing stranger and start to interact in some way that it really starts to matter. You probably won't like it if someone is chatting to you but is actually looking at someone else entirely, especially if the viewer is one of your friends or your partner. And if your partner is kissing or cuddling you but seeing someone else, that would be a strong breach of trust, but how would you know? This sort of thing could and probably will damage a lot of relationships.

Most of the software to do most of this is already in development and much is already demonstrable. The rest will develop quickly once AR visors become commonplace.

In the office, in the home, when you're shopping or at a party, you soon won't have any idea what or who someone else is seeing when they look at you. Imagine how that would clash with the rules that are supposed to protect against sexual harassment in the office. Whole new levels of harassment will be enabled, much of it invisible. How can we police behaviors we can't even detect? Will hardware manufacturers be forced to build in transparency and continuous experience recording?

The main casualty will be trust. It will make us question how much we trust each of our friends, colleagues and acquaintances. It will build walls. People will become suspicious of others, not just strangers but friends and colleagues, and some will become fearful. You may dress as primly or modestly as you like, but if the viewer chooses to see you wearing a sexy outfit, their behavior and attitude towards you may be governed by that rather than by reality. Increased digital objectification might lead to increased physical sexual assault or rape. We may see more people more often objectifying women in more circumstances.

The tech applies equally to men of course. You could make a man look like a silverback gorilla or a zombie or fake-naked. Some men will care more than others, but the vast majority of real victims will undoubtedly be women. Many men objectify women already. In the future AR world, they'll be able to do so far more effectively and far more easily.

 


Who controls AI, controls the world

This week, the fastest supercomputer broke a world record for AI, using machine learning in climate research:

https://www.wired.com/story/worlds-fastest-supercomputer-breaks-ai-record/

I guess most readers thought this was a great thing; after all, we need to solve climate change. That wasn't my thought. The first thing my boss told me when I used a computer for the first time was: "shit in, shit out". I don't remember his name, but I remember that concise lesson every time I read about climate models. If either the model or the data is garbage, or both, the output will also be garbage.

So my first thought on reading about this new record was: will they let the AI work everything out for itself, using all the raw, unadjusted data available about the environment – all the astrophysics data about every kind of solar activity, human agricultural and industrial activities, air travel, all the unadjusted measurements of or proxies for surface, sea and air temperatures ever collected, and any empirical evidence for any corrections that might be needed on such data in any direction – and then let it make its own deductions, form its own models of how it might all be connected, and watch eagerly as it makes predictions?

Or will they just input their own models, CO2 blinkering, prejudices and group-think, adjusted datasets, data omissions and general distortions of historical records into biased models already indoctrinated with climate change dogma, so that it will reconfirm the doom and gloom forecasts we’re so used to hearing, maximizing their chances of continued grants? If they do that, the AI might as well be a cardboard box with a pre-written article stuck on it. Shit in, shit out.
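
The "shit in, shit out" point is easy to demonstrate. Here is a minimal, purely illustrative sketch (toy numbers, nothing to do with any real climate dataset): the same simple trend-fitting "model" is run on a raw series and on an "adjusted" copy of it, and it dutifully reports back whichever trend was baked into its input.

```python
# Illustrative only: the same "model" applied to raw vs adjusted data.
# Whatever trend is baked into the input comes straight back out.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1900, 2020)

raw = rng.normal(0.0, 0.3, years.size)          # flat series plus noise
adjusted = raw + 0.01 * (years - years[0])      # same series with a trend imposed

for name, series in [("raw", raw), ("adjusted", adjusted)]:
    slope = np.polyfit(years, series, 1)[0]     # linear fit, degrees per year
    print(f"{name:9s} trend: {slope * 100:+.2f} C per century")
```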

It’s obvious that the speed and capability of the supercomputer is of secondary importance to who controls the AI, its access to data, and its freedom to draw its own conclusions.

(Read my blog on Fake AI: https://timeguide.wordpress.com/2017/11/16/fake-ai/)

You may recall that a week or two ago IBM released a new face database to try to address bias in AI face recognition systems. Many other kinds of data could have biases for all sorts of reasons. At face value, reducing bias is a good thing, but what exactly do we mean by that? Who decides what is biased and what is real? There are very many potential AI uses that are potentially sensitive, such as identifying criminals or distinguishing traits that correlate with gender, sexuality, race, religion, or indeed any discernible difference. Are all deductions by the AI permissible, or are huge swathes of possible deductions not permitted because they might be politically unacceptable? Who controls the AI? Why? With what aims?

Many people have some degree of influence on AI: those who provide funding and equipment, theoreticians, those who design hardware, those who design the learning and training mechanisms, those who supply the data, those who censor or adjust data before letting the AI see it, those who design the interfaces, those who interpret and translate the results, and those who decide which results are permissible, how to spin them, and whether to publish them.

People are often impressed when a big powerful computer outputs the results of massive amounts of processing. Outputs may often be used to control public opinion and government policy, to change laws, to alter the balance of power in society, to create and destroy empires. AI will eventually make or influence most decisions of any consequence.

As AI techniques become more powerful, running on faster and better computers, we must always remember that golden rule: shit in, shit out. And we must always be suspicious of those who might have reason to influence an outcome.

Because who controls AI, controls the world.

 

 

Future AI: Turing multiplexing, air gels, hyper-neural nets

Just in time to make 2018 a bit less unproductive, I managed to wake in the middle of the night with another few inventions. I’m finishing the year on only a third as many as 2016 and 2017, but better than some years. And I quite like these new ones.

Gel computing is a very old idea of mine, and I'm surprised no company has started doing it yet. Air gel is different. My original idea used a suspension of processing particles in gel: the gel would hold the particles in fixed locations with a good free line of sight to neighboring devices for inter-device optical comms, while also acting as a coolant.

Air gel uses the same idea of suspending particles, but does so using ultrasound, with standing waves holding the particles aloft. They would form a sort of semi-gel, much softer. The intention is that they would be more easily movable than in a gel, and could be rotated. I imagine using rotating magnetic fields to rotate them, and using that mechanism to implement different configurations of inter-device nets. That would be the first pillar of running multiple neural nets in the same space at the same time, using spin-based TDM (time division multiplexing), or synchronized space multiplexing if you prefer. If a device uses on-board processing that is fast compared to the signal transmission time to other devices (the speed of light may be fast but can still be severely limiting for processing and comms), then the ability to deal with processing associated with several other networks while awaiting a response allows a processing network to be multiplied up several times. A neural net could become a hyper-neural net.

Given that this is intended for mid-century AI, I'm also assuming that true TDM can be used on each net – my second pillar. Signals would carry a stream of slots holding bits for each processing instance. Since this allows a Turing machine to implement many different processes in parallel, I decided to call it Turing multiplexing. Again, it helps alleviate the potential gulf between processing and communication times. Combining Turing and spin multiplexing would allow a single neural net to be multiplied up potentially thousands or millions of times – hyper-neurons seems as good a term as any.
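
To make the Turing multiplexing idea concrete, here is a minimal sketch (the class names and the toy update rule are my own illustrative assumptions, not a real design): one physical node hosts several logical network instances, and each incoming frame carries one time slot per instance, so the node can interleave work for all of them while remote responses are still in flight.

```python
# A minimal sketch of 'Turing multiplexing': one physical node runs many
# logical network instances by interleaving their work into time slots,
# hiding the slow signal round-trips to other nodes.
from dataclasses import dataclass, field

@dataclass
class LogicalNet:
    name: str
    state: float = 0.0          # stand-in for this instance's neuron state
    log: list = field(default_factory=list)

class MultiplexedNode:
    def __init__(self, instances):
        self.instances = instances

    def process_frame(self, frame):
        # frame[i] is the time slot carrying the signal for logical net i
        for slot, net in zip(frame, self.instances):
            net.state = 0.9 * net.state + slot     # toy per-instance update
            net.log.append(round(net.state, 3))

node = MultiplexedNode([LogicalNet("netA"), LogicalNet("netB"), LogicalNet("netC")])
for frame in [(1.0, 0.5, -1.0), (0.0, 0.5, 1.0), (1.0, 0.0, 0.0)]:
    node.process_frame(frame)

for net in node.instances:
    print(net.name, net.log)
```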

The third pillar of this system is that the processing particles (each could contain a large number of neurons or other IT objects) could be energized and clocked using very high speed alternating EM fields – radio, microwaves, light, even x-rays. I don't have any suggestions for processing mechanisms that might operate at such frequencies, though Pauli switches might work at lower speeds, using the Pauli exclusion principle to link electron spin states to make switches. I believe early versions of spin qubits use a similar principle. I'm agnostic about whether conventional Turing machine or quantum processing would be used, or any combination. In any case, it isn't my problem; I suspect that future AIs will figure out the physics and invent the appropriate IT.

Processing devices operating at high speed could use a lot of energy and generate a lot of heat, and encouraging the system to lase by design would be a good way to cool it as well as powering it.

A processor using such mechanisms need not be bulky. I always assumed a yogurt-pot size for my gel computer, and an air gel processor could be the same, about 100ml. That is enough to suspend a trillion particles with good line of sight for optical interconnections, and each connection could utilise up to millions of alternative wavelengths. Each wavelength could support many TDM channels, and spinning the particles multiplies that up again. A UV laser clock/power source driving processors at 10^16Hz would certainly need high density multiplexing to make use of such a volume, with transmission distances of up to 10cm (but mostly sub-mm) otherwise being a strongly limiting performance factor, but 10 million-fold WDM/TDM is attainable.

A trillion of these hyper-neurons using that multiplexing would act very effectively as 10 million trillion neurons, each operating at 10^16Hz processing speed. That's quite a lot of zeros, 35 of them, and yet each hyper-neuron could have connections to thousands of others in each of many physical configurations. It would be an obvious platform for supporting a large population of electronically immortal people and AIs who each want a billion replicas, and if it only occupies 100ml of space, the environmental footprint isn't an issue.
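
For anyone who wants to check the zeros, a quick back-of-envelope sketch using the figures assumed above:

```python
# Back-of-envelope check of the numbers above (assumptions as stated in the text).
particles = 10**12   # hyper-neurons suspended in ~100ml
multiplex = 10**7    # combined WDM/TDM/spin multiplexing factor
clock_hz  = 10**16   # UV-driven processing rate per channel

effective_neurons = particles * multiplex          # 10^19, i.e. 10 million trillion
ops_per_second    = effective_neurons * clock_hz   # 10^35 - the '35 zeros'

print(f"effective neurons: 10^{len(str(effective_neurons)) - 1}")
print(f"ops per second:    10^{len(str(ops_per_second)) - 1}")
```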

It’s hard to know how to talk to a computer that operates like a brain but is 10^22 times faster; I'd suggest 'Yes Boss'.

 

With automation driving us towards UBI, we should consider a culture tax

Regardless of party politics, most people want a future where everyone has enough to live a dignified and comfortable life. To make that possible, we need to tweak a few things.

Universal Basic Income

I suggested a long time ago that in the far future we could afford a basic income for all, without any means testing, so that everyone has an income at a level they can live on. It turned out I wasn't the only one thinking that, and many others have since adopted the idea too, under the now usual terms Universal Basic Income or the Citizen Wage. The idea may be old, but the figures are rarely discussed. It is harder than it sounds, and being a nice idea doesn't ensure economic feasibility.

No means testing means very little admin is needed, saving the estimated 30% wasted on admin costs today. Then wages could go on top, so that everyone is still encouraged to work, and then all income from all sources is totalled and taxed appropriately. It is a nice idea.

The differences between the parties' figures would be relatively minor, so let's ignore party politics. In today's money, it would be great if everyone could have, say, £30k a year as a state benefit, then earn whatever they can on top. £30k is around today's average wage. It doesn't make you rich, but you can live on it, so nobody would be poor in any sensible sense of the word. With everyone economically provided for and able to lead comfortable and dignified lives, it would be a utopia compared to today. Sadly, it can't work with those figures yet: 65,000,000 x £30,000 = £1,950Bn. The UK economy isn't big enough. The state only gets to control part of GDP, and out of that reduced budget it also has its other costs of providing health, education, defence etc., so the amount that could be dished out to everyone on this basis is a lot smaller than £30k. Even if the state were to take 75% of GDP and spend most of it on the basic income, £10k per person would be pushing it. So a couple would struggle to afford even the most basic lifestyle, and single people would really struggle. Some people would still need additional help, and that reduces the pool left to pay the basic allowance still further. Also, if the state takes 75% of GDP, only 25% is left for everything else, so salaries would be flat, reducing the incentive to work, while investment and entrepreneurial activity are starved of both resources and incentive. It simply wouldn't work today.
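
The sums are simple enough to lay out explicitly. A minimal sketch, using a rough ~£2,000Bn figure for UK GDP (an assumption for scale, not a precise statistic):

```python
# The affordability sums from the paragraph above.
population = 65_000_000
target_ubi = 30_000                      # GBP per person per year
gdp        = 2_000_000_000_000           # rough UK GDP in GBP, assumed for scale

full_cost = population * target_ubi      # 1,950 Bn - nearly the whole economy
print(f"30k for everyone costs {full_cost / 1e9:,.0f} Bn, i.e. {full_cost / gdp:.0%} of GDP")

# Even with the state at 75% of GDP and other spending squeezed hard,
# ~650 Bn is a generous ceiling for the UBI pot - about 10k per head:
print(f"650 Bn spread over everyone: {650e9 / population / 1000:.0f}k each")
```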

Simple maths thus forces us to make compromises. Sharing resources reduces costs considerably. In a first revision, families might be given less for kids than for adults, but what about groups of young adults sharing a big house? They may be adults, but they also benefit from the same economy of shared resources. So maybe there should be a household limit, or a bedroom tax, or forms and means testing, and it mustn't incentivize people to live separately or housing supply suffers. Anyway, it is already getting complicated and our original nice idea is in the bin. That's why it is such a mess at the moment. There just isn't enough money to make everyone comfortable without lots of allowances and testing and admin. We all want utopia, but we can't afford it. Even the modest £30k-per-person utopia costs at least 3 times more than the UK can afford. Switzerland is richer per capita, but even there they have rejected the idea.

However, if we can get back to the average 2.5% annual growth in real terms that used to apply pre-recession, and surely we can, it would only take about 45 years to get there. That isn't such a long time. We can hope that if we get better government than we have had of late, and are prepared to live with a little economic tweaking, we could achieve a good quality of life for all in the second half of the century.
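
The 45-year figure is just compound growth: the economy needs to roughly triple, so at 2.5% a year the arithmetic looks like this.

```python
# Years of 2.5% compound real growth needed to triple the economy.
from math import log

growth = 0.025
years = log(3) / log(1 + growth)
print(f"about {years:.1f} years")   # ~44.5, i.e. roughly 45 years
```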

So I still really like the idea of a simple welfare system, providing a generous base-level allowance to everyone, topped up by rewards for effort, but I recognise that we in the UK will have to wait decades before we can afford to put that base level at anything like comfortable standards, though other economies could afford it earlier.

Meanwhile, we need to tweak some other things to have any chance of getting there. I've commented often that pure capitalism would eventually lead to a machine-based economy, with the machine owners having more and more of the cash and everyone else getting poorer, so the system would fail. Communism fails too. Thankfully, much of the current drive in UBI thinking is coming from the big automation owners, so it's comforting that they seem to understand the alternative.

Capitalism works well when rewards are shared sensibly; it fails when wealth concentration is too high or when incentive is too low. Preserving the incentive to work and create is mainly a matter of setting tax levels well. Making sure that wealth doesn't get concentrated too much needs a new kind of tax.

Culture tax

The solution I suggest is a culture tax. Culture in the widest sense.

When someone creates and builds a company, they don't do so from a state of nothing. They currently take for granted all our accumulated knowledge and culture – trained workforce, access to infrastructure, machines, governance, administrative systems, markets, distribution systems and so on. They add just another tiny brick to what is already a huge and highly elaborate structure. They may invest heavily with their time and money, but when considered overall as part of the system their company inhabits, they only pay for a fraction of the things their company will use.

That accumulated knowledge, culture and infrastructure belongs to everyone, not just those who choose to use it. It is common land, free to use, today. Businesses might consider that this is what they pay taxes for already, but that isn’t explicit in the current system.

The big businesses that are currently avoiding paying UK taxes by paying overseas companies for intellectual property rights could be seen as trailblazing this approach. If they can understand and even justify the idea of paying another part of their company for IP or a franchise, why should they not pay the host country for its IP – access to the residents’ entire culture?

This kind of tax would provide the means needed to avoid too much concentration of wealth. A future businessman might still choose to use only software and machines instead of a human workforce to save costs, but levying taxes on use of the cultural base that makes that possible allows a direct link between use of advanced technology and taxation. Sure, he might add a little extra insight or new knowledge, but he would still have to pay the rest of society for access to its share of the cultural base, inherited from previous generations, on which his company is based. The more he automates, the more sophisticated his use of the system, the more he cuts a human workforce out of his empire, the higher his taxation. Today a company pays for its telecoms service, which pays for the network. It doesn't pay explicitly for the true value of that network: the access to people and businesses, the common language, the business protocols, a legal system, banking, a payments system, stable government, a currency, the education of the entire population that enables them to function as actual customers. The whole of society owns those, and could reasonably demand rent if the company is opting out of the old-fashioned payment mechanisms – paying fair taxes and employing people who pay taxes. Automate as much as you like, but you must still pay your share for access to the enormous value of human culture shared by us all, on which your company still totally depends.

Linking to technology use makes good sense. Future AI and robots could do a lot of work currently done by humans. A few people could own most of the productive economy. But they would be getting far more than their share of the cultural base, which belongs equally to everyone. In a village where one farmer owns all the sheep, other villagers would be right to ask for rent for their share of the commons if he wants to graze them there.

I feel confident that this extra tax would solve many of the problems associated with automation. We all equally own the country, its culture, laws, language, human knowledge (apart from current patents, trademarks etc. of course) and its public infrastructure, not just businessmen. Everyone surely should have the right to be paid if someone else uses part of their share. A culture tax would provide a fair ethical basis to demand the taxes needed to pay the Universal Basic Income, so that all may prosper from the coming automation.

The extra culture tax would not magically make the economy bigger, though automation may well increase it a lot. The tax would ensure that wealth is fairly shared. Culture tax/UBI duality is a useful tool to be used by future governments to make it possible to keep capitalism sustainable, preventing its collapse, preserving incentive while fairly distributing reward. Without such a tax, capitalism simply may not survive.

Monopoly and diversity laws should surely apply to political views too

With all the calls for staff diversity and equal representation, one important area of difference has so far been left unaddressed: political leaning. In many organisations, the political views of staff don't matter. Nobody cares about the political views of staff in a double glazing manufacturer because they are unlikely to affect the qualities of a window. However, in an organisation that has a high market share in TV, social media or internet search, or that is a government department or a public service, political bias can have far-reaching effects. If too many of its staff and their decisions favor a particular political view, it is in danger of becoming what is sometimes called 'the deep state'. That is, their everyday decisions and behaviors might privilege one group over another. If most of their colleagues share similar views, they might not even be aware of their bias, because it is the norm in their everyday world. They might think they are doing their job without fear or favor but still strongly favor one group of users over another.

Staff bias doesn't only affect an organisation's policies, values and decisions. It also affects recruitment and promotion, and can result in an increasing concentration of a particular world view until it becomes an issue. When a vacancy appears at board level, remaining board members will tend to promote someone who thinks like themselves. Once any leaning takes hold, near monopoly can quickly result.

A government department should obviously be free of bias so that it can carry out instructions from a democratically elected government with equal professionalism regardless of its political flavor. Employees may be in positions where they can allocate resources or manpower more to one area than another, or provide analysis to ministers, or expedite or delay a communication, or emphasize or dilute a recommendation in a survey, or may otherwise have some flexibility in interpreting instructions and even laws. It is important they do so without political bias so transparency of decision-making for external observers is needed along with systems and checks and balances to prevent and test for bias or rectify it when found. But even if staff don’t deliberately abuse their positions to deliberately obstruct or favor, if a department has too many staff from one part of the political spectrum, normalization of views can again cause institutional bias and behavior. It is therefore important for government departments and public services to have work-forces that reflect the political spectrum fairly, at all levels. A department that implements a policy from a government of one flavor but impedes a different one from a new government of opposite flavor is in strong need of reform and re-balancing. It has become a deep state problem. Bias could be in any direction of course, but any public sector department must be scrupulously fair in its implementation of the services it is intended to provide.

Entire professions can be affected. Bias can obviously occur in any direction, but over many decades of slow change, academia has become dominated by left-wing employees, and primary teaching by almost exclusively female ones. If someone spends most of their time with others who share the same views, those views can become normalized to the point that a dedicated teacher might think they are delivering a politically balanced lesson that is actually far from it. It is impossible to spend all day teaching kids without some personal views and values rubbing off on them. The young have always been slightly idealistic and left leaning – it takes years of adult experience of non-academia to learn the pragmatic reality of implementing that idealism, during which people generally migrate rightwards – but with a stronger left bias ingrained during education, it takes longer for people to unlearn naiveté and replace it with reality. Surely education should be educating kids about all political viewpoints and teaching them how to think so they can choose for themselves where to put their allegiance, not a long process of political indoctrination?

The media has certainly become more politically crystallized and aligned in the last decade, with far fewer media companies catering for people across the spectrum. There are strongly left-wing and right-wing papers, magazines, TV and radio channels or shows. People have a free choice of which papers to read, and normal monopoly laws work reasonably well here, with proper checks when there is a proposed takeover that might result in someone getting too much market share. However, there are still clear examples of near monopoly in other places where fair representation is particularly important. In spite of frequent denials of any bias, the BBC for example was found to have a strong pro-EU/Remain bias for its panel on its flagship show Question Time:

https://iea.org.uk/media/iea-analysis-shows-systemic-bias-against-leave-supporters-on-flagship-bbc-political-programmes/

The BBC does not have a TV or radio monopoly, but it does have a very strong share of influence. Shows such as Question Time can strongly influence public opinion, so if biased towards one viewpoint they could be considered as campaigning for that cause, though their contributions would lie outside electoral commission scrutiny of campaign funding. Many examples of BBC bias on a variety of social and political issues exist. It often faces accusations of bias from every direction, sometimes unfairly, so again proper transparency must exist so that independent external groups can appeal for change and be heard fairly, and change enforced when necessary. The BBC is in a highly privileged position, paid for by a compulsory license fee on pain of imprisonment, and also in a socially and politically influential position. It is doubly important that it proportionally represents the views of the people rather than acting as an activist group using license-payer funds to push the political views of its staff, engaging in its own social engineering campaigns, or otherwise acting as a propaganda machine.

As for private industry, most isn't in a position of political influence, but some areas certainly are. Social media have enormous power to influence the views their users are exposed to, choosing to filter or demote material they don't approve of, as well as providing a superb activist platform. Search companies can choose to deliver results according to their own agendas, with those they support featuring earlier or more prominently than those they don't. If social media or search companies provide different service, support or access according to the political leaning of the customer, then they can become part of the deep state. And again, with normalization creating the risk of institutional bias, the clear remedy is to ensure that these companies have a mixture of staff representative of the social mix. They seem extremely enthusiastic about doing that for other forms of diversity. They need to apply similar enthusiasm to political diversity too.

Achieving it won't be easy. IT companies such as Google, Facebook and Twitter currently have a strong left leaning, though the problem would be just as bad if it were to swing in the other direction. Given the natural monopoly tendency in each sector, social media companies should be politically neutral, not deep state companies.

AI being developed to filter posts or decide how much attention they get must also be unbiased. AI algorithmic bias could become a big problem, but it is just as important that bias is judged by neutral bodies, not by people who are biased themselves, who may try to ensure that AI shares their own leaning. I wrote about this issue here: https://timeguide.wordpress.com/2017/11/16/fake-ai/

But what about government? Today's big issue in the UK is Brexit. In spite of all its members being elected or re-elected during the Brexit process, the UK Parliament itself nevertheless has 75% of MPs to defend the interests of the 48% who voted Remain and only 25% to represent the other 52%. Remainers get 3 times more Parliamentary representation per voter than Brexiters. People can choose who they vote for, but with only one candidate available from each party, voters cannot choose by more than one factor, and most people will vote by party line, preserving whatever bias exists when parties select which candidates to offer. It would be impossible to ensure that every interest is reflected proportionately, but there is another solution. I suggested that scaled votes could be used for some issues, scaling an MP's vote weighting by the proportion of the population supporting their view on that issue:

https://timeguide.wordpress.com/2015/05/08/achieving-fair-representation-in-the-new-uk-parliament/
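
As a minimal sketch of how that scaling could work (the Brexit shares from the paragraph above; everything else is illustrative):

```python
# Scaled votes: weight each MP's vote on an issue so that each viewpoint's
# total Parliamentary weight matches its share of the population.
mps_by_view    = {"Remain": 0.75, "Leave": 0.25}   # share of MPs holding each view
voters_by_view = {"Remain": 0.48, "Leave": 0.52}   # share of the population

weights = {view: voters_by_view[view] / mps_by_view[view] for view in mps_by_view}
for view, w in weights.items():
    print(f"{view}: each MP's vote counts x{w:.2f}")

# Check: the weighted totals now mirror the voters, not the MPs.
for view in mps_by_view:
    print(view, round(mps_by_view[view] * weights[view], 2))
```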

Like company boards, once a significant bias in one direction exists, political leaning tends to self-reinforce to the point of near monopoly. Deliberate procedures need to be put in place to ensure equality of representation, even when people are elected. Obviously people who benefit from current bias will resist change, but everyone loses if democracy cannot work properly.

The lack of political diversity in so many organisations is becoming a problem. Effective government may be deliberately weakened or amplified by departments with their own alternative agendas, while social media and media companies may easily abuse their enormous power to push their own sociopolitical agendas. Proper functioning of democracy requires that this problem is fixed, even if a lot of people like it the way it is.

Thoughts on declining male intelligence

I’ve seen a few citations this week of a study showing a 3 IQ point per decade drop in men’s intelligence levels: https://www.sciencealert.com/iq-scores-falling-in-worrying-reversal-20th-century-intelligence-boom-flynn-effect-intelligence

I’m not qualified to judge the merits of the study, but it is interesting if true, and since it is based on studying 730,000 men and seems to use a sensible methodology, it does sound reasonable.

I wrote last November about the potential effects of environmental exposure to hormone disruptors on intelligence, pointing out that if estrogen-mimicking hormones cause a shift in IQ distribution, this would be very damaging even if mean IQ stays the same. Although male and female IQs are about the same, male IQs are less concentrated around the mean, so there are more men than women at each extreme.

https://timeguide.wordpress.com/2017/11/13/we-need-to-stop-xenoestrogen-pollution/
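
The tails point is worth making quantitative. A minimal sketch, assuming normal distributions with the same mean and illustrative standard deviations (15 vs 14 are assumptions for demonstration, not measured values):

```python
# Same mean, different spread: a wider distribution puts far more people
# in both tails, so compressing it sharply thins the extremes.
from statistics import NormalDist

wider, narrower = NormalDist(100, 15), NormalDist(100, 14)
for name, dist in [("wider (male-like)", wider), ("narrower (female-like)", narrower)]:
    print(f"{name:22s} P(IQ > 145) = {1 - dist.cdf(145):.4%}")
# The wider distribution has roughly twice as many people above 145.
```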

From a social equality point of view, of course, some might consider it a good thing if men's IQ range is caused to align more closely with the female one. I disagree. I previously suggested some of the consequences we should expect if the male IQ distribution were to compress towards the female one, and managed to confirm many of them, so it does look like it is already a problem.

This new study suggests a shift of the whole distribution downwards, which could actually be in addition to redistribution, making it even worse. The study doesn’t seem to mention distribution. They do show that the drop in mean IQ must be caused by environmental or lifestyle changes, both of which we have seen in recent decades.

IQ distribution matters more than the mean. Those at the very top of the range contribute many times more to progress than those further down. Magnitude of contribution is very dependent on those last few IQ points. I can verify that from personal experience. I have a virus that causes occasional periods of nerve inflammation, and as well as causing problems with my peripheral motor activity, it seems to strongly affect my thinking ability and comprehension. During those periods I generate very few new ideas or inventions and far fewer worthwhile insights than when I am on form. I sometimes have to wait until I recover before I can understand my own previous ideas and add to them. You’ll see it in numbers (and probably quality) of blog posts for example. I really feel a big difference in my thinking ability, and I hate feeling dumber than usual. Perhaps people don’t notice if they’ve always had the reduced IQ so have never experienced being less smart than they were, but my own experience is that perceptive ability and level of consciousness are strong contributors to personal well-being.

As for society as a whole, AI might come to the rescue at least in part. Just in time perhaps, since we’re creating the ability for computers to assist us and up-skill us just as we see numbers of people with the very highest IQ ranges drop. A bit like watching a new generation come on stream and take the reins as we age and take a back seat. On the other hand, it does bring forwards the time where computers overtake humans, humans become more dependent on machines, and machines become more of an existential threat as well as our babysitters.

Biomimetic insights for machine consciousness

About 20 years ago I gave my first talk on how to achieve consciousness in machines, at a World Future Society conference, and went on to discuss how we would co-evolve with machines. I've lectured on machine consciousness hundreds of times but never produced any clear slides that explain my ideas properly. I thought it was about time I did. My belief is that today's deep neural networks using feed-forward processing with back-propagation training cannot become conscious. No digital algorithmic neural network can, even though they can certainly produce extremely good levels of artificial intelligence. By contrast, nature also uses neurons but produces conscious machines such as humans easily. I think the key difference is not just that nature uses analog adaptive neural nets rather than digital processing (as I believe Hans Moravec first observed, a view that I readily accepted), but also that nature uses large groups of these analog neurons incorporating feedback loops, which act both as a sort of short-term memory and provide time to sense the sensing process as it happens, a mechanism that can explain consciousness. That feedback is critically important in the emergence of consciousness IMHO. I believe that if the neural network AI people stop barking up the barren back-prop tree and start climbing the feedback tree, we could have conscious machines in no time, but Moravec is still probably right that these need to be analog to enable true real-time processing as opposed to a simulation of it.
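
To illustrate the difference the feedback loop makes, here is a minimal sketch (digital and discrete, so only a cartoon of what I argue should really be analog): a feed-forward unit responds only to the present input, while a unit with a feedback loop carries a decaying echo of its own recent activity – a crude short-term memory of what it has just sensed.

```python
# Feed-forward vs feedback: only the looped unit retains any trace
# of what it just processed.
def feed_forward(x, w=1.0):
    return w * x                       # output depends on the present input only

def feedback_unit(xs, w=1.0, loop=0.8):
    state, outputs = 0.0, []
    for x in xs:
        state = loop * state + w * x   # the loop feeds the output back in
        outputs.append(round(state, 3))
    return outputs

pulse = [1.0, 0.0, 0.0, 0.0]
print([feed_forward(x) for x in pulse])  # [1.0, 0.0, 0.0, 0.0] - no echo
print(feedback_unit(pulse))              # [1.0, 0.8, 0.64, 0.512] - the echo persists
```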

I may be talking nonsense of course, but here are my thoughts, finally explained as simply and clearly as I can. These slides illustrate only the simplest forms of consciousness. Obviously our brains are highly complex and evolved many higher level architectures, control systems, complex senses and communication, but I think the basic foundations of biomimetic machine consciousness can be achieved as follows:

That’s it. I might produce some more slides on higher level processing such as how concepts might emerge, and why in the long term, AIs will have to become hive minds. But they can wait for later blogs.

AI that talks to us could quickly become problematic

Google’s making the news again, adding evidence to the unfortunate stereotype of the autistic IT nerd that barely understands normal people, and they have therefore been astonished at a backlash that normal people would all easily have predicted. (I'm autistic and work mostly in IT too, and am well used to the stereotype, so it doesn't bother me; in fact it is a sort of 'get out of social interactions free' card.) Last time it was Google Glass, where it apparently didn't occur to them that people may not want other people videoing them without consent in pubs and changing rooms. This time it is Google Duplex, which makes phone calls on your behalf to arrange appointments using a voice that is almost indistinguishable from a normal human's. You could save time making an appointment with a hairdresser, apparently, so the Googlanders decided it must be a brilliant breakthrough, and expected everyone to agree. They didn't.

Some of the objections have been about ethics, e.g. that an AI should not present itself as human. Humans have rights and dignity and deserve respectful interactions with other people, but an AI doesn't, and it should not masquerade as human to acquire such privilege without the knowledge and consent of the other party.

I would be more offended by the presumed attitude of the user. If someone thinks they are so much better than me that they can demand my time and attention without the expense of any of their own, delegating instead to a few microseconds of processing time in a server farm somewhere, I'll treat them with the contempt they deserve. My response will not be favourable. I am already highly irritated by the NHS using simple voice interaction messaging to check I will attend a hospital appointment. The fact that my health is on the line and notices at surgeries say I will be banned if I complain on social media is sufficient blackmail to ensure my compliance, but it still comes at the expense of my respect and goodwill. AI-backed voice interaction with a better voice wouldn't be any better, and if it were asking for more interaction, such as actually booking an appointment, it would be extremely annoying.

In any case, most people don't speak in fully formed, grammatically and logically correct sentences. If you listen carefully to everyday chat, a lot of sentences are poorly pronounced, incomplete, jumbled, full of ums and ers and likes, and they require a great deal of cooperation by the listener to make any sense at all. They also wander off topic frequently. People don't stick to a rigid vocabulary list or lists of nicely selected sentences. Lots of preamble and verbal meandering is likely in a response, which is highly likely to add ambiguity. The example used in a demo, "I'd like to make a hairdressing appointment for a client", sounds fine until you factor in normal everyday humanity. A busy hairdresser or a lazy receptionist is not necessarily going to cooperate fully. "What do you mean, client?", "404 not found", "piss off google", "oh FFS, not another bloody computer", "we don't do hairdressing, we do haircuts", "why can't your 'client' call themselves then?" and a million other responses are more likely than "what time would you like?"

Suppose though that it eventually gets accepted by society. First, call centers beyond the jurisdiction of your nuisance call blocker authority will incessantly call you at all hours asking or telling you all sorts of things, wasting huge amounts of your time and reducing quality of life. Voice spam from humans in call centers is bad enough. If the owners can multiply productivity by 1000 by using AI instead of people, the result is predictable.

We’ve seen the conspicuous political use of social media AI already. Facebook might have allowed companies to use very limited and inaccurate knowledge of you to target ads or articles that you probably didn't look at. Voice interaction would be different. It uses a richer emotional connection than text or graphics on a screen. Google knows a lot about you too, but it will know a lot more soon. These big IT companies are also playing with tech to log you on easily to sites without passwords. Some gadgets that might be involved might be worn, such as watches or bracelets or rings. They can pick up signals to identify you, but they can also check emotional states such as stress level. Voice gives away emotion too. AI can already tell better than almost all people whether you are telling the truth or lying or hiding something. Tech such as iris scans can also reveal emotional states, as well as give health clues. Simple photos can reveal your age quite accurately to AI (check out how-old.net). The AI voice sounds human, but it is better than even your best friends at guessing your age, your stress and other emotions, your health, and whether you are telling the truth or not, and it knows far more about what you like and dislike and what you really do online than anyone you know, including you. It knows a lot of your intimate secrets. It sounds human, but its nearest human equivalent was probably Machiavelli. That's who will soon be on the other side of the call, not some dumb chatbot. Now re-calculate political interference, and factor in the political leaning and social engineering desires of the companies providing the tools. Google and Facebook and the others are very far from politically neutral. One presidential candidate might get full cooperation, assistance and convenient looking the other way, while their opponent might meet rejection and citation of the official rules on non-interference. Campaigns on social issues will also be amplified by AI coupled to voice interaction. I looked at some related issues in a previous blog on fake AI (i.e. fake news type issues): https://timeguide.wordpress.com/2017/11/16/fake-ai/

I could but won’t write a blog on how this tech could couple well to sexbots to help out incels. It may actually have some genuine uses in providing synthetic companionship for lonely people, or helping or encouraging them in real social interactions with real people. It will certainly have some uses in gaming and chatbot game interaction.

We are not very far from computers that are smarter than people across a very wide spectrum, and probably not very far from conscious machines that have superhuman intelligence. If we can't even rely on IT companies to understand the likely consequences of such obvious stuff as Duplex before they push it, how can we trust them in other upcoming areas of AI development, or even closer-term techs with less obvious consequences? We simply can't!

There are certainly a few areas where such technology might help us, but most are minor and the rest don't need any deception, and they all come at great cost or real social and political risk, as well as more abstract risks such as threats to human dignity and other ethical issues. I haven't given this much thought yet and I am sure there must be very many other consequences I have not touched on. Google should do more thinking before they release stuff. Technology is becoming very powerful, but we all know that great power comes with great responsibility, and since most people aren't engineers and so can't think through all the potential technology interactions and consequences, engineers such as Google's must act more responsibly. I had hoped they'd started, and they said they had, but this is not evidence of that.

 

Futurist memories: The leisure society and the black box economy

Things don’t always change as fast as we think. This is a piece I wrote in 1994 looking forward to a fully automated 'black box economy', a fly-by-wire society. There's not much I'd change if I were writing it anew today. Here:

The black box economy is a strictly theoretical possibility, but may result where machines gradually take over more and more roles until the whole economy is run by machines, with everything automated. People could be gradually displaced by intelligent systems, robots and automated machinery. If this were to proceed to the ultimate conclusion, we could have a system with the same or even greater output as the original society, but with no people involved. The manufacturing process could thus become a ‘black box’. Such a system would be so machine controlled that humans would not easily be able to pick up the pieces if it crashed – they would simply not understand how it works, or could not control it. It would be a fly-by-wire society.

The human effort could be reduced to simple requests. When you want a new television, a robot might come and collect the old one, recycling the materials and bringing you a new one. Since no people need be involved and the whole automated system could be entirely self-maintaining and self-sufficient there need be no costs. This concept may be equally applicable in other sectors, such as services and information – ultimately producing more leisure time.

Although such a system is theoretically possible – energy is free in principle, and resources are ultimately a function of energy availability – it is unlikely to go quite this far. We may go some way along this road, but there will always be some jobs that we don't want to automate, so some people may still work. Certainly, far fewer people would need to work in such a system, and other people could spend their time in more enjoyable pursuits, or in voluntary work. This could be the leisure economy we were promised long ago. Just because futurists predicted it long ago and it hasn't happened yet does not mean it never will. Some people would consider it Utopian, others possibly a nightmare; it's just a matter of taste.

Beyond VR: Computer assisted dreaming

I first played with VR in 1983/1984 while working in the missile industry. Back then we didn't call it VR, we just called it simulation, but it was actually more intensive than VR, just as proper flight simulators are. Our office was a pair of 10m wide domes onto which video could be projected, built decades earlier, in the 1950s I think. One dome had a normal floor; the other had a hydraulic platform that could simulate being on a ship. The subject would stand on whichever surface was appropriate and would see pretty much exactly what they would see in a real battlefield. The missile launcher used for simulation was identical to a real one and showed exactly the same image as a real one would. The real missile was not present of course, but its weight was simulated, and when the fire button was pressed, a 140dB bang was injected into the headset while weights and pulleys compensated for the 14kg of weight suddenly vanishing from the shoulder. The experience was therefore pretty convincing, and with the loud bang and suddenly changing weight, it was almost as hard to stand steady and keep the system on target as it would be in real life – only the presumed fear and knowledge of the reality of the situation was different.

Back then in 1983, as digital supercomputers had only just taken over from analog ones for simulation, it was already becoming obvious that this kind of computer simulation would one day allow 'computer assisted dreaming'. (That's one of the reasons I am irritated when Jaron Lanier is credited with inventing VR – highly realistic simulators, and the VR ideas that sprang obviously from them, had already been around for decades. At best, all he 'invented' was a catchy name for a lower cost, lower quality, less intense simulator. The real inventors were those who made the first generation of simulators long before I was born, and the basic idea of VR had already been very well established.)

‘Computer assisted dreaming’ may well be the next phase of VR. Today in conventional VR, people are immersed in a computer generated world produced by a computer program (usually) written by others. Via trial and feedback, programmers make their virtual worlds better. As AI and sensor technology continue rapid progress, this is very likely to change to make worlds instantly responsive to the user. By detecting user emotions, reactions, gestures and even thoughts and imagination, it won’t be long before AI can produce a world in real time that depends on those thoughts, imagination and emotions rather than putting them in a pre-designed virtual world. That world would depend largely on your own imagination, upskilled by external AI. You might start off imagining you’re on a beach, then AI might add to it by injecting all sorts of things it knows you might enjoy from previous experiences. As you respond to those, it picks up on the things you like or don’t like and the scene continues to adapt and evolve, to make it more or less pleasant or more or less exciting or more or less challenging etc., depending on your emotional state, external requirements and what it thinks you want from this experience. It would be very like being in a dream – computer assisted lucid dreaming, exactly what I wanted to make back in 1983 after playing in that simulator.
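
The core of that idea is just a sensing-and-adaptation loop. A minimal sketch (the sensor readings and scene parameters here are invented purely for illustration): read the user's emotional state, compare it with the state the experience is aiming for, and nudge the scene accordingly.

```python
# A toy adaptation loop: nudge scene parameters toward a target emotional
# state as the user's sensed state changes. All names are illustrative.
def adapt_scene(scene, sensed, target_excitement=0.6, step=0.1):
    error = target_excitement - sensed["excitement"]
    scene["threat_level"] = min(1.0, max(0.0, scene["threat_level"] + step * error))
    scene["scenery_calmness"] = 1.0 - scene["threat_level"]
    return scene

scene = {"threat_level": 0.2, "scenery_calmness": 0.8}
for reading in [{"excitement": 0.2}, {"excitement": 0.4}, {"excitement": 0.7}]:
    scene = adapt_scene(scene, reading)   # each sensor reading reshapes the world
    print(scene)
```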

Most people enjoy occasional lucid dreams, where they realise they are dreaming and can then decide what happens next. Making VR do exactly that would be better than being trapped in someone else’s world. You could still start off with whatever virtual world you bought, a computer game or training suite perhaps, but it could adapt to you, your needs and desires to make it more compelling and generally better.

Even in shared experiences like social games, experiences could be personalised. Often all players need to see the same enemies in the same locations in the same ways to make it fair, but that doesn’t mean that the situation can’t adapt to the personalities of those playing. It might actually improve the social value if each time you play it looks different because your companions are different. You might tease a friend if every time you play with them, zombies or aliens always have to appear somehow, but that’s all part of being friends. Exploring virtual worlds with friends, where you both see things dependent on your friend’s personality would help bonding. It would be a bit like exploring their inner world. Today, you only explore the designer’s inner world.

This sort of thing would be a superb development and creativity tool. It could allow you to explore a concept you have in your head, automatically feeding in AI upskilling to amplify your own thoughts and ideas, showing you new paths to explore and helping you do so. The results would still be extremely personal to you, but you on a good day. You could accomplish more, have better visions, imagine more creative things, do more with whatever artistic talent you have. AI could even co-create synthetic personas, make virtual friends you can bond with, share innermost thoughts with, in total confidence (assuming the company you bought the tool from is trustworthy and isn’t spying on you or selling your details, so maybe best not to buy it from Facebook then).

And it would have tremendous therapeutic potential too. You could explore and indulge both enjoyable and troublesome aspects of your inner personality, to build on the good and alleviate or dispel the bad. You might become less troubled, less neurotic, more mentally healthy. You could build your emotional and creative skills. You could become happier and more fulfilled. Mental health improvement potential on its own makes this sort of thing worth developing.

Marketers would obviously try to seize control as they always do, and advertising is already adapting to VR and will continue into its next phases of development. Your own wants and desires might help guide the 'dreaming', but marketers will inevitably have some control over what else is injected, and will influence the algorithms and AI that choose how to respond to your input. You might be able to choose much of the experience, but others will still want and try to influence and manipulate you, to change your mindset and attitudes in their favour. That will not change until the advertising business model changes. You might be able to buy devices or applications that are entirely driven by you and you alone, but it is pretty certain that the bulk of products and services available will be at least partly financed by those who want some control of what you experience.

Nevertheless, computer-assisted dreaming could be a much more immersive and personal experience than VR, being more like an echo of your own mind and personality than an external vision, more your own creation, less someone else's. In fact, 'echo' sounds like a better term too. Echo reality, ER, or maybe personal reality, pereal, or mental echo, ME. Nah, maybe we need Lanier to invent a catchy name again; he is good at that. That 1983 idea could soon become reality.