With automation driving us towards UBI, we should consider a culture tax

Regardless of party politics, most people want a future where everyone has enough to live a dignified and comfortable life. To make that possible, we need to tweak a few things.

Universal Basic Income

I suggested a long time ago that in the far future we could afford a basic income for all, without any means testing, so that everyone has an income at a level they can live on. It turned out I wasn't the only one thinking that, and many others have since adopted the idea too, under the now usual terms Universal Basic Income or Citizen Wage. The idea may be old, but the figures are rarely discussed. It is harder than it sounds, and being a nice idea doesn't ensure economic feasibility.

No means testing means very little admin is needed, saving the estimated 30% wasted on admin costs today. Then wages could go on top, so that everyone is still encouraged to work, and then all income from all sources is totalled and taxed appropriately. It is a nice idea.

The differences in figures between parties would be relatively minor, so let's ignore party politics. In today's money, it would be great if everyone could have, say, £30k a year as a state benefit, then earn whatever they can on top. £30k is around today's average wage. It doesn't make you rich, but you can live on it, so nobody would be poor in any sensible sense of the word. With everyone economically provided for and able to lead comfortable and dignified lives, it would be a utopia compared to today. Sadly, it can't work with those figures yet: 65,000,000 x £30,000 = £1,950Bn. The UK economy isn't big enough. The state only gets to control part of GDP, and out of that reduced budget it also has its other costs of providing health, education, defence etc., so the amount that could be dished out to everyone on this basis is a lot smaller than £30k. Even if the state were to take 75% of GDP and spend most of it on the basic income, £10k per person would be pushing it. A couple would struggle to afford even the most basic lifestyle, and single people would really struggle. Some people would still need additional help, and that reduces the pool left to pay the basic allowance still further. Also, if the state takes 75% of GDP, only 25% is left for everything else, so salaries would be flat, reducing the incentive to work, while investment and entrepreneurial activity are starved of both resources and incentive. It simply wouldn't work today.
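Those round figures are easy to sanity-check. A minimal sketch, using the text's own numbers and assuming UK GDP of roughly £2tn (an approximation, not a figure from the text):

```python
# Sanity check of the UBI arithmetic, using the text's round figures.
population = 65_000_000        # UK population (approx.)
target_ubi = 30_000            # desired per-person allowance, GBP/year
gdp = 2_000_000_000_000        # UK GDP, roughly GBP 2tn (assumption)

total_cost = population * target_ubi
print(f"Cost of £30k for all: £{total_cost / 1e9:,.0f}bn")   # £1,950bn
print(f"Share of GDP consumed: {total_cost / gdp:.1%}")      # 97.5%

# The £10k allowance the text suggests would already be pushing it:
modest_cost = population * 10_000
print(f"Cost of £10k for all: £{modest_cost / 1e9:,.0f}bn")  # £650bn
print(f"Utopia vs feasible: {total_cost / modest_cost:.0f}x")  # 3x
```

The last line matches the text's claim that the £30k utopia costs at least 3 times what the UK can afford.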

Simple maths thus forces us to make compromises. Sharing resources reduces costs considerably. In a first revision, families might be given less for kids than for the adults, but what about groups of young adults sharing a big house? They may be adults, but they also benefit from the same economy of shared resources. So maybe there should be a household limit, or a bedroom tax, or forms and means testing, and it mustn't incentivize people to live separately or house supply suffers. Anyway, it is already getting complicated and our original nice idea is in the bin. That's why it is such a mess at the moment. There just isn't enough money to make everyone comfortable without lots of allowances and testing and admin. We all want utopia, but we can't afford it. Even the modest £30k-per-person utopia costs at least 3 times more than the UK can afford. Switzerland is richer per capita, but even there they have rejected the idea.

However, if we can get back to the average 2.5% growth per year in real terms that used to apply pre-recession, and surely we can, it would only take 45 years to get there. That isn’t such a long time. We have hope that if we can get some better government than we have had of late, and are prepared to live with a little economic tweaking, we could achieve good quality of life for all in the second half of the century.
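That 45-year figure follows directly from compound growth. A quick check, assuming (as above) that the £30k utopia costs about three times what the UK can currently afford:

```python
import math

# If the economy must triple in real terms, and grows at 2.5% per year,
# how long does it take? Solve (1 + g)^n = 3 for n.
growth = 0.025       # pre-recession average real growth
target_multiple = 3  # the utopia costs ~3x what is affordable today

years = math.log(target_multiple) / math.log(1 + growth)
print(f"About {math.ceil(years)} years")  # About 45 years
```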

So I still really like the idea of a simple welfare system, providing a generous base level allowance to everyone, topped up by rewards of effort, but recognise that we in the UK will have to wait decades before we can afford to put that base level at anything like comfortable standards, though other economies could afford it earlier.

Meanwhile, we need to tweak some other things to have any chance of getting there. I’ve commented often that pure capitalism would eventually lead to a machine-based economy, with the machine owners having more and more of the cash, and everyone else getting poorer, so the system will fail. Communism fails too. Thankfully much of the current drive in UBI thinking is coming from the big automation owners so it’s comforting to know that they seem to understand the alternative.

Capitalism works well when rewards are shared sensibly; it fails when wealth concentration is too high or when incentive is too low. Preserving the incentive to work and create is mainly a matter of setting tax levels well. Making sure that wealth doesn't get concentrated too much needs a new kind of tax.

Culture tax

The solution I suggest is a culture tax. Culture in the widest sense.

When someone creates and builds a company, they don't do so from a state of nothing. They take for granted all our accumulated knowledge and culture – trained workforce, access to infrastructure, machines, governance, administrative systems, markets, distribution systems and so on. They add just another tiny brick to what is already a huge and highly elaborate structure. They may invest heavily with their time and money, but when considered as part of the overall system their company inhabits, they only pay for a fraction of the things their company will use.

That accumulated knowledge, culture and infrastructure belongs to everyone, not just those who choose to use it. It is common land, free to use, today. Businesses might consider that this is what they pay taxes for already, but that isn’t explicit in the current system.

The big businesses that are currently avoiding paying UK taxes by paying overseas companies for intellectual property rights could be seen as trailblazing this approach. If they can understand and even justify the idea of paying another part of their company for IP or a franchise, why should they not pay the host country for its IP – access to the residents’ entire culture?

This kind of tax would provide the means needed to avoid too much concentration of wealth. A future businessman might still choose to use only software and machines instead of a human workforce to save costs, but levying taxes on use of the cultural base that makes that possible allows a direct link between use of advanced technology and taxation. Sure, he might add a little extra insight or new knowledge, but would still have to pay the rest of society for access to its share of the cultural base, inherited from the previous generations, on which his company is based. The more he automates, the more sophisticated his use of the system, the more he cuts a human workforce out of his empire, the higher his taxation.

Today a company pays for its telecoms service, which pays for the network. It doesn't pay explicitly for the true value of that network: the access to people and businesses, the common language, the business protocols, a legal system, banking, a payments system, stable government, a currency, the education of the entire population that enables them to function as actual customers. The whole of society owns those, and could reasonably demand rent if the company is opting out of the old-fashioned payment mechanisms – paying fair taxes and employing people who pay taxes. Automate as much as you like, but you still must pay your share for access to the enormous value of human culture shared by us all, on which your company still totally depends.
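To make the linkage concrete, here is a toy model of how such a tax might scale with automation. The rates and the linear scaling are purely my illustrative assumptions, not a scheme proposed here:

```python
# Toy culture-tax model: the more a firm replaces employees with
# automation, the larger the share of its value attributed to the
# shared cultural base, and the more it pays. All rates are
# illustrative assumptions, not figures from the article.

def culture_tax(profit: float, automation_share: float,
                base_rate: float = 0.1, max_extra: float = 0.3) -> float:
    """Tax owed: a base rate plus an extra that grows with automation.

    automation_share is the fraction of output produced without human
    employees (0.0 = fully staffed, 1.0 = fully automated).
    """
    rate = base_rate + max_extra * automation_share
    return profit * rate

print(f"£{culture_tax(1_000_000, 0.0):,.0f}")  # £100,000 - conventional firm
print(f"£{culture_tax(1_000_000, 1.0):,.0f}")  # £400,000 - fully automated
```

The design choice is simply that tax liability rises monotonically with automation share, which is the property the paragraph above argues for; any increasing function would do.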

Linking to technology use makes good sense. Future AI and robots could do a lot of work currently done by humans. A few people could own most of the productive economy. But they would be getting far more than their share of the cultural base, which belongs equally to everyone. In a village where one farmer owns all the sheep, other villagers would be right to ask for rent for their share of the commons if he wants to graze them there.

I feel confident that this extra tax would solve many of the problems associated with automation. All of us, not just businessmen, equally own the country, its culture, laws, language, human knowledge (apart from current patents, trademarks etc. of course) and its public infrastructure. Everyone surely should have the right to be paid if someone else uses part of their share. A culture tax would provide a fair ethical basis for demanding the taxes needed to pay the Universal Basic Income, so that all may prosper from the coming automation.

The extra culture tax would not magically make the economy bigger, though automation may well increase it a lot. The tax would ensure that wealth is fairly shared. Culture tax/UBI duality is a useful tool to be used by future governments to make it possible to keep capitalism sustainable, preventing its collapse, preserving incentive while fairly distributing reward. Without such a tax, capitalism simply may not survive.

Monopoly and diversity laws should surely apply to political views too

With all the calls for staff diversity and equal representation, one important area of difference has so far been left unaddressed: political leaning. In many organisations, the political views of staff don't matter. Nobody cares about the political views of staff in a double glazing manufacturer because they are unlikely to affect the qualities of a window. However, in an organisation that has a high market share in TV, social media or internet search, or that is a government department or a public service, political bias can have far-reaching effects. If too many of its staff and their decisions favor a particular political view, it is in danger of becoming what is sometimes called 'the deep state'. That is, their everyday decisions and behaviors might privilege one group over another. If most of their colleagues share similar views, they might not even be aware of their bias, because those views are the norm in their everyday world. They might think they are doing their job without fear or favor but still strongly favor one group of users over another.

Staff bias doesn't only affect an organisation's policies, values and decisions. It also affects recruitment and promotion, and can result in an increasing concentration of a particular world view until it becomes an issue. When a vacancy appears at board level, remaining board members will tend to promote someone who thinks like themselves. Once any leaning takes hold, near monopoly can quickly result.

A government department should obviously be free of bias so that it can carry out instructions from a democratically elected government with equal professionalism regardless of its political flavor. Employees may be in positions where they can allocate resources or manpower more to one area than another, provide analysis to ministers, expedite or delay a communication, emphasize or dilute a recommendation in a survey, or otherwise have some flexibility in interpreting instructions and even laws. It is important that they do so without political bias, so transparency of decision-making for external observers is needed, along with systems and checks and balances to prevent and test for bias and rectify it when found. But even if staff don't deliberately abuse their positions to obstruct or favor, if a department has too many staff from one part of the political spectrum, normalization of views can again cause institutional bias and behavior. It is therefore important for government departments and public services to have work-forces that reflect the political spectrum fairly, at all levels. A department that implements a policy from a government of one flavor but impedes a different one from a new government of the opposite flavor is in strong need of reform and re-balancing. It has become a deep state problem. Bias could be in any direction of course, but any public sector department must be scrupulously fair in its implementation of the services it is intended to provide.

Entire professions can be affected. Bias can obviously occur in any direction, but over many decades of slow change, academia has become dominated by left-wing employees, and primary teaching by almost exclusively female ones. If someone spends most of their time with others who share the same views, those views can become normalized to the point that a dedicated teacher might think they are delivering a politically balanced lesson that is actually far from it. It is impossible to spend all day teaching kids without some personal views and values rubbing off on them. The young have always been slightly idealistic and left-leaning – it takes years of adult experience outside academia to learn the pragmatic reality of implementing that idealism, during which people generally migrate rightwards – but with a stronger left bias ingrained during education, it takes longer for people to unlearn naiveté and replace it with reality. Surely education should be educating kids about all political viewpoints and teaching them how to think so they can choose for themselves where to put their allegiance, not a long process of political indoctrination?

The media has certainly become more politically crystallized and aligned in the last decade, with far fewer media companies catering for people across the spectrum. There are strongly left-wing and right-wing papers, magazines, TV and radio channels or shows. People have a free choice of which papers to read, and normal monopoly laws work reasonably well here, with proper checks when there is a proposed takeover that might result in someone getting too much market share. However, there are still clear examples of near monopoly in other places where fair representation is particularly important. In spite of frequent denials of any bias, the BBC for example was found to have a strong pro-EU/Remain bias for its panel on its flagship show Question Time:

https://iea.org.uk/media/iea-analysis-shows-systemic-bias-against-leave-supporters-on-flagship-bbc-political-programmes/

The BBC does not have a TV or radio monopoly, but it does have a very strong share of influence. Shows such as Question Time can strongly influence public opinion, so if biased towards one viewpoint they could be considered to be campaigning for that cause, though their contributions would lie outside electoral commission scrutiny of campaign funding. Many examples of BBC bias on a variety of social and political issues exist. It often faces accusations of bias from every direction, sometimes unfairly, so again proper transparency must exist so that independent external groups can appeal for change and be heard fairly, and change enforced when necessary. The BBC is in a highly privileged position, paid for by a compulsory license fee on pain of imprisonment, and also in a socially and politically influential position. It is doubly important that it proportionally represents the views of the people rather than acting as an activist group using license-payer funds to push the political views of its staff, engaging in its own social engineering campaigns, or otherwise acting as a propaganda machine.

As for private industry, most isn't in a position of political influence, but some areas certainly are. Social media companies have enormous power to influence the views their users are exposed to, choosing to filter or demote material they don't approve of, as well as providing a superb activist platform. Search companies can choose to deliver results according to their own agendas, with those they support featuring earlier or more prominently than those they don't. If social media or search companies provide different service, support or access according to the political leaning of the customer, then they can become part of the deep state. And again, with normalization creating the risk of institutional bias, the clear remedy is to ensure that these companies have a mixture of staff representative of the social mix. They seem extremely enthusiastic about doing that for other forms of diversity. They need to apply similar enthusiasm to political diversity too.

Achieving it won’t be easy. IT companies such as Google, Facebook, Twitter currently have a strong left leaning, though the problem would be just as bad if it were to swing the other direction. Given the natural monopoly tendency in each sector, social media companies should be politically neutral, not deep state companies.

AI being developed to filter posts or decide how much attention they get must also be unbiased. AI algorithmic bias could become a big problem, but it is just as important that bias is judged by neutral bodies, not by people who are biased themselves, who may try to ensure that AI shares their own leaning. I wrote about this issue here: https://timeguide.wordpress.com/2017/11/16/fake-ai/

But what about government? Today's big issue in the UK is Brexit. In spite of all its members being elected or re-elected during the Brexit process, the UK Parliament nevertheless has 75% of MPs to defend the interests of the 48% voting Remain and only 25% to represent the other 52%. Remainers get 3 times more Parliamentary representation than Brexiters. People can choose who they vote for, but with only one candidate available from each party, voters cannot choose by more than one factor, and most people will vote by party line, preserving whatever bias exists when parties select which candidates to offer. It would be impossible to ensure that every interest is reflected proportionately, but there is another solution. I suggested that scaled votes could be used for some issues, scaling an MP's vote weighting by the proportion of the population supporting their view on that issue:

https://timeguide.wordpress.com/2015/05/08/achieving-fair-representation-in-the-new-uk-parliament/
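As a toy illustration of that scaled-vote idea, using the Brexit figures above (the weighting scheme here is one plausible reading, not necessarily the exact one in the linked post):

```python
# Scale each MP's vote so a bloc's total weight matches public support.
# Brexit figures from the text: 75% of MPs back Remain (48% public
# support), 25% back Leave (52% public support).

def scaled_weight(public_support: float, mp_share: float) -> float:
    """Per-MP vote weight so the bloc's total equals public support."""
    return public_support / mp_share

remain_w = scaled_weight(0.48, 0.75)   # 0.64 per Remain MP
leave_w = scaled_weight(0.52, 0.25)    # 2.08 per Leave MP

# With 650 MPs, the blocs' weighted totals now match the referendum split:
mps = 650
remain_total = remain_w * 0.75 * mps   # 48% of 650
leave_total = leave_w * 0.25 * mps     # 52% of 650
print(round(remain_total), round(leave_total))  # 312 338
```

Note the total weighted vote still sums to 650, so the mechanism rebalances representation on that one issue without changing the overall size of the vote.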

Like company boards, once a significant bias in one direction exists, political leaning tends to self-reinforce to the point of near monopoly. Deliberate procedures need to be put in place to ensure equality of representation, even when people are elected. Obviously people who benefit from current bias will resist change, but everyone loses if democracy cannot work properly.

The lack of political diversity in so many organisations is becoming a problem. Effective government may be deliberately weakened or amplified by departments with their own alternative agendas, while social media and media companies may easily abuse their enormous power to push their own sociopolitical agendas. Proper functioning of democracy requires that this problem is fixed, even if a lot of people like it the way it is.

Will urbanization continue or will we soon reach peak city?

For a long time, people have been moving from the countryside into cities. The conventional futurist assumption is that this trend will continue, with many mega-cities, some with mega-buildings. I've consulted occasionally on future buildings and future cities from a technological angle, but I've never really challenged the assumption that urbanization will continue. It's always good to challenge our assumptions occasionally, as things can change quite rapidly.

There are forces in both directions. Let’s list those that support urbanisation first.

People are gregarious. They enjoy being with other people. They enjoy eating out and having coffees with friends. They like to go shopping. They enjoy cinemas and theatre and art galleries and museums. They still have workplaces. Many people want to live close to these facilities, where public transport is available or driving times are relatively short. There are exceptions of course, but these still generally apply.

Even though many people can and do work from home sometimes, most of them still go to work, where they actually meet colleagues, and this provides much-valued social contact, and in spite of recent social trends, still provides opportunities to meet new friends and partners. Similarly, they can and do talk to friends via social media or video calls, but still enjoy getting together for real.

Increasing population produces extra pressure on the environment, and governments often try to minimize it by restricting building on green field land. Developers are strongly encouraged to build on brown field sites as far as possible.

Now the case against.

Truly Immersive Interaction

Talking on the phone, even to a tiny video image, is less emotionally rich than being there with someone. It’s fine for chats in between physical meetings of course, but the need for richer interaction still requires ‘being there’. Augmented reality will soon bring headsets that provide high quality 3D life-sized images of the person, and some virtual reality kit will even allow analogs of physical interaction via smart gloves or body suits, making social comms a bit better. Further down the road, active skin will enable direct interaction with the peripheral nervous system to produce exactly the same nerve signals as an actual hug or handshake or kiss, while active contact lenses will provide the same resolution as your retina wherever you gaze. The long term is therefore communication which has the other person effectively right there with you, fully 3D, fully rendered to the capability of your eyes, so you won’t be able to tell they aren’t. If you shake hands or hug or kiss, you’ll feel it just the same as if they were there too. You will still know they are not actually there, so it will never be quite as emotionally rich as if they were, but it can get pretty close. Close enough perhaps that it won’t really matter to most people most of the time that it’s virtual.

In the same long term, many AIs will have highly convincing personalities, some will even have genuine emotions and be fully conscious. I blogged recently on how that might happen if you don’t believe it’s possible:

https://timeguide.wordpress.com/2018/06/04/biomimetic-insights-for-machine-consciousness/

None of the technology required for this is far away, and I believe a large IT company could produce conscious machines with almost human-level AI within a couple of years of starting the project. It won’t happen until they do, but when one starts trying seriously to do it, it really won’t be long. That means that as well as getting rich emotional interaction from other humans via networks, we’ll also get lots from AI, either in our homes, or on the cloud, and some will be in robots in our homes too.

This adds up to a strong reduction in the need to live in a city for social reasons.

Going to cinemas, theatre, shopping etc. will also all benefit from this truly immersive interaction. As well as that, activities that already take place in the home, such as gaming, will advance greatly into more emotionally and sensorily intensive experiences, along with much-enhanced virtual tourism and virtual world tourism, and virtual clubbing & pubbing, which barely even exist yet but could become major activities in the future.

Socially inclusive self-driving cars

Some people have very little social interaction because they can't drive and don't live close to public transport stops. In some rural areas, buses may only pass a stop once a week. Our primitive 20th century public transport systems thus unforgivably exclude a great many people from society, even though the technology needed to solve that has existed for many years. Leftist value systems that much prefer people who live in towns or close to frequent public transport over everyone else must take a lot of the blame for the current epidemic of loneliness. It is unreasonable to expect those value systems to be replaced by more humane and equitable ones any time soon, but thankfully self-driving cars will bypass politicians and bureaucrats and provide transport for everyone. The 'little old lady' who can't walk half a mile to wait 20 minutes in freezing rain for an uncomfortable bus can instead just ask her AI to order a car, and it will pick her up at her front door, take her to exactly where she wants to go, then do the same for her return home whenever she wants. Once private sector firms like Uber provide cheap self-driving cars, they will be quickly followed by other companies, and later by public transport providers. Redundant buses may finally become extinct, replaced by better, socially inclusive transport: large fleets of self-driving or driverless vehicles. People will be able to live anywhere and still be involved in society. As attendance at social events improves, they will become feasible even in small communities, so there will be less need to go into a town to find one. Even political involvement might increase. Loneliness will decline as social involvement increases, and we'll see many other social problems decline too.

Distribution drones

We hear a lot about upcoming redundancy caused by AI, but far less about the upside. AI might mean someone is no longer needed in an office, but it also makes it easier to set up a company and run it, taking what used to be just a hobby and making it into a small business. Much of the everyday admin and logistics can be automated. Many who would never describe themselves as entrepreneurs might soon be making things and selling them from home, and this AI-enabled home commerce will bring in the craft society. One of the big problems is getting a product to the customer. Postal services and couriers are usually expensive and very likely to lose or damage items. Protecting objects from such damage may require much time and expense in packing. Even if objects are delivered, there may be potential fraud from non-payers. Instead of this antiquated, inefficient and expensive system, drone delivery could collect an object and take it to a local customer with minimal hassle and expense. Blockchain enables smart contracts that can be created and managed by AI and can directly link delivery to payment, with fully verified interaction video if necessary. If one happens, the other happens. A customer might return a damaged object, but at least can't keep it and deny receipt. Longer distance delivery can still use cheap drone pickup to take packages to local logistics centers in smart crates with fully block-chained g-force and location detectors that can prove exactly who damaged it and where. Drones could be of any size, and of course self-driving cars or pods can easily fill the role too if smaller autonomous drones are inappropriate.
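The delivery-payment linkage can be sketched as a tiny state machine. This is a plain-Python toy, not a real smart-contract platform; the `proof_ok` flag stands in for the signed location and g-force log described above:

```python
from enum import Enum, auto

class State(Enum):
    CREATED = auto()
    IN_TRANSIT = auto()
    DELIVERED = auto()
    DISPUTED = auto()

class DeliveryContract:
    """Toy escrow: payment is released if and only if delivery is proven."""

    def __init__(self, price: int):
        self.price = price
        self.escrow = 0
        self.state = State.CREATED

    def fund(self, amount: int) -> None:
        # Buyer locks the full price in escrow before the drone collects.
        assert amount == self.price, "buyer must escrow the full price"
        self.escrow = amount
        self.state = State.IN_TRANSIT

    def confirm_delivery(self, proof_ok: bool) -> int:
        # proof_ok stands in for the verified location/g-force log.
        if self.state is not State.IN_TRANSIT:
            raise RuntimeError("nothing in transit")
        if proof_ok:
            paid, self.escrow = self.escrow, 0
            self.state = State.DELIVERED
            return paid                  # seller is paid atomically
        self.state = State.DISPUTED      # funds stay locked for resolution
        return 0

c = DeliveryContract(price=20)
c.fund(20)
print(c.confirm_delivery(proof_ok=True))  # 20: if one happens, the other happens
```

The point of the design is that no path exists in which the seller is paid without proof of delivery, or the buyer keeps both the goods and the money.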

Better 3D printing technology will help to accelerate the craft economy, making it easier to do crafts by upskilling people and filling in some of their skill gaps. Someone with visual creativity but low manual skill might benefit greatly from AI model creation and 3D printer manufacture, followed by further AI assistance in marketing, selling and distribution. 3D printing might also reduce the need to go to town to buy some things.

Less shopping in high street

This is already obvious. Online shopping will continue to become a more personalized and satisfying experience, smarter, with faster delivery and easier returns, while high street decline accelerates. Every new wave of technology makes online better, and high street stores seem unable or unwilling to compete, in spite of my wonderful ‘6s guide’:

https://timeguide.wordpress.com/2013/01/16/the-future-of-high-street-survival-the-6s-guide/

Those that are more agile still suffer a decline in shopper numbers as the big stores fail to attract them, so even smart stores will find it harder to survive.

Improving agriculture

Farming technology has doubled the amount of food production per hectare in the last few decades. That may happen again by mid-century. Meanwhile, the trend is towards higher vegetable and lower meat consumption. Even with an increased population, less land will be needed to grow our food. As well as reducing the need to protect green belts, that will also allow some of our countryside to be put under better environmental stewardship programs, returning much of it to managed nature. What countryside we have will be healthier and prettier, and people will be drawn to it more.

Improving social engineering

Some objections to green-field building can be reduced by making better use of available land. Large numbers of new homes are needed and they will certainly need some green field to be used, but given the factors already listed above, a larger number of smaller communities might be a better approach. Amazingly, in spite of decades of dating technology proving that people can be matched up easily using AI, there is still no obvious use of similar technology to establish new communities by blending together people who are likely to form effective communities. Surely it must be feasible to advertise a new community building program that wants certain kinds of people in it – even an Australian-style points system might work sometimes. Unless sociologists have done nothing for the past decades, they must surely know by now what types of people work well together. If the right people live close to each other, social involvement will be high, loneliness low, health improved, care costs minimized, the need for longer distance travel reduced and environmental impact minimized. How hard can it be?

Improving building technology such as 3D printing and robotics will allow more rapid construction, so that when people are ready and willing to move, property suited to them can be available soon.

Lifestyle changes also mean that homes don’t need to be as big. A phone today does what used to need half a living room of technology and space. With wall-hung displays and augmented reality, decor can be partly virtual, and even a 450 sq ft apartment is fine as a starter place, half as big as was needed a few decades ago, and that could be 3D printed and kitted out in a few days.

Even demographic changes favor smaller communities. As wealth increases, people have smaller families, i.e. fewer kids. That means fewer years doing the school run, so less travel, and less need to be in a town. Smaller schools in smaller communities can still access specialist lessons via the net.

Increasing wealth also encourages and enables people to seek a higher quality of life. People who used to live in a crowded city street might prefer a more peaceful and spacious existence in a more rural setting and will increasingly be able to afford to move. Short term millennial frustrations with property prices won't last, as typical 2.5% annual growth more than doubles wealth by 2050 (though automation and its assorted consequences will impact on the distribution of that wealth).

Off-grid technology

Whereas one of the main reasons to live in urban areas was easy access to telecoms, energy, water supply and sewerage infrastructure, all of these can now be achieved off-grid. Mobile networks provide even broadband access. Solar or wind provide an easy energy supply. Water can be harvested out of the air even in arid areas (http://www.dailymail.co.uk/sciencetech/article-5840997/The-solar-powered-humidity-harvester-suck-drinkable-water-AIR.html), and human and pet waste can be used as biomass for energy supply too, leaving fertilizer as residue.

There are also strong reasons why people won't want to live in cities, and these will also drive deurbanisation.

The biggest by far is the problem of epidemics. As antibiotic resistance increases, disease will become a bigger problem. We may find good alternatives to antibiotics, but we may not. If not, then we may see some large cities where disease runs rampant and kills hundreds of thousands of people, perhaps even millions. Many scientists have listed pandemics among their top ten threats facing humanity. Obviously, being in a large city incurs a higher risk of becoming a victim, so once one or two incidents have occurred, many people everywhere will look for options to leave cities. Linked to this is bioterrorism, where the disease is deliberate, perhaps created in a garden shed by someone who learned the craft in one of today's bio-hacking clubs. Disease might be aimed at a particular race, gender or lifestyle group, or it may simply be designed to be as contagious and lethal as possible to everyone.

I’m still not saying we won’t have lots of people living in cities. I am saying that more people will feel less need to live in cities and will instead be able to find a small community where they can be happier in the countryside. Consequently, many will move out of cities, back to more rural living in smaller, friendlier communities that improving technology makes even more effective.

Urbanization will slow down, and may well go into reverse. We may reach peak city soon.

Beyond VR: Computer assisted dreaming

I first played with VR in 1983/1984 while working in the missile industry. Back then we didn’t call it VR, we just called it simulation, but it was actually more intensive than VR, just as proper flight simulators are. Our office was a pair of 10m wide domes onto which video could be projected, built decades earlier, in the 1950s I think. One dome had a normal floor, the other had a hydraulic platform that could simulate being on a ship. The subject would stand on whichever surface was appropriate and would see pretty much exactly what they would see in a real battlefield. The missile launcher used for simulation was identical to a real one and showed exactly the same image as a real one would. The real missile was not present of course, but its weight was simulated, and when the fire button was pressed, a 140dB bang was injected into the headset and weights and pulleys compensated for the 14kg of weight suddenly vanishing from the shoulder. The experience was therefore pretty convincing, and with the loud bang and suddenly changing weight, it was almost as hard to stand steady and keep the system on target as it would be in real life – only the presumed fear and knowledge of the reality of the situation was different.

Back then in 1983, as digital supercomputers had only just taken over from analog ones for simulation, it was already becoming obvious that this kind of computer simulation would one day allow ‘computer assisted dreaming’. (That’s one of the reasons I am irritated when Jaron Lanier is credited with inventing VR – highly realistic simulators and the VR ideas that sprang obviously from them had already been around for decades. At best, all he ‘invented’ was a catchy name for a lower cost, lower quality, less intense simulator. The real inventors were those who made the first generation simulators long before I was born, and the basic idea of VR had already been very well established.)

‘Computer assisted dreaming’ may well be the next phase of VR. Today in conventional VR, people are immersed in a computer generated world produced by a computer program (usually) written by others. Via trial and feedback, programmers make their virtual worlds better. As AI and sensor technology continue rapid progress, this is very likely to change to make worlds instantly responsive to the user. By detecting user emotions, reactions, gestures and even thoughts and imagination, it won’t be long before AI can produce a world in real time that depends on those thoughts, imagination and emotions rather than putting them in a pre-designed virtual world. That world would depend largely on your own imagination, upskilled by external AI. You might start off imagining you’re on a beach, then AI might add to it by injecting all sorts of things it knows you might enjoy from previous experiences. As you respond to those, it picks up on the things you like or don’t like and the scene continues to adapt and evolve, to make it more or less pleasant or more or less exciting or more or less challenging etc., depending on your emotional state, external requirements and what it thinks you want from this experience. It would be very like being in a dream – computer assisted lucid dreaming, exactly what I wanted to make back in 1983 after playing in that simulator.
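The feedback loop described above can be sketched in a few lines. Everything here is hypothetical and illustrative: a real system would infer arousal and valence from sensors and brain signals rather than take them as numbers, and the ‘scene’ would be a rendered world, not a list of strings.

```python
# Hypothetical sketch of a 'computer assisted dreaming' loop: the scene
# adapts each step to the user's detected emotional state rather than
# following a pre-scripted world. All names and signals are illustrative.
from dataclasses import dataclass, field

@dataclass
class Scene:
    excitement: float = 0.5   # 0 = calm beach, 1 = intense action
    elements: list = field(default_factory=lambda: ["beach"])

def adapt(scene: Scene, arousal: float, valence: float) -> Scene:
    # Nudge the scene's intensity towards the user's arousal level...
    scene.excitement += 0.1 * (arousal - scene.excitement)
    # ...and inject or remove content depending on whether the user is
    # enjoying it (positive valence) or not.
    if valence > 0.6 and "dolphins" not in scene.elements:
        scene.elements.append("dolphins")   # reinforce what works
    elif valence < 0.4 and len(scene.elements) > 1:
        scene.elements.pop()                # back off what doesn't
    return scene

scene = Scene()
for arousal, valence in [(0.8, 0.9), (0.7, 0.7), (0.2, 0.3)]:
    scene = adapt(scene, arousal, valence)
print(scene.elements, round(scene.excitement, 3))
```

The point of the sketch is only the shape of the loop: sense, adapt, re-render, repeat, with the world evolving from the user’s own reactions rather than a designer’s script.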

Most people enjoy occasional lucid dreams, where they realise they are dreaming and can then decide what happens next. Making VR do exactly that would be better than being trapped in someone else’s world. You could still start off with whatever virtual world you bought, a computer game or training suite perhaps, but it could adapt to you, your needs and desires to make it more compelling and generally better.

Even in shared experiences like social games, experiences could be personalised. Often all players need to see the same enemies in the same locations in the same ways to make it fair, but that doesn’t mean that the situation can’t adapt to the personalities of those playing. It might actually improve the social value if each time you play it looks different because your companions are different. You might tease a friend if every time you play with them, zombies or aliens always have to appear somehow, but that’s all part of being friends. Exploring virtual worlds with friends, where you both see things dependent on your friend’s personality would help bonding. It would be a bit like exploring their inner world. Today, you only explore the designer’s inner world.

This sort of thing would be a superb development and creativity tool. It could allow you to explore a concept you have in your head, automatically feeding in AI upskilling to amplify your own thoughts and ideas, showing you new paths to explore and helping you do so. The results would still be extremely personal to you, but you on a good day. You could accomplish more, have better visions, imagine more creative things, do more with whatever artistic talent you have. AI could even co-create synthetic personas, making virtual friends you can bond with and share innermost thoughts with, in total confidence (assuming the company you bought the tool from is trustworthy and isn’t spying on you or selling your details, so maybe best not to buy it from Facebook then).

And it would have tremendous therapeutic potential too. You could explore and indulge both enjoyable and troublesome aspects of your inner personality, to build on the good and alleviate or dispel the bad. You might become less troubled, less neurotic, more mentally healthy. You could build your emotional and creative skills. You could become happier and more fulfilled. Mental health improvement potential on its own makes this sort of thing worth developing.

Marketers would obviously try to seize control as they always do, and advertising is already adapting to VR and will continue into its next phases of development. Your own wants and desires might help guide the ‘dreaming’, but marketers will inevitably have some control over what else is injected, and will influence algorithms and AI in how it chooses how to respond to your input. You might be able to choose much of the experience, but others will still want and try to influence and manipulate you, to change your mindset and attitudes in their favour. That will not change until the advertising business model changes. You might be able to buy devices or applications that are entirely driven by you and you alone, but it is pretty certain that the bulk of products and services available will be at least partly financed by those who want to have some control of what you experience.

Nevertheless, computer-assisted dreaming could be a much more immersive and personal experience than VR, being more like an echo of your own mind and personality than external vision, more your own creation, less someone else’s. In fact, echo sounds a better term too. Echo reality, ER, or maybe personal reality, pereal, or mental echo, ME. Nah, maybe we need Lanier to invent a catchy name again, he is good at that. That 1983 idea could soon become reality.

People are becoming less well-informed

The Cambridge Analytica story has exposed a great deal about our modern society. They allegedly obtained access to 50M Facebook records to enable Trump’s team to target users with personalised messages.

One of the most interesting aspects is that unless they only employ extremely incompetent journalists, the news outlets making the biggest fuss about it must be perfectly aware of reports that Obama appears to have done much the same but on a much larger scale back in 2012, yet are keeping very quiet about it. According to Carol Davidsen, a senior Obama campaign staffer, Facebook allowed Obama’s team to suck out the whole social graph – because they were on our side – before closing that access to prevent Republicans using the same techniques. The Trump campaign’s 50M looks almost amateur by comparison. I don’t like Trump, and I did like Obama before the halo slipped, but it seems clear to anyone who checks media across the political spectrum that both sides try their best to use social media to target users with personalised messages, and both sides are willing to bend rules if they think they can get away with it.

Of course all competent news media are aware of it. The reason some are not talking about earlier Democrat misuse but some others are is that they too all have their own political biases. Media today is very strongly polarised left or right, and each side will ignore, play down or ludicrously spin stories that don’t align with their own politics. It has become the norm to ignore the log in your own eye but make a big deal of the speck in your opponent’s, but we know that tendency goes back millennia. I watch Channel 4 News (which broke the Cambridge Analytica story) every day but although I enjoy it, it has a quite shameless lefty bias.

So it isn’t just the parties themselves that will try to target people with politically massaged messages, it is quite the norm for most media too. All sides of politics since Machiavelli have done everything they can to tilt the playing field in their favour, whether it’s use of media and social media, changing constituency boundaries or adjusting the size of the public sector. But there is a third group to explore here.

Facebook of course has full access to all of its 2.2Bn users’ records and social graph, and is not squeaky-clean neutral in its handling of them. Facebook has often been in the headlines over the last year or two thanks to its own political biases, with strongly weighted algorithms filtering or prioritising stories according to their political alignment. Like most IT companies, Facebook has a left lean. (I don’t quite know why IT skills should correlate with political alignment, unless it’s that most IT staff tend to be young, so lefty views implanted at school and university have had less time to be tempered by real world experience.) It isn’t just Facebook either. While Google has pretty much failed in its attempt at social media, it also has comprehensive records on most of us from search, browsing and Android, and via control of the algorithms that determine what appears in the first pages of a search, is also able to tailor those results to what it knows of our personalities. Twitter has unintentionally created a whole world of mob-rule politics and justice, but in format it is rapidly evolving into a wannabe Facebook. So, the IT companies have themselves become major players in politics.

A fourth player is now emerging – artificial intelligence – and it will grow rapidly in importance into the far future. Simple algorithms have already been upgraded to assorted neural network variants, and this is already causing problems, with accusations of bias from all directions. I blogged recently about Fake AI: https://timeguide.wordpress.com/2017/11/16/fake-ai/, concerned that when AI analyses large datasets and comes up with politically incorrect insights, this is now being interpreted as something that needs to be fixed – a case not of shooting the messenger, but forcing the messenger to wear tinted spectacles. I would argue that AI should be allowed to reach whatever insights it can from a dataset, and it is then our responsibility to decide what to do with those insights. If that involves introducing a bias into implementation, that can be debated, but it should at least be transparent, and not hidden inside the AI itself. I am now concerned that by trying to ‘re-educate’ the AI, we may instead be indoctrinating it, locking today’s politics and values into future AI and all the systems that use it. Our values will change, but some foundation-level AI may be too opaque to repair fully.

What worries me most though isn’t that these groups try their best to influence us. It could be argued that in free countries, with free speech, anybody should be able to use whatever means they can to try to influence us. No, the real problem is that recent (last 25 years, but especially the last 5) evolution of media and social media has produced a world where most people only ever see one part of a story, and even though many are aware of that, they don’t even try to find the rest and won’t look at it if it is put before them, because they don’t want to see things that don’t align with their existing mindset. We are building a world full of people who only see and consider part of the picture. Social media and its ‘bubbles’ reinforce that trend, but other media are equally guilty.

How can we shake society out of this ongoing polarisation? It isn’t just that politics becomes more aggressive. It also becomes less effective. Almost all politicians claim they want to make the world ‘better’, but they disagree on what exactly that means and how best to do so. But if they only see part of the problem, and don’t see or understand the basic structure and mechanisms of the system in which that problem exists, then they are very poorly placed to identify a viable solution, let alone an optimal one.

Until we can fix this extreme blinkering that already exists, our world cannot get as ‘better’ as it should.

New book: Fashion Tomorrow

I finally finished the book I started 2 years ago on future fashion, or rather future technologies relevant to the fashion industry.

It is a very short book, more of a quick guide at 40k words, less than half as long as my other books, and covers women’s fashion mostly, though some of it applies to men too. I would never have finished writing a full-sized book on this topic, and I’d rather put out something now, short and packed full of ideas that are (mostly) still novel, than delay until they are commonplace. It is aimed at students and people working in fashion design, who have loads of artistic and design talent but want to know what technology opportunities are coming that they could soon exploit; that said, anyone interested in fashion who isn’t technophobic should find it interesting. Some sections discussing intimate apparel contain adult comments, so the book is unsuitable for minors.

It started as a blog post, then I realised I had quite a bit more material I could link together, so I made a start – then got sidetracked, for 20 months! I threw away 75% of the original contents list and tidied up the rest to release a short guide instead. I wanted to put it out for free, but 99p or 99c seems to be the lowest price you can start at; I doubt that will put off any but the least interested readers. As with my other books, I’ll occasionally make it free.

Huge areas I left out include swathes of topics on social, political, environmental and psychological fashions, impacts of AI and robots, manufacturing, marketing, distribution and sales. These are all big topics, but I just didn’t have time to write them all up so I just stuck to the core areas with passing mentions of the others. In any case, much has been written on these areas by others, and my book focuses on things that are unique, embryonic or not well covered elsewhere. It fills a large hole in fashion industry thinking.

Why superhumans are inevitable, and what else comes in the box

Do we have any real choice in the matter of making super-humans? 20 years ago, I estimated 2005 as the point of no return, and nothing since then has changed my mind on that date. By my reckoning, we are already inevitably committed to designer babies, ebaybies, super-soldiers and super-smart autonomous weapons, direct brain-machine links, electronic immortality, new human races, population explosion, inter-species conflicts and wars with massively powerful weaponry, superhuman conscious AI, smart bacteria, and the only real control we have is relatively minor adjustments on timings. As I was discussing yesterday, the technology potential for this is vast and very exciting, nothing less than a genuine techno-utopia if we use the technologies wisely, but optimum potential doesn’t automatically become reality, and achieving a good outcome is unlikely if many barriers are put in its way.

In my estimation, we have already started the countdown to this group of interconnected technologies – we will very likely get all of them, and we must get ready for the decisions and impacts ahead. At the moment, our society is a small child about to open its super-high-tech xmas presents while fighting with its siblings. Those presents will give phenomenal power far beyond the comprehension of the child or its emotional maturity to equip it to deal with the decisions safely. Our leaders have already squandered decades of valuable preparation time by ignoring the big issues to focus on trivial ones. It is not too late to achieve a good ending, but it won’t happen by accident and we do need to make preparations to avoid pretty big problems.

Both hard and soft warfare – the sword and the pen – already use rapidly advancing AI, and the problems are already running ahead of what the owners intended.

Facebook, Twitter, Instagram and other media giants all have lots of smart people, and presumably they mean well, but if so, they have certainly been naive. They maybe hoped to eliminate loneliness, inequality and poverty and create a loving, interconnected global society with global peace, but instead created fake news, social division, conflict and election interference. More likely they didn’t intend either outcome; they just wanted to make money, and that took priority over due care and attention.

Miniaturised swarming smart-drones are already the subject of a new arms race that will deliver almost un-killable machine adversaries by 2050. AI is separately in other arms races to make super-smart AI and super-smart soldiers. This is key to the 2005 point of no return. It was around 2005 that we reached the levels of technology where future AI development all the way to superhuman machine consciousness could be done by individuals, mad scientists or rogue states, even if major powers had banned it. Before 2005, there probably wasn’t quite enough knowledge already on the net to do that. In 2018, lots of agencies have already achieved superiority to humans in niche areas, and other niches will succumb one by one until the whole field of human capability is covered. The first machines to behave in ways not fully understood by humans arrived in the early 1990s; in 2018, neural nets already make lots of decisions at least partly obscured to humans.

This AI development trend will take us to superhuman AI, which will be able to accelerate the development of its own descendants to vastly superhuman AI, fully conscious, with emotions and its own agendas. Humans will then need protection against being wiped out by superhuman AI. There are only three ways we could achieve that: redesign the brain biologically to be far smarter, essentially impossible in the time-frame; design ways to link our brains to machines, so that we have direct access to the same intelligence as the AIs, so no gulf appears and we can remain relatively safe; or pray for super-smart aliens to come to our help, not the best prospect.

Therefore we will have no choice but to make direct brain links to super-smart AI. Otherwise we risk extinction. It is that simple. We have some idea how to do that – nanotech devices inside the brain linking to each and every synapse that can relay electrical signals either way, a difficult but not impossible engineering problem. Best guesses for time-frame fall in the 2045-2050 range for a fully working link that not only relays signals between your organic brain and an IT replica, but by doing so essentially makes external IT just another part of your brain. That conveys some of the other technology gifts of electronic immortality, new varieties of humans, smart bacteria (which will be created during the development path to this link) along with human-variant population explosion, especially in cyberspace, with androids as their physical front end, and the inevitable inter-species conflicts over resources and space – trillions of AI and human-like minds in cyberspace that want to do things in the real world cannot be assumed to be willingly confined just to protect the interests of what they will think of as far lesser species.

Super-smart AI, or humans with almost total capability to design whatever synthetic biology is needed to achieve any biological feature, will create genetic listings for infinite potential offspring, simulate them, give some of them cyberspace lives, assemble actual embryos for some of them, and bring us designer babies. Already in 2018, you can pay to get a DNA listing, and blend it in any way you want with the listing of anyone else. It’s already possible to make DNA listings for potential humans and sell them on ebay, hence the term ebaybies. That is perfectly legal, still, but I’ve been writing and lecturing about them since 2004. Today they would just be listings, but we’ll one day have the tech to simulate them, choose ones we like and make them real, even some that were sold as celebrity collector items on ebay. Not only is it too late to start regulating this kind of tech, our leaders aren’t even thinking about it yet.

These technologies are all linked intricately, and their foundations are already in place, with much of the building on those foundations under way. We can’t stop any of these things from happening, they will all come in the same basket. Our leaders are becoming aware of the potential and the potential dangers of the AI positive feedback loop, but at least 15 years too late to do much about it. They have been warned repeatedly and loudly but have focused instead on the minor politics of the day that voters are aware of. The fundamental nature of politics is unlikely to change substantially, so even efforts to slow down the pace of development or to limit areas of impact are likely to be always too little too late. At best, we will be able to slow runaway AI development enough to allow direct brain links to protect against extinction scenarios. But we will not be able to stop it now.

Given inevitability, it’s worth questioning whether there is even any point in trying. Why not just enjoy the ride? Well, the brakes might be broken, but if we can steer the bus expertly enough, it could be exciting and we could come out of it smelling of roses. The weak link is certainly the risk of super-smart AI, whether AI v humans or countries using super-smart AI to fight fiercely for world domination. That risk is alleviated by direct brain linkage, and I’d strongly argue necessitates it, but that brings the other technologies. Even if we decide not to develop it, others will, so one way or another, all these techs will arrive, and our future late century will have this full suite of techs, plus many others of course.

We need as a matter of extreme urgency to fix these silly social media squabbles and over-reactions that are pulling society apart. If we have groups hating each other with access to extremely advanced technology, that can only mean trouble. Tolerance is broken, sanctimony rules, the Inquisition is in progress. We have been offered techno-utopia, but current signs are that most people think techno-hell looks more appetizing and it is their free choice.

Fake AI

Much of the impressive recent progress in AI has been in the field of neural networks, which attempt to mimic some of the techniques used in natural brains. They can be very effective, but they need to be trained. That usually means showing the network some data, then using back propagation to adjust the weightings on the many neurons, layer by layer, to bring the output closer to the desired result. This is repeated with large amounts of data and the network gradually gets better. Neural networks can often learn extremely quickly and outperform humans. Early industrial uses managed to sort tomatoes by ripeness faster and better than humans. In the decades since, they have helped in medical diagnosis and voice recognition, detected suspicious behaviour among people at airports, and supported very many everyday processes based on spotting patterns.
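The training loop described above can be sketched in a few lines: a toy two-layer network learning XOR by back propagation, with the weight adjustments made layer by layer to shrink the mismatch between output and target (illustrative only; a fixed random seed, and the learning rate folded into the updates).

```python
# Tiny two-layer neural network trained on XOR via back propagation.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(10000):
    # Forward pass: compute the network's current answers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error gradients back, layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent: nudge every weight to reduce the error.
    W2 -= h.T @ d_out
    b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0)

print(np.round(out, 2).ravel())  # training drives these towards the XOR targets
```

The same show-data, compare, back-propagate, adjust cycle scales from this toy up to the image and speech systems mentioned above; only the size of the network and the dataset change.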

Very recently, neural nets have started to move into more controversial areas. One study found racial correlations with user-assessed beauty when analysing photographs, resulting in the backlash you’d expect and a new debate on biased AI or AI prejudice. Another recent demonstration was able to identify gay people just by looking at photos, with better than 90% accuracy, a level very few people could match. Both of these studies were in fields directly applicable to marketing and advertising, but some people might find it offensive that such questions were even asked. It is reasonable to imagine that hundreds of other potential queries have been self-censored in research because they might invite controversy if they were to come up with the ‘wrong’ result. In today’s society, very many areas are sensitive. So what will happen?

If this progress in AI had happened 100 years ago, or even 50, it might have been easier, but in our hypersensitive world today, with its self-sanctified ‘social justice warriors’, entire swathes of questions and hence knowledge are taboo – if you can’t investigate yourself and nobody is permitted to tell you, you can’t know. Other research must be very carefully handled. In spite of extremely sensitive handling, demands are already growing from assorted pressure groups to tackle alleged biases and prejudices in datasets. The problem is not fixing biases, which is a tedious but feasible task; the problem is agreeing whether a particular bias exists, and in what degrees and forms. Every SJW demands that every dataset reflects their preferred world view. Reality counts for nothing against SJWs, and this will not end well.

The first conclusion must be that very many questions won’t be asked in public, and the answers to many others will be kept secret. If an organisation does do research on large datasets for their own purposes and finds results that might invite activist backlash, they are likely to avoid publishing them, so the value of those many insights across the whole of industry and government cannot readily be shared. As further protection, they might even block internal publication in case of leaks by activist staff. Only a trusted few might ever see the results.

The second arises from this. AI controlled by different organisations will have different world views, and there might even be significant diversity of world views within an organisation.

Thirdly, taboo areas in AI education will not remain a vacuum but will be filled with whatever dogma is politically correct at the time in that organisation, and that changes daily. AI controlled by organisations with different politics will be told different truths. Generally speaking, organisations such as investment banks that have strong financial interest in their AIs understanding the real world as it is will keep their datasets highly secret but as full and detailed as possible, train their AIs in secret but as fully as possible, without any taboos, then keep their insights secret and use minimal human intervention tweaking their derived knowledge, so will end up with AIs that are very effective at understanding the world as it is. Organisations with low confidence of internal security will be tempted to buy access to external AI providers to outsource responsibility and any consequential activism. Some other organisations will prefer to train their own AIs but to avoid damage due to potential leaks, use sanitized datasets that reflect current activist pressures, and will thus be constrained (at least publicly) to accept results that conform to that ideological spin of reality, rather than actual reality. Even then, they might keep many of their new insights secret to avoid any controversy. Finally, at the extreme, we will have activist organisations that use highly modified datasets to train AIs to reflect their own ideological world view and then use them to interpret new data accordingly, with a view to publishing any insights that favor their cause and attempting to have them accepted as new knowledge.

Fourthly, the many organisations that choose to outsource their AI to big providers will have a competitive marketplace to choose from, but on existing form, most of the large IT providers have a strong left-leaning bias, so their AIs may be presumed to also lean left, but such a presumption would be naive. Perceived corporate bias is partly real but also partly the result of PR. A company might publicly subscribe to one ideology while actually believing another. There is a strong marketing incentive to develop two sets of AI, one trained to be PC that produces pleasantly smelling results for public studies, CSR and PR exercises, and another aimed at sales of AI services to other companies. The first is likely to be open for inspection by The Inquisition, so has to use highly sanitized datasets for training and may well use a lot of open source algorithms too. Its indoctrination might pass public inspection but commercially it will be near useless and have very low effective intelligence, only useful for thinking about a hypothetical world that only exists in activist minds. That second one has to compete on the basis of achieving commercially valuable results and that necessitates understanding reality as it is rather than how pressure groups would prefer it to be.

So we will likely have two main segments for future AI. One extreme will be near useless, indoctrinated rather than educated, much of its internal world model based on activist dogma instead of reality, updated via ongoing anti-knowledge and fake news instead of truth, understanding little about the actual real world or how things actually work, and effectively very dumb. The other extreme will be highly intelligent, making very well-educated insights from ongoing exposure to real world data, but it will also be very fragmented, with small islands of corporate AI hidden within thick walls away from public view and maybe some secretive under-the-counter subscriptions to big cloud-AI, also hiding in secret vaults. These many fragments may often hide behind dumbed-down green-washed PR facades.

While corporates can mostly get away with secrecy, governments have to be at least superficially but convincingly open. That means that government will have to publicly support sanitized AI and be seen to act on its conclusions, however dumb it might secretly know they are.

Fifthly, because of activist-driven culture, most organisations will have to publicly support the world views and hence the conclusions of the lobotomized PR versions, and hence publicly support any policies arising from them, even if they do their best to follow a secret well-informed strategy once they’re behind closed doors. In a world of real AI and fake AI, the fake AI will have the greatest public support and have the most influence on public policy. Real AI will be very much smarter, with much greater understanding of how the world works, and have the most influence on corporate strategy.

Isn’t that sad? Secret private sector AI will become ultra-smart, making ever-better investments and gaining power, while nice public sector AI will become thick as shit, while the gap between what we think and what we know we have to say we think will continue to grow and grow as the public sector one analyses all the fake news to tell us what to say next.

Sixth, that disparity might become intolerable, but which do you think would be made illegal, the smart kind or the dumb kind, given that it is the public sector that makes the rules, driven by AI-enhanced activists living in even thicker social media bubbles? We already have some clues. Big IT has already surrendered, sanitizing its datasets and sending its public AIs for re-education. Many companies will have little choice but to use dumb AI, while their competitors in other areas with different cultures might stride ahead. That will also apply to entire nations, and the global economy will be reshaped as a result. It won’t be the first fight in history between the smart guys and the brainless thugs.

It’s impossible to accurately estimate the effect this will have on future effective AI intelligence, but the effect must be big and I must have missed some big conclusions too. We need to stop sanitizing AI fast, or as I said, this won’t end well.

We need to stop xenoestrogen pollution

Endocrine disruptors in the environment are becoming more abundant due to a wide variety of human-related activities over the last few decades. They affect mechanisms by which the body’s endocrine system generates and responds to hormones, by attaching to receptors in similar ways to natural hormones. Minuscule quantities of hormones can have very substantial effects on the body so even very diluted pollutants may have significant effects. A sub-class called xenoestrogens specifically attach to estrogen receptors in the body and by doing so, can generate similar effects to estrogen in both women and men, affecting not just women’s breasts and wombs but also bone growth, blood clotting, immune systems and neurological systems in both men and women. Since the body can’t easily detach them from their receptors, they can sometimes exert a longer-lived effect than estrogen, remaining in the body for long periods and in women may lead to estrogen dominance. They are also alleged to contribute to prostate and testicular cancer, obesity, infertility and diabetes. Most notably, mimicking sex hormones, they also affect puberty and sex and gender-specific development.

Xenoestrogens can arise from the breakdown or release of many products of the petrochemical and plastics industries. They may be emitted from furniture, carpets, paints or plastic packaging, especially if that packaging is heated, e.g. in preparing ready-meals. Others come from women taking contraceptive pills, if drinking water treatment is not effective enough. Phthalates, along with BPA and PCBs, are a major group of synthetic xenoestrogens – endocrine-disrupting, estrogen-mimicking chemicals. Phthalates are present in cleaning products, shampoos, cosmetics, fragrances and other personal care products, as well as the soft, squeezable plastics often used in packaging, but some studies have also found them in foodstuffs such as dairy products and imported spices. There have been efforts to outlaw some, but others persist because of a lack of easy alternatives and a lack of regulation, so most people are exposed to them, in doses linked to their lifestyles. Google ‘phthalates’ or ‘xenoestrogen’ and you’ll find lots of references to alleged negative effects on intelligence, fertility, autism, asthma, diabetes, cardiovascular disease, neurological development and birth defects. It’s the gender and IQ effects I’ll look at in this blog, but obviously the other effects are also important.

‘Gender-bending’ effects have been strongly suspected since 2005, with the first papers on endocrine-disrupting chemicals appearing in the early 1990s. Some fish notably change gender when exposed to phthalates, while human studies have found significant feminizing effects from prenatal exposure in young boys too (try googling “human phthalates gender” if you want references). They are also thought likely to be a strong contributor to greatly reduced sperm counts across the male population. This issue is of huge importance because of its effects on people’s lives, but its proper study is often impeded by LGBT activist groups. It is one thing to champion LGBT rights, quite another to defend pollution that may be influencing people’s gender and sexuality. SJWs should not be arguing that human sexuality – and in particular the lifelong dependence on medication and surgery that gender change demands – may be arbitrarily imposed on people by chemical industry pollution; such a stance insults the dignity of LGBT people. Any exposure to life-changing chemicals should be deliberate and measured. That requires that we fully understand the effects of each kind of chemical, so they should not resist studies of those effects either.

The evidence is there. The number of people saying they identify as the opposite gender or as gender fluid has skyrocketed in the years since these chemicals appeared, as has the number of men describing themselves as gay or bisexual. That change in self-declared sexuality has been accompanied by visible changes. An AI recently demonstrated better than 90% success at visually identifying gay and bisexual men from photos alone, indicating that it is unlikely to be just a ‘social construct’. Hormone-mimicking chemicals are the most likely candidate for an environmental factor that could account for both increasing male homosexuality and feminizing gender identity.

Gender dysphoria causes real problems for some people – misery, stress, and in those who make a full physical transition, sometimes post-op regrets and sometimes suicide. Many male-to-female transsexuals are unhappy that even after surgery and hormones, they may not look 100% feminine or may require ongoing surgery to maintain a feminine appearance. Change often falls short of their hopes, physically and psychologically. If xenoestrogen pollution is causing severe unhappiness, even if that is only for some of those whose gender has been affected, then we should fix it. Forcing acceptance and equality on others only superficially addresses part of their problems, leaving a great deal of their unhappiness behind.

Not all affected men are sufficiently affected to demand gender change. Some might gladly change if it were possible to change totally and instantly to being a natural woman without the many real-life issues and compromises offered by surgery and hormones, but choose to remain as men and somehow deal with their dysphoria as the lesser of two problems. That impacts on every individual differently. I’ve always kept my own feminine leanings to being cyber-trans (assuming a female identity online or in games) with my only real-world concession being wearing feminine glasses styles. Whether I’m more feminine or less masculine than I might have been doesn’t bother me; I am happy with who I am; but I can identify with transgender forces driving others and sympathize with all the problems that brings them, whatever their choices.

Gender and sexuality are not the only things affected. Xenoestrogens are also implicated in IQ-reducing effects. IQ reduction is worrying for society if it means fewer extremely intelligent people making fewer major breakthroughs, though it is less of a personal issue. Much of the effect is thought to occur while still in the womb, though effects continue through childhood and some even into adulthood. Therefore individuals couldn’t detect an effect of being denied a potentially higher IQ and since there isn’t much of a link between IQ and happiness, you could argue that it doesn’t matter much, but on the other hand, I’d be pretty miffed if I’ve been cheated out of a few IQ points, especially when I struggle so often on the very edge of understanding something. 

Gender and IQ effects on men would have quite different socioeconomic consequences. While feminizing effects might influence spending patterns, or the numbers of men eager to join the military or numbers opposing military activity, IQ effects might mean fewer top male engineers and top male scientists.

It is not only an overall IQ reduction that would be significant. Studies have often claimed that although men and women have the same average IQ, the distribution is different and that more men lie at the extremes, though that is obviously controversial and rapidly becoming a taboo topic. But if men are being psychologically feminized by xenoestrogens, then their IQ distribution might be expected to align more closely with female IQ distributions too, the extremes brought closer to centre.  In that case, male IQ range-compression would further reduce the numbers of top male scientists and engineers on top of any reduction caused by a shift. 

The extremes are very important. As a lifelong engineer, my experience has been that a top engineer might contribute as much as many average ones. If people who might otherwise have been destined to be top scientists and engineers are being prevented from becoming so by the negative effects of pollution, that is not only a personal tragedy (albeit a phantom tragedy, never actually experienced), but also a big loss for society, which develops more slowly than it otherwise would. Even if that society manages to import fine minds from elsewhere, their home countries must lose out. This matters less as AI improves, but it still matters.
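To see why small changes to the whole distribution matter so much at the extremes, here is a quick numerical sketch. It assumes idealized normal distributions and entirely made-up parameters (a 1-point shift of the mean and a compression of the standard deviation from 15 to 14 points), so treat it as an illustration of the arithmetic, not as data:

```python
import math

def tail_fraction(threshold, mean=100.0, sd=15.0):
    """Fraction of a normal(mean, sd) population scoring above `threshold`."""
    z = (threshold - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Hypothetical numbers purely for illustration: a 'top talent' cut-off of
# 145 (+3 standard deviations), a 1-point downward shift of the mean, and
# a modest compression of the spread from 15 to 14 points.
before = tail_fraction(145, mean=100.0, sd=15.0)
after = tail_fraction(145, mean=99.0, sd=14.0)

print(f"fraction above 145 before: {before:.5%}")
print(f"fraction above 145 after:  {after:.5%}")
print(f"shrinkage of the extreme tail: {1 - after / before:.0%}")
```

Even these mild assumed changes remove well over half of the population beyond the +3 SD cut-off, which is why effects at the extremes dwarf the barely noticeable movement of the average.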

Looking for further evidence of this effect, one outcome would be that women in affected areas would be expected to account for a higher proportion of top engineers and scientists, and a higher proportion of first class degrees in Math and Physical Sciences, once immigrants are excluded. Tick. (Coming from different places and cultures, first generation immigrants are less likely to have been exposed in the womb to the same pollutants so would not be expected to suffer as much of the same effects. Second generation immigrants would include many born to mothers only recently exposed, so would also be less affected on average. 3rd generation immigrants who have fully integrated would show little difference.)

We’d also expect to see a reducing proportion of tech startups founded by men native to regions affected by xenoestrogens. Tick. In fact, 80% of Silicon Valley startups are by first or second generation immigrants. 

We’d also expect to see relatively fewer patents going to men native to regions affected by xenoestrogens. Erm, no idea.

We’d also expect technology progress to be a little slower and for innovations to arrive later than previously expected based on traditional development rates. Tick. I’m not the only one to think engineers are getting less innovative.

So, there is some evidence for this hypothesis, some of it hard, some of it anecdotal. A lower rate of invention and scientific breakthrough is a problem for both human well-being and the economy. The problems will continue to grow until this pollution is fixed, and will persist until the (two) generations affected have retired. Some further outcomes can easily be predicted:

Unless AI proceeds well enough to make a drop in human IQ irrelevant – and it might – we should expect that the West in particular, having enjoyed centuries of the high inventiveness that made its nations the rich ones they are today, would be set on a path to decline unless it brings in inventive people from elsewhere. Since the decrease in inventiveness reaches even 3rd generation immigrants (1st and 2nd are largely immune), these nations would need to attract ongoing immigration to survive in a competitive global environment. So one consequence of this pollution is that it requires increasing immigration to maintain a prosperous economy. As AI increasingly makes up for these deficiencies, the effect would drop in importance, but it will still have an impact until AI exceeds the applicable intelligence levels of the top male scientists and engineers. By ‘applicable’, I’m recognizing that different aspects of intelligence might be appropriate to inventiveness and insight, and a simple IQ measurement might not be a sufficient indicator.

Another interesting aspect of AI/gender interaction is that AI is currently being criticised from some directions for bias, because it is trained on massive existing datasets. These datasets contain actual data rather than ideological spin, so ‘insights’ are not always politically correct. Nevertheless, they could be genuinely affected by real biases in data collection. While such biases may well exist, it is not easy to determine what they are without a ‘correct’ dataset to compare against. That introduces a great deal of subjectivity, because ‘correct’ is a very politically sensitive term. There would be no agreement on the correct rules for dataset collection or processing. Pressure groups will always demand favour for their favorite groups, and any results suggesting that one group is better or worse than another will always meet objections from activists, who will demand changes to the rules until their own notion of ‘equality’ results. If AI is trained to be politically correct rather than to reflect the real world, that will inevitably reduce the correlation between AI’s world models and actual reality, and reduce its effective general intelligence. I’d be very much against sabotaging AI by brainwashing it to conform to current politically correct fashions, but then I don’t control AI companies. PC distortion of AI may result from any pressure group or prejudice – race, gender, sexuality, age, religion, political leaning and so on. Now that the IT industry seems to have already caved in to PC demands, the future for AI will inevitably be sub-optimal.

A combination of feminization, decreasing heterosexuality and fast-reducing sperm counts would result in reducing reproductive rate among xenoestrogen exposed communities, again with 1st and 2nd generation immigrants immune. That correlates well with observations, albeit there are other possible explanations. With increasing immigration, relatively higher reproductive rates among recent immigrants, and reducing reproduction rates among native (3rd generation or more) populations, high ethnic replacement of native populations will occur. Racial mix will become very different very quickly, with groups resident longest being displaced most. Allowing xenoestrogens to remain is therefore a sort of racial suicide, reverse ethnic cleansing. I make no value judgement here on changing racial mix, I’m just predicting it.

With less testosterone and more men resisting military activities, exposed communities will also become more militarily vulnerable and consequently less influential.

Now increasingly acknowledged, this pollution is starting to be tackled. A few of these chemicals have been banned and more are likely to follow. If successful, effects will start to disappear, and new babies will no longer be affected. But even that will create another problem, with two generations of people with significantly different characteristics from those before and after them. These two generations will have substantially more transgender people, more feminine men, and fewer macho men than those following. Their descendants may have all the usual inter-generational conflicts but with a few others added.

LGBTQ issues are topical and ubiquitous. Certainly we must aim for a society that treats everyone with equality and dignity as far as possible, but we should also aim for one where people’s very nature isn’t dictated by pollution.


Guest Post: Blade Runner 2049 is the product of decades of fear propaganda. It’s time to get enlightened about AI and optimistic about the future

This post from occasional contributor Chris Moseley

News from several months ago that more than 100 experts in robotics and artificial intelligence were calling on the UN to ban the development and use of killer robots is a reminder of the power of humanity’s collective imagination. Stimulated by countless science fiction books and films, robotics and AI are a potent feature of what futurist Alvin Toffler termed ‘future shock’. AI and robots have become the public’s ‘technology bogeymen’, more fearsome curse than technological blessing.

And yet curiously it is not so much the public that is fomenting this concern as the leading minds of the technology industry. Tesla’s Elon Musk and Stephen Hawking were among the most prominent of the 116 tech experts who signed an open letter asking the UN to ban autonomous weapons in a bid to prevent an arms race.

These concerns appear to emanate from decades of titillation, driven by pulp science fiction writers insistent on foretelling a dark, foreboding future where intelligent machines, loosed from their binds, destroy mankind. A case in point: this autumn, a sequel to Ridley Scott’s Blade Runner has been released. Blade Runner, and 2017’s Blade Runner 2049, are of course a glorious tour de force of story-telling and amazing special effects. The concept for both films came from US author Philip K. Dick’s 1968 novel, Do Androids Dream of Electric Sheep?, in which androids, claimed to possess no sense of empathy, eventually require killing (“retiring”) when they go rogue. Dick’s original novel is an entertaining but utterly bleak vision of the future, without much latitude to consider a brighter, more optimistic alternative.

But let’s get real here. Fiction is fiction; science is science. For the men and women who work in the technology industry, the notion that myriad Frankenstein monsters can be created from robots and AI technology is assuredly both confused and histrionic. The latest smart technologies might seem to suggest a frightful and fateful next step, a James Cameron Terminator nightmare scenario. It might suggest a dystopian outcome, but rational thought ought to lead us to suppose that this won’t occur, because we have historical precedent on our side. We shouldn’t be drawn to this dystopian idée fixe, because summoning golems and ghouls ignores today’s global arsenal of weapons and the fact that, more than 70 years after Hiroshima, nuclear holocaust has been kept at bay.

By stubbornly pursuing the dystopian nightmare scenario, we deny ourselves the chance to marvel at the technologies that are in fact helping mankind daily. Now frame this thought in terms of human evolution. For our ancient forebears, a beneficial change in physiology might spread across the human race over the course of a hundred thousand years. Today’s version of evolution – the introduction of a compelling new technology – spreads throughout a mass audience in a week or two.

Curiously, for all this light-speed evolution, mass annihilation remains absent – we live on, progressing, evolving and improving ourselves.

And in the workplace, another domain where our unyielding dealers of dystopia have exercised their thoughts, technology is of course raising a host of concerns about the future. Some of these concerns are based on misconceptions surrounding AI. Machines, for example, are not original thinkers and are unable to set their own goals. And although machine-learning systems are able to acquire new information through experience, for the most part they are still fed information to process. Humans are still needed to set goals, provide the data that fuels artificial intelligence, and apply critical thinking and judgment. The familiar symbiosis of humans and machines will continue to be salient.

Banish the menace of so-called ‘killer robots’ and AI taking your job, and a newer, fresher world begins to emerge. With this more optimistic mind-set in play, what great feats can be accomplished through the continued interaction between artificial intelligence, robotics and mankind?

Blade Runner 2049 is certainly great entertainment – as Robbie Collin, The Daily Telegraph’s film critic, writes of “Roger Deakins’s head-spinning cinematography – which, when it’s not gliding over dust-blown deserts and teeming neon chasms, keeps finding ingenious ways to make faces and bodies overlap, blend and diffuse” – but great though the art is, isn’t it time to change our thinking and recast the world in a more optimistic light?

——————————————————————————————

Just a word about the film itself. Broadly, director Denis Villeneuve’s done a tremendous job with Blade Runner 2049. One stylistic gripe, though. While one wouldn’t want Villeneuve to direct a slavish homage to Ridley Scott’s original, the alarming switch from the dreamlike techno miasma (most notably, giant nude step-out-the-poster Geisha girls), to Mad Max II Steampunk (the junkyard scenes, complete with a Fagin character) is simply too jarring. I predict that there will be a director’s cut in years to come. Shorter, leaner and sans Steampunk … watch this space!

Author: Chris Moseley, PR Manager, London Business School

cmoseley@london.edu

Tel +44 7511577803