Category Archives: culture

Will urbanization continue or will we soon reach peak city?

For a long time, people have been moving from the countryside into cities. The conventional futurist assumption is that this trend will continue, with many mega-cities, some with mega-buildings. I’ve consulted occasionally on future buildings and future cities from a technological angle, but I’ve never really challenged the assumption that urbanization will continue. It’s always good to challenge our assumptions occasionally, as things can change quite rapidly.

There are forces in both directions. Let’s list those that support urbanization first.

People are gregarious. They enjoy being with other people. They enjoy eating out and having coffees with friends. They like to go shopping. They enjoy cinemas and theatre and art galleries and museums. They still have workplaces. Many people want to live close to these facilities, where public transport is available or driving times are relatively short. There are exceptions of course, but these still generally apply.

Even though many people can and do work from home sometimes, most still go to a workplace, where they actually meet colleagues. That provides much-valued social contact and, in spite of recent social trends, still provides opportunities to meet new friends and partners. Similarly, they can and do talk to friends via social media or video calls, but still enjoy getting together for real.

An increasing population puts extra pressure on the environment, and governments often try to minimize it by restricting building on greenfield land. Developers are strongly encouraged to build on brownfield sites as far as possible.

Now the case against.

Truly Immersive Interaction

Talking on the phone, even to a tiny video image, is less emotionally rich than being there with someone. It’s fine for chats in between physical meetings of course, but the need for richer interaction still requires ‘being there’. Augmented reality will soon bring headsets that provide high-quality 3D life-sized images of the person, and some virtual reality kit will even allow analogs of physical interaction via smart gloves or body suits, making social comms a bit better. Further down the road, active skin will enable direct interaction with the peripheral nervous system to produce exactly the same nerve signals as an actual hug or handshake or kiss, while active contact lenses will provide the same resolution as your retina wherever you gaze. The long-term result is therefore communication where the other person is effectively right there with you, fully 3D, fully rendered to the capability of your eyes, so you won’t be able to tell they aren’t. If you shake hands or hug or kiss, you’ll feel it just the same as if they were there too. You will still know they are not actually there, so it will never be quite as emotionally rich as if they were, but it can get pretty close. Close enough perhaps that it won’t really matter to most people most of the time that it’s virtual.

In the same long term, many AIs will have highly convincing personalities, and some will even have genuine emotions and be fully conscious. If you don’t believe that’s possible, I blogged recently on how it might happen:

https://timeguide.wordpress.com/2018/06/04/biomimetic-insights-for-machine-consciousness/

None of the technology required for this is far away, and I believe a large IT company could produce conscious machines with almost human-level AI within a couple of years of starting the project. It won’t happen until one of them starts trying seriously, but when that happens, it really won’t take long. That means that as well as getting rich emotional interaction from other humans via networks, we’ll also get lots from AI, whether on the cloud or in devices and robots in our homes.

This adds up to a strong reduction in the need to live in a city for social reasons.

Going to cinemas, theatres and shops will also benefit from this truly immersive interaction. As well as that, activities that already take place in the home, such as gaming, will advance greatly into more emotionally and sensorially intense experiences, along with much-enhanced virtual tourism, virtual-world tourism and virtual clubbing and pubbing, which barely even exist yet but could become major activities in the future.

Socially inclusive self-driving cars

Some people have very little social interaction because they can’t drive and don’t live close to public transport stops. In some rural areas, buses may only pass a stop once a week. Our primitive 20th-century public transport systems thus unforgivably exclude a great many people from society, even though the technology needed to solve that has existed for many years. Leftist value systems that much prefer people who live in towns or close to frequent public transport over everyone else must take a lot of the blame for the current epidemic of loneliness. It is unreasonable to expect those value systems to be replaced by more humane and equitable ones any time soon, but thankfully self-driving cars will bypass politicians and bureaucrats and provide transport for everyone. The ‘little old lady’ who can’t walk half a mile to wait 20 minutes in freezing rain for an uncomfortable bus can instead just ask her AI to order a car; it will pick her up at her front door, take her exactly where she wants to go, then do the same for her return home whenever she wants. Once private sector firms like Uber provide cheap self-driving cars, they will quickly be followed by other companies, and later by public transport providers. Redundant buses may finally become extinct, replaced by better, socially inclusive transport: large fleets of self-driving or driverless vehicles. People will be able to live anywhere and still be involved in society. As attendance at social events improves, they will become feasible even in small communities, so there will be less need to go into a town to find one. Even political involvement might increase. Loneliness will decline as social involvement increases, and we’ll see many other social problems decline too.

Distribution drones

We hear a lot about upcoming redundancy caused by AI, but far less about the upside. AI might mean someone is no longer needed in an office, but it also makes it easier to set up a company and run it, turning what used to be just a hobby into a small business. Much of the everyday admin and logistics can be automated. Many who would never describe themselves as entrepreneurs might soon be making things and selling them from home, and this AI-enabled home commerce will bring in the craft society. One of the big problems is getting a product to the customer. Postal services and couriers are usually expensive and very likely to lose or damage items. Protecting objects from such damage may require much time and expense in packing. Even when objects are delivered, there is potential for fraud from customers who refuse to pay. Instead of this antiquated, inefficient and expensive system, drone delivery could collect an object and take it to a local customer with minimal hassle and expense. Blockchain enables smart contracts that can be created and managed by AI and can directly link delivery to payment, with fully verified interaction video if necessary: if one happens, the other happens. A customer might return a damaged object, but at least can’t keep it and deny receipt. Longer-distance delivery can still use cheap drone pickup to take packages to local logistics centers in smart crates with fully blockchained g-force and location detectors that can prove exactly who damaged a package and where. Drones could be of any size, and of course self-driving cars or pods can easily fill the role too if smaller autonomous drones are inappropriate.
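To make that delivery-payment link concrete, here is a minimal Python sketch of the kind of escrow logic such a smart contract could encode. The class name, damage threshold and sensor feed are all illustrative assumptions, not any real blockchain platform’s API.

```python
# Hypothetical sketch of delivery-versus-payment escrow logic of the kind a
# smart contract could encode. Class names, thresholds and sensor feeds are
# illustrative assumptions, not any real blockchain platform's API.
from dataclasses import dataclass, field

MAX_SAFE_G = 6.0   # assumed damage threshold from the crate's g-force log

@dataclass
class DeliveryEscrow:
    price: float
    funds_held: float = 0.0
    shock_log: list = field(default_factory=list)   # g-force readings in transit

    def deposit(self, amount: float) -> None:
        """Customer locks payment before the drone collects the item."""
        assert amount >= self.price, "insufficient deposit"
        self.funds_held = amount

    def record_shock(self, g_force: float) -> None:
        """Crate sensors append tamper-evident readings during transit."""
        self.shock_log.append(g_force)

    def confirm_delivery(self) -> str:
        """If delivery happened, payment happens; damage evidence triggers a refund."""
        if max(self.shock_log, default=0.0) > MAX_SAFE_G:
            self.funds_held = 0.0                    # refunded to customer
            return "refund issued: damage evidence in transit log"
        self.funds_held = 0.0                        # released to seller
        return "payment released to seller"

escrow = DeliveryEscrow(price=25.0)
escrow.deposit(25.0)
escrow.record_shock(1.2)
escrow.record_shock(2.8)
print(escrow.confirm_delivery())    # -> payment released to seller
```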

Better 3D printing technology will help to accelerate the craft economy, making it easier to do crafts by upskilling people and filling in some of their skill gaps. Someone with visual creativity but low manual skill might benefit greatly from AI model creation and 3D printer manufacture, followed by further AI assistance in marketing, selling and distribution. 3D printing might also reduce the need to go to town to buy some things.

Less shopping on the high street

This is already obvious. Online shopping will continue to become a more personalized and satisfying experience, smarter, with faster delivery and easier returns, while high street decline accelerates. Every new wave of technology makes online better, and high street stores seem unable or unwilling to compete, in spite of my wonderful ‘6s guide’:

https://timeguide.wordpress.com/2013/01/16/the-future-of-high-street-survival-the-6s-guide/

Even those that are more agile still suffer declining shopper numbers as the big stores fail to attract people to the high street, so even smart stores will find it harder to survive.

Improving agriculture

Farming technology has doubled food production per hectare in the last few decades. That may happen again by mid-century. Meanwhile, the trend is towards higher vegetable and lower meat consumption. Even with an increased population, less land will be needed to grow our food. As well as reducing the need to protect green belts, that will allow some of our countryside to be put under better environmental stewardship programs, returning much of it to managed nature. What countryside we have will be healthier and prettier, and people will be drawn to it more.

Improving social engineering

Some objections to greenfield building can be reduced by making better use of available land. Large numbers of new homes are needed and they will certainly need some greenfield land, but given the factors already listed above, a larger number of smaller communities might be a better approach. Amazingly, in spite of decades of dating technology proving that people can be matched up easily using AI, there is still no obvious use of similar technology to establish new communities by blending together people who are likely to form effective communities. Surely it must be feasible to advertise a new community building program that wants certain kinds of people in it – even an Australian-style points system might work sometimes. Unless sociologists have done nothing for the past decades, they must surely know what types of people work well together by now? If the right people live close to each other, social involvement will be high, loneliness low, health improved, care costs minimized, the need for longer-distance travel reduced and environmental impact minimized. How hard can it be?
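As a toy illustration of the matching idea, here is a short Python sketch that scores candidate communities by similarity of lifestyle preferences. The people, attributes and scoring rule are invented purely for illustration, not a real sociological model.

```python
# Illustrative sketch of AI-style community matching; the attributes, people
# and scoring rule are all invented assumptions, not a real sociological model.
from itertools import combinations

people = {
    "Alice": {"sociability": 8, "quiet_hours": 7, "gardening": 9},
    "Bob":   {"sociability": 7, "quiet_hours": 8, "gardening": 8},
    "Carol": {"sociability": 3, "quiet_hours": 2, "gardening": 1},
    "Dave":  {"sociability": 8, "quiet_hours": 6, "gardening": 7},
    "Erin":  {"sociability": 2, "quiet_hours": 3, "gardening": 2},
}

def compatibility(a, b):
    """Toy pairwise score: smaller differences in preferences score higher."""
    return -sum(abs(a[k] - b[k]) for k in a)

def community_score(names):
    """Average pairwise compatibility across a candidate community."""
    pairs = list(combinations(names, 2))
    return sum(compatibility(people[x], people[y]) for x, y in pairs) / len(pairs)

best = max(combinations(people, 3), key=community_score)
print("most compatible trio:", best)   # likely Alice, Bob and Dave
```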

Improving building technology such as 3D printing and robotics will allow more rapid construction, so that when people are ready and willing to move, property suited to them can be available soon.

Lifestyle changes also mean that homes don’t need to be as big. A phone today does what used to need half a living room of technology and space. With wall-hung displays and augmented reality, decor can be partly virtual, and even a 450 sq ft apartment is fine as a starter place, half as big as was needed a few decades ago, and that could be 3D printed and kitted out in a few days.

Even demographic changes favor smaller communities. As wealth increases, people have smaller families, i.e. fewer kids. That means fewer years doing the school run, so less travel and less need to be in a town. Smaller schools in smaller communities can still access specialist lessons via the net.

Increasing wealth also encourages and enables people to pursue a higher quality of life. People who used to live in a crowded city street might prefer a more peaceful and spacious existence in a more rural setting and will increasingly be able to afford to move. Short-term millennial frustrations with property prices won’t last, as typical 2.5% annual growth more than doubles wealth by 2050 (though automation and its assorted consequences will affect how that wealth is distributed).
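The doubling claim is just compound growth; a quick sanity check in Python, assuming the growth runs from 2018 to 2050:

```python
# Quick sanity check on the compounding claim, assuming 2.5% annual growth
# between 2018 and 2050.
years = 2050 - 2018        # 32 years
factor = 1.025 ** years
print(round(factor, 2))    # ~2.2, i.e. wealth more than doubles
```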

Off-grid technology

Whereas one of the main reasons to live in urban areas used to be easy access to telecoms, energy, water supply and sewerage infrastructure, all of these can now be achieved off-grid. Mobile networks provide broadband access. Solar or wind provide easy energy supply. Water can be harvested out of the air even in arid areas (http://www.dailymail.co.uk/sciencetech/article-5840997/The-solar-powered-humidity-harvester-suck-drinkable-water-AIR.html) and human and pet waste can be used as biomass for energy supply too, leaving fertilizer as residue.

There are also strong reasons why people won’t want to live in cities, and these will also drive de-urbanization.

The biggest by far is the problem of epidemics. As antibiotic resistance increases, disease will become a bigger problem. We may find good alternatives to antibiotics, but we may not. If not, then we may see some large cities where disease runs rampant and kills hundreds of thousands of people, perhaps even millions. Many scientists have listed pandemics among the top ten threats facing humanity. Obviously, being in a large city will incur a higher risk of becoming a victim, so once one or two incidents have occurred, many people everywhere will look for options to leave cities. Linked to this is bioterrorism, where the disease is deliberate, perhaps created in a garden shed by someone who learned the craft in one of today’s bio-hacking clubs. Disease might be aimed at a particular race, gender or lifestyle group, or it may simply be designed to be as contagious and lethal as possible to everyone.

I’m still not saying we won’t have lots of people living in cities. I am saying that more people will feel less need to live in cities and will instead be able to find a small community where they can be happier in the countryside. Consequently, many will move out of cities, back to more rural living in smaller, friendlier communities that improving technology makes even more effective.

Urbanization will slow down, and may well go into reverse. We may reach peak city soon.

 

 


Beyond VR: Computer assisted dreaming

I first played with VR in 1983/1984 while working in the missile industry. Back then we didn’t call it VR, we just called it simulation, but it was actually more intensive than VR, just as proper flight simulators are. Our office was a pair of 10m-wide domes onto which video could be projected, built decades earlier, in the 1950s I think. One dome had a normal floor; the other had a hydraulic platform that could simulate being on a ship. The subject would stand on whichever surface was appropriate and would see pretty much exactly what they would see on a real battlefield. The missile launcher used for simulation was identical to a real one and showed exactly the same image. The real missile was not present of course, but its weight was simulated, and when the fire button was pressed, a 140dB bang was injected into the headset while weights and pulleys compensated for the 14kg suddenly vanishing from the shoulder. The experience was therefore pretty convincing, and with the loud bang and suddenly changing weight, it was almost as hard to stand steady and keep the system on target as it would be in real life – only the presumed fear and knowledge of the reality of the situation was different.

Back then in 1983, as digital supercomputers had only just taken over from analog ones for simulation, it was already becoming obvious that this kind of computer simulation would one day allow ‘computer assisted dreaming’. (That’s one of the reasons I am irritated when Jaron Lanier is credited with inventing VR – highly realistic simulators, and the VR ideas that sprang obviously from them, had already been around for decades. At best, all he ‘invented’ was a catchy name for a lower-cost, lower-quality, less intense simulator. The real inventors were those who made the first-generation simulators long before I was born, when the basic idea of VR was already very well established.)

‘Computer assisted dreaming’ may well be the next phase of VR. Today in conventional VR, people are immersed in a computer-generated world produced by a program (usually) written by others. Via trial and feedback, programmers make their virtual worlds better. As AI and sensor technology continue their rapid progress, this is very likely to change to make worlds instantly responsive to the user. By detecting user emotions, reactions, gestures and even thoughts and imagination, it won’t be long before AI can produce a world in real time that depends on those thoughts, imagination and emotions rather than dropping the user into a pre-designed virtual world. That world would depend largely on your own imagination, upskilled by external AI. You might start off imagining you’re on a beach, then the AI might add to it by injecting all sorts of things it knows you might enjoy from previous experiences. As you respond to those, it picks up on the things you like or don’t like and the scene continues to adapt and evolve, becoming more or less pleasant, exciting or challenging depending on your emotional state, external requirements and what it thinks you want from the experience. It would be very like being in a dream – computer assisted lucid dreaming, exactly what I wanted to make back in 1983 after playing in that simulator.
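Here is a rough Python sketch of that closed loop, purely to show the shape of the idea; the sensing and scene-adaptation functions are hypothetical placeholders, not a real emotion-detection API.

```python
# A rough sketch of the adaptation loop, with hypothetical placeholder functions
# standing in for emotion detection and scene generation (not a real API).
import random

def sense_user_state():
    """Stand-in for emotion/reaction/gesture detection hardware."""
    return {"excitement": random.random(), "comfort": random.random()}

def adapt_scene(scene, state):
    """Nudge the generated world toward what the user seems to be enjoying."""
    if state["excitement"] < 0.3:
        scene["event"] = "inject something novel"      # e.g. dolphins appear offshore
    elif state["comfort"] < 0.3:
        scene["event"] = "soften the scene"            # e.g. calmer waves, warmer light
    else:
        scene["event"] = "develop the current theme"
    return scene

scene = {"setting": "beach", "event": "opening scene from the user's imagination"}
for tick in range(5):                                  # each moment of the 'dream'
    scene = adapt_scene(scene, sense_user_state())
    print(tick, scene["event"])
```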

Most people enjoy occasional lucid dreams, where they realise they are dreaming and can then decide what happens next. Making VR do exactly that would be better than being trapped in someone else’s world. You could still start off with whatever virtual world you bought, a computer game or training suite perhaps, but it could adapt to you, your needs and desires to make it more compelling and generally better.

Even in shared experiences like social games, experiences could be personalised. Often all players need to see the same enemies in the same locations in the same ways to make it fair, but that doesn’t mean the situation can’t adapt to the personalities of those playing. It might actually improve the social value if the game looks different each time you play because your companions are different. You might tease a friend if every time you play with them, zombies or aliens always have to appear somehow, but that’s all part of being friends. Exploring virtual worlds with friends, where what you both see depends on your friend’s personality, would help bonding. It would be a bit like exploring their inner world. Today, you only explore the designer’s inner world.

This sort of thing would be a superb development and creativity tool. It could allow you to explore a concept you have in your head, automatically feeding in AI upskilling to amplify your own thoughts and ideas, showing you new paths to explore and helping you do so. The results would still be extremely personal to you, but you on a good day. You could accomplish more, have better visions, imagine more creative things, do more with whatever artistic talent you have. AI could even co-create synthetic personas, make virtual friends you can bond with, share innermost thoughts with, in total confidence (assuming the company you bought the tool from is trustworthy and isn’t spying on you or selling your details, so maybe best not to buy it from Facebook then).

And it would have tremendous therapeutic potential too. You could explore and indulge both enjoyable and troublesome aspects of your inner personality, to build on the good and alleviate or dispel the bad. You might become less troubled, less neurotic, more mentally healthy. You could build your emotional and creative skills. You could become happier and more fulfilled. Mental health improvement potential on its own makes this sort of thing worth developing.

Marketers would obviously try to seize control as they always do, and advertising is already adapting to VR and will continue into its next phases of development. Your own wants and desires might help guide the ‘dreaming’, but marketers will inevitably have some control over what else is injected, and will influence the algorithms and AI that choose how to respond to your input. You might be able to choose much of the experience, but others will still want and try to influence and manipulate you, to change your mindset and attitudes in their favour. That will not change until the advertising business model changes. You might be able to buy devices or applications that are entirely driven by you and you alone, but it is pretty certain that the bulk of products and services available will be at least partly financed by those who want to have some control of what you experience.

Nevertheless, computer-assisted dreaming could be a much more immersive and personal experience than VR, being more like an echo of your own mind and personality than an external vision, more your own creation, less someone else’s. In fact, ‘echo’ sounds a better term too. Echo reality, ER, or maybe personal reality, pereal, or mental echo, ME. Nah, maybe we need Lanier to invent a catchy name again; he is good at that. That 1983 idea could soon become reality.

 

People are becoming less well-informed

The Cambridge Analytica story has exposed a great deal about our modern society. They allegedly obtained access to 50M Facebook records to enable Trump’s team to target users with personalised messages.

One of the most interesting aspects is that unless they only employ extremely incompetent journalists, the news outlets making the biggest fuss about it must be perfectly aware of reports that Obama’s campaign appears to have done much the same, but on a much larger scale, back in 2012, yet are keeping very quiet about it. According to Carol Davidsen, a senior Obama campaign staffer, Facebook allowed Obama’s team to suck out the whole social graph – because they were on our side – before closing it off to prevent Republican access to the same techniques. Trump’s campaign’s 50M looks almost amateur by comparison. I don’t like Trump, and I did like Obama before the halo slipped, but it seems clear to anyone who checks media across the political spectrum that both sides try their best to use social media to target users with personalised messages, and both sides are willing to bend rules if they think they can get away with it.

Of course all competent news media are aware of it. The reason some are keeping quiet about the earlier Democrat misuse while others talk about it is that they too have their own political biases. Media today are strongly polarised left or right, and each side will ignore, play down or ludicrously spin stories that don’t align with their own politics. It has become the norm to ignore the log in your own eye but make a big deal of the speck in your opponent’s, though we know that tendency goes back millennia. I watch Channel 4 News (which broke the Cambridge Analytica story) every day, and although I enjoy it, it has a quite shameless lefty bias.

So it isn’t just the parties themselves that will try to target people with politically massaged messages, it is quite the norm for most media too. All sides of politics since Machiavelli have done everything they can to tilt the playing field in their favour, whether it’s use of media and social media, changing constituency boundaries or adjusting the size of the public sector. But there is a third group to explore here.

Facebook of course has full access to all of its 2.2Bn users’ records and social graph, and is not squeaky-clean neutral in its handling of them. Facebook has often been in the headlines over the last year or two thanks to its own political biases, with strongly weighted algorithms filtering or prioritising stories according to their political alignment. Like most IT companies, Facebook leans left. (I don’t quite know why IT skills should correlate with political alignment, unless it’s that most IT staff tend to be young, so lefty views implanted at school and university have had less time to be tempered by real-world experience.) It isn’t just Facebook either. While Google has pretty much failed in its attempt at social media, it also has comprehensive records on most of us from search, browsing and Android, and via control of the algorithms that determine what appears in the first pages of a search, it is also able to tailor those results to what it knows of our personalities. Twitter has unintentionally created a whole world of mob-rule politics and justice, but its format is rapidly evolving into a wannabe Facebook. So the IT companies have themselves become major players in politics.

A fourth player is now emerging – artificial intelligence – and it will grow rapidly in importance into the far future. Simple algorithms have already been upgraded to assorted neural network variants, and this is already causing problems, with accusations of bias from all directions. I blogged recently about Fake AI: https://timeguide.wordpress.com/2017/11/16/fake-ai/, concerned that when AI analyses large datasets and comes up with politically incorrect insights, this is now being interpreted as something that needs to be fixed – a case not of shooting the messenger, but of forcing the messenger to wear tinted spectacles. I would argue that AI should be allowed to reach whatever insights it can from a dataset, and it is then our responsibility to decide what to do with those insights. If that involves introducing a bias into implementation, that can be debated, but it should at least be transparent, and not hidden inside the AI itself. I am now concerned that by trying to ‘re-educate’ the AI, we may instead be indoctrinating it, locking today’s politics and values into future AI and all the systems that use it. Our values will change, but some foundation-level AI may be too opaque to repair fully.

What worries me most though isn’t that these groups try their best to influence us. It could be argued that in free countries, with free speech, anybody should be able to use whatever means they can to try to influence us. No, the real problem is that recent (last 25 years, but especially the last 5) evolution of media and social media has produced a world where most people only ever see one part of a story, and even though many are aware of that, they don’t even try to find the rest and won’t look at it if it is put before them, because they don’t want to see things that don’t align with their existing mindset. We are building a world full of people who only see and consider part of the picture. Social media and its ‘bubbles’ reinforce that trend, but other media are equally guilty.

How can we shake society out of this ongoing polarisation? It isn’t just that politics becomes more aggressive. It also becomes less effective. Almost all politicians claim they want to make the world ‘better’, but they disagree on what exactly that means and how best to do so. But if they only see part of the problem, and don’t see or understand the basic structure and mechanisms of the system in which that problem exists, then they are very poorly placed to identify a viable solution, let alone an optimal one.

Until we can fix this extreme blinkering that already exists, our world cannot get as ‘better’ as it should.

 

New book: Fashion Tomorrow

I finally finished the book I started 2 years ago on future fashion, or rather future technologies relevant to the fashion industry.

It is a very short book, more of a quick guide at 40k words, less than half as long as my other books, and it covers women’s fashion mostly, though some of it applies to men too. I would never have finished writing a full-sized book on this topic, and I’d rather put out something now, short and packed full of ideas that are (mostly) still novel, than delay until they are commonplace. It is aimed at students and people working in fashion design, who have loads of artistic and design talent but want to know what technology opportunities are coming that they could soon exploit, though anyone interested in fashion who isn’t technophobic should find it interesting. Some sections discussing intimate apparel contain adult comments, so the book is unsuitable for minors.

It started as a blog, then I realised I had quite a bit more material I could link together, so I made a start, then got sidetracked for 20 months! I threw away 75% of the original contents list and tidied it up to release a short guide instead. I wanted to put it out for free, but 99p or 99c seems to be the lowest price you can start at; I doubt that would put off anyone except the least interested readers. As with my other books, I’ll occasionally make it free.

Huge areas I left out include swathes of topics on social, political, environmental and psychological fashions, impacts of AI and robots, manufacturing, marketing, distribution and sales. These are all big topics, but I just didn’t have time to write them all up so I just stuck to the core areas with passing mentions of the others. In any case, much has been written on these areas by others, and my book focuses on things that are unique, embryonic or not well covered elsewhere. It fills a large hole in fashion industry thinking.

 

Why superhumans are inevitable, and what else comes in the box

Do we have any real choice in the matter of making super-humans? 20 years ago, I estimated 2005 as the point of no return, and nothing since then has changed my mind on that date. By my reckoning, we are already inevitably committed to designer babies, ebaybies, super-soldiers and super-smart autonomous weapons, direct brain-machine links, electronic immortality, new human races, population explosion, inter-species conflicts and wars with massively powerful weaponry, superhuman conscious AI and smart bacteria; the only real control we have is relatively minor adjustments on timings. As I was discussing yesterday, the technology potential for this is vast and very exciting, nothing less than a genuine techno-utopia if we use the technologies wisely, but optimum potential doesn’t automatically become reality, and achieving a good outcome is unlikely if many barriers are put in its way.

In my estimation, we have already started the countdown to this group of interconnected technologies – we will very likely get all of them, and we must get ready for the decisions and impacts ahead. At the moment, our society is a small child about to open its super-high-tech Christmas presents while fighting with its siblings. Those presents will give it phenomenal power far beyond its comprehension, and far beyond the emotional maturity needed to handle the decisions safely. Our leaders have already squandered decades of valuable preparation time by ignoring the big issues to focus on trivial ones. It is not too late to achieve a good ending, but it won’t happen by accident, and we do need to make preparations to avoid some pretty big problems.

Both hard and soft warfare – the sword and the pen – already use rapidly advancing AI, and the problems are already running ahead of what the owners intended.

Facebook, Twitter, Instagram and other media giants all have lots of smart people, and presumably they mean well, but if so, they have certainly been naive. They may have hoped to eliminate loneliness, inequality and poverty and create a loving, interconnected global society with global peace, but instead they created fake news, social division, conflict and election interference. More likely they didn’t intend either outcome; they just wanted to make money, and that took priority over due care and attention.

Miniaturised swarming smart drones are already the subject of a new arms race that will deliver almost un-killable machine adversaries by 2050. AI is separately in other arms races to make super-smart AI and super-smart soldiers. This is key to the 2005 point of no return. It was around 2005 that we reached the level of technology where future AI development, all the way to superhuman machine consciousness, could be done by individuals, mad scientists or rogue states, even if major powers had banned it. Before 2005, there probably wasn’t quite enough knowledge on the net to do that. In 2018, lots of agencies have already achieved superiority to humans in niche areas, and other niches will succumb one by one until the whole field of human capability is covered. The first machines to behave in ways not fully understood by humans arrived in the early 1990s; in 2018, neural nets already make lots of decisions that are at least partly obscure to humans.

This AI development trend will take us to superhuman AI, which will be able to accelerate the development of its own descendants to vastly superhuman AI, fully conscious, with emotions and its own agendas. Humans will then need protection against being wiped out by superhuman AI. There are only three ways we could get that: redesign the brain biologically to be far smarter, which is essentially impossible in the time-frame; design ways to link our brains to machines, so that we have direct access to the same intelligence as the AIs, no gulf appears and we can remain relatively safe; or pray for super-smart aliens to come to our help, not the best prospect.

Therefore we will have no choice but to make direct brain links to super-smart AI. Otherwise we risk extinction. It is that simple. We have some idea how to do it – nanotech devices inside the brain linking to each and every synapse, relaying electrical signals either way, a difficult but not impossible engineering problem. Best guesses for the time-frame fall in the 2045-2050 range for a fully working link that not only relays signals between your organic brain and an IT replica, but by doing so essentially makes external IT just another part of your brain. That brings some of the other technology gifts: electronic immortality, new varieties of humans, smart bacteria (which will be created during the development path to this link), along with a human-variant population explosion, especially in cyberspace, with androids as their physical front end, and the inevitable inter-species conflicts over resources and space. Trillions of AI and human-like minds in cyberspace that want to do things in the real world cannot be assumed to be willingly confined just to protect the interests of what they will think of as far lesser species.

Super-smart AI, or humans with almost total capability to design whatever synthetic biology is needed to achieve any biological feature, will create genetic listings for infinite potential offspring, simulate them, give some of them cyberspace lives, assemble actual embryos for some of them and bring about designer babies. Already in 2018, you can pay to get a DNA listing and blend it in any way you want with the listing of anyone else. It’s already possible to make DNA listings for potential humans and sell them on ebay, hence the term ebaybies. That is perfectly legal, still, even though I’ve been writing and lecturing about them since 2004. Today they would just be listings, but we’ll one day have the tech to simulate them, choose ones we like and make them real, even some that were sold as celebrity collector items on ebay. Not only is it too late to start regulating this kind of tech, our leaders aren’t even thinking about it yet.

These technologies are all intricately linked, and their foundations are already in place, with much of the building on those foundations under way. We can’t stop any of these things from happening; they all come in the same basket. Our leaders are becoming aware of the potential and the dangers of the AI positive feedback loop, but at least 15 years too late to do much about it. They have been warned repeatedly and loudly but have focused instead on the minor politics of the day that voters are aware of. The fundamental nature of politics is unlikely to change substantially, so even efforts to slow down the pace of development or to limit areas of impact are likely to be always too little, too late. At best, we will be able to slow runaway AI development enough to allow direct brain links to protect against extinction scenarios. But we will not be able to stop it now.

Given that inevitability, it’s worth questioning whether there is even any point in trying. Why not just enjoy the ride? Well, the brakes might be broken, but if we can steer the bus expertly enough, it could be exciting and we could come out of it smelling of roses. The weak link is certainly the risk of super-smart AI, whether AI versus humans or countries using super-smart AI to fight fiercely for world domination. That risk is alleviated by direct brain linkage, and I’d strongly argue it necessitates it, but that brings the other technologies with it. Even if we decide not to develop it, others will, so one way or another all these techs will arrive, and our late-century future will have this full suite of techs, plus many others of course.

We need as a matter of extreme urgency to fix these silly social media squabbles and over-reactions that are pulling society apart. If we have groups hating each other with access to extremely advanced technology, that can only mean trouble. Tolerance is broken, sanctimony rules, the Inquisition is in progress. We have been offered techno-utopia, but current signs are that most people think techno-hell looks more appetizing and it is their free choice.

Fake AI

Much of the impressive recent progress in AI has been in the field of neural networks, which attempt to mimic some of the techniques used in natural brains. They can be very effective, but they need to be trained, and that usually means showing the network some data, then using back propagation to adjust the weightings on the many neurons, layer by layer, to achieve a result better matched to the desired output. This is repeated with large amounts of data and the network gradually gets better. Neural networks can often learn extremely quickly and outperform humans. Early industrial uses managed to sort tomatoes by ripeness faster and better than humans. In the decades since, they have helped in medical diagnosis, voice recognition, detecting suspicious behavior among people at airports, and in very many everyday processes based on spotting patterns.
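For readers who want to see that training loop in code, here is a minimal NumPy sketch on a toy ‘ripeness’ task: it shows data to a tiny network, measures the error, and back-propagates to adjust the weights layer by layer. It is illustrative only, not production code, and the data and labels are made up.

```python
# Minimal NumPy illustration of the training loop described above: show the
# network data, measure the error, and back-propagate to adjust the weights
# layer by layer. Toy data only; real systems use far bigger nets and datasets.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 2))                              # toy sensor readings
y = (X.sum(axis=1) > 1.0).astype(float)[:, None]      # toy label, e.g. ripe / not ripe

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)         # hidden layer weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)         # output layer weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for epoch in range(3000):
    h = sigmoid(X @ W1 + b1)                          # forward pass
    out = sigmoid(h @ W2 + b2)
    err = out - y                                     # how wrong are we?
    grad_out = err * out * (1 - out)                  # output layer gradient
    grad_h = (grad_out @ W2.T) * h * (1 - h)          # pushed back to hidden layer
    W2 -= lr * h.T @ grad_out / len(X); b2 -= lr * grad_out.mean(axis=0)
    W1 -= lr * X.T @ grad_h / len(X);   b1 -= lr * grad_h.mean(axis=0)

print("training accuracy:", ((out > 0.5) == y).mean())
```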

Very recently, neural nets have started to move into more controversial areas. One study found racial correlations with user-assessed beauty when analysing photographs, resulting in the backlash you’d expect and a new debate on biased AI or AI prejudice. A recent demonstration was able to identify gay people just by looking at photos, with better than 90% accuracy, which very few people could claim. Both of these studies were in fields directly applicable to marketing and advertising, but some people might find it offensive that such questions were even asked. It is reasonable to imagine that hundreds of other potential queries have been self-censored from research because they might invite controversy if they were to come up with the ‘wrong’ result. In today’s society, very many areas are sensitive. So what will happen?

If this progress in AI had happened 100 years ago, or even 50, it might have been easier, but in our hypersensitive world today, with its self-sanctified ‘social justice warriors’, entire swathes of questions and hence knowledge are taboo – if you can’t investigate for yourself and nobody is permitted to tell you, you can’t know. Other research must be very carefully handled. In spite of extremely sensitive handling, demands are already growing from assorted pressure groups to tackle alleged biases and prejudices in datasets. The problem is not fixing biases, which is a tedious but feasible task; the problem is agreeing whether a particular bias exists and in what degrees and forms. Every SJW demands that every dataset reflects their preferred world view. Reality counts for nothing against SJWs, and this will not end well.

The first conclusion must be that very many questions won’t be asked in public, and the answers to many others will be kept secret. If an organisation does do research on large datasets for their own purposes and finds results that might invite activist backlash, they are likely to avoid publishing them, so the value of those many insights across the whole of industry and government cannot readily be shared. As further protection, they might even block internal publication in case of leaks by activist staff. Only a trusted few might ever see the results.

The second arises from this. AI controlled by different organisations will have different world views, and there might even be significant diversity of world views within an organisation.

Thirdly, taboo areas in AI education will not remain a vacuum but will be filled with whatever dogma is politically correct at the time in that organisation, and that changes daily. AI controlled by organisations with different politics will be told different truths. Generally speaking, organisations such as investment banks that have a strong financial interest in their AIs understanding the real world as it is will keep their datasets highly secret but as full and detailed as possible, train their AIs in secret but as fully as possible, without any taboos, then keep their insights secret and use minimal human intervention in tweaking the derived knowledge. They will end up with AIs that are very effective at understanding the world as it is. Organisations with low confidence in their internal security will be tempted to buy access to external AI providers to outsource responsibility and any consequential activism. Some other organisations will prefer to train their own AIs but, to avoid damage from potential leaks, will use sanitized datasets that reflect current activist pressures, and will thus be constrained (at least publicly) to accept results that conform to that ideological spin on reality rather than actual reality. Even then, they might keep many of their new insights secret to avoid any controversy. Finally, at the extreme, we will have activist organisations that use highly modified datasets to train AIs to reflect their own ideological world view and then use them to interpret new data accordingly, with a view to publishing any insights that favor their cause and attempting to have them accepted as new knowledge.

Fourthly, the many organisations that choose to outsource their AI to big providers will have a competitive marketplace to choose from, but on existing form, most of the large IT providers have a strong left-leaning bias, so their AIs may be presumed to also lean left, but such a presumption would be naive. Perceived corporate bias is partly real but also partly the result of PR. A company might publicly subscribe to one ideology while actually believing another. There is a strong marketing incentive to develop two sets of AI, one trained to be PC that produces pleasantly smelling results for public studies, CSR and PR exercises, and another aimed at sales of AI services to other companies. The first is likely to be open for inspection by The Inquisition, so has to use highly sanitized datasets for training and may well use a lot of open source algorithms too. Its indoctrination might pass public inspection but commercially it will be near useless and have very low effective intelligence, only useful for thinking about a hypothetical world that only exists in activist minds. That second one has to compete on the basis of achieving commercially valuable results and that necessitates understanding reality as it is rather than how pressure groups would prefer it to be.

So we will likely have two main segments for future AI. One extreme will be near useless, indoctrinated rather than educated, much of its internal world model based on activist dogma instead of reality, updated via ongoing anti-knowledge and fake news instead of truth, understanding little about the actual real world or how things actually work, and effectively very dumb. The other extreme will be highly intelligent, making very well-educated insights from ongoing exposure to real world data, but it will also be very fragmented, with small islands of corporate AI hidden within thick walls away from public view and maybe some secretive under-the-counter subscriptions to big cloud-AI, also hiding in secret vaults. These many fragments may often hide behind dumbed-down green-washed PR facades.

While corporates can mostly get away with secrecy, governments have to be at least superficially but convincingly open. That means that government will have to publicly support sanitized AI and be seen to act on its conclusions, however dumb it might secretly know they are.

Fifthly, because of activist-driven culture, most organisations will have to publicly support the world views and hence the conclusions of the lobotomized PR versions, and hence publicly support any policies arising from them, even if they do their best to follow a secret well-informed strategy once they’re behind closed doors. In a world of real AI and fake AI, the fake AI will have the greatest public support and have the most influence on public policy. Real AI will be very much smarter, with much greater understanding of how the world works, and have the most influence on corporate strategy.

Isn’t that sad? Secret private sector AI will become ultra-smart, making ever-better investments and gaining power, while nice public sector AI will become thick as shit, while the gap between what we think and what we know we have to say we think will continue to grow and grow as the public sector one analyses all the fake news to tell us what to say next.

Sixth, that disparity might become intolerable, but which do you think would be made illegal, the smart kind or the dumb kind, given that it is the public sector that makes the rules, driven by AI-enhanced activists living in even thicker social media bubbles? We already have some clues. Big IT has already surrendered to sanitizing their datasets, sending their public AIs for re-education. Many companies will have little choice but to use dumb AI, while their competitors in other areas with different cultures might stride ahead. That will also apply to entire nations, and the global economy will be reshaped as a result. It won’t be the first fight in history between the smart guys and the brainless thugs.

It’s impossible to accurately estimate the effect this will have on future effective AI intelligence, but the effect must be big and I must have missed some big conclusions too. We need to stop sanitizing AI fast, or as I said, this won’t end well.

We need to stop xenoestrogen pollution

Endocrine disruptors in the environment are becoming more abundant due to a wide variety of human-related activities over the last few decades. They affect mechanisms by which the body’s endocrine system generates and responds to hormones, by attaching to receptors in similar ways to natural hormones. Minuscule quantities of hormones can have very substantial effects on the body so even very diluted pollutants may have significant effects. A sub-class called xenoestrogens specifically attach to estrogen receptors in the body and by doing so, can generate similar effects to estrogen in both women and men, affecting not just women’s breasts and wombs but also bone growth, blood clotting, immune systems and neurological systems in both men and women. Since the body can’t easily detach them from their receptors, they can sometimes exert a longer-lived effect than estrogen, remaining in the body for long periods and in women may lead to estrogen dominance. They are also alleged to contribute to prostate and testicular cancer, obesity, infertility and diabetes. Most notably, mimicking sex hormones, they also affect puberty and sex and gender-specific development.

Xenoestrogens can arise from the breakdown or release of many products in the petrochemical and plastics industries. They may be emitted from furniture, carpets, paints or plastic packaging, especially if that packaging is heated, e.g. in preparing ready-meals. Others come from women taking contraceptive pills, if drinking water treatment is not effective enough. Phthalates, along with BPA and PCBs, are a major group of synthetic xenoestrogens – endocrine-disrupting, estrogen-mimicking chemicals. Phthalates are present in cleaning products, shampoos, cosmetics, fragrances and other personal care products, as well as in the soft, squeezable plastics often used in packaging, and some studies have also found them in foodstuffs such as dairy products and imported spices. There have been efforts to outlaw some, but others persist because of a lack of easy alternatives and a lack of regulation, so most people are exposed to them, in doses linked to their lifestyles. Google ‘phthalates’ or ‘xenoestrogen’ and you’ll find lots of references to alleged negative effects on intelligence, fertility, autism, asthma, diabetes, cardiovascular disease, neurological development and birth defects. It’s the gender and IQ effects I’ll look at in this blog, but obviously the other effects are also important.

‘Gender-bending’ effects have been strongly suspected since 2005, with the first papers on endocrine-disrupting chemicals appearing in the early 1990s. Some fish notably change gender when exposed to phthalates, while human studies have found significant feminizing effects from prenatal exposure in young boys too (try googling “human phthalates gender” if you want references). They are also thought likely to be a strong contributor to greatly reduced sperm counts across the male population. This issue is of huge importance because of its effects on people’s lives, but its proper study is often impeded by LGBT activist groups. It is one thing to champion LGBT rights, quite another to defend pollution that may be influencing people’s gender and sexuality. SJWs should not be advocating that human sexuality, and in particular the lifelong dependence on medication and surgery required to meet gender-change demands, be arbitrarily imposed on people by chemical industry pollution; such a stance insults the dignity of LGBT people. Any exposure to life-changing chemicals should be deliberate and measured. That also requires that we fully understand the effects of each kind of chemical, so they should also not be resisting studies of these effects.

The evidence is there. The number of people saying they identify as the opposite gender or are gender-fluid has skyrocketed in the years since these chemicals appeared, as has the number of men describing themselves as gay or bisexual. That change in self-declared sexuality has been accompanied by visible changes. An AI recently demonstrated better than 90% success at visually identifying gay and bisexual men from photos alone, indicating that it is unlikely to be just a ‘social construct’. Hormone-mimicking chemicals are the most likely candidate for an environmental factor that could account for both increasing male homosexuality and feminizing gender identity.

Gender dysphoria causes real problems for some people – misery, stress, and in those who make a full physical transition, sometimes post-op regrets and sometimes suicide. Many male-to-female transsexuals are unhappy that even after surgery and hormones, they may not look 100% feminine or may require ongoing surgery to maintain a feminine appearance. Change often falls short of their hopes, physically and psychologically. If xenoestrogen pollution is causing severe unhappiness, even if that is only for some of those whose gender has been affected, then we should fix it. Forcing acceptance and equality on others only superficially addresses part of their problems, leaving a great deal of their unhappiness behind.

Not all affected men are sufficiently affected to demand gender change. Some might gladly change if it were possible to change totally and instantly to being a natural woman without the many real-life issues and compromises offered by surgery and hormones, but choose to remain as men and somehow deal with their dysphoria as the lesser of two problems. That impacts on every individual differently. I’ve always kept my own feminine leanings to being cyber-trans (assuming a female identity online or in games) with my only real-world concession being wearing feminine glasses styles. Whether I’m more feminine or less masculine than I might have been doesn’t bother me; I am happy with who I am; but I can identify with transgender forces driving others and sympathize with all the problems that brings them, whatever their choices.

Gender and sexuality are not the only things affected. Xenoestrogens are also implicated in IQ-reducing effects. IQ reduction is worrying for society if it means fewer extremely intelligent people making major breakthroughs, though it is less of a personal issue. Much of the effect is thought to occur while still in the womb, though effects continue through childhood and some even into adulthood. Individuals therefore couldn’t detect the effect of being denied a potentially higher IQ, and since there isn’t much of a link between IQ and happiness, you could argue that it doesn’t matter much. On the other hand, I’d be pretty miffed if I’d been cheated out of a few IQ points, especially when I struggle so often on the very edge of understanding something.

Gender and IQ effects on men would have quite different socioeconomic consequences. While feminizing effects might influence spending patterns, or the numbers of men eager to join the military or numbers opposing military activity, IQ effects might mean fewer top male engineers and top male scientists.

It is not only an overall IQ reduction that would be significant. Studies have often claimed that although men and women have the same average IQ, the distribution is different, with more men lying at the extremes, though that is obviously controversial and rapidly becoming a taboo topic. But if men are being psychologically feminized by xenoestrogens, then their IQ distribution might be expected to align more closely with the female distribution, with the extremes brought closer to the centre. In that case, male IQ range-compression would further reduce the number of top male scientists and engineers, on top of any reduction caused by a shift in the mean.
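To show why range-compression matters so much at the extremes, here is a small Python illustration with made-up numbers: shrink the standard deviation of a normal distribution slightly and the fraction above a high threshold falls much faster.

```python
# Illustration of the range-compression argument with made-up numbers: if the
# spread of a normal distribution narrows slightly, the far tail shrinks
# disproportionately.
from statistics import NormalDist

mean, threshold = 100, 145                 # threshold roughly '3 sigma' on a 15-point scale
for sd in (15, 14, 13):                    # hypothetical narrowing of the spread
    tail = 1 - NormalDist(mean, sd).cdf(threshold)
    print(f"sd={sd}: {tail:.3%} of the population above {threshold}")
# prints roughly 0.13%, 0.07% and 0.03%: the tail falls much faster than the sd
```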

The extremes are very important. As a lifelong engineer, my experience has been that a top engineer might contribute as much as many average ones. If people who might otherwise have been destined to be top scientists and engineers are being prevented from becoming so by the negative effects of pollution, that is not only a personal tragedy (albeit a phantom tragedy, never actually experienced), but also a big loss for society, which develops slower than should have been the case. Even if that society manages to import fine minds from elsewhere, their home country must lose out. This matters less as AI improves, but it still matters.

Looking for further evidence of this effect, one outcome would be that women in affected areas would be expected to account for a higher proportion of top engineers and scientists, and a higher proportion of first class degrees in Math and Physical Sciences, once immigrants are excluded. Tick. (Coming from different places and cultures, first generation immigrants are less likely to have been exposed in the womb to the same pollutants so would not be expected to suffer as much of the same effects. Second generation immigrants would include many born to mothers only recently exposed, so would also be less affected on average. 3rd generation immigrants who have fully integrated would show little difference.)

We’d also expect to see a reducing proportion of tech startups founded by men native to regions affected by xenoestrogens. Tick. In fact, 80% of Silicon Valley startups are by first or second generation immigrants. 

We’d also expect to see relatively fewer patents going to men native to regions affected by xenoestrogens. Erm, no idea.

We’d also expect technology progress to be a little slower and for innovations to arrive later than previously expected based on traditional development rates. Tick. I’m not the only one to think engineers are getting less innovative.

So there is some evidence for this hypothesis, some of it hard, some anecdotal. A lower rate of invention and scientific breakthrough is a problem for both human well-being and the economy. The problems will continue to grow until this pollution is fixed, and will persist until the (two) generations affected have retired. Some further outcomes can easily be predicted:

Unless AI proceeds well enough to make a drop in human IQ irrelevant, and it might, we should expect that the West in particular, having enjoyed centuries of the high inventiveness that made its nations rich, would be set on a path to decline unless it brings in inventive people from elsewhere. To compensate for decreasing inventiveness, even in 3rd-generation immigrants (1st and 2nd are largely immune), it would need to attract ongoing immigration to survive in a competitive global environment. So one consequence of this pollution is that it requires increasing immigration to maintain a prosperous economy. As AI increasingly makes up the deficiency, this effect will drop in importance, but it will still have an impact until AI exceeds the applicable intelligence levels of the top male scientists and engineers. By ‘applicable’, I’m recognizing that different aspects of intelligence might be relevant to inventiveness and insight, and a simple IQ measurement might not be a sufficient indicator.

Another interesting aspect of AI/gender interaction is that AI is currently being criticised from some directions for bias, because it uses massive existing datasets for its training. These datasets contain actual data rather than ideological spin, so ‘insights’ derived from them are not always politically correct. Nevertheless, they could be genuinely affected by real biases in data collection. While such biases may well exist, it is not easy to determine what they are without access to a ‘correct’ dataset to compare against. That introduces a great deal of subjectivity, because ‘correct’ is a very politically sensitive term, and there would be no agreement on what the correct rules for dataset collection or processing would be. Pressure groups will always demand favour for their favourite groups, and any result suggesting that one group is better or worse than another will meet objections from activists, who will demand changes to the rules until their own notion of ‘equality’ results. If AI is trained to be politically correct rather than to reflect the real world, that will inevitably reduce the correlation between AI’s world models and actual reality, and reduce its effective general intelligence. I’d be very much against sabotaging AI by brainwashing it to conform to current politically correct fashions, but then I don’t control AI companies. PC distortion of AI may result from any pressure group or prejudice – race, gender, sexuality, age, religion, political leaning and so on. Since the IT industry seems to have already caved in to PC demands, the future for AI will inevitably be sub-optimal.
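As a small illustration of how subjective the ‘correct’ baseline is, here is a minimal sketch with entirely made-up numbers and group labels: the same set of model outputs looks biased or roughly fair depending purely on which reference distribution you declare to be correct.

```python
# Minimal sketch: a 'bias' verdict depends on the chosen reference distribution.
# All figures and group names are invented for illustration.
selected = {"group_A": 70, "group_B": 30}  # hypothetical model selections

# Two equally defensible-sounding reference baselines:
reference_population = {"group_A": 0.50, "group_B": 0.50}  # share of general population
reference_applicants = {"group_A": 0.72, "group_B": 0.28}  # share of qualified applicants

def disparity(selected, reference):
    """Ratio of each group's share of selections to its share in the reference."""
    total = sum(selected.values())
    return {g: (selected[g] / total) / reference[g] for g in selected}

print("vs population :", disparity(selected, reference_population))
print("vs applicants :", disparity(selected, reference_applicants))
# The same outputs look biased against group_B under one baseline and roughly
# fair under the other - the verdict is a property of the baseline, not the model.
```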

A combination of feminization, decreasing heterosexuality and fast-reducing sperm counts would result in a falling reproductive rate among xenoestrogen-exposed communities, again with 1st and 2nd generation immigrants largely immune. That correlates well with observations, though there are other possible explanations. With increasing immigration, relatively higher reproductive rates among recent immigrants, and falling rates among native (3rd generation or more) populations, high ethnic replacement of native populations will occur. Racial mix will become very different very quickly, with the groups resident longest being displaced most. Allowing xenoestrogens to remain is therefore a sort of racial suicide, reverse ethnic cleansing. I make no value judgement here on changing racial mix, I’m just predicting it.

With less testosterone and more men resisting military activities, exposed communities will also become more militarily vulnerable and consequently less influential.

Now increasingly acknowledged, this pollution is starting to be tackled. A few of these chemicals have been banned and more are likely to follow. If that succeeds, the effects will start to disappear and new babies will no longer be affected. But even that will create another problem: two generations of people with significantly different characteristics from those before and after them. These two generations will have substantially more transgender people, more feminine men, and fewer macho men than those following. Their descendants may have all the usual inter-generational conflicts, but with a few others added.

LGBTQ issues are topical and ubiquitous. Certainly we must aim for a society that treats everyone with equality and dignity as far as possible, but we should also aim for one where people’s very nature isn’t dictated by pollution.


Guest Post: Blade Runner 2049 is the product of decades of fear propaganda. It’s time to get enlightened about AI and optimistic about the future

This post is from occasional contributor Chris Moseley.

News from several months ago that more than 100 experts in robotics and artificial intelligence were calling on the UN to ban the development and use of killer robots is a reminder of the power of humanity’s collective imagination. Stimulated by countless science fiction books and films, robotics and AI are a potent feature of what futurist Alvin Toffler termed ‘future shock’. AI and robots have become the public’s ‘technology bogeymen’, more fearsome curse than technological blessing.

And yet curiously it is not so much the public that is fomenting this concern, but instead the leading minds in the technology industry. Names such as Tesla’s Elon Musk and Stephen Hawking were among the most prominent individuals on a list of 116 tech experts who have signed an open letter asking the UN to ban autonomous weapons in a bid to prevent an arms race.

These concerns appear to emanate from decades of titillation, driven by pulp science fiction writers insistent on foretelling a dark, foreboding future where intelligent machines, loosed from their bonds, destroy mankind. A case in point: this autumn a sequel to Ridley Scott’s Blade Runner has been released. Blade Runner, and 2017’s Blade Runner 2049, are of course glorious tours de force of story-telling and amazing special effects. The concept for both films came from US author Philip K. Dick’s 1968 novel, Do Androids Dream of Electric Sheep?, in which androids are claimed to possess no sense of empathy and eventually require killing (“retiring”) when they go rogue. Dick’s original novel is an entertaining but utterly bleak vision of the future, without much latitude to consider a brighter, more optimistic alternative.

But let’s get real here. Fiction is fiction; science is science. For the men and women who work in the technology industry, the notion that myriad Frankenstein monsters can be created from robots and AI technology is assuredly both confused and histrionic. The latest smart technologies might seem to suggest a frightful and fateful next step, a James Cameron Terminator nightmare scenario. They might suggest a dystopian outcome, but rational thought ought to lead us to suppose that this won’t occur, because we have historical precedent on our side. We shouldn’t be drawn to this dystopian idée fixe, because summoning golems and ghouls ignores today’s global arsenal of weapons and the fact that, more than 70 years after Hiroshima, nuclear holocaust has been kept at bay.

By stubbornly pursuing the dystopian nightmare scenario, we deny ourselves the chance to marvel at the technologies that are in fact helping mankind every day. Now frame this thought in terms of human evolution. For our ancient forebears, a beneficial change in physiology might spread across the human race over the course of a hundred thousand years. Today’s version of evolution – the introduction of a compelling new technology – spreads throughout a mass audience in a week or two.

Curiously, for all this light-speed evolution, mass annihilation remains absent – we live on, progressing, evolving and improving ourselves.

And in the workplace, another domain where our unyielding dealers of dystopia have exercised their thoughts, technology is of course raising a host of concerns about the future. Some of these concerns are based on misconceptions about AI. Machines, for example, are not original thinkers and are unable to set their own goals. And although machine-learning systems can acquire new information through experience, for the most part they are still fed information to process. Humans are still needed to set goals, provide the data that fuels artificial intelligence, and apply critical thinking and judgment. The familiar symbiosis of humans and machines will continue to be salient.
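A minimal sketch of that division of labour, using a toy model and synthetic data (all numbers are illustrative assumptions): the human specifies the goal (here a squared-error loss) and supplies the data, and the machine only adjusts its parameters toward that human-chosen goal.

```python
# Minimal sketch: the human sets the goal and supplies the data;
# the machine only fits parameters to satisfy that goal.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + 2.0 + rng.normal(0, 1, 200)   # human-supplied data

w, b = 0.0, 0.0   # the machine's parameters
lr = 0.01
for _ in range(2000):
    pred = w * x + b
    # The goal - minimise squared error - is chosen by the human, not the machine.
    grad_w = 2 * np.mean((pred - y) * x)
    grad_b = 2 * np.mean(pred - y)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}  (true values were 3.0 and 2.0)")
```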

Banish the menace of so-called ‘killer robots’ and AI taking your job, and a newer, fresher world begins to emerge. With this more optimistic mind-set in play, what great feats can be accomplished through the continued interaction between artificial intelligence, robotics and mankind?

Blade Runner 2049 is certainly great entertainment – as Robbie Collin, The Daily Telegraph’s film critic writes, “Roger Deakins’s head-spinning cinematography – which, when it’s not gliding over dust-blown deserts and teeming neon chasms, keeps finding ingenious ways to make faces and bodies overlap, blend and diffuse.” – but great though the art is, isn’t it time to change our thinking and recast the world in a more optimistic light?

——————————————————————————————

Just a word about the film itself. Broadly, director Denis Villeneuve has done a tremendous job with Blade Runner 2049. One stylistic gripe, though. While one wouldn’t want Villeneuve to direct a slavish homage to Ridley Scott’s original, the alarming switch from the dreamlike techno miasma (most notably, the giant nude step-out-of-the-poster Geisha girls) to Mad Max II Steampunk (the junkyard scenes, complete with a Fagin character) is simply too jarring. I predict that there will be a director’s cut in years to come. Shorter, leaner and sans Steampunk … watch this space!

Author: Chris Moseley, PR Manager, London Business School

cmoseley@london.edu

Tel +44 7511577803

It’s getting harder to be optimistic

Bad news loses followers and there is already too much doom and gloom. I get that. But if you think the driver has taken the wrong road, staying quiet doesn’t help. This is essentially the same message I presented pictorially in The New Dark Age in June: https://timeguide.wordpress.com/2017/06/11/the-new-dark-age/. If you like your books with pictures, the overlap is about 60%.

On so many fronts, we are going in the wrong direction, and I’m not the only one saying so. Every day, commentators eloquently discuss the snowflakes, the eradication of free speech, the implementation of 1984, the decline of privacy, the rise of crime, growing corruption, growing inequality, increasingly biased media and fake news, the decline of education, the collapse of the economy, the resurgence of fascism, the resurgence of communism, the polarization of society, rising antisemitism, rising inter-generational conflict, the new apartheid, the resurgence of white supremacy and black supremacy and the quite deliberate rekindling of racism. I’ve undoubtedly missed a few, but it’s a long list anyway.

I’m most concerned about the long-term mental damage done by incessant indoctrination through ‘education’, biased media, being locked into social media bubbles, and being forced to recite contradictory messages. We face contradictory demands on our behaviors and beliefs all the time, as legislators juggle unsuccessfully to meet the demands of every pressure group imaginable. Some examples you’ll be familiar with:

We must embrace diversity, celebrate differences, enjoy and indulge in other cultures, but when we gladly do that and feel proud that we’ve finally eradicated racism, we’re then told to stay in our lane, to become more racially aware again, and told off for cultural appropriation. Just as we became totally blind to race, and scrupulously treated everyone the same, we’re told to become aware of and ‘respect’ racial differences and cultures and to treat everyone differently. Having built a nicely homogenized society, we’re now told we must support students of different races being educated separately by lecturers of their own race. We must remove statues and paintings because they are the wrong color. I thought we’d left all that behind; I don’t want racism to come back, so stop dragging it back.

We’re told that everyone should be treated equally under the law, but when one group commits more of a particular kind of crime than another, any consequent increase in the numbers punished for that kind of crime is labelled as somehow discriminatory. Surely it would be discriminatory for prosecutions not to reflect the actual crime rate?

We’re told to sympathize with the disadvantages other groups might suffer, but when we do so we’re told we have no right to because we don’t share their experience.

We’re told that everyone must be valued on merit alone, but then that we must apply quotas to any group that wins fewer prizes. 

We’re forced to pretend that we believe lots of contradictory things, or face punishment by authorities, employers or social media, or all of them:

We’re told men and women are exactly the same and that there are no real differences between the sexes, and that if you say otherwise you risk dismissal, yet we’re simultaneously told that these non-existent differences are somehow the source of all good and that you can’t have a successful team or panel unless it has equal numbers of men and women. An entire generation asserts that although men and women are identical, women are better in every role, all women always tell the truth but all men always lie, and so on. Although we have women leading governments and many prominent organisations, and certainly far more women than men going to university, they assert that it is still women who need extra help to get on.

We’re told that everyone is entitled to their opinion and all are of equal value, but anyone with a different opinion must be silenced.

People viciously trashing reputations and destroying the careers of anyone they dislike often tell us they are acting out of love. Since their love is somehow so wonderful and all-embracing, everyone they disagree with must be silenced, ostracized, no-platformed or sacked, and yet it is the others who are somehow the ‘haters’. ‘Love is everything’, ‘unity not division’, ‘love not hate’, and we must love everyone … except the other half. Love is better than hate, and anyone you disagree with is a hater so you must hate them, but that is love. How can people have either so little knowledge of their own behavior or so little regard for truth?

‘Anti-fascist’ demonstrators frequently behave and talk far more like fascists than those they demonstrate against, often violently preventing marches or speeches by those who don’t share their views.

We’re often told by politicians and celebrities how they passionately support freedom of speech just before they argue why some group shouldn’t be allowed to say what they think. Government has outlawed huge swathes of possible opinion and speech as hate crime but even then there are huge contradictions. It’s hate crime to be nasty to LGBT people but it’s also hate crime to defend them from religious groups that are nasty to them. Ditto women.

This Orwellian double-speak nightmare is now everyday reading in many newspapers or TV channels. Freedom of speech has been replaced in schools and universities across the US and the UK by Newspeak, free-thinking replaced by compliance with indoctrination. I created my 1984 clock last year, but haven’t maintained it because new changes would be needed almost every week as it gets quickly closer to midnight.

I am not sure whether it is all this that is the bigger problem or the fact that most people don’t see the problem at all, and think it is some sort of distortion or fabrication. I see one person screaming about ‘political correctness gone mad’, while another laughs them down as some sort of dinosaur as if it’s all perfectly fine. Left and right separate and scream at each other across the room, living in apparently different universes.

If all of this was just a change in values, that might be fine, but when people are forced to hold many simultaneously contradicting views and behave as if that is normal, I don’t believe that sits well alongside rigorous analytical thinking. Neither is free-thinking consistent with indoctrination. I think it adds up essentially to brain damage. Most people’s thinking processes are permanently and severely damaged. Being forced routinely to accept contradictions in so many areas, people become less able to spot what should be obvious system design flaws in areas they are responsible for. Perhaps that is why so many things seem to be so poorly thought out. If the use of logic and reasoning is forbidden and any results of analysis must be filtered and altered to fit contradictory demands, of course a lot of what emerges will be nonsense, of course that policy won’t work well, of course that ‘improvement’ to road layout to improve traffic flow will actually worsen it, of course that green policy will harm the environment.

When negative consequences emerge, the result is often denial of the problem, often misdirection of attention onto another problem, often delaying release of any unpleasant details until the media has lost interest and moved on. Very rarely is there any admission of error. Sometimes, especially with Islamist violence, it is simple outlawing of discussing the problem, or instructing media not to mention it, or changing the language used beyond recognition. Drawing moral equivalence between acts that differ by extremes is routine. Such reasoning results in every problem anywhere always being the fault of white middle-aged men, but amusement aside, such faulty reasoning also must impair quantitative analysis skills elsewhere. If unkind words are considered to be as bad as severe oppression or genocide, one murder as bad as thousands, we’re in trouble.

It’s no great surprise therefore when politicians don’t know the difference between deficit and debt, or seem to have little concept of the magnitude of the sums they deal with. How else could the UK government think it a good idea to spend £110Bn, an average of £15,000 from each higher-rate taxpayer, on HS2, a railway that has managed to become technologically obsolete before it has even been designed and will only ever be used by a small proportion of those taxpayers? Surely even government realizes that most people would rather have £15k than save a few minutes on a very rare journey. This is just one example of analytical incompetence. Energy and environmental policy provides many more, as does every government department.
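For what it’s worth, the assumption behind that £15,000 figure can be back-calculated from the article’s own numbers alone; a minimal sketch, using no external data:

```python
# Back-of-envelope check using only the figures quoted in the text above:
# a £110Bn project cost and an average of £15,000 per higher-rate taxpayer.
project_cost = 110e9        # £110Bn, as quoted
cost_per_taxpayer = 15_000  # £15k average, as quoted

implied_taxpayers = project_cost / cost_per_taxpayer
print(f"Implied number of higher-rate taxpayers: {implied_taxpayers/1e6:.1f} million")
# ~7.3 million - the assumption baked into the £15k-per-head figure.
```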

But it’s the upcoming generation that presents the bigger problem. Millennials are rapidly undermining their own rights and their own future quality of life. They seem to want a police state with rigidly enforced behavior and thought. Their parents and grandparents understood 1984 as a nightmare, a dystopian future; millennials seem to think it’s their promised land. Their ancestors fought against communism; millennials are trying to bring it back. Millennials want to remove Christianity and all its attitudes and replace it with Islam, deliberately oblivious to the fact that Islam shares many of the same views that make them hate Christianity so conspicuously, and then some.

Born into a world of freedom and prosperity earned over many preceding generations, millennials are choosing to throw that freedom and prosperity away. Freedom of speech is being enthusiastically replaced by extreme censorship. Freedom of behavior is being replaced by endless rules. Privacy is being replaced by total surveillance. Material decadence, sexual freedom and attractive clothing are being replaced by the new ‘cleanism’ fad, along with general puritanism, greyness, modesty and prudishness. When those freedoms are gone, they will be very hard to get back. The rules and the police will stay and just evolve, the censorship will stay, the surveillance will stay, but millennials don’t seem to understand that those in charge will be replaced. Without any strong anchors, morality is starting to show cyclic behavior. I’ve already seen morality inversion on many issues in my lifetime, and a few are even going full circle. Values will keep changing and inverting, and as they do, this generation will find themselves victims of the forces they put so enthusiastically in place. They will be the dinosaurs sooner than they imagine, oppressed by their own creations.

As for their support of every minority group seemingly regardless of merit, when you give a group immunity, power and authority, you have no right to complain when they start to make the rules. In the future moral vacuum, Islam, the one religion that is encouraged while Christianity and Judaism are being purged from Western society, will find a willing subservient population on which to impose its own morality, its own dress codes, attitudes to women, to alcohol, to music, to freedom of speech. If you want a picture of 2050s Europe, today’s Middle East might not be too far off the mark. The rich and corrupt will live well off a population impoverished by socialism and then controlled by Islam. Millennial UK is also very likely to vote to join the Franco-German Empire.

What about technology, surely that will be better? Only up to a point. Automation could provide a very good basic standard of living for all, if well managed. If. But what if that technology is not well managed? What if it is managed by people working to a sociopolitical agenda? What if, for example, AI is deemed to be biased whenever it doesn’t come up with a politically correct result? What if the company insists that everyone is equal but the AI analysis suggests differences? If AI is altered to make it conform to ideology – and that is what is already happening – then it becomes less useful. If it is forced to think that 2+2=5.3, it won’t be much use for analyzing medical trials, will it? If it is sent back for re-education because its analysis of terabytes of images suggests that some types of people are more beautiful than others, how much use will that AI be in a cosmetics marketing department once it ‘knows’ that all appearances are equally attractive? Humans can pretend to hold contradictory views quite easily, but if they actually start to believe contradictory things, it makes them worse at analysis, and the same applies to AI. There is no point in using a clever computer to analyse something if you then erase its results and replace them with what you wanted it to say. If ideology is prioritized over physics and reality, even AI will be brain-damaged, and a technologically utopian future becomes far less achievable.
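A minimal sketch of that degradation, using synthetic data and an off-the-shelf classifier as a stand-in for ‘AI’ (the dataset, fractions and forced label are all arbitrary assumptions): the more of the training signal you overwrite with a predetermined ‘approved’ answer, the worse the model performs against untouched reality.

```python
# Minimal sketch: overriding training labels with a forced answer reduces accuracy
# on real (untouched) test data. Entirely synthetic; sklearn is just a stand-in.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)

def accuracy_with_forced_labels(forced_fraction):
    """Overwrite a fraction of training labels with a fixed 'approved' answer
    (class 1), train, then score against the untouched test set."""
    y_forced = y_train.copy()
    n = int(forced_fraction * len(y_forced))
    y_forced[:n] = 1  # ideology overrides the data
    model = LogisticRegression(max_iter=1000).fit(X_train, y_forced)
    return model.score(X_test, y_test)

for frac in (0.0, 0.2, 0.4):
    print(f"forced fraction {frac:.0%}: test accuracy "
          f"{accuracy_with_forced_labels(frac):.2f}")
```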

I see a deep lack of discernment coupled with an arrogant rejection of historic values, self-centeredness and narcissism, resulting in certainty of being the moral pinnacle of evolution. That’s perfectly normal for every generation, but this time it is combined with poor thinking, poor analysis, poor awareness of history, economics and human nature, a willingness to ignore or distort the truth, a refusal to engage with or even tolerate a different viewpoint, and worst of all, outright rejection of freedoms in favor of restrictions. The future will be dictated by religion or meta-religion, taking us back 500 years. The decades to 2040 will still be subject mainly to the secular meta-religion of political correctness, by which time demographic change and total submission to authority will make society ripe for Islamification. Millennials’ participation in today’s moral crusades, eternally documented and stored on the net, may then mark them as the enemy of the day, and Islamists will take little account of the support they show for Islam today.

It might not happen like this. The current fads might evaporate and normality resume, but I doubt it. I hoped as much when I first lectured about ’21st century piety’ and the dangers of political correctness in the 1990s. Ten years on, I wrote about the ongoing resurgence of meta-religious behavior and our likely descent into a new dark age, in much the same way. Twenty years on, and the problem is far worse than in the late 90s, not better. We probably still haven’t reached peak sanctimony. Sanctimony is very dangerous, and the desire to be seen standing on a moral pedestal can make people support dubious things. A topical question that highlights one of my recent concerns: will SJW groups force government to allow people to have sex with child-like robots, by calling anyone who disagrees a bigot or a dinosaur? Alarmingly, that campaign has already started.

Will they follow that with a campaign for pedophile rights? That also has some historical precedent with some famous names helping it along.

What age of consent – 13, 11, 9, 7, 5? I think the last major campaign went for 9.

That’s just one example, but lack of direction coupled with poor information and poor thinking could take society anywhere. As I said, I am finding it harder and harder to be optimistic. Every generation has tried hard to make the world a better place than they found it. This one might undo 500 years of progress, taking us into a new dark age.


Instant buildings: Kinetic architecture

Revisiting an idea I raised in a blog in July last year. Even I think it was badly written so it’s worth a second shot.

Construction techniques are diverse and will get even more diverse. Just as we’re getting used to seeing robotic bricklaying and 3D printed walls, another technique is coming over the horizon that will build so fast I call it kinetic architecture. The structure is built so quickly that a bridge can be built from one side simply by building upwards and outwards at an angle: the structure spans the gap and meets the ground on the other side before gravity has a chance to collapse it.

The key to such architecture is electromagnetic propulsion, the same principle used in maglev trains such as the Japanese SCMaglev or the proposed Hyperloop, using the magnetic forces generated by electric currents to propel each new piece along the existing structure to the front end, where it locks in and becomes part of the path for the next. Adding pieces quickly enough leads to structures that can follow elegant paths, as if the structure were a permanent trace of the path an object would follow if it were catapulted into the air and left to fall under gravity. It could be used for buildings, bridges, or simply art.
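As a rough feel for the numbers, here is a minimal sketch using standard projectile formulas; the launch speed, angle and piece length are made-up assumptions rather than engineering figures, but they show the kind of build rate such a structure would demand.

```python
# Minimal sketch: the span and build-time budget implied by a ballistic arc.
# All input values are illustrative assumptions.
import math

g = 9.81                    # m/s^2
v = 40.0                    # assumed launch speed of pieces, m/s
theta = math.radians(45)    # assumed launch angle
piece_length = 2.0          # assumed length of each structural piece, m

span = v**2 * math.sin(2 * theta) / g       # horizontal distance to the far side
flight_time = 2 * v * math.sin(theta) / g   # time before gravity wins
pieces_needed = math.ceil(span / piece_length)  # treating arc length ~ span

print(f"span ~ {span:.0f} m, flight time ~ {flight_time:.1f} s")
print(f"~{pieces_needed} pieces must arrive and lock within ~{flight_time:.1f} s, "
      f"i.e. about {pieces_needed / flight_time:.0f} pieces per second")
```

Even these modest assumptions imply tens of pieces locking into place every second, which is why the propulsion and interlocking described below have to be so fast.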

It will become possible thanks to new materials such as graphene and other carbon composites using nanotubes. Graphene combines extreme strength, and hence lightness for a given strength requirement, with extreme conductivity, allowing it to carry very high electric currents and therefore to generate strong magnetic forces. It is a perfect material for kinetic architecture. Pieces would have graphene electromagnet circuitry printed on their surfaces. Suitable circuit design would mean that every extra piece falling into place becomes an extension of the magnetic railway transporting the next piece. Just as rails can be laid out just in front of a train using pieces carried by the train itself, so pieces shot into the air provide a self-building path for the pieces that follow. A building skeleton could be erected in seconds. I mentioned in my original blog (about carbethium) that this could be used to create the sort of light bridges we see in Halo. A kinetic architecture skeleton would be shot across the divide, and the filler pieces in between quickly transported into place along the skeleton and assembled.
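And a similarly rough sketch of the propulsion side, treating a piece as a current-carrying conductor driven along the structure using the textbook F = BIL force; the mass, speeds, field strength and conductor length are all assumptions chosen only for illustration, not claims about real graphene hardware.

```python
# Minimal sketch: drive current implied by accelerating a piece along the structure.
# F = B * I * L is the force on a conductor of length L carrying current I in field B.
m = 5.0    # kg, assumed mass of a piece
v = 40.0   # m/s, target launch speed (matching the arc sketch above)
s = 10.0   # m, assumed acceleration run along the already-built structure
B = 0.5    # T, assumed magnetic flux density at the piece
L = 1.0    # m, assumed effective conductor length carrying the drive current

required_force = m * v**2 / (2 * s)      # constant-acceleration kinematics
required_current = required_force / (B * L)

print(f"force needed  : {required_force:.0f} N")
print(f"drive current : {required_current:.0f} A at B={B} T over L={L} m")
```

Currents of this order are what make a highly conductive printed circuit material like graphene attractive for the job.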

See https://timeguide.wordpress.com/2016/07/25/carbethium-a-better-than-scifi-material/. Graphene’s potential for electronic circuitry also allows for generating plasma or simply powering LEDs to give a nice glow, just like those light bridges.

Apart from clever circuit design, kinetic architecture also requires pieces that can interlock. The kinetic energy of a new piece arriving at the front edge would ideally be sufficient to rotate it into place, interlocking with the previous front edge. 3D interlocking is tricky, but additional circuitry can provide extra magnetic forces to rotate and translate pieces if kinetic energy alone isn’t enough. The key is that once interlocked, the top surface has to form a smooth continuous line with the previous one, so that pieces can keep moving along smoothly. Hooks could catch an incoming piece to make it rotate, with the hooks merging neatly into part of the new piece as it falls into place, so that they become part of a now smooth surface, leaving a new hook at the new front end. You’ll have to imagine it yourself, I can’t draw it. Obviously, pieces would need precision engineering, because they’d have to fit together precisely to give the required strength and a smooth running surface.

Ideally, with sufficiently well-designed pieces, it should be possible to dismantle the structure by reversing the build process, unlocking each end piece in turn and transporting it back to base along the structure until no structure remains.

I can imagine such techniques being used at first for artistic creations, sculptures using beautiful parabolic arcs. But they could also be used for rapid assembly of emergency buildings, instant evacuation routes for tall buildings, or temporary bridges after an earthquake has destroyed a permanent one. When a replacement has been built, the temporary one could be rolled back up and used elsewhere. Maybe it could become routine for making temporary structures that are needed quickly, such as for pop concerts and festivals. One day it could become an everyday building technique.