Category Archives: Futurology

Medic or futurist – A personal history

This article is autobiographical drivel and has nothing to do with the future. Read on only if you are bored enough.

I sometimes wanted to be a doctor when I was young, but when I was 17 I looked about 12, and I realised that I would probably look about 16 by the time I graduated. That, believe it or not, is one of the two main reasons I chose to study Physics and Maths at university rather than medicine. (I was proved right – I was last asked my age when getting on a bus at 22, the child discount ending when you hit 16, and I was last turned away from a nightclub for being under 18 when I was 25.) The second main reason was that although I was reasonably bright, my memory was rubbish, and while Physics and Maths reward intellect, medicine rewards memory.

I do like to read medical articles occasionally, even if the microbiology and chemistry side often leaves me bored. I’ve also invented quite a few things in the medical space, so I do find it fun sometimes too.

A few days ago I was very pleased with myself after reading an article on the wondrous properties of Marmite, which is suspected to increase GABA levels in the brain. Since the article mentioned poor memory, anxiety and overactive neurons, some quick Googling then linked that to both epilepsy and childhood febrile seizures.

Suddenly a lot of my family history fell neatly into place. I had such a seizure, followed by a coma, apparently after my parents cruelly abandoned me screaming at a Scottish petrol station because they counted their kids wrongly. The last thing I recall is their car disappearing into the distance. They did eventually come back for me, but the damage was done. According to Google, or rather one of the articles it showed me, these seizures damage the hippocampus, causing lasting problems with memory, and I’ve always had problems memorising stuff. So my first major conclusion from my Googling is that my poor memory was likely caused by my parents abandoning me at the petrol station, and that then caused me to choose a Physics and Maths degree, end up as a systems engineer and then a futurologist. So, I am a futurist and not a doctor because I was abandoned as a child. Hmmm!

Low GABA levels that make kids susceptible to febrile seizures also cause hyperactive neurons that don’t stop firing properly, causing anxiety, which I and many others in my clan suffer from. I suffer a lot of neural noise, making it hard to play musical instruments because of unwanted signals, hard to settle and relax, hard to ever feel calm, very often feeling unsettled and anxious for no reason. It also links to epilepsy and to transient ischemic attacks and strokes, more family history and again to myself – I had a suspected TIA 3 years ago. On the upside, I do wonder whether that hyperactive neural firing isn’t one of the main reasons why my brain often works well at making cross-links between concepts and at imagination-related tasks generally. Or that could be one of the other effects of low GABA, the inefficient neural pruning in the teen years that normally channels the brain into narrower but more stable thinking processes. That would even explain why I am still waiting to grow up, at 56!

As a result of that article, I have eaten a dose of Marmite religiously every single day since I managed to get some, for two days now! It is probably too early to tell if there are any major benefits, though I can already confirm that it doesn’t taste as nice if you eat a teaspoonful straight off the teaspoon rather than on toast.

Google isn’t perfect by a long way, but its search engine makes up for a multitude of sins. My conclusions above might be rubbish, but it was fun coming up with them anyway.

Time moves on. I was just having my daily look at phys.org, a great website that links to many interesting recent articles across science, and it mentioned that celiac disease (coeliac disease in the UK) may be caused by a virus. I know a few people with that, but I don’t have it. However, a long time ago, in 1989, I did have cancer, a rare and aggressive T-cell lymphoma, and I am grateful to be one of the 65% who survive. Because it was rare, with just a few cases a year in the UK, not much was known about it at the time, but it was already suspected that it might be triggered by a severe trauma or a virus. So, having had my memory triggered by the phys.org article, I checked to see whether there had been much progress on that, and yes, it is now known to be caused by the HTLV-1 virus. (e.g. https://www.ncbi.nlm.nih.gov/books/NBK304341/)

So, I wondered, how did I get it, since Google says it is apparently almost unheard of in native Europeans? That connects to the other suspected cause, trauma. When I was a young man, I was badly injured in a motorbike accident, and my GP later suggested that might possibly have caused the cancer, but he was wrong. The connection wasn’t the trauma itself, but the virus: during my treatment for that trauma I received several pints of blood, the only mechanism by which I personally could have caught the virus. I could not have been infected via the other routes.

So now I know that I must have received contaminated blood and that is what later caused my cancer, though in fairness to the Belfast City Hospital, they could not have known about that at the time, so I won’t sue. (I’ll also generously overlook the fact that the Staff Nurse (let’s just call her Elizabeth) set up my traction so badly that it applied no tension to my leg at all, and it was only corrected weeks later, when I was sentient again, complained, and finally got someone to fix it, with the result that my left leg is permanently 4cm shorter than my right.)

Reading still further, since HTLV-1 was almost unheard of in native Europeans, the blood must have come from a donor of foreign origin. Belfast in 1983 had very few people from the regions most likely to carry the virus – sub-Saharan Africa, South America, the Caribbean and a few parts of Japan – so few, in fact, that it would very likely be possible to check the blood donor records from that period and infer exactly whose blood it was. It is possible they are still alive, still a blood donor, still infecting people with HTLV-1, with up to 1 in 25 of the recipients developing a T-cell lymphoma. On the other hand, since I had cancer, I have been banned from being a blood or bone marrow donor, which I now know makes perfect sense.

But hang on, I had my motorbike accident while travelling to work, as an engineer. If I had done a medicine degree, I wouldn’t have been on that road, I’d have been in medical school. So I wouldn’t have needed the blood, wouldn’t have been infected with the virus, and wouldn’t have later got cancer.

So, a fascinating week for me. Several personal and family medical mysteries that our GPs don’t have the time or inclination to look into have been solved by two random press articles and the Google searches they triggered.

Thanks to two ordinary press articles I now know that something as everyday and trivial as my mother not checking her toddler was in the car before they drove away caused me to be a futurist, via becoming an engineer and having a crash that left me permanently disfigured and later led to cancer. On the fun side, I can solve some everyday issues by eating Marmite, but doing so might adversely affect my thinking process and make me less creative. What a week!

Christmas in 2040

I am cheating with this post: I did a newspaper interview that writes up some of my ideas, so linking to it will save me rewriting it all. Here’s a link:

https://www.thesun.co.uk/living/2454633/dinner-cooked-by-robots-no-wrapping-paper-and-video-make-up-for-the-office-party-this-is-what-christmas-will-look-like-in-2040-according-to-futurologist-dr-ian-pearson/

I hope you all have a wonderful Christmas.

Get all of my current e-books free, today only

This offer is now over. Sorry if you missed it.

As an early Christmas present, I have made all of my books free just for today on Amazon. The links here are for amazon.co.uk, but the book reference is the same on other branches so just edit the .co.uk to .com or whatever.

You Tomorrow and Society Tomorrow were almost entirely compiled from some of my blog posts, tidied up and with a few gaps filled.

https://www.amazon.co.uk/You-Tomorrow-Ian-Pearson-ebook/dp/B00G8DLB24

https://www.amazon.co.uk/Society-Tomorrow-Growing-Century-Britain-ebook/dp/B01HJY7RHI

Total Sustainability takes a system level view of sustainability and contradicts a lot of environmentalist dogma.

https://www.amazon.co.uk/Total-Sustainability-Ian-Pearson-ebook/dp/B00FWMW194

Space Anchor is my only Sci-fi novel to date, and features the first ever furry space ship in sci-fi, a gender-fluid AI, and its heroes Carbon Girl and Carbon Man have an almost entirely carbon-based itinerary.

https://www.amazon.co.uk/Space-Anchor-Ian-Pearson-ebook/dp/B00E9X02IE

Enjoy reading. Next year I hope to finish my book on future fashion.

 

25 predictions for 2017

[Image: 2017-predictions]

On Independence Day, remember that the most important independence is independence of thought

Division is the most obvious feature of the West right now. Its causes are probably many, but one of the biggest must be the reinforcement of views that people experience through today’s media, especially social media. People tend to read news from sources that agree with them, and while immersed in a crowd of others sharing the same views, any biases they had quickly come to seem the norm. In the absence of face-to-face counterbalances, extreme views may be shared and normalized, and drift towards the extremes is enabled. Demonisation of those with opposing views often follows. This is one of the two main themes of my new book Society Tomorrow, the other being the trend towards 1984, which is somewhat related, since censorship follows from division.

It is healthy to make sure you are exposed to views across the field. When you regularly see the same news with very different spins, and notice which news doesn’t even appear in some channels, it makes you less vulnerable to bias. If you end up disagreeing with some people, that is fine; better to be right than popular. Other independent thinkers won’t dump you just because you disagree with them. Only clones will, and you should ask whether they matter that much.

Bias is an error source; it is not healthy. If you can’t filter out bias, you can’t make good models of the world, and you can’t make good predictions. Independent thought is healthy, even when it is critical or skeptical. It is right to challenge what you are told, not to rejoice that it agrees with what you already believed. Learning to filter bias from the channels you expose yourself to means your conclusions, your thoughts, and your insights are your own. Your mind is your own, not just another clone.

Theoretical freedom means nothing if your mind has been captured and enslaved.

Celebrate Independence Day by breaking free from your daily read, or making sure you start reading other sources too. Watch news channels that you find supremely irritating sometimes. Follow people you profoundly disagree with. Stay civil, but more importantly, stay independent. Liberate your consciousness, set your mind free.

 

Prejudice is an essential predictive tool

Prejudice has a bad name, but it is an essential tool that evolution has given us to help our survival. It is not a bad thing in itself, but it can cause errors of judgement and can be misused, so it needs to be treated with care. It’s worth thinking it through from first principles, so that you aren’t too prejudiced about prejudice.

I like a few people, dislike a few others, but don’t have any first hand opinion on almost everyone. With over 7 billion people, no-one can ever meet more than a tiny proportion. We see a few more on TV or other media and may form a narrow-channel opinion on some aspects of their character from what is shown in their appearances. Otherwise, any opinion we may have on anyone we have not actually met or spent any time with is prejudice – pre-judgment based on experiences we have had with people who share similarities.

Prejudice isn’t always a bad thing

Humans are good at using patterns and similarities as indicators, because it improves our chances of survival. If you see a flame, even though you have never encountered that particular flame before, you are prejudiced about how it might feel if you stick your hand in it. You don’t go all politically correct and assume that making such a pre-judgment is wrong and put your hand in it anyway, since it may well be a very nice flame that tickles and feels good. If you see a tiger running towards you, you probably won’t assume it just wants to cuddle you or get stroked. Prejudices keep us alive. Used correctly, they are a good thing.

Taking examples from human culture, if a salesman smiles at you, you may reasonably engage some filters rather than just treating the forthcoming conversation like any other. Similarly, if a politician promises you milk and honey, you may reasonably wonder who will pay for it, or what they are not telling you. Some salesmen and politicians don’t conform to the prejudice, but enough do to make it worthwhile engaging the filters.

Prejudices can be positive too. If you see some nice strawberries, you probably don’t worry too much that they have been poisoned. If someone smiles at you, you will probably feel warmer emotions towards them. We usually talk about prejudice when we are talking about race or nationality or religion, but all prejudice really amounts to is pre-judgement of a person or object or situation based on whatever clues we can pick up. If we didn’t prejudge things at all, we would waste a great deal of time and effort starting from scratch at every encounter.

Error sources

People interpret situations differently, and of course experience different situations, and therefore build up quite different prejudice databases. Some people notice things that others don’t. Then they allocate different weightings to all the different inputs they do notice. Then they file them differently. Some will connect experiences with others to build more complex mindsets and the quality of those connections will vary enormously. As an inevitable result of growing up, people make mental models of the world so that they can make useful predictions that enable them to take advantage of opportunities and avoid threats. The prejudices in those models are essentially equations, variables, weightings and coefficients. Some people will use poor equations that ignore some variables completely, use poor weightings for others and also assign poor quality coefficients to what they have left. (A bit like climate modelling really, it is common to give too high weightings to a few fashionable variables while totally ignoring others of equal importance.)

Virtues and dangers in sharing prejudices

People communicate and learn prejudices from each other too, good and bad. Your parents teach you about flames and tigers to save you having to suffer. Your family, friends, teachers, neighbors, celebrities, politicians and social media contacts teach you more. You absorb a varying proportion of what they tell you into your own mindset, and the filters you use are governed by your existing prejudices. Some inputs from others will lead you to edit some of your existing prejudices, for better or worse. So your prejudice set will be a complex mix of things learned from your own experiences and things learned from others, all processed and edited continually, with the processing and editing themselves influenced by existing and inherited prejudices.

A lot of encounters in modern life are mediated by the media, and there is a lot of selective prejudice involved in choosing which media to be exposed to. Media messages are very often biased in favour of some groups and against others, and it is hard to prevent them being assimilated into the total experience that feeds our prejudices. People may choose to watch news channels with a particular bias because they frame the news in terms they are more familiar with. Adverts and marketing also have huge influence, professionally designed to steer our prejudices in a particular direction. This can be very successful. Thanks to media messages, I still think Honda makes good cars, in spite of having bought one that has easily had more faults than all my previous cars combined. I have to engage my own rationality filters to stop myself considering them for my next car. Prejudice says they are great; personal experience says they are not.

So, modern life provides many sources of errors for our prejudice databases, and many people, companies, governments and pressure groups try hard to manipulate them in their favour, or against others.

Prejudice and wisdom

Accumulated prejudices are actually a large component of wisdom. Wisdom is using acquired knowledge alongside acquired experience to build a complex mental world model that reliably indicates how a hypothetical situation might play out. The quality of one’s mental world model hopefully improves with age and experience and acquired knowledge, though that is by no means guaranteed. People gain wisdom at different rates, and some seem to manage to avoid doing so completely.

So there is nothing wrong with prejudice per se, it is an essential survival shortcut to avoid the need to treat every experience and encounter with the same checks and precautions or to waste enormous extra time investigating every possible resource from scratch. A well-managed prejudice set and the mental world model built using it are foundation stones of wisdom.

Mental models

Mental models are extremely important to the quality of personal analysis, and if they are compromised by inaccurate prejudices, we will find it harder to understand the world properly. It is obviously important to protect prejudices from external influences that are not trustworthy. We need the friendly social sharing that helps us towards a genuinely better understanding of the world around us, but we need to identify forces with interests other than our well-being, so that we can prevent them from corrupting our mindsets and our mental models, otherwise our predictive ability will be damaged. Politicians and pressure groups would be top of the list of dubious influences. We also tend to put different weightings on advice from various friends, family, colleagues or celebrities, sensibly so. Some people are more easily influenced by others. Independent thought is made much more difficult when peer pressure is added. When faced with peer pressure, many people simply adopt what they believe to be the ‘correct’ prejudice set for ‘their’ ‘tribe’. All those inverted commas indicate that each of these is a matter of prejudice too.

Bad prejudices

Where we do find problems from prejudice is in areas like race and religion, mainly because our tribal identity includes identification with a particular race or religion (or indeed atheism). Strong tribal forces in human nature push people to favour those of their own tribe over others, and we see that at every level of tribe, whether it is a work group or an entire nation. So we are more inclined to believe good things about our own tribe than others. The number of experiences we have of other tribes is far higher than it was centuries ago. We meet far more people face to face now, and we see very many more via the media. The media exposure we get tends to be subject to bias, but since the media we choose to consume is self-selected, that tends to reinforce existing prejudices. Furthermore, negative representations are more likely to appear on the news, because people behaving normally is not news, whereas people doing bad things is. Through all those combined exposures, we may build extensive personal experience of many members of a group and it is easy to apply that experience to new encounters of others from that group who may not share the same faults or virtues. One way to reduce the problem is to fragment groups into subgroups so that you don’t apply prejudices from one subgroup incorrectly to another.

Inherited experiences, such as those of columnists, experts brought into news interviews or even the loaded questions of news presenters on particular channels are more dangerous since many of the sources are strongly biased or have an interest in changing our views. As a result of massively increased exposures to potentially biased representations of other groups in modern life, it is harder than ever to maintain an objective viewpoint and maintain a realistic prejudice set. It is very easy to accumulate a set of prejudices essentially determined by others. That is very dangerous, especially bearing in mind the power of peer pressure, since peers are also likely to have such corrupted prejudice sets. We call that group-think, and it is not only the enemy of free thought but also the enemy of accurate prediction, and ultimately of wisdom. A mental model corrupted by group-think and inherited biases is of poor quality.

Debugging Prejudices

Essential maintenance for good mental models includes checking prejudices regularly against reality. Meeting people and doing things is good practice of course, but checking actual statistics is surprisingly effective too. Many of us hold ideas about traits and behaviors of certain groups that are well away from reality. Governments collect high quality statistics on an amazing range of things. Pressure groups also do, but are far more likely to put a particular spin on their figures, or even bury figures that don’t give the message they want you to hear. Media also put spins on statistics, so it is far better to use the original statistics yourself than to trust someone else’s potentially biased analysis. For us Brits, http://www.ons.gov.uk/ons/index.html is a good source of trustworthy official statistics, relatively free of government or pressure group spin, though finding the data can sometimes involve tricky navigation.

It is also a good idea to make sure you consume media and especially news from a variety of sources, some explicitly left or right wing or even from pressure groups. This ensures you see many sides of the same story, ensures you stay aware of stories that may not even appear via some channels, and helps train you to spot biases and filter them out when they are there. I read several newspapers every day. So should you. When I have time, I try to go to the original source of any data being discussed so I can get the facts without the spin. Doing this not only helps protect your own mental model, it allows you to predict how other people may see the same stories and how they might feel and react, so it also helps extend your model to include behaviour of other groups of people.

If you regularly debug your prejudices, then they will be far more useful and less of an error source. It will sometimes be obvious that other people hold different ones but as long as you know yours are based on reality, then you should not be influenced to change yours. If you are trying to work out how others might behave, then understanding their prejudices and the reasons they hold them is very useful. It makes up another section of the world model.

Looking at it from a modelling direction, prejudices are the equations, factors and coefficients in an agent-based model that you run inside your head. Without them, you can’t make a useful model, since you aren’t capable of knowing and modelling over 7 billion individuals. If the equations are wrong, or the factors or coefficients, then the answer will be wrong. Crap in, crap out. If your prejudices are reasonably accurate representations of the behaviours and characteristics of groups as a whole, then you can make good models of the world around you, and you can make sound predictions. And over time, as they get better, you might even become wise.
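To make that modelling analogy concrete, here is a deliberately toy sketch; the group names, ‘true’ scores and coefficients are all invented purely for illustration. Group-level coefficients stand in for the billions of individuals you cannot model one by one, and corrupted coefficients give far worse predictions than well-debugged ones.

```python
# Toy agent-based sketch: group-level "prejudices" act as coefficients
# standing in for individuals we cannot model one by one.
# All numbers here are invented purely for illustration.

import random

random.seed(1)

# The (unknowable) real world: a helpfulness score for each individual,
# drawn around a true group mean.
TRUE_MEAN = {"groupA": 0.70, "groupB": 0.65}
population = [(group, random.gauss(TRUE_MEAN[group], 0.1))
              for group in TRUE_MEAN for _ in range(10_000)]

def predict(coefficients, group):
    """Predict an encounter using only a group-level coefficient (a 'prejudice')."""
    return coefficients[group]

def model_error(coefficients):
    """Mean absolute prediction error over the whole population."""
    errors = [abs(predict(coefficients, group) - score) for group, score in population]
    return sum(errors) / len(errors)

well_debugged = {"groupA": 0.70, "groupB": 0.65}   # coefficients close to reality
corrupted     = {"groupA": 0.90, "groupB": 0.20}   # inherited, biased coefficients

print("error with debugged prejudices:", round(model_error(well_debugged), 3))
print("error with corrupted prejudices:", round(model_error(corrupted), 3))
# Crap in, crap out: the corrupted coefficients predict far worse.
```

The point is only that the quality of the coefficients dominates the quality of the predictions, however clever the rest of the model.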

Stimulative technology

You are probably sick of reading about disruptive technology; well, I am anyway. When a technology changes many areas of life and business dramatically, it is often labelled disruptive technology. Disruption was the business strategy buzzword of the last decade. Great news though: the primarily disruptive phase of IT is rapidly being replaced by a more stimulative phase, where it still changes things, but in a more creative way. Disruption hasn’t stopped; it’s just no longer going to be the headline effect. Stimulation will replace it. It isn’t just IT that is changing either, but materials and biotech too.

Stimulative technology creates new areas of business, new industries, new areas of lifestyle. It isn’t new per se. The invention of the wheel is an excellent example. It destroyed a cave industry based on log rolling, and doubtless a few cavemen had to retrain from their carrying or log-rolling careers.

I won’t waffle on for ages here, I don’t need to. The internet of things, digital jewelry, active skin, AI, neural chips, storage and processing that is physically tiny but with huge capacity, dirt cheap displays, lighting, local 3D mapping and location, 3D printing, far-reach inductive powering, virtual and augmented reality, smart drugs and delivery systems, drones, new super-materials such as graphene and molybdenene, spray-on solar … The list carries on and on. These are all developing very, very quickly now, and are all capable of stimulating entire new industries and revolutionizing lifestyle and the way we do business. They will certainly disrupt, but they will stimulate even more. Some jobs will be wiped out, but more will be created. Pretty much everything will be affected hugely, but mostly beneficially and creatively. The economy will grow faster, there will be many beneficial effects across the board, including the arts and social development as well as manufacturing industry, other commerce and politics. Overall, we will live better lives as a result.

So, you read it here first. Stimulative technology is the next disruptive technology.

 

Future human evolution

I’ve done patches of work on this topic frequently over the last 20 years. It usually features in my books at some point too, but it’s always good to look afresh at anything. Sometimes you see something you didn’t see last time.

Some of the potential future is pretty obvious. I use the word potential, because there are usually choices to be made, regulations that may or may not get in the way, or many other reasons we could divert from the main road or even get blocked completely.

We’ve been learning genetics now for a long time, with a few key breakthroughs. It is certain that our understanding will increase, less certain how far people will be permitted to exploit the potential here in any given time frame. But let’s take a good example to learn a key message first. In IVF, we can filter out embryos that have the ‘wrong’ genes, and use their sibling embryos instead. Few people have a problem with that. At the same time, pregnant women may choose an abortion if they don’t want a child when they discover it is the wrong gender, but in the UK at least, that is illegal. The moral and ethical values of our society are on a random walk though, changing direction frequently. The social assignment of right and wrong can reverse completely in just 30 years. In this example, we saw a complete reversal of attitudes to abortion itself within 30 years, so who is to say we won’t see reversal on the attitude to abortion due to gender? It is unwise to expect that future generations will have the same value sets. In fact, it is highly unlikely that they will.

That lesson likely applies to many technology developments and quite a lot of social ones – such as euthanasia and assisted suicide, both already well into their attitude reversal. At some point, even if something is distasteful to current attitudes, it is pretty likely to be legalized eventually, and hard to ban once the door is opened. There will always be another special case that opens the door a little further. So we should assume that we may eventually use genetics to its full capability, even if it is temporarily blocked for a few decades along the way. The same goes for other biotech, nanotech, IT, AI and any other transhuman enhancements that might come down the road.

So, where can we go in the future? What sorts of splits can we expect in the future human evolution path? It certainly won’t remain as just plain old homo sapiens.

I drew this evolution path a long time ago in the mid 1990s:

[Image: human evolution timeline 1]

It was clear even then that we could connect external IT to the nervous system, eventually the brain, and this would lead to IT-enhanced senses, memory, processing and higher intelligence, hence homo cyberneticus. (No point in having had to suffer Latin at school if you aren’t allowed to get your own back on it later.) Meanwhile, genetic enhancement and optimization of selected features would lead to homo optimus. Converging these two – why should you have to choose, why not have a perfect body and an enhanced mind? – you get homo hybridus. Meanwhile, in the robots and AI world, machine intelligence is increasing, and eventually we get the first self-aware AI/robot (it makes little sense to separate the two, since networked AI can easily be connected to a machine such as a robot), and this has its own evolution path towards a rich diversity of different kinds of AI and robots, robotus multitudinus. Since both the AI world and the human world could be networked to the same network, it is then easy to see how they could converge, to give homo machinus. This future transhuman would have any of the abilities of humans and machines at its disposal, and eventually the ability to network minds into a shared consciousness. A lot of ordinary conventional humans would remain, declining upgrades even when safe ones became available; I called them homo sapiens ludditus. As they watch their neighbors getting all the best jobs, winning at all the sports, buying everything, and getting the hottest dates too, many would be tempted to accept the upgrades, and homo sapiens might gradually fizzle out.

My future evolution timeline stayed like that for several years. Then in the early 2000s I updated it to include later ideas:

[Image: human evolution timeline 2]

I realized that we could still add AI into computer games long after it becomes comparable with human intelligence, so games like EA’s The Sims might evolve to allow entire civilizations living within a computer game, each character aware of its own existence, each running just as real a life as you and I. It is perhaps unlikely that we would allow children any time soon to control fully sentient people within a computer game, acting as some sort of god to them, but who knows – future people may argue that they’re not really real people, so it’s OK. Anyway, you could employ them in the game to do real knowledge work and make money, like slaves. But since you’re nice, you might run an incentive program for them that lets them buy their freedom if they do well, letting them migrate into an android. They could even carry on living in their Sims home and still wander round in our world too.

Emigration from computer games into our world could be high, but the reverse is also possible. If the mind is connected well enough, and enhanced so far by external IT that almost all of it runs on the IT instead of in the brain, then when your body dies, your mind would carry on living. It could live in any world, real or fantasy, or move freely between them. (As I explained in my last blog, it would also be able to travel in time, subject to certain very expensive infrastructural requirements.) As well as migrants coming via the electronic immortality route, it is likely that some people who are unhappy in the real world would prefer to end it all and migrate their minds into a virtual world where they might be happy. As an alternative to suicide, I can imagine that would be a popular route. If they feel better later, they could even come back, using an android. So we’d have an interesting future with lots of variants of people, AI and computer game and fantasy characters migrating among various real and imaginary worlds.

But it doesn’t stop there. Meanwhile, back in the biotech labs, progress is continuing to harness bacteria to make components of electronic circuits (after which the bacteria are dissolved to leave the electronics). Bacteria can also have genes added to emit light or electrical signals. They could later be enhanced so that as well as being able to fabricate electronic components, they could power them too. We might add various other features too, but eventually, we’re likely to end up with bacteria that contain electronics and can connect to other bacteria nearby that contain other electronics to make sophisticated circuits. We could obviously harness self-assembly and self-organisation, which are also progressing nicely. The result is that we will get smart bacteria, collectively making sophisticated, intelligent, conscious entities of a wide variety, with lots of sensory capability distributed over a wide range. Bacteria Sapiens.

I often talk about smart yogurt using such an approach as a key future computing solution. If it were to stay in a yogurt pot, it would be easy to control. But it won’t. A collective bacterial intelligence such as this could gain a global presence, and could exist in land, sea and air, maybe even in space. Allowing lots of different biological properties could allow colonization of every niche. In fact, the first few generations of bacteria sapiens might be smart enough to design their own offspring. They could probably buy or gain access to equipment to fabricate them and release them to multiply. It might be impossible for humans to stop this once it gets to a certain point. Accidents happen, as do rogue regimes, terrorism and general mad-scientist type mischief.

And meanwhile, we’ll also be modifying nature. We’ll be genetically enhancing a wide range of organisms, bringing some back from extinction, creating new ones, adding new features, and in some cases changing even some of the basic mechanisms by which nature works. We might even create new kinds of DNA or develop substitutes with enhanced capability. We may change nature’s evolution hugely. With a mix of old, new and modified, nature evolves nicely into Gaia Sapiens.

We’re not finished with the evolution chart though. Here is the next one:

[Image: human evolution timeline 3]

Just one thing is added: Homo zombius. I eventually realized that the sci-fi idea of zombies being created by viruses could be entirely feasible. A few viruses, bacteria and other parasites can affect the brains of their victims and change their behaviour to harness them for their own life cycle.

See http://io9.com/12-real-parasites-that-control-the-lives-of-their-hosts-461313366 for fun.

Bacteria sapiens could be highly versatile. It could make virus variants if need be. It could evolve itself to be able to live in our bodies, maybe penetrate our brains. Bacteria sapiens could make tiny components that connect to brain cells and intercept signals within our brains, or put signals back in. It could read our thoughts, and then control our thoughts. It could essentially convert people into remote-controlled robots, or zombies as we usually call them. It could even control muscles directly to a point, so even if the zombie were decapitated, it could carry on for a short while. I used that as part of my storyline in Space Anchor. If future humans have widespread availability of cordless electricity, as they might, then it is far-fetched but possible that headless zombies could wander around for ages, using the bacterial sensors to navigate. Homo zombius would be mankind enslaved by bacteria. Hopefully just a few people, but it could be everyone if we lose the battle. Think how difficult a war against bacteria would be, especially if they can penetrate anyone’s brain and intercept thoughts. The Terminator films look a lot less scary when you compare the Terminator with the real potential of smart yogurt.

Bacteria sapiens might also need to be consulted when humans plan any transhuman upgrades. If they don’t consent, we might not be able to do other transhuman stuff. Transhumans might only be possible if transbacteria allow it.

Not done yet. I wrote a couple of weeks ago about fairies. I suggested fairies are entirely feasible future variants that would be ideally suited to space travel.

https://timeguide.wordpress.com/2014/06/06/fairies-will-dominate-space-travel/

They’d also have lots of environmental advantages as well as most other things from the transhuman library, so I think they’re inevitable, and we should add fairies to the future timeline. We need a revised timeline and they certainly deserve their own branch. But I haven’t drawn it yet, hence this blog as an excuse. Before I do and finish this, what else needs to go on it?

Well, time travel in cyberspace is feasible and attractive beyond 2075. It’s not the proper real world time travel that isn’t permitted by physics, but it could feel just like that to those involved, and it could go further than you might think. It certainly will have some effects in the real world, because some of the active members of the society beyond 2075 might be involved in it. It certainly changes the future evolution timeline if people can essentially migrate from one era to another (there are some very strong caveats applicable here that I tried to explain in the blog, so please don’t misquote me as a nutter – I haven’t forgotten basic physics and logic, I’m just suggesting a feasible implementation of cyberspace that would allow time travel within it. It is really a cyberspace bubble that intersects with the real world at the real time front so doesn’t cause any physics problems, but at that intersection, its users can interact fully with the real world and their cultural experiences of time travel are therefore significant to others outside it.)

What else? OK, well, there is a very significant community (many millions of people) that engages in all sorts of fantasy in shared online worlds, chat rooms and other forums. Fairies, elves, assorted spirits, assorted gods, dwarves, vampires, werewolves, assorted furry animals, assorted aliens, dolls, living statues, mannequins, remote controlled people, assorted inanimate but living objects, plants and of course assorted robot/android variants are just some of those that already exist in principle; I’m sure I’ve forgotten some, and anyway, many more are invented every year, so an exhaustive list would quickly become out of date. In most cases, people already role-play these with a great deal of conviction and imagination, not just in standalone games, but in communities, with rich cultures, back-stories and story-lines. So we know there is a strong demand; we are only waiting for the technology to catch up so they can be implemented, and it certainly will.

Biotech can do a lot, and nanotech and IT can add greatly to that. If you can design any kind of body with almost any kind of properties and constraints and abilities, and add any kind of IT and sensing and networking and sharing and external links for control and access and duplication, we will have an extremely rich diversity of future forms with an infinite variety of subcultures, cross-fertilization, migration and transformation. In fact, I can’t add just a few branches to my timeline. I need millions. So instead I will just lump all these extras into a huge collected category that allows almost anything, called Homo Whateverus.

So, here is the future of human (and associates) evolution for the next 150 years. A few possible cross-links are omitted for clarity.

[Image: full future evolution timeline]

I won’t be around to watch it all happen. But a lot of you will.

 

Time – The final frontier. Maybe

It is very risky naming the final frontier. A frontier is just the far edge of where we’ve got to.

Technology has a habit of opening new doors to new frontiers, so naming a final frontier is a fast way of losing face. When Star Trek named space as the final frontier, it was thought to be so. We’d go off into space and keep discovering new worlds and new civilizations, long after we’ve mapped the ocean floor. Space will keep us busy for a while. In thousands of years we may have gone beyond even our own galaxy, if we’ve developed faster-than-light travel somehow, but that just takes us to more space. It’s big, and maybe we’ll never get to explore all of it, but it is just a physical space with physical things in it. We can imagine more than just physical things. That means there is stuff to explore beyond space, so space isn’t the final frontier.

So… not space. Not black holes or other galaxies.

Certainly not the ocean floor, however fashionable that might be to claim. We’ll have mapped that in detail long before the rest of space. Not the centre of the Earth, for the same reason.

How about cyberspace? Cyberspace physically includes all the memory in all our computers, but also the imaginary spaces that are represented in it. The entire physical universe could be simulated as just a tiny bit of cyberspace, since it only needs to be rendered when someone looks at it. All the computer game environments and virtual shops are part of it too. The cyberspace tree doesn’t have to make a sound unless someone is there to hear it, but it could. The memory in computers is limited, but the cyberspace limits come from imagination of those building or exploring it. It is sort of infinite, but really its outer limits are just a function of our minds.

Games? Dreams? Human imagination? Love? All very new-agey and sickly sweet, but no. Just like cyberspace, these are all just different products of the human mind, so all of them can be replaced by ‘the human mind’ as a frontier. I’m still not convinced that is the final one though. Even if we extend that to the greatly AI-enhanced future human mind, it still won’t be the final frontier. When we AI-enhance ourselves, and connect to the smart AIs too, we get a sort of global consciousness, linking everyone’s minds together as far as each allows. That’s a bigger frontier, since the individual minds and AIs add up to more cooperative capability than they can achieve individually. The frontier is getting bigger and more interesting. You could explore other people directly, share and meld with them. Fun, but still not the final frontier.

Time adds another dimension. We can’t do physical time travel, and even if we can do so in physics labs with tiny particles for tiny time periods, that won’t necessarily translate into a practical time machine to travel in the physical world. We can time travel in cyberspace though, as I explained in

https://timeguide.wordpress.com/2012/10/25/the-future-of-time-travel-cheat/

and when our minds are fully networked and everything is recorded, you’ll be able to travel back in time and genuinely interact with people in the past, back to the point where the recording started. You would also be able to travel forwards in time as far as the recording stops and future laws allow (I didn’t fully realise that when I wrote my time travel blog, so I ought to update it soon). You’d be able to inhabit other people’s bodies, share their minds, share consciousness and feelings and emotions and thoughts. The frontier suddenly jumps out a lot once we start that recording, because you can go into the future as far as is continuously permitted. Going into that future allows you to get hold of all the future technologies and bring them back home, short-circuiting the future, as long as the time police don’t stop you. No, I’m not nuts – if you record everyone’s minds continuously, you can time travel into the future using cyberspace, and the effects extend beyond cyberspace into the real world you inhabit, so although it is certainly a cheat, it is effectively real time travel, backwards and forwards. It needs some security sorted out on warfare, banking and investments, procreation, gambling and so on, as well as a lot of other causality issues, but to quote from Back to the Future: ‘What the hell?’ [IMPORTANT EDIT: in my following blog, I revise this a bit and conclude that although time travel to the future in this system lets you do pretty much what you want outside the system, time travel to the past only lets you interact with people and other things supported within the system platform, not the physical universe outside it. This does limit the scope for mischief.]

So, time travel in fully networked fully AI-enhanced cosmically-connected cyberspace/dream-space/imagination/love/games would be a bigger and later frontier. It lets you travel far into the future and so it notionally includes any frontiers invented and included by then. Is it the final one though? Well, there could be some frontiers discovered after the time travel windows are closed. They’d be even finaller, so I won’t bet on it.

 

 

Errones, infectious biases that corrupt thinking

I know it isn’t always obvious in some of my blogs what they have to do with the future. This one is about error tendencies, but of course making an error now affects the future, so they are relevant, and in any case, there is even a future for error tendencies. A lot of the things I will talk about are getting worse, so there is a significant futures trend here too. Much of the future is determined by happenings filtered through human nature, so anything that affects human nature strongly should be an important consideration in futurology. Enough justification for my musings on human nature. On with the show.

Hormones are chemicals that tend to push the behavior of an organic process in a particular direction, including feelings and consequently analysis. A man flooded with testosterone may be more inclined to make a riskier decision. A lot of interpersonal interactions and valuations are influenced by hormones too, to varying degrees.

In much the same way, many other forces can influence our thinking or perception and hence analysis of external stimuli such as physical facts or statistics. A good scientist or artist may learn to be more objective and to interpret what they observe with less bias, but for almost everyone, some perceptive biases remain, and after perception, many analytical biases result from learned thinking behaviors. Some of those thinking behaviors may be healthy, such as being able to consciously discount emotions to make more clinical decisions when required, or to take full account of them at other times. Others however are less healthy and introduce errors.

Error-forcing agents

There are many well-known examples of such error-forcing agents. One is the notorious halo effect that surrounds attractive women, that may lead many people to believe they are better or nicer in many other ways than women who are less attractive. Similarly, tall men are perceived to be better managers and leaders.

Another is that celebrities from every area find their opinions are valued far outside the fields where they are actually expert. Why should an actor or pop singer be any more knowledgeable or wiser than anyone else not trained in that field? Yet they are frequently asked for their opinions and listened to, perhaps at the expense of others.

When it’s a singer or actor encouraging people to help protect a rain forest, it’s pretty harmless. When they’re trying to tell us what we should eat or believe, then it can become dangerous. When it is a politician making pronouncements about which scientists we should believe on climate change, or which medicines should be made available, it can cause prolonged harm. The reason I am writing this blog now is that we are seeing a lot more of that recently – for example, politicians in many countries suddenly pretending they can speak authoritatively on which results to believe from climate science and astrophysics even when most scientists couldn’t. A few of them have some scientific understanding, but the vast majority don’t and many actually show very little competence when it comes to clear thinking even in their own jurisdictions, let alone outside.

Errones

These groups are important because they are emitting what I will call errones: hormone-like thinking biases that lead us to make errors. Politicians get elected by being good at influencing people; celebs become popular by appealing to our tastes. By overvaluing pronouncements from these groups, we bias our thinking in their direction without good reason. The effect is similar to a hormone, in that we may not be consciously aware of it, but it influences our thinking all the same. So we may have held a reasonably well-thought-out opinion of something, and then a favored celebrity or politician makes a speech on it, and even though they have no particular expertise in the matter, our opinion changes in that direction. Our subsequent perceptions, interpretations, analyses and opinions on many other areas may then be affected by the bias caused by that errone. Worse still, in our interactions with others, the errone may spread to them too. Errones are infectious. Like Richard Dawkins’ memes, which are ideas that self-perpetuate and spread through a population, errones may self-reinforce and spread organically, but errones are not ideas like memes; they are biases in thinking, more like hormones, hence the name.

Some general thinking errors are extremely common and we are familiar with them, but that doesn’t stop us being affected sometimes if we don’t engage due care.

Consensus

Other errones are assembled over years of exposure to our culture. Some even have some basis in some situations, but become errones when we apply them elsewhere. Consensus is a useful concept when we apply it to things that are generally nice to eat, but it has no proper place in science and becomes an errone when cited there. As Einstein pointed out when confronted with a long list of scientists who disagreed with him, if he was wrong, even one would suffice. There was once a consensus that the Earth was flat, that there were four elements, that there was an ether, that everything was created by a god. In each case, successions of individuals challenged the consensus until eventually people were persuaded of the error.

Authority

Another well-known errone is attitude to authority. Most parents will be well familiar with the experience of their kid believing everything teacher tells them and refusing to believe them when they say the teacher is talking nonsense (in case you didn’t know, teachers are not always right about everything). In varying degrees, people believe their doctors, scientists, parents, politicians not by the quality of their actual output but by the prejudice springing from their authority. Even within a field, people with high authority can make mistakes. I was rather pleased a long time ago when I spotted a couple of mistakes in Stephen Hawking’s ‘A brief history of time’ even though he seemingly has an extra digit in his IQ. He later admitted those same errors and I was delighted. He had the best authority in the world on the subject, but still made a couple of errors. I am pleased I hadn’t just assumed he must have been right and accepted what he said.

Vested interest

Yet another errone with which you should be familiar is vested interest. People often have an ax to grind on a particular issue, and it is therefore appropriate to challenge what they are saying, but it is a big error to dismiss something as wrong simply because someone has an interest in a particular outcome. A greengrocer is still telling the truth when they say that vegetables are good for you. The correct answer to 7+6 is 13 regardless of who says so. You shouldn’t listen to someone telling you the answer is really 15 because ‘well, he would say it is 13, wouldn’t he…’

These common errors in thinking are well documented, but we still make new ones.

Word association errones

Some errones can be summed up in single words. For example, ‘natural’, ‘organic’, ‘synthetic’, ‘fair’, ‘progressive’, ‘right’ and ‘left’ are all words we hear every day that activate a range of prejudicial processes that color our processing of any subsequent inputs. Arsenic is natural, foxgloves are natural, and so is uranium. That doesn’t necessarily make them good things to eat. Not every idea from the right or left of politics is good or bad. Stupidity exists across the political spectrum, while even the extremes have occasional good ideas. But errones cause us to apply filters and judge bad ideas or things as good, or good ideas or things as bad, merely because of their origin. This errone is traditionally known as ‘tarring everything with the same brush’, just because things fall into the same broad category.

Deliberate errone creation

In my view, single word errones are the most dangerous, and we add to the list occasionally. The currently fashionable word ‘Self-proclaimed’ (yeah, OK, it’s hyphenated) is intended to suggest that someone has no genuine right to a platform and therefore should be ignored. It is as much an insult as calling someone an idiot, but is more malign because it seeks to undermine not just a single statement or argument, but everything that person says. Political correctness is very rich with such words. People mostly think using words, so coloring their meaning gradually over time means that people will still think the same way using the same verbal reasoning, but since the meaning of the words they are using has changed slightly, they will end up with a result that sounds the same as it used to, but now means something quite different.

For example, we’ve seen exactly that happen over the last decade by the redefining of poverty to be having an income below a percentage of average income rather than the traditional definition of being unable to afford basic essentials. People still retain the same emotional connection to the words poor and poverty, and are still shocked as politicians cite ever worsening statistics of the numbers of people in poverty even as society gets wealthier. Under its new meaning, if everyone’s income increased 1000-fold overnight, exactly the same number of people would remain ‘in poverty’, even though they could now all afford to live in luxury. People wanting to talk about poverty in its original meaning now have to use different language. The original words have been captured as political weapons. This errone was created and spread very deliberately and has had exactly the effect desired. People now have the same attitude to low income as they once held to poor.
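The arithmetic behind that point is easy to check with a toy calculation; the incomes and the 60%-of-median threshold below are invented assumptions for illustration, not official definitions or statistics.

```python
# Toy illustration of relative vs absolute poverty measures.
# The income figures and thresholds are invented, not real data.

from statistics import median

incomes = [8_000, 12_000, 18_000, 25_000, 40_000, 90_000]

def relative_poverty_count(incomes, fraction=0.6):
    """Count people below a fraction of median income (the relative measure)."""
    threshold = fraction * median(incomes)
    return sum(1 for x in incomes if x < threshold)

def absolute_poverty_count(incomes, essentials_cost=10_000):
    """Count people unable to afford a fixed basket of essentials (the traditional measure)."""
    return sum(1 for x in incomes if x < essentials_cost)

print(relative_poverty_count(incomes))                       # 2 people 'in poverty'
print(relative_poverty_count([x * 1000 for x in incomes]))   # still 2 after a 1000-fold income rise
print(absolute_poverty_count([x * 1000 for x in incomes]))   # 0: everyone can now afford essentials
```

Scaling every income leaves the relative count untouched, which is exactly the property described above.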

All very 1984

Capturing language and fencing off entire areas of potential thought by labelling them is a proven and highly effective technique for furthering a cause. It is of course the basis of Orwell’s 1984, in which the authorities enslave a population by enforcing a particular group-think, with words as their primary tool, and understanding of the techniques has been much practiced around the world. Orwell wrote his book to highlight the problem, but it hasn’t gone away; rather it has got worse. Increasing understanding of human psychology and the use of advanced marketing techniques have only added to its power and effectiveness. In true 1984 style, ‘progressive’ sounds very loving and positive and ‘regressive’ very nasty and negative, but how has it come about that we describe alternative tax policies in such terms? Tax is rightfully an issue for political parties to debate and decide, but surely democratic politics is there to allow people a mechanism to live alongside one another peacefully, in mutual tolerance and respect, not for each side to treat the other as inferiors who should be scorned and ostracized. Such labelling infects and biases someone’s thinking and is therefore error-forcing: an errone.

Similarly, ‘traditional’ was once a word we used to describe normal or slightly old-fashioned views, but political correctness seeks to quickly replace traditional values by attaching descriptors such as ‘dinosaur’, ‘bigoted’ or ‘prejudiced’ to anyone who doesn’t follow its line. Most people are terrified of being labelled as such, so will quickly fall in line with whatever the current demands for politically correct compliance are. Once someone does so, they adjust the external presentation of their own thinking to make the new status quo more acceptable to them, and seek to authenticate and justify themselves to others by proselytizing the errone, self-censoring and controlling their own thinking according to the prescribed filters and value set. They basically accept the errone, build it into place and nurture it. Memes are powerful. Errones are worse, because they get far deeper into places mere ideas can’t.

Thanks to deliberate infection with such errones, it is no longer possible to hold a discussion, or even to state statistical facts, across a wide range of topics without demonstrating a me-too bias. If analysis and debate can no longer be done without the deliberate introduction of systemic error, and error is seen not as a problem but as a requirement, then I suggest we are in trouble. We should be able to agree at least on basic facts and then argue about what to do about them, but even facts are now heavily filtered and distorted at numerous stages before we are allowed access to them.

Old wives’ tales (no age or gender-related slur intended)

Not all errones are deliberately fabricated and spread as part of this kind of tribal-cultural-political warfare. Some are simply commonly held assumptions that are wrong, such as old wives’ tales, or arise because people are not very good at thinking about exponential or non-linear systems. Take an example. Most environmentalists agree that rapid IT obsolescence is a big problem, resulting in massive waste and causing far more environmental impact than would be necessary if we just made things last longer. However, each generation of IT uses far less resource than the one it replaces, and in a few more generations of devices, we’ll be able to do all we do today with just a few grams of device. With far more people in the world wealthy enough to want all that function, doing it with today’s technology would have huge environmental impact, but with tomorrow’s, very much less. Thus slowing down the obsolescence cycle would have dire environmental consequences; the best way to help the environment is to progress quickly to ultra-low-impact IT. Similar errors exist across environmental policy worldwide, and the cause is the simple errone that reducing the impact of any part of a system will reduce the full system impact. That is very often incorrect. This environmental errone has already caused massive environmental and human damage by combining enthusiasm to act with what is now a very commonly held analytical error, and it will cause far more before it is done.
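To make the arithmetic concrete, here is a toy model in Python with entirely invented numbers. It assumes that each new, leaner device generation only arrives when devices are actually replaced, that each generation halves both manufacturing and running impact, and that running a device accounts for most of its footprint. Under those assumptions, the faster replacement cycle comes out well ahead over the same period.

def lifetime_impact(cycle_years, horizon_years=12, make_impact=50.0,
                    run_impact_per_year=100.0, shrink_per_generation=0.5):
    # Each replacement brings a new generation assumed to halve impact;
    # slower replacement means running high-impact generations for longer.
    total, generation = 0.0, 0
    for _ in range(0, horizon_years, cycle_years):
        factor = shrink_per_generation ** generation
        total += make_impact * factor                        # making this generation's device
        total += run_impact_per_year * factor * cycle_years  # running it until replacement
        generation += 1
    return total

print(lifetime_impact(cycle_years=2))  # rapid obsolescence: about 492 impact units
print(lifetime_impact(cycle_years=6))  # slow obsolescence: about 975 impact units

The numbers are placeholders, but they show how reducing one part of the system (manufacturing frequency) can increase the impact of the whole.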

Linear thinking

The errone of linear thinking probably results from constant exposure to it in others, making it hard to avoid infection. Typical consequences are a failure to account correctly for future technology or future wealth, and an assumption that everything except the problem you’re considering will remain the same while the problem itself keeps growing. A related errone is not allowing for the fact that exponential growth generally only happens for a limited time, followed by eventual leveling off or even decline, especially in human systems such as population, obesity or debt. Many stories of doom are based on the assumption that some current exponential growth, such as population or resource use, will continue forever, which is nonsense, but the errone seems to have found niches where it retains viability.
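A quick sketch of the difference, with arbitrary parameters: a logistic curve is indistinguishable from an exponential in its early stages, then levels off at its ceiling, while a naive extrapolation of the exponential keeps climbing forever.

import math

def exponential(t, start=1.0, rate=0.5):
    return start * math.exp(rate * t)

def logistic(t, ceiling=100.0, start=1.0, rate=0.5):
    # Same early growth rate, but flattens out as it approaches the ceiling.
    return ceiling / (1 + (ceiling / start - 1) * math.exp(-rate * t))

for t in range(0, 21, 5):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  logistic={logistic(t):6.1f}")
# At t=0 and t=5 the two are barely distinguishable; by t=20 the exponential
# extrapolation is over 200 times larger than the curve that levelled off.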

Errone communication

Errones spread through a population simply via exposure, using any medium. Watching an innocent TV program, reading a newspaper article or hearing a remark in a pub are all typical ways they spread. Just as some diseases can reduce resistance to other diseases, some errones, such as the celebrity halo effect, can lead to easier infection by others. People are far more likely to be infected by an errone from their favorite celebrity than from a stranger. If you see them making an error in their reasoning but making it sound plausible because they believe it themselves, there is a good chance you will be infected by it and help to spread it. And, as with diseases, people vary in their vulnerability to different types of errone.

Being smart won’t make you immune

Intelligence isn’t necessarily a defense, and for some errones it may even be a prerequisite for infection. Someone who is highly intelligent may actually be more susceptible to errones that are packaged in elaborate intellectual coatings, which may be useless for infecting less intelligent people who would simply ignore them. A sophisticated economic errone may only be able to infect people with a high level of expertise in economics, since nobody else would understand it, yet it may nevertheless still be an errone, still wrong thinking. Similarly, some of the fine political theories found at every point on the spectrum are mind-numbingly dull to most people and therefore pass over them with no effect, but may take root and flourish in certain political elites. Obviously, many social and special-interest groups have greater exposure and vulnerability to certain types of errone. There may well be errones connected with basketball strategies, but they can’t have any effect on me since I have zero knowledge of or interest in the game, and never have had, so the basic platform for them to operate doesn’t exist in my brain.

Errones may interact with each other. Some may act as a platform for others, or fertilize them, or create a vulnerability or transmission path, or they may even be nested. It is even possible for an entire field of knowledge to be worse than useless and still be riddled with additional errors. For example, someone may make some errone-type statistical errors when analyzing the effects of a homeopathic treatment. The fact that a whole field is nonsensical does not make it immune from extra errors within.

Perceptual errones are built into our brains too, some of them part pre-programmed and part infectious. There are many well-known optical illusions that affect almost everyone. The mechanics of perception introduce the error, and that error may feed into other areas such as decision making. I suffer from vertigo, and even a simple picture of a large drop is quite enough to trigger a fear reaction in my brain even though there is obviously no danger present. That phobia may well be part genetic and part infectious, and other phobias, such as fear of spiders or snakes, can certainly be communicated.

Group-think related errones

A very different class of errone is the collective one, closely related to group-think. The problem of ‘designed by committee’ is well known: a group of very smart people can collectively make really dumb decisions. There are many possible reasons, and not all are errone-related; agreeing with the boss or not challenging the idiot loud-mouth can both produce bad results with no need for errones. Group-think proper is where most people in the room share the same prejudice, and that prejudice can often be an errone. If other people you respect think something, you may just accept and adopt that view without thinking it through. If it is incorrect, or worse, if it is correct but only applies under certain conditions and you don’t know that, or don’t know the conditions, then it can lead to later errors.

I once sat through an electronics lecture explaining why it was impossible ever to get more than 2.4kbit/s through a copper telephone wire, why no matter what happened we never would, and how you can’t change the laws of physics. That’s hard to believe today, when ADSL easily delivers over 4Mbit/s to my home down the same copper wire. The physics wasn’t wrong, it just only applied to certain ways of doing things, and that lecturer obviously hadn’t understood that and presented it as a fundamental limit that would block any technique. I could use a similar excuse to explain why I failed a thermodynamics exam on my first attempt. It just seemed obviously wrong to me that you couldn’t get any energy from the waste heat of a power station. Our lecturer had delivered the correct thermodynamic equations for the first stage of a heat engine and then incorrectly left us believing that that was it, and that no additional heat could be used however clever anyone might be. I couldn’t see how that could possibly be right, and that confusion remained for months afterwards until I finally saw it explained properly. Meanwhile, I was vulnerable to errors caused by knowing something that was wrong, communicated to me by a poor lecturer. Well, that’s my side of it; I have to admit it is theoretically possible that I just didn’t listen properly. Either way, it’s still an errone.
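For what it’s worth, the standard Carnot bound shows why the ‘no further use’ claim was an errone: heat above ambient temperature can always, in principle, drive a further engine. A quick sanity check, where the temperatures are just example figures and 20°C is taken as ambient:

def carnot_limit(t_hot_c, t_ambient_c=20.0):
    # Maximum fraction of heat theoretically convertible to work, temperatures in Celsius.
    t_hot, t_cold = t_hot_c + 273.15, t_ambient_c + 273.15
    return 1 - t_cold / t_hot

print(f"{carnot_limit(500):.0%}")  # boiler-grade heat: roughly 62% theoretically available
print(f"{carnot_limit(90):.0%}")   # 'waste' heat at 90 C: only about 19%, but clearly not zero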

The reason I mention this one in a group-think section is that misunderstandings and misapplications of thermodynamics have permeated large populations within the climate change discussion community. Whichever side you are on, you will be familiar with some of the errors that affect the other lot, probably less so with the errones you have been infected with yourself. Just like me, I guess.

On a larger scale, entire nations can be affected by errones. We don’t think of patriotism as an error, although it clearly affects our value judgments; it is just one aspect of our bias towards communities close to where we live. Patriotism starts as a benign loyalty to your country, but extending that loyalty into a belief in superiority, thinking that anything and everyone in other countries must be less good than what you have close to home, is certainly a very common errone. The opposite exists too: in some countries, people assume that anything from abroad must be better. Of course, in some countries, they’re right.

The huge impacts of errones

Errones can be extremely expensive too. The banking crisis was caused in good measure by a widespread errone connected with the valuation of complex derivatives. Once that happened, a different errone affected the rest of the population. Even though the bank crash was costly, it only directly accounted for a tiny fraction of the overall global economic crash. The rest was caused by a crisis of confidence, a confidence errone if you like. The economy had been sound, so there was no real reason for any collapse, but once the errone that a recession was coming took hold, it became strongly self-fulfilling. Everyone shut their wallets, started being unduly careful with their spending, and economies crashed. Those of us who challenged that assumption at the time were too few and not influential enough to prevent it. So errones can be an enormous problem.

Elsewhere, economic errones are common. Housing bubbles, the web bubble, tulip bubbles: we never seem to learn, and the bubble errone mutates and reappears again and again like a flu virus. Investment errones are pretty ubiquitous, even at government level. The UK created what is now commonly known as the Concorde fallacy, an errone that makes people more inclined to keep throwing money down the drain on a project simply because they have already spent a lot on it.

Still other errones affect people in their choice of where to live. People often discount the risk of earthquakes, volcanoes, hurricanes, tsunamis and floods if they haven’t happened for a long time. When probability finally catches up, they are caught unprepared, often looking for someone to blame. The normality of everyday life quickly builds up into experience that pervades thinking and hides away thoughts of disaster. In stark contrast, other people fall easy prey to stories of doom and gloom, because they have been infected with errones that make such disasters seem more dangerous or more likely than they really are.

Health errones are an obvious problem. Scientists and nutritionists change their advice on what to eat and drink from time to time as new research results come in, but news of the change is not always accepted. Many people never hear it, others refuse to accept it because they are sick of changing advice from scientists, and others hear it and simply ignore it. The result is that outdated advice, sometimes outright wrong advice, can persist and continue to spread long after it has been superseded; what was once considered good advice essentially mutates into an errone. The current fat versus sugar debate will be interesting to follow in this regard, since it will have ongoing effects throughout the food, sports, entertainment and leisure industries. We can be certain that some of the things we currently strongly believe are actually errones that lead to errors in many areas of our lives.

Looking at transport, everyone knows it is safer to fly than to drive, but those statistics only work for long trips. If you only want to travel 5km, it is safer to drive than to fly; at around 50km the balance starts to favor flying, and beyond that flying is certainly safest. That errone probably leads to vanishingly few consequentially wrong decisions, but it has managed to spread very successfully.
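The structure of that comparison is easy to sketch. With purely illustrative rates (not real accident statistics), a fixed per-trip risk for flying, dominated by take-off and landing, set against a per-km risk for driving gives a crossover distance equal to the fixed risk divided by the per-km difference:

DRIVE_RISK_PER_KM = 5e-9     # hypothetical risk per km driven
FLY_RISK_PER_TRIP = 2e-7     # hypothetical fixed risk per flight (take-off and landing)
FLY_RISK_PER_KM = 1e-10      # hypothetical cruise risk per km flown

def drive_risk(km):
    return DRIVE_RISK_PER_KM * km

def fly_risk(km):
    return FLY_RISK_PER_TRIP + FLY_RISK_PER_KM * km

for km in (5, 50, 500):
    safer = "drive" if drive_risk(km) < fly_risk(km) else "fly"
    print(f"{km} km trip: safer to {safer}")

crossover = FLY_RISK_PER_TRIP / (DRIVE_RISK_PER_KM - FLY_RISK_PER_KM)
print(f"crossover at roughly {crossover:.0f} km with these made-up rates")

With these made-up figures the short trip favors driving and the longer ones favor flying; the real crossover distance depends entirely on the actual accident rates.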

I could go on – there are a lot of errones around, and we keep making more of them. But enough for now.