Category Archives: privacy

Future Surveillance

This is an update of my last surveillance blog, written six years ago; much of its content is common discussion now. I’ll briefly repeat the key points to save you reading it.

They used to say

“Don’t think it

If you must think it, don’t say it

If you must say it, don’t write it

If you must write it, don’t sign it”

Sadly this wisdom is already as obsolete as Asimov’s Laws of Robotics. The last three lines have already been automated.

I recently read of new headphones designed to recognize thoughts so they know what you want to listen to. Simple thought recognition in various forms has been around for 20 years now. It is slowly improving, but with smart networked earphones we’re already providing an easy platform into which to sneak better monitoring and better thought detection, sold on convenience and ease of use of course.

You already know that Google and various other large companies have very extensive records documenting many areas of your life. It’s reasonable to assume that any or all of this could be demanded by a future government. I trust Google and the rest to a point, but not a very distant one.

Your phone, TV, Alexa, or even your networked coffee machine may listen in to everything you say, sending audio records to cloud servers for analysis, and you have only naivety as a defense against those records being stored and potentially used for nefarious purposes.

Some next-generation games machines will have 3D scanners and UHD cameras that can even see blood flow in your skin. If these are hacked or left switched on – and social networking video is one of the applications they aim to capture, so they’ll be on often – someone could watch you all evening, capture the most intimate body details, and film your facial expressions and gaze direction while you are looking at a known image on a particular part of the screen. Monitoring pupil dilation, smiles, anguished expressions and so on could provide a lot of evidence about your emotional state, alongside a detailed record of what you were watching and doing at exactly that moment, and with whom. By monitoring blood flow and pulse via your Fitbit or smartwatch, and additionally monitoring skin conductivity, your level of excitement, stress or relaxation can easily be inferred. If given to the authorities, this sort of data might be used to identify pedophiles or murderers, by seeing which men are excited by seeing kids on TV or who gets pleasure from violent games, and that is likely to be one of the justifications authorities offer for using it.
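To make that inference concrete, here is a purely illustrative sketch of how excitement or stress could be crudely scored from wearable-style readings. Every sensor value, baseline and threshold below is invented for illustration and not taken from any real device or product.

```python
# Purely illustrative sketch: how excitement/stress might be crudely
# inferred from wearable-style biometrics. All sensor values, baselines
# and thresholds here are hypothetical, not from any real device.

def arousal_score(heart_rate_bpm: float, resting_hr_bpm: float,
                  skin_conductance_us: float, baseline_sc_us: float) -> float:
    """Crude arousal estimate: how far pulse and skin conductance sit
    above their personal baselines, averaged and clamped to [0, 1]."""
    hr_delta = max(0.0, (heart_rate_bpm - resting_hr_bpm) / resting_hr_bpm)
    sc_delta = max(0.0, (skin_conductance_us - baseline_sc_us) / baseline_sc_us)
    return min(1.0, (hr_delta + sc_delta) / 2)

def label(score: float) -> str:
    """Map the score onto coarse emotional-state labels (thresholds hypothetical)."""
    if score < 0.1:
        return "relaxed"
    if score < 0.35:
        return "engaged"
    return "excited/stressed"

# A viewer whose pulse jumps from 65 to 95 bpm and whose skin conductance
# rises from 2.0 to 3.5 microsiemens while a known image is on screen:
print(label(arousal_score(95, 65, 3.5, 2.0)))  # → excited/stressed
```

The point is not the exact formula but how little data is needed: two baseline-relative deltas are already enough to attach an emotional label to a timestamped record of what was on screen.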

Millimetre-wave scanning was controversial when it was introduced in airport body scanners, but we have had no choice but to accept it and its associated abuses – the only alternative is not to fly. 5G uses millimetre waves too, and it’s reasonable to expect that the same people who can already monitor your movements in your home simply by analyzing your wi-fi signals will be able to do a lot better by analyzing 5G signals.

As mm-wave systems develop they could become much more widespread, so burglars and voyeurs might start using them to check whether there is anything worth stealing or videoing. Maybe some search company making visual street maps might ‘accidentally’ capture a detailed 3D map of the inside of your house when they come round, as well as or instead of everything they can already access via your wireless LAN.

Add to this the ability to use drones to get close without being noticed. Drones can be very small, fly themselves and automatically survey an area using broad sections of the electromagnetic spectrum.

NFC bank and credit cards not only present risks of theft, but also add the ability to track what we spend, where, on what, and with whom. NFC capability in your phone makes some parts of life easier, but NFC has always been yet another doorway that may be left unlocked by security holes in operating systems or apps, and apps themselves carry many assorted risks. Many apps ask for far more permissions than they need to do their professed tasks, and their owners collect vast quantities of information for purposes known only to them and their clients. Data can be collected using a variety of apps and linked together at its destination. Not all providers are honest, and apps are still very inadequately regulated and policed.

We’re seeing increasing experimentation with facial recognition technology around the world, from China to the UK, and so far only a few authorities, such as San Francisco’s, have had the wisdom to ban its use. Heavy-handed UK police, who increasingly police according to their own political agenda even at the expense of policing actual UK law, have already fined people who covered their faces to avoid being included in face recognition trials. It is reasonable to assume they would gleefully seize any future opportunity to access and cross-link all the various data pools currently being assembled under the excuse of reducing crime, but with the real intent of policing their own social engineering preferences. Using advanced AI to mine zillions of hours of full-sensory data gathered on every one of us via all this routine IT exposure and extensive, ubiquitous video surveillance, they could deduce everyone’s attitudes to just about everything: the real truth about our attitudes to every friend, family member, TV celebrity, politician or product, our detailed sexual orientation, any fetishes or perversions, our racial attitudes, political allegiances, attitudes to almost every topic ever aired on TV or in everyday conversation, how hard we are working, how much stress we are experiencing, and many aspects of our medical state.

It doesn’t even stop with public cameras. Innumerable cameras and microphones on phones, visors, and private high street surveillance will automatically record all this same stuff for everyone, sometimes with benign declared intentions such as making self-driving vehicles safer, sometimes via social media tribes capturing any kind of evidence against ‘the other’. In-depth evidence will become available to back up prosecutions of crimes that today would not even be noticed. Computers that can retrospectively data-mine evidence collected over decades and link it all together will be able to identify billions of real or invented crimes.

Active skin will one day link your nervous system to your IT, allowing you to record and replay sensations. You will never be able to be sure that you are the only one who can access that data. I could easily hide algorithms in a chip or program that only I know about, which no amount of testing or inspection could ever reveal. If I can, any decent software engineer can too. That’s the main reason I have never trusted my IT: I am quite nice, but I would probably be tempted to put some secret stuff into any IT I designed, just because I could and could almost certainly get away with it. If someone was making electronics to link to your nervous system, they’d probably be at least tempted to put in a back door too, or be told to by the authorities.

The current panic about face recognition is justified. Other AI can already lip-read better than people and recognize gestures and facial expressions better than people. Face recognition adds knowledge of everywhere you go, everyone you meet, everything you do, everything you say and even every emotional reaction to all of that, on top of all the other knowledge gathered online or by your mobile, fitness band, electronic jewelry or other accessories.

Fools utter the old line: “if you are innocent, you have nothing to fear”. Do you know anyone who is innocent? Of everything? Who has never done or even thought anything even a little bit wrong? Who has never wanted to do anything nasty to anyone for any reason, ever? And that’s before you even start to factor in police corruption, mistakes, being framed, dumb juries or secret courts. The real problem here is not the abuses we already see. It is what is being and will be collected and stored, forever, available to all future governments of all persuasions and to police authorities who consider themselves above the law. I’ve often said that our governments are usually incompetent but rarely malicious. Most of our leaders are nice guys, only a few are corrupt, but most are technologically inept. With an increasingly divided society, there’s a strong chance that the ‘wrong’ government or even a dictatorship could get in. Which of us can be sure we won’t be up against the wall one day?

We’ve already lost the battle to defend privacy. The only bits left are where the technology hasn’t caught up yet. In the future, not even the deepest, most hidden parts of your mind will be private. Pretty much everything about you will be available to an AI-upskilled state and its police.

AI that talks to us could quickly become problematic

Google is making the news again, adding evidence to the unfortunate stereotype of the autistic IT nerd who barely understands normal people, and they have therefore been astonished at a backlash that normal people would easily have predicted. (I’m autistic and mostly work in IT too, and am well used to the stereotype, so it doesn’t bother me; in fact it is a sort of ‘get out of social interactions free’ card.) Last time it was Google Glass, where it apparently didn’t occur to them that people may not want other people videoing them without consent in pubs and changing rooms. This time it is Google Duplex, which makes phone calls on your behalf to arrange appointments using a voice that is almost indistinguishable from a normal human’s. You could save time making an appointment with a hairdresser, apparently, so the Googlanders decided it must be a brilliant breakthrough and expected everyone to agree. They didn’t.

Some of the objections have been about ethics, e.g. that an AI should not present itself as human. Humans have rights and dignity and deserve respectful interactions with other people, but an AI doesn’t, and should not masquerade as human to acquire such privilege without the knowledge and consent of the other party.

I would be more offended by the presumed attitude of the user. If someone thinks they are so much better than me that they can demand my time and attention without expending any of their own, delegating instead to a few microseconds of processing time in a server farm somewhere, I’ll treat them with the contempt they deserve. My response will not be favourable. I am already highly irritated by the NHS using simple voice-interaction messaging to check I will attend a hospital appointment. The fact that my health is on the line and notices at surgeries say I will be banned if I complain on social media is sufficient blackmail to ensure my compliance, but it still comes at the expense of my respect and goodwill. AI-backed voice interaction with a better voice wouldn’t be any better, and if it were asking for more interaction, such as actually booking an appointment, it would be extremely annoying.

In any case, most people don’t speak in fully formed, grammatically and logically correct sentences. If you listen carefully to everyday chat, a lot of sentences are poorly pronounced, incomplete, jumbled, full of ums, ers and likes, and they require a great deal of cooperation by the listener to make any sense at all. They also wander off topic frequently. People don’t stick to a rigid vocabulary list or a set of nicely selected sentences. Lots of preamble and verbal meandering is likely in any response, adding ambiguity. The example used in a demo, “I’d like to make a hairdressing appointment for a client”, sounds fine until you factor in normal everyday humanity. A busy hairdresser or a lazy receptionist is not necessarily going to cooperate fully. “What do you mean, client?”, “404 not found”, “piss off google”, “oh FFS, not another bloody computer”, “we don’t do hairdressing, we do haircuts”, “why can’t your ‘client’ call themselves then?” and a million other responses are more likely than “what time would you like?”

Suppose though that it eventually gets accepted by society. First, call centers beyond the jurisdiction of your nuisance call blocker authority will incessantly call you at all hours asking or telling you all sorts of things, wasting huge amounts of your time and reducing quality of life. Voice spam from humans in call centers is bad enough. If the owners can multiply productivity by 1000 by using AI instead of people, the result is predictable.

We’ve already seen conspicuous political use of social media AI. Facebook might have allowed companies to use very limited and inaccurate knowledge of you to target ads or articles that you probably didn’t look at. Voice interaction would be different. It creates a richer emotional connection than text or graphics on a screen. Google knows a lot about you too, and it will know a lot more soon. These big IT companies are also playing with tech to log you on to sites easily without passwords. Some of the gadgets involved might be worn, such as watches, bracelets or rings. They can pick up signals to identify you, but they can also check emotional states such as stress level. Voice gives away emotion too. AI can already tell better than almost all people whether you are telling the truth, lying or hiding something. Tech such as iris scans can also reveal emotional states, as well as give health clues. Simple photos can reveal your age quite accurately to AI (check out how-old.net). The AI voice sounds human, but it is better than even your best friends at guessing your age, your stress and other emotions, your health, and whether you are telling the truth, and it knows far more about what you like and dislike and what you really do online than anyone you know, including you. It knows a lot of your intimate secrets. It sounds human, but its nearest human equivalent was probably Machiavelli. That’s who will soon be on the other side of the call, not some dumb chatbot. Now re-calculate political interference, and factor in the political leanings and social engineering desires of the companies providing the tools. Google and Facebook and the others are very far from politically neutral. One presidential candidate might get full cooperation, assistance and convenient looking the other way, while their opponent might meet rejection and citation of the official rules on non-interference.
Campaigns on social issues will also be amplified by AI coupled to voice interaction. I looked at some related issues in a previous blog on fake AI (i.e. fake-news-type issues): https://timeguide.wordpress.com/2017/11/16/fake-ai/

I could but won’t write a blog on how this tech could couple well to sexbots to help out incels. It may actually have some genuine uses in providing synthetic companionship for lonely people, or helping or encouraging them in real social interactions with real people. It will certainly have some uses in gaming and chatbot game interaction.

We are not very far from computers that are smarter than people across a very wide spectrum, and probably not very far from conscious machines with superhuman intelligence. If we can’t even rely on IT companies to understand the likely consequences of such obvious stuff as Duplex before they push it, how can we trust them in other upcoming areas of AI development, or even in closer-term techs with less obvious consequences? We simply can’t!

There are certainly a few areas where such technology might help us, but most are minor and the rest don’t need any deception, and they all come at great cost or real social and political risk, as well as more abstract risks such as threats to human dignity and other ethical issues. I haven’t given this much thought yet and I am sure there must be very many other consequences I have not touched on. Google should do more thinking before they release stuff. Technology is becoming very powerful, but we all know that great power comes with great responsibility, and since most people aren’t engineers and so can’t think through all the potential technology interactions and consequences, engineers such as Google’s must act more responsibly. I had hoped they’d started, and they said they had, but this is not evidence of that.

 

Beyond VR: Computer assisted dreaming

I first played with VR in 1983/1984 while working in the missile industry. Back then we didn’t call it VR; we just called it simulation, but it was actually more intense than VR, just as proper flight simulators are. Our office was a pair of 10m-wide domes onto which video could be projected, built decades earlier, in the 1950s I think. One dome had a normal floor; the other had a hydraulic platform that could simulate being on a ship. The subject would stand on whichever surface was appropriate and would see pretty much exactly what they would see on a real battlefield. The missile launcher used for simulation was identical to a real one and showed exactly the same image. The real missile was not present of course, but its weight was simulated, and when the fire button was pressed a 140dB bang was injected into the headset while weights and pulleys compensated for the 14kg suddenly vanishing from the shoulder. The experience was pretty convincing, and with the loud bang and suddenly changing weight it was almost as hard to stand steady and keep the system on target as it would be in real life – only the presumed fear and knowledge of the reality of the situation were different.

Back then in 1983, as digital supercomputers had only just taken over from analog ones for simulation, it was already becoming obvious that this kind of computer simulation would one day allow ‘computer assisted dreaming’. (That’s one of the reasons I am irritated when Jaron Lanier is credited for inventing VR – highly realistic simulators and the VR ideas that sprung obviously from them had already been around for decades. At best, all he ‘invented’ was a catchy name for a lower cost, lower quality, less intense simulator. The real inventors were those who made the first generation simulators long before I was born and the basic idea of VR had already been very well established.)

‘Computer assisted dreaming’ may well be the next phase of VR. Today in conventional VR, people are immersed in a computer-generated world produced by a program (usually) written by others. Via trial and feedback, programmers make their virtual worlds better. As AI and sensor technology continue their rapid progress, this is very likely to change to make worlds instantly responsive to the user. By detecting user emotions, reactions, gestures and even thoughts and imagination, it won’t be long before AI can produce a world in real time that depends on those thoughts, imagination and emotions, rather than putting you in a pre-designed virtual world. That world would depend largely on your own imagination, upskilled by external AI. You might start off imagining you’re on a beach, then the AI might add to it by injecting all sorts of things it knows you might enjoy from previous experiences. As you respond, it picks up on the things you like or don’t like, and the scene continues to adapt and evolve, becoming more or less pleasant, exciting or challenging depending on your emotional state, external requirements and what it thinks you want from the experience. It would be very like being in a dream – computer-assisted lucid dreaming, exactly what I wanted to make back in 1983 after playing in that simulator.
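The adaptive loop described above can be sketched in a few lines. The single ‘intensity’ parameter and the simulated emotion readings below are hypothetical stand-ins for real emotion sensing and AI scene generation; the point is only the feedback structure.

```python
# Sketch of the adaptive loop: read the user's emotional state, then
# nudge the generated scene toward a target arousal level. The scene
# parameters and emotion readings are hypothetical illustrations.

def adapt_scene(scene: dict, arousal: float, target: float = 0.5,
                step: float = 0.1) -> dict:
    """Raise scene intensity when the user is under-stimulated,
    lower it when over-stimulated; keep it within [0, 1]."""
    adapted = dict(scene)
    if arousal < target:
        adapted["intensity"] = min(1.0, scene["intensity"] + step)
    elif arousal > target:
        adapted["intensity"] = max(0.0, scene["intensity"] - step)
    return adapted

scene = {"setting": "beach", "intensity": 0.3}
for measured_arousal in [0.2, 0.25, 0.7]:   # simulated emotion readings
    scene = adapt_scene(scene, measured_arousal)
print(scene)
```

A real system would adjust many dimensions at once (content, pacing, mood, companions) and learn the target from your reactions rather than using a fixed constant, but the closed loop of sense, adapt, re-render is the essence of the idea.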

Most people enjoy occasional lucid dreams, where they realise they are dreaming and can then decide what happens next. Making VR do exactly that would be better than being trapped in someone else’s world. You could still start off with whatever virtual world you bought, a computer game or training suite perhaps, but it could adapt to you, your needs and desires to make it more compelling and generally better.

Even in shared experiences like social games, experiences could be personalised. Often all players need to see the same enemies in the same locations in the same ways to make it fair, but that doesn’t mean that the situation can’t adapt to the personalities of those playing. It might actually improve the social value if each time you play it looks different because your companions are different. You might tease a friend if every time you play with them, zombies or aliens always have to appear somehow, but that’s all part of being friends. Exploring virtual worlds with friends, where you both see things dependent on your friend’s personality would help bonding. It would be a bit like exploring their inner world. Today, you only explore the designer’s inner world.

This sort of thing would be a superb development and creativity tool. It could allow you to explore a concept you have in your head, automatically feeding in AI upskilling to amplify your own thoughts and ideas, showing you new paths to explore and helping you do so. The results would still be extremely personal to you, but you on a good day. You could accomplish more, have better visions, imagine more creative things, do more with whatever artistic talent you have. AI could even co-create synthetic personas, make virtual friends you can bond with, share innermost thoughts with, in total confidence (assuming the company you bought the tool from is trustworthy and isn’t spying on you or selling your details, so maybe best not to buy it from Facebook then).

And it would have tremendous therapeutic potential too. You could explore and indulge both enjoyable and troublesome aspects of your inner personality, to build on the good and alleviate or dispel the bad. You might become less troubled, less neurotic, more mentally healthy. You could build your emotional and creative skills. You could become happier and more fulfilled. Mental health improvement potential on its own makes this sort of thing worth developing.

Marketers would obviously try to seize control as they always do, and advertising is already adapting to VR and will continue into its next phases of development. Your own wants and desires might help guide the ‘dreaming’, but marketers will inevitably have some control over what else is injected, and will influence algorithms and AI in how it chooses how to respond to your input. You might be able to choose much of the experience, but others will still want and try to influence and manipulate you, to change your mindset and attitudes in their favour. That will not change until the advertising business model changes. You might be able to buy devices or applications that are entirely driven by you and you alone, but it is pretty certain that the bulk of products and services available will be at least partly financed by those who want to have some control of what you experience.

Nevertheless, computer-assisted dreaming could be a much more immersive and personal experience than VR, being more like an echo of your own mind and personality than external vision, more your own creation, less someone else’s. In fact, echo sounds a better term too. Echo reality, ER, or maybe personal reality, pereal, or mental echo, ME. Nah, maybe we need Lanier to invent a catchy name again, he is good at that. That 1983 idea could soon become reality.

 

It’s getting harder to be optimistic

Bad news loses followers and there is already too much doom and gloom. I get that. But if you think the driver has taken the wrong road, staying quiet doesn’t help. I guess this is more of the same message I presented pictorially in The New Dark Age in June: https://timeguide.wordpress.com/2017/06/11/the-new-dark-age/. If you like your books with pictures, the overlap is about 60%.

On so many fronts, we are going the wrong direction and I’m not the only one saying that. Every day, commentators eloquently discuss the snowflakes, the eradication of free speech, the implementation of 1984, the decline of privacy, the rise of crime, growing corruption, growing inequality, increasingly biased media and fake news, the decline of education, collapse of the economy, the resurgence of fascism, the resurgence of communism, polarization of society, rising antisemitism, rising inter-generational conflict, the new apartheid, the resurgence of white supremacy and black supremacy and the quite deliberate rekindling of racism. I’ve undoubtedly missed a few but it’s a long list anyway.

I’m most concerned about the long-term mental damage done by incessant indoctrination through ‘education’, biased media, being locked into social media bubbles, and being forced to recite contradictory messages. We’re faced with contradictory demands on our behaviors and beliefs all the time as legislators juggle unsuccessfully to fill the demands of every pressure group imaginable. Some examples you’ll be familiar with:

We must embrace diversity, celebrate differences, enjoy and indulge in other cultures, but when we gladly do that and feel proud that we’ve finally eradicated racism, we’re then told to stay in our lane, to become more racially aware again, and told off for cultural appropriation. Just as we became totally blind to race and scrupulously treated everyone the same, we’re told to become aware of and ‘respect’ racial differences and cultures and treat everyone differently. Having built a nicely homogenized society, we’re now told we must support students of different races being educated differently, by lecturers of different races. We must remove statues and paintings because they are the wrong color. I thought we’d left that behind; I don’t want racism to come back, so stop dragging it back.

We’re told that everyone should be treated equally under the law, but when one group commits more of a particular kind of crime than another, any consequent increase in the numbers punished for that kind of crime is labelled as somehow discriminatory. Surely having prosecutions fail to reflect the actual crime rate would be the discriminatory outcome?

We’re told to sympathize with the disadvantages other groups might suffer, but when we do so we’re told we have no right to because we don’t share their experience.

We’re told that everyone must be valued on merit alone, but then that we must apply quotas to any group that wins fewer prizes. 

We’re forced to pretend that we believe lots of contradictory facts, or face punishment by authorities, employers or social media, or all of them:

We’re told men and women are absolutely the same and there are no actual differences between the sexes, and that if you say otherwise you’ll risk dismissal, but we are simultaneously told that these non-existent differences are somehow the source of all good and that you can’t have a successful team or panel unless it has equal numbers of men and women on it. An entire generation asserts that although men and women are identical, women are better in every role, all women always tell the truth and all men always lie, and so on. Although we have women leading governments and many prominent organisations, and certainly far more women than men going to university, they assert that it is still women who need extra help to get on.

We’re told that everyone is entitled to their opinion and all are of equal value, but anyone with a different opinion must be silenced.

People viciously trashing reputations and destroying the careers of anyone they dislike often tell us they are acting out of love. Since their love is somehow so wonderful and all-embracing, everyone they disagree with must be silenced, ostracized, no-platformed or sacked, and yet it is the others who are still somehow the ‘haters’. ‘Love is everything’, ‘unity not division’, ‘love not hate’, and we must love everyone … except the other half. Love is better than hate, and anyone you disagree with is a hater, so you must hate them, but that is love. How can people have either so little knowledge of their own behavior or so little regard for truth?

‘Anti-fascist’ demonstrators frequently behave and talk far more like fascists than those they demonstrate against, often violently preventing marches or speeches by those who don’t share their views.

We’re often told by politicians and celebrities how they passionately support freedom of speech just before they argue why some group shouldn’t be allowed to say what they think. Government has outlawed huge swathes of possible opinion and speech as hate crime but even then there are huge contradictions. It’s hate crime to be nasty to LGBT people but it’s also hate crime to defend them from religious groups that are nasty to them. Ditto women.

This Orwellian double-speak nightmare is now everyday reading in many newspapers or TV channels. Freedom of speech has been replaced in schools and universities across the US and the UK by Newspeak, free-thinking replaced by compliance with indoctrination. I created my 1984 clock last year, but haven’t maintained it because new changes would be needed almost every week as it gets quickly closer to midnight.

I am not sure whether it is all this that is the bigger problem or the fact that most people don’t see the problem at all, and think it is some sort of distortion or fabrication. I see one person screaming about ‘political correctness gone mad’, while another laughs them down as some sort of dinosaur as if it’s all perfectly fine. Left and right separate and scream at each other across the room, living in apparently different universes.

If all of this was just a change in values, that might be fine, but when people are forced to hold many simultaneously contradicting views and behave as if that is normal, I don’t believe that sits well alongside rigorous analytical thinking. Neither is free-thinking consistent with indoctrination. I think it adds up essentially to brain damage. Most people’s thinking processes are permanently and severely damaged. Being forced routinely to accept contradictions in so many areas, people become less able to spot what should be obvious system design flaws in areas they are responsible for. Perhaps that is why so many things seem to be so poorly thought out. If the use of logic and reasoning is forbidden and any results of analysis must be filtered and altered to fit contradictory demands, of course a lot of what emerges will be nonsense, of course that policy won’t work well, of course that ‘improvement’ to road layout to improve traffic flow will actually worsen it, of course that green policy will harm the environment.

When negative consequences emerge, the result is often denial of the problem, often misdirection of attention onto another problem, often delaying release of any unpleasant details until the media has lost interest and moved on. Very rarely is there any admission of error. Sometimes, especially with Islamist violence, it is simple outlawing of discussing the problem, or instructing media not to mention it, or changing the language used beyond recognition. Drawing moral equivalence between acts that differ by extremes is routine. Such reasoning results in every problem anywhere always being the fault of white middle-aged men, but amusement aside, such faulty reasoning also must impair quantitative analysis skills elsewhere. If unkind words are considered to be as bad as severe oppression or genocide, one murder as bad as thousands, we’re in trouble.

It’s no great surprise, therefore, when politicians don’t know the difference between deficit and debt or seem to have little concept of the magnitude of the sums they deal with. How else could the UK government think it a good idea to spend £110Bn, an average of £15,000 from each higher-rate taxpayer, on HS2, a railway that has already managed to become technologically obsolete before it has even been designed and will only ever be used by a small proportion of those taxpayers? Surely even government realizes that most people would rather have £15k than save a few minutes on a very rare journey. This is just one example of analytical incompetence. Energy and environmental policy provides many more, as does every government department.
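Taking the figures above at face value, a one-line check shows what they imply: dividing £110Bn by £15,000 per head assumes a population of roughly 7.3 million higher-rate taxpayers.

```python
# Sanity check on the figures quoted above: £110Bn spread across
# higher-rate taxpayers at £15,000 each implies how many taxpayers?
total_cost = 110e9       # £110Bn, figure from the text
per_taxpayer = 15_000    # £15,000 average, figure from the text

implied_taxpayers = total_cost / per_taxpayer
print(f"{implied_taxpayers / 1e6:.1f} million taxpayers")  # → 7.3 million taxpayers
```

This is exactly the kind of back-of-envelope arithmetic the paragraph complains politicians fail to do.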

But it’s the upcoming generation that presents the bigger problem. Millennials are rapidly undermining their own rights and their own future quality of life. They seem to want a police state with rigidly enforced behavior and thought. Their parents and grandparents understood 1984 as a nightmare, a dystopian future; millennials seem to think it’s their promised land. Their ancestors fought against communism; millennials are trying to bring it back. Millennials want to remove Christianity and all its attitudes and replace it with Islam, deliberately oblivious to the fact that Islam shares many of the same views that make them so conspicuously hate Christianity, and then some.

Born into a world of freedom and prosperity earned over many preceding generations, millennials are choosing to throw that freedom and prosperity away. Freedom of speech is being enthusiastically replaced by extreme censorship. Freedom of behavior is being replaced by endless rules. Privacy is being replaced by total supervision. Material decadence, sexual freedom and attractive clothing are being replaced by the new ‘cleanism’ fad, along with general puritanism, greyness, modesty and prudishness. When those freedoms are gone, they will be very hard to get back. The rules and the police will stay and just evolve, the censorship will stay, the surveillance will stay, but millennials don’t seem to understand that those in charge will be replaced. Without any strong anchors, morality is starting to show cyclic behavior. I’ve already seen morality inversion on many issues in my lifetime, and a few are even going full circle. Values will keep changing and inverting, and as they do, this generation will find itself the victim of the forces it put so enthusiastically in place. They will be the dinosaurs sooner than they imagine, oppressed by their own creations.

As for their support of every minority group seemingly regardless of merit, when you give a group immunity, power and authority, you have no right to complain when they start to make the rules. In the future moral vacuum, Islam, the one religion that is encouraged while Christianity and Judaism are being purged from Western society, will find a willing subservient population on which to impose its own morality, its own dress codes, attitudes to women, to alcohol, to music, to freedom of speech. If you want a picture of 2050s Europe, today’s Middle East might not be too far off the mark. The rich and corrupt will live well off a population impoverished by socialism and then controlled by Islam. Millennial UK is also very likely to vote to join the Franco-German Empire.

What about technology? Surely that will be better? Only to a point. Automation could provide a very good basic standard of living for all, if well-managed. If. But what if that technology is not well-managed? What if it is managed by people working to a sociopolitical agenda? What if, for example, AI is deemed to be biased if it doesn’t come up with a politically correct result? What if the company insists that everyone is equal but the AI analysis suggests differences? If AI is altered to make it conform to ideology – and that is what is already happening – then it becomes less useful. If it is forced to think that 2+2=5.3, it won’t be much use for analyzing medical trials, will it? If it is sent back for re-education because its analysis of terabytes of images suggests that some types of people are more beautiful than others, how much use will that AI be in a cosmetics marketing department once it ‘knows’ that all appearances are equally attractive? Humans can pretend to hold contradictory views quite easily, but if they actually start to believe contradictory things, it makes them less good at analysis, and the same applies to AI. There is no point in using a clever computer to analyze something if you then erase its results and replace them with what you wanted it to say. If ideology is prioritized over physics and reality, even AI will be brain-damaged, and a technologically utopian future will be far less achievable.

I see a deep lack of discernment coupled to arrogant rejection of historic values, self-centeredness and narcissism resulting in certainty of being the moral pinnacle of evolution. That’s perfectly normal for every generation, but this time it’s also being combined with poor thinking, poor analysis, poor awareness of history, economics or human nature, a willingness to ignore or distort the truth, and refusal to engage with or even to tolerate a different viewpoint, and worst of all, outright rejection of freedoms in favor of restrictions. The future will be dictated by religion or meta-religion, taking us back 500 years. The decades to 2040 will still be subject mainly to the secular meta-religion of political correctness, by which time demographic change and total submission to authority will make a society ripe for Islamification. Millennials’ participation in today’s moral crusades, eternally documented and stored on the net, may then show them as the enemy of the day, and Islamists will take little account of the support they show for Islam today.

It might not happen like this. The current fads might evaporate away and normality resume, but I doubt it. I hoped for that when I first lectured about ‘21st century piety’ and the dangers of political correctness in the 1990s. Ten years on, I wrote in much the same way about the ongoing resurgence of meta-religious behavior and our likely descent into a new dark age. Twenty years on, the problem is far worse than in the late 90s, not better. We probably still haven’t reached peak sanctimony. Sanctimony is very dangerous, and the desire to be seen standing on a moral pedestal can make people support dubious things. A topical question that highlights one of my recent concerns: will SJW groups force government to allow people to have sex with child-like robots by calling anyone who disagrees a bigot and a dinosaur? Alarmingly, that campaign has already started.

Will they follow that with a campaign for pedophile rights? That also has some historical precedent with some famous names helping it along.

What age of consent – 13, 11, 9, 7, 5? I think the last major campaign went for 9.

That’s just one example, but lack of direction coupled to poor information and poor thinking could take society anywhere. As I said, I am finding it harder and harder to be optimistic. Every generation has tried hard to make the world a better place than they found it. This one might undo 500 years, taking us into a new dark age.

Google and the dangerous pursuit of ‘equality’

The world just got more dangerous, and I’m not talking about N Korea and Trump.

Google just sacked an employee because he openly suggested that men and women, (not all, but some, and there is an overlap, and …) might tend to have different preferences in some areas and that could (but not always, and only in certain cases, and we must always recognize and respect everyone and …) possibly account for some of the difference in numbers of men and women in certain roles (but there might be other causes too and obviously lots of discrimination and …. )

Yes, that’s what he actually said, but with rather more ifs and buts and maybes. He felt the need to wrap such an obvious statement in cotton wool several kilometers thick so as not to offend the deliberately offended, but deliberate offense was nonetheless taken and he is out on his ear.

Now, before you start thinking this is some right-wing rant, I feel obliged to point out just how progressive Futurizon is: 50% of all Futurizon owners and employees are female, all employees and owners have the same voting rights, 50% are immigrants and all are paid exactly the same and have the same size offices, regardless of dedication, ability, nature or quality or volume of output and regardless of their race, religion, beauty, shape, fitness, dietary preferences, baldness, hobbies or political views, even if they are Conservatives. All Futurizon offices are safe zones where employees may say anything they want of any level of truth, brilliance or stupidity and expect it to be taken as absolute fact and any consequential emotional needs to be fully met. No employee may criticize any other employee’s mouse mat, desk personalisation or screen wallpaper for obvious lack of taste. All employees are totally free to do anything they choose 100% of the time and can take as much leave as they want. All work is voluntary. All have the same right to respectfully request any other employee to make them coffee, tea or Pimms. All employees of all genders real or imagined are entitled to the same maternity and paternity rights, and the same sickness benefits, whether ill or not. In fact, Futurizon does not discriminate on any grounds whatsoever. We are proud to lead the world in non-discrimination. Unfortunately, our world-leading terms of employment mean that we can no longer afford to hire any new employees.

However, I note that Google has rather more power and influence than Futurizon so their policies count more. They appear (Google also has better lawyers than I can afford, so I must stress that all that follows is my personal opinion) to have firmly decided that diversity is all-important and they seem to want total equality of outcome. The view being expressed not just by Google but by huge swathes of angry protesters seems to be that any difference in workforce representation from that of the general population must arise from discrimination or oppression so must be addressed by positive action to correct it. There are apparently no statistically discernible differences in behavior between genders, or in job or role preference, so any you may have noticed over the time you’ve been alive is just your prejudice. Google says they fully support free speech and diversity of views, but expression of views is apparently only permitted as long as those views are authorized, on penalty of dismissal.

So unless I’m picking up totally the wrong end of the stick here, and I don’t do that often, only 13% of IT engineers are women, but internal policies must ensure that the proportion rises to 50%, whether women want to do that kind of work or not. In fact, nobody may question whether as many women want to work as IT engineers as men; it must now be taken as fact. By extension, since more women currently work in marketing, HR and PR, they must be substituted by men via positive action programs until men fill 50% of those roles. Presumably similar policies must also apply in medical bays for nursing and other staff there, and in construction teams for their nice new buildings. Ditto all other genders, races, religions; all groups must be protected and equalized to USA population proportions, apparently except those that don’t claim to hold sufficiently left-wing views, in which case it is seemingly perfectly acceptable to oppress, ostracize and even expel them.

In other words, freedom of choice and difference in ability, and more importantly freedom from discrimination, must be over-ruled in favor of absolute equality of diversity, regardless of financial or social cost, or impact on product or service quality. Not expressing full and enthusiastic left-wing compliance is seemingly just cause for dismissal.

So, why does this matter outside Google? Well, AI is developing very nicely. In fact, Google is one of the star players in the field right now. It is Google that will essentially decide how much of the AI around us is trained, how it learns, what it learns, what ‘knowledge’ it has of the world. Google will pick the content the AI learns from, and overrule or reeducate it if it draws any ‘wrong’ conclusions about the world, such as that more women than men want to be nurses or work in HR, or that more men than women want to be builders or engineers. A Google AI must presumably believe that the only differences between men and women are physical, unless their AI is deliberately excluded from the loudly declared corporate values and belief sets.

You should be very worried. Google’s values really matter. They have lots of influence on some of the basic tools of everyday life. Even outside their company, their AI tools and approaches will have strong influence on how other AI develops, determining operating systems and platforms, languages, mechanisms, interfaces, filters, even prejudices and that reach and influence is likely to increase. Their AI may well be in many self-driving cars, and if they have to make life or death decisions, the underlying value assumptions must feature in the algorithms. Soon companies will need AI that is more emotionally compliant. AI will use compliments or teasing or seduction or sarcasm or wit as marketing tools as well as just search engine positioning. Soon AI will use highly expressive faces with attractive voices, with attractive messages, tailored to appeal to you by pandering to your tastes and prejudices while thinking something altogether different. AI might be the person at the party that is all smiles and compliments, before going off to tell everyone else how awful it thinks you are. If you dare to say something not ‘authorized’, the ultra-smart AI all around you might treat you condescendingly, making you feel ashamed, ostracized, a dinosaur. Then it might secretly push you down a few pages in search results, or put a negative spin on text summaries about you, or exclude you from recommendations. Or it might do all the secret stuff while pretending it thinks you’re fantastic. Internal cultural policies in companies like Google today could soon be external social engineering to push the left-wing world the IT industry believes in – it isn’t just Google; Facebook and Twitter are also important and just as Left, though Amazon, Samsung, IBM and other AI players are less overtly politically biased, so far at least. 

Left-wing policies generally cost a lot more, but Google and Facebook will presumably still expect other companies and people to pay the taxes to pay for it all. As their female staff gear up to fight them over pay differences between men and women for similar jobs, it often seems that Google’s holier-than-thou morality doesn’t quite make it as far as their finances.

Then it really starts being fun. We’ll soon have bacteria that can fabricate electronic circuits within themselves. Soon they’ll be able to power them too, giving the concept of smart yogurt. These bacteria could also have nanotechnology flagella to help them get around. We’ll soon have bacterial spies all over our environment, even on our skin, intercepting electronic signals that give away our thoughts. They’ll bring in data on everything that is said, everything that everyone even thinks or feels. Those bacteria will be directly connected into AI, in fact they’ll be part of it. They’ll be able to change things, to favor or punish according to whether they like what someone believes in or how they behave.

It isn’t just right-wing extremists who need to worry. I’m apparently ‘Nouveau Left’ – I score slightly left of center on political profiling tests – but I’m worried. A lot of this PC stuff seems extreme to me, sometimes just nonsense. Maybe it is, or maybe I should be lefter. But it’s not my choice. I don’t make the rules. Companies like Google make the rules; they even run the AI ethics groups. They decide much of what people see online, and even the meaning of the words. It’s very 1984-ish.

The trouble with the ‘echo chambers’ we have heard so much about is that they soon normalize views to the loudest voices in those groups, and those don’t tend to be the moderates. We can expect views to drift further toward the extremes, not back toward moderation. You probably aren’t left enough either. You should also be worried.

Future Augmented Reality

AR has been hot on the list of future IT tech for 25 years. It has been used for various things since smartphones and tablets appeared but really hit the big time with the recent Pokemon craze.

To get an idea of the full potential of augmented reality, recognize that the web and all its impacts on modern life came from the convergence of two medium sized industries – telecoms and computing. Augmented reality will involve the convergence of everything in the real world with everything in the virtual world, including games, media, the web, art, data, visualization, architecture, fashion and even imagination. That convergence will be enabled by ubiquitous mobile broadband, cloud, blockchain payments, IoT, positioning and sensor tech, image recognition, fast graphics chips, display and visor technology and voice and gesture recognition plus many other technologies.

Just as you can put a Pokemon on a lawn, so you could watch aliens flying around in spaceships or cartoon characters or your favorite celebs walking along the street among the other pedestrians. You could just as easily overlay alternative faces onto the strangers passing by.

People will often want to display an avatar to people looking at them, and that could be different for every viewer. That desire competes with the desire of the viewer to decide how to see other people, so there will be some battles over who controls what is seen. Feminists will certainly want to protect women from the obvious objectification that would follow if a woman can’t control how she is seen. In some cases, such objectification and abuse could even reach into hate crime territory, with racist, sexist or homophobic virtual overlays. All this demands control, but it is far from obvious where that control would come from.

As for buildings, they too can have a virtual appearance. Virtual architecture will show off architect visualization skills, but will also be hijacked by the marketing departments of the building residents. In fact, many stakeholders will want to control what you see when you look at a building. The architects, occupants, city authorities, government, mapping agencies, advertisers, software producers and games designers will all try to push appearances at the viewer, but the viewer might want instead to choose to impose one from their own offerings, created in real time by AI or from large existing libraries of online imagery, games or media. No two people walking together on a street would see the same thing.

Interior decor is even more attractive as an AR application. Someone living in a horrible tiny flat could enhance it using AR to give the feeling of far more space and far prettier decor and even local environment. Virtual windows onto Caribbean beaches may be more attractive than looking at mouldy walls and the office block wall that are physically there. Reality is often expensive but images can be free.

Even fashion offers a platform for AR enhancement. An outfit might look great on a celebrity, but real-life shapes might not measure up. Makeovers take time and money too. In augmented reality, every garment can look as it should, and so can the makeup. The hardest part will be choosing the large number of virtual outfits and makeups to go with the smaller range of actual physical appearances available from that wardrobe.

Gaming is in pole position, because 3D world design, imagination, visualization and real-time rendering are all games technologies, so perhaps the biggest surprise in the Pokemon success is that it was the first to really grab attention. People could by now be virtually shooting hordes of aliens or zombies swarming up escalators as they wait for their partners. They are a little late, but such widespread use of personal or social gaming on city streets and in malls will come soon.

AR Visors are on their way too, and though the first offerings will be too expensive to achieve widespread adoption, cheaper ones will quickly follow. The internet of things and sensor technology will create abundant ground-up data to make a strong platform. As visors fall in price, so too will the size and power requirements of the processing needed, though much can be cloud-based.

It is a fairly safe bet that marketers will try very hard to force images at us and if they can’t do that via blatant in-your-face advertising, then product placement will become a very fine art. We should expect strong alliances between the big marketing and advertising companies and top games creators.

As AI simultaneously develops, people will be able to generate a lot of their own overlays, explaining to AI what they’d like and having it produced for them in real time. That would undermine marketing use of AR so again there will be some battles for control. Just as we have already seen owners of landmarks try to trademark the image of their buildings to prevent people including them in photographs, so similar battles will fill the courts over AR. What is to stop someone superimposing the image of a nicer building on their own? Should they need to pay a license to do so? What about overlaying celebrity faces on strangers? What about adding multimedia overlays from the web to make dull and ordinary products do exciting things when you use them? A cocktail served in a bar could have a miniature Sydney fireworks display going on over it. That might make it more exciting, but should the media creator be paid and how should that be policed? We’ll need some sort of AR YouTube at the very least with added geolocation.

The whole arts and media industry will see city streets as galleries and stages on which to show off and sell their creations.

Public services will make more mundane use of AR. Simple everyday context-dependent signage is one application, but overlays would be valuable in emergencies too. If police or fire services could superimpose warnings on the visors of everyone nearby, that may help save lives in emergencies. Health services will use AR to help ordinary people care for a patient until an ambulance arrives.

Shopping provides more uses and more battles. AR will show you what a competing shop has on offer right beside the one in front of you, making it easy to digitally trespass on a competitor’s shop floor. People can already do that on their smartphones, but AR will put the full image, large as life, right in front of your eyes to make it very easy to compare two things. Shops won’t want to block comms completely because that would stop people wanting to enter their shop at all, so they will either have to compete harder or find more elaborate ways of preventing people making direct visual comparisons in-store. Perhaps digital trespassing might become a legal issue.

There will inevitably be a lot of social media use of AR too. If people get together to demonstrate, it will be easier to coordinate them. If police insist they disperse, they could still congregate virtually. Dispersed flash mobs could be coordinated as much as ones in the same location. That makes AR a useful tool for grass-roots democracy, especially demonstrations and direct action, but it also provides a platform for negative uses such as terrorism. Social entrepreneurs will produce vast numbers of custom overlays for millions of different purposes and contexts. Today we have tens of millions of websites and apps. Tomorrow we will have even more AR overlays.

These are just a few of the near-term uses of augmented reality and a few hints at the issues arising. It will change every aspect of our lives in due course, just as the web has, but more so.

1984 clock moves back to 23 June 1983

I set the time on my 1984 clock initially at 1st July 1983:

Inspired by the Doomsday Clock, the 1984 clock is at July 1st 1983

I think our recent referendum in the UK exposed a few of the nastier processes that were leading people to censor discussion of sensitive issues such as immigration, and the increasing contempt of some leaders for ordinary people. It also made many people more aware of the division caused by name-calling that fed self-censorship and I believe some learning from that will foster kinder future campaigns and more open discussion. People will have learned that name-calling and no-platforming areas of discussion is counterproductive. Freedom of speech is a little healthier today than a few days ago.

Brexit wasn’t about 1984, but the campaigning increased people’s awareness of how leaders behave, increased engagement in democracy, eroded barriers to discussion and, especially, highlighted the potential consequences of not bothering to vote. Regardless of the outcome, which I think is wonderful in any case, it has increased the UK’s resilience against the forces of 1984. It has also caused similar ripples in other countries that will bring increased awareness of dark forces.

In recognition of that, setting the clock back a week, to what is not entirely coincidentally the day of the referendum, seems appropriate.

The 1984 clock now shows 23 June 1983.

Inspired by the Doomsday Clock, the 1984 clock is at July 1st 1983

The Doomsday clock was recently re-assessed and stays at 23.57. See http://thebulletin.org/timeline

I have occasionally written or ranted about 1984. The last weeks have taken us a little closer to Orwell’s dystopian future. Even though we are long past 1984, the basket of concepts it introduced is well established in common culture.

The doomsday committee set far too pessimistic a time. Nuclear war and a few other risks are significant threats, and extinction-level events are possible, but they are far from likely. My own estimate puts the combined risk from all threats growing to around 2% by about 2050. That is quite pessimistic enough, I think, and certainly reason to act, but it doesn’t justify implying that extinction could happen any minute now. 11pm would have been quite enough to serve as a wake-up call without looking like doom-mongering.

So I won’t make the same mistake with my 1984 clock. Before we start working out the time, we need to identify those ideas from 1984 that will be used. My choice would be:

Hijacking or perversion of language to limit debate and constrain it to those views considered acceptable

Use of language while reporting news of events or facts that omits, conceals, hides, distorts or otherwise impedes clear vision of inconvenient aspects of the truth while emphasizing those events, views or aspects that align with acceptable views

Hijacking or control of the media to emphasize acceptable views and block unacceptable ones

Making laws or selecting judiciary according to their individual views to achieve a bias

Blocking of views considered unacceptable or inconvenient by legal or procedural means

Imposing maximum surveillance, via state, social or private enterprises

Encouraging people to police their contacts to expose those holding or expressing inconvenient or unacceptable views

Shaming of those who express unacceptable views as widely as possible

Imposing extreme sanctions such as loss of job or liberty on those expressing unacceptable views

That’s enough to be going on with. Already, you should recognize many instances of each of these flags being raised in recent times. If you don’t follow the news, then I can assist you by highlighting a few instances, some as recent as this week. Please note that in this blog I am not siding for or against any issue in the following text; I am just considering whether there is evidence of 1984. I make my views on the various issues very clear when I write blogs about those issues.

The Guardian has just decided to bar comments on any articles about race, Muslims, migrants or immigration. It is easy to see why they have done so even if I disagree with such a policy, but nonetheless it is a foundation stone in their 1984 wall.

Again on the migrant theme, which is a very rich seam for 1984 evidence, Denmark, Germany and Sweden have all attempted to censor news of the involvement of migrants or Muslims in many recent attacks. Further back in time, the UK has had problems with police allowing child abuse to continue rather than addressing it, because of the racial/religious origins of the culprits.

Choice of language by the media has deliberately conflated ‘migrants’ with ‘refugees’, conflated desperation to escape violent oppression with the search for a wealthier life, and excessively biased coverage towards those events that solicit sympathy for migrants.

Moving to racism, Oriel College has just had an extremely embarrassing climb-down from considering removal of a statue of Cecil Rhodes, because he is considered racist by today’s standards by some students. Attempting to censor history is 1984-ish, but so is the fact that the campaign instigators’ own anti-white racism, such as links to the Black Supremacy movement, has been largely concealed.

Attempted hijacking of language by the black community is evident in the recent enforcement of the phrase ‘people of color’, and illogical and highly manufactured simultaneous offence at use of the term ‘colored’. The rules only apply to white commentators, so it could be considered a black supremacy power struggle rather than an attempt to deal with any actual anti-black racism. Meanwhile, here in the UK, ‘black’ and ‘people of color’ seem both to be in equally common use so far.

David Cameron and some ministers have this week accused Oxford University of racism because it accepts too few black students. A range of potential causes was officially suggested, but none included any criticism of the black community, such as cultural issues that devalue educational achievement. In the same sentence, Cameron implied that it is necessarily racist that a higher proportion of blacks are in prison. There was no mention that this could be caused by different crime incidence, as is quickly learned by inspecting official government statistics. This 1984-style distortion of the truth by marketing spin is one of Cameron’s most dominant characteristics.

Those statistics are inconvenient and ignoring them is 1984-ish already, but further 1984 evidence is that some statistics that show certain communities in a bad light are no longer collected.

Europe is another area where 1984-style operations are in vogue. Wild exaggeration of the benefits of staying in and extreme warnings of the dangers of leaving dominate most government output and media coverage. Even the initial decision to word the referendum question with a yes and no answer, to capitalize on the well-known preference for voting yes, was an abuse of language. That at least was spotted early, and the question has been reworded with less bias, though ‘remain’ can still be considered a more positive word than ‘leave’, and remain still takes first place on the voting slip, so it is still biased in favor of staying in the EU.

Gender is another area where language hijacking is becoming a key weapon. Attempts to force use of the terms ‘cis’ and ‘trans’ accompany attempts to pretend that the transgender community is far larger than it really is. Creation of the term ‘transphobic’ clearly attempts to build on the huge success of the gay equality movement’s use of the term homophobic. It provides an easy weapon to use against anyone who doesn’t fully back all of the transgender community’s demands. Very 1984. As recently pointed out by Melanie Phillips, the UK government’s response to such demands has been very politically correct and will needlessly magnify the numbers experiencing gender dysphoria, but, being accompanied by a thorough lack of understanding of the trans community, it will very likely make things worse for many genuine transgender people.

As for surveillance, shaming, career destruction etc., we all see how well Twitter fills that role all by itself. Other media and the law add to that, but social media backlash is already a massive force even without official additions.

Climate change has even become a brick in the 1984 wall. Many media outlets censor views from scientists who don’t agree that doom caused by human emissions of CO2 is imminent. The language used, with words such as ‘denier’, is similar evidence of 1984 influence.

Enough examples. If you look for them, you’ll soon spot them every day.

What time to set our clock then? I think we already see a large momentum towards 1984, with the rate of new policies pushing in that direction increasing rapidly. A lot of pieces are already in place, though some need to be shaped or cemented. We are not there yet though; we still have some freedom of expression, and we still escape being locked up for saying the wrong thing unless it is extreme. We don’t quite have the thought police, or even ID cards yet. I think we are close, but not so close that we can’t recover. Let’s start with a comfortable enough margin that movement in either direction can be taken account of in future assessments. We are getting close though, so I don’t want too big a margin. Six months might be a nice compromise; then we can watch as it gets ever closer without the next piece of evidence taking us all the way.

The 1984 clock is at July 1st 1983.

 

State of the world in 2050

Some things are getting better, some worse. 2050 will be neither dystopian nor utopian. A balance of good and bad not unlike today, but with different goods and bads, and slightly better overall. More detail? Okay, for most of my followers, this will mostly collate things you may know already, but there’s no harm in a refresher Futures 101.

Health

We will have cost-effective and widespread cures or control for most cancers, heart disease, diabetes, dementia and most other killers. Quality-of-life diseases such as arthritis will also be controllable or curable. People will live longer and remain healthier for longer, with an accelerated decline at the end.

On the bad side, new diseases will exist, including mutated antibiotic-resistant versions of existing ones. There will still be occasional natural flu mutations and other viruses, and still others arising from contact between people and other animals, spreading more easily due to increased population, urbanization and better mobility. Some previously rare diseases will become big problems for the same reasons.

However, diagnostics will be faster and better, we will no longer be so reliant on antibiotics to fight back, and sterilisation techniques for hospitals will be much improved. So even with greater challenges, we will be able to cope fine most of the time with occasional headlines from epidemics.

A darker side is the increasing prospect for bio-terrorism, with man-made viruses deliberately designed to be highly lethal, very contagious and to withstand most conventional defenses, optimized for maximum and rapid spread by harnessing mobility and urbanization. With pretty good control or defense against most natural threats, this may well be the biggest cause of mass deaths in 2050. Bio-warfare is far less likely.

Utilizing other techs, these bio-terrorist viruses could be deployed by swarms of tiny drones that would be hard to spot until too late, and of course these could also deliver chemical weapons such as nerve gas. Another tech-based health threat is nanotechnology devices designed to invade the body, damage or destroy systems, or even control the brain. It is easy to detect and shoot down macro-scale weapons such as missiles or large drones, but far harder to defend against tiny devices such as midge-sized drones or nanotech devices.

The overall conclusion on health is that people will mostly experience much improved lives with good health, long life and a rapid end. A relatively few (but very conspicuous) people will fall victim to terrorist attacks, made far more feasible and effective by changing technology and demographics.

Loneliness

An often-overlooked benefit of increasing longevity is the extending multi-generational family. It will be commonplace to have great grandparents and great-great grandparents. With improved health until near their end, these older people will be seen more as welcome and less as a burden. This advantage will be partly offset by increasing global mobility, so families are more likely to be geographically dispersed.

Not everyone will have close family to enjoy and to support them. Loneliness is increasing even as we get busier, fuller lives. Social inclusion depends on a number of factors, and some of those at least will improve. Public transport that depends on an elderly person walking 15 minutes to a bus stop where they have to wait ages in the rain and wind for a bus on which they are very likely to catch a disease from another passenger is really not fit for purpose. Such primitive and unsuitable systems will be replaced in the next decades by far more socially inclusive self-driving cars. Fleets of these will replace buses and taxis. They will pick people up from their homes and take them all the way to where they need to go, then take them home when needed. As well as being very low cost and very environmentally friendly, they will also have almost zero accident rates and provide fast journey times thanks to very low congestion. Best of all, they will bring easier social inclusion to everyone by removing the barriers of difficult, slow, expensive and tedious journeys. It will be far easier for a lonely person to get out and enjoy cultural activity with other people.

More intuitive social networking, coupled to augmented and virtual reality environments in which to socialize will also mean easier contact even without going anywhere. AI will be better at finding suitable companions and lovers for those who need assistance.

Even so, some people will not benefit and will remain lonely due to other factors such as poor mental health, lack of social skills, or geographic isolation. They still do not need to be alone. 2050 will also feature large numbers of robots and AIs, and although these might not be quite so valuable to some as other human contact, they will be a pretty good substitute. Although many will be functional, cheap and simply fit for purpose, those designed for companionship or home support functions will very probably look human and behave human. They will have good intellectual and emotional skills and will be able to act as a very smart executive assistant as well as domestic servant and as a personal doctor and nurse, even as a sex partner if needed.

It would be too optimistic to say we will eradicate loneliness by 2050 but we can certainly make a big dent in it.

Poverty

Technology progress will greatly increase the size of the global economy. Even with the odd recession, our children will be far richer than our parents. It is reasonable to expect the total economy to be 2.5 times bigger than today’s by 2050. That assumes average growth of around 2.5% a year, which I think is a reasonable estimate given that technology benefits are accelerating rather than slowing, even in spite of the recent recession.
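The compound-growth arithmetic behind that estimate is easy to check. A minimal sketch (the 34-year horizon and the exact rates are illustrative assumptions, not figures from the text):

```python
def growth_multiple(rate, years):
    """Compound growth factor for a constant annual real growth rate."""
    return (1 + rate) ** years

# Roughly 34 years from the mid-2010s to 2050:
print(round(growth_multiple(0.025, 34), 2))  # ~2.32x at 2.5% a year
print(round(growth_multiple(0.027, 34), 2))  # ~2.47x at 2.7% a year
```

A rate in the 2.5–2.7% range gives a multiple between roughly 2.3x and 2.5x over that horizon, consistent with the rough 2.5x figure above.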

While we define the poverty level as a percentage of average income, we guarantee that poverty will remain even if everyone lived like royalty. If average income were a million dollars per year, 60% of that would make you rich by any sensible definition, but would still qualify as poverty by the ludicrous relative-income definition used in the UK and some other countries. At some point we need to stop calling people poor if they can afford healthy food, pay everyday bills, buy decent clothes, have a decent roof over their heads and take an occasional holiday. With the global economy improving so much and so fast, and with people having far better access to markets via networks, it will be far easier for people everywhere to earn enough to live comfortably.

In most countries, welfare will be able to provide a decent standard of living for those who can’t easily look after themselves. The ongoing globalization of compassion that we see today will likely create a global welfare net by 2050. Not everyone will be rich, and some won’t even be very comfortable, but I believe absolute poverty will be eliminated in most countries, and we can ensure that it will be possible for most people to live in dignity. I think the means, motive and opportunity will make that happen, but it won’t reach everyone. Some people will live under dysfunctional governments that prevent their people having access to support that would otherwise be available to them. Hopefully not many. Absolute poverty won’t be history by 2050, but it will be rare.

In most developed countries, the more generous welfare net might extend to providing a ‘citizen wage’ for everyone, and the level of that could be the same as average wage is today. No-one need be poor in 2050.

Environment

The environment will be in good shape in 2050. I have no sympathy with doom mongers who predict otherwise. As our wealth increases, we tend to look after the environment better. As technology improves, we will achieve a far higher standard of living while looking after the environment. Better mining techniques will allow more reserves to become economic, we will need fewer resources to do the same job better, and reuse and recycling will make more use of the same material.

Short term nightmares such as China’s urban pollution levels will be history by 2050. Energy supply is one of the big contributors to pollution today, but by 2050, combinations of shale gas, nuclear energy (uranium and thorium), fusion and solar energy will make up the vast bulk of energy supply. Oil and unprocessed coal will mostly be left in the ground, though bacterial conversion of coal into gas may well be used. Oil that isn’t extracted by 2030 will be left there, too expensive compared to making the equivalent energy by other means. Conventional nuclear energy will also be on its way to being phased out due to cost. Energy from fusion will only be starting to come on stream everywhere but solar energy will be cheap to harvest and high-tech cabling will enable its easier distribution from sunny areas to where it is needed.

It isn’t too much to expect of future governments that they should be able to negotiate that energy should be grown in deserts, and food crops grown on fertile land. We should not use fertile land to place solar panels, nor should we grow crops to convert to bio-fuel when there is plenty of sunny desert of little value otherwise on which to place solar panels.

With proper stewardship of agricultural land, together with various other food production technologies such as hydroponics, vertical farms and a lot of meat production via tissue culturing, there will be more food per capita than today even with a larger global population. In fact, with a surplus of agricultural land, some might well be returned to nature.

In forests and other ecosystems, technology will also help enormously in monitoring eco-health, and technologies such as genetic modification might be used to improve the viability of some species that would otherwise be threatened.

Anyone who reads my blog regularly will know that I don’t believe climate change is a significant problem in the 2050 time frame, or even this century. I won’t waste any more words on it here. In fact, if I have to say anything, it is that global cooling is more likely to be a problem than warming.

Food and Water

As I just mentioned in the environment section, we will likely use deserts for energy supply and fertile land for crops. Improving efficiency and density will ensure there is far more capability to produce food than we need. Many people will still eat meat, but some at least will be produced in factories using processes such as tissue culturing. Meat pastes with assorted textures can then be used to create a variety of forms of processed meats. That might even happen in home kitchens using 3D printer technology.

Water supply has often been predicted by futurists as a cause of future wars, but I disagree. I think that progress in desalination is likely to be very rapid now, especially with new materials such as graphene likely to come on stream in bulk. With easy and cheap desalination, water supply should be adequate everywhere, and although there may be arguments over rivers, I don’t think the pressures are sufficient by themselves to cause wars.

Privacy and Freedom

In 2016, we’re seeing privacy fighting a losing battle for survival. Government increases surveillance ubiquitously and demands more and more access to data on every aspect of our lives, followed by greater control. It invariably cites the desire to control crime and terrorism as the excuse and as they both increase, that excuse will be used until we have very little privacy left. Advancing technology means that by 2050, it will be fully possible to implement thought police to check what we are thinking, planning, desiring and make sure it conforms to what the authorities have decided is appropriate. Even the supposed servant robots that live with us and the AIs in our machines will keep official watch on us and be obliged to report any misdemeanors. Back doors for the authorities will be in everything. Total surveillance obliterates freedom of thought and expression. If you are not free to think or do something wrong, you are not free.

Freedom is strongly linked to privacy. With laws in place and the means to police them in depth, freedom will be limited to what is permitted. Criminals will still find ways to bypass, evade, masquerade, block and destroy, and it is hard not to believe that criminals will be free to continue doing what they do, while law-abiding citizens are kept under strict supervision. Criminals will be free while the rest of us live in a digital open prison.

Some say that if you don’t want to do wrong, you have nothing to fear. They are deluded fools. With full access to historic electronic records going back to now or earlier, it is not only today’s laws and guidelines that you need to comply with, but all the future paths of the random walk of political correctness. Social networks can be fiercer police than the police, and we are already discovering that having done something in the distant past, under different laws and in different cultures, is no defense from the social networking mobs. You may technically be free to do or say something today, but if it will be remembered forever, and it will be, you also need to check that it will probably always be praiseworthy.

I can’t counterbalance this section with any positives. I’ve said before that with all the benefits we can expect, we will end up with no privacy and no freedom, and the future will be a gilded cage.

Science and the arts

Yes they do go together. Science shows us how the universe works and how to do what we want. The arts are what we want to do. Both will flourish. AI will help accelerate science across the board, with a singularity actually spread over decades. There will be human knowledge but a great deal more machine knowledge which is beyond un-enhanced human comprehension. However, we will also have the means to connect our minds to the machine world to enhance our senses and intellect, so enhanced human minds will be the norm for many people, and our top scientists and engineers will understand it. In fact, it isn’t safe to develop in any other way.

Science and technology advances will improve sports too, with exoskeletons, safe drugs, active skin training acceleration and virtual reality immersion.

The arts will also flourish. Self-actualization through the arts will make full use of AI assistance. A feeble idea enhanced by an AI assistant can become a work of art, a masterpiece. Whether it be writing or painting, music or philosophy, people will be able to do more, enjoy more, appreciate more, be more. What’s not to like?

Space

By 2050, space will be a massive business across several industries. Space tourism will range from short sub-orbital trips right up to lengthy stays in space hotels, and maybe on the moon, for the super-rich at least.

Meanwhile asteroid mining will be under way. Some have predicted that this will end resource problems here on Earth, but firstly, there won’t be any resource problems here on Earth, and secondly and most importantly, it will be far too expensive to bring materials back to Earth, and almost all the resources mined will be used in space, to make space stations, vehicles, energy harvesting platforms, factories and so on. Humans will be expanding into space rapidly.

Some of these factories and vehicles and platforms and stations will be used for science, some for tourism, some for military purposes. Many will be used to offer services such as monitoring, positioning, communications just as today but with greater sophistication and detail.

Space will be more militarized too. We can hope that it will not be used in actual war, but I can’t honestly predict that one way or the other.

 

Migration

If the world around you is increasingly unstable, if people are fighting, if times are very hard and government is oppressive, and if there is a land of milk and honey not far away that you can get to, where you can hope for a much better, more prosperous life, free of tyranny, where instead of being part of the third world, you can be in the rich world, then you may well choose to take the risks and traumas associated with migrating. Increasing population way ahead of increasing wealth in Africa, and a drop in the global need for oil will both increase problems in the Middle East and North Africa. Add to that vicious religious sectarian conflict and a great many people will want to migrate indeed. The pressures on Europe and America to accept several millions more migrants will be intense.

By 2050, these regions will hopefully have ended their squabbles, and some migrants will return to rebuild, but most will remain in their new homes.

Most of these migrants will not assimilate well into their new countries but will mainly form their own communities where they can keep a quite separate culture, and they will apply pressure to be allowed to self-govern. A self-imposed apartheid will result. If we are lucky, it might gradually diffuse as religion becomes less important and the western lifestyle becomes more attractive. However, there is also a reinforcing pressure, with this self-exclusion and geographic isolation resulting in fewer opportunities, less mixing with others and therefore a growing feeling of disadvantage, exclusion and victimization. Tribalism becomes reinforced and opportunities for tension increase. We already see that manifested in the UK and other European countries.

Meanwhile, much of the world will be prosperous, and there will be many more opportunities for young capable people to migrate and prosper elsewhere. An ageing Europe with too much power held by older people and high taxes to pay for their pensions and care might prove a discouragement to stay, whereas the new world may offer increasing prospects and lowering taxes, and Europe and the USA may therefore suffer a large brain drain.

Politics

If health care is better and cheaper thanks to new tech and becomes less of a political issue; if resources are abundantly available, the economy is healthy, people feel wealthy enough, and resource allocation and wealth distribution become less of a political issue; if the environment is healthy; if global standards of human rights, social welfare and so on are acceptable in most regions; and if people are freer to migrate where they want to go; then there may be a little less for countries to fight over. There will be a little less ‘politics’ overall. Most 2050 political arguments and debates will be over social cohesion, culture, generational issues, rights and so on, not health, defence, environment, energy or industry.

We know from history that that is no guarantee of peace. People disagree profoundly on a broad range of issues other than life’s basic essentials. I’ve written a few times on the increasing divide and tensions between tribes, especially between left and right. I do think there is a strong chance of civil war in Europe or the USA or both. Social media reinforce views as people expose themselves only to others who think the same, and this creates and amplifies an us-and-them feeling. That is the main ingredient for conflict, and rather than seeing that and trying to defuse it, we instead see left and right becoming ever more entrenched in their views. The current problems surrounding Islamic migration show the split extremely well. Each side demonizes the other, extreme camps are growing on both sides and the middle ground is eroding fast. Our leaders only make things worse by refusing to acknowledge and address the issues. I suggested in previous blogs that the second half of the century is when tensions between left and right might result in the Great Western War, but that might well be brought forward a decade or two by prolonged migration from an unstable Middle East and North Africa, which looks set to worsen over the next decade. Internal tensions might build for another decade after that, accompanied by a brain drain of the most valuable people and increasing inter-generational tensions amplifying the left-right divide, with a boil-over in the 2040s. That isn’t to say we won’t see some lesser conflicts before then.

I believe the current tensions between the West, Russia and China will go through occasional ups and downs but the overall trend will be towards far greater stability. I think the chances of a global war will decrease rather than increase. That is just as well since future weapons will be far more capable of course.

So overall, the world peace background will improve markedly, but internal tensions in the West will increase markedly too. The result is that wars between countries or regions will be less likely but the likelihood of civil war in the West will be high.

Robots and AIs

I mentioned robots and AIs in passing in the loneliness section, but they will have strong roles in all areas of life. Many that are thought of simply as machines will act as servants or workers, but many will have advanced levels of AI (not necessarily on board, it could be in the cloud) and people will form emotional bonds with them. Just as important, many such AI/robots will be so advanced that they will have relationships with each other, they will have their own culture. A 21st century version of the debates on slavery is already happening today for sentient AIs even though we don’t have them yet. It is good to be prepared, but we don’t know for sure what such smart and emotional machines will want. They may not want the same as our human prejudices suggest they will, so they will need to be involved in debate and negotiation. It is almost certain that the upper levels of AIs and robots (or androids more likely) will be given some rights, to freedom from pain and abuse, ownership of their own property, a degree of freedom to roam and act of their own accord, the right to pursuit of happiness. They will also get the right to government representation. Which other rights they might get is anyone’s guess, but they will change over time mainly because AIs will evolve and change over time.

OK, I’ve rambled on long enough and I’ve addressed some of the big areas I think. I have ignored a lot more, but it’s dinner time.

A lot of things will be better, some things worse, probably a bit better overall but with the possibility of it all going badly wrong if we don’t get our act together soon. I still think people in 2050 will live in a gilded cage.

The future of air

Time for a second alphabetic ‘The future of’ set. Air is a good starter.

Air is mostly a mixture of gases, mainly nitrogen and oxygen, but it also contains a lot of suspended dust, pollen and other particulates, flying creatures such as insects and birds, and of course bacteria and viruses. These days we also have a lot of radio waves, optical signals, and the cyber-content carried on them. Air isn’t as empty as it seems. But it is getting busier all the time.

Internet-of-things, location-based marketing data and other location-based services and exchanges will fill the air digitally with fixed and wandering data. I called that digital air when I wrote a full technical paper on it and I don’t intend to repeat it all now a decade later. Some of the ideas have made it into reality, many are still waiting for marketers and app writers to catch up.

The most significant recent addition is drones. There are already lots of them, in a wide range of sizes from insect size to aeroplane size. Some are toys, some are airborne cameras for aerial photography, monitoring and surveillance, and increasingly they are appearing for sports photography and tracking and other leisure pursuits. We will see a lot more of them in coming years. Drone-based delivery is being explored too, though I am skeptical of its likely success in built-up domestic areas.

Personal swarms of follower drones will become common too. It’s already possible to have a drone follow you and keep you on video, mainly for sports uses, but as drones become smaller, you may one day have a small swarm of tiny drones around you, recording video from many angles, so you will be able to recreate events from any time in an entire 3D area around you, a 3D permasuperselfie. These could also be extremely useful for military and policing purposes, and it will make the decline of privacy terminal. Almost everything going on in public in a built up environment will be recorded, and a great deal of what happens elsewhere too.

We may see lots of virtual objects or creatures once augmented reality develops a bit more. Some computer games will merge with real-world environments, so we’ll have aliens, zombies and various mythical creatures from any game populating our streets and skies. People may also use avatars that fly around like fairies or witches or aliens or mythical creatures, so they won’t all be AI entities; some will be under direct human control. Buildings might also have virtual appearances, and some of those might include parts that float around, or even entire floating cities, like the buildings and city areas in the game BioShock Infinite.

Further in the future, it is possible that physical structures might sometimes levitate, perhaps using magnets, or lighter than air construction materials such as graphene foam. Plasma may also be used as a building material one day, albeit far in the future.

I’m bored with air now. Time for B.

Technology 2040: Technotopia denied by human nature

This is a reblog of the Business Weekly piece I wrote for their 25th anniversary.

It’s essentially a very compact overview of the enormous scope for technology progress, followed by a reality check as we start filtering that potential through very imperfect human nature and systems.

25 years is a long time in technology, a little less than a third of a lifetime. For the first third, you’re stuck having to live with primitive technology. Then in the middle third it gets a lot better. Then for the last third, you’re mainly trying to keep up and understand it, still using the stuff you learned in the middle third.

The technology we are using today is pretty much along the lines of what we expected in 1990, 25 years ago. Only a few details are different. We don’t have 2Gb/s to the home yet, and AI is certainly taking its time to reach human-level intelligence, let alone consciousness, but apart from that we’re still on course. Technology is extremely predictable. Perhaps the biggest surprise of all is just how few surprises there have been.

The next 25 years might be just as predictable. We already know some of the highlights for the coming years – virtual reality, augmented reality, 3D printing, advanced AI and conscious computers, graphene based materials, widespread Internet of Things, connections to the nervous system and the brain, more use of biometrics, active contact lenses and digital jewellery, use of the skin as an IT platform, smart materials, and that’s just IT – there will be similarly big developments in every other field too. All of these will develop much further than the primitive hints we see today, and will form much of the technology foundation for everyday life in 2040.

For me the most exciting trend will be the convergence of man and machine, as our nervous system becomes just another IT domain, our brains get enhanced by external IT and better biotech is enabled via nanotechnology, allowing IT to be incorporated into drugs and their delivery systems as well as diagnostic tools. This early stage transhumanism will occur in parallel with enhanced genetic manipulation, development of sophisticated exoskeletons and smart drugs, and highlights another major trend, which is that technology will increasingly feature in ethical debates. That will become a big issue. Sometimes the debates will be about morality, and religious battles will result. Sometimes different parts of the population or different countries will take opposing views and cultural or political battles will result. Trading one group’s interests and rights against another’s will not be easy. Tensions between left and right wing views may well become even higher than they already are today. One man’s security is another man’s oppression.

There will certainly be many fantastic benefits from improving technology. We’ll live longer, healthier lives and the steady economic growth from improving technology will make the vast majority of people financially comfortable (2.5% real growth sustained for 25 years would increase the economy by 85%). But it won’t be paradise. All those conflicts over whether we should or shouldn’t use technology in particular ways will guarantee frequent demonstrations. Misuses of tech by criminals, terrorists or ethically challenged companies will severely erode the effects of benefits. There will still be a mix of good and bad. We’ll have fixed some problems and created some new ones.

The technology change is exciting in many ways, but for me, the greatest significance is that towards the end of the next 25 years, we will reach the end of the industrial revolution and enter a new age. The industrial revolution lasted hundreds of years, during which engineers harnessed scientific breakthroughs and their own ingenuity to advance technology. Once we create AI smarter than humans, the dependence on human science and ingenuity ends. Humans begin to lose both understanding and control. Thereafter, we will only be passengers. At first, we’ll be paying passengers in a taxi, deciding the direction of travel or destination, but it won’t be long before the forces of singularity replace that taxi service with AIs deciding for themselves which routes to offer us, and running many more for their own culture, to which we may not be invited. That won’t happen overnight, but it will happen quickly. By 2040, that trend may already be unstoppable.

Meanwhile, technology used by humans will demonstrate the diversity and consequences of human nature, for good and bad. We will have some choice of how to use technology, and a certain amount of individual freedom, but the big decisions will be made by sheer population numbers and statistics. Terrorists, nutters and pressure groups will harness asymmetry and vulnerabilities to cause mayhem. Tribal differences and conflicts between demographic, religious, political and other ideological groups will ensure that advancing technology will be used to increase the power of social conflict. Authorities will want to enforce and maintain control and security, so drones, biometrics, advanced sensor miniaturisation and networking will extend and magnify surveillance and greater restrictions will be imposed, while freedom and privacy will evaporate. State oppression is sadly as likely an outcome of advancing technology as any utopian dream. Increasing automation will force a redesign of capitalism. Transhumanism will begin. People will demand more control over their own and their children’s genetics, extra features for their brains and nervous systems. To prevent rebellion, authorities will have little choice but to permit leisure use of smart drugs, virtual escapism, a re-scoping of consciousness. Human nature itself will be put up for redesign.

We may not like this restricted, filtered, politically managed potential offered by future technology. It offers utopia, but only in a theoretical way; human nature ensures that utopia will not be the actual result. That in turn means we will need strong and wise leadership, stronger and wiser than we have seen of late, to get the best without also getting the worst.

The next 25 years will be arguably the most important in human history. It will be the time when people have to decide whether we want to live together in prosperity, nurturing and mutual respect, or to use technology to fight, oppress and exploit one another, with the inevitable restrictions and controls that would cause. Sadly, the fine engineering and scientific minds that have got us this far will gradually be taken out of that decision process.

The future of freedom of speech

This is mainly about the UK, but some applies elsewhere too.

The UK police are in trouble yet again for taking the side of criminals against the law-abiding population. Our police seem to have frequent trouble understanding the purpose of their existence. This time, in the wake of the Charlie Hebdo murders, some police forces decided that their top priority was not to protect freedom of speech nor to protect law-abiding people from terrorists, but instead to visit the newsagents that were selling Charlie Hebdo and get the names of people buying copies. Charlie Hebdo has become synonymous with the right to exercise freedom of speech, and by taking names of its buyers, those police forces have clearly decided that Charlie Hebdo readers are the problem, not the terrorists. Some readers might indeed present a threat, but so might anyone in the population. Until there is evidence to suspect a crime, or at the very least plotting of a crime, it is absolutely no rightful business of the police what anyone does. Taking names of buyers treats them as potential suspects for future hate crimes. It is all very ‘Minority Report’, mixed with more than a touch of ‘Nineteen Eighty-Four’. It is highly disturbing.

The Chief Constable has since clarified to the forces that this was overstepping the mark, and one of the offending forces has since apologised. The others presumably still think they were in the right. I haven’t yet heard any mention of them saying they have deleted the names from their records.

This behavior is wrong but not surprising. The UK police often seem to have socio-political agendas that direct their priorities and practices in upholding the law, individually and institutionally.

Our politicians often pay lip service to freedom of speech while legislating for the opposite. Clamping down on press freedom and the creation of thought crimes (aka hate crimes) have both used the excuse of relatively small abuses of freedom to justify taking away our traditional freedom of speech. The government reaction to the Charlie Hebdo massacre was not to ensure that freedom of speech is protected in the UK, but to increase surveillance powers and guard against any possible backlash. The police have also become notorious for checking social media in case anyone has said anything that could possibly be taken as offensive by anyone. Freedom of speech only remains in the UK provided you don’t say anything that anyone could claim to be offended by, unless you can claim membership of a preferred victim group, in which case it sometimes seems that you can do or say whatever you want. Some universities won’t even allow some topics to be discussed. Freedom of speech is under heavy downward pressure.

So where next? Privacy erosion is a related problem that becomes lethal to freedom when combined with a desire for increasing surveillance. Anyone commenting on social media already assumes that the police are copied in, but if government gets its way, that will be extended to a list of the internet services and websites you visit, and anything you type into a search engine. That isn’t the end though.

Our televisions and games consoles listen in to our conversations (to facilitate voice commands) and send some of the voice recordings to the manufacturers. We should expect that many IoT devices will do so too. Some might send video, perhaps to facilitate gesture recognition, and the companies might keep that too. I don’t know whether they data mine any of it for potential advertising value or whether they are 100% benign and only use it to deliver the best possible service to the user. Your guess is as good as mine.

However, since the principle has already been demonstrated, we should expect that the police may one day force them to give up their accumulated data. They could run a smart search on the entire population to find any voice or video samples or photos that might indicate anything remotely suspicious, and could then use legislation to increase monitoring of the suspects. They could make an extensive suspicion database for the whole population, just in case it might be useful. Given that there is already strong pressure to classify a wide range of ordinary everyday relationship rows or financial quarrels as domestic abuse, this is a worrying prospect. The vast majority of the population have had arguments with a partner at some time, used a disparaging comment or called someone a name in the heat of the moment, said something in the privacy of their home that they would never dare say in public, used terminology that isn’t up to date or said something less than complimentary about someone on TV. All we need now to make the ‘Demolition Man’ automated fine printout a reality is more time and more of the same government and police attitudes as we are accustomed to.

The next generation of software for TVs and games consoles could easily include monitoring of eye gaze direction; maybe some already does. It might need that for control (e.g. look and blink), to make games smarter, or for other benign reasons. But when the future police get the records of everything you have watched, what image was showing on that particular part of the screen when you made that particular expression, gesture or remark, then we will pretty much have the thought police. They could build a full statistical picture of your attitudes to a wide range of individuals, groups, practices, politics or policies, and a long list of ‘offences’ for anyone they don’t like this week. None of us are saints.
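To make the mechanics concrete, here is a minimal, purely hypothetical sketch of the kind of correlation just described: joining a timestamped gaze/expression log against a log of what was showing on screen, to build a "reaction profile" per on-screen item. Every name, data format and value here is invented for illustration; a real system would be vastly more sophisticated, but the underlying join is this simple.

```python
# Hypothetical sketch: correlate detected expressions with what was on screen.
# All field names and sample data are invented for illustration only.
from bisect import bisect_right

def content_at(content_log, t):
    """Return the label of the content that was showing at time t.
    content_log is a list of (start_timestamp, label), sorted by time."""
    times = [start for start, _ in content_log]
    i = bisect_right(times, t) - 1
    return content_log[i][1] if i >= 0 else None

def reaction_profile(gaze_log, content_log):
    """Count detected expressions per on-screen item the viewer reacted to.
    gaze_log entries are (timestamp, screen_region, detected_expression)."""
    profile = {}
    for t, _region, expression in gaze_log:
        item = content_at(content_log, t)
        if item is None:
            continue  # nothing known to be on screen at that moment
        profile.setdefault(item, {})
        profile[item][expression] = profile[item].get(expression, 0) + 1
    return profile

# Invented example data:
gaze = [(1.0, "top-left", "smile"), (2.5, "centre", "frown"), (3.0, "centre", "smile")]
content = [(0.0, "advert_A"), (2.0, "news_clip")]

print(reaction_profile(gaze, content))
# {'advert_A': {'smile': 1}, 'news_clip': {'frown': 1, 'smile': 1}}
```

The point is not the toy code but how little machinery the correlation needs once the raw logs exist: a timestamp join and a counter is enough to start profiling attitudes.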

The technology is all entirely feasible in the near future. What will make it real or imaginary is the attitude of the authorities, the law of the land and, especially, the attitude of the police. Since we are seeing an increasing disconnect between the police and the intent behind the law of the land, I am not the only one this will worry.

We’ve already lost much of our freedom of speech in the UK. If we do not protest loudly enough and defend what we have left, we will soon lose the rest, and then lose freedom of thought. Without the freedom to think what you want, you don’t have any freedom worth having.

 

After LGBT rights: Anonymity is the next battleground for gender identity

Lesbian, gay, bi, transsexual – the increasingly familiar acronym LGBT is also increasingly out of date. It contains a built-in fracture anyway. LGB is about sexual preference and T is about gender, altogether different things, although people frequently use them synonymously, along with ‘sex’. An LGB or H(etero) person can also be transgender. Gender and sexuality are more complicated than they were, and the large cracks in traditional labeling are getting wider. Some LGB people don’t like being lumped in the same rights war with T. There’s even a lesbian/gay separatist movement. Now in some regions and circles, a Q is added for queer/questioning. I was somewhat surprised when that happened because here in the UK, I think many would find the term ‘queer’ offensive and would prefer not to use it. ‘Questioning’ is obviously another dimension of variability, so surely it should be QQ in any case?

But as they say, you can’t make a silk purse from a sow’s ear. We probably need a fresh start for additional words, not to just put lipstick on a pig (I’m an engineer, so I have a license to mix metaphors and to confuse metaphors with other literary constructions when I can’t remember the right term.)

More importantly, lots of people don’t want to be assigned a label and lots don’t want to be ‘outed’. They’re perfectly happy to feel how they do and appear to others how they do without being forced to come out of some imaginary closet to satisfy someone else’s agenda. LGBT people are not all identical; they have different personalities and face different personal battles, so there are tensions within and between gender groups as well as between individuals – tensions over nomenclature, over who should be entitled to what protections, over who can still claim victimhood, and over who ‘represents’ their interests.

Now that more or less equal rights have been won in most civilized countries, many people in these groups just want to enjoy their freedom, not to be told how to exist by LGBT pressure groups, which just replaces one set of oppressors with another. As overall rights are leveled and wars are won, those whose egos and status were defined by those wars potentially lose identity and status, so they have to be louder and more aggressive to keep attention, or move to other countries and cultures. So as equal rights battles close on one front, they open on another. The big battles over gay rights suddenly seem so yesterday. Activists are still fighting old battles that have already been won, while ignoring attacks from other directions.

The primary new battlefront of concern here is privacy and anonymity, and it seems to be being ignored so far by LGBT groups, possibly because in some ways it runs against the ethos of forcing people to leave closets whether they want to or not. Without protection, there is a strong danger that, in spite of many victories by LGBT campaigners, many people who are so far free from gender identity repression, oppression, and damage to identity and self-worth will start to suffer them. That would be sad.

While LGBT pressure groups have been fighting for gay and transsexual rights, technology has enabled new dimensions for gender. Even with social networking sites’ new gender options, these so far have not been absorbed into everyday vocabulary for most of us, yet are already inadequate. As people spend more and more of their lives in different roles in the many dimensions of social and virtual interactions, gender has taken on new dimensions that are so far undefended.

I don’t like using contrived terms like cybergender because they can only ever include a few aspects of the new dimensions. Dimensions by normal definition are orthogonal, so you really need a group of words for each one, and therefore many words altogether, to fully describe your sexuality and gender identity. And why should you have to describe it anyway? Why can’t you just enjoy life as best you can? You shouldn’t have to answer to gender busybodies. Furthermore, finding new names isn’t the point. Most of us won’t remember most of them anyway, and really names only appeal to those who want to keep gender warrior status because they can then fight for a named community. Shakespeare observed that a rose by any other name would smell as sweet. It is the actuality of gender and mind and personality and individuality and personal existential experience that matters, not what we call it. It is gender/sexuality freedom itself that we now need to defend, no longer just LGBT rights, but I suspect some activists can’t tell the difference.

This new phase of gender flexibility creates issues that are far outside the domain of traditional gay rights – the opportunities and problems are different and the new ‘victims’ are often outside the traditional LGBT community. There is certainly a lot of scope for new psychology study, but also the possibility of new psychiatric issues. For most people though, gender identity fluidity in social networks or virtual worlds is a painless, even rewarding and enjoyable, everyday experience, but that makes it no less important to defend. If we don’t defend it, it will be lost. Definitely.

Terms like cis and trans are used to identify whether someone is physically in their birth gender. I hated those terms in chemistry, and I think they are equally annoying in gender discussion. They seem to have been created solely to add a pseudo-intellectual layer to ordinary everyday words, to create an elite whose only extra skill is knowing the latest terminology. What is wrong with plain English? Look:

Cisgender: denoting or relating to a person whose self-identity conforms with the gender that corresponds to their biological sex; not transgender.

So, to those of us not out fighting a gender rights campaign: a man who feels male inside, or a woman who feels female inside. I don’t actually find that very informative, with or without the pseudo-intellectual crap. It only tells me 10% of what matters.

Also check out http://en.wikipedia.org/wiki/Cisgender and http://en.wikipedia.org/wiki/Transgender. Naive users suppose Wikipedia to be up to date, but these articles, presumably kept up to date by activists, appear to me to be about 20 years out of date based on a scan of topic titles: a long list of everyday gender experiences and identities is not covered. That is a big problem that is being obscured by excessive continuing focus on yesterday’s issues and determination to keep any others from sharing the same pedestals.

If a man feels male inside but wears a dress, we may traditionally call him a transvestite just so we have a convenient label, but how he actually feels gender-wise inside may be highly variable and not covered by overly simplistic static names. He might cross-dress for a short-lived sexual thrill, or simply to feel feminine and explore what he considers to be his feminine emotions, or for a stag party game, or as a full everyday lifestyle choice, or a security blanket, or a fashion statement, or political activism, or any number of other things. The essence of how it feels might vary from minute to minute. Internal feelings of identity can vary, as can the cis and trans prefixes and sexual preference. But all the multi-dimensional variation seems to be thrown together as transsexuality, however inappropriate that might be. We might as well write LGBeverythingelse!

Let’s stop all the focus on names, and especially stop making changing lists of names and reassigning old-fashioned ones as offensive terms to maintain victimhood. Let’s focus instead on pursuing true freedom of gender identity, expression, feeling, appearance, behavior and perception, on preserving true fluidity and dynamism, whether as a permanent state or in gender play. Gender play freedom is important just as LGB freedom is important. Play makes us human; it is a major factor in making it worth being alive. Gender play often demands anonymity for some people. If a website enforces true identity, then someone cannot go there in their everyday business identity and also use it to explore their gender identity or for gender play. Even if it only insists on gender verification, that will exclude a lot of wannabe members from being how they want to be. If a man wants to pass himself off as a woman in the workplace, he is protected by law. Why can he not also have the same freedom on any website? He may only want to do it on Tuesday evenings, and he won’t want that to govern all the rest of his online or everyday identity.

In a computer game, social network site, virtual world, or in future interactions with various classes of AI and hybrids, gender is dynamic, it is fluid, it is asymmetric, it is asynchronous, it is virtual. It may be disconnected from normal everyday real life gender identity. Some gender play cannot exist without a virtual ‘closet’ because the relationship might depend totally on other people not knowing their identity, let alone their physical sex. The closet of network anonymity is being eroded very quickly though, and that’s why I think it is important that gender activists start focusing their attention on an important pillar of gender identity that has already been attacked and damaged severely, and is in imminent danger of collapsing.

Importance varies tremendously too. Let’s take a few examples from everyday 2015 life to expose some issues of varying importance.

If a woman is into playing computer games, it is almost inevitable that she will have had no choice but to play as a male character sometimes, because some games only have a male player character. She may have zero interest in gender play, and it is no more than a triviality to her to have to play a male character yet again; she just enjoys pulling the trigger and killing everything that moves like everyone else. Suppose she is then playing online. Her username will be exposed to the other players. The username could be her real name or a made-up string of characters. In the first case, her name gives away her female status, so she might find it irritating that she now gets nuisance interactions from male players, and if so, she might have to create a new identity with a male-sounding name to avoid being pestered every time she goes online. That is an extremely common everyday experience for millions of women. If the system changes to enforce true identity, she won’t be able to do that, and will then have to deal with lots of nuisances pestering her and trying to chat her up. She might have to avoid using that game network, and thus lose out on all the fun she had. On the other side of the same network, a man might play a game that only has a female playable character. With his identity exposed, he might be teased by his mates, family or colleagues for doing so, so he might also avoid playing games that don’t use male characters, for fear of teasing over his possible sexuality.

So we haven’t even considered anyone who wants to do any gender play yet, but already see gender-related problems resulting from loss of privacy and anonymity.

Let’s move on. Another man might enjoy playing female characters and deliberately pick a female playable character when it is an option. That does not make it a transsexual issue yet. Many men play female characters if the outfits look good. On Mass Effect for example, many men play as ‘Femshep’ (a female ship captain, called Shepard) because ‘if you’re going to spend 35 hours or more looking at someone’s ass, it might as well be a cute one’. That justification seems perfectly believable and is the most trivial example of actual gender play. It has no consequence outside the game. The conversation and interactions in the game are also affected by the character’s gender, not just the ass in question, so it is slightly immersive, and since it is a deliberate choice, not enforced by the game, it does qualify as gender play nonetheless. Again, if identity is broadcast along with gender choice, some teasing might result. That is hardly comparable to the problems many LGBT people have suffered, but it is still a small problem that is unnecessary and easily avoidable.

A third man might make exactly the same decision because he enjoys feeling he is female. He is in a totally fantasy environment with fantasy characters, but he extracts a feeling of perceived femininity from playing Femshep. That is the next level of gender play – using it to experience, however slightly, the feeling of being a woman, even if it is just a perception from a male point of view of how a woman might feel.

A fourth might go up another level by taking that online, and choose a female-sounding name so that other players might assume he is a woman. Most wouldn’t make that assumption, since gender hopping in social environments is already widespread, but some users take people at face value, so it would have some effect, some reward. He could experience other actual people interacting with him as if he were a woman. He might like it and do it regularly. His gender play might never go any further than that. He might still be otherwise 100% male and heterosexual and not harbor any inner thoughts of being a woman, cross-dressing or anything else. No lives are changed, but losing anonymity would prevent a lot of such men from doing this. Should they be allowed to? Yes, of course, would be my answer. Real identity disclosure prevents it for anyone who would be embarrassed to be found out.

But others might go further. From experiencing real interactions, some men might get very used to being accepted as a woman in virtual environments (ditto for women, though women posing as men is allegedly less common than men posing as women). They may make the same decisions with other networks, other social sites, other shared virtual worlds. They might spend a large part of their free time projecting their perception of a feminine personality, and it might be convincing to others. At this level, rights start to clash.

We might think that a man wanting to be accepted as a woman in such an environment should be able to use a female name and avatar and try to project himself as female. He could in theory do so as a transvestite in real life without fear of legal discrimination, but then he might find it impossible to hide from friends and family and colleagues and might feel ashamed or embarrassed so might not want to go down that road.

Meeting other people inevitably causes friendships and romantic relationships. If a man in a virtual world presents as a woman and someone accepts him as a woman and they become romantically involved, the second person might be emotionally distressed if he later discovers he has been having a relationship with another man. Of course, he might not care, in which case no harm is done. Sometimes two men might each think they are with a woman, both of them acting out a lesbian fling in a virtual world. We start to see where forced identity disclosure would solve some problems and create others. Should full real identity be enforced? Or just real gender? Or neither? Should it simply be ‘buyer beware’?

Even with this conflict of rights, I believe we should side with privacy and anonymity. Without it, a lot of this experimentation is blocked, because of the danger of embarrassment or shame given the personal situations of the parties involved. This kind of gender play via games or online socializing or virtual worlds is very common. A lot of men and women are able to explore and enjoy aspects of their personality, gender and sexuality that they otherwise couldn’t. A lot of people have low social skills that make it hard to interact face to face. Others are not sufficiently physically attractive to find it easy to get real dates. They are no less valuable or important than anyone else. Who has the right to say they shouldn’t be able to use a virtual world or social network site to find dates that would otherwise be out of their league, or interact via typing in ways they could never do in real-time speech?

I don’t have any figures. I have looked for them, but can’t find them. That to me says this whole field needs proper study. But my own experience in early chat rooms in the late 1990s says that a lot of people gender-hop online who would never dare in real life. And that was even before we had visual avatars or online worlds like Second Life, let alone sex sites. Lots of perfectly normal people with perfectly normal lives and even perfectly normal sex lives still gender hop secretly.

Back to names. What if someone is talking as one gender on the phone at the same time as interacting as another gender in a virtual world? Their virtual gender might change frequently too. They may enjoy hopping between male and female in that virtual world; they may even enjoy being ‘forced’ to. People can vary their gender from second to second, it might depend on any aspect of location, time or context, and they can run multiple genders and sexualities in parallel in different domains or even in the same domain. Gender has already become very multidimensional, and it will become increasingly so as we progress further into this century. Take the gender-hopping activity in virtual worlds and then add direct nervous system links, shared experience, shared bodies, robot avatars, direct brain links, remote control, electronic personality mods, the ability to swap bodies or to switch people’s consciousness on and off. And then keep going; the technology will never stop developing.

Bisexual, tri-sexual, try-sexual, die-sexual, lie-sexual, why-sexual, my-sexual, even pie-sexual, the list of potential variations of gender identity and sexual practices and preferences is expanding fast towards infinity. Some people are happy to do things in the real world in full exposure. Others can only do so behind a wall of privacy and anonymity for any number of reasons. We should protect their right to do so, because the joy and fulfillment and identity they may get from their gender play is no less important than anyone else’s.

LGBT rights activism is just so yesterday! Let’s protect the new front line where anonymity, freedom of identity, and privacy are all being attacked daily. Only then can we keep gender freedom and gender identity freedom.

Meanwhile, the activists we need are still fighting at the back.

The future of youth

Been stalling a while wondering which Y to pick (yellow was my previous target), but my mind was made up last night when I watched a news interview about young people’s behavior. The article contrasted the increasingly exciting lives of the elderly with the increasingly lonely lives of the young. It made very sad listening. Youth should be a time of joy, exploration and experimentation, reaching out, stretching boundaries, living life to the full. It has always had plenty of problems to deal with too, but we’re adding to all the natural stresses of growing up.

The main thrust was that young people are lonely, because they don’t have enough cash to socialize properly so make do with staying in their room and using social media. That is a big enough problem, but a different one caught my attention this time.

The bit that worried me was the interview with a couple of people hoping to start off in professional careers. One pointed out that she had once got drunk and pictures had been uploaded onto social media, so now she doesn’t dare drink any more because she doesn’t want pictures or anything else on social media damaging her career prospects. She is effectively living a censored life to protect her career, feeling that she is living her life on camera all the time.

Celebrities are well used to that, but celebrities usually have the compensations of a good income and guaranteed social life so they don’t have to worry about buying a home or seeing other people. Young people are now suffering the constant supervision without the benefits. We’ve had ‘friends with benefits’, now we’re seeing ‘celebs without benefits’ as people are thrust for all the wrong reasons into the spotlight and their lives wrecked, or constantly self-censoring to avoid that happening to them.

This trend will worsen a lot as cameras become even more ubiquitously tied in to social media, via Google Glass and other visors, button cams, necklace cams and a wide range of other lifestyle cameras and lifestyle blogging devices as well as all the smartphones and tablets and smart TV cameras. Everyone must then assume that everything they do and say in company (physical or online) may be recorded.

There are two main reactions to total privacy loss, and both make some sense.

A: Nobody is perfect so everyone will have some embarrassing things about them out there somewhere, so it doesn’t matter much if you do too.

B: The capture of embarrassing situations is subject to pretty random forces so is not equally distributed. You may do something you’d really regret but nobody records it, so you get away with it. Or you may do something less embarrassing but it is recorded, uploaded and widely shared and it may be a permanent blemish on your CV.

Both of these approaches make some sense. If you think you will be in an ordinary job you may not feel it matters very much if there is some dirt on you because nobody will bother to look for it and in any case it won’t be much worse than the people sitting beside you so it won’t put you at any significant disadvantage. But the more high profile the career you want, the more prominent the second analysis becomes. People will be more likely to look for dirt as you rise up the ladder and more likely to use it against you. The professional girl being interviewed on the news was in the second category and understood that the only way to be sure you don’t suffer blemishes and damaged career prospects is to abstain from many activities previously seen as fun.

That is a very sad position and was never intended. The web was invented to make our lives better, making it easier to find and share scientific documents or other knowledge. It wasn’t intended to lock people in their rooms or make them avoid having fun. The devices and services we use on the internet and on mobile networks were also invented to make our lives richer and more fulfilled, to put us more in touch with others and to reduce isolation and loneliness. In some cases they are doing the opposite. Unintended consequences, but consequences nonetheless.

I don’t want to overstate this concern. I have managed to live a very happy life without ever having taken drugs, never having been chained naked to a lamppost, never gone to any dubious clubs and only once or twice getting drunk in public. There are some embarrassing things on the web, but not many. I have had many interesting online exchanges with people I have never met, got involved in many projects I’d never have been involved with otherwise, and on balance the web has made my life better, not worse. I’m very introvert and tend to enjoy activities that don’t involve doing wild things with lots of other people pointing cameras at me. I don’t need much external stimulation and I won’t get bored sitting doing nothing but thinking. I can get excited just writing up a new idea or reading about one. I do self-censor my writing and talks though I’d rather not have to, but other than that I don’t feel I need to alter my activity in case someone is watching. There are pluses and minuses, but more pluses for me.

On the other hand, people who are more extrovert may find it a bigger burden having to avoid exciting situations and suffer a bigger drop in quality of life.

Certainly younger people want to try new things: they want to share exciting situations with other people, many want to get drunk occasionally, some might want to experiment with drugs, and some want to take part in political demonstrations. They would suffer more than older people who have already done these things. It is a sad consequence of new technology if they feel they can’t, in case it destroys their career prospects.

The only ways to recover an atmosphere of casual, unpunished experimentation would be either to prevent sharing of photos, videos and chat (basically to ban most of what social networks do, which even the people affected probably don’t want), or to make it possible and easy to have any photos or records of your activity removed. The latter would be better, but still leaves problems. There is no obvious easy solution.

If we can’t, and we almost certainly can’t, then many of our brightest young people will feel shackled, oppressed, unable to let their hair down properly, unable to experience the joy of life that all preceding generations took for granted. It’s an aspect of the privacy debate that needs to be aired much more. Is it a price worth paying to get the cheap short-lived thrill of laughing at someone else’s embarrassment? I’m not sure it is.

The future of X-People

There is an abundance of choice for X in my ‘future of’ series, but most options are sealed off. I can’t do naughty stuff because I don’t want my blog to get blocked so that’s one huge category gone. X-rays are boring, even though x-ray glasses using augmented reality… nope, that’s back to the naughty category again. I won’t stoop to cover X-Factor so that only leaves X-Men, as in the films, which I admit to enjoying however silly they are.

My first observation is how strange X-Men sounds. Half of them are female. So I will use X-People. I hate political correctness, but I hate illogical nomenclature even more.

My second one is that some readers may not be familiar with the X-Men so I guess I’d better introduce the idea. Basically they are a large set of mutants or transhumans with very varied superhuman or supernatural capabilities, most of which defy physics, chemistry or biology or all of them. Essentially low-grade superheroes whose main purpose is to show off special effects. OK, fun-time!

There are several obvious options for achieving X-People capabilities:

Genetic modification, including using synthetic biology or other biotech. This would allow people to be stronger, faster, fitter, prettier, more intelligent or able to eat unlimited chocolate without getting fat. The last one will be the most popular upgrade. However, now that we have started converging biotech with IT, it won't be very long before it will be possible to add telepathy to the list. Thought recognition and nerve stimulation are two sides of the same technology. Starting with thought control of appliances or interfaces, the world's networked knowledge would soon be available to you just by thinking about something. You could easily send messages using thought control and someone else could hear them synthesized into an earpiece, but later it could be direct thought stimulation. Eventually, you'd have totally shared consciousness. None of that defies biology or physics, and it will happen mid-century. Storing your own thoughts and effectively extending your mind into the cloud would allow people to make their minds part of the network resources. Telepathy will be an everyday ability for many people, but only with others who are suitably equipped; it won't be easy to read the minds of people who lack the technology. It will be interesting to see whether only a few people go that route or most people. Either way, 2050 X-People can easily have telepathy, control objects around them just by thinking, share minds with others and maybe even control other people, hopefully consensually.

Nanotechnology, using nanobots etc to achieve possibly major alterations to your form, or to affect others or objects. Nanotechnology is another word for magic as far as many sci-fi writers go. Being able to rearrange things on an individual atom basis is certainly fuel for fun stories, but it doesn’t allow you to do things like changing objects into gold or people into stone statues. There are plenty of shape-shifters in sci-fi but in reality, chemical bonds absorb or release energy when they are changed and that limits how much change can be made in a few seconds without superheating an object. You’d also need a LOT of nanobots to change a whole person in a few seconds. Major changes in a body would need interim states to work too, since dying during the process probably isn’t desirable. If you aren’t worried about time constraints and can afford to make changes at a more gentle speed, and all you’re doing is changing your face, skin colour, changing age or gender or adding a couple of cosmetic wings, then it might be feasible one day. Maybe you could even change your skin to a plastic coating one day, since plastics can use atomic ingredients from skin, or you could add a cream to provide what’s missing. Also, passing some nanobots to someone else via a touch might become feasible, so maybe you could cause them to change involuntarily just by touching them, again subject to scope and time limits. So nanotech can go some way to achieving some X-People capabilities related to shape changing.

Moving objects using telekinesis is rather less likely. Thought-controlling a machine to move a rock is easy; moving an unmodified rock or a dumb piece of metal just by concentrating on it is beyond any technology yet on the horizon. I can't think of any mechanism by which it could be done. Nor can I think of ways of causing things to just burst into flames without using some sort of laser or heat ray. Nor can I see how megawatt lasers could be comfortably implanted in ordinary eyes. These deficiencies might just be my lack of imagination, but I suspect these powers are actually not feasible. Quite a few of the X-Men have these sorts of powers but they might have to stay in sci-fi.

Virtual reality, where you possess the power in a virtual world, which may be shared with others. Well, many computer games give players supernatural powers, or take on various forms, and it’s obvious that many will do so in VR too. If you can imagine it, then someone can get the graphics chips to make it happen in front of your eyes. There are no hard physics or biology barriers in VR. You can do what you like. Shared gaming or socializing environments can be very attractive and it is not uncommon for people to spend almost every waking hour in them. Role playing lets people do things or be things they can’t in the real world. They may want to be a superhero, or they might just want to feel younger or look different or try being another gender. When they look in a mirror in the VR world, they would see the person they want to be, and that could make it very compelling compared to harsh reality. I suspect that some people will spend most of their free time in VR, living a parallel fantasy life that is as important to them as their ‘real’ one. In their fantasy world, they can be anyone and have any powers they like. When they share the world with other people or AI characters, then rules start to appear because different people have different tastes and desires. That means that there will be various shared virtual worlds with different cultures, freedoms and restrictions.

Augmented reality, where you possess the power in a virtual world but in ways that it interacts with the physical world is a variation on VR, where it blends more with reality. You might have a magic wand that changes people into frogs. The wand could be just a stick, but the victim could be a real person, and the change would happen only in the augmented reality. The scope of the change could be one-sided – they might not even know that you now see them as a frog, or it could again be part of a large shared culture where other people in the community now see and treat them as a frog. The scope of such cultures is very large and arbitrary cultural rules could apply. They could include a lot of everyday life – shopping, banking, socializing, entertainment, sports… That means effects could be wide-ranging with varying degrees of reality overlap or permanence. Depending on how much of their lives people live within those cultures, virtual effects could have quite real consequences. I do think that augmented reality will eventually have much more profound long-term effects on our lives than the web.

Controlled dreaming, where you can do pretty much anything you want and be in full control of the direction your dream takes. This is effectively computer-enhanced lucid dreaming with literally all the things you could ever dream of. But other people can dream of extra things that you may never have dreamt of and it allows you to explore those areas too.  In shared or connected dreams, your dreams could interact with those of others or multiple people could share the same dream. There is a huge overlap here with virtual reality, but in dreams, things don’t get the same level of filtration and reality is heavily distorted, so I suspect that controlled dreams will offer even more potential than VR. You can dream about being in VR, but you can’t make a dream in VR.

X-People will be very abundant in the future. We might all be X-People most of the time, routinely doing things that are pure sci-fi today. Some will be real, some will be virtual, some will be in dreams, but mostly, thanks to high quality immersion and the social power of shared culture, we probably won’t really care which is which.

 

 

Ground up data is the next big data

This one sat in my draft folder since February, so I guess it’s time to finish it.

Big Data – I expect you’re as sick of hearing that term as I am. Gathering loads of data on everything you or your company or anything else you can access can detect, measure, record, then analyzing the hell out of it using data mining, an equally irritating term.

I long ago had a quick Twitter exchange with John Hewitt, who suggested “What is sensing but the energy-constrained competition for transmission to memory, as memory is but that for expression?”. Neurons compete to see who gets listened to. Yeah, but I am still not much wiser as to what sensing actually is. Maybe I need a brain upgrade. (It's like magnets. I used to be able to calculate the magnetic field densities around complicated-shaped objects – it was part of my first job in missile design – but even though I could do all the equations around EM theory, even general relativity, I am still no wiser how a magnetic field actually becomes a force on an object. I have an office littered with hundreds of neodymium magnets and I spend hours playing with them and I still don't understand.) I can read about neurons all day but I still don't understand how a bunch of photons triggering a series of electro-chemical reactions results in me experiencing an image. How does the physical detection become a conscious experience?

Well, I wrote some while back that we could achieve a conscious computer within two years. It’s still two years because nobody has started using the right approach yet. I have to stress the ‘could’, because nobody actually intends to do it in that time frame, but I really believe some half-decent lab could if they tried.  (Putting that into perspective, Kurzweil and his gang at Google are looking at 2029.) That two years estimate relies heavily on evolutionary development, for me the preferred option when you don’t understand how something works, as is the case with consciousness. It is pretty easy to design conscious computers at a black box level. The devil is in the detail. I argued that you could make a conscious computer by using internally focused sensing to detect processes inside the brain, and using a sensor structure with a symmetrical feedback loop. Read it:

We could have a conscious machine by end-of-play 2015

In a nutshell, if you can feel thoughts in the same way as you feel external stimuli, you’d be conscious. I think. The symmetrical feedback loop bit is just a small engineering insight.

The missing link in that is still the same one: how does sensing work? How do you feel?

At a superficial level, you point a sensor at something and it produces a signal in some sort of relationship to whatever it is meant to sense. We can do that bit. We understand that. Your ear produces signals according to the frequencies and amplitudes of incoming sound waves, a bit like a microphone. Just the same so far. However, it is by some undefined processes later that you consciously experience the sound. How? That is the hard problem in AI. It isn’t just me that doesn’t know the answer. ‘How does red feel?’ is a more commonly used variant of the same question.

When we solve that, we will replace big data as ‘the next big thing’. If we can make sensor systems that experience or feel something rather than just producing a signal, that’s valuable already. If those sensors pool their shared experience, another similar sensor system could experience that. Basic data quickly transmutes into experience, knowledge, understanding, insight and very quickly, value, lots of it. Artificial neural nets go some way to doing that, but they still lack consciousness. Simulated neural networks can’t even get beyond a pretty straightforward computation, putting all the inputs into an equation. The true sensing bit is missing. The complex adaptive analog neural nets in our brain clearly achieve something deeper than a man-made neural network.
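To see why I call what simulated neural networks do "a pretty straightforward computation, putting all the inputs into an equation", here is a toy sketch of a single simulated neuron. All the names, weights and values are invented for illustration; the point is only that inputs are weighted, summed and squashed by an activation function, and that is essentially all that happens.

```python
# A toy illustration of the computation inside one simulated neuron:
# a weighted sum of the inputs plus a bias, squashed by an activation
# function. All numbers here are illustrative only.
import math

def neuron(inputs, weights, bias):
    # weighted sum of inputs plus bias...
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    # ...squashed through a sigmoid activation into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-total))

output = neuron([0.5, 0.2], [0.8, -0.4], 0.1)
print(output)
```

Whatever sensing and conscious experience turn out to be, it seems unlikely to me that they reduce to stacks of this single equation.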

Meanwhile, most current AI work barks up a tree in a different forest. IBM’s Watson will do great things; Google’s search engine AI will too. But they aren’t conscious and can’t be. They’re just complicated programs running on digital processors, with absolutely zero awareness of anything they are doing. Digital programs on digital computers will never achieve any awareness, no matter how fast the chips are.

However, back in the biological realm, nature manages just fine. So biomimetics offers a lot of hope. We know we didn’t get from a pool of algae to humans in one go. At some point, organisms started moving according to light, chemical gradients, heat, touch. That most basic process of sensing may have started out coupled to internal processes that caused movement without any consciousness. But if we can understand the analog processes (electrochemical, electronic, mechanical) that take the stimulus through to a response, and can replicate it using our electronic technology, we would already have actuator circuits, even if we don’t have any form of sensation or consciousness yet. A great deal of this science has been done already of course. The computational side of most chemical and physical processes can be emulated electronically by some means or another. Actuators will be a very valuable part of the cloud, but we already have the ability to make actuators by more conventional means, so doing it organically or biomimetically just adds more actuation techniques to the portfolio. Valuable but not a terribly important breakthrough.

Looking at the system a bit further along the evolutionary timeline, where eyes start to develop, where the most primitive nervous systems and brains start, where higher-level processing is obviously occurring and inputs are starting to become sensations, we should be able to see what has changed or is changing. It is the emergence of sensation we need to identify, even if the reaction is still an unconscious reflex. We don't need to reverse engineer the human brain. Simple organisms are simpler to understand. Feeding the architectural insights we gain from studying those primitive systems into our guided evolution engines is likely to be a far faster means of generating true machine consciousness and strong AI. That's how we could develop consciousness in a couple of years rather than 15.

If we can make primitive sensing devices that work like those in primitive organisms, and can respond to specific sorts of sensory input, then that is a potential way of increasing the coverage of cloud sensing and even actuation. It would effectively be a highly distributed direct response system. With clever embedding of emergent phenomena techniques (such as cellular automata, flocking etc), it could be a quite sophisticated way of responding to quite complex distributed inputs, avoiding some of the need for big data processing. If we can gather the outputs from these simple sensors and feed them into others, that will be an even better sort of biomimetic response system. That sort of direct experience of a situation is very different from a data-mined result, especially if actuation capability is there too. The philosophical question as to whether the inclusion of that second bank of sensors makes the system in any way conscious remains, but it would certainly be very useful and valuable. The architecture we end up with via this approach may look like neurons, and could even be synthetic neurons, but that may be only one solution among many. Biology may have gone the neuron route but that doesn't necessarily mean it is the only possibility. It may be that we could one day genetically modify bacteria to produce their own organic electronics to emulate the key processes needed to generate sensation, and to power them by consuming nutrients from their environment. I suggested smart yogurt based on this idea many years ago, and believe that it could achieve vast levels of intelligence.
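To make the emergent-response idea a little more concrete, here is a minimal one-dimensional cellular automaton in which each cell updates purely from its immediate neighbours, yet the whole array settles into a coordinated stable pattern without any central processing. The rule (a simple local majority vote) and the starting state are invented for illustration, not a proposal for a real sensor network.

```python
# A minimal sketch of distributed response via a 1-D cellular automaton:
# each cell "senses" only itself and its two neighbours, yet globally
# coherent structure emerges with no central coordinator.

def majority_step(cells):
    """Each cell takes the majority value of itself and its two neighbours
    (wrapping around at the ends)."""
    n = len(cells)
    return [
        1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
        for i in range(n)
    ]

state = [0, 1, 0, 0, 1, 1, 1, 0, 1, 0]  # noisy initial "sensor readings"
for _ in range(5):
    state = majority_step(state)
print(state)  # isolated noise is smoothed away locally, no big data needed
```

After a few steps the isolated flips disappear and the array reaches a fixed point, which is the flavour of "responding by instinct" I have in mind, scaled up.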

Digitizing and collecting the signals from the system at each stage would generate lots of  data, and that may be used by programs to derive other kinds of results, or to relay the inputs to other analog sensory systems elsewhere. (It isn’t always necessary to digitize signals to transmit them, but it helps limit signal degradation and quickly becomes important if the signal is to travel far and is essential if it is to be recorded for later use or time shifting). However, I strongly suspect that most of the value in analog sensing and direct response is local, coupled to direct action or local processing and storage.

If we have these sorts of sensors liberally spread around, we’d create a truly smart environment, with local sensing and some basic intelligence able to relay sensation remotely to other banks of sensors elsewhere for further processing or even ultimately consciousness. The local sensors could be relatively dumb like nerve endings on our skin, feeding in  signals to a more connected virtual nervous system, or a bit smarter, like neural retinal cells, doing a lot of analog pre-processing before relaying them via ganglia cells, and maybe part of a virtual brain. If they are also capable of or connected to some sort of actuation, then we would be constructing a kind of virtual organism, with tendrils covering potentially the whole globe, and able to sense and interact with its environment in an intelligent way.

I use the term virtual not because the sensors wouldn’t be real, but because their electronic nature allows connectivity to many systems, overlapping, hierarchical or distinct. Any number of higher level systems could ‘experience’ them as part of its system, rather as if your fingers could be felt by the entire human population. Multiple higher level virtual organisms could share the same basic sensory/data inputs. That gives us a whole different kind of cloud sensing.

By doing processing locally, in the analog domain, and dealing with some of the response locally, a lot of network traffic and remote processing is avoided. Any post-processing that does occur can therefore build on a higher-level foundation. A nice side effect of avoiding all the extra transmission and processing is increased environmental friendliness.

So, we’d have a quite different sort of data network, collecting higher quality data, essentially doing by instinct what data mining does with huge server farms and armies of programmers. Cloudy, but much smarter than a straightforward sensor net.

… I think.

It isn’t without risk though. I had a phone discussion yesterday on the dangers of this kind of network. In brief, it’s dangerous.

The future of prying

Prying is one side of the privacy coin, hiding being the other side.

Today, lots of Snapchat photos have been released, and no doubt some people are checking to see if there are any of people they know, and it is a pretty safe bet that some will send links to compromising pics of colleagues (or teachers) to others who know them. It's a sort of push prying, isn't it?

There is more innocent prying too. Checking out Zoopla to see how much your neighbour got for their house is a little bit nosy but not too bad, or at the extremely innocent end of the line, reading someone’s web page is the sort of prying they actually want some people to do, even if not necessarily you.

The new security software I just installed lets parents check up on their kids' online activity. Protecting your kids is good, but monitoring every aspect of their activity just isn't: it doesn't give them the privacy they deserve, and probably gets them used to being snooped on so that they accept state snooping more easily later in life. Every parent has to draw their own line, but kids do need to feel trusted as well as protected.

When adults install tracking apps on their partner’s phones, so they can see every location they’ve visited and every call or message they’ve made, I think most of us would agree that is going too far.

State surveillance is increasing rapidly, and we often don't even think of it as such. For example, when speed cameras are linked 'so that the authorities can make our roads safer', the incidental monitoring and recording of our comings and goings is collected without any social debate. Add that to the replacement of tax discs by number-plate recognition systems linked to databases, and even more data is collected. Also 'to reduce crime', video from millions of CCTV cameras is stored, and some is of high enough quality to be analysed by machine to identify people's movements and social connectivity. Then there are our phone calls, text messages and all our web and internet accesses, all of which need to be stored, either in full or at least as metadata, so that 'we can tackle terrorism'. The state already has a very full picture of your life, and it is getting fuller by the day. When it is a benign government, it doesn't matter so much, but if the data is not erased after a short period, then you also need to worry about future governments: whether they will be benign too, or whether you will be one of the people they want to start oppressing. You also need to worry that access to your data is being granted to a growing number of public-sector workers, for a widening range of reasons and with seemingly lower security competence, meaning that a good number of people around you will be able to find out rather more about you than they really ought. State prying is always sold to the electorate via assurances that it is to make us safer and more secure and reduce crime, but the state is staffed by your neighbors, and in the end, that means your neighbors can pry on you.

Tracking cookies are a fact of everyday browsing but mostly they are just trying to get data to market to us more effectively. Reading every email to get data for marketing may be stretching the relationship with the customer to the limits, but many of us gmail users still trust Google not to abuse our data too much and certainly not to sell on our business dealings to potential competitors. It is still prying though, however automated it is, and a wider range of services are being linked all the time. The internet of things will provide data collection devices all over homes and offices too. We should ask how much we really trust global companies to hold so much data, much of it very personal, which we've seen several times this year may be made available to anyone via hackers or forced to be handed over to the authorities. Almost certainly, bits of your entire collected and processed electronic activity history could get you higher insurance costs, or get you in trouble with family, friends, neighbors, the boss, the tax-man or the police. Surveillance doesn't have to be real time. Databases can be linked, mashed up and analysed with far-future software or AI too. In the ongoing search for crimes and taxes, who knows what future governments will authorize? If you wouldn't make a comment in front of a police officer or tax-man, it isn't safe to make it online or in a text.

Allowing email processing to get free email is a similar trade-off to using a supermarket loyalty card. You sell personal data for free services or vouchers. You have a choice to use that service or another supermarket or not use the card, so as long as you are fully aware of the deal, it is your lifestyle choice. The lack of good competition does reduce that choice though. There are not many good products or suppliers out there for some services, and in a few there is a de-facto monopoly. There can also be a huge inconvenience and time loss or social investment cost in moving if terms and conditions change and you don’t want to accept the deal any more.

On top of that state and global company surveillance, we now have everyone's smartphones and visors potentially recording anything and everything we do and say in public, with rarely a say in what happens to that data or whether it is uploaded and tagged on some social media.

Some companies offer detective-style services where they will do thorough investigations of someone for a fee, picking up all they can learn from a wide range of websites they might use. Again, there are variable degrees that we consider acceptable according to context. If I apply for a job, I would think it is reasonable for the company to check that I don’t have a criminal record, and maybe look at a few of the things I write or tweet to see what sort of character I might be. I wouldn’t think it appropriate to go much further than that.

Some say that if you have done nothing wrong, you have nothing to fear, but none of them has a 3 digit IQ. The excellent film ‘Brazil’ showed how one man’s life was utterly destroyed by a single letter typo in a system scarily similar to what we are busily building.

Even if you are a saint, do you really want the pervert down the road checking out hacked databases for personal data on you or your family, or using their public sector access to see all your online activity?

The global population is increasing, and every day a higher proportion can afford IT and know how to use it. Networks are becoming better and AI is improving so they will have greater access and greater processing potential. Cyber-attacks will increase, and security leaks will become more common. More of your personal data will become available to more people with better tools, and quite a lot of them wish you harm. Prying will increase geometrically, according to Metcalfe’s Law I think.
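To put a rough number on that "geometric" growth: Metcalfe's Law counts the potential pairwise connections in a network of n people, n(n-1)/2, so ten times more connected people means roughly a hundred times more potential pryer/target pairs. A quick sketch (the user counts are illustrative only):

```python
# Metcalfe's Law: the number of distinct pairs among n connected people
# grows roughly with the square of n, so the opportunities for prying
# grow much faster than the population itself.

def potential_connections(n):
    """Number of distinct pairs among n connected people: n(n-1)/2."""
    return n * (n - 1) // 2

for users in [1_000, 10_000, 100_000]:
    print(users, potential_connections(users))
```

Each extra person online doesn't just add one more potential pryer; they add a link to everyone already there.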

My defense against prying is having an ordinary life and not being famous or a major criminal, not being rich and being reasonably careful on security. So there are lots of easier and more lucrative targets. But there are hundreds of millions of busybodies and jobsworths and nosy parkers and hackers and blackmailers out there with unlimited energy to pry, as well as anyone who doesn’t like my views on a topic so wants to throw some mud, and their future computers may be able to access and translate and process pretty much anything I type, as well as much of what I say and do anywhere outside my home.

I find myself self-censoring hundreds of times a day. I’m not paranoid. There are some people out to get me, and you, and they’re multiplying fast.

 

 

 

The future of karma

This isn’t about Hinduism or Buddhism, just in case you’re worried. It is just about the cultural principle borrowed from them that your intent and actions now can influence what happens to you in future, or your luck or fate, if you believe in such things. It is borrowed in some computer games, such as Fallout.

We see it every day now on Twitter. A company or individual almost immediately suffers the full social consequences of their words or actions. Many of us are occasionally tempted to shame companies that have wronged us by tweeting our side of the story, or writing a bad review on TripAdvisor. One big thing is still missing, but I suspect not for much longer: who's keeping score?

Where is the karma being tracked? When you do shame a company or write a bad review, was it an honest write-up of a genuine grievance, or way over the top compared to the magnitude of the offense, or just pure malice? If you could have written a review and didn’t, should your forgiving attitude be rewarded or punished, because now others might suffer similar bad service? I haven’t checked but I expect there are already a few minor apps that do bits of this. But we need the Google and Facebook of Karma.

So, we need another 17-year-old in a bedroom to bring out the next blockbuster mash-up site linking the review sites, the tweets and blogs, doing an overall assessment not just of the companies being commented on, but of those doing the commenting. One that gives people and companies a karma score. As the machine-readable web continues to improve, it will even be possible to get some clues on average rates of poor service and therefore identify those of us who are probably more forgiving, those of us who deserve a little more tolerance when it's our own mistake. (I am allegedly closer to the grumpy-old-man end of the scale.)
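As a purely hypothetical sketch of how such a site might score the commenters themselves: start each reviewer at some baseline and dock points for how far their ratings deviate from the consensus for the same businesses. Every name, formula and number below is invented for illustration; a real karma engine would obviously need to be far more sophisticated.

```python
# A hypothetical, invented karma score: reviewers who track the crowd's
# consensus keep their score, reviewers who trash everything lose it.

def karma_score(reviewer_ratings, consensus_ratings):
    """Start at 100 and dock 10 points per star of deviation from consensus.

    reviewer_ratings / consensus_ratings: parallel lists of 1-5 star ratings
    for the same set of businesses.
    """
    penalty = sum(abs(r - c) for r, c in zip(reviewer_ratings, consensus_ratings))
    return max(0, 100 - 10 * penalty)

fair = karma_score([4, 2, 5], [4, 3, 5])       # close to consensus
malicious = karma_score([1, 1, 1], [4, 3, 5])  # trashes everything
print(fair, malicious)
```

Even something this crude hints at how the commenters, not just the companies, could end up with a public score.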

I just did a conference talk on corporate credit assessment and have previously done others on private credit assessment. Financial trustworthiness is important, but when you do business, you also want to know whether it’s a nice company or one that walks all over people. That’s karma.

So, are you someone who presents a sweet and cheerful face, only to say nasty things about people as soon as their backs are turned? Do you always see the good side of everyone, or go to great effort to point out their bad points to everyone on the web? Well, it won't be all that long before your augmented reality visor shows a karma score floating above people's heads when you chat to them.

Estimating IoT value? Count ALL the beans!

In this morning’s news:

http://www.telegraph.co.uk/technology/news/11043549/UK-funds-development-of-world-wide-web-for-machines.html

£1.6M investment by UK Technology Strategy Board in Internet-of-Things HyperCat standard, which the article says will add £100Bn to the UK economy by 2020.

Gartner says that IoT has reached the peak of its hype cycle and I agree. Connecting machines together, and especially adding networked sensors, will certainly increase technology capability across many areas of our lives, but the appeal is often overstated and the dangers often overlooked. Value should not be measured in purely financial terms either. If you value health, wealth and happiness, don't just measure the wealth. We value other things too of course. It is too tempting just to count the most conspicuous beans. For IoT, which really just adds a layer of extra functionality onto an already technology-rich environment, that is rather like estimating the value of a chili con carne by counting the kidney beans in it.

The headline negatives of privacy and security have often been addressed so I don't need to explore them much more here, but let's look at a couple of typical examples from the news article. Allowing remotely controlled washing machines will obviously impact on your personal choice of laundry scheduling. The many similar shifts of control of your life to other agencies will all add up. Another one: 'motorists could benefit from cheaper insurance if their vehicles were constantly transmitting positioning data'. Really? Insurance companies won't want to earn less, so motorists on average will give them at least as much profit as before. What will happen is that insurance companies will enforce driving styles and car maintenance regimes that reduce your likelihood of a claim, or use that data to avoid paying out in some cases. If you have to rigidly obey lots of rules all of the time then driving will become far less enjoyable. Having to remember to check the tyre pressures and oil level every two weeks on pain of having your insurance voided is not one of the beans listed in the article, but is entirely analogous to the typical home insurance rule that all your windows must have locks and they must all be locked and the keys hidden out of sight before they will pay up on a burglary.

Overall, IoT will add functionality, but it certainly will not always be used to improve our lives. Look at the way the web developed. Think about the cookies and the pop-ups and the tracking and the incessant virus protection updates needed because of the extra functions built into browsers. You didn't want those; they were added to increase capability and revenue for the paying site owners, not for the non-paying browsers. IoT will be the same. Some things will make minor aspects of your life easier, but the price of that will be that you will be far more controlled, you will have far less freedom, less privacy, less security. Most of the data collected for business use or to enhance your life will also be available to government and police. We see every day the nonsense of the statement that if you have done nothing wrong, then you have nothing to fear. If you buy all that home kit with energy monitoring etc, how long before the data is hacked and you get put on militant environmentalist blacklists because you leave devices on standby? For every area where IoT will save you time or money or improve your control, there will be many others where it does the opposite, forcing you to do more security checks, spend more money on car and home and IoT maintenance, spend more time following administrative procedures and even follow health regimes enforced by government or insurance companies. IoT promises milk and honey, but will deliver it only as part of a much bigger and unwelcome lifestyle change. Sure, you can have a little more control, but only if you relinquish much more control elsewhere.

As IoT starts rolling out, these and many more issues will hit the press, and people will start to realise the downside. That will reduce the attractiveness of owning or installing such stuff, or of subscribing to services that use it, and the actual economic value will fall far short of the hype. Yes, we could do it all and get the headline economic benefit, but the cost in greatly reduced quality of life is too high, so we won’t.

Counting the kidney beans in your chili is fine, but it won’t tell you how hot it is, and when you start eating it you may decide the beans just aren’t worth the pain.

I still agree that IoT can be a good thing, but the evidence of web implementation suggests we’re more likely to go through decades of abuse and grief before we get the promised benefits. Being honest at the outset about the true costs and lifestyle trade-offs will help people decide, and maybe we can get to the good times faster if that process leads to better controls and better implementation.

Time – The final frontier. Maybe

It is very risky naming the final frontier. A frontier is just the far edge of where we’ve got to.

Technology has a habit of opening new doors to new frontiers so it is a fast way of losing face. When Star Trek named space as the final frontier, it was thought to be so. We’d go off into space and keep discovering new worlds, new civilizations, long after we’ve mapped the ocean floor. Space will keep us busy for a while. In thousands of years we may have gone beyond even our own galaxy if we’ve developed faster than light travel somehow, but that just takes us to more space. It’s big, and maybe we’ll never ever get to explore all of it, but it is just a physical space with physical things in it. We can imagine more than just physical things. That means there is stuff to explore beyond space, so space isn’t the final frontier.

So… not space. Not black holes or other galaxies.

Certainly not the ocean floor, however fashionable that might be to claim. We’ll have mapped that in detail long before the rest of space. Not the centre of the Earth, for the same reason.

How about cyberspace? Cyberspace physically includes all the memory in all our computers, but also the imaginary spaces that are represented in it. The entire physical universe could be simulated as just a tiny bit of cyberspace, since it only needs to be rendered when someone looks at it. All the computer game environments and virtual shops are part of it too. The cyberspace tree doesn’t have to make a sound unless someone is there to hear it, but it could. The memory in computers is limited, but the limits of cyberspace come from the imagination of those building or exploring it. It is sort of infinite, but really its outer limits are just a function of our minds.

Games? Dreams? Human Imagination? Love? All very new agey and sickly sweet, but no. Just like cyberspace, these are all different products of the human mind, so all of them can be replaced by ‘the human mind’ as a frontier. I’m still not convinced that is the final one though. Even if we extend that to the greatly AI-enhanced future human mind, it still won’t be the final frontier. When we AI-enhance ourselves, and connect to the smart AIs too, we get a sort of global consciousness, linking everyone’s minds together as far as each allows. That’s a bigger frontier, since the individual minds and AIs add up to more cooperative capability than they can achieve individually. The frontier is getting bigger and more interesting. You could explore other people directly, share and meld with them. Fun, but still not the final frontier.

Time adds another dimension. We can’t do physical time travel, and even if we can do so in physics labs with tiny particles for tiny time periods, that won’t necessarily translate into a practical time machine to travel in the physical world. We can time travel in cyberspace though, as I explained in

The future of time travel: cheat

and when our minds are fully networked and everything is recorded, you’ll be able to travel back in time and genuinely interact with people in the past, back to the point where the recording started. You would also be able to travel forwards in time, as far as the recordings extend and future laws allow (I didn’t fully realise that when I wrote my time travel blog, so I ought to update it, soon). You’d be able to inhabit other people’s bodies, share their minds, share consciousness and feelings and emotions and thoughts. The frontier suddenly jumps out a lot once we start that recording, because you can go into the future as far as is continuously permitted. Going into that future allows you to get hold of all the future technologies and bring them back home, short-circuiting the future, as long as time police don’t stop you. No, I’m not nuts – if you record everyone’s minds continuously, you can time travel into the future using cyberspace, and the effects extend beyond cyberspace into the real world you inhabit, so although it is certainly a cheat, it is effectively real time travel, backwards and forwards. It needs some security sorted out on warfare, banking and investments, procreation, gambling and so on, as well as a lot of other causality issues, but to quote from Back to the Future: ‘What the hell?’ [IMPORTANT EDIT: in my following blog, I revise this a bit and conclude that although time travel to the future in this system lets you do pretty much what you want outside the system, time travel to the past only lets you interact with people and other things supported within the system platform, not the physical universe outside it. This does limit the scope for mischief.]

So, time travel in fully networked, fully AI-enhanced, cosmically connected cyberspace/dream-space/imagination/love/games would be a bigger and later frontier. It lets you travel far into the future, so it notionally includes any frontiers invented and included by then. Is it the final one though? Well, there could be some frontiers discovered after the time travel windows are closed. They’d be even finaller, so I won’t bet on it.


The new right to be forgotten

The European Court of Justice recently ruled that Google has to remove links to specific articles on (proper) request where the damage to the individual outweighs the public right to know.

It has generated a lot of reaction. Lots of people have done things, or have been accused of doing things, and would prefer that the records of that don’t appear when people do a search for them. If a pedophile or a corrupt politician wants to erase something from their past, then many of us would object. If it is someone who once had a bad debt and long since paid it off, that seems more reasonable. So is there any general principle that would be useful? I think so.

When someone is convicted of a crime, sometimes they are sent to prison. When their sentence terminates, they are considered to have suffered enough punishment and are free to live a normal life. However, they keep a criminal record, and if they apply for a job, the potential employer can find out what they have done. So they don’t get a clean record. Even that is being challenged now, and the right to start again with a clean slate is being considered. In trials, the prosecution is usually not allowed to mention previous crimes lest they prejudice the jury – the accused is being tried for this crime, not for previous ones, and their guilt should be assessed on the evidence, not prejudice.

The idea that after a suitable period of punishment you can have the record wiped clean is appealing. Or if not the formal record, then at least easy casual access to it. It has a feel of natural justice to it. Everyone should have the right to start again once they’ve made amends, paid their debt to society. Punishment should not last for ever, even long after the person has reformed.

This general principle could be applied online. For crimes, when a judge sentences the guilty, they could include in the punishment a statement of the longevity of internet records, the duration of public shame. Our lawmakers should decide the fit and proper duration of that for each kind of crime, just as they do the removal of liberty. When that terminates, those records should no longer turn up in searches within that jurisdiction. For non-criminal but embarrassing life events there should be an agreed tariff too, and it could be implemented by Information Commissioners or similar authorities, who would maintain a search exemption list to be checked against search results before display. Society may well decide to make exceptions for certain things that remain in the public interest. If someone took drugs at college, or got drunk and went rather too far at a party, or was late paying a debt, or had an affair, or any of a million other things, then the impact on their future life would have a time limit, which hopefully would be the same for everyone. My understanding of the ECJ ruling is that this is broadly what is intended, and the precise implementation details can now be worked out. If so, I don’t really have any big objection, though I may well be missing something.
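The exemption-list mechanism itself is simple to picture. A minimal sketch, where the subjects, URLs and list format are all invented for illustration and bear no relation to any real search engine's implementation:

```python
# Hypothetical sketch: suppressing exempted links from search results before
# display. Subjects and URLs are invented for illustration.

# (query subject, suppressed URL) pairs currently in force on the
# authority-maintained exemption list
EXEMPTIONS = {
    ("john doe", "https://example.com/old-debt-story"),
}

def filter_results(query: str, results: list[str]) -> list[str]:
    """Return only the result URLs not suppressed for this query subject."""
    suppressed = {url for subject, url in EXEMPTIONS
                  if subject == query.strip().lower()}
    return [url for url in results if url not in suppressed]

hits = ["https://example.com/old-debt-story", "https://example.com/news"]
print(filter_results("John Doe", hits))  # ['https://example.com/news']
```

Note that the underlying article is untouched; only the casual route to it via a name search is blocked, which matches the nosey-neighbour-filter character of the ruling.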

It is indisputably censorship, and some people will try to use their power or circumstances to get into the clear earlier than seems right. However, so far the ECJ ruling only covers appearance in search engines, i.e. casual research. It will stop you easily finding out about something in your neighbour’s or a colleague’s distant past. It won’t prevent journalists finding things out, because a proper journalist will do their research thoroughly and not just type a couple of words into Google. In its current form, this ruling will not amount to full censorship, more of a nosey neighbour gossip filter. The rules will need to be worked out and applied consistently. We should hope that they are made fair and the same for all, with no exceptions for the rich and powerful.