Category Archives: culture

Should Dr Who be a different sex or race?

Dr Who is one of my first TV memories. I even got a Chad Valley toy projector with Dr Who slides.

There seems to be a current obsession with political correctness regarding the next Doctor, so I thought I’d throw in my two pennies’ worth. As you probably know if you are a regular reader, I’m not a big fan of PC. I much prefer actual truth to adjusted truth, whatever it looks like.

Dr Who was originally intended to have 7 lives; when he dies, he regenerates into a new body, a convenient device that allows the character to remain while a new actor takes over. Those 7 lives are now long gone, and the original limit was conveniently dropped from the lore ages ago. The Doctor has so far remained male, allegedly as in the original books, but there is much debate about changing Dr Who to a woman. Some people object to that.

I don’t care either way, since the show has become so dull, predictable and PC that I never watch it any more anyway. Any sci-fi interest has long since been replaced by blatant activism. Now there is more debate on whether the Doctor should be gay or a different color. All 13 so far (though I haven’t seen the last several episodes, so I might be out of date) have been straight white men. Shouldn’t he/she be black, or at the very least non-white? An interesting question, hence my blog.

We do have some basis for an answer. Regenerated Doctors don’t look like their predecessors, so genes governing appearance are presumably ignored, whereas the Doctor retains the same overall biology and species, keeping two hearts for example and remaining humanoid, so many other genes clearly still apply. Does that extend to gender? Who knows, who cares? If it is important to stick to the lore, then he should remain male. If not, then the choice should really rest on whichever actor or actress could play the character best.

What about race then? If the Doctor were human, why not another race? Most humans are not white, so if the Doctor were human and genetics didn’t count, then gender and race should presumably be random. However, again, any story is entitled to stick to its lore. Dr Who is not human but an alien from Gallifrey, in which case, to be scrupulously fair, I’d expect regenerations to follow the statistical demographic mix on Gallifrey. Based on episodes that show crowds on Gallifrey, I’d have to say they do.

Given that the default from the original stories is for Dr Who to be a straight white male, surely it is sexist or racist or anti-straight to demand he be anything but. If the series were about ancient Egyptians, few people would be demanding Cleopatra be played by a white man.

In fact, given that the stories have all had British Doctors, since they were aimed at a British audience, it could be argued that Doctors should follow the racial mix of the UK. Due to recent immigration, BME Brits now make up about 10% of the population, but that proportion was much lower in the past. If each Doctor’s casting had simply followed the UK’s racial makeup at the time, the probability that all 13 would be white works out at about 40%. Slightly less than evens, but certainly not evidence of any discrimination.
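That 40% figure is easy to sanity-check. The sketch below uses purely illustrative BME population shares, rising roughly linearly from about 1% in 1963 to about 13% for the most recent casting; real census figures and casting dates would shift the answer a little, but not much:

```python
# Rough sketch of the "all 13 Doctors are white" probability.
# The BME shares below are illustrative assumptions only: one share per
# Doctor, rising from 1% for the 1963 casting to 13% for the latest.
bme_shares = [0.01 * n for n in range(1, 14)]  # 1%, 2%, ..., 13%

prob_all_white = 1.0
for share in bme_shares:
    prob_all_white *= (1.0 - share)  # each casting independently white

print(f"P(all 13 white) = {prob_all_white:.2f}")  # P(all 13 white) = 0.39
```

In other words, an all-white run of 13 is roughly a 40% outcome under these assumptions, hardly a statistical anomaly.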

If, and that’s a big if, we now make the concession that all future Doctors should be randomly chosen to represent the UK’s ethnic makeup rather than ‘sticking to the lore’, which is important to many viewers, then obviously 50% from now on should be women and around 10% of future Doctors should be non-white, with 2% black and the rest from other BME groups. If the average actor survives 4 years in the role, then we should certainly expect a woman to play the Doctor soon, but should only start worrying about racial discrimination if we still haven’t seen a BME Doctor after the next 6 or 7 regenerations, i.e. by 2045. Complaining before that is just anti-white racist activism with no factual basis.
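The 2045 estimate follows from simple compound probability. Assuming a flat 10% chance of a non-white Doctor at each regeneration (my assumption, matching the demographic figure above):

```python
# Chance of *still* having seen no BME Doctor after k further castings,
# if each regeneration independently has a 10% chance of being non-white.
p_white = 0.9
for k in (4, 7, 14):
    print(f"after {k} regenerations: {p_white ** k:.0%}")
# after 4 regenerations: 66%
# after 7 regenerations: 48%
# after 14 regenerations: 23%
```

So even after 7 all-white regenerations the odds are still close to evens, which is why complaining much earlier has no statistical basis.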

 

The new dark age

As promised, here is a slide-set illustrating the previous blog; just click the link if the slides are not visible.

The new dark age

Utopia scorned: The 21st Century Dark Age

Link to accompanying slides:

https://timeguide.files.wordpress.com/2017/06/the-new-dark-age.pdf

Eating an ice-cream and watching a squirrel on the feeder in our back garden makes me realize what a privileged life I lead. I have to work to pay the bills, but my work is not what my grandfather would have thought of as work, let alone my previous ancestors. Such a life is only possible because of the combined efforts of tens of thousands of preceding generations who struggled to make the world a slightly better place than they found it, meaning that with just a few years more effort, our generation has been able to create today’s world.

I appreciate the efforts of previous generations, rejoice in the start-point they left us, and try to play my small part in making it better still for those who follow. Next generations could continue such gains indefinitely, but that is not a certainty. Any generation can choose not to for whatever reasons. Analyzing the world and the direction of cultural evolution over recent years, I am no longer sure that the progress mankind has made to date is safe.

Futurists talk of weak signals, things that indicate change, but are too weak to be conclusive. The new dark age was a weak signal when I first wrote about it well over a decade ago. My more recent blog is already old: https://timeguide.wordpress.com/2011/05/31/stone-age-culture-returning-in-the-21st-century/

Although it’s a good while since I last wrote about it, recent happenings have made me even more convinced. Even as raw data, connectivity and computational power become ever more abundant, the quality of what most people believe to be knowledge is falling, with data and facts filtered and modified to fit agendas. Social compliance enforces adherence to strict codes of political correctness, with its high priests ever more powerful as the historically proven foundations of real progress are eroded and discarded. Indoctrination appears to have replaced education, with a generation locked into an intellectual prison, unable to dare to think outside it, forbidden to deviate from the group-think on pain of exile. As their generation takes control, I fear progress won over millennia will backslide badly. They and their children will miss out on utopia because they are unable to see it; it is hidden from them.

A potentially wonderful future awaits millennials. Superb technology could give them a near utopia, but only if they allow it to happen. They pour scorn on those who have gone before them, and reject their culture and accumulated wisdom, replacing it with little more than ideology, putting theoretical models and dogma in place of reality. Castles built on sand rarely survive. The sheer momentum of modernist thinking ensures that we will continue to develop for some time yet, but we will gradually approach a peak. After that, overall progress will slow. Scientific development will continue, but with the results owned and understood by an ever tinier minority of humans and an increasing amount of AI, while the rest of society lives in a world it barely understands, following whatever is currently the most fashionable trend on a random walk, gradually replacing modernity with a dark-age world of superstition, anti-knowledge and inquisitors. As AI gradually replaces scientists and engineers in professional roles, even the elite will become less and less well-informed about reality or how things work, reliant on machines to keep it all going. When the machines fail, due to solar flares or, more likely, inter-AI tribal conflict, few people will even understand that they have become H G Wells’ Eloi. They will just wonder why things have stopped and look for someone to blame, or wonder if a god wants a sacrifice. Alternatively, future tribes might use advanced technologies they don’t understand to annihilate each other.

It will be a disappointing ending either way, especially with a wonderful future on offer nearby, if only they’d gone down a different path. Sadly, it is not only possible but increasingly likely. All the wonderful futures I and other futurists have talked about depend on the same thing: that we proceed according to the modernist processes we know work. A generation that has been taught those processes are old-fashioned, and has rejected them, will not be able to reap the rewards.

I’ll follow this blog with a slide set that illustrates the problem.

AI Activism Part 2: The libel fields

This follows directly from my previous blog on AI activism, but you can read that later if you haven’t already. Order doesn’t matter.

https://timeguide.wordpress.com/2017/05/29/ai-and-activism-a-terminator-sized-threat-targeting-you-soon/

Older readers will remember an emotionally powerful 1984 film called The Killing Fields, set against the backdrop of the activity of the Khmer Rouge, aka the Communist Party of Kampuchea, in Cambodia. Under Pol Pot, the Cambodian genocide of 2 to 3 million people was part of a social engineering policy of de-urbanization. People were tortured and murdered (some in the ‘killing fields’ near Phnom Penh) for having connections with the former government or with foreign governments, for being the wrong race, for being ‘economic saboteurs’, or simply for being professionals or intellectuals.

You’re reading this, therefore you fall into at least the last of these groups, and probably others, depending on who’s making the lists. Most people don’t read blogs, but you do. Sorry, but that makes you a target.

As our social divide increases at an accelerating speed throughout the West, so the choice of weapons is moving from sticks and stones or demonstrations towards social media character assassination, boycotts and forced dismissals.

My last blog showed how various technology trends are coming together to make it easier and faster to destroy someone’s life and reputation. Some of that stuff I was writing about 20 years ago, such as virtual communities lending hardware to cyber-warfare campaigns, other bits have only really become apparent more recently, such as the deliberate use of AI to track personality traits. This is, as I wrote, a lethal combination. I left a couple of threads untied though.

Today, the big AI tools are owned by the big IT companies, which also own the big server farms providing the power to run them. The first thread I neglected to mention is that Google has made its AI tools open source. There are lots of good things about that, but for the purposes of this blog it means that the AI tools required for AI activism will also be largely public, and pressure groups and activists can use them as a start-point for any more advanced tools they want to make, or just use them off-the-shelf.

Secondly, it is fairly easy to link computers together into an aggregated computing platform. The SETI project was the first major proof of concept ages ago, and today we take peer-to-peer networks for granted. When the activist group is ‘the liberal left’ or ‘the far right’, that adds up to a large number of machines, so the power notionally available for any campaign is very large. Harnessing it doesn’t need IT skill from contributors. All they’d need to do is click a box in an email or tweet asking for their support for a campaign.

In our new ‘post-fact’, fake-news era, all sides are willing and able to use social media and the infamous MSM to damage the other side. Fakes are getting better. The latest AI can imitate your voice, and a chat-bot can decide what to say after other AI has recognized what someone has said and analysed the opportunities to ruin your relationship with them by spoofing you. Today, that might not be quite credible. Give it a couple more years and you won’t be able to tell. Next-generation AI will be able to spoof your face doing the talking too.

AI can (and will) evolve. Deep learning researchers have been looking deeply at how the brain thinks, how to make neural networks learn better and to think better, how to design the next generation to be even smarter than humans could have designed it.

As my friend and robotic psychiatrist Joanne Pransky commented after my first piece, “It seems to me that the real challenge of AI is the human users, their ethics and morals (Their ‘HOS’ – Human Operating System).” Quite! Each group will indoctrinate their AI to believe their ethics and morals are right, and that the other lot are barbarians. Even evolutionary AI is not immune to religious or ideological bias as it evolves. Superhuman AI will be superhuman, but might believe even more strongly in a cause than humans do. You’d better hope the best AI is on your side.

AI can put articles, blogs and tweets out there, pretending to come from you or your friends, colleagues or contacts. It can generate plausible-sounding stories of what you’ve done or said, and spoof emails from fake accounts using your ID to prove them.

So we’ll likely see activist AI armies set against each other, running on peer-to-peer processing clouds, encrypted to hell and back to prevent dismantling. We’ve all thought about cyber-warfare, but we usually only think about viruses or keystroke recorders, or more lately ransomware. These will still be used as small weapons in future cyber-warfare, but while losing files or a few bucks from an account is a real nuisance, losing your reputation is another matter entirely. With your name smeared all over the web, and all your contacts told what you’ve supposedly done or said and shown all the evidence, there is absolutely no way you could possibly explain your way convincingly out of every one of those instances. Mud does stick, and if you throw tons of it, even if most is wiped off, much will remain. Trust is everything, and enough doubt cast will eventually erode it.

So, we’ve seen many times through history the damage people are willing to do to each other in pursuit of their ideology. The Khmer Rouge had their killing fields. As the political divide widens and battles become fiercer, the next 10 years will give us The Libel Fields.

You are an intellectual. You are one of the targets.

Oh dear!

 

AI and activism, a Terminator-sized threat targeting you soon

You should be familiar with the Terminator scenario. If you aren’t, watch one of the Terminator films, because you really should be aware of it. But there is another issue related to AI that is arguably as dangerous as the Terminator scenario, far more likely to occur, and a threat in the near term. What’s even more dangerous is that, in spite of that, I’ve never read anything about it anywhere. It seems to have flown under our collective radar, and it is already close.

In short, my concern is that AI is likely to become a heavily armed Big Brother. It only requires a few components to come together that are already well in progress. Read this, and if you aren’t scared yet, read it again until you understand it 🙂

Already, social media companies are experimenting with using AI to identify and delete ‘hate’ speech. Various governments have asked them to do this, and since they also get frequent criticism in the media because some hate speech still exists on their platforms, it seems quite reasonable for them to try to control it. AI clearly offers potential to offset the huge numbers of humans otherwise needed to do the task.

Meanwhile, AI is already used very extensively by the same companies to build personal profiles on each of us, mainly for advertising purposes. These profiles are already alarmingly comprehensive, and increasingly capable of cross-linking our activities across multiple platforms and devices. Google’s latest efforts attempt to link eventual purchases to clicks on ads. It will be just as easy to use similar AI to link our physical movements, activities, social connections and communications to all such previous real-world or networked activity. (Update: Intel intend their self-driving car technology to be part of a mass surveillance net, again for all the right reasons: http://www.dailymail.co.uk/sciencetech/article-4564480/Self-driving-cars-double-security-cameras.html)

Although necessarily secretive about their activities, government also wants personal profiles on its citizens, always justified by crime and terrorism control. If they can’t do this directly, they can do it via legislation and acquisition of social media or ISP data.

Meanwhile, other experiences with AI chat-bots learning to mimic human behaviors have shown how easily AI can be gamed by human activists, hijacking or biasing learning phases for their own agendas. Chat-bots themselves have become ubiquitous on social media and are often difficult to distinguish from humans. Meanwhile, social media is becoming more and more important throughout everyday life, with provably large impacts in political campaigning and throughout all sorts of activism.

Meanwhile, some companies have already started using social media monitoring to police their own staff, in recruitment, during employment, and sometimes in dismissal or other disciplinary action. Other companies have similarly started monitoring social media activity of people making comments about them or their staff. Some claim to do so only to protect their own staff from online abuse, but there are blurred boundaries between abuse, fair criticism, political difference or simple everyday opinion or banter.

Meanwhile, activists increasingly use social media to force companies to sack a member of staff they disapprove of, or drop a client or supplier.

Meanwhile, end to end encryption technology is ubiquitous. Malware creation tools are easily available.

Meanwhile, successful hacks into large company databases become more and more common.

Linking these various elements of progress together, how long will it be before activists can develop standalone AI entities and heavily encrypt them before letting them loose on the net? Not long at all, I think. These AIs would search and police social media, spotting people who conflict with the activist agenda. Occasional hacks of corporate databases will provide names, personal details and contacts. Even without hacks, analysis of years of publicly available tweets and other social media entries will provide lists of everyone who has ever done or said anything the activists disapprove of.

Once targets are identified, the AI would automatically activate armies of chat-bots, fake-news engines and automated email campaigns against them, with coordinated malware attacks directly on the person and indirect attacks via their employers, friends, contacts, government agencies, customers and suppliers, to do as much damage as possible to that person’s interests.

Just look at the everyday news already about alleged hacks and activities during elections and referendums by other regimes, hackers or pressure groups. Scale that up and realize that the cost of running advanced AI is negligible.

With the very many activist groups around, many driven with extremist zeal, very many people will find themselves in the sights of one or more of them. AI will be able to monitor everyone, all the time. AI will be able to target all of them at once to destroy their lives: anonymous, highly encrypted, hidden, roaming from server to server to avoid detection and annihilation, and once released, impossible to retrieve. The ultimate activist weapon, one that carries on the fight even if the activist is locked away.

We know for certain the depths and extent of activism, the huge polarization of society, the increasingly fierce conflict between left and right, between sexes, races, ideologies.

We know about all the nice things AI will give us with cures for cancer, better search engines, automation and economic boom. But actually, will the real future of AI be harnessed to activism? Will deliberate destruction of people’s everyday lives via AI be a real problem that is almost as dangerous as Terminator, but far more feasible and achievable far earlier?

AI is mainly a stimulative technology that will create jobs

AI has been getting a lot of bad press the last few months from doom-mongers predicting mass unemployment. Together with robotics, AI will certainly help automate a lot of jobs, but it will also create many more and will greatly increase quality of life for most people. By massively increasing the total effort available to add value to basic resources, it will increase the size of the economy and if that is reasonably well managed by governments, that will be for all our benefit. Those people who do lose their jobs and can’t find or create a new one could easily be supported by a basic income financed by economic growth. In short, unless government screws up, AI will bring huge benefits, far exceeding the problems it will bring.

Over the last 20 years, I’ve often written about the care economy, where the more advanced technology becomes, the more it allows us to concentrate on the skills we consider fundamentally human: caring, interpersonal skills, direct human-contact services, leadership, teaching, sport, the arts, the sorts of roles that need empathetic and emotional skills, or human experience. AI and robots can automate intellectual and physical tasks, but they won’t be human, and some tasks require the worker to be human. Also, in most careers, people obviously focus less and less on automatable tasks as they progress into the most senior roles. Many board members in big companies know little about the industry they work in compared to most of their lower-paid workers, but they can do the job because being a board member is often more about relationships than intellect.

AI will nevertheless automate many tasks for many workers, and that will free up much of their time, increasing their productivity, which means we need fewer workers to do those jobs. On the other hand, a Google search that takes a few seconds once took half a day of research in a library. We all do more with our time now thanks to such simple AI, and although all those saved half-days add up to a considerable amount of saved work, equivalent to many full-time jobs, we don’t see massive unemployment. We’re all just doing better work. So we can’t necessarily conclude that increasing productivity automatically means redundancy. It might just mean that we do even more, even better, as it has so far; or at least, the volume of redundancy might be considerably lower. Newly automated companies might simply never employ people in those roles, leaving straight competition between companies that are heavily automated and others that aren’t. Sometimes, but certainly not always, that will mean traditional companies go out of business.

So although we can be sure that AI and robots will bring some redundancy in some sectors, I think the volume is often overestimated and often it will simply mean rapidly increasing productivity, and more prosperity.

But what about AI’s stimulative role, the jobs created by automation and AI? I believe this is what the doom-mongers greatly overlook. There are three primary areas of job creation:

The first is in building or programming robots, maintaining them, writing software or teaching them skills, along with all the associated new jobs in supporting industry and infrastructure change. Many such jobs will be temporary, lasting a decade or so as machines gradually take over, but that transition period is extremely valuable and important. If anything, it will be a lengthy period of extra jobs, and the biggest problem may well be filling those jobs, not widespread redundancy.

Secondly, AI and robots won’t always work directly with customers; very often they will work via a human intermediary. A good example is medicine. AI can make better diagnoses than a GP, and could be many times cheaper, but unless the patient is educated, disciplined and knowledgeable, it still needs a human with human skills to talk to the patient and make sure the correct information goes in. How many times have you looked at an online medical diagnosis site and concluded you have every disease going? It is hard to be honest when you are free to interpret every possible symptom any way you want; much easier to want to be told that you have a special case of wonderful-person syndrome. Having to explain to a nurse or technician what is wrong forces you to be more honest about it. They can ask you similar questions, but your answers need to be moderated and sensible, or you know they might challenge you and make you feel foolish. You will get a good diagnosis because the input data will be measured, normalized and scaled appropriately for the AI using it. When you call a call center and talk to a human, they are invariably already the front end of a massive AI system. Making that AI bigger and better won’t replace them; it will just mean they can deal with your query better.

Thirdly, and I believe most importantly of all, AI and automation will remove many of the barriers that stop people being entrepreneurs. How many business ideas have you had and not bothered to implement because it was too much effort or cost, or both, for too uncertain a gain? 10? 100? 1000? Suppose you could just explain your idea to your home AI and it did it all for you. It checked the idea, made a model, worked out how to make it work, or whether it was just a crap idea. It then explained what the options were, whether it would be likely to work, how much you might earn from it, how much you’d actually have to do personally and how much you could farm out to the cloud. Then the AI checked all the costs and legal issues, did all the admin, raised the capital by explaining the idea, risks and costs to other AIs, did all the legal company set-up, organised the logistics, insurance, supply chains, distribution chains, marketing, finance and personnel, and ran the payroll and tax. All you’d have to do is the fun work you wanted to do when you had the idea, and it would find others, or machines, or AI to fill in the rest.

In that sort of world, we’d all be entrepreneurs. I’d have a chain of tea shops and a fashion empire and a media empire, run an environmental consultancy, and be an artist and a designer and a composer and a genetic engineer, with a transport company and a construction empire. I don’t do any of that because I’m lazy and not at all entrepreneurial, my ideas all ‘need work’, the economy isn’t smooth and well run, there are too many legal issues and regulations, and it would all be boring as hell. If we automate it and make it run efficiently, and I can get as much AI assistance as I need or want at every stage, then there is nothing to stop me doing all of it.
I’d create thousands of jobs, and so would many other people, and there would be more jobs than we have people to fill them, so we’d need to build even more AI and machines to fill the gaps caused by the sudden economic boom.

So why the doom? It isn’t justified. The bad news isn’t as bad as people make out, and the good news never gets a mention. Adding it together, AI will stimulate more jobs, create a bigger and a better economy, we’ll be doing far more with our lives and generally having a great time. The few people who will inevitably fall through the cracks could easily be financed by the far larger economy and the very generous welfare it can finance. We can all have the universal basic income as our safety net, but many of us will be very much wealthier and won’t need it.

 

Google v Facebook – which contributes most to humanity?

Please don’t take this too seriously, it’s intended as just a bit of fun. All of it is subjective and just my personal opinion of the two companies.

Google’s old motto of ‘do no evil’ has taken quite a battering over the last few years, but my feeling towards them remains somewhat positive overall. Facebook’s reputation has also become somewhat muddied, but I’ve never been an active user and have always found it supremely irritating when I’ve visited to change privacy preferences or read a post only available there, so I guess I am less positive towards them. I only ever post to Facebook indirectly, via this blog and Twitter. On the other hand, both companies do a lot of good too. It is impossible to infer good or bad intent, because end results arise from a combination of intent and many facets of competence, such as quality of insight, planning, maintenance and response to feedback, among others. So I won’t try to differentiate intent from competence and will just stick to casual amateur observation of the results. To facilitate score-keeping of the value of their various acts, I’ll use a scale from very harmful to very beneficial: -10 to +10.

Google (I can’t bring myself to discuss Alphabet) gave us all an enormous gift of saved time, improved productivity and better self-fulfilment by effectively replacing a day in the library with a 5 second online search. We can all do far more and live richer lives as a result. They have continued to build on that since, adding extra features and improved scope. It’s far from perfect, but it is a hell of a lot better than we had before. Score: +10

Searches give Google a huge and growing data pool covering the most intimate details of every aspect of our everyday lives. You sort of trust them not to blackmail you or trash your life, but you know they could. The fact remains that they actually haven’t. It is possible that they might be waiting for the right moment to destroy the world, but it seems unlikely. Taking all our intimate data but choosing not to end the world yet: Score +9

On the other hand, they didn’t do either of those things purely through altruism. We all pay a massive price: advertising. Advertising is like a tax. Almost every time you buy something, part of the price you pay goes to advertisers. I say almost because Futurizon has never paid a penny yet for advertising and yet we have sold lots, and I assume that many other organisations can say the same, but most do advertise, and altogether that siphons a huge amount from our economy. Google takes lots of advertising revenue, but if they didn’t take it, other advertisers would, so I can only give a smallish negative for that: Score -3

That isn’t the only cost though. We all spend very significant time getting rid of ads, wasting time by clicking on them, finding, downloading and configuring ad-blockers to stop them, re-configuring them to get entry to sites that try to stop us from using ad-blockers, and often paying per MB for unsolicited ad downloads to our mobiles. I don’t need to quantify that to give all that a score of -9.

They are still 7 in credit so they can’t moan too much.

Tax? They seem quite good at minimizing their tax contributions while staying within the letter of the law, while also paying good lawyers to argue about what the letter of the law actually says. Well, most of us try at least a bit to avoid paying taxes we don’t have to pay. Google claims to be doing us all a huge favor by casting light on the gaping holes in international tax law that let them do it, much as a mugger nicely shows you the consequences of inadequate police coverage by enthusiastically mugging you. Noting the huge economic problems caused across the world by global corporates paying far less tax than would seem reasonable to the average small-business owner, I can’t honestly see how this sits comfortably with their do-no-evil mantra. Score: -8

On the other hand, if they paid all that tax, we all know governments would cheerfully waste most of it. Instead, Google chooses to do some interesting things with it. They gave us Google Earth, which at least morally cancels out their ‘accidental’ uploading of everyone’s wireless data as their Street View cars went past. They have developed self-driving cars. They have bought and helped develop DeepMind and their quantum computer. They have done quite a bit for renewable energy. They have spent some on high-altitude communications planes, supposedly to bring internet to rural parts of the developing world. When I were a lad, I wanted to be a rich bastard so I could do all that. Now I watch as the wealthy owners of these big companies do it instead. I am fairly happy with that: I get the results and didn’t have to make the effort. We get less tax, but at least we get some nice toys. Almost cancels. Score +6

They are trying to use their AI to analyse massive data pools of medical records to improve medicine. Score +2

They are also building their databases more while doing that but we don’t yet see the downside. We have to take what they are doing on trust until evidence shows otherwise.

Google has tried many things that were going to change the world and didn’t, but at least they tried. Most of us don’t even try. Score +2

Oh yes, they bought YouTube, so I should factor that in. Mostly harmless and can be fun. Score: +2

Almost forgot Gmail too. Score +3

I’m done. Total Google contribution to humanity: +14

Well done! Could do even better.

I’ve almost certainly overlooked some big pluses and minuses, but I’ll leave it here for now.

Now Facebook.

It’s obviously a good social network site if you want that sort of thing. It lets people keep in touch with each other, find old friends and make new ones. It lets others advertise their products and services, and others to find or spread news. That’s all well and good and even if I and many other people don’t want it, many others do, so it deserves a good score, even if it isn’t as fantastic as Google’s search, that almost everyone uses, all the time. Score +5

Connected, but separate from simply keeping in touch, is the enormous pleasure value people presumably get from socializing. Not me personally, but ‘people’. Score +8

On the downside: quite a lot of problems result from people, especially teens, spending too much time on Facebook. I won’t reproduce the results of all the proper academic studies here, but we’ve all seen various negative reports: people get lower grades in their exams, people get bullied, people become socially competitive – boasting about their successes while other people feel insecure or depressed because others seem to be doing better, or are prettier, or have more friends. Keeping in touch is good, but cutting bits off others’ egos to build your own isn’t. It is hard not to conclude that the negative uses of keeping in touch outweigh the positive ones. Long-lived bad feelings outweigh short-lived ego boosts. Score: -8

Within a few years of its birth, Facebook evolved from a keeping-in-touch platform into a general-purpose mini-web. Many people were using Facebook to do almost everything that others would do on the entire web. Being in a broom cupboard is fine for 5 minutes if you’re playing hide and seek, but it is not desirable as a permanent state. Still, it is optional, so it isn’t that bad per se: Score: -3

In the last 2 or 3 years, it has evolved further, albeit probably unintentionally, to become a political bubble, as became very obvious in Brexit and the US Presidential Election, though it was already apparent well before those. Facebook may not have caused the increasing divide we are seeing between left and right across the whole of the West, but it amplifies it. Again, I am not implying any intent, just observing the result. Most people follow people and media that echo their own value judgments. They prefer resonance to dissonance. They prefer to have their views reaffirmed than disputed. When people find a comfortable bubble where they feel they belong, and stay there, it is easy for tribalism to take root and flourish, with demonization of the other not far behind. We are now seeing that in our bathtub society, with two extremes and a rapidly shallowing in-between that not long ago was the vast majority. Facebook didn’t create human nature; rather, it is a victim of it, but it nonetheless provides a near-monopoly social network that facilitates such political bubbles and their isolation, while doing far too little to encourage integration in spite of its plentiful resources. Dangerous and Not Good. Score -10

On building databases of details of our innermost lives, managing not to use the data to destroy our lives but instead only using it to sell ads, they compare with Google. I’ll score that the same total for the same reasons: Net Score -3

Tax? Quantities are different, but eagerness to avoid tax seems similar to Google. Principles matter. So same score: -8

Assorted messaging counts as additional to the pure social networking side, I think, so I’ll generously give them an extra bit for that: Score +2

Like Google, they occasionally do good things with their money. They are also developing a high-altitude internet, and are playing with space exploration. There is a tiny bit of AI work, but not much else has crossed my consciousness. It is far less than Google but still positive, so I’ll score: +3

I honestly can’t think of any other significant contributions from Facebook to make the balance more positive, and I tried. I think they want to make a positive contribution, but are too focused on income to tackle the social negatives properly.

Total Facebook contribution to humanity: -14.
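For what it’s worth, the arithmetic does add up. The Facebook scores above can be totted up in a couple of lines; the category labels are just my own shorthand for the paragraphs they came from:

```python
# Tally of the Facebook scores given above (labels are my own shorthand).
facebook_scores = {
    "social networking": +5,
    "pleasure of socializing": +8,
    "teen problems / ego damage": -8,
    "mini-web broom cupboard": -3,
    "political bubbles": -10,
    "database building": -3,
    "tax avoidance": -8,
    "assorted messaging": +2,
    "good works": +3,
}
total = sum(facebook_scores.values())
print(total)  # -14
```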

Oh dear! Must do better.

Conclusion: We’d be a lot worse off without Google. Even with their faults, they still make a great contribution to humankind. Maybe not quite a ‘do no evil’ rating, but certainly they qualify for ‘do net good’. On the other hand, sadly, I have to say that my analysis suggests we’d be a lot better off without Facebook. As much better off without them as we benefit by having Google.

If I have left something major out, good or bad, for either company please feel free to add your comments. I have deliberately left out their backing of their own political leanings and biases because whether you think they are good or bad depends where you are coming from. They’d only score about +/-3 anyway, which isn’t a game changer.

Future sex, gender and relationships: how close can you get?

Using robots for gender play

I recently gave a public talk at the British Academy about future sex, gender, and relationships, asking the question “How close can you get?”, considering particularly the impact of robots. The slide above is an example. People will one day (between 2050 and 2065, depending on their budget) be able to use an android body as their own, or even swap bodies with another person. Some will do so to be young again, many will do so to swap gender. Lots will do both. I often enjoy playing as a woman in computer games, so why not ‘come back’ and live all over again as a woman for real? Except I’ll be 90 in 2050.

The British Academy kindly uploaded the audio track from my talk at

If you want to see the full presentation, here is the PowerPoint file as a pdf:

sex-and-robots-british-academy

I guess it is theoretically possible to listen to the audio while reading the presentation. Most of the slides are fairly self-explanatory anyway.

Needless to say, the copyright of the presentation belongs to me, so please don’t reproduce it without permission.

Enjoy.

Christmas in 2040

I am cheating with this post, since I did a newspaper interview that writes up some of my ideas and will save time rewriting it all. Here’s a link:

https://www.thesun.co.uk/living/2454633/dinner-cooked-by-robots-no-wrapping-paper-and-video-make-up-for-the-office-party-this-is-what-christmas-will-look-like-in-2040-according-to-futurologist-dr-ian-pearson/

I hope you all have a wonderful Christmas.

Chat-bots will help reduce loneliness, a bit

Amazon is really pushing its Echo and Dot devices at the moment and some other companies also use Alexa in their own devices. They are starting to gain avatar front ends too. Microsoft has their Cortana transforming into Zo, Apple has Siri’s future under wraps for now. Maybe we’ll see Siri in a Sari soon, who knows. Thanks to rapidly developing AI, chatbots and other bots have also made big strides in recent years, so it’s obvious that the two can easily be combined. The new voice control interfaces could become chatbots to offer a degree of companionship. Obviously that isn’t as good as chatting to real people, but many, very many people don’t have that choice. Loneliness is one of the biggest problems of our time. Sometimes people talk to themselves or to their pet cat, and chatting to a bot would at least get a real response some of the time. It goes further than simple interaction though.

I’m not trying to understate the magnitude of the loneliness problem, and chat-bots can’t solve it completely of course, but I think they will benefit at least some lonely people in a few ways. Simply having someone to chat to will already be of some help. People will form emotional relationships with bots that they talk to a lot, especially once those bots have a visual front end such as an avatar. It will help some people to develop and practice social skills if that is their problem, and for many others who feel left out of local activity, it might offer real-time advice on what is on locally in the next few days that might appeal to them, based on their conversations. Talking through problems with a bot can also help almost as much as doing so with a human. In ancient times when I was a programmer, I’d often solve a bug by trying to explain how my program worked, and in doing so I would see the bug myself. Explaining it to a teddy bear would have been just as effective; the chat was just a vehicle for checking through the logic from a new angle. The same might apply to interactive conversation with a bot. Sometimes lonely people can talk too much about problems when they finally meet people, and that can act as a deterrent to future encounters, so that barrier would also be reduced. All in all, having a bot might make lonely people more able to get and sustain good quality social interactions with real people, and make friends.
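Even a primitive bot can provide the kind of prompted back-and-forth described above. Here is a toy sketch of a rule-based companion bot, purely for illustration: the patterns and canned replies are invented for this example, and real assistants such as Alexa or Cortana use vastly more sophisticated language processing.

```python
import re

# Toy rule-based companion bot: a sketch only. The patterns and replies
# below are invented for illustration; real assistants use far more
# sophisticated natural language processing.
RULES = [
    (re.compile(r"\b(lonely|alone)\b", re.I),
     "That sounds hard. Would you like to hear what's on locally this week?"),
    (re.compile(r"\b(bug|problem|stuck)\b", re.I),
     "Talk me through it step by step - explaining often reveals the answer."),
    (re.compile(r"\b(hello|hi)\b", re.I),
     "Hello! How has your day been?"),
]

def reply(message: str) -> str:
    """Return the canned response for the first matching rule."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return "Tell me more about that."  # default: invite elaboration

print(reply("Hi there"))               # greeting rule fires
print(reply("I'm stuck on a problem")) # 'talk it through' rule fires
```

The last rule type is the interesting one: simply being prompted to explain a problem step by step is often enough for the speaker to spot the answer themselves, exactly like explaining a bug to a teddy bear.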

Another benefit that has nothing to do with loneliness is that giving a computer voice instructions forces people to think clearly and phrase their requests correctly, just like writing a short computer program. In a society where so many people don’t seem to think very clearly or even if they can, often can’t express what they want clearly, this will give some much needed training.

Chatbots could also offer challenges to people’s thinking, even to help counter extremism. If people make comments that go against acceptable social attitudes or against known facts, a bot could present the alternative viewpoint, probably more patiently than another human who finds such viewpoints frustrating. I’d hate to see this as a means to police political correctness, though it might well be used in such a way by some providers, but it could improve people’s lack of understanding of even the most basic science, technology, culture or even politics, so has educational value. Even if it doesn’t convert people, it might at least help them to understand their own views more clearly and be better practiced at communicating their arguments.

Chat-bots could make a significant contribution to society. They are just machines, but machines that can help individuals, and society as a whole, to function more effectively.