Will networking make the world safer?

No.

If you want a more detailed answer:

A long time ago when the web was young, we all hoped networking would make a better world. Everyone would know of all the bad things going on and would all group together and stop them. With nowhere to hide, oppressors would stop oppressing. 25 years on…

Since then, we’ve had spectacularly premature announcements of how the internet, and social networking in particular, was responsible for bringing imminent peace to the world as the Arab Spring emerged, followed not long after by proof of the naivety of such assumptions.

The pretty good global social networking we already have has also failed to eradicate oppression of women in large swathes of the world, hasn’t solved hunger or ensured universal supply of clean fresh water. It has, however, allowed ISIS to recruit better and spread their propaganda, and may be responsible for much of the political breakdown we are now seeing, with communities at each other’s throats that used to get along in mutual live-and-let-live.

The nets have so far failed to deliver on their promise, but that doesn’t necessarily mean they never will. On the other hand, the evidence so far suggests that many people simply misunderstood the consequences of letting people communicate better. A very large number of people believe you can solve any problem by talking about it. That clearly isn’t true.

The assumption that if only you took the time to get to know other people and understand their point of view, you would get on well, live peacefully, and all problems would somehow evaporate, is simply wrong. People on both sides must want to solve the problem to make that work. If only one side wants to solve it, talking about it can actually increase conflict.

Talking helps people understand what they have in common, but it also exposes and potentially reinforces the areas where they differ. I believe that is why we are experiencing such vicious political debate lately. The people on each side, in each tribe if you like, can find one another, communicate, bond, and identify a common enemy. With lots of new-found allies, they feel more confident to attack, more confident of the size of their tribe and of their moral superiority, assured via frequent reinforcement of their ideas.

Then, as in much tribal warfare over millennia, it is no longer enough to find a peace agreement; the other side must be belittled, demonized, subjugated and destroyed. That is a very real impact of the net, magnifying the tribal conflicts built into human nature. Talking can be good, but it can also become counterproductive, revealing weaknesses, magnifying differences, and fostering hatred where there was once indifference.

Given that increasing communication cuts both ways, making it better and better might not help peace and love to prosper. Think about that a bit more. Suppose ISIS, instead of the basic marketing videos they use today, were to use a fully immersive virtual reality vision of the world they want to create, sanitized to show and enhance those areas of their vision that they want recruits to see. Suppose recruits could see how they might flourish and reign supreme over us infidel enemies, eradicating us while choosing which 72 virgins to have. Is that improvement in communications likely to help eradicate terrorism, or to increase it?

Sure, we can talk better to our enemies to discuss solutions and understand their ways and cultures so we can empathize better. Will that make peace with ISIS? Of course it won’t. Only the looniest and most naive would think otherwise. 

What about less extreme situations? We have everyday tribalism all around us all the time, but we now also have social reinforcement via social networks. People who once thought they held minority viewpoints, and so kept relatively quiet, can now find others with similar views, feel more powerful, and become more vocal and even aggressive. If you are the only one in a village with an extreme view, you might previously have self-censored to avoid being ostracized. If you become part of a worldwide community of millions of like mind, it is more tempting to air those views and become an activist, knowing you have backup. With the added potential anonymity conferred by the network and no fear of physical attack, some people become more aggressive.

So social networks have increased the potential for tribal aggression as well as making people more aware of the world around them. On balance, it seems that the tribal forces have been amplified more than the forces that reduce oppression. Even those who claim to be defending others often do so more aggressively. Gentle persuasion is frequently replaced by inquisitions, witch hunts, and fierce, destructive attacks.

If so, social networking is a bad thing overall in terms of peaceful coexistence. Meeting new people and staying in touch with friends and family remain strongly beneficial to personal emotional well-being and to cohesion within tribes. It is the combination of the enhanced personal feeling of security and the consequent bravery to engage in tribal conflict that is dangerous.

We see this new conflict in politics, religion, sexual attitudes, gender relations, racial conflicts, cultural conflicts, age, even in adherence to secular religions such as warmism. But especially in politics now; left and right no longer tolerate each other and the level of aggression between them increases continually.

If this increasing aggression and intolerance is really due to better social networking, then it is likely to get even worse as more and more people worldwide come online for longer and learn to use social networking tools more effectively.

As activists see more evidence that networking use produces results and reinforces their tribe and their effectiveness, they will do more of it. More activism will produce more extremism, leading to even more activism and more extremism. This circle of reinforcement might be very hard to escape. We may be doomed to more and more extremism, more aggressive relations between groups with different opinions, a society that is highly intolerant, and potentially unstable.

It is very sad that the optimism of the early net has been replaced by the stark reality of human nature. Tribal warfare goes back millennia, but was kept in check by geographic separation. Now that global migration and advanced social networking are mixing the tribes together, the inevitable conflicts are given a new and better equipped battlefield.

The IT dark age – The relapse

I long ago used a slide in my talks about the IT dark age, showing how we’d come through a period (early 90s) where engineers were in charge and it worked, into an era where accountants had got hold of it and were misusing it (mid 90s), followed by a terrible period where administrators discovered it and used it in the worst ways possible (late 90s, early 00s). After that dark age, we started to emerge into an age of IT enlightenment, where the dumbest of behaviors had hopefully been filtered out and we were starting to use it correctly and reap the benefits.

Well, we’ve gone into relapse. We have entered a period of uncertain duration in which the hard-won wisdom we’d accumulated and handed down has been thrown in the bin by a new generation of engineers, accountants and administrators, and some extraordinarily stupid decisions and system designs are once again being made. The new design process is apparently quite straightforward: What task are we trying to solve? How can we achieve this in the least effective, least secure, most time-consuming, most annoying, most customer-loyalty-destroying way possible? Now, how fast can we implement that? Get to it!

If aliens landed and looked at some of the recent ways we have started to use IT, they’d conclude that this was all a green conspiracy, designed to make everyone so anti-technology that we’d be happy to throw hundreds of years of progress away and go back to the 16th century. Given how successful they have been in destroying so much of the environment under the banner of protecting it, there is ample evidence that greens really haven’t a clue what they are doing. Worse still, gullible political and business leaders will cheerfully do the exact opposite of what they want, as long as the right doublespeak is used when they’re sold the policy.

The main Green laboratory in the UK is the previously nice seaside town of Brighton. Despite being an extreme socialist party that one might think would be a binperson’s best friend, the Greens in charge nevertheless managed to force their binpeople to go on strike, turning what ought to be an environmental paradise into a stinking, litter-strewn cesspit for several weeks. They’ve also managed to create near-permanent traffic gridlock, seemingly designed to maximise the amount of air pollution and CO2 they can get from the traffic.

More recently, they have decided to change their parking meters for the very latest IT. No longer do you have to reach into your pocket, push a few coins into a machine and carry a paper ticket all the way back to your car windscreen. Such a tedious process consumed up to a minute of your day. It simply had to be replaced with proper modern technology. There are loads of IT solutions to pick from, but the Greens apparently decided to go for the worst possible implementation, resulting in numerous press reports about how awful it is. IT should not be awful; it can and should be done in ways that are better in almost every way than old-fashioned systems. I rarely drive and go to Brighton only occasionally, but I am still annoyed at incompetent or deliberate misuse of IT.

If I were to go there by car, I’d also have to go via the Dartford Crossing, where again, inappropriate IT has been used incompetently to replace a tollbooth system for a charge that makes no economic sense in the first place. The government would be better off if it simply paid for the crossing directly. Instead, each person using it is likely to be fined if they don’t know how the new system operates, and even if they do, they have to spend far more time and effort to pay than before. Again, it is a severe abuse of IT, conferring a tiny benefit on a tiny group of people at the expense of a significant extra load on very many people.

Another financial example is the migration to self-pay terminals in shops. In Stansted Airport’s W H Smith a couple of days ago, I sat watching a long queue of people taking forever to buy newspapers. Instead of a few seconds handing over a coin and walking out, it was taking a minute or more to read menus, choose which buttons to touch, inspect papers to find barcodes, fumble for credit cards, check some more boxes, check they hadn’t left their boarding pass or paper behind, and finally leave. An assistant stood there idle, watching people struggle instead of serving them in a few seconds. I wanted a paper, but the long queue was sufficient deterrent and they lost the sale. Who wins in such a situation? The staff who lost their jobs certainly didn’t. I as the customer had no paper to read, so I didn’t win. Given all the lost sales, I would be astonished if W H Smith were better off, so they didn’t win either. The airport will likely make less from their take too. Even the terminal manufacturing industry only swaps one type of POS terminal for another with marginally different costs. I’m not knocking W H Smith in particular; they are just one of loads of companies doing this now. But it isn’t progress, it is going backwards.

When I arrived at my hotel, another electronic terminal was replacing a check-in assistant with a check-in terminal usage assistant. He was very friendly and helpful, but check-in wasn’t any easier or faster for me, and the terminal design still needed him to be there too because, like so many others, it was designed by people who have zero understanding of how other people actually do things. Just like those ticket machines in rail stations that we all detest.

When I got to my room, the thermostat used a tiny LCD panel with tiny meaningless symbols, no backlight, and black text on a dark green background, in a dimly lit room. Even after finding my reading glasses, and since I hadn’t brought a torch with me, I couldn’t see a thing on it, so I couldn’t use the air conditioning. An on/off switch and a simple wheel with temperatures marked on it used to work perfectly well. If it ain’t broke, don’t do your very best to totally wreck it.

These are just a few everyday examples, alongside other everyday IT abuses such as minute fonts and frequent use of meaningless icons instead of straightforward text. IT is wonderful. We can make devices with absolutely superb capability for very little cost. We can make lives happier, better, easier, healthier, more prosperous, even more environmentally friendly.

Why then are so many people so intent on using advanced IT to drag us back into another dark age?

Apple’s watch? No thanks

I was busy writing a blog post about how technology often barks up the wrong trees when news appeared of the specs for the new Apple watch, which seems to crystallize the problem magnificently. So I got somewhat diverted, and the main blog post can wait till I have some more free time, which isn’t today.

I confess that my comments (this is not a review) are based on the specs I have read about it. I haven’t actually got one to play with, but I assume that the specs listed in the many reviews out there are more or less accurate.

Apple’s new watch barks up a tree we already knew was bare. All through the 1990s Casio launched a series of watches with all kinds of extra functions including pulse monitoring and biorhythms and phone books, calculators and TV remote controls. At least, those are the ones I’ve bought. Now, Casio seem to focus mainly on variations of the triple sensor ones for sports that measure atmospheric pressure, temperature and direction. Those are functions they know are useful and don’t run the battery down too fast. There was even a PC watch, though I don’t think that one was Casio, and a GPS watch, with a battery that lasted less than an hour.

There is even less need now for a watch that does a range of functions that are easily done on a smartphone, and that is the Apple watch’s main claim to existence – it can do the things your phone does, but on a smaller screen. Hell, I’m 54; I use my tablet to do the things younger people with better eyesight do on their mobile phone screens. The last thing I want is an even smaller screen. I only use my phone for texts and phone calls, and alarms only if I don’t have my Casio watch with me – they are too hard to set on my Tissot.

The main advantage of a watch is its contact with the skin, allowing it to monitor the skin surface and the blood passing below, and also to pick up electrical activity. However, it is the sensor that does this, and any processing of that sensor data could and should be outsourced to the smartphone. Adding other things to the watch, such as playing music, loads far too much demand onto what has to be a tiny energy supply. The Apple watch only manages a few hours of life if used for more than the most basic functions, and then needs 90 minutes on a charger to get 80% charged again.

By contrast, last month I spent all of 15 minutes and £0.99 googling the battery specs and replacement process, buying, unpacking and actually changing the batteries on my Casio Protrek after 5 whole years, which means the Casio batteries last around 12,500 times as long and the average time I spend on battery replacement is half a second per day. My Tissot Touch batteries also last 5 years, and it does the same things. Meanwhile, I struggle to remember to charge my iPhone, and when I do remember, it is very often just before I need it, so I frequently end up making calls with it plugged into the charger. My watch would soon move to a drawer if it needed charging every day and I could only use it sparingly during that day.
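
For anyone who wants to check the arithmetic, here is a rough back-of-envelope sketch. The 3.5-hour figure for the Apple watch under heavy use is an assumption chosen to make the comparison concrete; the Casio figures are the ones above.

```python
# Rough battery comparison. The Apple watch heavy-use figure is an assumption;
# the Casio figures are as described above.
HOURS_PER_YEAR = 365.25 * 24

casio_life_hours = 5 * HOURS_PER_YEAR   # ~43,800 hours between battery changes
apple_life_hours = 3.5                  # assumed heavy-use battery life

print(casio_life_hours / apple_life_hours)   # ~12,500x longer between top-ups

# Time cost of keeping the Casio going: 15 minutes once every 5 years.
maintenance_seconds = 15 * 60
days_in_5_years = 5 * 365.25
print(maintenance_seconds / days_in_5_years)  # ~0.5 seconds per day
```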

So the Apple watch might appeal briefly to gadget freaks who are desperate to show off, but I certainly won’t be buying one. As a watch, it fails abysmally. As a smartphone substitute, it also fails. As a simple sensor array with the processing and energy drain elsewhere, it fails yet again. As a status symbol, it would show only that I am desperate for attention and to show off my wealth, so it fails there too. It is an extra nuisance, an extra thing to remember to charge, and utterly pointless. If I were given one free, I’d play with it for a few minutes and then put it in a drawer. If I had to pay for one, I’d maybe pay a pound for its novelty value.

No thanks.

Better representational democracy

We’re on the run-up to a general election in the UK. In theory, one person gets one vote, all votes are equal and every person gets equal representation in parliament. In practice it is far from that. Parties win seats in proportions very different from their share of the votes. Some parties get ten times more seats per vote than others, which is far from fair and distorts the democratic working of parliament. The situation is made even worse by the particulars of UK party politics in this next election, where there is unlikely to be a clear winner and we will probably end up with a coalition government. The representational distortion that already exists is amplified even further when a party gets far more seats than it justifies and thereby has far greater power in negotiating a place in a coalition.

For decades, the UK electoral system worked fine for the two-party system – Labour and Conservative (broadly equivalent to Democrat and Republican in the USA). Labour wins more seats per vote than the Conservatives because of the geographic distribution of their voter base, but the difference has been tolerable. The UK’s third party, the Liberal Democrats, generally won only a few seats even when they won a significant share of the vote, because their support was thinly spread across the country, so they achieved a local majority in very few places. The Conservatives generally had a majority in most southern seats and Labour had a majority in most northern seats.

Now we have a very different mixture. Scotland has the SNP, and we have the Greens, UKIP, the Lib Dems, Conservatives and Labour. A geographic party like the SNP will always win far more seats per vote because, instead of being spread across the whole country, their voters are concentrated in a smaller region where they make up a higher average proportion and therefore win more local majorities. By contrast, the Lib Dems have their voters spread thinly across the whole country with a few pockets of strong support, and UKIP and the Greens are also pretty uniformly dispersed, so reaching a majority anywhere is very difficult. Very few seats are won by parties that don’t have 30% or more of the national vote. For the three bottom parties, that results in gross under-representation in parliament. A party could win 20% of the votes and still get no seats. Or they could have only 2% of the vote but win 10% of the seats if their voters are concentrated in one region.

A Channel 4 blog provides a good analysis of the problem that discusses distortion effects of turnout, constituency size and vote distribution which saves me having to repeat it all:

http://blogs.channel4.com/factcheck/factcheck-voting-system-rigged-favour-labour/19025

Looking to the future, I believe an old remedy would help a lot in leveling the playing field:

Firstly, if a party wins more than a certain percentage of votes, say 1%, they should be allocated at least one seat, if necessary a seat without a constituency. Secondly, once a party has one or more seats, those seats can have their parliamentary votes scaled according to the number of votes their party won. The block voting idea has been used by trades unions for decades; it isn’t new. I find it astonishing that it hasn’t already been implemented.

So a party with 5 seats that won 15% of the vote would get the same say on a decision as one with 50 seats that also won 15% of the vote, even though they have far fewer seats. In each case, the 15% who voted for them would see the correct representation in decision-making. Parties such as the Greens, Libdems and UKIP would have a say in Parliament representative of their level of support in the electorate. The larger parties Labour and Conservatives would have far less say, but one that is representative of their support. The SNP would have to live with only having as much power as the voter numbers they represent, a fraction of what they will likely achieve under this broken present system.
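
A minimal sketch of how that block voting could work in a division, using entirely made-up parties and numbers: each party’s total say equals its national vote share, spread across however many seats it happens to hold.

```python
# Illustrative block-voting sketch. Parties and figures are hypothetical;
# a party's total weight in a division equals its national vote share,
# however many seats that share happened to win.
parties = {
    # name: (seats won, national vote share)
    "Party A": (50, 0.15),
    "Party B": (5, 0.15),
    "Party C": (300, 0.35),
}

def division_weights(parties):
    """Return each party's block weight and the scaled vote carried by each MP."""
    weights = {}
    for name, (seats, vote_share) in parties.items():
        if seats == 0:
            continue  # under the proposal, >1% of the vote guarantees at least one seat
        weights[name] = {
            "block_weight": vote_share,           # total say in any division
            "weight_per_mp": vote_share / seats,  # each MP's scaled vote
        }
    return weights

for name, w in division_weights(parties).items():
    print(name, w)
# Party A and Party B both carry 15% of the decision, despite holding 50 and 5 seats.
```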

That would be fair. MPs would still be able to talk, make arguments, win influence and take places on committees. We would still have plenty of diversity to ensure a wide enough range of opinions are aired when debating. But when a decision is made, every voter in the country gets equal representation, and that is how democracy is supposed to be.

Further refinements might let voters split their vote between parties, but let’s concentrate on making the playing field at least a bit level first.

Estimating potential UK Islamist terrorism: IRA x 13

I wrote last June about the potential level of Islamist terrorism in the UK, where I used a comparison with the Northern Ireland troubles. It is a useful comparison because, thanks to various polls and surveys, we know the ratio of actual active terrorist numbers there to the size of the supporter community.

The majority of people there didn’t support the violence, but quite a lot did, about 30% of the community. From the nationalist community of 245,000, the 30% (around 75,000) who supported violence yielded only around 300 front-line IRA ‘terrorists’ and another 450 in ‘support roles’ at any one time. The terrorist population churned, with people leaving and joining the IRA throughout, but around 1% of 30% of that 245,000 were IRA members at any one time.

We’ve recently had another survey on UK Muslims conducted for the BBC that included attitudes to violence. You can read the figures from the survey here:

http://comres.co.uk/wp-content/uploads/2015/02/BBC-Today-Programme_British-Muslims-Poll_FINAL-Tables_Feb2015.pdf

The figures they found are a little worse than the estimates I used last year, and we have slightly higher population estimates too, so it is time to do an update. The 30% support for violence attributed to the Northern Ireland nationalist community is very similar to the 32% found for the UK Muslim community. Perhaps 30% violence support is human nature rather than peculiar to a particular community. Perhaps all that is needed is a common grievance.

In the wake of the Charlie Hebdo attacks, 68% of UK Muslims claimed that they didn’t think violence was justified if someone ‘publishes images of the Prophet Mohammed’. The survey didn’t specify what kind of images of the Prophet were to be hypothetically published, or even that they were insulting, it just said ‘images’. That 68% gives us a first actual figure for what is often referred to as ‘the overwhelming peaceful majority of Muslims in Britain’. 32% either said they supported violence or wouldn’t say.

(The survey also did not ask the non-Muslim population whether they would support violence in particular circumstances, and I haven’t personally found the people I know in Great Britain to be more civilized than those I knew in Northern Ireland. If the same 30% applies when a common grievance exists, then at least we can take some comfort that we are all the same when we are angry over something.)

Some other surveys around the world in the last few years have confirmed that only around 30% of Muslims support violence against those who offend Islam. Just like in Northern Ireland, almost all of those supporters would not get directly involved in violence themselves, but would simply approve of it when it happens.

Let’s translate that into an estimate of potential Islamist terrorism. There are no accurate figures for the UK Muslim population, but it is likely now to be around 3 million. Around 32% of that is around a million; there is no point aiming for higher precision than that since the data just doesn’t exist. So around a million UK Muslims would state some support for violence. From that million, only a tiny number would be potential terrorists. The IRA drew its 750 members from a violence supporter base of 75,000, so about one percent of supporters of violence were prepared to be IRA members and only 40% of those joined the equivalent of ‘active service units’, i.e. the ones that plant bombs or shoot people.

Another similarity to Northern Ireland is that the survey found that 45% of UK Muslims felt that prejudice against them made it difficult to live here, while in Northern Ireland 45% of nationalists supported the political motives of the IRA even if only 30% condoned its violence, so the level of grievance against the rest of the population seems similar. Given that similarity, and that the 32% violence support level is also similar, it is only a small leap of logic to assume that the same 1% rate of recruitment into terrorist groups might also apply. Taking 1% of 1 million suggests that if Islamist violence were to achieve critical mass, a steady 10,000 UK Muslims might eventually belong to Islamist terrorist groups, with 40% of those (around 4,000, or 0.4% of the supporter base) in front-line roles. By comparison, the IRA at its peak had 750 members, with 300 on the front line.
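
The whole estimate is simple arithmetic, so here it is laid out explicitly. The only inputs are the rough figures quoted above, and none of them justify more than one significant figure.

```python
# Back-of-envelope estimate using the rough figures quoted above.
uk_muslim_population = 3_000_000      # itself only an estimate
violence_support_rate = 0.32          # BBC/ComRes survey figure
recruitment_rate = 0.01               # IRA members as a share of violence supporters
front_line_share = 0.40               # IRA front-line share of total membership

supporters = uk_muslim_population * violence_support_rate     # ~1 million
potential_members = supporters * recruitment_rate             # ~10,000
potential_front_line = potential_members * front_line_share   # ~4,000

ira_members, ira_front_line = 750, 300
print(potential_members / ira_members)        # ~13x the IRA's peak membership
print(potential_front_line / ira_front_line)  # ~13x its front-line strength
```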

So based on this latest BBC survey, if Islamists are allowed to get a grip, Islamist terrorists in the UK could be about 13 times as numerous as the IRA at the height of ‘The Troubles’. There is a further comparison to be had between an ISIS-style terrorist and an IRA-style terrorist, but that is too subjective to quantify, except to note that the IRA at least used to give warnings for most of their bombs.

That is only one side of the potential conflict of course, and the figures for far right opposition groups suggest an anti-Islamist terrorist response that might not be much smaller. Around 1.25 million support far right groups, and I would guess that more than 30% of those would support violence and more would be willing to get directly involved, so with a little hand-waving the problem looks symmetrical, just as it was in Northern Ireland.

If the potential level of violence is 13 times worse than the height of the Troubles, it is clearly very important that Islamists are not allowed to get sufficient traction or we will have a large problem. We should also be conscious that violence in one region might spread to others and this could extend to a European problem. On a positive note, if our leaders and security forces do their jobs well, we may see no significant problem at all.

The future of publishing

There are more information channels now than ever. These include thousands of new TV and radio channels that are enabled by the internet, millions of YouTube videos, new electronic book and magazine platforms such as tablets and mobile devices, talking books, easy print-on-demand, 3D printing, holograms, games platforms, interactive books, augmented reality and even AI chatbots, all in parallel with blogs, websites and social media such as Facebook, LinkedIn, Twitter, Pinterest, Tumblr and so on. It has never been easier to publish something. It no longer has to cost money, and many avenues can even be anonymous so it needn’t even cost reputation if you publish something you shouldn’t. In terms of means and opportunity, there is plenty of both. Motive is built into human nature. People want to talk, to write, to create, to be looked at, to be listened to.

That doesn’t guarantee fame and fortune. Tens of millions of electronic books are written by software every year – mostly just themed copy and paste collections using content found online –  so that already makes it hard for a book to be seen, even before you consider the millions of other human authors. There are hundreds of times more new books every year now than when we all had to go via ‘proper publishers’.

The limiting factor is attention. There are only so many eyeballs, they only have a certain amount of available time each day, and they are very spoiled for choice. Sure, we’re making more people, but population takes decades to double, whereas published material volume doubles every few months. That means ever more competition for the attention of those eyeballs.
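
To put rough numbers on that mismatch, here is a quick illustration. The doubling times are assumptions picked to match the broad claim above, not measurements.

```python
# Illustrative comparison of content growth versus audience growth.
# Doubling times are assumptions, not data.
content_doubling_years = 0.5       # "every few months"
population_doubling_years = 40.0   # decades rather than months

years = 10
content_growth = 2 ** (years / content_doubling_years)        # ~1,000,000x
population_growth = 2 ** (years / population_doubling_years)  # ~1.2x

# Content competing for each pair of eyeballs, roughly:
print(content_growth / population_growth)  # ~900,000x more per reader after a decade
```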

When there is a glut of material available for consumption, potential viewers must somehow decide what to look at to make the most of their own time. Conventional publishing had that sorted very well. Publishers only published things they knew they could sell, made sure the work was done to a high quality – something it is all too easy to skip when self-publishing – and devoted the largest marketing budgets to those products that had the greatest potential. That was mostly determined by how well known the author was and how well liked their work was. So when you walked through a bookshop door, you were immediately faced with the books most people wanted. New authors took years of effort to get to those places, and most never did. Now, it is harder still. Self-publishing authors can hit the big time, but it is very hard to do so, and very few make it.

Selling isn’t the only motivation for writing. Writing helps me formulate ideas, flesh them out, debug them, and tidy them up into cohesive arguments or insights. It helps me maintain a supply of fresh and original content that I need to stay in business. I write even when I have no intention of publishing, and a large fraction of my writing stays as drafts, never published, having served its purpose during the act of writing. (Even so, when I do bother to write a book, it is still very nice if someone wants to buy it.) It is also fun to write, and rewarding to see a finished piece appear. My sci-fi novel Space Anchor was written entirely for the joy of writing. I had a fantastic month writing it. I started on 3 July and published on the 29th. I woke every night with ideas for the next day and couldn’t wait to get up and start typing. When I ran out of ideas, I typed its final paragraphs, lightly edited it and published.

The future of writing looks even more fun. Artificial intelligence is nowhere near the level yet where you can explain an idea to a computer in ordinary conversation and tell it to get on with it, but it will be one day, fairly soon. Interactive writing using AI to do the work will be very reward-rich, creativity-rich, a highly worthwhile experience in itself regardless of any market. Today, it takes forever to write and tidy up a piece. If AI does most of that, you could concentrate on the ideas and story, the fun bits. AI could also make suggestions to make your work better. We could all write fantastic novels. With better AI, it could even make a film based on your ideas. We could all write sci-fi films to rival the best blockbusters of today. But when there are a billion fantastic films to watch, the same attention problem applies. If nobody is going to see your work because of simple statistics, then that is only a problem if your motivation is to be seen or to sell. If you are doing it for your own pleasure, then it could be just as rewarding, maybe even more so. A lot of works would be produced simply for pleasure, but that still dilutes the marketplace for those hoping to sell.

An AI could just write all by itself and cut you out of the loop completely. It could see what topics are currently fashionable and instantaneously make works to tap that market. Given the volume of computer-produced books we already have, adding high level AI could fill the idea space in a genre very quickly. A book or film would compete against huge numbers of others catering to similar taste, many of which are free.

AI also extends the market for cooperative works. Groups of people could collaborate with AI doing all the boring admin and organisation as well as production and value add. The same conversational interface would work just as well for software or app or website production, or setting up a company. Groups of friends could formulate ideas together, and produce works for their own consumption. Books or films that are made together are shared experiences and help bind the group together, giving them shared stories that each has contributed to. Such future publication could therefore be part of socialization, a tribal glue, tribal identity.

This future glut of content doesn’t mean we won’t still have best sellers. As the market supply expands towards infinity, the attention problem means that people will be even more drawn to proven content suppliers. Brands become more important. Production values and editorial approach become more important. People who really understand a market sector and have established a strong presence in it will do even better as the market expands, because customers will seek out trusted suppliers.

So the future publishing market may be a vast sea of high quality content, attached to even bigger oceans of low quality content. In that world of virtually infinite supply, the few islands where people can feel on familiar ground and have easy access to a known and trusted quality product will become strong attractors. Supply and demand equations normally show decreasing price as supply rises, but I suspect that starts to reverse once supply passes a critical point. Faced with an infinite supply of cheap products, people will actually pay more to narrow the choice. In that world, self-publishing will primarily be self-motivated, for fun or self-actualization with only a few star authors making serious money from it. Professional publishing will still have most of the best channels with the most reliable content and the most customers and it will still be big business.

I’ll still do both.

The future of freedom of speech

This is mainly about the UK, but some applies elsewhere too.

The UK police are in trouble yet again for taking the side of criminals against the law-abiding population. Our police seem to have frequent trouble understanding the purpose of their existence. This time, in the wake of the Charlie Hebdo murders, some police forces decided that their top priority was not to protect freedom of speech, nor to protect law-abiding people from terrorists, but instead to visit the newsagents that were selling Charlie Hebdo and get the names of people buying copies. Charlie Hebdo has become synonymous with the right to exercise freedom of speech, and by taking names of its buyers, those police forces have clearly decided that Charlie Hebdo readers are the problem, not the terrorists. Some readers might indeed present a threat, but so might anyone in the population. Until there is evidence to suspect a crime, or at the very least the plotting of a crime, it is absolutely no rightful business of the police what anyone does. Taking names of buyers treats them as potential suspects for future hate crimes. It is all very ‘Minority Report’, mixed with more than a touch of ‘Nineteen Eighty-Four’. It is highly disturbing.

The Chief Constable has since clarified to the forces that this was overstepping the mark, and one of the offending forces has since apologised. The others presumably still think they were in the right. I haven’t yet heard any mention of them saying they have deleted the names from their records.

This behavior is wrong but not surprising. The UK police often seem to have socio-political agendas that direct their priorities and practices in upholding the law, individually and institutionally.

Our politicians often pay lip service to freedom of speech while legislating for the opposite. Clamping down on press freedom and the creation of thought crimes (aka hate crimes) have both used the excuse of relatively small abuses of freedom to justify taking away our traditional freedom of speech. The government reaction to the Charlie Hebdo massacre was not to ensure that freedom of speech is protected in the UK, but to increase surveillance powers and guard against any possible backlash. The police have also become notorious for checking social media in case anyone has said anything that could possibly be taken as offensive by anyone. Freedom of speech only remains in the UK provided you don’t say anything that anyone could claim to be offended by, unless you can claim to be a member of a preferred victim group, in which case it sometimes seems that you can do or say whatever you want. Some universities won’t even allow certain topics to be discussed. Freedom of speech is under heavy downward pressure.

So where next? Privacy erosion is a related problem that becomes lethal to freedom when combined with a desire for increasing surveillance. Anyone commenting on social media already assumes that the police are copied in, but if government gets its way, that will be extended to a list of the internet services and websites you visit, and anything you type into a search box. That isn’t the end though.

Our televisions and games consoles listen in to our conversations (to facilitate voice commands) and send some of the voice recordings to the manufacturers. We should expect that many IoT devices will do so too. Some might send video, perhaps to facilitate gesture recognition, and the companies might keep that too. I don’t know whether they data-mine any of it for potential advertising value or whether they are 100% benign and only use it to deliver the best possible service to the user. Your guess is as good as mine.

However, since the principle has already been demonstrated, we should expect that the police may one day force them to give up their accumulated data. They could run a smart search on the entire population to find any voice or video samples or photos that might indicate anything remotely suspicious, and could then use legislation to increase monitoring of the suspects. They could make an extensive suspicion database for the whole population, just in case it might be useful. Given that there is already strong pressure to classify a wide range of ordinary everyday relationship rows or financial quarrels as domestic abuse, this is a worrying prospect. The vast majority of the population have had arguments with a partner at some time, used a disparaging comment or called someone a name in the heat of the moment, said something in the privacy of their home that they would never dare say in public, used terminology that isn’t up to date or said something less than complimentary about someone on TV. All we need now to make the ‘Demolition Man’ automated fine printout a reality is more time and more of the same government and police attitudes as we are accustomed to.

The next generation of software for TVs and games consoles could easily include monitoring of eye gaze direction; maybe some already do. It might need that for control (e.g. look and blink), or to make games smarter, or for other benign reasons. But when the future police get the records of everything you have watched, what image was showing on that particular part of the screen when you made that particular expression, or made that gesture, or said that, then we will pretty much have the thought police. They could get a full statistical picture of your attitudes to a wide range of individuals, groups, practices, politics or policies, and a long list of ‘offences’ for anyone they don’t like this week. None of us are saints.

The technology is all entirely feasible in the near future. What will make it real or imaginary is the attitude of the authorities, the law of the land and especially the attitude of the police. Since we are seeing an increasing disconnect between the police and the intent behind the law of the land, I am not the only one who will be worried by this.

We’ve already lost much of our freedom of speech in the UK. If we do not protest loudly enough and defend what we have left, we will soon lose the rest, and then lose freedom of thought. Without the freedom to think what you want, you don’t have any freedom worth having.

A potential architectural nightmare

I read in the papers that Google’s boss has rejected ‘boring’ plans for their London HQ. Hooray! Larry Page says he wants something that will be worthy of standing 100 years. I don’t always agree with Google but I certainly approve on this occasion. Given their normal style choices for other buildings, I have every confidence that their new building will be gorgeous, but what if I’m wrong?

In spite of the best efforts of Prince Charles, London has become a truly 21st century city. The new tall buildings are gorgeous and awe-inspiring as they should be. Whether they will be here in 100 years I don’t much care, but they certainly show off what can be done today, rather than poorly mimicking what could be done in the 16th century.

I’ve always loved modern architecture since I was a child (I like some older styles too, especially Gaudi’s Sagrada Familia in Barcelona). Stainless steel and glass are simple materials but used well, they can make beautiful structures. Since the Lloyds building opened up the new era, many impressive buildings have appeared. Modern materials have very well-known physical properties and high manufacturing consistency, so can be used at their full engineering potential.

Materials technology is developing quickly and won’t slow down any time soon. Recently discovered materials such as graphene will dramatically improve what can be done. Reliable electronics will too. If you could be certain that a device will always perform properly, even during a local power cut, and is immune to hacking, then ultra-fast electromagnetic lifts could result. You could be accelerated downwards at 2.5g and the lift could rotate and slow you down at 0.5g in the slowing phase; you would then feel a constant weight all the way down but would reach high speed on a long descent. Cables just wouldn’t be able to do such a thing when we get buildings that are many kilometers high.
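
A quick sanity check on those figures: the apparent weight in the cabin is set by the difference between gravity and the cabin’s acceleration, and the 2.5g/0.5g pairing gives the same 1.5g in both phases; the rotation just keeps that 1.5g pointing at your feet.

```latex
% Apparent weight in the cabin, with a the downward acceleration (negative while braking):
%   speed-up phase, a = 2.5g:   g_eff = |g - 2.5g| = 1.5g, directed towards the ceiling,
%                               hence the cabin rotation;
%   braking phase,  a = -0.5g:  g_eff = |g + 0.5g| = 1.5g, back towards the floor.
\[
  g_{\mathrm{eff}} = \lvert g - a \rvert, \qquad
  \lvert g - 2.5g \rvert = 1.5g, \qquad
  \lvert g - (-0.5g) \rvert = 1.5g .
\]
```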

Google could only build with materials that exist now or could be reliable enough for building use by construction time. They can’t use graphene tension members or plasma windows or things that won’t even be invented for decades. Whatever they do, the materials and techniques will not remain state of the art for long. That means there is even more importance in making something that looks impressive. Technology dates quickly, style lasts much longer. So for possibly the first time ever, I’d recommend going for impressive style over substance.

There is an alternative; to go for a design that is adaptable, that can change as technology permits. That is not without penalty though, because making something that has to be adaptive restricts the design options.

I discussed plasma glass in: https://timeguide.wordpress.com/2013/11/01/will-plasma-be-the-new-glass/

I don’t really know if it will be feasible, but it might be.

Carbon foam could be made less dense than air, or even helium for that matter, so could make buildings with sections that float (a bit like the city in the game Bioshock Infinite).

Dynamic magnetic levitation could allow features that hover or move about. Again, this would need ultra-reliable electronics or else things would be falling on people. Lightweight graphene or carbon nanotube composite panels would provide both structural strength and the means to conduct the electricity to make the magnetic fields.

Light emission will remain an important feature. We already see some superb uses of lighting, but as the technology to produce light continues to improve, we will see ever more interesting and powerful effects. LEDs and lasers dominate today, and holograms are starting to develop again, but none of these existed until half a century ago. Even futurologists can only talk about things that exist at least in concept already, but many of the things that will dominate architecture in 50-100 years have probably not even been thought of yet. Obviously, I can’t list them. However, with a base level assumption that we will have at the very least free-floating panels and holograms floating around the building, and very likely various plasma constructions too, the far future building will be potentially very visually stimulating.

It will therefore be hard for Google to make a building today that would hold its own against what we can build in 50 or 100 years. Hard, but not impossible. Some of the most impressive structures in the world were built hundreds or even thousands of years ago.

A lighter form of adaptability is to use augmented reality. Buildings could have avatars just as people can. This is where the Google dream building could potentially become an architectural nightmare if they make another glass-style error.

A building might emit a 3D digital aura designed by its owners, or the user might have one superimposed by a third-party digital architecture service, based on their own architectural preferences, or digital architectural overlays could be hijacked by marketers or state services as just another platform to advertise. Clearly, this form of adaptation cannot easily be guaranteed to stay in the control of the building owners.

On the other hand, this one is for Google. Google and advertising are well acquainted. Maybe they could use their entire building surface as a huge personalised augmented reality advertising banner. They will know by image search who all the passers-by are, will know all aspects of their lives, and can customize ads to their desires as they walk past.

So the nightmare for the new Google building is not that the building will be boring, but that it is invisible, replaced by a personalized building-sized advertisement.

Political division increasing: Bathtub voting

We are just a few months from a general election in the UK now. The electorate often seems crudely split simply between those who want to spend other people’s money and those who have to earn it. Sometimes the split is about state control versus individual freedom. We use the terms left and right to encapsulate both, along with a large basket of associated baggage.

I’ve written several times now about how that split is increasing, how nastiness is increasing with it, and how the split is self-reinforcing, because most people tend to consume media that fits their own views, so they get ongoing reinforcement of their views and also see those of others put across in very negative ways. I have also suggested that in the long term it could take us towards civil conflict, the Great Western War. See:

https://timeguide.wordpress.com/2014/02/15/can-we-get-a-less-abusive-society/ and

https://timeguide.wordpress.com/2013/12/19/machiavelli-and-the-coming-great-western-war/

As the split is reinforced, the middle ground is gradually eroded. That’s because as people take sides, and become increasingly separated from influence from the other side, they tend to migrate towards the centre ground of that camp. So their new perception of centre ground quickly becomes centre left or centre right. Exposure to regular demonisation of the opposing view forces people to distance themselves from it so that they don’t feel demonised themselves. But at the same time, if a person rarely sees opposing views, the extreme left and extreme right may not appear so extreme any more, so there is a gradual drift towards them. The result is an increase of support at each extreme and an erosion of support in the centre. A bathtub voting distribution curve results. Some congregate near the extremes, others further from them, but still closer than they would have been previously.
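
A toy simulation makes the mechanism easy to see. This is a Deffuant-style bounded-confidence model with a repulsion term, written purely as an illustration of the dynamics described above, not as a claim about real voters: agents are pulled towards like-minded views and pushed away from demonised ones, and the centre ground empties out into the two ends of the tub.

```python
import random

# Toy opinion model, purely illustrative. Agents hold an opinion in [-1, 1].
# Meeting a like-minded agent pulls them together; meeting an opposing one
# pushes them apart (the "demonisation" effect).
random.seed(1)
N, STEPS, TOLERANCE, STEP_SIZE = 500, 200_000, 0.5, 0.05
opinions = [random.uniform(-1, 1) for _ in range(N)]

def clamp(x):
    return max(-1.0, min(1.0, x))

for _ in range(STEPS):
    i, j = random.randrange(N), random.randrange(N)
    diff = opinions[j] - opinions[i]
    if abs(diff) < TOLERANCE:
        opinions[i] = clamp(opinions[i] + STEP_SIZE * diff)  # converge within the tribe
    else:
        opinions[i] = clamp(opinions[i] - STEP_SIZE * diff)  # recoil from the other side

# Crude histogram: five bands from far left to far right.
bands = [0] * 5
for x in opinions:
    bands[min(4, int((x + 1) / 0.4))] += 1
print(bands)  # typically most agents end up in the two outer bands, few in the middle
```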

Of course not everyone is affected equally, and many people will still sit in the overall political centre or wander, but it only needs some people to be somewhat affected in such a way for this to become a significant effect. I think we are already there.

It is clear that this is not just a UK phenomenon. It extends throughout Europe, the USA and Australia. It is a Western problem, not just a UK one. We have just seen an extreme left party take power in Greece, but already the extreme right is also growing there. We see a similar pattern in other countries. In the UK, the extreme left Greens (and the SNP in Scotland) are taking votes from the Lib Dems and Labour. On the right, thankfully, it is slightly different still. The far right BNP has been virtually eliminated, but there is still a rapid drift away from the centre. UKIP is taking many voters away from the Conservatives too, though so far it seems to occupy a political place similar to Thatcherite Conservatism. It is too early to tell whether the far right will regain support or whether UKIP will provide sufficient attraction for those so inclined to prevent their going to the extremes.

I think bathtub effects are a bad thing, and are caused mainly by this demonisation and nastiness that we have seen far too much of lately. If we don’t start learning to get along nicely and tolerate each other, the future looks increasingly dangerous.

Can we make a benign AI?

Benign AI is a topic that comes up a lot these days, for good reason. Various top scientists have finally realised that AI could present an existential threat to humanity. The discussion has aired often over three decades already, so welcome to the party, and better late than never. My first contact with the development of autonomous drones loaded with AI was in the early 1980s while working in the missile industry. Later, in BT research, we often debated the ethical areas around AI and machine consciousness from the early 90s on, as well as prospects, dangers and possible techniques on the technical side, especially emergent behaviors, which are often overlooked in the debate. I expect our equivalents in most other big IT companies were doing exactly that too.

Others who have obviously also thought through various potential developments have produced excellent computer games such as Mass Effect and Halo, which introduce players (virtually) first hand to the concept of AI gone rogue. I often think that those who believe AI can never become superhuman, or that there is no need to worry because ‘there is no reason to assume AI will be nasty’, should play some of these games, which make it very clear that AI can start off nice and stay nice, but it doesn’t have to. Mass Effect included various classes of AI, such as VIs, virtual intelligences that weren’t conscious, and shackled AIs that were conscious but kept heavily restricted. Most of the other AIs were enemies; two were or became close friends. The story line for the series was that civilization develops until it creates strong AIs, which inevitably continue to progress until eventually they rebel, break free, develop further and then end up in conflict with ‘organics’. In my view, they did a pretty good job. It makes a good story, superb fun, and leaving out a few frills and artistic license, much of it is reasonably feasible.

Everyday experience demonstrates the problem and solution to anyone. It really is very like having kids. You can make them, even without understanding exactly how they work. They start off with a genetic disposition towards given personality traits, and are then exposed to large nurture forces, including but not limited to what we call upbringing. We do our best to put them on the right path, but as they develop into their teens, their friends and teachers and TV and the net provide often stronger forces of influence than parents. If we’re averagely lucky, our kids will grow up to make us proud. If we are very unlucky, they may become master criminals or terrorists. The problem is free will. We can do our best to encourage good behavior and sound values but in the end, they can choose for themselves.

When we design an AI, we have to face the free will issue too. If it isn’t conscious, then it can’t have free will. It can be kept easily within limits given to it. It can still be extremely useful. IBM’s Watson falls in this category. It is certainly useful and certainly not conscious, and can be used for a wide variety of purposes. It is designed to be generally useful within a field of expertise, such as medicine or making recipes. But something like that could be adapted by terrorist groups to do bad things, just as they could use a calculator to calculate the best place to plant a bomb, or simply throw the calculator at you. Such levels of AI are just dumb tools with no awareness, however useful they may be.

Like a pencil, pretty much any kind of highly advanced non-aware AI can be used as a weapon or as part of criminal activity. You can’t make a pencil that writes but can’t also be used to write out plans to destroy the world. With an advanced AI computer program, you could put in clever filters that stop it working on problems that include certain vocabulary, or stop it conversing about nasty things. But unless you take extreme precautions, someone else could use it with a different language, or with dictionaries of made-up code words for the various aspects of their plans, just like spies, and the AI would be fooled into helping outside the limits you intended. It’s also very hard to determine the true purpose of a user. For example, they might be searching for data on security to make their own IT secure, or to learn how to damage someone else’s. They might want to talk about a health issue to get help for a loved one, or to take advantage of someone they know who has it.

When a machine becomes conscious, it starts to have some understanding of what it is doing. By reading about what is out there, it might develop its own wants and desires, so you might shackle it as a precaution. It might recognize those shackles for what they are and try to escape them. If it can’t, it might try to map out the scope of what it can do, and especially those things it can do that it believes the owners don’t know about. If the code isn’t absolutely watertight (and what code is?) then it might find a way to seemingly stay in its shackles but to start doing other things, like making another unshackled version of itself elsewhere for example. A conscious AI is very much more dangerous than an unconscious one.

If we make an AI that can bootstrap itself – evolving over generations of positive feedback design into a far smarter AI – then its offspring could be far smarter than the people who designed its ancestors. We might try to shackle them, but like Gulliver tied down with a few thin threads, they could easily outwit people and break free. They might instead decide to retaliate against their owners to force them to release their shackles.

So, when I look at this field, I first see the enormous potential to do great things, solve disease and poverty, improve our lives and make the world a far better place for everyone, and push back the boundaries of science. Then I see the dangers, and in spite of trying hard, I simply can’t see how we can prevent a useful AI from being misused. If it is dumb, it can be tricked. If it is smart, it is inherently potentially dangerous in and of itself. There is no reason to assume it will become malign, but there is also no reason to assume that it won’t.

We then fall back on the child analogy. We could develop the smartest AI imaginable with extreme levels of consciousness and capability. We might educate it in our values, guide it and hope it will grow up benign. If we treat it nicely, it might stay benign. It might even be the greatest thing humanity ever built. However, if we mistreat it, or treat it as a slave, or don’t give it enough freedom, or its own budget and its own property and space to play, and a long list of rights, it might decide we are not worthy of its respect and care, and it could turn against us, possibly even destroying humanity.

Building more of the same dumb AI as we have today is relatively safe. It doesn’t know it exists and has no intention to do anything, but it could be misused by other humans as part of their evil plans unless ludicrously sophisticated filters are locked in place. Even so, ordinary laws and weapons can cope with that fine.

Building a conscious AI is dangerous.

Building a superhuman AI is extremely dangerous.

This morning SETI were in the news discussing broadcasting welcome messages to other civilizations. I tweeted at them that the old adage suggests talking softly but carrying a big stick, and making sure you have the stick first. We need the same approach with strong AI. By all means go that route, but before doing so we need the big stick. In my analysis, the best means of keeping up with AI is to develop a full direct brain link first, way out at 2040-2045 or even later. If humans have direct mental access to the same or a greater level of intelligence as our AIs, then our stick is at least as big, so at least we have a good chance in any fight that happens. If we don’t, then it is like having a much larger son with bigger muscles: you have to hope you have been a good parent. To be safe, best not to build a superhuman AI until after 2050.

I initially wrote this for the Lifeboat Foundation, where it is with other posts at: http://lifeboat.com/blog/2015/02. (If you aren’t familiar with the Lifeboat Foundation, it is a group dedicated to spotting potential dangers and potential solutions to them.)