Category Archives: security

The future of bacteria

Bacteria have already taken the prize for the first synthetic organism. Craig Venter’s team claimed the first synthetic bacterium in 2010.

Bacteria are being genetically modified for a range of roles, such as converting materials for easier extraction (e.g. coal to gas, or concentrating elements in landfill sites to make extraction easier), making new food sources (alongside algae), carbon fixation, pollutant detection and other sensory roles, decorative, clothing or cosmetic roles based on color changing, special surface treatments, biodegradable construction or packing materials, self-organizing printing… There are many others, even ignoring all the military ones.

I have written many times on smart yogurt now, and it has to be the highlight of the bacterial future: one of the greatest hopes for human survival, as well as one of its greatest potential dangers. Here is an extract from a previous blog:

Progress is continuing to harness bacteria to make components of electronic circuits (after which the bacteria are dissolved to leave the electronics). Bacteria can also have genes added to emit light or electrical signals. They could later be enhanced so that as well as being able to fabricate electronic components, they could power them too. We might add various other features too, but eventually, we’re likely to end up with bacteria that contain electronics and can connect to other bacteria nearby that contain other electronics to make sophisticated circuits. We could obviously harness self-assembly and self-organisation, which are also progressing nicely. The result is that we will get smart bacteria, collectively making sophisticated, intelligent, conscious entities of a wide variety, with lots of sensory capability distributed over a wide range. Bacteria Sapiens.

I often talk about smart yogurt using such an approach as a key future computing solution. If it were to stay in a yogurt pot, it would be easy to control. But it won’t. A collective bacterial intelligence such as this could gain a global presence, and could exist in land, sea and air, maybe even in space. Allowing lots of different biological properties could allow colonization of every niche. In fact, the first few generations of bacteria sapiens might be smart enough to design their own offspring. They could probably buy or gain access to equipment to fabricate them and release them to multiply. It might be impossible for humans to stop this once it gets to a certain point. Accidents happen, as do rogue regimes, terrorism and general mad-scientist type mischief.

Transhumanists seem to think their goal is the default path for humanity, that transhumanism is inevitable. Well, it can’t easily happen without going first through transbacteria research stages, and that implies that we might well have to ask transbacteria for their consent before we can develop true transhumans.

Self-organizing printing is a likely future enhancement for 3D printing. If a 3D printer can print bacteria (onto the surface of another material being laid down, as an ingredient in a suspension used as the extrusion material itself, or even as a bacterial paste), and the bacteria can then generate or modify other materials, or use self-organisation principles to form special structures or patterns, then the range of objects that can be printed will extend. In some cases, the bacteria may be involved in the construction and then die or be dissolved away.

Estimating IoT value? Count ALL the beans!

In this morning’s news:

http://www.telegraph.co.uk/technology/news/11043549/UK-funds-development-of-world-wide-web-for-machines.html

£1.6M investment by UK Technology Strategy Board in Internet-of-Things HyperCat standard, which the article says will add £100Bn to the UK economy by 2020.

Gartner says that IoT has reached the hype peak of their adoption curve, and I agree. Connecting machines together, and especially adding networked sensors, will certainly increase technology capability across many areas of our lives, but the appeal is often overstated and the dangers often overlooked. Value should not be measured in purely financial terms either. If you value health, wealth and happiness, don’t just measure the wealth. We value other things too, of course. It is too tempting just to count the most conspicuous beans. For IoT, which really just adds a layer of extra functionality onto an already technology-rich environment, that is rather like estimating the value of a chili con carne by counting the kidney beans in it.

The headline negatives of privacy and security have often been addressed, so I don’t need to explore them much more here, but let’s look at a couple of typical examples from the news article. Allowing remotely controlled washing machines will obviously impact your personal choice on laundry scheduling. The many similar shifts of control of your life to other agencies will all add up. Another one: ‘motorists could benefit from cheaper insurance if their vehicles were constantly transmitting positioning data’. Really? Insurance companies won’t want to earn less, so motorists on average will give them at least as much profit as before. What will happen is that insurance companies will enforce driving styles and car maintenance regimes that reduce your likelihood of a claim, or use that data to avoid paying out in some cases. If you have to rigidly obey lots of rules all of the time, then driving will become far less enjoyable. Having to remember to check the tyre pressures and oil level every two weeks on pain of having your insurance voided is not one of the beans listed in the article, but it is entirely analogous to the typical home insurance rule that all your windows must have locks, and they must all be locked and the keys hidden out of sight, before they will pay up on a burglary.

Overall, IoT will add functionality, but it certainly will not always be used to improve our lives. Look at the way the web developed. Think about the cookies and the pop-ups and the tracking and the incessant virus protection updates needed because of the extra functions built into browsers. You didn’t want those; they were added to increase capability and revenue for the paying site owners, not for the non-paying browsers. IoT will be the same. Some things will make minor aspects of your life easier, but the price will be that you are far more controlled, with far less freedom, less privacy and less security. Most of the data collected for business use or to enhance your life will also be available to government and police. We see every day the nonsense of the statement that if you have done nothing wrong, then you have nothing to fear. If you buy all that home kit with energy monitoring etc., how long before the data is hacked and you get put on militant environmentalist blacklists because you leave devices on standby? For every area where IoT will save you time or money or improve your control, there will be many others where it does the opposite, forcing you to do more security checks, spend more money on car, home and IoT maintenance, spend more time following administrative procedures, and even follow health regimes enforced by government or insurance companies. IoT promises milk and honey, but will deliver it only as part of a much bigger and unwelcome lifestyle change. Sure, you can have a little more control, but only if you relinquish much more control elsewhere.

As IoT starts rolling out, these and many more issues will hit the press, and people will start to realise the downside. That will reduce the attractiveness of owning or installing such stuff, or subscribing to services that use it. There will be a very significant drop in the economic value from the hype. Yes, we could do it all and get the headline economic benefit, but the cost of greatly reduced quality of life is too high, so we won’t.

Counting the kidney beans in your chili is fine, but it won’t tell you how hot it is, and when you start eating it you may decide the beans just aren’t worth the pain.

I still agree that IoT can be a good thing, but the evidence of web implementation suggests we’re more likely to go through decades of abuse and grief before we get the promised benefits. Being honest at the outset about the true costs and lifestyle trade-offs will help people decide, and maybe we can get to the good times faster if that process leads to better controls and better implementation.

Ultra-simple computing: Part 4

Gel processing

One problem with making computers with a lot of cores is the wiring. Another is the distribution of tasks among the cores. Both of these can be solved with relatively simple architecture. Processing chips usually have a lot of connectors, letting them get data in parallel. But a beam of light can contain rays of millions of wavelengths, far more parallelism than is possible with wiring. If chips communicated using light with high-density wavelength division multiplexing, that would solve some wiring issues. Taking another simple step, processors that are freed from wiring don’t have to be on a circuit board, but could be suspended in some sort of gel. Then they could use free-space interconnection to connect to many nearby chips. Line-of-sight availability would be much easier than on a circuit board. Gel could also be used to cool the chips.
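The parallelism claim is really just channel-count arithmetic: the number of distinct wavelengths is the usable optical band divided by the channel spacing. A rough sketch in Python, using illustrative figures (the telecom C-band numbers are standard; the finer spacing is a hypothetical assumption, not an existing system):

```python
# Rough channel-count arithmetic for dense WDM interconnects.
# Figures are illustrative, not measured values for any real device.

def wdm_channels(band_hz: float, spacing_hz: float) -> int:
    """Number of distinct wavelength channels that fit in a band."""
    return int(band_hz // spacing_hz)

# Conventional telecom C-band: ~4 THz wide on a 50 GHz grid -> ~80 channels.
print(wdm_channels(4e12, 50e9))    # 80

# A (hypothetical) much finer spacing over a wider band gets towards
# the millions of parallel rays mentioned above.
print(wdm_channels(40e12, 10e6))   # 4000000
```

The point of the sketch is simply that parallelism scales with how finely the band can be sliced, which is a property of the optics rather than of any wiring.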

Simpler chips with very few wired connections also means less internal wiring too. This reduces size still further and permits higher density of suspension without compromising line of sight.

Ripple scheduler

Process scheduling can also be done more simply with many processors; complex software algorithms are not needed. In an array of many processors, some would be idle while others are already engaged on tasks. When a job needs processing, a task request (this could be as simple as a short pulse of a certain frequency) would be broadcast and would propagate through the array. On encountering an idle processor, that processor would respond with an accept response (again, this could be a single pulse of another frequency). This would also propagate out as a wave through the array. These two waves may arrive at a given processor in quick succession.

Other processors could stand down automatically once one has accepted the job (i.e. when they detect the acceptance wave). That would be appropriate when all processors are equally able. Alternatively, if processors have different capabilities, the requesting agent would pick a suitable one from the returning acceptances, send a point-to-point message to it, and send out a cancel broadcast wave to stand the others down. It would exchange details about the task with this processor on a point-to-point link, avoiding swamping the system with unnecessary broadcast messages. An idle processor in the array would thus see a request wave, followed by a number of accept waves. It may then receive a personalized point-to-point message with task information, or, if it hasn’t been chosen, it would just see the cancel wave. Busy processors would ignore all communications except those directed specifically to them.

I’m not saying ripple scheduling is necessarily the best approach; it is just an example of a very simple system for process scheduling that doesn’t need sophisticated algorithms and code.

Activator Pastes

It is obvious that this kind of simple protocol can be used with a gel processing medium populated with a suitable mixture of different kinds of processors, sensors, storage, transmission and power devices to provide a fully scalable self-organizing array that can perform a high task load with very little administrative overhead. To make your smart gel, you might just choose the volume or weight ratios of the components you want and stir them into a gel, rather like mixing a cocktail. A paste made up in this way could be used to add sensing, processing and storage to any surface just by painting some of the paste onto it.

A highly sophisticated distributed cloud sensor network for example could be made just by painting dabs of paste onto lamp posts. Solar power or energy harvesting devices in the paste would power the sensors to make occasional readings, pre-process them, and send them off to the net. This approach would work well for environmental or structural monitoring, surveillance, even for everyday functions like adding parking meters to lines marking the spaces on the road where they interact with ID devices in the car or an app on the driver’s smartphone.

Special inks could contain a suspension of such particles and add a highly secure electronic signature onto one signed by pen and ink.

The tacky putty stuff that we use to stick paper to walls could use activator paste as the electronic storage and processing medium, letting you manage the content of an e-paper calendar or notice on a wall.

I can think of lots of ways of using smart pastes in health monitoring, packaging, smart makeup and so on. The basic principle stays the same though. It would be very cheap and yet very powerful, with many potential uses. Self-organising, and needs no set up beyond giving it a job to do, which could come from any of your devices. You’d probably buy it by the litre, keep some in the jar as your computer, and paste the rest of it all over the place to make your skin, your clothes, your work-spaces and your world smart. Works for me.


Ultra-simple computing part 3

Just in time v Just in case

Although the problem isn’t as bad now as it has been, a lot of software runs on your computers just in case it might be needed. Often it isn’t, and sometimes the PC is shut down or rebooted without it ever having been used. This wastes our time, wastes a little energy, and potentially adds functionality or weaknesses that can be exploited by hackers.

If it only loaded the essential pieces of software, risks would be minimised and initial delays reduced. There would be a slightly bigger delay once the code is needed, because it would have to load then, but since a lot of code is rarely used, the overall result would still be a big win. This would improve security and reliability. If all I am doing today is typing and checking occasional emails, a lot of the software currently loaded in my PC memory is not needed. I don’t even need a firewall running all the time if network access is disabled in between my email checks. If networking and the firewall are started when I want to check email or start browsing, and all network access is disabled after I have checked, then security would be a bit better. I also don’t need all the fancy facilities in Office when all I am doing is typing. I definitely don’t want any part of Office to use any kind of networking in either direction for any reason (I use Thunderbird, not Outlook, for email). So don’t load the code yet; I don’t want it running; it only adds risks, not benefits. If I want to do something fancy in a few weeks’ time, load the code then. If I want to look up a word in a dictionary or check a hyperlink, I could launch a browser and copy and paste it. Why do anything until asked? Forget doing stuff just in case it might occasionally generate a tiny time saving. Just in time is far safer and better than just in case.
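The just-in-time principle already has a familiar shape in today’s software: defer loading a module until the feature that needs it is actually invoked. A minimal Python sketch (the class name is invented; `importlib` is the standard library machinery):

```python
# A just-in-time loading sketch: defer importing a module until the
# feature that needs it is actually used.
import importlib

class LazyFeature:
    """Loads its backing module on first use, not at start-up."""
    def __init__(self, module_name: str):
        self._name = module_name
        self._module = None

    def __getattr__(self, attr):
        # Called only for attributes not found normally, i.e. real
        # feature lookups; this is where the deferred load happens.
        if self._module is None:
            self._module = importlib.import_module(self._name)
        return getattr(self._module, attr)

# Nothing is loaded yet; the cost (and attack surface) appears only on use.
stats = LazyFeature("statistics")
print(stats.mean([1, 2, 3]))  # triggers the import, prints 2
```

The same idea scales up from one module to whole subsystems: the firewall, the networking stack or the fancy half of an office suite would simply be features that load on first use and unload after.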

So, an ultra-simple computer should only load what is needed, when it is needed. It would only open communications when needed, and then only to the specific destination required. That frees up processors and memory, reduces risks and improves speed.

Software distribution

Storing software on hard disks or in memory lets the files be changed, possibly by a virus. Suppose instead that software were distributed on ROM chips. They can be very cheap, so why not? No apps, no downloads. All the software on your machine would be in read-only memory, essentially part of the hardware. This would change a few things in computer design. First, you’d have a board with lots of nice slots in it, into which you plug the memory chips you’ve bought with the programs you want on them. (I’ll get to tablets and phones later; obviously a slightly different approach is needed for portable devices.) Manufacturers would have a huge interest in checking their code first, because they can’t put fixes out later except on replacement chips. Updating the software to a new version would simply mean inserting a new chip. Second, since the chips are read-only, the software on them cannot be corrupted. There is no mechanism by which a virus or other malware could get onto the chip.

Apps could be distributed in collections – lifestyle or business collections. You could buy subscriptions to app agencies that issued regular chips with their baskets of apps on them. Or you could access apps online via the cloud. Your machine would stay clean.

It could go further. As well as memory chips, modules could include processing, controller or sensory capabilities. Main processing may still be in the main part of the computer but specialist capabilities could be added in this way.

So, what about tablets and phones? Obviously you can’t plug lots of extra chips into slots in those; it would make them too cumbersome. One approach would be to use your PC or laptop to store, and keep up to date, a single storage chip that goes into your tablet or phone. It could use a re-programmable ROM that can’t be tampered with by your tablet. All your apps would live on it, but it would be made clean and fresh every day. Tablets could have a simple slot to insert that single chip, just as a few already do for extra memory.

Multi-layered security

If your computer is based on algorithms encoded on read only memory chips or better still, directly as hardware circuits, then it could boot from cold very fast, and would be clean of any malware. To be useful, it would need a decent amount of working memory too, and of course that could provide a short term residence for malware, but a restart would clean it all away. That provides a computer that can easily be reset to a clean state and work properly again right away.

Another layer of defense is to disallow programs access to things they don’t need. You don’t open every door and window in your home every time you want to go in or out. Why open every possible entrance that your office automation package might ever want to use just because you want to type an article? Why open the ability to remotely install or run programs on your computer without your knowledge and consent just because you want to read a news article or look at a cute kitten video? Yet we have accepted such appallingly bad practice from the web browser developers because we have had no choice. It seems that the developers’ desire to provide open windows to anyone who wants to use them outweighs the users’ desire for basic security common sense. So the next layer of defense is really pretty obvious. We want a browser that doesn’t open doors and windows until we explicitly tell it to, and even then it checks everything that tries to get through.

It may still be that you occasionally want to run software from a website, maybe to play a game. Another layer of defense that could help then is to restrict remote executables to a limited range of commands with limited scope. It is also easy to arrange a sandbox where code can run but can’t influence anything outside the sandbox. For example, there is no reason a game would need to inspect files on your computer apart from stored games or game-related files. Creating a sandbox that can run a large range of agreed functions to enable games or other remote applications, but is sealed from anything else on the computer, would enable benign remote executables without compromising security. Even if they were less safe, confining activity to the sandbox allows the machine to be sterilized by sweeping that area, without necessitating a full reset. Even without the sandbox, knowing the full capability of the range of permitted commands enables damage limitation and precision cleaning. The range of commands should be created with the end user as the priority, letting them do what they want with the lowest danger. It should not be created with application writers as the top priority, since that is where the security risk arises. Not all potential application writers are benign, and many want to exploit or harm the end user for their own purposes. Everyone in IT really ought to know that and never forget it for a minute; it really shouldn’t need to be said.
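The restricted-command idea can be sketched very simply: remote code may only invoke functions from an agreed whitelist, and anything else is refused outright. A minimal Python sketch, with invented command names standing in for whatever a real sandbox would agree to expose:

```python
# Sketch of the restricted-command sandbox: remote code may only invoke
# commands from an agreed whitelist, and touches nothing else on the host.
# Command names here are invented for illustration.

SANDBOX_COMMANDS = {
    "draw":      lambda x, y: f"drew at ({x},{y})",
    "load_save": lambda slot: f"loaded game slot {slot}",
}

def run_remote(command: str, *args):
    """Execute a remote request only if it is on the whitelist."""
    handler = SANDBOX_COMMANDS.get(command)
    if handler is None:
        raise PermissionError(f"command {command!r} not permitted")
    return handler(*args)

print(run_remote("draw", 3, 4))            # allowed
# run_remote("read_file", "/etc/passwd")   # raises PermissionError
```

Because the full command set is known in advance, the worst a hostile executable can do is bounded by that table, which is exactly the damage-limitation property argued for above.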

Ultra-simple computing: Part 2

Chip technology

My everyday PC uses an Intel Core i7-3770 processor running at 3.4GHz. It has 4 cores running 8 threads on 1.4 billion 22nm transistors in just 160mm^2 of chip. It has an NVIDIA GeForce GTX660 graphics card and 16GB of main memory. It is OK most of the time, but although processor and memory utilisation rarely get above 30%, its response is often far from instant.

Let me compare it briefly with my (subjectively, at time of ownership) best ever computer, my Macintosh 2Fx, RIP, which I got in 1991, the computer on which I first documented both the active contact lens and text messaging, and on which I suppose I also started this project. The Mac 2Fx ran a 68030 processor at 40MHz, with 273,000 transistors, 4MB of RAM and an 80MB hard drive. Every computer I’ve used since then has given me extra function at the expense of performance, wasted time and frustration.

Although its OS is stored on a 128GB solid state disk, my current PC takes several seconds longer to boot than my Macintosh Fx did – it went from cold to fully operational in 14 seconds – yes, I timed it. On my PC today, clicking a browser icon to first page usually takes a few seconds. Clicking on a Word document back then took a couple of seconds to open; it still does now. Both computers gave real-time response to typing and both featured occasional unexplained delays. I didn’t have any need for a firewall or virus checkers back then, but now I run tedious maintenance routines a few times every week. (The only virus I had before 2000 was nVir, which came on the Mac2 system disks.) I still don’t get many viruses, but the significant time I spend avoiding them has to be counted too.

Going back further still, to my first ever computer in 1981: it was an Apple 2, with only 9000 transistors running at 2.5MHz and a piddling 32kB of memory. The OS was tiny. Nevertheless, on it I wrote my own spreadsheet, graphics programs, lens design programs, and an assortment of missile, aerodynamic and electromagnetic simulations. Using the same transistor technology as the i7, you could make 1000 of these in a single square millimetre!

Of course some things are better now. My PC has amazing graphics and image processing capabilities, though I rarely make full use of them. My PC allows me to browse the net (and see video ads). If I don’t mind telling Google who I am I can also watch videos on YouTube, or I could tell the BBC or some other video provider who I am and watch theirs. I could theoretically play quite sophisticated computer games, but it is my work machine, so I don’t. I do use it as a music player or to show photos. But mostly, I use it to write, just like my Apple 2 and my Mac Fx. Subjectively, it is about the same speed for those tasks. Graphics and video are the main things that differ.

I’m not suggesting going back to an Apple 2 or even an Fx. However, using i7 chip tech, a 9000-transistor processor running 1360 times faster and taking up 1/1000th of a square millimetre would still let me write documents and simulations, but would be blazingly fast compared to my old Apple 2. I could fit another 150,000 of them on the same chip space as the i7. Or I could have 5128 Mac Fxs running at 85 times normal speed, or something like a Mac Fx running 85 times faster than the original for a tiny fraction of the price. There are certainly a few promising trees in the forest that nobody seems to have barked up. As an interesting aside, that 22nm-tech Apple 2 chip would be only ten times bigger than a skin cell, probably less now, since my PC is already several months old.
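The scaling figures above are straightforward ratios of the numbers quoted earlier in this series, and are easy to check:

```python
# Checking the scaling arithmetic, using the figures quoted in the text.
i7_transistors     = 1.4e9    # Core i7-3770
i7_clock_hz        = 3.4e9
apple2_transistors = 9_000    # Apple 2
apple2_clock_hz    = 2.5e6
fx_transistors     = 273_000  # Mac Fx
fx_clock_hz        = 40e6

print(i7_clock_hz / apple2_clock_hz)         # 1360.0 times faster
print(i7_transistors // apple2_transistors)  # ~155,000 Apple 2 cores per die
print(i7_transistors // fx_transistors)      # 5128 Mac Fx equivalents
print(i7_clock_hz / fx_clock_hz)             # 85.0 times the Fx clock
```

The transistor budgets are the whole story here: the same die that holds one i7 holds a small town of earlier machines, each still fast enough for typing and spreadsheets.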

At the very least, that raises the question of what all this extra processing is needed for, and why there is still ever any noticeable delay in doing anything in spite of it. Each of those earlier machines was perfectly adequate for everyday tasks such as typing or spreadsheeting. All the extra speed has an impact only on some things, and most of it is being wasted by poor code. Some of the delays we had 20 and 30 years ago still affect us just as badly today.

The main point though is that if you can make thousands of processors on a standard sized chip, you don’t have to run multitasking. Each task could have a processor all to itself.

The operating system currently runs programs to check all the processes that need attention, determine their priorities, schedule processing for them, and copy their data in and out of memory. That is not needed if each process can have its own dedicated processor and memory all the time. There are lots of ways of using basic physics to allocate processes to processors, relying on basic statistics to ensure that collisions rarely occur. No code is needed at all.
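The statistical claim is easy to make concrete. If simultaneous task requests each land on one of many free processors at random, the chance that none collide follows the same arithmetic as the birthday problem; the function below is just that calculation, not a model of any particular physical allocation mechanism:

```python
# Sketch of the "basic statistics" claim: with many free processors and
# random physical allocation, simultaneous requests rarely collide.

def no_collision_probability(processors: int, tasks: int) -> float:
    """P(all `tasks` land on distinct processors) under uniform random
    choice, i.e. the birthday-problem survival probability."""
    p = 1.0
    for i in range(tasks):
        p *= (processors - i) / processors
    return p

# With ~150,000 processors per chip, even 50 simultaneous requests
# almost never clash.
print(no_collision_probability(150_000, 50))  # ≈ 0.99, collisions rare
```

So as long as the processor pool is large relative to the number of simultaneous requests, a trivial retry on the rare collision is all the "scheduling" that is ever needed, with no code in the common case.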

An ultra-simple computer could therefore have a large pool of powerful, free processors, each with their own memory, allocated on demand using simple physical processes. (I will describe a few options for the basic physics processes later). With no competition for memory or processing, a lot of delays would be eliminated too.

Ultra-simple computing: Part 1

Introduction

This is first part of a techie series. If you aren’t interested in computing, move along, nothing here. It is a big topic so I will cover it in several manageable parts.

Like many people, I spent a good few hours changing passwords after the Heartbleed problem, and then again after eBay’s screw-up. It is a futile task in some ways, because passwords are no longer a secure defense anyway. A decent hacker with a decent computer can crack hundreds of passwords in an hour, so unless an account is locked after a few failed attempts, and many aren’t, passwords only keep out casual observers and the most amateurish hackers.

The need for simplicity

A lot of problems are caused by the complexity of today’s software, making it impossible to find every error and hole. Weaknesses have been added to operating systems, office automation tools and browsers to increase functionality for only a few users, even though they add little for most of us most of the time. I don’t think I have ever executed a macro in Microsoft Office, for example, and I’ve certainly never used print merge or many of its other publishing and formatting features. I was perfectly happy with Word 93, and most things added since then (apart from the real-time spelling and grammar checker) have added irrelevant and worthless features at the expense of safety. I can see very little user advantage in allowing pop-ups on web sites, or tracking cookies. Their primary purpose is to learn about us to make marketing more precise. I can see why they want that, but I can’t see why I should. Users generally want pull marketing, not push, and pull doesn’t need cookies; there are better ways of sending your standard data when needed, if that’s what you want to do. There are many better ways of automating logons to regular sites if that is needed.

In a world where more of the people who wish us harm are online, it is time to design an alternative platform, one designed specifically to be secure from the start, where no features are added that allow remote access or control without deliberate, explicit permission. It can be done. A machine with a strictly limited set of commands and access can be made secure and can even be networked safely. We may have to sacrifice a few bells and whistles, but I don’t think we will need to sacrifice many that we actually want or need. It may be less easy to track us and advertise at us, or to offer remote machine analysis tools, but I can live with that and you can too. Almost all the services we genuinely want can still be provided. You could still browse the net, still buy stuff, still play games with others, and socialize. But you wouldn’t be able to install or run code on someone else’s machine without their explicit knowledge. Every time you turn the machine on, it would be squeaky clean. That’s already a security benefit.

I call it ultra-simple computing. It is based on the principle that simplicity and a limited command set makes it easy to understand and easy to secure. That basic physics and logic is more reliable than severely bloated code. That enough is enough, and more than that is too much.

We’ve been barking up the wrong trees

There are a few things you take for granted in your IT that needn’t be so.

Your PC has an extremely large operating system. So does your tablet, your phone, games console… That isn’t really necessary. It wasn’t always the case and it doesn’t have to be the case tomorrow.

Your operating system still assumes that your PC has only a few processing cores and has to allocate priorities and run-time on those cores for each process. That isn’t necessary.

Although you probably use some software in the cloud, you probably also download a lot of software off the net or install from a CD or DVD. That isn’t necessary.

You access the net via an ISP. That isn’t necessary. Almost unavoidable at present, but only due to bad group-think. Really, it isn’t necessary.

You store data and executable code in the same memory and therefore have to run analysis tools that check all the data in case some is executable. That isn’t necessary.

You run virus checkers and firewalls to prevent unauthorized code execution or remote access. That isn’t necessary.

Overall, we live with an IT system that is severely unfit for purpose. It is dangerous, bloated, inefficient, excessively resource and energy intensive, extremely fragile and yet vulnerable to attack via many routes, designed with the user as a lower priority than suppliers, with the philosophy of functionality at any price. The good news is that it can be replaced by one that is absolutely fit for purpose, secure, invulnerable, cheap and reliable, resource-efficient, and works just fine. Even better, it could be extremely cheap so you could have both and live as risky an online life in those areas that don’t really matter, knowing you have a safe platform to fall back on when your risky system fails or when you want to do anything that involves your money or private data.

Switching people off

A very interesting development has been reported in the discovery of how consciousness works, where neuroscientists stimulating a particular brain region were able to switch a woman’s state of awareness on and off. They said: “We describe a region in the human brain where electrical stimulation reproducibly disrupted consciousness…”

http://www.newscientist.com/article/mg22329762.700-consciousness-onoff-switch-discovered-deep-in-brain.html.

The region of the brain concerned was the claustrum, and apparently nobody had tried stimulating it before, although Francis Crick and Christof Koch had suggested the region would likely be important in achieving consciousness. Apparently, the woman involved in this discovery was also missing some of her hippocampus, and that may be a key factor, but they don’t know for sure yet.

Mohamed Koubeissi and his team at George Washington University in Washington DC were investigating her epilepsy and stimulated her claustrum area with high-frequency electrical impulses. When they did so, the woman lost consciousness, no longer responding to any audio or visual stimuli, just staring blankly into space. They verified that she was not showing any signs of epileptic activity at the time, and repeated the experiment with similar results over two days.

The team urges caution and recommends not jumping to too many conclusions, though they did note the obvious potential advantage as an anesthesia substitute if the technique can be made generally usable.

As a futurologist, it is my job to look as far down the road as I can see, and imagine as much as I can. Then I filter out all the stuff that is nonsensical, or doesn’t have a decent potential social or business case, or, as in this case, where the research team suggests it is too early to draw conclusions. I make exceptions where it seems that researchers are being over-cautious or covering their asses or being PC or unimaginative, but I have no evidence of that in this case. However, the other good case for making exceptions is where it is good fun to jump to conclusions. Anyway, it is Saturday and I’m off work, so in the great words of Dr Emmett Brown in ‘Back to the Future’: “Well, I figured, what the hell.”

OK, IF it works for everyone without removing parts of the brain, what will we do with it and how?

First, it is reasonable to assume that we could produce electrical stimulation at specific points in the brain using external kit. Transcranial magnetic stimulation might work, or implants might be placed by injecting tiny particles that migrate to the right place rather than needing significant surgery. Failing those, a tiny implant or two via a fine needle into the right place ought to do the trick, powered by induction. So we will be able to produce the stimulation, once the sucker victim subject has the device implanted.

I guess that could happen voluntarily, or via a court-ordered protective device, as a condition of employment or immigration, as a condition of release from prison or a supervision order, or as a violent act or an act of war.

Imagine if government demands a legal right to access it, for security purposes and to ensure your comfort and safety, of course.

If you think 1984 has already gone too far, imagine a government or police officer that can switch you off if you are saying or thinking the wrong thing. Automated censorship devices could ensure that nobody discusses prohibited topics.

Imagine if people on the street were routinely switched off as a VIP passes to avoid any trouble for them.

Imagine a future carbon-reduction law where people are immobilized for an hour or two each day during certain periods. There might be a quota for how long you are allowed to be conscious each week to limit your environmental footprint.

In war, captives could have devices implanted to make them easy to control, simply turned off for packing and transport to a prison camp. A perimeter fence could be replaced by a line in the sand. If a prisoner tries to cross it, they are rendered unconscious automatically and put back where they belong.

Imagine a higher class of mugger that doesn’t like violence much and prefers to switch victims off before stealing their valuables.

Imagine being able to switch off for a few hours to pass the time on a long haul flight. Airlines could give discounts to passengers willing to be disabled and therefore less demanding of attention.

Imagine a couple or a group of friends, or a fetish club, where people can turn each other off at will. Once off, other people can do anything they please with them – use them as dolls, as living statues or as mannequins, posing them, dressing them up. This is not an adult blog so just use your imagination – it’s pretty obvious what people will do and what sorts of clubs will emerge if an off-switch is feasible, making people into temporary toys.

Imagine if you got an illegal hacking app and could freeze the other people in your vicinity. What would you do?

Imagine if your off-switch is networked and someone else has a remote control or hacks into it.

Imagine if an AI manages to get control of such a system.

Having an off-switch installed could open a new world of fun, but it could also open up a whole new world for control by the authorities, crime control, censorship or abuse by terrorists and thieves and even pranksters.

Google is wrong. We don’t all want gadgets that predict our needs.

In the early 1990s, lots of people started talking about future tech that would work out what we want and make it happen. A whole batch of new ideas came out: internet fridges, smart waste-baskets, the ability to control your air conditioning from the office or open and close the curtains while you’re away on holiday. Almost 25 years on, we still see just a trickle of prototypes, followed by a tsunami of apathy from the customer base.

Do you want an internet fridge, that orders milk when you’re running out, or speaks to you all the time telling you what you’re short of, or sends messages to your phone when you are shopping? I certainly don’t. It would be extremely irritating. It would crash frequently. If I forget to clean the sensors it won’t work. If I don’t regularly update the software, and update the security, and get it serviced, it won’t work. It will ask me for passwords. If my smart loo notices I’m putting on weight, the fridge will refuse to open, and tell the microwave and cooker too so that they won’t cook my lunch. It will tell my credit card not to let me buy chocolate bars or ice cream. It will be a week before kitchen rage sets in and I take a hammer to it. The smart waste bin will also be covered in tomato sauce from bean cans held in a hundred orientations until the sensor finally recognizes the scrap of bar-code that hasn’t been ripped off. Trust me, we looked at all this decades ago and found the whole idea wanting. A few show-off early adopters want it to show how cool and trendy they are, then they’ll turn it off when no-one is watching.

EDIT: example of security risks from smart devices (this one has since been fixed) http://www.bbc.co.uk/news/technology-28208905

If I am with my best friend, who has known me for 30 years, or my wife, who also knows me quite well, they ask me what I want and discuss the options with me. They don’t assume they know best and just decide things; if they did, they’d soon get moaned at. If I don’t want my wife or my best friend to assume they know what I want, why would I want gadgets to do that?

The first thing I did after checking out my smart TV was to disconnect it from the network so that it won’t upload anything and won’t get hacked or infected with viruses. Lots of people have complained about new TV adverts that trigger their Xboxes via Kinect voice recognition. The ‘smart’ TV receiver might be switched off as that happens. I am already sick of things turning themselves off without my consent because they think they know what I want.

They don’t know what is best. They don’t know what I want. Google doesn’t either. Its many ideas about feeding me lots of information it thinks I want while I am out are equally unwelcome. Is the future of UI gadgets that predict your needs, as Wired says Google thinks? No, it isn’t. What I want is a really intuitive interface so I can ask for what I want, when I want it. The very last thing I want is an idiot device thinking it knows better than I do.

We are not there yet. We are nowhere near there yet. Until we are, let me make my own decisions. PLEASE!

Limits of ISIS terrorism in the UK

This is the 3rd article in my short series trying to figure out the level of terrorist danger ISIS poses in the UK, again comparing them with the IRA in the Northern Ireland ‘troubles’. (ISIS = Islamic State of Iraq and al-Sham. IRA = Irish Republican Army). I don’t predict the level it will actually get to, which depends on too many factors, only the limits if everything goes their way.

http://timeguide.wordpress.com/2014/06/22/isis-comparison-with-the-ira-conflict/ discussed the key difference, that ISIS is a religious group and the IRA was a nationalist one.

http://timeguide.wordpress.com/2014/06/25/a-pc-roost-for-terrorist-chickens/ then discusses the increased vulnerability in the UK now thanks to ongoing political correctness.

IRA

Wikipedia says: The Provisional IRA’s armed campaign, primarily in Northern Ireland but also in England and mainland Europe, caused the deaths of approximately 1,800 people. The dead included around 1,100 members of the British security forces, and about 640 civilians.

It also gives a plausible estimate of the number of its members:

By the late 1980s and early 1990s, it was estimated that the IRA had roughly 300 members in Active Service Units and about another 450 serving in supporting roles (such as ‘policing’ nationalist areas, intelligence gathering, and hiding weapons).

Sinn Fein (which was often called the IRA’s ‘political wing’) managed to get 43% support from the nationalist community at its peak in 1981, after the hunger strikes. Provisional IRA approval ratings sat at around 30%. Supporting a cause politically is not the same as supporting violence in its name – some back a cause but won’t support fighting for it. That 30% yields an IRA supporter population of around 75,000 from 245,000 nationalist voters. So, from a supporter population of 75,000, only 300 were in IRA Active Service Units and 450 in supporting roles at any particular time, although thousands were involved over the whole troubles. That is a total of only 1% of the relevant population from which they were drawn – those who supported violent campaigns. Only 0.4% were in active service units, i.e. actual terrorists. That is an encouragingly small percentage.
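Since these ratios drive the later estimates, here is the arithmetic made explicit – a minimal sketch using only the figures quoted above (the 75,000 supporter base is the text’s rounding of 30% of 245,000; none of these are official statistics):

```python
# IRA-era ratios derived from the figures quoted above.
NATIONALIST_VOTERS = 245_000
IRA_APPROVAL = 0.30            # ~30% approval of the Provisional IRA

supporters = 75_000            # 245,000 x 30% = 73,500, rounded to 75,000
ACTIVE_SERVICE = 300           # members in Active Service Units at any time
SUPPORT_ROLES = 450            # members in supporting roles

active_ratio = ACTIVE_SERVICE / supporters                      # 0.4% of supporters
any_role_ratio = (ACTIVE_SERVICE + SUPPORT_ROLES) / supporters  # 1% of supporters

print(f"{active_ratio:.1%} active terrorists, {any_role_ratio:.1%} in any role")
```

These two ratios (0.4% and 1%) are the only numbers carried forward into the ISIS extrapolation below.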

ISIS

The government’s estimate of the number of young men from the UK who have gone overseas to fight with ISIS is around 500. According to a former head of MI6, 300 have already returned. Some of those will be a problem and some will have lost sympathy with the cause, just as some men joined the IRA and later left all the way through the troubles. Others will not have gone overseas and therefore can’t be identified and tracked the same way. Over time, ISIS will attempt to recruit more to the cause, and some will drop out. I can’t find official estimates of the numbers, but there are ways of making such estimates.

Building on Paddy Ashdown’s analogy with the IRA, the same kinds of young men will join ISIS as joined the IRA: those with no hope of status, fame or glory from their normal lives, who want to be respected and seen as heroic rebel fighters by holding a weapon, and who are easy prey for charismatic leaders with exciting recruitment campaigns. Young Muslim men in the UK also face high unemployment.

ISIS draws its support from the non-peace-loving minority of the Muslim community. Citing Wikipedia again, a Pew Research Center poll showed that 72% of Muslims worldwide said violence against civilians is never justified, surprisingly similar to the equivalent 70% found in the nationalist community in Northern Ireland. Pew also found that in the US and UK, over 1 in 4 Muslims think suicide bombing is sometimes justified, not very different from the worldwide level. (A 2006 survey by NOP found that only 9% of UK Muslims supported violence. Whether attitudes have changed or it is just the way the questions are asked is anyone’s guess; for now, I’ll run with both figures – the calculations are easy.)

The 25-30% figures are similar to the situation in Northern Ireland in spite of quite different causes. I lived a third of my life in Belfast and I don’t think the people there generally are any less civilized than people here in England. Maybe it’s just human nature that when faced with a common grievance, 25-30% of us will consider that violence is somewhat acceptable against civilians and support a sub-population of 0.4% terrorists fighting on our behalf.

On the other hand, the vast majority – 70%+ – of us are peace-loving. A glass half full or half empty; take your pick.

The UK Muslim community is around 3 million, similar to that of the USA in fact. 28% of that yields a potential supporter population of 840,000. The potential terrorist 1% of that is 8,400, and the 0.4% of actual terrorists is 3,360. If we’re optimistic and take NOP’s 2006 figure of 9% supporting violence, then 270,000 people would be supporting 1,080 terrorists – if the right terrorist group were to appear in the right circumstances with the right cause, the right leaders and good marketing, and were to succeed in its campaigning. That puts an upper limit for extreme Islamist terrorism in the UK at between 3 and 11 times the size of the IRA at its peak, if everything goes its way.
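To make that arithmetic checkable, here is the same estimate as a short sketch. The ratios (0.4% active, 1% in any role) come from the IRA figures earlier; the 28% and 9% figures are the survey numbers cited above – none of these are my own data:

```python
# Upper-bound estimate: apply the IRA-era ratios to the fraction of the
# UK Muslim community that surveys suggest condones violence.
UK_MUSLIM_POPULATION = 3_000_000
ACTIVE_RATIO = 0.004     # active terrorists per violence-condoning supporter
ANY_ROLE_RATIO = 0.01    # active + support roles per supporter

def potential(support_fraction):
    """Return (active terrorists, any role) for a given support level."""
    supporters = UK_MUSLIM_POPULATION * support_fraction
    return round(supporters * ACTIVE_RATIO), round(supporters * ANY_ROLE_RATIO)

print(potential(0.28))   # pessimistic (Pew-style 28%): (3360, 8400)
print(potential(0.09))   # optimistic (NOP 2006, 9%):   (1080, 2700)
```

The 3,360 figure is roughly 11 times the IRA’s ~300 active terrorists at its peak, and the 1,080 figure roughly 3.6 times – hence the ‘3 to 11 times’ bound.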

However, neither figure is the actual number of UK ISIS terrorists – only the potential number available if the cause and motivation are right, if the community buys into it, if the ISIS leaders are charismatic, and if they do their marketing well in their catchment communities. So far, 500 have emerged and actually gone off to fight with ISIS, and 300 have returned. We don’t know how many stayed here or are only thinking of joining up, or aren’t even thinking of it but might, and we don’t know what might happen to aggravate the situation and increase recruitment. Nor do we know how many will try to come here who aren’t from the UK. There are plenty of ‘known unknowns’.

Some of the known unknowns are good ones though – it isn’t all scary. In the Middle East, ISIS has clear objectives and controls cities, arms and finance. They say they want to cause problems here too, but they’re a bit busy right now, they don’t have a clear battle to fight here, and most of all our Muslim community doesn’t want to be the source of large scale terrorism so isn’t likely to be cooperative with such an extremist and barbaric group as ISIS. Their particular style of barbarism and particularly extremist views are likely to put off many who might consider supporting another extremist Islamist group. There also isn’t an easy supply of weapons here. All these work in our favor and will dampen ISIS efforts.

So the magnitude of the problem will come down to the relative efforts of our security forces, the efforts of the peace-loving Muslim majority to prevent young men being drawn towards extremism, and the success of ISIS marketing and recruitment. We do know that we do not want 3,360 home-grown ISIS terrorists wandering around the UK, or a similar number in the USA.

Finally, there are two sides to every conflict. ISIS terrorism would likely spawn opposing paramilitary groups. As far as their potential support base goes, ‘far right’ parties add up to about 2% of the population, roughly 1.25 million people, and I would guess that a much higher proportion of an extremist group’s supporters condone violence than in the general population, so some hand-waving suggests that an opposing terrorist group with a similarly sized supporter population is not unlikely. We know from elsewhere in Ireland and other EU countries that that 2% could grow to the 25-30% we saw earlier if our government really loses control. In the USA, the catchment group on the ISIS side is only the same size as the UK’s, but the potential armed resistance to them is far greater.

In summary, ISIS is potentially a big problem, with 300 home-grown potential ISIS terrorists already back in the UK and trained, hundreds more being trained overseas, and an unknown quantity not yet on the radar. If all goes badly, that could grow to between 1,000 and over 3,000 active terrorists, compared to the IRA, which typically had only 300 active terrorists at a time. Some recent trends have made us much more vulnerable, but there are also many others that lean against ISIS success.

I have a lot of confidence in our intelligence and security forces, who have already prevented a great many potential terrorist acts. The potential magnitude of the problem will keep them well-motivated for quite a while. There is a lot at stake, and ISIS must not get UK terrorism off the ground.

A PC roost for terrorist chickens

Political correctness as a secular religion substitute

Being politically correct makes people feel they are good people. It provides a secular substitute for the psychological rewards people used to get from being devoutly religious, a self-built pedestal from which to sneer down on others who are not compliant with all the latest politically correct decrees. It started out long ago with a benign goal to protect abused and vulnerable minorities, but it has since evolved and mutated into a form of oppression in its own right. Surely we all want to protect the vulnerable and all want to stamp out racism, but political correctness long left those goals in the dust. Minorities are often protected without their consent or approval from things they didn’t even know existed, but still have to face any consequent backlash when they are blamed. Perceived oppressors are often victimized based on assumptions, misrepresentations and straw man analyses rather than actual facts or what they actually said. For PC devotees, one set of prejudices and bigotry is simply replaced by another. Instead of erasing barriers within society, political correctness often creates or reinforces them.

Unlike conventional religion, which is largely separated from the state and allows advocates to indulge with little effect on others, political correctness has no such state separation, but is instead deeply integrated into politics, hence its name. It often influences lawmakers, regulators, the media, police and even the judiciary and thereby incurs a cost of impact on the whole society. The PC elite standing on their pedestals get their meta-religious rewards at everyone’s expense, usually funded by the very taxpayers they oppress.

Dangers

Political correctness wouldn’t exist if many didn’t want it that way, but even if the rest of us object to it, it is something we have learned to live with. Sometimes however, denial of reality, spinning reasoning upside down or diverting attention away from unpleasant facts ceases to be just irritating and becomes dangerous. Several military and political leaders have recently expressed grave concerns about our vulnerability to a new wave of terrorism originating from the current Middle East problems. Even as the threat grows, the PC elite try to divert attention to blaming the West, equating moralities and cultural values and making it easier for such potential terrorism to gestate. There are a number of trends resulting from PC and together they add to the terrorist threats we’re currently facing while reducing our defenses, creating something of a perfect storm. Let’s look at some dangers that arise from just three PC themes – the worship of diversity, the redefining of racism, and moral equivalence and see some of the problems and weaknesses they cause. I know too little about the USA to make sensible comment on the exact situation there, but of course they are also targets of the same terrorist groups. I will talk about the UK situation, since that is where I live.

Worship of diversity

In the UK, the Labour Party admitted that it encouraged unchecked immigration throughout its time in power. This is now overloading public services and infrastructure across the UK, and it was apparently done ‘to rub the Conservatives’ noses in diversity’ (as well as to increase the Labour-supporting population). With EC policy equally PC, other EU countries have had to implement similar policies. Unfortunately, in their eagerness to be PC, neither the EC nor Labour saw any need to impose limits or even a points system to ensure countries get the best candidates for their needs.

In spite of the PC straw man argument that is often used, the need for immigration is not in dispute, only its magnitude and sources. We certainly need immigration, and most immigrants are just normal people looking for a better life in the UK, or refugees seeking safety from overseas conflicts. No reasonable person has any problem with immigration per se, nor with the color of the immigrants, but any debate about immigration lasts only seconds before someone PC throws in accusations of racism, which I’ll discuss shortly. I think I am typical of most British people in being very happy to have people of all shades all around me, and would defend genuine efforts to win equality, but I still think we should not allow unlimited immigration. In reality, after happily welcoming generations of immigrants from diverse backgrounds, what most people see as the problem now is the number of people immigrating and the difficulties it creates for local communities in accommodating and providing services and resources for them, or sometimes even communicating with them. Stresses have thus resulted from actions born of political correctness based on a fallacy, seeking to magnify a racism problem that had almost evaporated. Now that PC policy has created a situation of system overload and non-integration, tensions between communities are increasing and racism is likely to resurface. In this case, PC has already backfired, badly. Across the whole of Europe, the consequences of political correctness have led directly to increased polarization and the rise of extremist parties. It has achieved the exact opposite of the diversity utopia it originally set out to achieve. Like most British people, I would like to keep racism consigned to history, but political correctness is resurrecting it.

There are security problems too. A few immigrants are not the nice ordinary people we’d be glad to have next door, but are criminals looking to vanish or religious extremists hoping to brainwash people, or terrorists looking for bases to plan future operations and recruit members. We may even have let in a few war criminals masquerading as refugees after their involvement in genocides. Nobody knows how many less-than-innocent ones are here but with possibly incompetent and certainly severely overworked border agencies, at least some of the holes in the net are still there.

Now that Edward Snowden has released many of the secrets of how our security forces stay on top of terrorism and the PC media have gleefully published some of them, terrorists can minimize their risk of being caught and maximize the numbers of people harmed by their activities. They can also immigrate and communicate more easily.

Redefining Racism

Racism as originally defined is a mainly historic problem in the UK, at least from the host community (i.e. prejudice, discrimination, or antagonism directed against someone of a different race based on the belief that one’s own race is superior). On that definition I have not heard a racist comment or witnessed a racist act against someone from an ethnic minority in the UK for well over a decade (though I accept some people may have a different experience; racism hasn’t vanished completely yet).

However, almost as if the main purpose were to keep the problem alive and protect their claim to holiness, the politically correct elite has attempted, with some legal success, to redefine racism from this ‘treating people of different race as inferior’, to “saying anything unfavorable, whether factual or not, to or about anyone who has a different race, religion, nationality, culture or even accent, or mimicking any of their attributes, unless you are from a protected minority. Some minorities however are to be considered unacceptable and not protected”. Maybe that isn’t how they might write it, but that is clearly what they mean.

I can’t buy into such a definition. It hides true racism and makes it harder to tackle. A healthy society needs genuine equality of race, color, gender, sexuality and age, not privileges for some and oppression for others.

I don’t believe in cultural or ideological equality. Culture and ideology should not be entitled to the same protection as race or color or gender. People can’t choose what color or nationality they were born, but they can choose what they believe and how they behave, unless oppression genuinely prevents them from choosing. We need to clearly distinguish between someone’s race and their behavior and culture, not blur the two. Cultures are not equal. They differ in how they treat people, how they treat animals, their views on democracy, torture, how they fight, their attitudes to freedom of speech and religion. If someone’s religion or culture doesn’t respect equality and freedom and democracy, or if it accepts torture of people or animals, or if its fighters don’t respect the Geneva Convention, then I don’t respect it; I don’t care what color or race or nationality they are.

Opinions are not all equally valid either. You might have an opinion that my art is every bit as good as Monet’s and Dali’s. If so, you’re an idiot, whatever your race or gender.

I can criticize culture or opinion or religion without any mention of race or skin color, distinguishing easily between what is inherited and what is chosen, between body and mind. No big achievement; so can most people. We must protect that distinction. If we lose that distinction between body and mind, there can be no right and wrong, and no justice. If you have freedom of choice, then you also have a responsibility for your choice and you should accept the consequences of that choice. If we can accept a wrong just because it comes from someone in a minority group or is approved of by some religion, how long will it be before criminals are considered just another minority? A recent UK pedophile scandal involved senior PC politicians supporting a group arguing for reduction of the age of consent to 10 and decriminalization of sex with young children. They didn’t want to offend the minority group seeking it, that wouldn’t have been politically correct enough. Although it was a long time ago, it still shows that it may only be a matter of time before being a pedophile is considered just another lifestyle choice, as good as any other. If it has happened once, it may happen again, and the PC climate next time might let it through.

Political correctness prevents civilized discussion across a broad field of academic performance, crime, culture and behavior and therefore prevents many social problems from being dealt with. The PC design of ‘hate crime’ with deliberately fuzzy boundaries generates excess censorship by officialdom and especially self-censorship across society due to fear of false accusation or accidentally falling foul of it. That undermines communication between groups and accelerates tribal divisions and conflict. Views that cannot be voiced can still exist and may grow more extreme and when finally given an outlet, may cause far greater problems.

PC often throws up a self-inflicted problem when a member of a minority group does or says something bad or clearly holds views that are also politically incorrect. PC media try to avoid reporting any such occurrences, usually diverting attention onto another topic and accusing any other media that do deal with it of being racist, or using their other weapon, the ad hominem attack. If they can’t avoid reporting it, they strenuously avoid any mention of the culprit’s minority group, and if they can’t do that, will search for some way to excuse it, blame it on someone else or pretend it doesn’t matter. Although intended to avoid feeding racism, this makes it more difficult to have the necessary debate and can even increase suspicion of cover-ups and preferential treatment.

Indeed, accusations of racism have become a powerful barrier to be thrown up whenever an investigation threatens to uncover any undesirable activity by a member of an ethnic or national minority, and even more so if a group is involved. For example, the authorities were widely accused of racism for investigating the ‘Trojan Horse’ stories, in a city that has already produced many of the recent UK additions to ISIS. Police need to be able to investigate and root out activities that could lead to more extremism, especially those that might be brainwashing kids for terrorism. A police force now terrified of being accused of being institutionally racist is greatly impeded when the race card is played, and with an ever-expanding definition, it is played more and more frequently.

Moral equivalence

It is common on TV to see atrocities by one side in overseas conflicts being equated to lesser crimes by the other. In fact, rather than even declaring equivalence, PC moral equivalence seemingly insists that all moral judgments are valued in inverse proportion to their commonality with traditional Western values. At best it often equates things from either side that really should not be equated. This creates a highly asymmetric playing field that benefits propaganda from terrorist groups and rogue regimes and undermines military efforts to prevent terrorist acts. It also decreases resistance to views and behaviors that undermine existing values while magnifying any grievance against the West.

PC media often gives a platform to extremists hoping to win new recruits, presumably so they can pretend to be impartial. While our security forces were doing their best to remove recruitment propaganda from the web, some TV news programs gleefully gave them regular free air time. Hate preachers have often been given lengthy interviews to put their arguments across.

The West’s willingness to defend itself is already greatly undermined after decades of moral equivalence eating away at any notion that we have something valuable or special to defend. Fewer and fewer people are prepared to defend our countries or our values against those who wish to replace liberal democracy with medieval tyranny. Our armies fight with threats of severe legal action and media spotlights highlighting every misjudgment on our side, while fighting against those who respect no such notions of civilized warfare.

Summary

Individually, these are things we have learned to live with, but added together, they put the West at a huge disadvantage when faced with media-savvy enemies such as ISIS. We can be certain that ISIS will make full use of each and every one of these PC weaknesses in our cultural defense. The PC chickens may come home to roost.