Category Archives: crime

The future of biometric identification and authentication

If you work in IT security, the first part of this will not be news to you; skip to the section on the future. Otherwise, the first sections look at the current state of biometrics and some of what we already know about their security limitations.

Introduction

I just read an article on fingerprint recognition. Biometrics has been hailed by some as a wonderful way of determining someone’s identity, and by others as a security mechanism that is far too easy to spoof. I generally fall in the second category. I don’t mind using it for simple unimportant things like turning on my tablet, on which I keep nothing sensitive, but so far I would never trust it as part of any system that gives access to my money or sensitive files.

My own history is that voice recognition still doesn’t work for me, fingerprints don’t work for me, and face recognition doesn’t work for me. Iris scan recognition does, but I don’t trust that either. Let’s take a quick look at conventional biometrics today and the near future.

Conventional biometrics

Fingerprint recognition

I use a Google Nexus, made by Samsung. Samsung is in the news today because their Galaxy S5 fingerprint sensor was hacked by SRLabs minutes after release, not the most promising endorsement of their security competence.

http://www.telegraph.co.uk/technology/samsung/10769478/Galaxy-S5-fingerprint-scanner-hacked.html

This article says the sensor is used for user authentication to access PayPal. That is really not good. I expect quite a few engineers at Samsung are working very hard indeed today. I expect they thought they had tested it thoroughly, and their engineers know a thing or two about security. Every engineer knows you can photograph a fingerprint and print a replica in silicone or glue or whatever. It’s the first topic of discussion at any Biometrics 101 meeting. I would assume they tested for that, and that they would not release something they expected to bring instant embarrassment on their company, especially something failing by that classic mechanism. Yet according to this article, that seems to be the case. Given that Samsung is one of the most advanced technology companies out there, and can be assumed to have made a reasonable effort to get it right, that doesn’t offer much hope for fingerprint recognition. If they don’t do it right, who will?

My own experience of fingerprint recognition is having to join a special queue every day at Universal Studios because their fingerprint recognition entry system never once recognised me or my child. So I have never liked it because of false negatives. For the people it does work for, their fingerprints are all over the place, some in high quality, and can easily be obtained and replicated.

As just one token in multi-factor authentication, it may yet have some potential, but as a primary access key, not a chance. It will probably remain a weak authenticator.

Face recognition

There are many ways of recognizing faces – visible light, infrared or UV, bone structure, face shapes, skin texture patterns, lip-prints, facial gesture sequences… These could be combined in simultaneous multi-factor authentication. The technology isn’t there yet, but it offers more hope than fingerprint recognition. Using the face alone is no good though. You can make masks from high-resolution photographs of people, and photos could be made using the same spectrum known to be used in the recognition systems. Adding gestures is a nice idea, but in a world where cameras are becoming ubiquitous, it wouldn’t be too hard to capture the sequence you use. Making a mask appear alive is entirely feasible too: add sensing, use video to detect any inspection for pulse or blood flow or any gesture requests, and provide the appropriate response. The effort involved would deter casual entry, but not much more. So I am not encouraged to believe face recognition would be secure unless and until some cleverer innovation occurs.

What I do know is that I set my tablet up to recognize me and it works about one time in five. The rest of the time I have to wait till it fails and then type in a PIN. So on average, it actually slows entry down. False negative again. Giving lots of false negatives without the reward of avoiding false positives is not a good combination.

Iris scans

I was a subject in one of the early trials for iris recognition. It seemed very promising. It always recognized me and never confused me with someone else. That was a very small scale trial though, so I’d need a lot more convincing before I let it near my bank account. I raised the problem of replicating an iris using a high quality printer and was assured that couldn’t work because the system checks that the eye is alive by watching for jitter, and by shining a light and watching for pupil contraction. Call me too suspicious, but I didn’t and don’t find that at all reassuring. It won’t be too long before we can make a thin-sheet, high-res polymer display layered onto a polymer gel underlayer that contracts under an electric field, with light sensors built in and some software analysis for real-time response. You could even do it as part of a mask, with the rest of the face also faithfully mimicking all the textures, real-time responses, blood flow, gesture sequences and so on. If the prize is valuable enough to justify the effort, every aspect of the eyes, face and fingerprints could be mimicked. It may be more Mission Impossible than casual high street robbery, but I can’t yet have any confidence that any part of the face or gestures would offer good security.

DNA

We hear frequently that DNA is a superbly secure authenticator. Every one of your cells can identify you. You almost certainly leave a few cells at the scene of a crime, so you can be caught, and because your DNA is unique, it must have been you that did it. Perfect, yes? And because it is such a perfect authenticator, it could be used confidently to police entry to secure systems.

No! First, even for a criminal trial, only a few parts of your DNA are checked; they don’t do an entire genome match. That already brings the chances of a match down to millions rather than billions. A chance of millions to one sounds impressive to a jury until you look at the figure from the other direction. If you have a 1 in 70 million chance of a match, a prosecution barrister might try to present that as a 70 million to 1 chance that you’re guilty, and a juror may well be taken in. The other side of that is that 100 people out of the 7 billion would have that same 1 in 70 million match. So a competent defense barrister should present it as only a 1 in 100 chance that it was you. Not quite so impressive.
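
Just to make that reasoning concrete, here is the arithmetic as a tiny Python sketch (the 1-in-70-million figure is purely illustrative, as above, not a real forensic match probability):

```python
# Illustrative arithmetic only - the match probability is the example
# figure used above, not a real forensic statistic.
match_probability = 1 / 70_000_000     # chance a random person matches the partial profile
world_population = 7_000_000_000

expected_matches = world_population * match_probability
print(expected_matches)                # 100.0 people worldwide would match by chance

# So, absent other evidence, the chance the defendant is the one who left
# the sample is about 1 in 100, not 70 million to 1.
print(1 / expected_matches)            # 0.01
```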

I doubt a DNA system used commercially for security would be as sophisticated as one used in forensic labs. It will be many years before an instant response using large parts of your genome could be made economic. But what then? Still no. You leave DNA everywhere you go, all day, every day. I find it amazing that it is permitted as evidence in trials, because it is so easy to get hold of someone’s hairs or skin flakes – from any bus seat, hotel bathroom or bed. Any maid in a big hotel or any airline cabin attendant could gather packets of tissue and hair samples, and in many cases could even attach a name to them. Your DNA could be found at the scene of any crime, planted there by someone who simply wanted to deflect attention from themselves and get someone else convicted instead. They don’t even need to know who you are. And the police can tick the crime-solved box as long as someone gets convicted. It doesn’t have to be the culprit. Think you have nothing to fear if you have done nothing wrong? Think again.

If someone wants to get access to an account, but doesn’t mind whose, perhaps a DNA-based entry system would offer a good target, because people perceive it as secure when it simply isn’t, so it might not be paired with other, stronger factors. Going back to the maid or cabin attendant: both are low paid. A few might welcome some black market bonuses if they can collect good quality samples with a name attached, especially the name of someone staying in a posh suite, probably with a nice account or two, or privy to valuable information – especially if they also gather fingerprints at the same time. Knowing who the target is, you could get a high-res picture of their face and eyes off the net, along with some voice samples from videos, then make a mask, an iris replica and fingerprints, and if you’re lucky also buy video of their gesture patterns on the black market, giving you an almost perfect multi-factor biometric spoof.

It also becomes quickly obvious that the people who are the most valuable or important are also the people who are most vulnerable to such high quality spoofing.

So I am not impressed with biometric authentication. It sounds good at first, but biometrics are too easy to access and mimic. The usual security vulnerabilities apply along the rest of the chain too. If your biometric is being measured and sent across a network for authentication, it could be intercepted and stored, then replayed later, and since you can’t change your body much, once your iris has been photographed or your fingerprint stored and hacked, it is useless forever. The same goes for the other biometrics.

Dynamic biometrics

Signatures, gestures and facial expressions offer at least the chance to change them. If your signature has been compromised, you could start using a new one. You could sign different phrases each time, as a personal one-time key. You could invent new gesture sequences. These are really just an equivalent to passwords: you have to remember them and which one you use for which system. You don’t want a street seller who uses your signature to verify a tiny transaction to be able to reuse that same signature to get right into your account.

Summary of status quo

This all brings us back to the most basic of security practice. You can only use static biometrics safely as a small part of a multi-factor system, and you have to use different dynamic biometrics such as gestures or signatures on a one time basis for each system, just as you do with passwords. At best, they provide a simple alternative to a simple password. At worst, they pair low actual security with the illusion of high security, and that is a very bad combination indeed.

So without major progress, biometrics in its conventional meaning doesn’t seem to have much of a future. If it is not much more than a novelty or a toy, and can only be used safely in conjunction with some proper security system, why bother at all?

The future

You can’t easily change your eyes or your DNA or your skin, but you can add things to your body that behave like biometrics or interact with the body, while offering the flexibility and replaceability of electronics.

I have written frequently about active skin, using the skin as a platform for electronics, and I believe the various layers of it offer the best potential for security technology.

RFID chip implants became commonplace in pets long ago, and some people have had them inserted too. RFID variants could easily be printed on a membrane and stuck onto the skin surface. They could be used for one-time keys too, changing each time they are used. Adding accelerometers, magnetometers, pressure sensors or even location sensors could all offer ways of enhancing the security options. Active skin allows easy combination of fingerprints with other factors.
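
As a rough illustration of what a one-time key from a printed patch could look like, here is a minimal HOTP-style rolling-code sketch in Python. It assumes the patch and the verifier share a secret and a counter that advances on every use; the secret, counter width and code length are arbitrary choices for illustration, not any real product’s protocol.

```python
import hmac, hashlib

def one_time_code(shared_secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP-style rolling code: patch and verifier hold the same secret and
    counter, so each code is accepted only once and then both move on."""
    msg = counter.to_bytes(8, "big")
    mac = hmac.new(shared_secret, msg, hashlib.sha256).digest()
    # Truncate the MAC to a short numeric code the patch could transmit.
    return f"{int.from_bytes(mac[:4], 'big') % 10**digits:0{digits}d}"

secret = b"provisioned-when-the-patch-is-issued"   # hypothetical shared secret
print(one_time_code(secret, counter=1))            # different code every use
print(one_time_code(secret, counter=2))
```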

 

Ultra-thin, non-invasive security patches could be stuck onto the skin, and could not be removed without damaging them, so they would offer a potentially valuable platform. Pretty much any kinds and combinations of electronics could be used in them. They could easily be made to have a set lifetime: very thin ones could wash off after a few days, so could be useful for theme park entry during holidays or for short-term contractors. Banks could offer stick-on electronic patches that fundamentally change how they work every month, making them very hard to hack.

Active skin can go inside the skin too, not just on the surface. You could for example have an electronic circuit or an array of micro-scale magnets embedded among the skin cells in your fingertip. Your fingerprint alone could easily be copied and spoofed, but not the accompanying electronic interactivity from the active skin that can be interrogated at the same time. Active skin could measure all sorts of properties of the body too, so personal body chemistry at a particular time could be used. In fact, medical monitoring is the first key development area for active skin, so we’re likely to have a lot of body data available that could make new biometrics. The key advantage here is that skin cells are very large compared to electronic feature sizes. A decent processor or memory can be made around the size of one skin cell and many could be combined using infrared optics within the skin. Temperature or chemical gradients between inner and outer skin layers could be used to power devices too.

If you are signing something, the signature could be accompanied by a signal from the fingertip, sufficiently close to the surface being signed to be useful. A ring on a finger could also offer a voluminous security electronics platform to house any number of sensors, memory and processors.

Skin itself offers a reasonable communications route, able to carry a data stream of a few Mbit/s, so touching something could allow a lot of data transfer very quickly. A smart watch or any other piece of digital jewelry or active skin security patch could use your fingertip to send an authentication sequence. The watch would know who you are by constant proximity and via its own authentication tools. It could easily be de-authorized instantly when detached or via a remote command.
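
A minimal sketch of the kind of handshake that touch could carry: the terminal issues a fresh challenge, the watch signs it with a shared secret and streams the answer through the fingertip. Everything here (the key, the function names, HMAC as the signing scheme) is an illustrative assumption, not a description of any real device.

```python
import hmac, hashlib, secrets

WATCH_KEY = b"secret provisioned into the watch"   # hypothetical shared key

def terminal_challenge() -> bytes:
    return secrets.token_bytes(16)        # fresh random nonce for each touch

def watch_response(challenge: bytes) -> bytes:
    # The watch signs the nonce; the skin link carries the reply to the terminal.
    return hmac.new(WATCH_KEY, challenge, hashlib.sha256).digest()

def terminal_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(WATCH_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = terminal_challenge()
print(terminal_verify(nonce, watch_response(nonce)))   # True - touch accepted
```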

Active makeup offers a novel mechanism too. Makeup will soon exist that uses particles that can change color or alignment under electronic control, potentially allowing video-rate pattern changes. While that makes for fun makeup, it also allows for sophisticated visual authentication sequences using one-time keys. Makeup doesn’t have to be confined to the face of course, and security makeup could be used on the forearm or hands. Combined with static biometrics, many-factor authentication could be implemented.

I believe active skin, using membranes added or printed onto and even within the skin, together with the use of capsules, electronic jewelry, and even active makeup offers the future potential to implement extremely secure personal authentication systems. This pseudo-biometric authentication offers infinitely more flexibility and changeability than the body itself, but because it is attached to the body, offers much the same ease of use and constant presence as other biometrics.

Biometrics may be pretty useless as it is, but the field does certainly have a future. We just need to add some bits. The endless potential variety of those bits and their combinations makes the available creativity space vast.

 

 

WMDs for mad AIs

We think sometimes about mad scientists and what they might do. It’s fun, makes nice films occasionally, and highlights threats years before they become feasible. That then allows scientists and engineers to think through how they might defend against such scenarios, hopefully making sure they don’t happen.

You’ll be aware that a lot more talk of AI is going on again now. It does seem to be picking up progress finally. If it succeeds well enough, a lot more future science and engineering will be done by AI than by people. If genuinely conscious, self-aware AI, with proper emotions etc becomes feasible, as I think it will, then we really ought to think about what happens when it goes wrong. (Sci-fi computer games producers already do think that stuff through sometimes – my personal favorite is Mass Effect). We will one day have some insane AIs. In Mass Effect, the concept of AI being shackled is embedded in the culture, thereby attempting to limit the damage it could presumably do. On the other hand, we have had Asimov’s laws of robotics for decades, but they are sometimes being ignored when it comes to making autonomous defense systems. That doesn’t bode well. So, assuming that Mass Effect’s writers don’t get to be in charge of the world, and instead we have ideological descendants of our current leaders, what sort of things could an advanced AI do in terms of its chosen weaponry?

Advanced AI

An ultra-powerful AI is a potential threat in itself. There is no reason to expect that an advanced AI will be malign, but there is also no reason to assume it won’t be. High level AI could have at least the range of personality that we associate with people, with a potentially greater range of emotions or motivations, so we’d have the super-helpful smart scientist type AIs but also perhaps the evil super-villain and terrorist ones.

An AI doesn’t have to intend harm to be harmful. If it wants to do something and we are in the way, even if it has no malicious intent, we could still become casualties, like ants on a building site.

I have often blogged about achieving conscious computers using techniques such as gel computing, and how we could end up in a Terminator scenario, favored by sci-fi. This could come about through innocent research, military development or a deliberate act of terrorism.

Terminator scenarios are diverse but often rely on AI taking control of human weapons systems. I won’t major on that here because that threat has already been analysed in-depth by many people.

Conscious botnets could arrive by accident too – a student prank harnessing millions of bots, even with an inefficient algorithm, might gain enough power to achieve a high level of AI.

Smart bacteria

Bacterial DNA could be modified so that bacteria can make electronics inside their cells, and power them. Linked to other bacteria, massive AI could be achieved.

Zombies

Adding the ability to enter a human nervous system or disrupt or capture control of a human brain could enable enslavement, giving us zombies. Having been enslaved, zombies could easily be linked across the net. The zombie films we watch tend to miss this feature. Zombies in films and games tend to move in herds, but not generally under control or in a very coordinated way. We should assume that real ones would be fully networked, liable to remote control, and able to share sensory systems. They’d be rather smarter and more capable than the ones we’re used to. Shooting them in the head might not work as well as people expect either, as their nervous systems don’t really need a local controller, and could just as easily be controlled by a collective intelligence, though blood loss would eventually cause them to die. To stop a herd of real zombies, you’d basically have to dismember them. More Dead Space than Dawn of the Dead.

Zombie viruses could be made other ways too. It isn’t necessary to use smart bacteria. Genetic modification of viruses, or a suspension of nanoparticles, are traditional favorites because they could work. Sadly, we are likely to see zombies result from deliberate human acts, probably this century.

From zombies, it is a short hop to the full evolution of the Borg from Star Trek, along with the emergence of characters from computer games to take over the zombified bodies.

Terraforming

With strong external AI providing the collective adaptability to let smart bacteria colonize many niches, bacteria-based AI, or AI using bacteria, could engage in terraforming. Attacking many niches that are important to humans or other life would be very destructive. Terraforming a planet you live on is not generally a good idea, but if an organism can inhabit land, sea or air and even space, there is plenty of scope to avoid self-destruction. Fighting bacteria engaged in such a pursuit might be hard. Smart bacteria could spread immunity to toxins or biological threats almost instantly through a population.

Correlated traffic

Information waves and other correlated traffic – network resonance attacks – are another way of using networks to collapse economies, taking advantage of the physical properties of the links and protocols rather than using more traditional viruses or denial of service attacks. AIs using smart dust or bacteria could launch signals in perfect coordination from any points on any networks simultaneously. This could push any network into resonant overloads that would likely crash it, and would certainly deprive other traffic of bandwidth.

Decryption

Conscious botnets could be used to make decryption engines to wreck security and finance systems. Imagine how much more powerful a worldwide collection of trillions of AI-harnessed organisms or devices would be. Invisibly small smart dust and networked bacteria could also pick up most signals well before they are encrypted anyway, since they could be resident on keyboards or the components and wires within. They could even pick up electrical signals from a person’s scalp and engage in thought recognition, intercepting passwords well before a person’s fingers even move to type them.

Space guns

Solar wind deflector guns are feasible: ionize some of the ionosphere to make a reflective surface, deflect some of the incoming solar wind onto it to make an even bigger reflector, and repeat, ending up with an ionospheric lens or reflector that can steer perhaps 1% of the solar wind onto a city. That could generate a high enough energy density to ignite and even melt a large area of a city within minutes.

This wouldn’t be as easy as capturing space-based solar farms and redirecting their energy. Space solar is being seriously considered, but it presents an extremely attractive target for capture because of its potential as a directed energy weapon. The intended design directs microwave beams to rectenna arrays on the ground, but it would take good design to prevent a takeover possibility.

Drone armies

Drones are already proliferating at an alarming rate, and they now range in size from large insects to medium-sized planes. The next generation is likely to include permanently airborne drones and swarms of insect-sized drones. The swarms offer interesting potential for WMDs. They can disperse and come together on command, making them hard to attack most of the time.

Individual insect-sized drones could build up an electrical charge by a wide variety of means, and could collectively attack individuals, electrocuting or disabling them, as well as overloading or short-circuiting electrical appliances.

Larger drones such as the ones I discussed in

http://carbonweapons.com/2013/06/27/free-floating-combat-drones/ would be capable of much greater damage, and collectively would be virtually indestructible, since each one could be broken to pieces by an attack and automatically reassembled without losing capability, using self-organisation principles. A mixture of large and small drones, possibly also using bacteria and smart dust, could present an extremely formidable coordinated attack.

I also recently blogged about the storm router

http://carbonweapons.com/2014/03/17/stormrouter-making-wmds-from-hurricanes-or-thunderstorms/ that would harness hurricanes, tornados or electrical storms and divert their energy onto chosen targets.

In my Space Anchor novel, my superheroes have to fight against a formidable AI army that appears as just a global collection of tiny clouds. They do some of the things I highlighted above and come close to threatening human existence. It’s a fun story but it is based on potential engineering.

Well, I think that’s enough threats to worry about for today. Maybe given the timing of release, you’re expecting me to hint that this is an April Fool blog. Not this time. All these threats are feasible.

Deterring rape and sexual assault

Since writing this a new set of stats has come out (yes, I should have predicted that):

http://www.ons.gov.uk/ons/rel/crime-stats/crime-statistics/focus-on-violent-crime-and-sexual-offences–2012-13/rft-table-2.xls

New technology appears all the time, but it seemed to me that some very serious problems were being under-addressed, such as rape and sexual assault. Technology obviously won’t solve them alone, but I believe it could help to some degree. However, I wanted to understand the magnitude of the problem first, so I sought out the official statistics. I found it an intensely frustrating task that left me angry that government is so bad at collecting proper data. So although I started this as another technology blog, it evolved, and I now also discuss the statistics, since poor quality data collection and communication on such an important issue as rape is a huge problem in itself. That isn’t a technology issue, it is one of government competence.

Anyway, the headline stats are that:

1060 rapes of women and 522 rapes of girls under 16 resulted in court convictions. A third as many attempted rapes also resulted in convictions.

14767 rapes or attempted rapes of females (attempts typically make up about 25% of the total) were initially recorded by the police, of which 33% were against girls under 16.

The Crime Survey for England and Wales estimates that 69000 women claim to have been subjected to rape or attempted rape in the previous 12 months.

I will discuss the stats further after I have considered how technology could help to reduce rape, the original point of the blog.

This is a highly sensitive area, and people get very upset with any discussion of rape because of its huge emotional impact. I don’t want to upset anybody by misplacing blame so let me say very clearly:

Rape or sexual assault are never a victim’s fault. There are no circumstances under which it is acceptable to take part in any sexual act with anyone against their will. If someone does so, it is entirely their fault, not the victim’s. People should not have to protect themselves but should be free to do as they wish without fear of being raped or sexually assaulted. Some people clearly don’t respect that right and rapes and sexual assaults happen. The rest of us want fewer people to be raped or assaulted and want more guilty people to be convicted. Technology can’t stop rape, and I won’t suggest that it can, but if it can help reduce someone’s chances of becoming a victim or help convict a culprit, even in just some cases, that’s progress.  I just want to do my bit to help as an engineer. Please don’t just think up reasons why a particular solution is no use in a particular case, think instead how it might help in a few. There are lots of rapes and assaults where nothing I suggest will be of any help at all. Technology can only ever be a small part of our fight against sex crime.

Let’s start with something we could easily do tomorrow, using social networking technology to alert potential victims to some dangers, deter stranger rape or help catch culprits. People encounter strangers all the time – at work, on transport, in clubs, pubs, coffee bars, shops, as well as dark alleys and tow-paths. In many of these places, we expect IT infrastructure, communications, cameras, and people with smartphones. 

Social networks often use location and some apps know who some of the people near you are. Shops are starting to use face recognition to identify regular customers and known troublemakers. Videos from building cameras are already often used to try to identify potential suspects or track their movements. Suppose in the not-very-far future, a critical mass of people carried devices that recorded the data of who was near them, throughout the day, and sent it regularly into the cloud. That device could be a special purpose device or it could just be a smartphone with an app on it. Suppose a potential victim in a club has one. They might be able to glance at an app and see a social reputation for many of the people there. They’d see that some are universally considered to be fine upstanding members of the community, even by previous partners, who thought they were nice people, just not right for them. They might see that a few others have had relationships where one or more of their previous partners had left negative feedback, which may or may not be justified. The potential victim might reasonably be more careful with the ones that have dodgy reputations, whether they’re justified or not, and even a little wary of those who don’t carry such a device. Why don’t they carry one? Surely if they were OK, they would? That’s what critical mass does. Above a certain level of adoption, it would rapidly become the norm. Like any sort of reputation, giving someone a false or unjustified rating would carry its own penalty. If you try to get back at an ex by telling lies about them, you’d quickly be identified as a liar by others, or they might sue you for libel. Even at this level, social networking can help alert some people to potential danger some of the time.

Suppose someone ends up being raped. Thanks to the data collected by their device (and those of others) about who was where, when, and with whom, the police would more easily be able to identify some of the people the victim had encountered, and some of those would be able to identify some of the others who didn’t carry such a device. The data would also help eliminate a lot of potential suspects. Unless a rapist had planned in advance to rape, they may even have such a device with them. That might itself be a deterrent against later raping someone they’d met, because they’d know the police would be able to find them more easily. Some clubs and pubs might make it compulsory to carry one, to capitalise on the market from being known as relatively safe hangouts. Other clubs and pubs might be forced to follow suit. We could end up with a society where potential rapists would know that their proximity to their potential victim would be known most of the time. So they might behave.
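
To make the idea a little more concrete, a proximity record might be no more than a timestamped list of pseudonymous device IDs seen nearby, uploaded to the cloud and queryable later under proper authority. A toy sketch follows (all field and function names are invented for illustration):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ProximityRecord:
    timestamp: datetime
    device_id: str            # pseudonymous ID of the wearer's own device
    nearby_ids: list[str]     # pseudonymous IDs detected nearby, e.g. over Bluetooth

def people_near(records: list[ProximityRecord], victim_device: str,
                start: datetime, end: datetime) -> set[str]:
    """The kind of query investigators might run, with proper authorisation:
    which devices were recorded near the victim's in a given time window?"""
    seen: set[str] = set()
    for r in records:
        if r.device_id == victim_device and start <= r.timestamp <= end:
            seen.update(r.nearby_ids)
    return seen
```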

So even social networking such as we have today or could easily produce tomorrow is capable of acting as a deterrent to some people considering raping a stranger. It increases their chances of being caught, and provides some circumstantial evidence at least of their relevant movements when they are.

Smartphones are very underused as a tool to deter rape. Frequent use of social nets, such as uploading photos or adding a diary entry to Facebook, helps to build a picture of events leading up to a crime that may later help in inquiries. Again, that automatically creates a small deterrent by increasing the chances of being investigated. It could go a lot further though. Life-logging may use a microphone that records continuous audio all day and a camera that records pictures when the scene changes. This already exists but is not in common use yet – frequent Facebook updates are as far as most people currently get towards life-logging. Almost any phone is capable of recording audio, and can easily do so from a pocket or bag, but if a camera is to record frequent images, it really needs to be worn. That may be OK in several years if we’re all wearing video visors with built-in cameras, but in practice and for the short term, we’re realistically stuck with just the audio.

So life-logging technology could record a lot of the events, audio and pictures leading up to an offense, and any smartphone could do at least some of this. A rapist might forcefully search a victim or their bag and remove such devices, but by then they might already have transmitted a lot of data into the cloud, possibly even evidence of a struggle that may be used later to help convict. If not removed, it could even record audio throughout the offence, providing a good source of evidence. Smartphones also have accelerometers in them, so they could even act as a sort of black box, showing when a victim was still, walking, running, or struggling. Further, phones often have tracking apps on them, so if a rapist did steal a phone, it may show their later movements up to the point where they dumped it. Phones can also be used to issue distress calls. An emergency distress button would be easy to implement, and could transmit an exact location and stream audio to the emergency services. An app could also be set up to issue a distress call automatically under specific circumstances, such as when it detects a struggle, a scream or a call for help. Finally, a lot of phones are equipped for ID purposes, and that will generally increase the proportion of people in a building whose identity is known. Someone who habitually uses their phone for such purposes could be asked to justify disabling ID or tracking services when later interviewed in connection with an offense. All of these developments will make it just a little bit harder to escape justice, and that knowledge would act as a deterrent.
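
As a sketch of how the automatic distress trigger might work, the app could watch the accelerometer for sustained violent movement and only then raise the alarm. The threshold and window below are guesses purely for illustration; a real app would need far more careful tuning and additional cues such as audio.

```python
import math

STRUGGLE_THRESHOLD_G = 2.5   # guessed acceleration threshold for violent movement
WINDOW = 20                  # consecutive high-energy samples before triggering

def detect_struggle(samples) -> bool:
    """samples: iterable of (ax, ay, az) accelerometer readings in g."""
    run = 0
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        run = run + 1 if magnitude > STRUGGLE_THRESHOLD_G else 0
        if run >= WINDOW:
            return True
    return False

def on_sensor_batch(samples, send_distress):
    # send_distress would stream location and live audio to the emergency services.
    if detect_struggle(samples):
        send_distress()
```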

Overall, a smartphone, with its accelerometer, positioning, audio, image and video recording and its ability to record and transmit any such data on to cloud storage, makes a potentially very useful black box, and that surely must be a significant deterrent. From the point of view of someone falsely accused, it could also act as valuable proof of innocence if they can show that the whole time they were together was amicable, or indeed that they were somewhere else altogether at the time. So actually, both sides of a date have an interest in using such black box smartphone technology, and on a date with someone new, enabling continuous black box logging throughout the date could be encouraged as a sensible precautionary habit. People might reasonably object to a continuous recording during a legitimate date if they thought there was a danger it could be used by the other person to entertain their friends or be uploaded onto the web later, but it could easily be implemented to protect privacy and avoid the risk of misuse. That could be achieved by using an app that keeps the record on a database but gives nobody access to it without a court order. It would be hard to find a good reason to object to the other person protecting themselves by using such an app. With such protections, perhaps it could become as much a part of safe sex as using a condom. Imagine if women’s groups were to encourage a trend to make this sort of recording on dates the norm – no app, no fun!
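
One plausible way to implement the ‘nobody gets access without a court order’ idea is to encrypt the recording on the handset and hand the key to an escrow service that only releases it to a court. The sketch below uses the widely available Python cryptography package; the escrow arrangement itself is an assumption about how such a scheme could be run, not a description of any existing app.

```python
from cryptography.fernet import Fernet   # pip install cryptography

def seal_recording(raw_audio: bytes):
    """Encrypt the date's black-box recording on the handset. The ciphertext
    goes to cloud storage; the key goes to a key-escrow service and is only
    released on a court order."""
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(raw_audio)
    return ciphertext, key

def open_recording(ciphertext: bytes, escrowed_key: bytes) -> bytes:
    # Runs only once the escrow service has released the key to the court.
    return Fernet(escrowed_key).decrypt(ciphertext)
```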

These technologies would be useful primarily in deterring stranger rape or date rape. I doubt if they would help as much with rapes that are by someone the victim knows. There are a number of reasons. It’s reasonable to assume that when the victim knows the rapist, and especially if they are partners and have regular sex, it is far less likely that either would have a recording going. For example, a woman may change her mind during sex that started off consensually. If the man forces her to continue, it is very unlikely that there would be anything recorded to prove rape occurred. In an abusive or violent relationship, an abused partner might use an audio recording via a hidden device when they are concerned – an app could initiate a recording on detection of a secret keyword, or when voices are raised, even when the phone is put in a particular location or orientation. So it might be easy to hide the fact that a recording is going and it could be useful in some cases. However, the fear of being caught doing so by a violent partner might be a strong deterrent, and an abuser may well have full access to or even control of their partner’s phone, and most of all, a victim generally doesn’t know they are going to be raped. So the phone probably isn’t a very useful factor when the victim and rapist are partners or are often together in that kind of situation. However, when it is two colleagues or friends in a new kind of situation, which also accounts for a significant proportion of rapes, perhaps it is more appropriate and normal dating protocols for black box app use may more often apply. Companies could help protect employees by insisting that such a black box recording is in force when any employees are together, in or out of office hours. They could even automate it by detecting proximity of their employees’ phones.

The smartphone is already ubiquitous and everyone is familiar with installing and using apps, so any of this could be done right away. A good campaign supported by the right groups could ensure good uptake of such apps very quickly. And it needn’t be all phone-centric. A new class of device would be useful for those who feel threatened in abusive relationships. Thanks to miniaturisation, recording and transmission devices can easily be concealed in just about any everyday object, many that would be common in a handbag or bedroom drawer or on a bedside table. If abuse isn’t just a one-off event, they may offer a valuable means of providing evidence to deal with an abusive partner.

Obviously, black boxes or audio recording can’t stop someone from using force or threats, but they can provide good quality evidence, and the deterrent effect of likely being caught is a strong defence against any kind of crime. I think that is probably as far as technology can go. Self-defense weapons such as pepper sprays and rape alarms already exist, but we don’t allow use of tasers or knives or guns, and similar restrictions would apply to future defence technologies. Automatically raising an alarm and getting help to the scene quickly is the only way we can reasonably expect technology to help deal with a rape that is occurring, but that makes deterrence via probable detection all the more valuable. Since the technologies also help protect the innocent against false accusations, that would help in getting their social adoption.

So much for what we could do with existing technology. In a few years, we will become accustomed to having patches of electronics stuck on our skin. Active skin and even active makeup will have a lot of medical functions, but they could also include accelerometers, recording devices, pressure sensors and just about anything that uses electronics. Any part of the body can be printed with active skin or active makeup, which is then potentially part of this black box system. Invisibly small sensors in makeup, on thin membranes or even embedded among skin cells could notionally detect, measure and record any kiss, caress, squeeze or impact, even recording the physical sensations experienced by capturing the nerve signals. It could record pain or discomfort, along with precise timing and location, and measure many properties of the skin touching or kissing it too. It might be possible for a victim to prove exactly when a rape happened, exactly what it involved, and who was responsible. Such technology is already being researched around the world. It will take a while to develop and become widespread, but it will come.

I don’t want this to sound frivolous, but I suggested many years ago that when women get breast implants, they really ought to have at least some of the space used for useful electronics, and electronics can actually be made using silicone. A potential rapist can’t steal or deactivate a smart breast implant as easily as a phone. If a woman is going to get implants anyway, why not get ones that increase her safety by having some sort of built-in black box? We don’t have to wait a decade for the technology to do that.

The statistics show that many rapes and sexual assaults that are reported don’t result in a conviction. Some accusations may be false, and I couldn’t find any figures for that number, but lack of good evidence is one of the biggest reasons why many genuine rapes don’t result in conviction. Technology can’t stop rapes, but it can certainly help a lot to provide good quality evidence to make convictions more likely when rapes and assaults do occur.

By making people more aware of potentially risky dates, and by gathering continuous data streams when they are with someone, technology can provide an extra level of safety and a good deterrent against rape and sexual assault. That in no way implies that rape is anyone’s fault except the rapist, but with high social support, it could help make a significant drop in rape incidence and a large rise in conviction rates. I am aware that in the biggest category, the technology I suggest has the smallest benefit to offer, so we will still need to tackle rape by other means. It is only a start, but better some reduction than none.

The rest of this blog is about rape statistics, not about technology or the future. It may be of interest to some readers. Its overwhelming conclusion is that official stats are a mess and nobody has a clue how many rapes actually take place.

Summary Statistics

We hear politicians and special interest groups citing and sometimes misrepresenting wildly varying statistics all the time, and now I know why. It’s hard to know the true scale of the problem, and very easy indeed to be confused by poor presentation of poor quality government statistics in the sexual offenses category. That is a huge issue and source of problems in itself. Although it is very much on the furthest edge of my normal brief, I spent three days trawling through the whole sexual offenses field, looking at the crime survey questionnaires, the gaping holes and inconsistencies in the collected data, and the evolution of offense categories over the last decade. It is no wonder government policies and public debate are so confused when the data available is so poor. It very badly needs fixing.

There are several stages at which some data is available outside and within the justice system. The level of credibility of a claim obviously varies at each stage as the level of evidence increases.

Outside of the justice system, someone may claim to have been raped in a self-completion module of the Crime Survey for England and Wales (CSEW), knowing that it is anonymous, nobody will query their response, no further verification will be required and there will be no consequences for anyone. There are strong personal and political reasons why people may be motivated to give false information (in either direction) in a survey designed to measure crime levels, especially in those sections not done by face-to-face interview, and these reasons are magnified when people filling it in know that their answers will be scaled up to represent the whole population, so that already introduces a large motivational error source. However, even for a person fully intending to tell the truth in the survey, some questions are ambiguous or biased, and some are highly specific while others leave far too much scope for interpretation, leaving gaps in some areas while obsessing over others. In my view, the CSEW is badly conceived and badly implemented. In spite of unfounded government and police assurances that it gives a more accurate picture of crime than other sources, having read it, I have little more confidence in the CSEW as an indicator of actual crime levels than a casual conversation in a pub. We can be sure that some people don’t report some rapes for a variety of reasons, and that in itself is a cause for concern. We don’t know how many go unreported, and the CSEW is not a reasonable indicator. We need a more reliable source.

The next stage for potential stats is that anyone may report any rape to the police, whether of themselves, a friend or colleague, a witnessed rape of a stranger, or even something they heard. The police will only record some of these initial reports as crimes, using a fairly common-sense approach. According to the report, ‘the police record a crime if, on the balance of probability, the circumstances as reported amount to a crime defined by law and if there is no credible evidence to the contrary‘. 7% of these are later dropped for reasons such as errors in initial recording or retraction. However, it has recently been revealed that some forces record every crime reported, whereas others record it only after it has passed the assessment above, damaging the quality of the data by mixing two different types of data together. In such an important area of crime, it is most unsatisfactory that proper statistics are not gathered in a consistent way for each stage of the criminal justice process, using the same criteria in every force.

Having recorded crimes, the police will take some of them forward through the criminal justice system.

Finally, the courts will find proven guilt in some of those cases.

I looked for the data for each of these stages, expecting to find vast numbers of tables detailing everything. Perhaps they exist, and I certainly followed a number of promising routes, but most of the roads I followed ended up leading back to the CSEW and the same overview report. This joint overview report was produced by the Ministry of Justice, Home Office and the Office for National Statistics in 2013, and it includes a range of tables with selected data from actual convictions through to the results of the Crime Survey for England and Wales. While useful, it omits a lot of essential data that I couldn’t find anywhere else either.

The report and its tables can be accessed from:

http://www.ons.gov.uk/ons/rel/crime-stats/an-overview-of-sexual-offending-in-england—wales/december-2012/index.html

Another site gives a nice infographic on police recording, although for a different period. It is worth looking at if only to see the wonderful caveat: ‘the police figures exclude those offences which have not been reported to them’. Here it is:

http://www.ons.gov.uk/ons/rel/crime-stats/crime-statistics/period-ending-june-2013/info-sexual-offenses.html

In my view the ‘overview of sexual offending’ report mixes different qualities of data for different crimes and different victim groups in such a way as to invite confusion, distortion and misrepresentation. I’d encourage you to read it yourself, if only to convince you of the need to pressure government to do it properly. Be warned, a great deal of care is required to work out exactly what offence and which victim group each figure refers to. Some figures include all people, some only females, some only women 16-59 years old. Some refer to different crime groups with similar-sounding names, such as sexual assault and sexual offence, and some include attempts whereas others don’t. Worst of all, some very important statistics are missing, and it’s easy to assume another one refers to what you are looking for when, on closer inspection, it doesn’t. However, there doesn’t appear to be a better official report available, so I had to use it. I’ve done my best to extract and qualify the headline statistics.

Taking rapes against both males and females, in 2011, 1153 people were convicted of carrying out 2294 rapes or attempted rapes, an average of 2 each. The conviction rate was 34.6% of 6630 proceeded against, from 16041 rapes or attempted rapes recorded by the police. Inexplicably, conviction figures are not broken down by victim gender, nor by rape or attempted rape. 

Police recording stats are broken down well. Of the 16041 rapes and attempted rapes recorded by the police, 1274 (8%) were against males, while 14767 (92%) were against females. 33% of the female rapes recorded and 70% of the male rapes recorded were against children (though far more girls were raped than boys). Figures are also broken down well by ethnicity and age, for both offender and victim. Figures elsewhere suggested that 25% of rape attempts are unsuccessful; combined with the 92% of recorded offences that were against females, that would indicate roughly 1582 convictions for actual rape of a female, approximately 1060 women and 522 girls, but those figures only hold true if the proportions are similar through to conviction.
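
For anyone who wants to check the derivation, here is the same estimate laid out as a few lines of Python, using only the figures quoted above and carrying the same caveat that the proportions are assumed to hold through to conviction:

```python
convictions = 2294        # rapes or attempted rapes resulting in conviction, 2011
female_share = 0.92       # share of recorded offences with female victims
completed_share = 0.75    # figures elsewhere suggest ~25% of attempts are unsuccessful
under_16_share = 0.33     # share of recorded female rapes against girls under 16

female_convictions = convictions * female_share             # ~2110
completed_female = female_convictions * completed_share     # ~1582
girls = completed_female * under_16_share                   # ~522
women = completed_female - girls                            # ~1060
print(round(female_convictions), round(completed_female), round(girls), round(women))
```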

Surely such a report should clearly state such an important figure as the number of rapes of a female that led to a conviction, and not leave it to readers to calculate their own estimate from pieces of data spread throughout the report. Government needs to do a lot better at gathering, categorising, analysing and reporting clear and accurate data. 

That 1582 figure for convictions is important, but it represents only the figure for rapes proven beyond reasonable doubt. Some females were raped and the culprit went unpunished. There has been a lot of recent effort to try to get a better conviction rate for rapes. Getting better evidence more frequently would certainly help get more convictions. A common perception is that many or even most rapes are unreported so the focus is often on trying to get more women to report it when they are raped. If someone knows they have good evidence, they are more likely to report a rape or assault, since one of the main reasons they don’t report it is lack of confidence that the police can do anything.

Although I don’t have much confidence in the figures from the CSEW, I’ll list them anyway. Perhaps you have greater confidence in them. The CSEW uses a sample of people, and then results are scaled up to a representation of the whole population. The CSEW (Crime Survey of England and Wales) estimates that 52000 (95% confidence level of between 39000 and 66000) women between 16 and 59 years old claim to have been victim of actual rape in the last 12 months, based on anonymous self-completion questionnaires, with 69000 (95% confidence level of between 54000 and 85000) women claiming to have been victim of attempted or actual rape in the last 12 months. 

In the same period, 22053 sexual assaults were recorded by the police. I couldn’t find any figures for convictions for sexual assaults, only for sexual offenses, which is a different, far larger category that includes indecent exposure and voyeurism. It isn’t clear why the report doesn’t include the figures for sexual assault convictions. Again, government should do better in their collection and presentation of important statistics.

The overview report also gives the stats for the number of women who said they reported a rape or attempted rape. 15% of women said they told the police, 57% said they told someone else but not the police, and 28% said they told nobody. The report does give the reasons commonly cited for not telling the police: “Based on the responses of female victims in the 2011/12 survey, the most frequently cited were that it would be ‘embarrassing’, they ‘didn’t think the police could do much to help’, that the incident was ‘too trivial/not worth reporting’, or that they saw it as a ‘private/family matter and not police business’.”

Whether you pick the 2110 convictions for rape or attempted rape against a female or the 69000 claimed in anonymous questionnaires, or anywhere in between, a lot of females are being subjected to actual and attempted rapes, and a lot are victims of sexual assault. The high proportion of victims that are young children is especially alarming. Male rape is a big problem too, but the figures are a lot lower than for female rape.

And another new book: You Tomorrow, 2nd Edition

I wrote You Tomorrow two years ago. It was my first ebook, and pulled together a lot of material I’d written on the general future of life, with some gaps then filled in. I was quite happy with it as a book, but I could see I’d allowed quite a few typos to get into the final work, and a few other errors too.

However, two years is a long time, and I’ve thought about a lot of new areas in that time. So I decided a few months ago to do a second edition. I deleted a bit, rearranged it, and then added quite a lot. I also wrote the partner book, Total Sustainability. It includes a lot of my ideas on future business and capitalism, politics and society that don’t really belong in You Tomorrow.

So, now it’s out on sale on Amazon

http://www.amazon.co.uk/You-Tomorrow-humanity-belongings-surroundings/dp/1491278269/ in paper, at £9.00 and

http://www.amazon.co.uk/You-Tomorrow-Ian-Pearson-ebook/dp/B00G8DLB24 in ebook form at £3.81 (guessing the right price to get a round number after VAT is added is beyond me. Did you know that paper books don’t have VAT added but ebooks do?)

And here’s a pretty picture:

[Image: You Tomorrow cover, Kindle edition]

Deep surveillance – how much privacy could you lose?

The news that seems to have caught much of the media in shock, that our electronic activities were being monitored, comes as no surprise at all to anyone working in IT for the last decade or two. In fact, I can’t see what’s new. I’ve always assumed since the early 90s that everything I write and do on-line or say or text on a phone or watch on digital TV or do on a game console is recorded forever and checked by computers now or will be checked some time in the future for anything bad. If I don’t want anyone to know I am thinking something, I keep it in my head. Am I paranoid? No. If you think I am, then it’s you who is being naive.

I know that if some technically competent spy with lots of time and resources really wants to monitor everything I do day and night and listen to pretty much everything I say, they could, but I am not important enough, bad enough, threatening enough or even interesting enough, and that conveys far more privacy than any amount of technology barriers ever could. I live in a world of finite but just about acceptable risk of privacy invasion. I’d like more privacy, but it’s too much hassle.

Although government, big business and malicious software might want to record everything I do just in case it might be useful one day, I still assume some privacy, even if it is already technically possible to bypass it. For example, I assume that I can still say what I want in my home without the police turning up even if I am not always politically correct. I am well aware that it is possible to use a function built into the networks called no-ring dial-up to activate the microphone on my phones without me knowing, but I assume nobody bothers. They could, but probably don’t. Same with malware on my mobiles.

I also assume that the police don’t use millimetre wave scanning to video me or my wife through the walls and closed curtains. They could, but probably don’t. And there are plenty of sexier targets to point spycams at so I am probably safe there too.

Probably, nobody bothers to activate the cameras on my iPhone or Nexus, but I am still a bit cautious about where I point them, just in case. There is simply too much malware out there to ever assume my IT is safe. I only plug a camera and microphone into my office PC when I need them. I am sure watching me type or read is pretty boring, and few people would do it for long, but I have my office blinds drawn and close the living room curtains in the evening for the same reason – I don’t like being watched.

In a busy tube train, it is often impossible to stop people getting close enough to use an NFC scanner to copy details from my debit card and Barclaycard, but they can be copied at any till or in any restaurant just as easily, so there is a small risk but it is both unavoidable and acceptable. Banks discovered long ago that it costs far more to prevent fraud 100% than it does to just limit it and accept some. I adopt a similar policy.

Enough of today. What of tomorrow? This is a futures blog – usually.

Well, as millimetre wave systems develop, they could become much more widespread, so burglars and voyeurs might start using them to check whether there is anything worth stealing or videoing. Maybe some search company making visual street maps might ‘accidentally’ capture a detailed 3D map of the inside of your house when they come round, as well as or instead of everything they could access via your wireless LAN. Not deliberately of course, but they can’t check every line of code that some junior might have put in by mistake when they didn’t fully understand the brief.

Some of the next generation games machines will have 3D scanners and HD cameras that can apparently even see blood flow in your skin. If these are hacked or left switched on – and social networking video is one of the applications they are aiming to capture, so they’ll be on often – someone could watch you all evening, capture the most intimate body details, film your facial expressions while you are looking at a known image on a particular part of the screen. Monitoring pupil dilation, smiles, anguished expressions etc could provide a lot of evidence for your emotional state, with a detailed record of what you were watching and doing at exactly that moment, with whom. By monitoring blood flow, pulse and possibly monitoring your skin conductivity via the controller, level of excitement, stress or relaxation can easily be inferred. If given to the authorities, this sort of data might be useful to identify paedophiles or murderers, by seeing which men are excited by seeing kids on TV or those who get pleasure from violent games, so obviously we must allow it, mustn’t we? We know that Microsoft’s OS has had the capability for many years to provide a back door for the authorities. Should we assume that the new Xbox is different?

Monitoring skin conductivity is already routine in IT labs as an input. Thought recognition is possible too, and though primitive today, we will see it spread as the technology progresses. So your thoughts can be monitored as well. Thoughts added to emotional reactions and knowledge of circumstances would allow a very detailed picture of someone’s attitudes. By using high speed future computers to data mine zillions of hours of full sensory data input on every one of us, gathered via all this routine IT exposure, a future government or big business that is prone to bend the rules could deduce everyone’s attitudes to just about everything – the real truth about our attitudes to every friend and family member or TV celebrity or politician or product, our detailed sexual orientation, any fetishes or perversions, our racial attitudes, political allegiances, attitudes to almost every topic ever aired on TV or everyday conversation, how hard we are working, how much stress we are experiencing, many aspects of our medical state. And they could steal your ideas, if you still have any after putting all your effort into self-censorship.
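To illustrate how little engineering that sort of inference needs, here is a minimal sketch in Python of a crude ‘arousal’ score computed from pulse, skin conductance and pupil readings. The signal names, resting baselines and scaling factors are entirely hypothetical – this is a toy, not anyone’s real algorithm – but paired with a timestamped log of what was on screen, even something this naive starts to become the kind of emotional profile described above.

```python
# Toy sketch only: hypothetical sensor fields and arbitrary scaling constants.
from dataclasses import dataclass
from statistics import mean

@dataclass
class SensorSample:
    pulse_bpm: float          # e.g. estimated from skin blood-flow video
    skin_conductance: float   # microsiemens, e.g. via a controller electrode
    pupil_dilation: float     # pupil diameter relative to a resting baseline

def arousal_score(samples, resting_pulse=65.0, resting_conductance=2.0):
    """Return a crude 0..1 arousal estimate from average deviations above rest."""
    pulse_term = max(0.0, mean(s.pulse_bpm for s in samples) - resting_pulse) / 40.0
    gsr_term = max(0.0, mean(s.skin_conductance for s in samples) - resting_conductance) / 5.0
    pupil_term = max(0.0, mean(s.pupil_dilation for s in samples) - 1.0)
    return min(1.0, (pulse_term + gsr_term + pupil_term) / 3.0)

# Example: readings captured while a particular scene was on screen.
readings = [SensorSample(88.0, 4.5, 1.3), SensorSample(92.0, 5.1, 1.4)]
print(arousal_score(readings))  # higher values = more excited or stressed
```

The point is not that this particular formula means anything, but that once the raw signals are being captured and stored, attaching an interpretation to them later is trivial.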

It doesn’t even stop there. If you dare to go outside, innumerable cameras and microphones on phones, visors, and high street surveillance will automatically record all this same stuff for everyone. Thought crimes already exist in many countries, including the UK. In-depth evidence will become available to back up prosecutions of crimes that today would not even be noticed. Computers that can retrospectively data mine evidence collected over decades and link it all together will be able to identify billions of crimes.

Active skin will one day link your nervous system to your IT, allowing you to record and replay sensations. You will never be able to be sure that you are the only one that can access that data either. I could easily hide algorithms in a chip or program that only I know about, that no amount of testing or inspection could ever reveal. If I can, any decent software engineer can too. That’s the main reason I have never trusted my IT – I am quite nice but I would probably be tempted to put in some secret stuff on any IT I designed. Just because I could and could almost certainly get away with it. If someone was making electronics to link to your nervous system, they’d probably be at least tempted to put a back door in too, or be told to by the authorities.

Cameron utters the old line: “if you are innocent, you have nothing to fear”. Only idiots believe that. Do you know anyone who is innocent? Of everything? Who has never ever done or even thought anything even a little bit wrong? Who has never wanted to do anything nasty to a call centre operator? And that’s before you even start to factor in corruption of the police, or mistakes, or being framed, or dumb juries, or secret courts. The real problem here is not what Prism does and what the US authorities are giving to our guys. It is what is being and will be collected and stored, forever, that will be available to all future governments of all persuasions. That’s the problem. They don’t delete it. I’ve often said that our governments are incompetent but not malicious. Most of our leaders are nice guys, even if some are a little corrupt. But what if it all goes wrong, and we somehow end up with a deeply divided society and the wrong government or a dictatorship gets in? Which of us can be sure we won’t be up against the wall one day?

We have already lost the battle to defend our privacy. Most of it is long gone, and the only bits left are those where the technology hasn’t caught up yet. In the future, not even the deepest, most hidden parts of your mind will be private. Ever.

Weapons on planes are everyday normality. We can’t ban them all.

I noted earlier that you can make a pretty dangerous Gauss rifle using a few easily available and legal components, and you could make a 3D-printed jig to arrange them for maximum effect. So I suggested that maybe magnets should be banned too.

(Incidentally, the toy ones you see on YouTube etc. typically just use a few magnets and some regular steel balls. Using large Nd magnets throughout with the positions and polarities optimally set would make it much more powerful). 

Now I learn that a Californian state senator (Leland Yee of San Francisco) – HT Dave Evans for the link http://t.co/REt2o9nF4t – wants 3D printers to be regulated somehow, in case they are used to make guns. That won’t reduce violence if you can easily acquire or make perfectly legal lethal weapons without one. On the ground, even highly lethal kitchen knives and many sharp tools aren’t licensed. Even narrowing it down to planes, there is quite a long list of potentially dangerous, totally legal things you are still very welcome to take on board, some of which would be very hard to ban, so perhaps we should concentrate more on defence and on catching those who wish us harm.

Here are some perfectly legal weapons that people carry frequently with many perfectly benign uses:

Your fingers. Fingernails particularly can inflict pain and give a deep scratch, but some people can blind or even kill others with their bare hands;

Sharp pencils or pencils and a sharpener; pens are harder still and can be pretty sharp too;

Hard plastic drink stirrers, 15cm long, that can be sharpened using a pencil sharpener; they often give you these on the flight so you don’t even have to bring them; hard plastics can be almost as dangerous as metals, so it is hard to see why nail files are banned and drinks stirrers and plastic knives aren’t;

CDs or DVDs, which can be easily broken to make sharp blades; I met a Swedish ex-captain once who said he always took one on board in his jacket pocket, just in case he needed to tackle a terrorist.

Your glasses. You can even take extra pairs if the ones you’re wearing are needed for you to see properly. Nobody checks the lenses to make sure the glass isn’t etched for custom breaking patterns, or whether the lenses can be popped out, with razor-sharp edges. They also don’t check that the ends of the arms don’t slide off. I’m sure Q could do a lot with a pair of glasses.

Rubber bands, can be used to make catapults or power other projectile weapons, and many can be combined to scale up the force;

Paperclips, some of which are pretty large and thick wire;

Nylon cord, which can be used dangerously in many ways. Nylon paracord can support half a ton but be woven into nice little bracelets, or shoelaces for that matter. Thin nylon cord is an excellent cutting tool.

Plastic zip ties (cable ties), the longer ones especially can be lethally used.

Plastic bags too can be used lethally.

All of these are perfectly legal but can be dangerous in the wrong hands. I am sure you can think of many others.

Amusingly, given the Senator’s proposed legislation, you could currently probably take on board a compact 3D printer to print any sharps you want, or a Liberator if you have one of the templates – and I rather expect many terrorist groups have a copy – and sometimes business class seats helpfully have an electrical power supply. I expect you might draw attention if you used one though.

There are lots of ways of storing energy to be released suddenly, a key requirement in many weapons. Springs are pretty good at that job. Many devices we use every day, like staple guns, rely on springs that are compressed and then suddenly release all their force and energy when the mechanism passes a trigger point. Springs are allowed on board. It is very easy to design weapons based on accumulating potential energy across many springs that can then all release it simultaneously. If I can dream some up easily, so can a criminal. It’s also easy to invent mechanisms for self-assembly of projectiles during flight, so parts of a projectile can be separately accelerated.

Banned devices that you could smuggle through detectors are also numerous. High pressure gas reservoirs could easily be made using plastics or resins and could be used for a wide variety of pneumatic projectile weapons and contact or impact based stun weapons. Again, precision release mechanisms could be designed for 3D printing at home, but a 3D printer isn’t essential; there are lots of ways of solving the engineering problems.

I don’t see how regulating printers would make us safer. After hundreds of thousands of years, we ought to know by now that if someone is intent on harming someone else, there is a huge variety of  ways of doing so, using objects or tools that are essential in everyday life and some that don’t need any tools at all, just trained hands.

Technology comes and goes, but nutters, criminals, terrorists and fanatics are here to stay. Only the innocent suffer the inconvenience of following the rules. It’s surely better to make less vulnerable systems.

3D printable guns are here to stay, but we need to ban magnets from flights too.

It’s interesting watching new technologies emerge. Someone has a bright idea, it gets hyped a bit, then someone counter-hypes a nightmare scenario and everyone panics. Then experts queue up to say why it can’t be done, then someone does it, then more panic, then knee-jerk legislation, then eventually the technology becomes part of everyday life.

I was once dismissed out of hand by our best radio experts when I suggested cellphone masts like the ones you see on every high building today. I recall being taught that you couldn’t possibly ever get more than 19.2 kbit/s down a phone line. I got heavily marked down in an appraisal for my obvious stupidity in suggesting that mobile phones could include video cameras. I am well used to being told something is impossible, but if I can see how to make it work, I don’t care, I believe it anyway. My personal mantra is ‘just occasionally, everyone else IS wrong’. I am an engineer. Some engineers might not know how to do something, but others sometimes can.

When the printable gun was suggested (not by me this time!) I accepted it as an inevitable part of the future immediately. I then listened as experts argued that it could never survive the forces. But guess what? A gun doesn’t have to survive. It just needs to work once, then you use a fresh one. The first prototypes only worked for a few bullets before breaking. The Liberator was made to work just once. Missiles are like that. They fire once, only once. So you bring a few to the battle.

The recently uploaded blueprint for the Liberator printable gun has been taken offline after 100,000 copies were downloaded, so it will be about as hard to find as embarrassing pictures of any celebrity. There will be innovations, refinements, improvements, then we will see them in use by hobbyists and criminals alike.

But there are loads of ways to skin a cat, allegedly. A gun’s job is to quickly accelerate a small mass up to a high speed in a short distance. Using explosives in a bullet held in a printable lump of plastic clearly does the job on a one-shot basis, but you still need a bullet and they don’t sell them in Tesco’s. So why do it that way?

A Gauss Rifle is a science toy that can fire a ball-bearing across your living room. You can make one in 5 minutes using nothing more than sticky tape, a ruler and some neodymium magnets. Here’s a nice example of the toy version using simple steel balls:

http://scitoys.com/scitoys/scitoys/magnets/gauss.html

The concept is very well known, though a bit harder to Google now because so many computer games have used the same name for imaginary weapons. In an easily adapted version, the steel balls are replaced by neodymium magnets held in place with alternately attracting and repelling polarities. When the first magnet is released, it is pulled by strong magnetic force to the second one, hitting it quite fast and conveying all that energy to the next stage magnet, which is then pushed away from the one repelling it towards the one attracting it, so accumulating lots of energy. The energy accumulates over several stages, optimally harnessing the full repulsive and attractive forces available from the strong magnets. Too many stages result in the magnets shattering, but with care, four stages with simple steel balls can be used reasonably safely as a toy.

Some sites explain that if you position the magnets accurately with the poles oriented right, you can get it to make a small hole in a wall. I imagine you could design and print a gauss rifle jig with very high precision, far better than you could do with tape and your fingers, that would hold the magnets in the right locations and polarity orientations.  Then just put your magnets in and it is ready. Neodymium magnets are easily available in various sizes at low cost and the energy of the final ball is several times as high as the first one. With the larger magnets, the magnetic forces are extremely high so the energy accumulated would also be high. A sharp plastic dart housing the last ball would make quite a dangerous device. A Gauss rifle might lack the force of a conventional gun, but it could still be quite powerful. If I was in charge of airport security, I’d already be banning magnets from flights.

I really don’t see how you could stop someone making this sort of thing, or plastic crossbows, or fancy plastic jigs with energy stored in springs that can be primed in an aircraft toilet and fire things in imaginative ways. There are zillions of ways to accelerate something, some of which can be done in cascades that only generate tolerable forces at any particular point, so they could easily work with printable materials. The current focus on firearms misses the point. You don’t have to transfer all the energy to a projectile in one short high pressure burst; you can accumulate it in stages. Focusing security controls on explosives-based systems will leave us vulnerable.

3D printable weapons are here to stay, but for criminals and terrorists, bullets with explosives in might soon be obsolete.

Killing machines

There is rising concern about machines such as drones and battlefield robots that could soon be given the decision on whether to kill someone. Since I wrote this and first posted it a couple of weeks ago, the UN has put out their thoughts, as the Daily Mail reports today:

http://www.dailymail.co.uk/news/article-2318713/U-N-report-warns-killer-robots-power-destroy-human-life.html 

At the moment, drones and robots are essentially just remote controlled devices and a human makes the important decisions. In the sense that a human uses them to dispense death from a distance, they aren’t all that different from a spear or a rifle apart from scale of destruction and the distance from which death can be dealt. Without consciousness, a missile is no different from a spear or bullet, nor is a remote controlled machine that it is launched from. It is the act of hitting the fire button that is most significant, but proximity is important too. If an operator is thousands of miles away and isn’t physically threatened, or perhaps has never even met people from the target population, other ethical issues start emerging. But those are ethical issues for the people, not the machine.

Adding artificial intelligence to let a machine decide whether a human is to be killed or not isn’t difficult per se. If you don’t care about killing innocent people, it is pretty easy. It is only made difficult because civilised countries value human lives, and because they distinguish between combatants and civilians.

Personally, I don’t fully understand the distinction between combatants and civilians. In wars, combatants often have no real choice but to fight, or are conscripted, and they are usually told what to do, often by civilian politicians hiding in far away bunkers, with strong penalties for disobeying. If a country goes to war on the basis of a democratic mandate, then surely everyone in the electorate is guilty, even pacifists, who accept the benefits of living in the host country but would prefer to avoid the costs. Children are the only innocents.

In my analysis, soldiers in a democratic country are public sector employees like any other, just doing a job on behalf of the electorate. But that depends to some degree on them keeping their personal integrity and human judgement. The many military personnel who take pride in following orders could be thought of as being dehumanised and reduced to killing machines. Many would actually be proud to be thought of as killing machines. A soldier like that, who merely follows orders, deliberately abdicates human responsibility. Having the capability for good judgement but refusing to use it, they reduce themselves to a lower moral level than a drone. At least a drone doesn’t know what it is doing.

On the other hand, disobeying a direct order may soothe issues of conscience but invoke huge personal costs, anything from shaming and peer disapproval to execution. Balancing that is a personal matter, but it is the act of balancing it that is important, not necessarily the outcome. Giving some thought to the matter and wrestling at least a bit with conscience before acting makes all the difference. That is something a drone can’t yet do.

So even at the start, the difference between a drone and at least some soldiers is not always as big as we might want it to be; for other soldiers it is huge. A killing machine is competing against a grey scale of judgement and morality, not a black and white equation. In those circumstances, in a military that highly values following orders, human judgement is already no longer an essential requirement at the front line. In that case, the leaders might set the drones into combat with a defined objective, the human decision already taken by them, the local judgement of who or what to kill assigned to adaptive AI, algorithms and sensor readings. For a military such as that, drones are no different from soldiers who do what they’re told.

However, if the distinction between combatant and civilian is required, then someone has to decide the relative value of different classes of lives. Then they either have to teach it to the machines so they can make the decision locally, or the costs of potential collateral damage from just killing anyone can be put into the equations at head office. Or thirdly, and most likely in practice, a compromise can be found where some judgement is made in advance and some locally. Finally, it is even possible for killing machines to make decisions on some easier cases and refer difficult ones to remote operators.

We live in an electronic age, with face recognition, friend-or-foe electronic ID, web searches, social networks, location and diaries, mobile phone signals and lots of other clues that might give some knowledge of a target and potential casualties. How important is it to kill or protect this particular individual or group, or take that particular objective? How many innocent lives are an acceptable cost, and from which groups – how many babies, kids, adults, old people? Should physical attractiveness or the victims’ professions be considered? What about race or religion, or nationality, or sexuality, or anything else that could possibly be found out about the target before killing them? How much should people’s personal value be considered, or should everyone be treated equally at the point of potential death? These are tough questions, but the means of getting hold of the data are improving fast and we will be forced to answer them. By the time truly intelligent drones are capable of making human-like decisions, they may well know who they are killing.

In some ways this far future, with a smart or even conscious drone or robot making informed decisions before killing people, isn’t as scary as the time between now and then. Terminator and Robocop may be nightmare scenarios, but at least in those there is clarity about which one is the enemy. Machines don’t yet have anywhere near that capability. However, if an objective is considered valuable, military leaders could already set a machine to kill people even when there is little certainty about the role or identity of the victims. They may put in some algorithms and crude AI to improve performance or reduce errors, but the algorithmic uncertainty and callous, uncaring dispatch of potentially innocent people is very worrying.

Increasing desperation could be expected to lower the barriers to use. So could a lower regard for the value of human life, and in tribal conflicts people often don’t consider the lives of the opposition to have a very high value. This is especially true in terrorism, where the objective is often to kill innocent people. It might not matter that the drone doesn’t know who it is killing, as long as it might be killing the right target as part of the mix. I think it is reasonable to expect a lot of battlefield use and certainly terrorist use of semi-smart robots and drones that kill relatively indiscriminately. Even when truly smart machines arrive, they might be set to malicious goals.

Then there is the possibility of rogue drones and robots. The Terminator/Robocop scenario. If machines are allowed to make their own decisions and then to kill, can we be certain that the safeguards are in place that they can always be safely deactivated? Could they be hacked? Hijacked? Sabotaged by having their fail-safes and shut-offs deactivated? Have their ‘minds’ corrupted? As an engineer, I’d say these are realistic concerns.

All in all, it is a good thing that concern is rising and we are seeing more debate. It is late, but not too late, to make good progress in limiting and controlling the future damage killing machines might do – not just directly, in loss of innocent life, but to our fundamental humanity, as armies get increasingly used to delegating responsibility to machines to deal with a remote, dehumanised threat. Drones and robots are not the end of warfare technology; there are far scarier things coming later. It is time to get a grip before it is too late.

When people fought with sticks and stones, at least they were personally involved. We must never allow personal involvement to disappear from the act of killing someone.

We’re all getting nicer, are we? Then tell that to those poor zombies in The Typing of the Dead. (Guest post by Chris Moseley)

This is a guest post from Chris Moseley, Owner and Managing Director of Infinite Space PR

There was a time when British bobbies rode bicycles, dressed in full fig policeman’s uniform, complete with Coxcomb helmet and brightly polished buttons on their tunics. This antediluvian fellow – let’s call him PC Pinkleton – would nod to Mrs Peartree, a spinster of this parish out for a walk in her sensible brown brogues, twin set and real pearls, and then wave to the local vicar as he pruned his roses. The worst ‘crime’ that PC Pinkleton might encounter would be a few young lads scrumping for apples in Squire Trelawney’s orchard. A clip around the ear and a stern lecture on the moral perils of ‘thieving’, and PC Pinkleton’s duty and day were done. Then along came clashes between Mods and Rockers, pitched battles with skinheads, fights with bikers, football hooligans and flying pickets. Throw in a few rioting miners and poll tax protestors, and for about a 30-year period life for the English bobby became pretty tough. Just at the point when PC Pinkleton was morphing from Dixon of Dock Green into Robocop, complete with padded riot gear, guns, mace and a US military style helmet, it appears that the uncivil civilian has been tamed.

This is the news, announced this week, that rates of murder and violent crime have fallen more rapidly in the UK in the past decade than many other countries in Western Europe. The UK Peace Index, from the Institute for Economics and Peace, found that UK homicides per 100,000 people had fallen from 1.99 in 2003, to one in 2012. The UK was more peaceful overall, it said, with the reasons for it many and varied. The index found Broadland, Norfolk, to be the most peaceful local council area but Lewisham, London, to be the least. The research by the international non-profit research organisation comes as a separate study by Cardiff University suggests the number of people treated in hospital in England and Wales after violent incidents fell by 14% in 2012. Some 267,291 people required care – 40,706 fewer than in 2011 – according to a sample of 54 hospital units, its report said. BBC home editor Mark Easton called it the “riddle of peacefulness” and said the fall in violence was “perhaps a symptom of a new morality”.

Well, I am just a bit sceptical about all this, and more than a little annoyed that the BBC deliberately skirted a really interesting debate and chose instead to pursue an extremely anodyne and rather risible line of discussion. In essence, Mark Easton’s BBC TV and radio pieces concluded with the argument that perhaps as a society we had come to abhor violence. A lovely thought, and while the prospect of peace breaking out all over the place is an attractive one, and I don’t doubt the veracity of the findings of The UK Peace Index, I am more than a little dubious about the notion that human nature has altered so markedly in such a short time. Perhaps one of the reasons that the UK in 2013 is more like Brave New World than the dystopia of A Clockwork Orange is that nearly all of today’s violence is rendered sublimated and vicarious thanks to computer games, combined with the soporific influence of cheap, supermarket-procured booze. Computer games, particularly the violent ones, are, after all, a form of Aldous Huxley’s Soma (“All of the benefits of Christianity and alcohol without their defects”), although rather than allowing one to drift into a peaceful state, they act as a cathartic vent. One can enter a virtual world of almost any description, reach for a virtual sword, gun or mace, and proceed to blitz the hell out of the “enemy”, which is arguably a form of proxy violence that could instead be directed at one’s boss, a driver in a road rage encounter, the bank manager, even an annoying neighbour. One of the most popular games in the UK today, The Typing of the Dead, confronts the would-be gaming hero with hordes of zombies. Words flash up on the screen which the player needs to type as quickly as possible using the keyboard, thereby killing as many zombies as possible. What a relief to wipe out all those irritating pillocks who inevitably emerge from everyday life, without once having to get one’s hands dirty (and what a great lesson in typing too).

Isn’t it possible that we’re just as violent and angry as we used to be? We just express our rage and violence, well, virtually.

http://www.infinitespacepr.com/

UK crime and policing

The news that the level of reported crime in the UK has fallen over the last decade or two is the subject of much debate.  Is it because crime has fallen, or because less is being reported? If crime has actually fallen, is that because the police are doing a better job or some other reason? Will crime fall or rise in the future?

My view is that our police are grossly overpaid (high salaries, huge pensions), often corrupt (by admission of chief inspectors), politically biased (plebgate, London riots), self-serving, lazy, inefficient, and generally a waste of money, and I don’t for a minute believe they deserve any credit for falling crime.

I think the crime figures are the sum of many components, none of which show the police in a good light. Let’s unpick that.

Let’s start from the generous standpoint that recorded crime may be falling – generous because even that assumes that they haven’t put too much political spin on the figures. I’d personally expect the police to spin it, but let’s ignore that for now.

Recorded crime isn’t a simple count of crimes committed, nor even those that people tell the police about.

Some crimes don’t even get as far as being reported of course. If confidence in the police is low, as it is, then people may think there is little point in wasting their time (and money, since you usually have to pay for the call now) in doing so. Reporting a crime often means spending ages giving loads of details, knowing absolutely nothing will happen other than, at best, that the crime is recorded. It is common perception based on everyday experience that police will often say there isn’t much they can do about x,y or z, so there is very little incentive to report many crimes. In the case of significant theft or vandalism someone might need a crime number to claim on insurance, but otherwise, if there is no hope that the police will find the criminal and then bother to prosecute them, many people won’t bother. So it is a safe assumption that a lot of crimes don’t even get as far as being mentioned to a police officer. I have seen many that I haven’t bothered to report, for exactly those reasons. So have you.

Once a crime does get mentioned to the police, it still has to jump over some more hurdles to actually make it into the official books. From personal experience, I know some cases fall at those hurdles too. As well as telling the police, the person has to persuade them to do something about it and demand that it is recorded. Since police want to look good, they resist doing that and will make excuses for not recording it officially. The police may also try to persuade the crime reporter to let them mark a case as solved even when it hasn’t been. They may also just sideline a case and hope it is forgotten about. So, some reported offences don’t make it onto the books, and some that do are inaccurately marked as solved.

This means that crime levels exceed recorded crime levels. No big surprise there. But if that has been the case for many years, as it has, and recorded crime levels have fallen, that would still indicate a fall in crime levels. But that still doesn’t make the police look good.

Technology improvement alone would be expected to give a very significant reduction in crime level. Someone is less likely to commit a car theft since it is harder to do so now. They are less likely to murder or rape someone if they know that it is almost impossible to avoid leaving DNA evidence all over the place. They are less likely to shoplift or mug someone if they are aware of zillions of surveillance cameras that will record the act. Improving technology has certainly reduced crime.

A further reduction in crime level is expected due to changes in insurance. If your insurance policies demand that you have a car immobiliser and a burglar alarm, and lock your doors and windows with high quality locks, as they probably do, then that will reduce both home and car crime.

Another reduction is actually due to lack of confidence in the police. If you believe for whatever reasons that the police won’t protect you and your property, you will probably take more care of it yourself. The police try hard to encourage such thinking because it saves them effort. So they tell people not to attract crime by using expensive phones or wearing expensive jewellery or dressing in short skirts. Few people have so little common sense that they need such advice from the police, and lack of confidence in police protection is hardly something they can brag about.

More controversially, still further reduction has been linked recently to the drop in lead exposure via petrol. This is hypothesised to have reduced violent tendencies a little. By similar argument, increasing feminisation of men due to endocrine disrupters in the environment may also have played a part.

So, if the police can’t claim credit for a drop in crime, what effect do they have?

The police have managed to establish a strong reputation for handing out repeat cautions to those repeat criminals they can be bothered to catch, and making excuses why the rest are just too hard to track down, yet cracking down hard on easy-to-spot first offenders for political correctness or minor traffic offences. In short, they have created something of an inverted prison, where generally law-abiding people live expecting harsh penalties for doing anything slightly naughty, so that the police can show high clear-up stats, while hardened criminals can expect to be let off with a slight slap on the wrist. Meanwhile, recent confessions from police chiefs indicate astonishingly high levels of corruption in every force. It looks convincingly as if the police are all too often on the wrong side of the law. One law for them and one for us is the consistent picture. Reality stands in stark contrast with the dedication shown in TV police dramas. A bit like the NHS then.

What of the future? Technology will continue to make it easier to look after your own stuff and prevent it being used by a thief. It will make it easier to spot and identify criminals and collect evidence. Insurance will make it more difficult to avoid using such technology. Lack of confidence in the police will continue to grow, so people will take even more on themselves to avoid crime. The police will become even more worthless, even more of a force of state oppression and political correctness and even more of a criminal’s friend.

Meanwhile, as technology makes physical crime harder, more criminals have moved online. Technology has kept up to some degree, with the online security companies taking the protector role, not the police. The police influence here is to demand ever more surveillance, less privacy and more restriction on online activity, but no actual help at all. Again they seek to create oppression in place of protection.

Crime will continue to fall, but the police will deserve even less credit. If we didn’t already have the police, we might have to invent something, but it would bear little resemblance to what we have now.