My writing on the future of gender and same-sex reproduction now forms a section of my new book You Tomorrow, Second Edition, on the future of humanity, gender, lifestyle and our surroundings. Available from Amazon as paper and ebook.
In 1997 I delivered a presentation to the World Futures Society conference titled: The future of sex, politics and religion. In it, I used a few slides outlining secular substitutes for religion that constitute what I called ’21st century piety’. I’ve repeated my analysis many times since and still hold firmly by it, virtually unchanged since then. A lot of evidence since has backed it up, and lots of other people now agree.
My theory was that as people move away from traditional religion, the powerful inner drive remains to feel ‘holy’, that you are a good person, doing the right thing, on some moral high ground. It is a powerful force built into human nature, similar to the desire to feel social approval and status. When it is no longer satisfied by holding to religious rules, it may crystallise around other behaviours that can mostly be summarised by ‘isms’. Vegetarianism and pacifism were the oldest ones to be conspicuous, often accompanied by New Ageism, followed soon by anti-capitalism, then environmentalism, now evolved into the even more religious warmism. Some behaviours don’t end in ism, but are just as obviously religion substitutes, such as subscribing strongly to political correctness or being an animal rights activist. Even hard-line atheism can be a religion substitute. It pushes exactly the same behavioural buttons.
I fully support protecting the environment, looking after animals, defending the poor, the powerless, the oppressed. I don’t mind vegetarians unless they start getting sanctimonious about it. I am not for a second suggesting there is anything bad about these. It is only when they become a religion substitute that they become problematic, but unfortunately that happens far too often. When something is internalised like a religious faith, it becomes almost immune to outside challenge, a faith unaffected by exposure to hard reality. But like religious faith, it remains a powerful driver of behaviour, and if the person involved is in power, potentially a powerful driver of policy. It can drive similar oppression of those with other world views, in much the same way as the Spanish Inquisition, but with a somewhat updated means of punishing the heretics. In short, the religion substitutes show many of the same problems we used to associate with the extremes of religion.
That’s the problem. Over centuries, the western world has managed pretty well to separate religion from front-line politics: politicians might pay lip service to some god or other to get elected, but would put their religion aside once in office, and the western state has been effectively secular for many years. These religion substitutes are newer, so even though they have gained acceptance in much of the wider population, they are not yet actively filtered from official decision processes, and in many cases have even gained the power levels that religion once held at its peak. They feature much more heavily in government policies, but since they are faith based rather than reality based, the policies built on them are often illogical and can even be counter-productive, achieving the opposite of what they intend. Wishful thinking does not, unfortunately, rank highly among the natural forces understood by physicists, chemists or biologists. It doesn’t even rank highly as a social force. Random policies seemingly pulled out of thin air don’t necessarily work just because they have been sprinkled with words such as equality, fairness and sustainability. Nature also requires that they meet other criteria: they have to follow basic laws of nature, and they have to be compatible with human social, economic, cultural and political forces. But having those sprinkles added is all that is needed to see them pass into legislation.
And that is what makes religion substitutes a threat to western civilisation. Passing nonsensical legislation just because it sounds nice is a fast way to cripple the economy, damage the environment, wreck education or degrade social cohesion, as we have already frequently seen. I don’t need to pick a particular country; this is almost universally true across the Western world. Policy making everywhere often seems to be little more than stringing together a few platitudes about ensuring fairness, equality and sustainability, with no actual depth, substance or systems analysis showing reliable mechanisms by which those things would actually happen, while ignoring unfashionable or unpleasant known forces or facts of nature that might prevent them from happening. Turning a blind eye to reality, laying the wishful thinking on thickly, adding loads of nice-sounding marketing words to make the policy politically accepted, and using the unspoken but obvious threat of the Inquisition to ensure little resistance: that seems to be the norm now.
If it were global then the whole world would decline, but it isn’t. Some areas are even worse crippled by the extremes of religion itself. Others seem more logical. Many areas face joint problems of corruption and poverty. With different problems and different approaches to solving them, we will all fare differently.
But we know from history that empires don’t last for ever. The decline of the West is well under way, with secular religion substitution at the helm. When reality takes a back seat to faith, there can be no other outcome. And it is just faith, in different clothes, and it won’t work any better than religion did.
From Chris Moseley, Director of Infinite Space PR
The Conservative Party’s Cabinet Members and Central Office Mandarins are all Old Etonians, and all the backbenchers are ‘Swivel-eyed loons’. Call me sane and utterly pragmatic, but who really gives a damn?
I first caught wind of David Cameron’s new speechwriter Clare Foges by way of a Tweet from Channel 4’s Economics Editor Faisal Islam several months back. Faisal’s a very bright spark, but he’s not the sort of chap to concern himself with literary criticism, well, not since his GCSEs in 1994. Nevertheless, he Tweeted the following reflections on Cameron’s new speechwriter at Davos earlier this year: “Has the PM got a new speechwriter? Nice timbre and pace. Almost iambic pentameter. Spot of alliteration. Occasional rap style rhyming.”
Mmm, this doesn’t sound quite authentic does it? It’s much more likely to be Craig Oliver – Cameron’s director of communications – cosying up to Faisal, whispering sweet nothings into his ear, and giving the hapless Faisal something to Tweet about, methinks. Anyway, why the fuss about Cameron’s speechwriter? As ever The Daily Mail has the answer. Foges is a ‘raven-haired’ poet and former ice cream seller. Most tellingly, Foges isn’t a public school type. A devout Christian, she went to a comprehensive school, gaining a First in English at Southampton University, followed by a Masters in Poetry at Bristol.
About as subtle as a brick isn’t it? With the government laden with the alumni of Britain’s top public schools, and now licking its wounds post the PlebGate and, currently, the LoonGate scandals, the Coalition has been at pains to push its PC credentials. One can see the PR cogs whirring. The PR profiling of Foges has since been followed up with a serving of Justine Greening, in The Guardian of all places. “International development secretary Justine Greening is a Tory to watch. State-educated and from a family of Rotherham steel workers, her plain-speaking approach is getting her noticed,” gush The Guardian’s Nicholas Watts and Patrick Wintour. Yuck! There’s something unutterably sly about all this media manipulation isn’t there? The slyness doesn’t come from its deftness, but from its slimy ‘we’re jus’ ordinary folks’ intent. Frankly, I don’t give a toss if all of the inhabitants of No.10 were educated at Eton and Harrow – I just want them to be effective, and preferably I’d like to hear news of someone coming into government who actually ran a business prior to entering politics.
Granted, Greening’s a top accountant, but she still doesn’t give the impression that she really knows what it’s like to be in the driving seat of a company, sweating to get that next big order. Nor does she give the impression that she knows anything about technology and innovation.
On the world’s political stage New Zealand’s John Key comes closest to impressing in the business background stakes. Key began a career in the foreign exchange market in New Zealand before moving overseas to work for Merrill Lynch, where he became head of global foreign exchange in 1995, a position he held for six years. In 1999 he was appointed a member of the Foreign Exchange Committee of the Federal Reserve Bank of New York, serving until 2001. The upshot is that the New Zealand economy’s doing pretty well, the NZ dollar is considered a safe haven currency, and the country’s exporting its ass off.
By the by, Key came from a working class background – but who really cares about that?
My guess is that the future for the UK political arena lies in the eventual appearance of a new gang of uber comprehensive school types who will eventually descend on No.10 before the next election. They’ll have something of the Hague about them – school debating squad/Young Conservatives clones.
A squadron of bright young things from the Shires, with their eyes fixed firmly on the economy, and not wind turbines or, Heaven forfend, gay marriage, might just sort out the present concerns expressed by the Swivel Eyes in the Shires.
And then perhaps we can all stop playing silly games and focus on the business of, well, business and getting the country back on its feet.
The race is on to build conscious and smart computers and brain replicas. This article explains some of Markram’s approach. http://www.wired.com/wiredscience/2013/05/neurologist-markam-human-brain/all/
It is a nice project, and its aims are to make a working replica of the brain by reverse engineering it. That would work eventually, but it is slow and expensive and it is debatable how valuable it is as a goal.
Imagine if you want to make an aeroplane from scratch. You could study birds and make extremely detailed reverse engineered mathematical models of the structures of individual feathers, and try to model all the stresses and airflows as the wing beats. Eventually you could make a good model of a wing, and by also looking at the electrics, feedbacks, nerves and muscles, you could eventually make some sort of control system that would essentially replicate a bird wing. Then you could scale it all up, look for other materials, experiment a bit and eventually you might make a big bird replica. Alternatively, you could look briefly at a bird and note the basic aerodynamics of a wing, note the use of lightweight and strong materials, then let it go. You don’t need any more from nature than that. The rest can be done by looking at ways of propelling the surface to create sufficient airflow and lift using the aerofoil, and ways to achieve the strength needed. The bird provides some basic insight, but it simply isn’t necessary to copy all a bird’s proprietary technology to fly.
Back to Markram. If the real goal is to reverse engineer the actual human brain and make a detailed replica or model of it, then fair enough. I wish him and his team, and their distributed helpers and affiliates every success with that. If the project goes well, and we can find insights to help with the hundreds of brain disorders and improve medicine, great. A few billion euros will have been well spent, especially given the waste of more billions of euros elsewhere on futile and counter-productive projects. Lots of people criticise his goal, and some of their arguments are nonsensical. It is a good project and for what it’s worth, I support it.
My only real objection is that a simulation of the brain will not think well and at best will be an extremely inefficient thinking machine. So if a goal is to achieve thought or intelligence, the project as described is barking up the wrong tree. If that isn’t a goal, so what? It still has the other uses.
A simulation can do many things. It can be used to follow through the consequences of an input if the system is sufficiently well modelled. A sufficiently detailed and accurate brain simulation could predict the impacts of a drug or behaviours resulting from certain mental processes. It could follow through the impacts and chain of events resulting from an electrical impulse, thus finding out what the eventual result of that will be. It can therefore very inefficiently predict the result of thinking, but by using extremely high speed computation, it could in principle work out the end result of some thoughts. But it needs enormous detail and algorithmic precision to do that. I doubt it is achievable simply because of the volume of calculation needed. Thinking properly requires consciousness and therefore emulation. A conscious circuit has to be built, not just modelled.
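To give a feel for the volume of calculation at stake, here is a rough back-of-envelope sketch in Python. The neuron and synapse counts are commonly cited ballpark figures, and the update rate and cost per synaptic event are assumptions chosen for illustration, not measurements:

```python
# Back-of-envelope estimate of the compute needed for a detailed,
# synapse-level brain simulation. All figures are rough assumptions.
NEURONS = 86e9            # neurons in a human brain (commonly cited approx.)
SYNAPSES_PER_NEURON = 1e4 # average synapses per neuron (approx.)
UPDATE_RATE_HZ = 1e3      # assumed update rate per synapse
FLOPS_PER_UPDATE = 10     # assumed cost of one synaptic update

flops_needed = NEURONS * SYNAPSES_PER_NEURON * UPDATE_RATE_HZ * FLOPS_PER_UPDATE
exaflop = 1e18
print(f"~{flops_needed / exaflop:.0f} exaFLOP/s sustained")
```

Even with these generous simplifications the sustained rate lands in exaFLOP/s territory, which illustrates why a brute-force simulation would be an extremely inefficient way to produce thought.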
Consciousness is not the same as thinking. A simulation of the brain would not be conscious, even if it can work out the result of thoughts. It is the difference between printed music and played music. One is data, one is an experience. A simulation of all the processes going on inside a head will not generate any consciousness, only data. It could think, but not feel or experience.
Having made that important distinction, I still think that Markram’s approach will prove useful. It will generate many useful insights into the workings of the brain, and many of the processes nature uses to solve certain engineering problems. These insights and techniques can be used as input into other projects. Biomimetics is already proven as a useful tool in solving big problems. Looking at how the brain works will give us hints how to make a truly conscious, properly thinking machine. But just as with birds and airbuses, we can take ideas and inspiration from nature and then do it far better. No bird can carry the weight or fly as high or as fast as an aeroplane. No proper plane uses feathers or flaps its wings.
I wrote recently about how to make a conscious computer:
https://timeguide.wordpress.com/2013/02/15/how-to-make-a-conscious-computer/ and https://timeguide.wordpress.com/2013/02/18/how-smart-could-an-ai-become/
I still think that approach will work well, and it could be a decade faster than going Markram’s route. All the core technology needed to start making a conscious computer already exists today. With funding and some smart minds to set the process in motion, it could be done in a couple of years. The potential conscious and ultra-smart computer, properly harnessed, could do its research far faster than any human on Markram’s team. It could easily beat them to the goal of a replica brain. The converse is not true; Markram’s current approach would yield a conscious computer very slowly.
So while I fully applaud the effort and endorse the goals, changing the approach now could give far more bang for the buck, far faster.
I noted earlier that you can make a pretty dangerous Gauss rifle using a few easily available and legal components, and you could make a 3D-printed jig to arrange them for maximum effect. So I suggested that maybe magnets should be banned too.
(Incidentally, the toy ones you see on YouTube etc. typically just use a few magnets and some regular steel balls. Using large Nd magnets throughout with the positions and polarities optimally set would make it much more powerful).
Now I learn that a California state senator (Leland Yee of San Francisco), HT Dave Evans for the link http://t.co/REt2o9nF4t, wants 3D printers to be regulated somehow, in case they are used to make guns. That won’t reduce violence if you can easily acquire or make lethal weapons that are perfectly legal without one. On the ground, even highly lethal kitchen knives and many sharp tools aren’t licensed. Even narrowing it down to planes, there is quite a long list of potentially dangerous things you are still very welcome to take on board and are totally legal, some of which would be very hard to ban, so perhaps we should concentrate more on defence and catching those who wish us harm.
Here are some perfectly legal weapons that people carry frequently with many perfectly benign uses:
Your fingers. Fingernails particularly can inflict pain and give a deep scratch, but some people can blind or even kill others with their bare hands.
Sharp pencils, or pencils and a sharpener; pens are harder still and can be pretty sharp too.
Hard plastic drink stirrers, 15cm long, that can be sharpened using a pencil sharpener; they often give you these on the flight so you don’t even have to bring them; hard plastics can be almost as dangerous as metals, so it is hard to see why nail files are banned and drinks stirrers and plastic knives aren’t.
CDs or DVDs, which can be easily broken to make sharp blades; I met a Swedish ex-captain once who said he always took one on board in his jacket pocket, just in case he needed to tackle a terrorist.
Your glasses. You can even take extra pairs if the ones you’re wearing are needed for you to see properly. Nobody checks the lenses to make sure the glass isn’t etched for custom breaking patterns, or whether the lenses can be popped out, with razor-sharp edges. They also don’t check that the ends of the arms don’t slide off. I’m sure Q could do a lot with a pair of glasses.
Rubber bands, which can be used to make catapults or power other projectile weapons, and many can be combined to scale up the force.
Paperclips, some of which are pretty large and thick wire.
Nylon cord, which can be used dangerously in many ways. Nylon paracord can support half a ton but be woven into nice little bracelets, or shoelaces for that matter. Thin nylon cord is an excellent cutting tool.
Plastic zip ties (cable ties), the longer ones especially can be used lethally.
Plastic bags too can be used lethally.
All of these are perfectly legal but can be dangerous in the wrong hands. I am sure you can think of many others.
Amusingly, given the Senator’s proposed legislation, you could currently probably take on board a compact 3D printer to print any sharps you want, or a Liberator if you have one of the templates, and I rather expect many terrorist groups have a copy – and sometimes business class seats helpfully have an electrical power supply. I expect you might draw attention if you used one though.
There are lots of ways of storing energy to be released suddenly, a key requirement in many weapons. Springs are pretty good at that job. Many devices we use everyday like staple guns rely on springs that are compressed and then suddenly release all their force and energy when the mechanism passes a trigger point. Springs are allowed on board. It is very easy to design weapons based on accumulating potential energy across many springs that can then all simultaneously release them. If I can dream some up easily, so can a criminal. It’s also easy to invent mechanisms for self assembly of projectiles during flight, so parts of a projectile can be separately accelerated.
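As a simple illustration of how ordinary springs bank energy for simultaneous release, here is a minimal sketch using Hooke's law, where each spring stores E = ½kx². The spring constant, compression and count are arbitrary illustrative assumptions, not figures for any real device:

```python
# Energy stored in n identical compressed springs, each holding
# E = 1/2 * k * x^2. All numbers are illustrative assumptions.
k = 2000.0   # spring constant, N/m (assumed)
x = 0.05     # compression, m (assumed)
n = 20       # number of springs released simultaneously

energy_per_spring = 0.5 * k * x**2   # joules per spring
total_energy = n * energy_per_spring
print(round(total_energy, 6))        # total joules released at once
```

The point is only that many small, individually innocuous energy stores can be ganged together and triggered at the same moment, which is exactly what makes this hard to police.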
Banned devices that you could smuggle through detectors are also numerous. High pressure gas reservoirs could easily be made using plastics or resins and could be used for a wide variety of pneumatic projectile weapons and contact or impact based stun weapons. Again, precision release mechanisms could be designed for 3D printing at home, but a 3D printer isn’t essential; there are lots of ways of solving the engineering problems.
I don’t see how regulating printers would make us safer. After hundreds of thousands of years, we ought to know by now that if someone is intent on harming someone else, there is a huge variety of ways of doing so, using objects or tools that are essential in everyday life and some that don’t need any tools at all, just trained hands.
Technology comes and goes, but nutters, criminals, terrorists and fanatics are here to stay. Only the innocent suffer the inconvenience of following the rules. It’s surely better to make less vulnerable systems.
It’s interesting watching new technologies emerge. Someone has a bright idea, it gets hyped a bit, then someone counter-hypes a nightmare scenario and everyone panics. Then experts queue up to say why it can’t be done, then someone does it, then more panic, then knee-jerk legislation, then eventually the technology becomes part of everyday life.
I was once dismissed by our best radio experts when I suggested making cellphone masts like the ones you see on every high building today. I recall being taught that you couldn’t possibly ever get more than 19.2kbits/s down a phone line. I got heavily marked down in an appraisal for my obvious stupidity suggesting that mobile phones could include video cameras. I am well used to being told something is impossible, but if I can see how to make it work, I don’t care, I believe it anyway. My personal mantra is ‘just occasionally, everyone else IS wrong’. I am an engineer. Some engineers might not know how to do something, but others sometimes can.
When the printable gun was suggested (not by me this time!) I accepted it as an inevitable part of the future immediately. I then listened as experts argued that it could never survive the forces. But guess what? A gun doesn’t have to survive. It just needs to work once, then you use a fresh one. The first prototypes only worked for a few bullets before breaking. The Liberator was made to work just once. Missiles are like that. They fire once, only once. So you bring a few to the battle.
The recently uploaded blueprint for the Liberator printable gun has been taken offline after 100,000 copies were downloaded, so it will be about as hard to find as embarrassing pictures of any celebrity. There will be innovations, refinements, improvements, then we will see them in use by hobbyists and criminals alike.
But there are loads of ways to skin a cat, allegedly. A gun’s job is to quickly accelerate a small mass up to a high speed in a short distance. Using explosives in a bullet held in a printable lump of plastic clearly does the job on a one-shot basis, but you still need a bullet and they don’t sell them in Tesco’s. So why do it that way?
A Gauss Rifle is a science toy that can fire a ball-bearing across your living room. You can make one in 5 minutes using nothing more than sticky tape, a ruler and some neodymium magnets. Here’s a nice example of the toy version using simple steel balls:
The concept is very well known, though a bit harder to Google now because so many computer games have used the same name for imaginary weapons. In an easily adapted version, the steel balls are replaced by neodymium magnets held in place in alternately attracting and repelling polarities. When the first magnet is released, it is pulled by strong magnetic force to the second one, hitting it quite fast and conveying all that energy to the next stage magnet, which is then pushed away from the one repelling it towards the one attracting it, accumulating energy as it goes. The energy accumulates over several stages, optimally harnessing the full repulsive and attractive forces available from the strong magnets. Too many stages result in the magnets shattering, but with care, four stages with simple steel balls can be used reasonably safely as a toy.
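The stage-by-stage accumulation can be sketched with a toy model: assume each stage contributes a fixed amount of magnetic potential energy and each collision passes on only a fraction of the incoming kinetic energy. Both numbers are invented purely for illustration, not measured from any real device:

```python
# Toy model of energy accumulation in a multi-stage Gauss rifle.
# Each stage adds an assumed fixed energy increment; each collision
# transfers an assumed fraction of the incoming energy onward.
STAGE_ENERGY_J = 0.05   # energy gained per stage, joules (assumed)
TRANSFER_EFF = 0.8      # fraction of energy surviving each collision (assumed)

def final_energy(stages: int) -> float:
    """Kinetic energy of the projectile after the given number of stages."""
    e = 0.0
    for _ in range(stages):
        e = e * TRANSFER_EFF + STAGE_ENERGY_J
    return e

for n in (1, 2, 4, 8):
    print(n, round(final_energy(n), 4))
```

With lossy collisions the energy climbs quickly over the first few stages and then approaches a ceiling, which is consistent with the final ball carrying several times the energy of the first while extra stages bring diminishing returns.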
Some sites explain that if you position the magnets accurately with the poles oriented right, you can get it to make a small hole in a wall. I imagine you could design and print a gauss rifle jig with very high precision, far better than you could do with tape and your fingers, that would hold the magnets in the right locations and polarity orientations. Then just put your magnets in and it is ready. Neodymium magnets are easily available in various sizes at low cost and the energy of the final ball is several times as high as the first one. With the larger magnets, the magnetic forces are extremely high so the energy accumulated would also be high. A sharp plastic dart housing the last ball would make quite a dangerous device. A Gauss rifle might lack the force of a conventional gun, but it could still be quite powerful. If I was in charge of airport security, I’d already be banning magnets from flights.
I really don’t see how you could stop someone making this sort of thing, or plastic crossbows or fancy plastic jigs with stored energy in springs that can be primed in an aircraft toilet that fire things in imaginative ways. There are zillions of ways to accelerate something, some of which can be done in cascades that only generate tolerable forces at any particular point so could easily work with printable materials. The current focus on firearms misses the point. You don’t have to transfer all the energy to a projectile in one short high pressure burst, you can accumulate it in stages. Focusing security controls on explosives-based systems will leave us vulnerable.
3D printable weapons are here to stay, but for criminals and terrorists, bullets with explosives in might soon be obsolete.
There is rising concern about machines such as drones and battlefield robots that could soon be given the decision on whether to kill someone. Since I wrote this and first posted it a couple of weeks ago, the UN has put out their thoughts as the DM writes today:
At the moment, drones and robots are essentially just remote controlled devices and a human makes the important decisions. In the sense that a human uses them to dispense death from a distance, they aren’t all that different from a spear or a rifle apart from scale of destruction and the distance from which death can be dealt. Without consciousness, a missile is no different from a spear or bullet, nor is a remote controlled machine that it is launched from. It is the act of hitting the fire button that is most significant, but proximity is important too. If an operator is thousands of miles away and isn’t physically threatened, or perhaps has never even met people from the target population, other ethical issues start emerging. But those are ethical issues for the people, not the machine.
Adding artificial intelligence to let a machine decide whether a human is to be killed or not isn’t difficult per se. If you don’t care about killing innocent people, it is pretty easy. It is only made difficult because civilised countries value human lives, and because they distinguish between combatants and civilians.
Personally, I don’t fully understand the distinction between combatants and civilians. In wars, often combatants have no real choice but to fight or are conscripted, and they are usually told what to do, often by civilian politicians hiding in far away bunkers, with strong penalties for disobeying. If a country goes to war, on the basis of a democratic mandate, then surely everyone in the electorate is guilty, even pacifists, who accept the benefits of living in the host country but would prefer to avoid the costs. Children are the only innocents.
In my analysis, soldiers in a democratic country are public sector employees like any other, just doing a job on behalf of the electorate. But that depends to some degree on them keeping their personal integrity and human judgement. Many military personnel who take pride in following orders could be thought of as being dehumanised and reduced to killing machines; many would actually be proud to be thought of that way. A soldier like that, who merely follows orders, deliberately abdicates human responsibility. Having access to the capability for good judgement, but refusing to use it, they reduce themselves to a lower moral level than a drone. At least a drone doesn’t know what it is doing.
On the other hand, disobeying a direct order may soothe issues of conscience but invoke huge personal costs, anything from shaming and peer disapproval to execution. Balancing that is a personal matter, but it is the act of balancing it that is important, not necessarily the outcome. Giving some thought to the matter and wrestling at least a bit with conscience before doing it makes all the difference. That is something a drone can’t yet do.
So even at the start, the difference between a drone and at least some soldiers is not always as big as we might want it to be, for other soldiers it is huge. A killing machine is competing against a grey scale of judgement and morality, not a black and white equation. In those circumstances, in a military that highly values following orders, human judgement is already no longer an essential requirement at the front line. In that case, the leaders might set the drones into combat with a defined objective, the human decision already taken by them, the local judgement of who or what to kill assigned to adaptive AI, algorithms and sensor readings. For a military such as that, drones are no different to soldiers who do what they’re told.
However, if the distinction between combatant and civilian is required, then someone has to decide the relative value of different classes of lives. Then they either have to teach it to the machines so they can make the decision locally, or the costs of potential collateral damage from just killing anyone can be put into the equations at head office. Or thirdly, and most likely in practice, a compromise can be found where some judgement is made in advance and some locally. Finally, it is even possible for killing machines to make decisions on some easier cases and refer difficult ones to remote operators.
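The compromise described above, local decisions for easy cases and referral of hard ones to a human, amounts in its simplest form to a thresholding policy. Here is a minimal sketch; the function names and threshold values are entirely hypothetical illustrations, not any real system's logic:

```python
# Hypothetical sketch of a hybrid targeting policy: act locally only
# on high-confidence identifications, refer ambiguous cases to a
# remote human operator, and stand off otherwise. Thresholds are
# invented for illustration.
ENGAGE_THRESHOLD = 0.95   # assumed confidence required to decide locally
REFER_THRESHOLD = 0.60    # assumed floor below which the machine aborts

def decide(target_confidence: float) -> str:
    """Return the action for a given identification confidence in [0, 1]."""
    if target_confidence >= ENGAGE_THRESHOLD:
        return "decide locally"
    if target_confidence >= REFER_THRESHOLD:
        return "refer to remote operator"
    return "abort"
```

Of course, the hard ethical questions are hidden inside the thresholds and the confidence score itself; a sketch like this only shows where the machine's judgement stops and the human's is supposed to begin.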
We live in an electronic age, with face recognition, friend or foe electronic ID, web searches, social networks, location and diaries, mobile phone signals and lots of other clues that might give some knowledge of a target and potential casualties. How important is it to kill or protect this particular individual or group, or take that particular objective? How many innocent lives are acceptable cost, and from which groups – how many babies, kids, adults, old people? Should physical attractiveness or the victim’s professions be considered? What about race or religion, or nationality, or sexuality, or anything else that could possibly be found out about the target before killing them? How much should people’s personal value be considered, or should everyone be treated equally at point of potential death? These are tough questions, but the means of getting hold of the data are improving fast and we will be forced to answer them. By the time truly intelligent drones are capable of making human-like decisions, they may well know who they are killing.
In some ways this far future with a smart or even conscious drone or robot making informed decisions before killing people isn’t as scary as the time between now and then. Terminator and Robocop may be nightmare scenarios, but at least in those there is clarity about who the enemy is. Machines don’t yet have anywhere near that capability. However, if an objective is considered valuable, military leaders could already set a machine to kill people even when there is little certainty about the role or identity of the victims. They may put in some algorithms and crude AI to improve performance or reduce errors, but the algorithmic uncertainty and the callous, uncaring dispatch of potentially innocent people are very worrying.
Increasing desperation could be expected to lower barriers to use. So could a lower regard for the value of human life, and often in tribal conflicts people don’t consider the lives of the opposition to have a very high value. This is especially true in terrorism, where the objective is often to kill innocent people. It might not matter that the drone doesn’t know who it is killing, as long as it might be killing the right target as part of the mix. I think it is reasonable to expect a lot of battlefield use and certainly terrorist use of semi-smart robots and drones that kill relatively indiscriminately. Even when truly smart machines arrive, they might be set to malicious goals.
Then there is the possibility of rogue drones and robots. The Terminator/Robocop scenario. If machines are allowed to make their own decisions and then to kill, can we be certain that the safeguards are in place that they can always be safely deactivated? Could they be hacked? Hijacked? Sabotaged by having their fail-safes and shut-offs deactivated? Have their ‘minds’ corrupted? As an engineer, I’d say these are realistic concerns.
All in all, it is a good thing that concern is rising and we are seeing more debate. It is late, but not too late, to make good progress to limit and control the future damage killing machines might do. Not just directly in loss of innocent life, but to our fundamental humanity as armies get increasingly used to delegating responsibility to machines to deal with a remote dehumanised threat. Drones and robots are not the end of warfare technology, there are far scarier things coming later. It is time to get a grip before it is too late.
When people fought with sticks and stones, at least they were personally involved. We must never allow personal involvement to disappear from the act of killing someone.