Diesel – 4.4 times more deaths than road accidents

In December 2010, the UK government released a report estimating that air pollution causes a ‘mortality burden’ of 340,000 years of life lost, spread over an affected population of 200,000 – equivalent to about 29,000 deaths each year in the UK, or a six-month drop in average life expectancy across the whole population. It also costs the NHS £27B per year. See:

http://webarchive.nationalarchives.gov.uk/20140505104658/http://www.comeap.org.uk/images/stories/Documents/Reports/COMEAP_Mortality_Effects_Press_Release.pdf

There is no more recent report as yet, even though its figures refer to 2008.

Particulate matter (PM) is the worst offender, and diesel engines are one of the main sources of PM, though they also emit some of the other pollutants. COMEAP estimates that a quarter of PM-related deaths are caused by diesel engines – 7250 lives per year. Some of the PM comes from private vehicles: to avoid regeneration costs, some diesel drivers apparently remove the particulate filters from their cars, which is illegal and means failing the MOT. See:

https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/263018/diesel-particulate-filters-guidance.pdf

The government encouraged people to go diesel by offering significant tax advantages. Road tax and company car tax are lower for diesels, with the result that more than half of new cars are now diesels (https://www.gov.uk/government/publications/vehicle-licensing-statistics-2013). Almost all buses and taxis, and still many trains, are diesel.

7250 lives per year lost to diesel vehicles is a lot, and remember that was an estimate based on 2008 particulates. There are many more diesels on our roads now than then (https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/301636/veh0203.xls shows the number of diesel cars licensed increased from 7163 thousand to 10,064 thousand), but fuel efficiency has also improved over that period, so total fuel use hasn’t increased much: from 8788 to 9197 thousand tonnes of diesel. The result isn’t as bad as it could have been, and the proportionately scaled figure for 2012 would be 7587 deaths from diesel emissions. In 2013 there were only 1730 road deaths, so 4.4 times as many people were killed by diesel emissions as by road accidents.

I thought it would be interesting to compare deaths from just buses to those in road accidents, since buses are thought of by many as some sort of panacea whereas some of us see them as filthy environmental monsters. The proportion of diesel used by buses has fallen from 17% to 13.7% between 2008 and 2012. (I couldn’t find figures for the numbers of taxis, also officially included in public transport, since the fuel usage stats lump all cars together, but then I’ve never understood why taxis should be listed as public transport anyway.)

17% of the 7250 figure for 2008 gives 1232 deaths from public transport diesel emissions, compared to 2538 road deaths that year – roughly half as many. For 2012, 13.7% of 7587 is 1039 deaths from public transport diesel emissions, compared to 1754 people killed in road accidents. That ratio has grown from 48.5% to 59% in just four years. Buses may use less fuel than cars, but they certainly aren’t saints.
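For anyone who wants to check the sums, here they are in a few lines of Python (all figures as quoted above):

    # Scale the 2008 diesel-mortality estimate by total diesel use, as above.
    deaths_2008 = 7250                   # a quarter of COMEAP's 29,000 PM deaths
    fuel_2008, fuel_2012 = 8788, 9197    # thousand tonnes of diesel burned
    deaths_2012 = deaths_2008 * fuel_2012 / fuel_2008
    print(round(deaths_2012))            # -> 7587
    print(round(deaths_2012 / 1730, 1))  # vs 2013 road deaths -> 4.4

    # Bus share of diesel deaths vs road deaths, 2008 and 2012.
    print(0.170 * deaths_2008 / 2538)    # -> ~0.49 of 2008 road deaths
    print(0.137 * deaths_2012 / 1754)    # -> ~0.59 of 2012 road deaths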

So, headline result: 60% as many people are killed by diesel emissions from buses as in road accidents, and altogether 4.4 times as many people die due to diesel. The government is very noisy when it comes to reducing road deaths, but it should look at the far bigger gains to be made by reducing diesel use. Perhaps it is time that deaths arising from diesel emissions were added to the road deaths figures. At least then there might be some better action against them.

As I wrote in a recent blog (http://timeguide.wordpress.com/2014/07/18/road-deaths-v-hospital-hygiene/), more lives still could be saved by just slightly improving the NHS. The £27B per year of health costs saved by getting rid of diesel might go some way to doing both.

As a final observation, diesel was encouraged so strongly because it was supposed to reduce CO2 emissions, seen as a major contributor to global warming. In the last year or two, climate sensitivity to CO2 emissions has been estimated to be lower than originally thought. Meanwhile, another major contributor to warming is black carbon PM – noted especially for its role in melting glaciers by making them darker – which also arises in large part from diesel. The efforts to reduce one contributor have increased another. Diesel doesn’t even solve the problem it was aimed at, yet still causes others.

Ultra-simple computing: Part 4

Gel processing

One problem with making computers with a lot of cores is the wiring. Another is the distribution of tasks among the cores. Both can be solved with relatively simple architecture. Processing chips usually have a lot of connectors, letting them get data in parallel, but a beam of light can contain rays of millions of wavelengths – far more parallelism than is possible with wiring. If chips communicated using light with high-density wavelength division multiplexing, that would solve some of the wiring issues. Taking another simple step, processors freed from wiring don’t have to sit on a circuit board; they could be suspended in some sort of gel, using free-space interconnection to link to many nearby chips. Line-of-sight availability would be much easier than on a circuit board. The gel could also cool the chips.

Simpler chips with very few wired connections also need less internal wiring. That reduces size still further and permits a higher density of suspension without compromising line of sight.

Ripple scheduler

Process scheduling can also be done more simply with many processors; complex software algorithms are not needed. In an array of many processors, some would be idle while others are already engaged on tasks. When a job needs processing, a task request (this could be as simple as a short pulse at a certain frequency) would be broadcast and would propagate through the array. Any idle processor encountering it would respond with an accept (again, this could be a single pulse at another frequency), which would also propagate out as a wave through the array. These two waves might arrive at a given processor in quick succession.

Other processors could stand down automatically once one has accepted the job (i.e. when they detect the acceptance wave). That would be appropriate when all processors are equally able. Alternatively, if processors have different capabilities, the requesting agent would pick a suitable one from the returning acceptances, send a point-to-point message to it, and send out a cancel broadcast wave to stand the others down. It would then exchange details about the task with this processor on a point-to-point link, avoiding swamping the system with unnecessary broadcast messages. An idle processor in the array would thus see a request wave, followed by a number of accept waves. It might then receive a personalized point-to-point message with task information, or, if it hasn’t been chosen, it would just see the cancel wave. Busy processors would ignore all communications except those directed specifically at them.
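As a toy illustration only – a software sketch, not a hardware design, with all names and timings invented – the whole protocol can be mimicked in a few lines of Python: model each wave as an arrival time proportional to distance, and let the earliest accept win.

    import random

    # Toy 1-D array: a wave launched at `source` reaches cell i after |i - source| ticks.
    N, source = 20, 0
    idle = [random.random() < 0.5 for _ in range(N)]   # which processors are free

    # Each idle processor replies the instant the request wave reaches it, so its
    # accept arrives back at the source after a round trip of 2*|i - source| ticks.
    accepts = sorted((2 * abs(i - source), i) for i in range(N) if idle[i])

    if accepts:
        t, chosen = accepts[0]            # earliest accept wins the task
        print(f"processor {chosen} accepted at tick {t}")
        idle[chosen] = False
        # The source now exchanges task details point-to-point with `chosen` and
        # broadcasts a cancel wave; the other responders simply stand down when
        # the cancel (or the winner's accept) washes over them.
    else:
        print("no idle processor - rebroadcast the request later")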

I’m not saying ripple scheduling is necessarily the best approach; it is just an example of a very simple system for process scheduling that doesn’t need sophisticated algorithms and code.

Activator Pastes

It is obvious that this kind of simple protocol could be used with a gel processing medium populated with a suitable mixture of different kinds of processors, sensors, storage, transmission and power devices to provide a fully scalable, self-organizing array that can handle a high task load with very little administrative overhead. To make your smart gel, you might just choose the volume or weight ratios of the components you want and stir them into a gel, rather like mixing a cocktail. A paste made up in this way could add sensing, processing and storage to any surface just by painting some of it on.

For example, a highly sophisticated distributed cloud sensor network could be made just by painting dabs of paste onto lamp posts. Solar power or energy-harvesting devices in the paste would power the sensors to take occasional readings, pre-process them, and send them off to the net. This approach would work well for environmental or structural monitoring and surveillance, and even for everyday functions like adding parking meters to the lines marking spaces on the road, where they would interact with ID devices in the car or an app on the driver’s smartphone.

Special inks could contain a suspension of such particles, adding a highly secure electronic signature to one made by pen and ink.

The tacky putty we use to stick paper to walls could use activator paste as its electronic storage and processing medium, letting you manage the content of an e-paper calendar or notice on a wall.

I can think of lots of ways of using smart pastes in health monitoring, packaging, smart makeup and so on. The basic principle stays the same though: very cheap, yet very powerful, with many potential uses. It would be self-organising, needing no set-up beyond giving it a job to do, which could come from any of your devices. You’d probably buy it by the litre, keep some in the jar as your computer, and paste the rest all over the place to make your skin, your clothes, your work-spaces and your world smart. Works for me.

Ultra-simple computing part 3

Just in time v Just in case

Although the problem isn’t as bad now as it once was, a lot of software runs on your computer just in case it might be needed. Often it isn’t, and sometimes the PC is shut down or rebooted without it ever having been used. This wastes our time, wastes a little energy, and potentially adds functionality or weaknesses that hackers can exploit.

If only the essential pieces of software were loaded, risks would be minimised and initial delays reduced. There would be a slightly bigger delay once a piece of code is actually needed, because it would have to load then, but since a lot of code is rarely used, the overall result would still be a big win. This would improve security and reliability. If all I am doing today is typing and checking occasional emails, a lot of the software currently loaded in my PC’s memory is not needed. I don’t even need a firewall running all the time if network access is disabled between my email checks. If networking and the firewall are started when I want to check email or start browsing, and all network access is disabled afterwards, then security would be a bit better. I also don’t need all the fancy facilities in Office when all I am doing is typing. I definitely don’t want any part of Office to use any kind of networking in either direction for any reason (I use Thunderbird, not Outlook, for email). So don’t load the code yet; I don’t want it running; it only adds risks, not benefits. If I want to do something fancy in a few weeks’ time, load the code then. If I want to look up a word in a dictionary or check a hyperlink, I can launch a browser and copy and paste it. Why do anything until asked? Forget doing stuff just in case it might occasionally generate a tiny time saving. Just in time is far safer and better than just in case.

So, an ultra-simple computer should only load what is needed, when it is needed. It would only open communications when needed, and then only to the specific destination required. That frees up processors and memory, reduces risks and improves speed.
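Today’s software can already approximate this. As a minimal sketch of the just-in-time principle in Python (the module name and helper are only examples, not a real product’s design):

    import importlib

    _cache = {}

    def lazy(name):
        """Import a module on first use rather than at startup."""
        if name not in _cache:
            _cache[name] = importlib.import_module(name)
        return _cache[name]

    def check_email():
        # Networking code is only brought into memory when mail is actually
        # checked; until then it adds neither startup delay nor attack surface.
        smtplib = lazy("smtplib")
        # ... connect, fetch, then drop network access again ...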

Software distribution

Storing software on hard disks or in memory lets the files be changed, possibly by a virus. Suppose instead that software were distributed on ROM chips. They can be very cheap, so why not? No apps, no downloads. All the software on your machine would be in read-only memory, essentially part of the hardware. This would change a few things in computer design. First, you’d have a board with lots of slots in it, into which you plug the memory chips you’ve bought with the programs you want on them (I’ll get to tablets and phones later; obviously a slightly different approach is needed for portable devices). Manufacturers would have a huge interest in checking their code first, because they couldn’t put fixes out later except on replacement chips. Updating the software to a new version would simply mean inserting a new chip. Secondly, since the chips are read-only, the software on them cannot be corrupted: there is no mechanism by which a virus or other malware could get onto the chip.

Apps could be distributed in collections – lifestyle or business collections. You could buy subscriptions to app agencies that issued regular chips with their baskets of apps on them. Or you could access apps online via the cloud. Your machine would stay clean.

It could go further. As well as memory chips, modules could include processing, controller or sensory capabilities. Main processing may still be in the main part of the computer but specialist capabilities could be added in this way.

So, what about tablets and phones? Obviously you can’t plug lots of extra chips into those; it would be too cumbersome to build them with lots of slots. One approach would be to use your PC or laptop to store, and keep up to date, a single storage chip that goes into your tablet or phone. It could use a re-programmable ROM that can’t be tampered with by the tablet itself. All your apps would live on it, but it would be made clean and fresh every day. Tablets could have a simple slot to take that single chip, just as a few already do for extra memory.

Multi-layered security

If your computer is based on algorithms encoded on read-only memory chips or, better still, directly as hardware circuits, then it could boot from cold very fast and would be clean of any malware. To be useful it would need a decent amount of working memory too, and of course that could provide a short-term residence for malware, but a restart would clean it all away. That gives you a computer that can easily be reset to a clean state and work properly again right away.

Another layer of defense is to disallow programs access to things they don’t need. You don’t open every door and window in your home every time you want to go in or out. Why open every possible entrance that your office automation package might ever want to use just because you want to type an article? Why open the ability to remotely install or run programs on your computer without your knowledge and consent just because you want to read a news article or look at a cute kitten video? Yet we have accepted such appallingly bad practice from the web browser developers because we have had no choice. It seems that the developers’ desire to provide open windows to anyone who wants to use them outweighs the users’ desire for basic security common sense. So the next layer of defense is really pretty obvious: we want a browser that doesn’t open doors and windows until we explicitly tell it to, and even then checks everything that tries to get through.

It may still be that you occasionally want to run software from a website, maybe to play a game. Another layer of defense that could help is to restrict remote executables to a limited range of commands with limited scope. It is also easy to arrange a sandbox where code can run but can’t influence anything outside it. For example, there is no reason a game would need to inspect files on your computer apart from stored games or game-related files. Creating a sandbox that can run a large range of agreed functions to enable games or other remote applications, but is sealed from everything else on the computer, would enable benign remote executables without compromising security. Even if they proved less safe, confining activity to the sandbox would let the machine be sterilized by sweeping that one area, without needing a full reset. Even without a sandbox, knowing the full capability of the range of permitted commands enables damage limitation and precision cleaning. That range of commands should be designed with the end user as the priority, letting them do what they want with the lowest danger. It should not be designed with application writers as the top priority, because that is where the security risk arises. Not all potential application writers are benign, and many want to exploit or harm the end user for their own purposes. Everyone in IT ought to know that and never forget it; it really shouldn’t need to be said.
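To make the ‘limited range of commands’ idea concrete, here is a crude Python sketch of such a dispatcher (the command names are invented for the example); remote code can only ever invoke what is on the agreed list:

    # A remote executable can only call operations on this agreed whitelist.
    ALLOWED = {
        "load_saved_game": lambda slot: f"loading save slot {slot}",
        "store_score":     lambda score: f"storing score {score}",
        "draw_sprite":     lambda x, y: f"drawing sprite at ({x}, {y})",
    }

    def run_remote(command, *args):
        """Execute a remote request only if it is in the permitted command set."""
        if command not in ALLOWED:
            raise PermissionError(f"'{command}' is outside the sandbox")
        return ALLOWED[command](*args)

    print(run_remote("store_score", 9001))        # allowed
    try:
        run_remote("read_file", "/etc/passwd")    # not on the list
    except PermissionError as e:
        print(e)                                  # 'read_file' is outside the sandbox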

Ultra-simple computing: Part 2

Chip technology

My everyday PC uses an Intel Core i7-3770 processor running at 3.4GHz. It has 4 cores running 8 threads on 1.4 billion 22nm transistors in just 160mm^2 of chip, an NVIDIA GeForce GTX660 graphics card, and 16GB of main memory. It is OK most of the time, but although processor and memory utilisation rarely get above 30%, its response is often far from instant.

Let me compare it briefly with my (subjectively, at the time I owned it) best ever computer, my Macintosh IIfx, RIP, which I got in 1991 – the computer on which I first documented both the active contact lens and text messaging, and on which I suppose I also started this project. The IIfx ran a 68030 processor at 40MHz, with 273,000 transistors, 4MB of RAM and an 80MB hard drive. Every computer I’ve used since has given me extra function at the expense of lower performance, wasted time and frustration.

Although its OS is stored on a 128GB solid-state disk, my current PC takes several seconds longer to boot than my Macintosh IIfx did – that went from cold to fully operational in 14 seconds; yes, I timed it. On my PC today, clicking a browser icon to reach the first page usually takes a few seconds. Clicking on a Word document back then took a couple of seconds to open; it still does now. Both computers gave real-time response to typing, and both featured occasional unexplained delays. I didn’t need a firewall or virus checker back then, but now I run tedious maintenance routines a few times every week. (The only virus I had before 2000 was nVIR, which came on the Mac’s system disks.) I still don’t get many viruses, but the significant time I spend avoiding them has to be counted too.

Going back further still, my first ever computer, in 1981, was an Apple 2, with only 9000 transistors running at 2.5MHz and a piddling 32kB of memory. The OS was tiny. Nevertheless, on it I wrote my own spreadsheet, graphics programs, lens design programs, and an assortment of missile, aerodynamic and electromagnetic simulations. Using the same transistors as the i7, you could make 1000 of these processors in a single square millimetre!

Of course some things are better now. My PC has amazing graphics and image processing capabilities, though I rarely make full use of them. It lets me browse the net (and see video ads). If I don’t mind telling Google who I am, I can also watch videos on YouTube, or I could tell the BBC or some other video provider who I am and watch theirs. I could theoretically play quite sophisticated computer games, but it is my work machine, so I don’t. I do use it as a music player and to show photos. But mostly, I use it to write, just like my Apple 2 and my Mac IIfx. Subjectively, it is about the same speed for those tasks. Graphics and video are the main things that differ.

I’m not suggesting going back to an Apple 2 or even a IIfx. However, using i7 chip technology, a 9000-transistor processor running 1360 times faster and taking up 1/1000th of a square millimetre would still let me write documents and simulations, but would be blazingly fast compared to my old Apple 2. I could fit another 150,000 of them in the same chip space as the i7. Or I could have 5128 Mac IIfxs running at 85 times normal speed. Or you could have a single Mac IIfx running 85 times faster than the original for a tiny fraction of the price. There are certainly a few promising trees in the forest that nobody seems to have barked up. As an interesting aside, that 22nm-tech Apple 2 chip would be only ten times bigger than a skin cell – probably less by now, since my PC is already several months old.
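Those ratios are simple to verify (all figures as quoted above):

    i7_transistors, i7_clock = 1.4e9, 3.4e9
    apple2_transistors, apple2_clock = 9000, 2.5e6
    fx_transistors, fx_clock = 273e3, 40e6

    print(i7_clock / apple2_clock)                    # 1360x the Apple 2's clock
    print(int(i7_transistors // apple2_transistors))  # ~155,000 Apple-2-class cores per i7 budget
    print(int(i7_transistors // fx_transistors))      # ~5128 IIfx-class cores
    print(i7_clock / fx_clock)                        # 85x the IIfx's clock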

At the very least, that raises the question of what all this extra processing is for, and why there is still ever any noticeable delay in doing anything in spite of it. Each of those earlier machines was perfectly adequate for everyday tasks such as typing or spreadsheeting. All the extra speed has an impact on only some things, and most of it is wasted by poor code. Some of the delays we had 20 and 30 years ago still affect us just as badly today.

The main point though is that if you can make thousands of processors on a standard sized chip, you don’t have to run multitasking. Each task could have a processor all to itself.

The operating system currently runs programs to check all the processes that need attention, determine their priorities, schedule processing for them, and copy their data in and out of memory. That is not needed if each process can have its own dedicated processor and memory all the time. There are lots of ways of using basic physics to allocate processes to processors, relying on basic statistics to ensure that collisions rarely occur. No code is needed at all.
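That statistical claim is easy to sanity-check with a toy Monte Carlo model (all numbers invented): if each incoming task picks one of a large pool of processors at random, clashes stay rare while the pool is lightly loaded, and a clash only means picking again.

    import random

    processors, tasks, trials = 10_000, 500, 1_000
    clashes = 0
    for _ in range(trials):
        picks = [random.randrange(processors) for _ in range(tasks)]
        clashes += tasks - len(set(picks))   # tasks that landed on an occupied pick

    print(clashes / trials)   # ~12 of 500 tasks clash on average; each just retries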

An ultra-simple computer could therefore have a large pool of powerful, free processors, each with their own memory, allocated on demand using simple physical processes. (I will describe a few options for the basic physics processes later). With no competition for memory or processing, a lot of delays would be eliminated too.

Ultra-simple computing: Part 1

Introduction

This is the first part of a techie series. If you aren’t interested in computing, move along, nothing here. It is a big topic, so I will cover it in several manageable parts.

Like many people, I spent a good few hours changing passwords after the Heartbleed problem and then again after eBay’s screw-up. It is a somewhat futile task, because passwords are no longer a secure defense anyway. A decent hacker with a decent computer can crack hundreds of passwords in an hour, so unless an account is locked after a few failed attempts – and many aren’t – passwords only keep out casual observers and the most amateurish hackers.
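The brute-force arithmetic behind that claim is worth seeing. A quick Python sketch – the guessing rate is an illustrative assumption (real rigs vary enormously, and online attacks against a rate-limited server are far slower):

    GUESSES_PER_SEC = 1e9   # assumed offline rate for one GPU rig (illustrative)

    def crack_days(alphabet_size, length):
        """Worst-case days to exhaust every password of this alphabet and length."""
        return alphabet_size ** length / GUESSES_PER_SEC / 86_400

    print(crack_days(26, 8))    # lowercase only, 8 chars: ~0.002 days (minutes)
    print(crack_days(62, 8))    # mixed case + digits, 8 chars: ~2.5 days
    print(crack_days(62, 12))   # 12 chars: ~37 million days - effectively never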

The need for simplicity

A lot of problems are caused by the complexity of today’s software, which makes it impossible to find every error and hole. Weaknesses have been added to operating systems, office automation tools and browsers to increase functionality for only a few users, even though they add little for most of us most of the time. I don’t think I have ever executed a macro in Microsoft Office, for example, and I’ve certainly never used print merge or many of its other publishing and formatting features. I was perfectly happy with Word 93, and most things added since then (apart from the real-time spelling and grammar checker) have added irrelevant and worthless features at the expense of safety. I can see very little user advantage in allowing pop-ups on web sites, or tracking cookies. Their primary purpose is to learn about us to make marketing more precise. I can see why they want that, but I can’t see why I should allow it. Users generally want pull marketing, not push, and pull doesn’t need cookies; there are better ways of sending your standard data when needed, if that’s what you want to do, and better ways of automating logons to regular sites, if that is needed.

In a world where more of the people who wish us harm are online, it is time to design an alternative platform – one designed specifically to be secure from the start, to which no features are added that allow remote access or control without deliberate, explicit permission. It can be done. A machine with a strictly limited set of commands and access can be made secure and can even be networked safely. We may have to sacrifice a few bells and whistles, but I don’t think we will need to sacrifice many that we actually want or need. It may be less easy to track us and advertise at us, or to offer remote machine analysis tools, but I can live with that and so can you. Almost all the services we genuinely want could still be provided. You could still browse the net, still buy stuff, still play games with others, and socialize. But you wouldn’t be able to install or run code on someone else’s machine without their explicit knowledge. Every time you turned the machine on, it would be squeaky clean. That’s already a security benefit.

I call it ultra-simple computing. It is based on the principles that simplicity and a limited command set make a machine easy to understand and easy to secure; that basic physics and logic are more reliable than severely bloated code; and that enough is enough – more than that is too much.

We’ve been barking up the wrong trees

There are a few things you take for granted in your IT that needn’t be so.

Your PC has an extremely large operating system. So does your tablet, your phone, games console… That isn’t really necessary. It wasn’t always the case and it doesn’t have to be the case tomorrow.

Your operating system still assumes that your PC has only a few processing cores and has to allocate priorities and run-time on those cores for each process. That isn’t necessary.

Although you probably use some software in the cloud, you probably also download a lot of software off the net or install from a CD or DVD. That isn’t necessary.

You access the net via an ISP. That isn’t necessary. Almost unavoidable at present, but only due to bad group-think. Really, it isn’t necessary.

You store data and executable code in the same memory and therefore have to run analysis tools that check all the data in case some is executable. That isn’t necessary.

You run virus checkers and firewalls to prevent unauthorized code execution or remote access. That isn’t necessary.

Overall, we live with an IT system that is severely unfit for purpose. It is dangerous, bloated, inefficient, excessively resource- and energy-intensive, extremely fragile, vulnerable to attack via many routes, designed with the user as a lower priority than suppliers, with a philosophy of functionality at any price. The good news is that it can be replaced by one that is absolutely fit for purpose: secure, invulnerable, cheap, reliable, resource-efficient, and one that works just fine. Even better, it could be so cheap that you could have both – living as risky an online life as you like in the areas that don’t really matter, knowing you have a safe platform to fall back on when your risky system fails or when you want to do anything that involves your money or private data.

More future fashion fun

A nice light-hearted shorty again. It started as one on smart makeup, but I deleted that; I’ll do it soon. This one is easier and in line with today’s news.

I am the best-dressed and most fashion-conscious futurologist in my office. Mind you, the population is 1. I liked an article in the papers this morning about Amazon starting to offer 3D-printed bobble-heads that look like you.

See: http://t.co/iFBtEaRfBd.

I am especially pleased, since I suggested the idea over two years ago in a paper I wrote on 3D printing:

http://timeguide.wordpress.com/2012/04/30/more-uses-for-3d-printing/

In the news article, you see the chappy with a bobble-head of himself wearing the same shirt. Since Amazon sells shirts too, it obviously won’t be long at all before they send you cute little avatars of yourself wearing the outfits you buy from them. It starts with bobble-heads, but all the doll manufacturers will bring out versions based on their dolls, as well as character merchandise from films, games and TV shows. Kids will populate doll houses with minis of themselves and their friends.

You could even give a friend one of themselves as a birthday present instead of a gift voucher, so they can see the outfit you are offering before deciding whether they want that or something different. Over time, you’d build a collection of minis of you and your friends in various outfits.

3D cameras are coming to phones too, so you’ll be able to immortalize embarrassing office party antics in 3D office ornaments. When you can’t afford to buy an outfit or accessory sported by your favorite celeb, you could get a miniature wearing it. Clothing manufacturers may well appreciate the extra revenue from selling miniatures of their best kit.

Sports manufacturers will make replicas of you wearing their kit, doing sporting activities. Car manufacturers will make ones of you driving the car they want you to buy – or you could buy a fleet of miniatures. Holiday companies could put you in a resort hotspot. Or in a bedroom… with your chosen celeb.

OK, enough.

The United Nations: Gaza, climate change and UK welfare

This one is just personal commentary, not my normal futurology; even futurists have opinions on things today. Move along to my futurist pieces if you want.

These areas are highly polarized, and I know many readers will disagree with my views this time. I don’t want to cause offence, but I think it is too important an issue to leave un-blogged. Maybe I won’t say anything that hasn’t already been said 1000 times by others, but I would not feel justified in keeping quiet.

Feel free to add inoffensive comments.

The UN started off as a good idea, but over some decades its reputation has taken an occasional battering. I will argue that it has recently started to do more harm than good in a couple of areas, so it should take more care. Instead of being a global organisation that solves global problems and ensures a better life for everyone, in these areas at least it has become a tool for activists pushing their own personal political and ideological agendas.

Last week the UN Human Rights Council condemned Israel for its action in Gaza and wanted to investigate it for war crimes, because it apparently wasn’t doing enough to reduce civilian casualties in Gaza. The UN is also critical that far more Palestinians are killed than Israelis. Let’s look at that. My analysis echoes that of many others.

I am of course distressed by the civilian deaths in Gaza and Israel, just as I am in other conflicts, and wish they could be avoided, but watching the news and listening to the many voices, my view is that any blame for them must be assigned to Hamas, not Israel. I hope that the UN’s taking sides against Israel shares no common ground with the growing antisemitism we are now seeing in many of the public demonstrations we see about the conflict.

Israel does its best to reduce Palestinian civilian deaths by giving advance warning of its activities, even at the cost of greater risk to its own forces, so it seems reasonable to absolve it of responsibility for casualties after such warnings. If people remain in a danger zone because they are not permitted to leave, those who force them to remain are guilty. If civilians are forced to remain while the military evacuate, the military are doubly guilty. War is always messy; there are always some errors of judgment, rogue soldiers and accidents, but that is a quite separate issue.

A superior military will generally suffer fewer casualties than their opponent. The Israelis can hardly be blamed for protecting their own people as well as they can and it isn’t their fault if Hamas wants to maximize casualties on their side. Little would be gained by forcing Israel to have random Israelis killed to meet a quota.

Hamas has declared its aim to be the annihilation of Israel and all Jews. There can be no justification for such a position. It is plain wrong. The Israeli goal is self-defense – to prevent their people being killed by rocket attacks, and ultimately to prevent their nation from being annihilated. There is no moral equivalence in such a conflict. One side is in the right and behaves in a broadly civilized manner, the other is wrong and behaves in a barbaric manner.

Israelis don’t mix their civilian and military areas, so it is easy to see which are which. Their civilian areas are deliberately targeted by Hamas, with no warnings, to cause as many civilian deaths as possible, but Israel evacuates people and uses its Iron Dome to destroy incoming rockets before they hit.

On the other side, the military in Gaza deliberately conceal their personnel and weapons in civilian areas such as primary schools, hospitals and residential areas, and launch attacks from those areas (UN schools have been among them). When they receive Israeli warnings of an attack, they evacuate key personnel and force civilians to remain. Hamas knows that innocent people on its own side will be killed. It deliberately puts them in harm’s way to capitalise on the leverage it gains via some western media and politicians, and now the UN. The more innocents killed by incoming fire, the more points and sympathy they get, and the more battering the Israelis get.

I don’t see any blame at all on the Israeli side here. As the Israelis put it, they use missiles to defend their civilians, while Hamas uses civilians to defend its missiles.

If Hamas uses Palestinian women and children as human shields, then they must be given the blame for the inevitable deaths, not Israel. They are murdering their own people for media and political points.

The UN, by fostering the illusion that both sides are equally bad, condemning Israel, and helping Hamas in its media war, is rewarding Hamas for killing its own women and children. The UN is ignoring the critically important circumstances: Hamas using human shields, forcing people to remain in danger zones, placing military resources in civilian areas and launching attacks from them. The UN also ignores Israel’s attempts to minimize civilian casualties via warnings and advance mini-strikes.

The UN therefore forfeits any right to pontificate on morality in this conflict. They have stupidly rewarded Hamas for its human shield policy. Some extra women and children in Gaza will die because of the UN’s condemnation of Israel. It is proof that the human shields policy works. The long list of useful idiots with innocent Palestinian blood on their hands includes many Western journalists, news programs and politicians who have also condemned Israel rather than Hamas for the civilian deaths. The UN deserves condemnation for its words, but the victims will be innocent Palestinian civilians.

Let’s move on to look at another area where the UN is doing harm.

The UN is the home of the Intergovernmental Panel on Climate Change. It is the source of scientific and socio-economic advice on a wide range of policies intended to defend the environment against global warming. I won’t look at the issue of climate change here, only the harmful economic policies resulting from poor IPCC advice aimed at reducing CO2 emissions:

Biodiesel – the IPCC produced extremely encouraging figures for palm oil plantations as a substitute for fossil fuels, leading to massive growth in palm oil planting. A lot of forest was burned down to make land available, causing huge immediate emissions of CO2. A lot of planting was on peat-land, causing the peat to dry out and biodegrade, again emitting massive amounts of CO2 into the air. Many poor people were evicted from their land to make room for the plantations. The result of this advice is that CO2 emissions increased, the environment was badly damaged in several ways, and many poor people suffered.

In western countries, huge areas of land were switched to growing crops for biodiesel. This caused a drop in food grain production and an increase in food prices, causing malnutrition in poor countries, an unknown number of deaths from starvation, and a massive increase in poverty. The policy is in reverse now, but the damage has been done; very many poor people suffered.

Solar power farms have sprung up widely on agricultural land. Again this pushes up food prices and again the poor suffer. Since solar is not economic in most countries yet, it has to be subsidized, and poor people suffer additionally via higher energy bills.

Wind energy is a worse solution still. In Scotland, many turbines are planted on peat-land. The turbines need to have roads to them for building and maintenance. The roads cause the peat to dry out, making it biodegrade and leading to high CO2 emissions. The resulting CO2 emissions from some Scottish wind farms are greater than would have resulted from producing the same energy from coal, while a local ecosystem is destroyed. Additionally, 1% of the endangered white-tailed eagles in Scotland have already been killed by them. Small mammals and birds have their breeding cycles interrupted due to stress caused by the flicker and noise. Humans in nearby areas are stressed too. Wind energy is even more expensive than solar, so it needs even more subsidy, and this has therefore increased energy prices and fuel poverty. Poor people have suffered while rich landowners and wind farm owners have gained from huge subsidy windfalls. The environment has taken a beating instead of benefiting, money has been transferred from the poor to the rich and the poor suffer again.

Carbon taxes favored by the IPCC have been associated with fraud and money laundering, helping criminality to flourish. They have also caused some industries to relocate overseas, destroying jobs and local communities that depend on those industries. The environmental standards followed in recipient countries are sometimes lower, so the environment overall suffers. The poor suffer most since they find it harder to relocate.

Carbon offsetting has similar issues to those above – increasing prices and taxes, creating fraud opportunities, and encouraging deforestation and forced relocation of communities in areas wanted for offset schemes. The environment and the poor both suffer again.

The huge drain on national economies trying to meet emissions targets resulting from IPCC reports makes economic recovery in Europe much slower, and the poor suffer. Everyone in a country suffers from the higher national debts and the higher taxes needed to pay them back with interest. Enforced government austerity measures lead to cuts in budget increases for welfare, and the poor suffer. Increasing economic tension also leads to more violence and more social division.

The IPCC’s political influence – producing reports that are essentially politics rather than simply good science – has led to its infiltration by political green activists seeking to introduce otherwise unacceptable socialist policies via the environmental door, and has provided official accreditation for activist propaganda. This has corrupted the whole process of science in environmental circles, damaging public faith in science generally. That loss of trust in science and scientists now echoes across other spheres of science, making it harder to win public support for important science projects such as future medical programs, beneficial lifestyle changes, dietary advice and other things that will affect quality and quantity of life for everyone. It’s a pretty safe bet that the poor will suffer most, some people won’t live as long, and the environment will take more damage too.

A much more minor one to finish:

Going back to September 2013, the UN Human Rights Special Rapporteur Raquel Rolnik was heavily critical of the UK government’s attempt at removing the ‘spare room subsidy’ that allowed people to remain in council houses bigger than they need – a policy designed to free up homes for families that do need them. Why should this be a UN human rights concern? Regardless of political affiliation, most people agree that if new houses can’t be built fast enough, it makes sense to encourage families to downsize from properties bigger than they need, provided of course that policies allow for genuine specific needs. Even allowing for poor implementation, it is hard to see this as a priority for a human rights investigation amid such genuine and extreme abuses worldwide. The fact that this review occurred at all shows a significant distortion of values and priorities in today’s UN.

These are just a few areas where the UN makes a negative contribution to the world. I haven’t looked at others, though clearly some of its activities are praiseworthy. I hope that it will fix these meanderings away from its rightful path. If it doesn’t, it could eventually become a liability.

Future materials: Variable grip


Another simple idea for the future. Variable grip under electronic control.

Shape-changing materials are springing up regularly now. There are shape-memory metal alloys, proteins, polymer gel muscle fibers and even string (which changes shape as it gets wet or dries again). It occurred to me that if you make a triangle out of carbon fibre, or indeed anything hard, with a polymer gel base, and pull the base together, either the base moves down or the tip moves up. If tiny components of this shape are embedded throughout a 3D structure such as a tire (tyre is the English spelling; the rest of this text uses tire because most of the blog’s readers are American), then tiny spikes could be made to poke through the surface by contracting the polymer gel that forms the base. All you have to do is apply an electric field across it, and that makes the tire surface just another part of the car electronics, along with the engine management system and suspension.
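The geometry behind that is worth a quick check: for a rigid triangle with sides of length s and half-base b, Pythagoras gives a tip height of sqrt(s^2 - b^2), so a modest squeeze of the base produces a disproportionately large rise of the tip. A quick Python sketch (dimensions are arbitrary):

    from math import sqrt

    s = 1.0                       # rigid side length (arbitrary units)
    for b in (0.9, 0.8, 0.7):     # half-base, squeezed by the polymer gel
        print(f"half-base {b:.1f} -> tip height {sqrt(s*s - b*b):.2f}")
    # 0.9 -> 0.44, 0.8 -> 0.60, 0.7 -> 0.71:
    # a ~22% base contraction extends the tip by ~64%.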

Tires that can vary their grip and wear according to road surface conditions might be attractive, especially in racing, but also on the street. Improved emergency braking would save lives, as would reduced skidding in rain or ice, and letting the components retract when not in use would greatly reduce their rate of wear. In racing, grip could be optimized for cornering and braking, and wear could be optimized for the straights.

Fashion

Although I haven’t yet bothered to draw pretty pictures to illustrate it, clothes could use variable grip too. Shoes and gloves would both benefit. Since both can have easy contact with skin (shoes can use socks as a relay), the active components could pick up the electrical signals associated with muscle control or even thinking; even stress is detectable via skin resistance measurement. Having gloves or shoes that change grip just by you thinking it would be like being a cat, whose claws push out when it wants to climb a fence or attack something. You could even be a micro-scale version of Wolverine. Climbers might want to vary their grip for different kinds of rock, extruding different spikes for different conditions.

Other clothes could use different materials for the components and still use the same basic techniques to push them out, creating a wide variety of electronically controllable fabric textures. Anything from smooth and shiny through to soft and fluffy could be made with a single adaptable fabric garment. Shoes, hosiery, underwear and outerwear can all benefit. Fun!

Road deaths v hospital hygiene and errors

Here is a slide I just made for a road safety conference. All the figures I used come from government sources. We use the argument that a life is worth any spend, and we might be able to shave 10% off road deaths if we try hard, but we would save 30 times as many lives by reducing NHS errors and improving hygiene by just 10%.

(Slide: road safety v NHS)

Drones – it isn’t the Reapers and Predators you should worry about

We’re well used now to drones being used to attack terrorist targets in the Middle East. Call of Duty players will also be familiar with using drones to take out enemies. But drones so far are basically unmanned planes with missiles attached.

Elsewhere, quadcopter drones are also becoming very familiar for a variety of tasks, but so far at least, we’re not seeing them being used on the battlefield, or if they are being used, it is being kept out of the news. It can only be a matter of time though. They can already be made in a wide range of sizes from tiny insect-sized reconnaissance drones that carry cameras, microphones or other small sensors, right up to helicopter-sized drones for missile and gun mounting.

At each size, there are advantages and disadvantages. Collectively, drones will change warfare and terrorism dramatically over the next decades.

Although the big Predator drones with their Hellfire missiles look very impressive, pack a mean punch and are well proven in warfare, they soon won’t be as important as tiny drones. Imagine you have a big gun and a choice of being attacked by two enemies: a hungry grizzly bear, or a swarm of killer bees that can penetrate your clothing. The bear is huge, with big sharp claws and teeth, but there is only one, and if you’re a good shot and stay cool it will go down easily. The bees are small; you may swat a few, but many will sting you. In practice, the sting could be a high-voltage electric shock, a drop of nerve gas, a laser into your eye, or lethal germs – all of which are banned, but terrorists don’t care. Sharp carbon needles can penetrate a lot of armor. It is even possible to make tiny shaped-charge explosive stings.

Soon, they won’t even need to be as big as bees. Against many backgrounds it can be almost impossible to see a midge, let alone kill it, and a midge-sized device can get through even a small gap. Soldiers don’t like having to fight in Noddy suits (NBC gear).

Further in the future, various types of nanotech device might be added to attack your nervous system, take over your brain, paralyze you, or switch your consciousness off.

Nature loves self-organisation, and biomimetics has adopted the idea well already. It is easy to use simple flocking algorithms to keep a swarm loosely together and pretty immune to high attrition. The algorithms only need simple sensors and processors, so can be very cheap. A few seekers can find and identify targets and the right areas of a target to attack. The rest can carry assorted payloads and coordinate their attacks, adding electric charges to make lethal shocks or arranging to ‘sting’ simultaneously or in timed sequences at certain points.
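Flocking really is computationally cheap. A bare-bones 2-D fragment of the classic boids idea in Python (just cohesion and separation; all parameters invented) shows how little processing each swarm member needs:

    import random

    N, STEPS = 50, 100
    pos = [[random.uniform(0, 100), random.uniform(0, 100)] for _ in range(N)]
    vel = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

    for _ in range(STEPS):
        cx = sum(p[0] for p in pos) / N        # swarm centre (cohesion target)
        cy = sum(p[1] for p in pos) / N
        for i in range(N):
            vel[i][0] += 0.01 * (cx - pos[i][0])    # drift gently toward the swarm
            vel[i][1] += 0.01 * (cy - pos[i][1])
            for j in range(N):                       # shove away from close neighbours
                dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
                if i != j and dx * dx + dy * dy < 4.0:
                    vel[i][0] += 0.05 * dx
                    vel[i][1] += 0.05 * dy
            pos[i][0] += vel[i][0]
            pos[i][1] += vel[i][1]
    # Losing half the swarm changes nothing: no member is special,
    # which is what makes a swarm so immune to attrition.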

We heard this week about 3D printers allowing planes to make offshoots during flight. Well, insect-sized drones could do that too. Some could carry material, some could have the print heads, and some could provide the relative positioning systems for others to assemble whatever you want. Weapons could seemingly appear from nowhere, assembled very close to the target.

So much for the short-term and mid-term future. What then?

Mass Effect combat drone – picture credit: masseffect.wikia.com

In futuristic computer games such as Halo and Mass Effect, combat orbs float around doing various military and assistant tasks. We will soon be able to make those too. We don’t have to use quadcopters or dragonfly drones. I had to design one for my sci-fi novel but I kept as close as possible to real feasible technology. Mine just floats around using electromagnetic/plasma effects. I discussed this in:

http://carbonweapons.com/2013/06/27/free-floating-combat-drones/ (the context there was for my sci-fi book, but the idea is still feasible)

I explained how such drones could self-organize, could be ultra-smart, and could reassemble if hit, becoming extremely resilient. They could carry significant weaponry too. A squadron of combat drones like these would be one hell of an enemy. You could shoot one for ages with lasers or bullets and it would keep coming. Disrupting its fields with electrical weapons would make it collapse temporarily, but it would just get up and reassemble as soon as you stopped firing. With its intelligence potentially based in a local cloud, you could make a small battalion of these that could only properly be killed by totally frazzling all of them. They would be potentially lethal individually, but almost irresistible as a team. Super-capacitors could be recharged frequently using companion drones to relay power from the rear line. A mist of spare components could provide ready replacements for any that are destroyed. Self-orientation and the use of free-space optics for comms make wiring and circuit boards redundant, and sub-millimetre chips 100m away would be quite hard to hit.

My generation grew up with the nuclear arms race. Millennials will grow up with the drone arms race, and that, if anything, is a lot scarier. The battle drones in computer games are fairly easy to kill. Real ones soon won’t be.

Well, I’m scared. If you’re not, I didn’t explain it properly.