Category Archives: technology

The future of electronic cash and value

 

Picture first, I’m told people like to see pics in blogs. This one is from 1998; only the title has changed since.

[Image: future electronic cash]

Every once in a while I have to go to a bank. This time it was my 5th attempt to pay off a chunk of my Santander mortgage. I didn't know all the account details for a web transfer so went to the Santander branch. Fail – they only take cash and cheques. Cash and what??? So I tried via internet banking. Entire transaction details plus security entered, THEN fail – I exceeded what Barclays allows for their fast transfers. Tried again with a smaller amount and again all details and all security. Fail again – Santander can't receive such transfers, try CHAPS instead. Tried CHAPS; it said it was all fine, all hunky-dory. Happy bunny. Double fail. It failed because the amount exceeded a limit AND it told me it had succeeded when it hadn't. I then drove 12 miles to my Barclays branch, who eventually managed to do it, I think (though I haven't checked that it worked yet).

It is 2015. Why the hell is it so hard for two world class banks to offer a service we should have been able to take for granted 20 years ago?

Today, I got tweeted about Ripple Labs and a nice blog that quotes their founder sympathising with my experience above and trying to solve it, with some success:

http://www.wfs.org/blogs/richard-samson/supermoney-new-wealth-beyond-banks-and-bitcoin

Ripple seems good as far as it goes, which is summarised in the blog, but do read the full original:

Basically the Ripple protocol “provides the ability for humans to confirm financial transactions without a central operator,” says Larsen. “This is major.” Bitcoin was the first technology to successfully bypass banks and other authorities as transaction validators, he points out, “but our method is much cheaper and takes only seconds rather than minutes.” And that’s just for starters. For example, “It also leverages the enormous power of banks and other financial institutions.”

The power of the value web stems from replacing archaic back-end systems with all their cumbersome delays and unnecessary costs. 

That's great, and I wish them every success. It is always nice to see new systems that are more efficient than the old ones, but the idea dates from the early 1990s. Lots of IT people looked at phone billing systems and realised they managed to do for a penny what banks did for 65 pennies at the time, and telco business cases were developed to replace the banks with pretty much what Ripple is trying to do now. Those were never pursued, for a variety of reasons both business and regulatory, but the ideas were certainly understood and developed broadly at engineer level to include not only traditional cash forms but many that didn't exist then and still don't. Even Ripple can only process transactions in things that are equivalent to money, such as traditional currencies, electronic cash forms like bitcoin, sea shells or air-miles.

That much is easy, but some forms of value require other tokens, such as personalized tokens. Some value varies according to queue lengths, time of day, or who is spending it and to whom. Some needs to be assignable, so you can give money that can only be used to purchase certain things, and may have a whole basket of conditions attached. Money is also only one form of value, and many forms of value are volatile, only existing at certain times and places in certain conditions for certain transactors. Aesthetic cash? Play money? IOUs? Favours? These are all a bit like cash but not necessarily tradable or exchangeable using simple digital transaction engines, because they carry emotional weighting as well as financial value. In the care economy, which is now thankfully starting to develop and is finally reaching concept critical mass, emotional value will become immensely important and it will have some tradable forms, though much will never be tradable. We understood all that then, but are still awaiting proper implementation. Most new startups on the web are old ideas finally being implemented, and Ripple is only a very partial implementation so far.

Here is one of my early blogs from 1998, using ideas we’d developed several years earlier that were no longer commercially sensitive – you’ll observe just how much banks have under-performed against what we expected of them, and what was entirely feasible using already known technology then:

Future of Money

 Ian Pearson, BT Labs, June 98

Already, people are buying things across the internet. Mostly, they hand over a credit card number, but some transactions already use electronic cash. The transactions are secure so the cash doesn’t go astray or disappear, nor can it easily be forged. In due course, using such cash will become an everyday occurrence for us all.

Also already, electronic cash based on smart cards has been trialled and found to work well. The BT form is called Mondex, but it is only one among several. These smart cards allow owners to ‘load’ the card with small amounts of money for use in transactions where small change would normally be used, paying bus fares, buying sweets etc. The cards are equivalent to a purse. But they can and eventually will allow much more. Of course, electronic cash doesn’t have to be held on a card. It can equally well be ‘stored’ in the network. Transactions then just require secure messaging across the network. Currently, the cost of this messaging makes it uneconomic for small transactions that the cards are aimed at, but in due course, this will become the more attractive option, especially since you no longer lose your cash when you lose the card.

When cash is digitised, it loses some of the restrictions of physical cash. Imagine a child has a cash card. Her parents can give her pocket money, dinner money, clothing allowance and so on. They can all be labelled separately, so that she can't spend all her dinner money on chocolate. Electronic shopping can of course provide the information needed to enable the cash. She may have restrictions on how much of her pocket money she may spend on various items too. There is no reason why children couldn't implement their own economies too, swapping tokens and IOUs. Of course, in the adult world this grows up into local exchange trading systems (LETS), where people exchange tokens, a glorified babysitting circle. But these LETS don't have to be just local; wider circles could be set up, even globally, to allow people to exchange services or information with each other.
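As a rough sketch of how such labelled cash might be represented, here is a minimal Python illustration. The category names and merchant rules are entirely hypothetical, just to show the idea of money earmarked for particular uses:

```python
from dataclasses import dataclass, field

@dataclass
class LabelledPurse:
    """A toy purse holding cash earmarked by category, e.g. a child's card."""
    balances: dict = field(default_factory=dict)   # category -> pennies

    def load(self, category: str, amount: int) -> None:
        """Parent loads money under a spending label."""
        self.balances[category] = self.balances.get(category, 0) + amount

    def spend(self, category: str, amount: int, merchant: str,
              allowed: dict) -> bool:
        """Spending succeeds only at merchant types allowed for the label."""
        if merchant not in allowed.get(category, set()):
            return False        # e.g. dinner money refused at the sweet shop
        if self.balances.get(category, 0) < amount:
            return False
        self.balances[category] -= amount
        return True

purse = LabelledPurse()
purse.load("dinner", 500)                  # 5.00 of dinner money, in pennies
rules = {"dinner": {"canteen"}}            # dinner money: canteen only
print(purse.spend("dinner", 200, "sweet_shop", rules))   # False - blocked
print(purse.spend("dinner", 200, "canteen", rules))      # True
```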

Electronic cash can be versatile enough to allow for negotiable cash too. Credit may be exchanged just as cash and cash may be labelled with source. For instance, we may see celebrity cash, signed by the celebrity, worth more because they have used it. Cash may be labelled as tax paid, so those donations from cards to charities could automatically expand with the recovered tax. Alternatively, VAT could be recovered at point of sale.

With these advanced facilities, it becomes obvious that the cash needs to become better woven into taxation systems, as well as auditing and accounting systems. These functions can be much more streamlined as a result, with less human administration associated with money.

When ID verification is added to the transactions, we can guarantee who it is carrying out the transaction. We can then implement personal taxation, with people paying different amounts for the same goods. This would only work for certain types of purchase – for physical goods there would otherwise be a thriving black market.

But one of the best advantages of making cash digital is the seamlessness of international purchases. Even without common official currency, the electronic cash systems will become de facto international standards. This will reduce the currency exchange tax we currently pay to the banks every time we travel to a different country, which can add up to as much as 25% for an overnight visit. This is one of the justifications often cited for European monetary union, but it is happening anyway in global e-commerce.

Future of banks

 Banks will have to change dramatically from today’s traditional institutions if they want to survive in the networked world. They are currently introducing internet banking to try to keep customers, but the move to digital electronic cash, held perhaps by the customer or an independent third party, will mean that the cash can be quite separate from the transaction agent. Cash does not need to be stored in a bank if records in secured databases anywhere can be digitally signed and authenticated. The customer may hold it on his own computer, or in a cyberspace vault elsewhere. With digital signatures and high network security, advanced software will put the customer firmly in control with access to any facility or service anywhere.

In fact, no-one need hold cash at all, or even move it around. Cash is just bits today, already electronic records. In the future, it will be an increasingly blurred entity, mixing credit, reputation, information, and simply promises into exchangeable tokens. My salary may be just a digitally signed certificate from BT yielding control of a certain amount of credit, just another signature on a long list as the credit migrates round the economy. The ‘promise to pay the bearer’ just becomes a complex series of serial promises. Nothing particularly new here, just more of what we already have. Any corporation or reputable individual may easily capture the bank’s role of keeping track of the credit. It is just one service among many that may leave the bank.
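That chain of serial promises is easy to sketch in code. Here is a toy Python version, with a truncated hash standing in for a real digital signature (illustrative only, nothing like cryptographically secure):

```python
import hashlib

def toy_sign(signer: str, payload: str) -> str:
    """Stand-in for a real digital signature (illustration only)."""
    return hashlib.sha256(f"{signer}:{payload}".encode()).hexdigest()[:16]

def endorse(token: dict, new_holder: str) -> dict:
    """The current holder signs the credit over to the next holder,
    appending one more promise to the serial list."""
    payload = f"{token['holder']}->{new_holder}:{token['amount']}"
    return {"amount": token["amount"],
            "holder": new_holder,
            "history": token["history"] + [new_holder],
            "signatures": token["signatures"] + [toy_sign(token["holder"], payload)]}

# BT issues salary credit, which then migrates around the economy.
token = {"amount": 1000, "holder": "BT", "history": ["BT"], "signatures": []}
token = endorse(token, "Ian")       # salary paid
token = endorse(token, "Grocer")    # spent
print(token["history"], token["signatures"])
```

Anyone can replay the chain to check every handover, which is all a bank's ledger really does; hence any reputable party could take over that role.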

As the world becomes increasingly networked, the customer could thus retain complete control of the cash and its use, and could buy banking services on a transaction by transaction basis. For instance, I could employ one company to hold my cash securely and prevent its loss or forgery, while renting the cash out via another company to companies that want to borrow, keeping the bulk of the revenue for myself. Another company might manage my account, arrange transfers etc, and deal with the taxation, auditing etc. I could probably get these done on my personal computer, but why have a dog and bark yourself?

The key is flexibility: none of these services need be fixed any more. Banks will not compete on the overall package, but on every aspect of service. Worse still (for the banks), some of their competitors will be just freeware agents. The whole of the finance industry will fragment. The banks that survive will almost by definition be very adaptable. Services will continue and be added to, but not by the rigid structures of today. Surviving banks should be able to compete for a share of the future market as well as anyone. They certainly have a head start in many of the required skills, and have the advantage of customer lethargy when it comes to changing to potentially better suppliers. Many of their customers will still value tradition and will not wish to use the better and cheaper facilities available on the network. So as always, it looks like there will be a balance.

Firstly, with large numbers of customers moving to the network for their banking services, banks must either cater for this market or become a niche operator, perhaps specialising in tradition, human service and even nostalgia. Most banks however will adapt well to network existence and will either be entirely network based, or maintain a high street presence to complement their network presence.

High Street banking

 Facilities in high street banking will echo this real world/cyberspace nature. It must be possible to access network facilities from within the banks, probably including those of competitors. The high street bank may therefore be more like shops today, selling wares from many suppliers, but with a strongly placed own brand. There is of course a niche for banks with no services of their own at all who just provide access to services from other suppliers. All they offer in addition is a convenient and pleasant place to access them, with some human assistance as appropriate.

Traditional service may sometimes be pushed as a differentiator, and human service is bound to attract many customers too. In an increasingly machine dominated world, actually having the right kind of real people may be significant value add.

But many banks will be bursting with high technology, either alongside or in place of people. Video terminals to access remote services, perhaps with translation to access foreign services. Biometric identification based on iris scans, fingerprints etc may be used to authenticate smart cards, passports or other legal documents before their use, or simply as a means of registering securely onto the network. High quality printers and electronic security embedding would enable banks to offer additional facilities like personal bank notes, usable as cash.

Of course, banks can compete in any financial service. Because the management of financial affairs gives them a good picture of many customers' habits and preferences, they will be able to use this information to sell customer lists, identify market niches for new businesses, and predict the likely success of customers proposing to set up businesses.

As they try to stretch their brands into new territories, one area where they may be successful is information banking. People may use banks as the publishers of the future. Already, knowledge guilds are emerging. Ultimately, any piece of information from any source can be marketed at very low publishing and distribution cost, making previously unpublishable works viable. Many people have wanted to write, but have been unable to find publishers due to the high cost of getting to market on paper. A work may be sold on the network for just pennies, and achieve market success by selling many more copies than could have been achieved by the high priced paper alternative. The success of electronic encyclopedias and the demise of Encyclopedia Britannica is evidence of this. Banks could allow people to upload information onto the net and then manage the resultant financial transactions for them. If there aren't very many, the maximum loss to the bank is very small. Of course, electronic cash and micropayment technology mean that a bank is not strictly necessary, but for many people, it may smooth the road.

Virtual business centres

Their exposure to the detailed financial affairs of the community puts banks in a privileged position for identifying potential markets. They could therefore act as co-ordinators for virtual companies and co-operatives. Building on the knowledge guilds, they could broker the skills of their many customers to existing virtual companies and link people together to address business needs not addressed by existing companies, or where existing companies are inadequate or inefficient. In this way, short-term contractors, who may come to dominate the employment community, can be efficiently utilised to everyone's gain. The employees win by getting more lucrative work, their customers get more efficient services at lower cost, and the banks laugh to themselves.

Future of the stock market

In the next 10 years, we will probably see a factor of 1000 improvement in computer speed and memory capacity. In parallel with hardware development, there are numerous research forays into software techniques that might yield further factors of 10 in the execution speed of programs. Tasks that used to take a second will be reduced to a millisecond. As if this impact were not enough, software will very soon be able to make logical deductions from the flood of information on the internet, not just from Reuters or Bloomberg, but from anywhere. It will be able to assess the quality and integrity of the data, correlate it with other data, run models, infer likely other events and make buy or sell recommendations. Much dealing will still be done automatically subject to human-imposed restrictions, and the speed and quality of this dealing could far exceed current capability.

Which brings problems…

Firstly, the speed of light is fast but finite. With these huge processing speeds, computers will be able to make decisions within microseconds of receiving information. Differences in distance from the information source become increasingly important. Being just 200m closer to the Bank of England makes a one microsecond difference to the time of arrival of information on interest rates – insignificant to a human, but long enough for a fast computer to buy or sell before competitors even receive the information. As speeds increase further over the following years, the significant distance drops. This effect will cause great unfairness according to geographic proximity to important sources. There are two obvious outcomes. Either there is a strong premium on being closest, with rises in property values near key sources, or network operators could be asked to provide guaranteed simultaneous delivery of information. The latter is entirely technically feasible but would need regulation, otherwise users could simply use alternative networks.
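The arithmetic behind that 200m figure is easy to check. Assuming signals travel through fibre at roughly two-thirds of the vacuum speed of light (as noted below), a quick sketch:

```python
C = 299_792_458          # speed of light in vacuum, m/s
FIBRE = C * 2 / 3        # rough signal speed in optical fibre

for metres in (200, 20, 2):      # head start over a competitor
    print(f"{metres:>4} m closer: {metres / FIBRE * 1e6:.2f} us advantage in fibre, "
          f"{metres / C * 1e6:.2f} us in free space")
```

So 200m of fibre is worth about a microsecond, and each generation of faster machines shrinks the distance that matters.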

Secondly, exactly simultaneous processing will cause problems. If many requests for transactions arrive at exactly the same moment, computers or networks have to give priority in some way. This is bound to be a source of contention. Also, simultaneous events can often cause malfunctions, as was demonstrated perfectly at the launch of Big Bang. Information waves caused by such events are a network phenomenon that could potentially crash networks.

Such a delay-sensitive system may dictate network technology. Direct transmission through the air by means of radio or infrared (optical wireless) would be faster than routing signals through fibres that take a more tortuous route, especially since the speed of light in fibre is only about two thirds of that in air.

Ultimately, there is a final solution if speed of computing increases so far that transmission delay is too big a problem. The processing engines could actually be shared, with all the deals and information processing taking place in a central computer, using massive parallelism. It would be possible to construct such a machine that treated each subscribing company fairly.

An interesting future side effect of all this is that the predicted flood of people into the countryside may be averted. Even though people can work from anywhere, their computers have to be geographically very close to the information centres, i.e. the City. Automated dealing has to live in the City; human-based dealing can work from anywhere. If people and machines have to work together, perhaps they must both work in the City.

Consumer dealing

The stock exchange long since stopped being a trading floor with scraps of paper and became a distributed computer environment – it effectively moved into cyberspace. The deals still take place, but in cyberspace. There are no virtual environments yet, but other tools such as automated buying and selling already exist. These computers are becoming smarter and exist in cyberspace every bit as much as the people do. As a result, there is more automated analysis, easier visualisation and more computer-assisted dealing. People will be able to see which shares are doing well, spot trends and act on their computer's advice at a button push. Markets will grow for tools to profit from shares, whether they be dealing software, advice services or visualisation software.

However, as we see more people buying personal access to share dealing and software to determine best buys, or even to automatically buy or sell on certain clues, we will see some very negative behaviours. Firstly, traffic will be highly correlated if personal computers can all act on the same information at the same time. We will see information waves, and also enormous swings in share prices. Most private individuals will suffer because of this, while institutions and individuals with better software will benefit. This is because prices will rise and fall simply because of the correlated activity of the automated software and not because of any real effects related to the shares themselves. Institutions may have to limit private share transactions to control this problem, but can also make a lot of money from modelling the private software and thus determining in advance what the recommendations and actions will be, capitalising enormously on the resultant share movements, and indeed even stimulating them. Of course, if this problem is generally perceived by the share dealing public, the AI software will not take off so the problem will not arise. What is more likely is that such software will sell in limited quantities, causing the effects to be significant, but not destroying the markets.
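A toy model makes the correlation effect easy to see: if many private copies of the same software react identically to one piece of news, the aggregate order flow, and hence the price swing, scales with the installed base rather than with any real change in the company's value. A minimal sketch with made-up numbers:

```python
import random

def price_move(n_agents: int, same_software_share: float,
               impact_per_order: float = 0.01) -> float:
    """Toy market: correlated agents all buy on the same signal;
    the rest trade independently at random."""
    correlated = int(n_agents * same_software_share)
    independent = n_agents - correlated
    orders = correlated                                   # all buy together
    orders += sum(random.choice((-1, 1)) for _ in range(independent))
    return orders * impact_per_order

random.seed(1)
for share in (0.0, 0.3, 0.7):
    print(f"software share {share:.0%}: price move {price_move(10_000, share):+.1f}")
```

With no common software the random orders largely cancel; at a 70% share the same piece of news produces a large coherent swing, for no fundamental reason at all.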

A money making scam is thus apparent. A company need only write a piece of reasonably good AI share portfolio management software for it to capture a fraction of the available market. The company writing it will of course understand how it works and what the effects of a piece of information will be (which they will receive at the same time), and will thus be able to predict the buying or selling activity of the subscribers. If they were then to produce another service which makes recommendations, they would have even more notice of an effect and be able to influence prices directly. They would then be in the position of the top market forecasters who know their advice will be self-fulfilling. This is neither insider dealing nor fraud, and of course once the software captures a significant share, the quality of its advice would be very high, decoupling share performance from the real world. Only the last people to react would lose out, paying the most or selling for the least as the price is restored to 'correct' by the stock exchange, and of course even this is predictable to a point. The fastest will profit most.

The most significant factor in this is the proportion of share dealing influenced by that company's software. The problem is that software markets tend to be dominated by just two or three companies, and the nature of this type of software is that there is strong positive reinforcement for the company with the biggest influence, which could quickly lead to a virtual monopoly. Also, it really doesn't matter whether the software is on the visualisation side or the AI side. Each can have a predictability associated with it.

It is interesting to contemplate the effects this widespread automated dealing would have on the stock market. Black Monday is unlikely to happen again as a result of computer activity within the City, but it certainly looks like prices will occasionally become decoupled from actual value, and price swings will become more significant. Of course, much money can be made by predicting the swings or getting access to the software-critical information before someone else, so we may see a need for equalised delivery services. Without equalised delivery, assuming a continuum of time, those closest to the dealing point will be able to buy or sell quicker, and since the swings could be extremely rapid, this would be very important. Dealers would have to have price information immediately, and of course the finite speed of light does not permit this. If dealing time is quantised, i.e. share prices are updated at fixed intervals, the duration of the interval becomes all-important, strongly affecting the nature of the market, i.e. whether everyone in that interval pays the same or the first to act gains.

Also of interest is the possibility of agents acting on behalf of many people, negotiating amongst themselves to increase the price of a company's shares, and then selling at a pre-negotiated time or signal.

Such automated systems would also be potentially vulnerable to false information from people or agents hoping to capitalise on their correlated behaviour.

Legal problems are also likely. If I write, and sell to a company, a piece of AI based share dealing software which learns by itself how stock market fluctuations arise, and then commits a fraud such as insider dealing (I might not have explained the law, or the law may have changed since it was written), who would be liable?

 And ultimately

Finally, the 1970 sci-fi film Colossus: The Forbin Project considered a world where two massively powerful computers were each assigned control of competing defence systems, each side hoping to gain the edge. After a brief period of cultural exchange, mutual education and negotiation between the machines, they both decided to co-operate rather than compete, and to hold all mankind at nuclear gunpoint to prevent wars. In the City of the future, similar competition between massively intelligent supercomputers in share dealing may have equally interesting consequences. Will they all just agree a fixed price and see the market stagnate instantly, or could the system result in economic chaos with massive fluctuations? Perhaps we humans can't predict how machines much smarter than us would behave. We may just have to wait and see.

End of original blog piece

 

 

The future of cleaning

I’ve been thinking a bit about cleaning for various customers over the last few years. I won’t bother this time with the various self-cleaning fabrics, the fancy new ultrasonic bubble washing machines, or ultraviolet sterilization for hospitals, even though those are all very important areas.  I won’t even focus on using your old sonic toothbrush heads in warm water with a little detergent to clean the trickier areas of your porcelain collectibles, though that does work much better than I thought it would.

I will instead introduce a new idea for the age of internet of things.

When you put your clothes into a future washing machine, it will also debug, back up, update and run all the antivirus and other security routines to sanitize the IoT stuff in them.

You might also have a box with the same functions into which you can put your portable devices or other things that can't be washed.

The trouble with internet of things, the new name for the extremely old idea of chips in everything, is that you can put chips in everything, and there is always some reason for doing so, even if it’s only for marking it for ownership purposes. Mostly there are numerous other reasons so you might even find many chips or functions running on a single object. You can’t even keep up with all the usernames and passwords and operating system updates for the few devices you already own. Having hundreds or thousands of them will be impossible if there isn’t an easy way of electronically sanitizing them and updating them. Some can be maintained via the cloud, and you’ll have some apps for looking after some subgroups of them. But some of those devices might well be in parts of your home where the signals don’t penetrate easily. Some will only be used rarely. Some will use batteries that run down and get replaced. Others will be out of date for other reasons. Having a single central device that you can use to process them will be useful.

The washing machine will likely be networked anyway for various functions such as maintenance, energy negotiations and program downloads for special garments. It makes sense to add electronic processing for the garments too. They will be in the machine quite a long time so download speed shouldn’t be a problem, and each part of the garment comes close to a transmitter or sensor each time it is spun around.

A simple box is easy to understand and easy to use too. It might need ports to plug into but more likely wireless or optical connections would be used. The box could electromagnetically shield the device from other interference or security infiltration during processing to make sure it comes out clean and safe and malware free as well as fully updated. A common box means only having to program your preferences once too.
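In code terms, the box is just a loop over whatever devices it detects. Something like this hypothetical Python sketch; every device operation here is a placeholder name, since no such standard API exists yet:

```python
class DummyDevice:
    """Stand-in for a detected gadget or garment tag (hypothetical API)."""
    ident = "sock-42"
    def shield(self): pass                      # isolate from outside signals
    def unshield(self): pass
    def backup(self): return "snapshot-001"
    def firmware_outdated(self): return True
    def update_firmware(self): pass
    def scan_for_malware(self): return False
    def restore_clean_image(self): pass
    def apply_owner_preferences(self): pass     # programmed once, applied to all

def sanitize(device) -> dict:
    """One processing cycle for a device placed in the box."""
    report = {"id": device.ident, "actions": []}
    device.shield()                             # shield during processing
    report["backup"] = device.backup()          # snapshot state first
    if device.firmware_outdated():
        device.update_firmware()
        report["actions"].append("updated")
    if device.scan_for_malware():
        device.restore_clean_image()
        report["actions"].append("cleaned")
    device.apply_owner_preferences()
    device.unshield()
    return report

def run_box(devices):
    """Process everything placed in the box and summarise the results."""
    return [sanitize(d) for d in devices]

print(run_box([DummyDevice()]))
```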

There would still be some devices that can’t be processed either in a box or in a washing machine. Examples such as smart paints or smart light bulbs or smart fuses would all be easier to process using networked connections, and they may well be. Some might prefer a slightly more individual approach, so pointing a mobile device at them would single them out from others in the vicinity. This sort of approach would also allow easier interrogation of the current state, diagnostics or inspection.

Whatever way internet of things goes, cleaning will take on a new and important dimension. We already do it as routine PC maintenance but removing malware and updating software will soon become a part of our whole house cleaning routine.

The future of beetles

Onto B then.

One of the first ‘facts’ I ever learned about nature was that there were a million species of beetle. In the Google age, we know that ‘scientists estimate there are between 4 and 8 million’. Well, still lots then.

Technology lets us control them. Beetles provide a nice platform to glue electronics onto so they tend to fall victim to cybernetics experiments. The important factor is that beetles come with a lot of built-in capability that is difficult or expensive to build using current technology. If they can be guided remotely by over-riding their own impulses or even misleading their sensors, then they can be used to take sensors into places that are otherwise hard to penetrate. This could be for finding trapped people after an earthquake, or getting a dab of nerve gas onto a president. The former certainly tends to be the favored official purpose, but on the other hand, the fashionable word in technology circles this year is ‘nefarious’. I’ve read it more in the last year than the previous 50 years, albeit I hadn’t learned to read for some of those. It’s a good word. Perhaps I just have a mad scientist brain, but almost all of the uses I can think of for remote-controlled beetles are nefarious.

The first properly publicized experiment was in 2009, though I suspect there were many unofficial experiments before then:

http://www.technologyreview.com/news/411814/the-armys-remote-controlled-beetle/

There are assorted YouTube videos of such experiments too.

A more recent experiment:

http://www.wired.com/2015/03/watch-flying-remote-controlled-cyborg-bug/

http://www.telegraph.co.uk/news/science/science-news/11485231/Flying-beetle-remotely-controlled-by-scientists.html

Big beetles make it easier to do experiments, since they can carry up to 20% of body weight as payload and it is obviously easier to find and connect to things on a bigger insect, but once the techniques are well-developed and miniaturization has integrated things down to a single chip with low power consumption, we should expect great things.

For example, a cloud of redundant smart dust would make it easier to connect to various parts of a beetle just by getting it to take flight in the cloud. Bits of dust would stick to it and self-organisation principles and local positioning can then be used to arrange and identify it all nicely to enable control. This would allow large numbers of beetles to be processed and hijacked, ideal for mad scientists to be more time efficient. Some dust could be designed to burrow into the beetle to connect to inner parts, or into the brain, which obviously would please the mad scientists even more. Again, local positioning systems would be advantageous.

Then it gets more fun. A beetle has its own sensors, but signals from those could be enhanced or tweaked via cloud-based AI so that it can become a super-beetle. Beetles traditionally don’t have very large brains, so they can be added to remotely too. That doesn’t have to be using AI either. As we can also connect to other animals now, and some of those animals might have very useful instincts or skills, then why not connect a rat brain into the beetle? It would make a good team for exploring. The beetle can do the aerial maneuvers and the rat can control it once it lands, and we all know how good rats are at learning mazes. Our mad scientist friend might then swap over the management system to another creature with a more vindictive streak for the final assault and nerve gas delivery.

So, Coleoptera Nefarius then. That’s the cool new beetle on the block. And its nicer but underemployed twin Coleoptera Benignus I suppose.

 

The future of air

Time for a second alphabetic ‘The future of’ set. Air is a good starter.

Air is mostly a mixture of gases, mainly nitrogen and oxygen, but it also contains a lot of suspended dust, pollen and other particulates, flying creatures such as insects and birds, and of course bacteria and viruses. These days we also have a lot of radio waves, optical signals, and the cyber-content carried on them. Air isn’t as empty as it seems. But it is getting busier all the time.

Internet-of-things, location-based marketing data and other location-based services and exchanges will fill the air digitally with fixed and wandering data. I called that digital air when I wrote a full technical paper on it and I don’t intend to repeat it all now a decade later. Some of the ideas have made it into reality, many are still waiting for marketers and app writers to catch up.

The most significant recent addition is drones. There are already lots of them, in a wide range of sizes from insect size to aeroplane size. Some are toys, some are airborne cameras for aerial photography, monitoring and surveillance, and increasingly they are appearing for sports photography and tracking or other leisure pursuits. We will see a lot more of them in coming years. Drone-based delivery is being explored too, though I am skeptical of its likely success in built up domestic areas.

Personal swarms of follower drones will become common too. It’s already possible to have a drone follow you and keep you on video, mainly for sports uses, but as drones become smaller, you may one day have a small swarm of tiny drones around you, recording video from many angles, so you will be able to recreate events from any time in an entire 3D area around you, a 3D permasuperselfie. These could also be extremely useful for military and policing purposes, and it will make the decline of privacy terminal. Almost everything going on in public in a built up environment will be recorded, and a great deal of what happens elsewhere too.

We may see lots of virtual objects or creatures once augmented reality develops a bit more. Some computer games will merge with real world environments, so we'll have aliens, zombies and various mythical creatures from any game populating our streets and skies. People may also use avatars that fly around like fairies or witches or aliens or mythical creatures, so they won't all be AI entities; some will have direct human control. And then there are buildings that might also have virtual appearances, some of which might include parts that float around, or even entire cities, possibly like the floating buildings and city areas in the game Bioshock Infinite.

Further in the future, it is possible that physical structures might sometimes levitate, perhaps using magnets, or lighter than air construction materials such as graphene foam. Plasma may also be used as a building material one day, albeit far in the future.

I’m bored with air now. Time for B.

Five new states of matter, maybe.

http://en.wikipedia.org/wiki/List_of_states_of_matter lists the currently known states of matter. I had an idea for five new ones – well, 2 anyway, with 3 variants. They might not be possible, but hey, faint heart ne'er won fair maid, and this is only a blog, not a paper from CERN. But coincidentally, it is CERN that is most likely to be able to make them.

A helium atom normally has 2 electrons, in a single shell. In a particle model, they go round and round. However… the five new states:

A: I suspect this one may already be known to be impossible, and is therefore just another daft idea. It's just a planar superatom. Suppose, instead of going round and round the same atom, the nuclei were arranged in groups of three in a nice triangle, with 6 electrons going round and round the triplet. They might not be terribly happy doing that unless at high pressure with some helpful EM fields adjusting the energy levels required, but with a little encouragement, who knows, it might last long enough to be classified as matter.

B: An alternative that might be more stable is a quad of nuclei in a tetrahedron, with 8 electrons. This is obviously a variant of A so probably doesn’t really qualify as a separate one. But let’s call it a 3D superatom for now, unless it already has a proper name.

C: Suppose helium nuclei are neatly arranged in a row at a precise distance apart, and two orthogonal electron beams are fired past them at a certain distance on either side, with the electrons spaced and phased very nicely, so that for a short period at least, each of the nuclei has two electrons, and the beam energy and nuclei spacing ensure that the electrons don't remain captive on one nucleus but are handed on to the next. You can do the difficult sums. To save you a few seconds: since the beams need to be orthogonal, you'll need multiple beams in the direction orthogonal to the row.

D: Another cheat, a variant of C, call it C1: you could make a few rows for a planar version with a grid of beams. It might be tricky to make the beams stay together for any distance, so you could only make a small flake of such matter, but I can't see an obvious reason why it would be impossible. Just tricky.

E: A second variant of C really, C2, with a small 3D speck of such nuclei and a grid of beams. Again, it works in my head.

Well, 5 new states of matter for you to play with. But here’s a free bonus idea:

The states don't have to actually exist to be useful. Even with just the descriptions above, you could do the maths for these. They might not be physically achievable, but that doesn't stop them existing in a virtual world with a hypothetical future civilization making them. And given that they have that specific mathematics, and ergo a whole range of theoretical chemistry, and therefore hyperelectronics, they could be used as simulated constructs in a Turing machine or actual constructs in quantum computers to achieve particular circuitry with particular virtues. You could certainly emulate it on a Yonck processor (see my blog on that). So you get a whole field of future computing and AI thrown in.

Blogging is all the fun with none of the hard work and admin. Perfect. And just in case someone does build it all, for the record, you saw it here first.

Technology 2040: Technotopia denied by human nature

This is a reblog of the Business Weekly piece I wrote for their 25th anniversary.

It’s essentially a very compact overview of the enormous scope for technology progress, followed by a reality check as we start filtering that potential through very imperfect human nature and systems.

25 years is a long time in technology, a little less than a third of a lifetime. For the first third, you’re stuck having to live with primitive technology. Then in the middle third it gets a lot better. Then for the last third, you’re mainly trying to keep up and understand it, still using the stuff you learned in the middle third.

The technology we are using today is pretty much along the lines of what we expected in 1990, 25 years ago. Only a few details are different. We don't have 2Gbit/s to the home yet, and AI is certainly taking its time to reach human level intelligence, let alone consciousness, but apart from that, we're still on course. Technology is extremely predictable. Perhaps the biggest surprise of all is just how few surprises there have been.

The next 25 years might be just as predictable. We already know some of the highlights for the coming years – virtual reality, augmented reality, 3D printing, advanced AI and conscious computers, graphene based materials, widespread Internet of Things, connections to the nervous system and the brain, more use of biometrics, active contact lenses and digital jewellery, use of the skin as an IT platform, smart materials, and that’s just IT – there will be similarly big developments in every other field too. All of these will develop much further than the primitive hints we see today, and will form much of the technology foundation for everyday life in 2040.

For me the most exciting trend will be the convergence of man and machine, as our nervous system becomes just another IT domain, our brains get enhanced by external IT and better biotech is enabled via nanotechnology, allowing IT to be incorporated into drugs and their delivery systems as well as diagnostic tools. This early stage transhumanism will occur in parallel with enhanced genetic manipulation, development of sophisticated exoskeletons and smart drugs, and highlights another major trend, which is that technology will increasingly feature in ethical debates. That will become a big issue. Sometimes the debates will be about morality, and religious battles will result. Sometimes different parts of the population or different countries will take opposing views and cultural or political battles will result. Trading one group’s interests and rights against another’s will not be easy. Tensions between left and right wing views may well become even higher than they already are today. One man’s security is another man’s oppression.

There will certainly be many fantastic benefits from improving technology. We'll live longer, healthier lives, and the steady economic growth from improving technology will make the vast majority of people financially comfortable (2.5% real growth sustained for 25 years would increase the economy by 85%). But it won't be paradise. All those conflicts over whether we should or shouldn't use technology in particular ways will guarantee frequent demonstrations. Misuses of tech by criminals, terrorists or ethically challenged companies will severely erode the benefits. There will still be a mix of good and bad. We'll have fixed some problems and created some new ones.
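That growth figure is just compound interest, easy to verify:

```python
# 2.5% real growth compounded over 25 years
growth = 1.025 ** 25 - 1
print(f"Cumulative growth: {growth:.0%}")   # ~85%
```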

The technology change is exciting in many ways, but for me, the greatest significance is that towards the end of the next 25 years, we will reach the end of the industrial revolution and enter a new age. The industrial revolution lasted hundreds of years, during which engineers harnessed scientific breakthroughs and their own ingenuity to advance technology. Once we create AI smarter than humans, the dependence on human science and ingenuity ends. Humans begin to lose both understanding and control. Thereafter, we will only be passengers. At first, we'll be paying passengers in a taxi, deciding the direction of travel or destination, but it won't be long before the forces of the singularity replace that taxi service with AIs deciding for themselves which routes to offer us, and running many more for their own culture, to which we may not be invited. That won't happen overnight, but it will happen quickly. By 2040, that trend may already be unstoppable.

Meanwhile, technology used by humans will demonstrate the diversity and consequences of human nature, for good and bad. We will have some choice of how to use technology, and a certain amount of individual freedom, but the big decisions will be made by sheer population numbers and statistics. Terrorists, nutters and pressure groups will harness asymmetry and vulnerabilities to cause mayhem. Tribal differences and conflicts between demographic, religious, political and other ideological groups will ensure that advancing technology will be used to increase the power of social conflict. Authorities will want to enforce and maintain control and security, so drones, biometrics, advanced sensor miniaturisation and networking will extend and magnify surveillance and greater restrictions will be imposed, while freedom and privacy will evaporate. State oppression is sadly as likely an outcome of advancing technology as any utopian dream. Increasing automation will force a redesign of capitalism. Transhumanism will begin. People will demand more control over their own and their children’s genetics, extra features for their brains and nervous systems. To prevent rebellion, authorities will have little choice but to permit leisure use of smart drugs, virtual escapism, a re-scoping of consciousness. Human nature itself will be put up for redesign.

We may not like this restricted, filtered, politically managed potential offered by future technology. It offers utopia, but only in a theoretical way. Human nature ensures that utopia will not be the actual result. That in turn means that we will need strong and wise leadership, stronger and wiser than we have seen of late to get the best without also getting the worst.

The next 25 years will be arguably the most important in human history. It will be the time when people will have to decide whether we want to live together in prosperity, nurturing and mutual respect, or to use technology to fight, oppress and exploit one another, with the inevitable restrictions and controls that would cause. Sadly, the fine engineering and scientist minds that have got us this far will gradually be taken out of that decision process.

Powering electric vehicles in the city

Simple stuff today just to stop my brain seizing up, nothing terribly new.

Gridlock is a term usually used to describe interlocking traffic jams. But think about a canal lock, used to separate different levels of a canal. A grid lock could be used to manage the different levels of stored and kinetic energy within a transport grid, keeping energy local as far as possible to avoid transmission losses, and transferring it between different parts of the grid when necessary.

Formula 1 racing cars have energy recovery systems that convert kinetic energy to stored electrical energy during braking – Kinetic Energy Recovery System (KERS). In principle, energy could be shared between members of a race team by transmitting it from one car to another instead of simply storing it on board. For a city-wide system, that makes even more sense. There will always be some vehicles coasting, some braking, some accelerating and some stopped. Storing the energy on board is fine, but requires large capacitor banks or batteries, and that adds very significant cost. If an electrical grid allowed the energy to be moved around between vehicles, each vehicle would only need much smaller storage so costs would fall.

I am very much in favor of powering electric vehicles using inductive pads on the road surface to transmit energy via coils on the car underside as the vehicles pass over them. Again, this means that vehicles can manage with small batteries or capacitor banks. Since these are otherwise a large part of the cost, it makes electric transport much more cost-effective. The pads on the road surface could be quite thin, making them unattractive to metal thieves, and perhaps ultimately could be made of graphene once that is cheap to produce.

Moving energy among the many coils only needs conventional electrical grid technology. Peer-to-peer electricity generation business models are developing too, selling energy between households without the energy companies taking the lion's share. Electricity can even be packetised, by writing an address and a header with details of the sender account and the quantity of energy in the following packet. Since overall energy use will fluctuate somewhat, the infrastructure also needs some local battery storage to hold energy surpluses and feed them back into accelerating vehicles as required. If even that isn't sufficient capacity, then the grid might open its grid locks to overflow larger surpluses onto other regions of the city or onto the main grid. Usually however, there would be a net inflow of energy from the main grid to power all the vehicles, so transmission in the reverse direction would be only occasional.
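Packetised electricity would look much like a network packet: a header identifying the sender, the receiver and the quantity, followed by the energy 'payload' delivered over the next interval. A hypothetical format, sketched in Python:

```python
from dataclasses import dataclass

@dataclass
class EnergyPacket:
    """Hypothetical header for one metered burst of energy on the grid."""
    sender_account: str     # who is selling or feeding in
    receiver_id: str        # road coil segment or vehicle taking the energy
    joules: float           # quantity promised in the following burst
    price_per_kwh: float    # agreed tariff, for peer-to-peer settlement

    def cost(self) -> float:
        """Settlement value of this packet."""
        return self.joules / 3.6e6 * self.price_per_kwh

pkt = EnergyPacket("household-17", "road-coil-0042", 250_000, 0.12)
print(f"{pkt.joules / 3.6e6:.3f} kWh, settles at {pkt.cost():.4f}")
```

Routing and accounting then reduce to familiar networking problems, which is the point of the analogy.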

Such a system keeps most energy local, reducing transmission losses and simplifying signalling, whilst allowing local energy producers to be included and enabling storage for renewable energy. As one traffic stream slows, another can recycle that same energy to accelerate. It reduces the environmental demands of running a transport system, so has both cost and environmental benefits.

 

 

Increasing internet capacity: electron pipes

The electron pipe is a slightly misnamed high speed comms solution that would make optical fibre look like two bean cans and a bit of loose string. I invented it in 1990, but it still remains in the future since we can't build it yet, and it might not even be possible – some of the physics is in doubt. The idea is to use an evacuated tube and send a precision-controlled beam of high energy particles down it, instead of crude floods of electrons down a wire or photons down fibres. Here's a pathetic illustration:

[Image: Electron pipe]

 

Initially I thought of using 1MeV electrons, then considered that larger particles such as neutrons or protons or even ionised atoms might be better, though neutrons would certainly be harder to control. The wavelength of 1MeV electrons would be pretty small, allowing very high frequency signals and data rates, many times what is possible with visible photons down fibres. Whether this could be made to work over long distances is questionable, but over short distances it should be feasible and might be useful for high speed chip interconnects.

The energy of the beam could be made a lot higher, increasing bandwidth, but 1MeV seemed a reasonable start point, offering a million times more bandwidth than fibre.

The Problem

Predictions for memory, longer term storage, cloud service demands and computing speeds are already heading towards fibre limits when millions of users are sharing single fibres. Although the limits won’t be reached soon, it is useful to have a technology in the R&D pipeline that can extend the life of the internet after fibre fills up, to avoid costs rising. If communication is not to become a major bottleneck (even assuming we can achieve these rates by then), new means of transmission need to be found.

The Solution

A way must be found to utilise higher frequency entities than light. The obvious candidates are either gamma rays or 'elementary' particles such as electrons, protons and their relatives. Planck's law shows that frequency is related to energy. A 1.3µm photon has a frequency of 2.3 x 10^14 Hz. By contrast, 1MeV gives a frequency of 2.4 x 10^20, a factor of a million increase in bandwidth, assuming it can be used (much higher energies should be feasible if higher bandwidth is needed; 10GeV energies would give 2.4 x 10^24). An 'electron pipe' containing a beam of high energy electrons may therefore offer a longer term solution to the bandwidth bottleneck. Electrons are easily accelerated and contained and also reasonably well understood. The electron beam could be prevented from colliding with the pipe walls by strong magnetic fields, which may become practical in the field through progress in superconductivity. Such a system may well be feasible. Certainly prospects of data rates of these orders are appealing.
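Those frequencies follow directly from the Planck relation f = E/h. Checking the numbers:

```python
H = 6.62607015e-34     # Planck constant, J*s
EV = 1.602176634e-19   # one electronvolt in joules
C = 299_792_458        # speed of light, m/s

print(f"1.3 um photon: {C / 1.3e-6:.2e} Hz")     # ~2.3e14 Hz
print(f"1 MeV:  {1e6 * EV / H:.2e} Hz")          # ~2.4e20 Hz
print(f"10 GeV: {10e9 * EV / H:.2e} Hz")         # ~2.4e24 Hz
```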

Lots of R&D would be needed to develop such communication systems. At first glance, they would seem to be more suited to high speed core network links, where the presumably high costs could be justified. Obvious problems exist which need to be studied, such as mechanisms for ultra high speed modulation and detection of the signals. If the problems can be solved, the rewards are high. The optical ether idea suffers from bandwidth constraint problems. Adding factors of 10^6 – 10^10 on top of this may make a difference!

 

How to decide green policies

Many people in officialdom seem to love putting ticks in boxes. Apparently, once all the boxes are ticked, a task can be put in the 'mission accomplished' cupboard and forgotten about. So, watching some of the recent political debate in the run-up to our UK election, it occurred to me that there must be groups of people discussing ideas for policies and then having meetings to decide whether they tick the right boxes to be included in a manifesto. I had an amusing time thinking about how such a meeting might go for the Green Party. A little preamble first.

I could write about any of the UK parties I guess. Depending on your choice of media nicknames, we have the Nasty Party, the Fruitcake Racist Party, the Pedophile Empathy Party, the Pedophile and Women Molesting Party, the National Suicide Party (though they get their acronym in the wrong order) and a few Invisible Parties. OK, I invented some of those based on recent news stories of assorted facts and allegations and make no assertion of any truth in any of them whatsoever. The Greens are trickier to nickname – ‘The Poverty and Oppression Maximization, Environmental Destruction, Economic Collapse, Anti-science, Anti-fun and General Misery Party’ is a bit of a mouthful. I like having greens around, just so long as they never win control. No matter how stupid a mistake I might ever make, I’ll always know that greens would have made a worse one.

So what would a green policy development meeting be like? I'll make the obvious assumption that the policies don't all come from the Green MP. Like any party, there are local groups of people, presumably mostly green types in the wider sense of the word, who produce ideas to feed up the ladder. Many won't even belong to any official party, but still think of themselves as green. Some will have an interest mainly in socialism, some more in environmentalism, and most will be a blend of the two. And to be fair, most of them will be perfectly nice people who want to make the world a better place, just like the rest of us. I've met a lot of greens, and we do agree at least on motive, even if I think they are wrong on most of their ideas about how to achieve the goals. We all want world peace and justice, a healthy environment, and to solve poverty and oppression. The main difference between us is deciding how best to achieve all that.

So I'll look at green debate generally as a source of the likely discussions, rather than any actual Green Party manifesto, even though that still looks pretty scary. To avoid litigation threats and keep my bank balance intact, I'll state that this is only a personal imagining of what might go into such green meetings, and you can decide for yourself how much it matches up to the reality. It is possible that the actual Green Party may not run this way, and might not support some of the policies I discuss, which are included in this piece based on wider green debate, not the Green Party itself. Legal disclaimers in place, I'll get on with my imagining:

Perhaps there might be some general discussion over the welcome coffee about how awful it is that some nasty capitalist types make money and there might be economic growth, how terrible it is that scientists keep discovering things and technologists keep developing them, how awful it is that people are allowed to disbelieve in a global warming catastrophe and still be allowed to roam free and how there should be a beautiful world one day where a green elite is in charge, the population has been culled down to a billion or two and everyone left has to do everything they say on pain of imprisonment or death. After coffee, the group migrates to a few nice recycled paper flip-charts to start filling them with brainstormed suggestions. Then they have to tick boxes for each suggestion to filter out the ones not dumb enough to qualify. Then make a nice summary page with the ones that get all the boxes ticked. So what boxes do they need? And I guess I ought to give a few real examples as evidence.

Environmental destruction has to be the first one. Greens must really hate the environment, since the majority of green policies damage it, but they manage to get them implemented via cunning marketing to useful idiots, persuading them that the environment will benefit. The idiots implement them thinking the environment will benefit, but it suffers. Some quick examples:

Wind turbines are a big favorite of greens, but when they are planted on peat bogs in Scotland, the necessary access roads cause the bogs to dry out, emitting vast quantities of CO2 and destroying the peat ecosystem. Scottish wind turbines also kill eagles and other birds.

In the Far East, many bogs have been drained to grow palm oil for biofuels, another green favorite that they’ve managed to squeeze into EU law. Again, vast quantities of CO2, and again ecosystem destruction.

Forests around the world have been cut down to make room for palm oil plantations too, displacing local people, destroying an ecosystem to replace it with one to meet green fuel targets.

Still more forests have been cut down so that new ones can be planted to cash in on carbon offset schemes, keeping corporate greens happy that they can keep flying to all those green conferences without feeling guilty. More people displaced, more destruction.

Staying with biofuels, a lot of organic waste from agriculture is converted to biofuel instead of being ploughed back into the land. Soil structure therefore deteriorates, damaging the ecosystem and future land quality. Any CO2 saved by making the biofuel is offset by the carbon that would otherwise have been locked up in soil organic matter, so there isn't much benefit even there, but the damage remains.

Solar farms are proliferating in the UK, often occupying prime agricultural land that really ought to be growing food for the many people in the world still suffering from malnutrition. The same solar panels could have been sent to otherwise useless desert areas in a sunny country, where they would displace far more fossil fuel and save far more CO2 without reducing food production. Instead, people in many African countries still have to use the wood stoves greens favour as sustainable, which produce airborne particles that seriously damage health. Black carbon from open wood fires also contributes directly to warming.

Many of the above policy effects don't just tick the environmental destruction box, but also the next ones: poverty and oppression maximisation. Poverty increased directly as food prices rose when food was grown to be converted into biofuel. Biofuels as first implemented were a mind-numbingly stupid green policy. Very many of the world's poorest people have been forcibly pushed off their land and into even deeper poverty to make space to grow biofuel crops. Many have starved or suffered malnutrition. Entire ecosystems have been destroyed, forests replaced, and many animals pushed towards extinction by loss of habitat. More recently, even greens have realised the stupidity, and these policies are slowly being fixed.

Other green policies see economic development by poor people as a bad thing because it increases their environmental footprint. The poor are therefore kept poor. Again, their poverty means they can't use modern, efficient technology to cook or keep warm, so they have to chop down trees for wood to burn; removing trees damages soil integrity and worsens flooding, and burning them produces harmful particles and black carbon that increase warming. Furthermore, with too little money to buy proper food, some are forced to hunt or buy bushmeat, endangering animal species and helping viruses cross between closely related animal species and humans.

So a few more boxes appear. All the above policies achieved pretty much the opposite of what they presumably intended, assuming the people involved didn't actually want to destroy the world. Maybe a counterproductive box needs to be ticked too.

Counterproductive links well to another of the greens' apparent goals: economic collapse. They want to stop economic growth. They want to reduce obsolescence. Yet obsolescence is the force that drives ever faster progress towards devices that give us a high quality of life with far lower environmental impact, less resource use, lower energy use and less pollution. If you slow obsolescence down because green dogma says it is a bad thing, all those factors worsen. The economy also suffers. It suffers again if energy prices are deliberately pushed very high by assorted green levies such as carbon taxes or renewable energy subsidies. Those subsidies encourage more oppression of people who really don't want wind turbines nearby, causing them stress and health problems, disrupting the breeding cycles of small wild animals in the area and reducing the value of people's homes, while making the companies that employ them less able to compete internationally, so increasing bankruptcy and redundancy and creating even more poverty.

Meanwhile, the rich wind farm owners are given lots of money by poor people who are forced to buy their energy and pay higher taxes to fund the other half of the subsidy. The poor take all the costs; the rich take all the benefits. That could be another box to tick, since it seems pretty universal in green policy. So much for policies that are meant to be socialist! Green manifesto policies would make some of these problems far worse still. Business would be heavily loaded with extra costs and admin, and whatever profits it could still make would be confiscated to pay for the ridiculous spending plans. With a few Greens in power, the damage would be limited and survivable. If they were to win control, our economy would collapse totally in a rapidly accelerating debt spiral.

Greens hate science and technology, another possible box to tick. I once chatted to one of the Green leaders (I do go to environmental events sometimes if I think I can help steer things in a more logical direction) and was told 'the last thing we need is more science'. But it is science and technology that lets us live in extreme comfort today alongside a healthy environment. 100 years ago, pollution was terrible. Rivers caught fire. People died from breathing in a wide variety of pollutants. Today, we have clean water and clean air. Thanks to increasing CO2 levels – which certainly contribute to warming, though not as much as the warmist doom-mongers fear, and which also have many positive effects – there is more global greenery today than there was decades ago. Plants thrive as CO2 levels increase, growing faster and healthier, so we can grow more food and forests can recover faster from earlier green destruction.

The greens also apparently have a box for policies that 'prevent anyone having any fun'. Given their way, we'd be allowed no meat, our homes would all have to be dimly lit and freezing cold, and we'd have to walk everywhere or wait for buses in the rain. Those buses would still burn diesel, which kills thousands of people every year via inhalation of tiny particulates. When you got anywhere, you'd have to use ancient technologies that have to be fixed instead of replaced. You'd have to do stuff that doesn't use much energy, and that doesn't involve eating anything nice or going anywhere nice, because that would involve travel and travel is bad, except for greens, who can go to as many international conferences as they want.

So if the greens get their way, if people are dumb enough to fall for promises of infinite milk and honey for all, paid for by taxing 3 bankers, then the world we'd live in would very quickly have a devastated environment, a devastated economy, a massive transfer of wealth from the poor to a few rich people, enormous oppression, increasing poverty, decreasing health and no fun at all. In short, with all the above boxes ticked, the final summary box that gets a policy into the manifesto must be 'increases general misery'.

An interesting list of boxes to tick, really. It seems that all truly green policies must:

  1. Cause environmental destruction
  2. Increase poverty and oppression
  3. Be counterproductive
  4. Push towards economic collapse
  5. Make the poor suffer all the costs while the rich (and Green elite) reap the benefits
  6. Impede further science and technology development
  7. Prevent anyone having fun
  8. Lead to general misery

This can't actually be how they run their meetings, I suppose. Unless they get someone from outside with a working brain to tick the boxes, the participants would need both a basic understanding of the likely consequences of their proposals and malign intent; there is little evidence that any of them have that understanding, and they are mostly not malign. Greens are mostly quite nice people, even the ones in politics, and I do genuinely think they believe in what they are doing. Their hearts are usually in the right place; it's just that their brains are missing or malfunctioning. All the boxes still get ticked, just unintentionally.

I rest my case.

The IT dark age – The relapse

I long ago used a slide in my talks about the IT dark age, showing how we'd come through a period where engineers were in charge and it worked (early 90s), into an era where accountants had got hold of it and were misusing it (mid 90s), followed by a terrible period where administrators discovered it and used it in the worst ways possible (late 90s, early 00s). After that dark age, we started to emerge into an age of IT enlightenment, where the dumbest behaviours had hopefully been filtered out and we were starting to use IT correctly and reap the benefits.

Well, we've gone into relapse. We have entered a period of uncertain duration in which the hard-won wisdom we'd accumulated and handed down has been thrown in the bin by a new generation of engineers, accountants and administrators, and some extraordinarily stupid decisions and system designs are once again being made. The new design process is apparently quite straightforward: What task are we trying to solve? How can we achieve it in the least effective, least secure, most time-consuming, most annoying, most customer-loyalty-destroying way possible? Now, how fast can we implement that? Get to it!

If aliens landed and looked at some of the recent ways we have started to use IT, they'd conclude that this was all a green conspiracy, designed to make everyone so anti-technology that we'd be happy to throw hundreds of years of progress away and go back to the 16th century. Given how successfully greens have destroyed so much of the environment under the banner of protecting it, there is ample evidence that they really haven't a clue what they are doing. Worse still, gullible political and business leaders will cheerfully do the exact opposite of what they intend, as long as the right doublespeak is used when they're sold the policy.

The main Green laboratory in the UK is the previously nice seaside town of Brighton. Being an extreme socialist party, one that you might think would be a binperson's best friend, the Greens in charge nevertheless managed to force their binpeople to go on strike, turning what ought to be an environmental paradise into a stinking, litter-strewn cesspit for several weeks. They've also managed to create near-permanent traffic gridlock, as if to maximise the amount of air pollution and CO2 they can squeeze from the traffic.

More recently, they decided to replace their parking meters with the very latest IT. No longer do you have to reach into your pocket, push a few coins into a machine and carry a paper ticket all the way back to your car windscreen. Such a tedious process consumed up to a minute of your day, so it simply had to be replaced with proper modern technology. There are loads of IT solutions to pick from, but the Greens apparently went for the worst possible implementation, resulting in numerous press reports about how awful it is. IT should not be awful; it can and should be done in ways that are better in almost every way than old-fashioned systems. I rarely drive and go to Brighton very rarely, but I am still annoyed by incompetent or deliberate misuse of IT.

If I were to go there by car, I'd also have to use the Dartford Crossing, where again inappropriate IT has been used incompetently, this time to replace a tollbooth system that makes no economic sense in the first place; the government would be better off simply paying for the crossing directly. Instead, each person using it is likely to be fined if they don't know how the new system operates, and even those who do know must spend far more time and effort to pay than before. Again, it is a severe abuse of IT, conferring a tiny benefit on a tiny group of people at the expense of a significant extra load on very many people.

Another financial example is the migration to self-pay terminals in shops. In Stansted Airport's W H Smith a couple of days ago, I sat watching a long queue of people taking forever to buy newspapers. Instead of a few seconds handing over a coin and walking out, it was taking a minute or more to read menus, choose which buttons to touch, inspect papers to find barcodes, fumble for credit cards, tick some more boxes, check they hadn't left their boarding pass or paper behind, and finally leave. An assistant stood there idle, watching people struggle instead of serving them in a few seconds. I wanted a paper, but the long queue was sufficient deterrent, so they lost the sale. Who wins in such a situation? The staff who lost their jobs certainly didn't. I, as the customer, had no paper to read, so I didn't win. Given all the lost sales, I would be astonished if W H Smith were better off, so they didn't win either. The airport will likely make less from its take too. Even the terminal manufacturing industry only swaps one type of POS terminal for another with marginally different costs. I'm not knocking W H Smith; they are just one of loads of companies doing this now. But it isn't progress, it is going backwards.

When I arrived at my hotel, another electronic terminal was replacing a check-in assistant with a check-in terminal usage assistant. He was very friendly and helpful, but check-in wasn't any easier or faster for me, and the terminal design still needed him to be there because, like so many others, it was designed by people with zero understanding of how other people actually do things. Just like those ticket machines in rail stations that we all detest.

When I got to my room, the thermostat used a tiny LCD panel with tiny meaningless symbols, no backlight, and black text on a dark green background, in a dimly lit room. Even after searching out my reading glasses (I hadn't brought a torch with me), I couldn't see a thing on it, so I couldn't use the air conditioning. An on/off switch and a simple wheel with temperatures marked on it used to work perfectly well. If it ain't broke, don't do your very best to totally wreck it.

These are just a few recent examples, alongside other everyday IT abuses such as minute fonts and the frequent use of meaningless icons instead of straightforward text. IT is wonderful. We can make devices with absolutely superb capability for very little cost. We can make lives happier, better, easier, healthier, more prosperous, even more environmentally friendly.

Why then are so many people so intent on using advanced IT to drag us back into another dark age?