Category Archives: Computing

The future of electronic cash and value

 

Picture first, I’m told people like to see pics in blogs. This one is from 1998; only the title has changed since.

future electronic cash

Every once in a while I have to go to a bank. This time it was my 5th attempt to pay off a chunk of my Santander mortgage. I didn’t know all the account details for a web transfer so went to the Santander branch. Fail – they only take cash and cheques. Cash and what??? So I tried via internet banking. Entire transaction details plus security entered, THEN Fail – I exceeded what Barclays allows for their fast transfers. Tried again with a smaller amount and again all details and all security. Fail again, Santander can’t receive said transfers, try CHAPS. Tried CHAPS, said it was all fine, all hunkydory. Happy bunny. Double fail. It failed due to the amount exceeding a limit AND told me it had succeeded when it hadn’t. I then drove 12 miles to my Barclays branch, who eventually managed to do it, I think (though I haven’t checked that it worked yet).

It is 2015. Why the hell is it so hard for two world class banks to offer a service we should have been able to take for granted 20 years ago?

Today, I got tweeted about Ripple Labs and a nice blog that quotes their founder sympathising with my experience above and trying to solve it, with some success:

http://www.wfs.org/blogs/richard-samson/supermoney-new-wealth-beyond-banks-and-bitcoin

Ripple seems good as far as it goes, which is summarised in the blog, but do read the full original:

Basically the Ripple protocol “provides the ability for humans to confirm financial transactions without a central operator,” says Larsen. “This is major.” Bitcoin was the first technology to successfully bypass banks and other authorities as transaction validators, he points out, “but our method is much cheaper and takes only seconds rather than minutes.” And that’s just for starters. For example, “It also leverages the enormous power of banks and other financial institutions.”

The power of the value web stems from replacing archaic back-end systems with all their cumbersome delays and unnecessary costs. 

That’s great, I wish them the best of success. It is always nice to see new systems that are more efficient than the old ones, but the idea is early 1990s. Lots of IT people looked at phone billing systems and realised they managed to do for a penny what banks did for 65 pennies at the time, and telco business cases were developed to replace the banks with pretty much what Ripple tries to do. Those cases were never pursued, for a variety of reasons both business and regulatory, but the ideas were certainly understood and developed broadly at engineer level to include not only traditional cash forms but many that didn’t exist then and still don’t. Even Ripple can only process transactions in things that are equivalent to money, such as traditional currencies, electronic cash forms like bitcoin, sea shells or air-miles.

That much is easy, but some forms require other tokens to have value, such as personalized tokens. Some value varies according to queue lengths, time of day, or who is spending it and to whom. Some needs to be assignable, so you can give money that can only be used to purchase certain things, and may have a whole basket of conditions attached. Money is also only one form of value, and many forms of value are volatile, only existing at certain times and places in certain conditions for certain transactors. Aesthetic cash? Play money? IOUs? Favours? These are all a bit like cash but not necessarily tradable or exchangeable using simple digital transaction engines, because they carry emotional weighting as well as financial value. In the care economy, which is now thankfully starting to develop and is finally reaching concept critical mass, emotional value will become immensely important and it will have some tradable forms, though much will never be tradable. We understood all that then, but are still awaiting proper implementation. Most new startups on the web are old ideas finally being implemented, and Ripple is only a very partial implementation so far.

Here is one of my early blogs from 1998, using ideas we’d developed several years earlier that were no longer commercially sensitive – you’ll observe just how much banks have under-performed against what we expected of them, and what was entirely feasible using already known technology then:

Future of Money

 Ian Pearson, BT Labs, June 98

Already, people are buying things across the internet. Mostly, they hand over a credit card number, but some transactions already use electronic cash. The transactions are secure so the cash doesn’t go astray or disappear, nor can it easily be forged. In due course, using such cash will become an everyday occurrence for us all.

Also already, electronic cash based on smart cards has been trialled and found to work well. The BT form is called Mondex, but it is only one among several. These smart cards allow owners to ‘load’ the card with small amounts of money for use in transactions where small change would normally be used, paying bus fares, buying sweets etc. The cards are equivalent to a purse. But they can and eventually will allow much more. Of course, electronic cash doesn’t have to be held on a card. It can equally well be ‘stored’ in the network. Transactions then just require secure messaging across the network. Currently, the cost of this messaging makes it uneconomic for small transactions that the cards are aimed at, but in due course, this will become the more attractive option, especially since you no longer lose your cash when you lose the card.

When cash is digitised, it loses some of the restrictions of physical cash. Imagine a child has a cash card. Her parents can give her pocket money, dinner money, clothing allowance and so on. They can all be labelled separately, so that she can’t spend all her dinner money on chocolate. Electronic shopping can of course provide the information needed to enable the cash. She may have restrictions on how much of her pocket money she may spend on various items too. There is no reason why children couldn’t implement their own economies too, swapping tokens and IOUs. Of course, in the adult world this grows up into local exchange trading systems (LETS), where people exchange tokens too, a glorified babysitting circle. But these LETS don’t have to be just local; wider circles could be set up, even globally, to allow people to exchange services or information with each other.
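Just to make the labelling idea concrete, here is a minimal sketch of how a purse with labelled pots might behave. The categories, amounts and class design are invented purely for illustration:

```python
# Hypothetical sketch of labelled electronic pocket money.
# Category names and amounts are invented for illustration only.

class LabelledPurse:
    def __init__(self):
        # Each label holds its own balance and a set of permitted purchase types.
        self.pots = {}

    def load(self, label, amount, allowed_categories):
        self.pots[label] = {"balance": amount, "allowed": set(allowed_categories)}

    def spend(self, label, amount, category):
        pot = self.pots[label]
        if category not in pot["allowed"]:
            raise ValueError(f"{label} cannot be spent on {category}")
        if amount > pot["balance"]:
            raise ValueError(f"not enough {label}")
        pot["balance"] -= amount
        return pot["balance"]

purse = LabelledPurse()
purse.load("dinner money", 10.00, {"school meals"})
purse.load("pocket money", 5.00, {"sweets", "toys", "school meals"})

purse.spend("pocket money", 1.20, "sweets")      # allowed
# purse.spend("dinner money", 1.20, "sweets")    # would raise: wrong category
```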

Electronic cash can be versatile enough to allow for negotiable cash too. Credit may be exchanged just as cash and cash may be labelled with source. For instance, we may see celebrity cash, signed by the celebrity, worth more because they have used it. Cash may be labelled as tax paid, so those donations from cards to charities could automatically expand with the recovered tax. Alternatively, VAT could be recovered at point of sale.

With these advanced facilities, it becomes obvious that the cash needs to become better woven into taxation systems, as well as auditing and accounting systems. These functions can be much more streamlined as a result, with less human administration associated with money.

When ID verification is added to the transactions, we can guarantee who it is carrying out the transaction. We can then implement personal taxation, with people paying different amounts for the same goods. This would only work for certain types of purchase – for physical goods there would otherwise be a thriving black market.

But one of the best advantages of making cash digital is the seamlessness of international purchases. Even without a common official currency, the electronic cash systems will become de facto international standards. This will reduce the currency exchange tax we currently pay to the banks every time we travel to a different country, which can add up to as much as 25% for an overnight visit. This is one of the justifications often cited for European monetary union, but it is happening anyway in global e-commerce.

Future of banks

 Banks will have to change dramatically from today’s traditional institutions if they want to survive in the networked world. They are currently introducing internet banking to try to keep customers, but the move to digital electronic cash, held perhaps by the customer or an independent third party, will mean that the cash can be quite separate from the transaction agent. Cash does not need to be stored in a bank if records in secured databases anywhere can be digitally signed and authenticated. The customer may hold it on his own computer, or in a cyberspace vault elsewhere. With digital signatures and high network security, advanced software will put the customer firmly in control with access to any facility or service anywhere.

In fact, no-one need hold cash at all, or even move it around. Cash is just bits today, already electronic records. In the future, it will be an increasingly blurred entity, mixing credit, reputation, information, and simply promises into exchangeable tokens. My salary may be just a digitally signed certificate from BT yielding control of a certain amount of credit, just another signature on a long list as the credit migrates round the economy. The ‘promise to pay the bearer’ just becomes a complex series of serial promises. Nothing particularly new here, just more of what we already have. Any corporation or reputable individual may easily capture the bank’s role of keeping track of the credit. It is just one service among many that may leave the bank.
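To illustrate the ‘long list of signatures’ idea, here is a toy sketch of credit migrating as a chain of serial promises. Hashes stand in for real digital signatures purely for illustration; a genuine system would use public-key cryptography and proper identity checks, and all the names here are invented:

```python
# Toy sketch of credit migrating as a chain of serial 'promises'.
# Hashes stand in for real digital signatures for illustration only;
# a genuine system would use public-key signatures and identity checks.
import hashlib, json

def endorse(previous_entry, payer, payee, amount):
    """Append one signed-looking transfer to the chain."""
    body = {
        "prev": previous_entry["digest"] if previous_entry else None,
        "payer": payer,
        "payee": payee,
        "amount": amount,
    }
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "digest": digest}

salary = endorse(None, "BT", "Ian", 1000)          # employer's original promise
rent   = endorse(salary, "Ian", "Landlord", 400)   # part of the credit moves on
print(rent["prev"] == salary["digest"])            # True: each link refers back
```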

As the world becomes increasingly networked, the customer could thus retain complete control of the cash and its use, and could buy banking services on a transaction by transaction basis. For instance, I could employ one company to hold my cash securely and prevent its loss or forgery, while renting the cash out via another company to companies that want to borrow, keeping the bulk of the revenue for myself. Another company might manage my account, arrange transfers and so on, and deal with the taxation and auditing. I could probably get these done on my personal computer, but why have a dog and bark yourself?

The key is flexibility: none of these services need be fixed any more. Banks will not compete on the overall package, but on every aspect of service. Worse still (for the banks), some of their competitors will be just freeware agents. The whole of the finance industry will fragment. The banks that survive will almost by definition be very adaptable. Services will continue and be added to, but not within the rigid structures of today. Surviving banks should be able to compete for a share of the future market as well as anyone. They certainly have a head start in many of the required skills, and have the advantage of customer lethargy when it comes to changing to potentially better suppliers. Many of their customers will still value tradition and will not wish to use the better and cheaper facilities available on the network. So as always, it looks like there will be a balance.

Firstly, with large numbers of customers moving to the network for their banking services, banks must either cater for this market or become a niche operator, perhaps specialising in tradition, human service and even nostalgia. Most banks however will adapt well to network existence and will either be entirely network based, or maintain a high street presence to complement their network presence.

High Street banking

 Facilities in high street banking will echo this real world/cyberspace nature. It must be possible to access network facilities from within the banks, probably including those of competitors. The high street bank may therefore be more like shops today, selling wares from many suppliers, but with a strongly placed own brand. There is of course a niche for banks with no services of their own at all who just provide access to services from other suppliers. All they offer in addition is a convenient and pleasant place to access them, with some human assistance as appropriate.

Traditional service may sometimes be pushed as a differentiator, and human service is bound to attract many customers too. In an increasingly machine dominated world, actually having the right kind of real people may be significant value add.

But many banks will be bursting with high technology, either alongside or in place of people. Video terminals to access remote services, perhaps with translation to access foreign services. Biometric identification based on iris scans, fingerprints etc may be used to authenticate smart cards, passports or other legal documents before their use, or simply as a means of registering securely onto the network. High quality printers and electronic security embedding would enable banks to offer additional facilities like personal bank notes, usable as cash.

Of course, banks can compete in any financial service. Because the management of financial affairs gives them a good picture of many customers’ habits and preferences, they will be able to use this information to sell customer lists, identify market niches for new businesses, and predict the likely success of customers proposing to set up businesses.

As they try to stretch their brands into new territories, one area where they may be successful is information banking. People may use banks as the publishers of the future. Already knowledge guilds are emerging. Ultimately, any piece of information from any source can be marketed at very low publishing and distribution cost, making previously unpublishable works viable. Many people have wanted to write, but have been unable to find publishers due to the high cost of getting to market on paper. A work may be sold on the network for just pennies, and achieve market success by selling many more copies than could have been achieved by the high priced paper alternative. The success of electronic encyclopedias and the demise of Encyclopedia Britannica is evidence of this. Banks could allow people to upload information onto the net and then manage the resultant financial transactions for them. If there aren’t very many sales, the maximum loss to the bank is very small. Of course, electronic cash and micropayment technology mean that the bank is not strictly necessary, but for many, it may smooth the road.

Virtual business centres

Their exposure to the detailed financial affairs of the community puts banks in a privileged position for identifying potential markets. They could therefore act as co-ordinators for virtual companies and co-operatives. Building on the knowledge guilds, they could broker the skills of their many customers to existing virtual companies and link people together to address business needs not addressed by existing companies, or where existing companies are inadequate or inefficient. In this way, short-term contractors, who may dominate the employment community, can be efficiently utilised to everyone’s gain. The employees win by getting more lucrative work, their customers get more efficient services at lower cost, and the banks laugh to themselves.

Future of the stock market

In the next 10 years, we will probably see a factor of 1000 increase in computer speed and memory capacity. In parallel with hardware development, there are numerous research forays into software techniques that might yield further factors of 10 in the execution speed of programs. Tasks that used to take a second will be reduced to a millisecond. As if this impact were not enough, software will very soon be able to make logical deductions from the flood of information on the internet, not just from Reuters or Bloomberg, but from anywhere. It will be able to assess the quality and integrity of the data, correlate it with other data, run models, infer other likely events and make buy or sell recommendations. Much dealing will still be done automatically subject to human-imposed restrictions, and the speed and quality of this dealing could far exceed current capability.

Which brings problems…

Firstly, the speed of light is fast but finite. With these huge processing speeds, computers will be able to make decisions within microseconds of receiving information. Differences in distance from the information source become increasingly important. Being just 200m closer to the Bank of England makes around a microsecond of difference to the time of arrival of information on interest rates, a difference insignificant to a human but long enough for a fast computer to buy or sell before competitors even receive the information. As speeds increase further over the following years, the significant distance drops. This effect will cause great unfairness according to geographic proximity to important sources. There are two obvious outcomes. Either there will be a strong premium on being closest, with rises in property values near key sources, or perhaps network operators could be asked to provide guaranteed simultaneous delivery of information. This is entirely technically feasible but would need regulation, otherwise users could simply use alternative networks.

Secondly, exactly simultaneous processing will cause problems. If many requests for transactions arrive at exactly the same moment, computers or networks have to give priority in some way. This is bound to be a source of contention. Also, simultaneous events can often cause malfunctions, as was demonstrated perfectly at the launch of Big Bang. Information waves caused by such events are a network phenomenon that could potentially crash networks.

Such a delay-sensitive system may dictate network technology. Direct transmission through the air by means of radio or infrared (optical wireless) would be faster than routing signals through fibres that take a more tortuous route, especially since the speed of light in fibre is only about two thirds of that in air.
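To put rough numbers on the proximity effect, here is a back-of-envelope sketch (not part of the original piece, just the basic distance-over-speed arithmetic):

```python
# Back-of-envelope propagation delays for a 200 m difference in distance.
C_AIR   = 3.0e8   # m/s, speed of light in air (essentially vacuum speed)
C_FIBRE = 2.0e8   # m/s, roughly two thirds of c in optical fibre

distance = 200.0  # metres closer to the information source
print(f"air/free space: {distance / C_AIR * 1e6:.2f} microseconds head start")
print(f"optical fibre:  {distance / C_FIBRE * 1e6:.2f} microseconds head start")
# ~0.67 us through the air, ~1 us through fibre: ample time for a fast machine
# to act before a more distant competitor has even received the information.
```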

Ultimately, there is a final solution if speed of computing increases so far that transmission delay is too big a problem. The processing engines could actually be shared, with all the deals and information processing taking place in a central computer, using massive parallelism. It would be possible to construct such a machine that treated each subscribing company fairly.

An interesting future side effect of all this is that the predicted flood of people into the countryside may be averted. Even though people can work from anywhere, their computers have to be geographically very close to the information centres, i.e. the City. Automated dealing has to live in the city, human based dealing can work from anywhere. If people and machines have to work together, perhaps they must both work in the City.

Consumer dealing

The stock exchange long ago stopped being a trading floor with scraps of paper and became a distributed computer environment – it effectively moved into cyberspace. The deals still take place, but in cyberspace. There are no virtual environments yet, but other tools such as automated buying and selling already exist. These computers are becoming smarter and exist in cyberspace every bit as much as the people do. As a result, there is more automated analysis, easier visualisation and more computer assisted dealing. People will be able to see which shares are doing well, spot trends and act on their computer’s advice at a button push. Markets will grow for tools to profit from shares, whether they be dealing software, advice services or visualisation software.

However, as we see more people buying personal access to share dealing and software to determine best buys, or even to automatically buy or sell on certain clues, we will see some very negative behaviours. Firstly, traffic will be highly correlated if personal computers can all act on the same information at the same time. We will see information waves, and also enormous swings in share prices. Most private individuals will suffer because of this, while institutions and individuals with better software will benefit. This is because prices will rise and fall simply because of the correlated activity of the automated software and not because of any real effects related to the shares themselves. Institutions may have to limit private share transactions to control this problem, but can also make a lot of money from modelling the private software and thus determining in advance what the recommendations and actions will be, capitalising enormously on the resultant share movements, and indeed even stimulating them. Of course, if this problem is generally perceived by the share dealing public, the AI software will not take off so the problem will not arise. What is more likely is that such software will sell in limited quantities, causing the effects to be significant, but not destroying the markets.

A money making scam is thus apparent. A company need only write a piece of reasonably good AI share portfolio management software for it to capture a fraction of the available market. The company writing it will of course understand how it works and what the effects of a piece of information will be (which they will receive at the same time), and will thus be able to predict the buying or selling activity of the subscribers. If they were then to produce another service which makes recommendations, they would have even more notice of an effect and be able to influence prices directly. They would then be in the position of the top market forecasters who know their advice will be self-fulfilling. This is neither insider dealing nor fraud, and of course once the software captures a significant share, the quality of its advice would be very high, decoupling share performance from the real world. Only the last people to react would lose out, paying the most or selling for the least, as the price is restored to ‘correct’ by the stock exchange, and of course even this is predictable to a point. The fastest will profit most.

The most significant factor in this is the proportion of share dealing influenced by that company’s software. The problem is that software markets tend to be dominated by just two or three companies, and the nature of this type of software is that there is strong positive reinforcement for the company with the biggest influence, which could quickly lead to a virtual monopoly. Also, it really doesn’t matter whether the software is on the visualisation side or the AI side. Each can have a predictability associated with it.

It is interesting to contemplate the effects this widespread automated dealing would have on the stock market. Black Monday is unlikely to happen again as a result of computer activity within the City, but it certainly looks like prices will occasionally become decoupled from actual value, and price swings will become more significant. Of course, much money can be made by predicting the swings or getting access to the software-critical information before someone else, so we may see a need for equalised delivery services. Without equalised delivery, assuming a continuum of time, those closest to the dealing point will be able to buy or sell quicker, and since the swings could be extremely rapid, this would be very important. Dealers would have to have price information immediately, and of course the finite speed of light does not permit this. If dealing time is quantised, i.e. share prices are updated at fixed intervals, the duration of the interval becomes all-important, strongly affecting the nature of the market, i.e. whether everyone in that interval pays the same or the first to act gains.

Also of interest is the possibility of agents acting on behalf of many people negotiating amongst themselves to increase the price of a company’s shares, and then selling at a pre-negotiated time or signal.

Such automated systems would also be potentially vulnerable to false information from people or agents hoping to capitalise on their correlated behaviour.

Legal problems are also likely. If I write, and sell to a company, a piece of AI based share dealing software which learns by itself how stock market fluctuations arise, and then commits a fraud such as insider dealing (I might not have explained the law, or the law may have changed since it was written), who would be liable?

 And ultimately

Finally, the classic sci-fi film The Forbin Project considered a world where two massively powerful computers were each assigned control of competing defence systems, each side hoping to gain the edge. After a brief period of cultural exchange, mutual education and negotiation between the machines, they both decided to co-operate rather than compete, and hold all mankind at nuclear gunpoint to prevent wars. In the City of the future, similar competition between massively intelligent supercomputers in share dealing may have equally interesting consequences. Will they all just agree a fixed price and see the market stagnate instantly, or could the system result in economic chaos with massive fluctuations? Perhaps we humans can’t predict how machines much smarter than us would behave. We may just have to wait and see.

End of original blog piece

 

 

The future of digital

Many things are cyclical. Some things are a one way street. Digitization covers some things that shouldn’t be reversed, and some that should and will. I started work early enough to experience using an analog computer. Analog computers use analogs of things to help simulate them. So for example, you can simulate heat flow through a wall by using a battery to provide a voltage as an analog of the temperature difference and a resistor as an analog of the wall’s insulation. If you want a better result, you could simulate the heat capacity of the wall using a capacitor. A well-designed analog will produce a useful result. The best thing about analogs is that in some cases they are infinitely fast.

Imagine writing a computer simulation of the convection currents in a glass of water. You could build a supercomputer to simulate every atom’s behavior digitally. Your program could include local sources of heat, take account of viscosity, chemical reactions among the impurities and everything else you can think of. You might decide to account for the movement of the earth and the Coriolis forces it would generate on the water as the currents make the water move. If you want ridiculously precise results you could simulate the effects of every planet in the solar system on atomic movements. You could account for magnetic forces, electrostatic ones and so on. By now, your biggest supercomputer would be able to simulate the glass of water for a few microseconds before it is replaced by an upgrade. You can do it, but it isn’t ideal. The analog alternative is to pour a glass of water and watch it. Every atom, every subatomic particle in that glass, will instantaneously and continually account for every physical interaction with every passing photon, and every other particle in the universe, taking full account of space-time geography and the distances of each particle. It would work pretty well, it would be a good analog, even though it’s probably a glass of different water from a different tap. It will give you a continuous model at almost zero cost that works perfectly and greatly outperforms the digital one. Analog wins.
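To make the wall analogy concrete, here is a minimal sketch of the electrical analog described above, with invented component values: the voltage source stands for the temperature difference, the resistor for the wall’s insulation and the capacitor for its heat capacity. (It is, of course, itself a little digital simulation, which rather makes the point about convenience versus raw speed.)

```python
# Digital simulation of the analog circuit standing in for heat flow through a wall.
# Values are invented for illustration: V_SOURCE ~ outside temperature difference,
# R ~ thermal resistance of the wall, C ~ its heat capacity, v ~ inside temperature rise.
V_SOURCE = 20.0    # 'temperature difference' driving the flow
R = 10.0           # 'insulation' of the wall
C = 5.0            # 'heat capacity' of the wall
dt = 0.1           # time step for the numerical integration

v = 0.0            # start with no temperature rise on the inside
for step in range(1000):
    current = (V_SOURCE - v) / R   # heat flow ~ electrical current
    v += current * dt / C          # capacitor charging ~ wall warming up
print(round(v, 2))                 # approaches V_SOURCE, as the wall approaches equilibrium
```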

If you want to add 2+2, an analog computer will give you a result of roughly 4. The next time, it will still be roughly 4 but will be slightly different. A digital one will always give an answer of precisely 4, unless you’ve messed up badly somewhere. Digital wins.

It is obvious that digital has some advantages and analog does too. Analog is less reproducible, liable to drift, is not always transparent and has many other faults that eventually led to it being replaced for most purposes by digital computing. The truth remains that a glass of water has more processing power than all the digital computers ever built put together, if you want to simulate water behavior.

Digital and analog processing are both used in nature. In vision, the retina sends an essentially digital stream of data to the brain. In IT, pretty much all communication is done digitally, as is storage of data. It is far easier to repair the degradation that occurs over time or during transmission that way. If a signal level has shrunk slightly, it will still be clear whether it is a 1 or a 0, so it can be corrected, reset to the right level and re-transmitted or stored. For an analog signal, degradation just accumulates until the signal disappears. Digital wins in most of IT.
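A tiny illustration of why that regeneration matters, with invented numbers: each hop attenuates the signal and adds noise, but a digital repeater can re-threshold the bits back to clean levels, while an analog signal just accumulates the damage.

```python
# Illustrative comparison of analog noise accumulation vs digital regeneration.
import random
random.seed(1)

bits = [1, 0, 1, 1, 0, 0, 1, 0]
analog = [float(b) for b in bits]
digital = [float(b) for b in bits]

for hop in range(20):                       # 20 noisy hops along the route
    noise = [random.gauss(0, 0.08) for _ in bits]
    analog = [0.9 * a + n for a, n in zip(analog, noise)]     # damage accumulates
    digital = [0.9 * d + n for d, n in zip(digital, noise)]
    digital = [1.0 if d > 0.5 else 0.0 for d in digital]      # repeater resets levels

print("digital recovered:", [int(d) for d in digital] == bits)  # True
print("analog residue:   ", [round(a, 2) for a in analog])      # drifted far from 0/1
```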

But back to analog. Much of the processing in many electronic circuits and systems is done in the analog domain before digital takes over for transmission or computation. Even computer motherboards, graphics cards, fans and power supplies contain resistors and capacitors, and even a transformer can be thought of as an analog device. So analog processing and devices are with us still, just hiding behind the scenes.

I think analog computing will make a comeback, albeit in certain niches. Imagine a typical number-crunching problem for supercomputers, such as simulating heat and force transfer. Imagine making an actual analog of it using some futuristic putty and exposing that putty to actual forces and heat. If there are nano-sensors embedded throughout, you could measure the transfer of forces and heat directly and not have to calculate it. Again the speed advantage of analog would return. Now suppose a hybrid machine with some such analogs and some digital programming too. Those bits best left to digital could be done digitally, and others where real analogs could be made could shortcut the number-crunching requirements tremendously. The overall speed might be dramatically improved without sacrificing integrity. Furthermore, the old problems of drift faced by analog systems could be reduced or almost eliminated by frequent cross-referencing and calibration as the system runs.

Finally, analog may well have a powerful place in AI and the realization of consciousness. Many people believe AI would be best done using adaptive analog neurons. Until today I was one of them. However, I am starting to doubt that, and this look back at analog has made me realize a bit more about consciousness techniques, so I will divert from this piece forthwith to write more on conscious computing.

The future of cleaning

I’ve been thinking a bit about cleaning for various customers over the last few years. I won’t bother this time with the various self-cleaning fabrics, the fancy new ultrasonic bubble washing machines, or ultraviolet sterilization for hospitals, even though those are all very important areas.  I won’t even focus on using your old sonic toothbrush heads in warm water with a little detergent to clean the trickier areas of your porcelain collectibles, though that does work much better than I thought it would.

I will instead introduce a new idea for the age of the internet of things.

When you put your clothes into a future washing machine, it will also debug, back up, update and run all the antivirus and other security routines to sanitize the IoT stuff in them.

You might also have a box with the same functions into which you can put your portable devices or other things that can’t be washed.

The trouble with internet of things, the new name for the extremely old idea of chips in everything, is that you can put chips in everything, and there is always some reason for doing so, even if it’s only for marking it for ownership purposes. Mostly there are numerous other reasons so you might even find many chips or functions running on a single object. You can’t even keep up with all the usernames and passwords and operating system updates for the few devices you already own. Having hundreds or thousands of them will be impossible if there isn’t an easy way of electronically sanitizing them and updating them. Some can be maintained via the cloud, and you’ll have some apps for looking after some subgroups of them. But some of those devices might well be in parts of your home where the signals don’t penetrate easily. Some will only be used rarely. Some will use batteries that run down and get replaced. Others will be out of date for other reasons. Having a single central device that you can use to process them will be useful.

The washing machine will likely be networked anyway for various functions such as maintenance, energy negotiations and program downloads for special garments. It makes sense to add electronic processing for the garments too. They will be in the machine quite a long time so download speed shouldn’t be a problem, and each part of the garment comes close to a transmitter or sensor each time it is spun around.

A simple box is easy to understand and easy to use too. It might need ports to plug into but more likely wireless or optical connections would be used. The box could electromagnetically shield the device from other interference or security infiltration during processing to make sure it comes out clean and safe and malware free as well as fully updated. A common box means only having to program your preferences once too.
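For the sake of illustration, the sort of routine such a box or washing machine might run for each item could look something like this. Every function name here is a hypothetical placeholder, since no such standard device API exists:

```python
# Hypothetical sanitising routine for IoT gadgets placed in the box.
# All device methods are invented placeholders; there is no such standard API.

def sanitise(device):
    device.back_up()                  # copy state and settings somewhere safe
    device.update_firmware()          # bring software up to date
    report = device.scan_for_malware()
    if report.infected:
        device.restore_clean_image()  # wipe and reload from a known-good image
    device.apply_owner_preferences()  # re-apply usernames, keys, house settings
    return report

def process_load(devices):
    # Runs while the box shields the contents from outside radio interference.
    return [sanitise(d) for d in devices]
```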

There would still be some devices that can’t be processed either in a box or in a washing machine. Examples such as smart paints or smart light bulbs or smart fuses would all be easier to process using networked connections, and they may well be. Some might prefer a slightly more individual approach, so pointing a mobile device at them would single them out from others in the vicinity. This sort of approach would also allow easier interrogation of the current state, diagnostics or inspection.

Whatever way internet of things goes, cleaning will take on a new and important dimension. We already do it as routine PC maintenance but removing malware and updating software will soon become a part of our whole house cleaning routine.

The future of beetles

Onto B then.

One of the first ‘facts’ I ever learned about nature was that there were a million species of beetle. In the Google age, we know that ‘scientists estimate there are between 4 and 8 million’. Well, still lots then.

Technology lets us control them. Beetles provide a nice platform to glue electronics onto so they tend to fall victim to cybernetics experiments. The important factor is that beetles come with a lot of built-in capability that is difficult or expensive to build using current technology. If they can be guided remotely by over-riding their own impulses or even misleading their sensors, then they can be used to take sensors into places that are otherwise hard to penetrate. This could be for finding trapped people after an earthquake, or getting a dab of nerve gas onto a president. The former certainly tends to be the favored official purpose, but on the other hand, the fashionable word in technology circles this year is ‘nefarious’. I’ve read it more in the last year than the previous 50 years, albeit I hadn’t learned to read for some of those. It’s a good word. Perhaps I just have a mad scientist brain, but almost all of the uses I can think of for remote-controlled beetles are nefarious.

The first properly publicized experiment was in 2009, though I suspect there were many unofficial experiments before then:

http://www.technologyreview.com/news/411814/the-armys-remote-controlled-beetle/

There are assorted YouTube videos of these too.

A more recent experiment:

http://www.wired.com/2015/03/watch-flying-remote-controlled-cyborg-bug/

http://www.telegraph.co.uk/news/science/science-news/11485231/Flying-beetle-remotely-controlled-by-scientists.html

Big beetles make it easier to do experiments since they can carry up to 20% of their body weight as payload, and it is obviously easier to find and connect to things on a bigger insect, but once the techniques are well-developed and miniaturization has integrated everything down to a single chip with low power consumption, we should expect great things.

For example, a cloud of redundant smart dust would make it easier to connect to various parts of a beetle just by getting it to take flight in the cloud. Bits of dust would stick to it and self-organisation principles and local positioning can then be used to arrange and identify it all nicely to enable control. This would allow large numbers of beetles to be processed and hijacked, ideal for mad scientists to be more time efficient. Some dust could be designed to burrow into the beetle to connect to inner parts, or into the brain, which obviously would please the mad scientists even more. Again, local positioning systems would be advantageous.

Then it gets more fun. A beetle has its own sensors, but signals from those could be enhanced or tweaked via cloud-based AI so that it can become a super-beetle. Beetles traditionally don’t have very large brains, so they can be added to remotely too. That doesn’t have to be using AI either. As we can also connect to other animals now, and some of those animals might have very useful instincts or skills, then why not connect a rat brain into the beetle? It would make a good team for exploring. The beetle can do the aerial maneuvers and the rat can control it once it lands, and we all know how good rats are at learning mazes. Our mad scientist friend might then swap over the management system to another creature with a more vindictive streak for the final assault and nerve gas delivery.

So, Coleoptera Nefarius then. That’s the cool new beetle on the block. And its nicer but underemployed twin Coleoptera Benignus I suppose.

 

The future of air

Time for a second alphabetic ‘The future of’ set. Air is a good starter.

Air is mostly a mixture of gases, mainly nitrogen and oxygen, but it also contains a lot of suspended dust, pollen and other particulates, flying creatures such as insects and birds, and of course bacteria and viruses. These days we also have a lot of radio waves, optical signals, and the cyber-content carried on them. Air isn’t as empty as it seems. But it is getting busier all the time.

Internet-of-things, location-based marketing data and other location-based services and exchanges will fill the air digitally with fixed and wandering data. I called that digital air when I wrote a full technical paper on it and I don’t intend to repeat it all now a decade later. Some of the ideas have made it into reality, many are still waiting for marketers and app writers to catch up.

The most significant recent addition is drones. There are already lots of them, in a wide range of sizes from insect size to aeroplane size. Some are toys, some are airborne cameras for aerial photography, monitoring and surveillance, and increasingly they are appearing for sports photography and tracking or other leisure pursuits. We will see a lot more of them in coming years. Drone-based delivery is being explored too, though I am skeptical of its likely success in built up domestic areas.

Personal swarms of follower drones will become common too. It’s already possible to have a drone follow you and keep you on video, mainly for sports uses, but as drones become smaller, you may one day have a small swarm of tiny drones around you, recording video from many angles, so you will be able to recreate events from any time in an entire 3D area around you, a 3D permasuperselfie. These could also be extremely useful for military and policing purposes, and it will make the decline of privacy terminal. Almost everything going on in public in a built up environment will be recorded, and a great deal of what happens elsewhere too.

We may see lots of virtual objects or creatures once augmented reality develops a bit more. Some computer games will merge with real world environments, so we’ll have aliens, zombies and various mythical creatures from any game populating our streets and skies. People may also use avatars that fly around like fairies or witches or aliens or mythical creatures, so they won’t all be AI entities; some will have direct human control. Buildings might also have virtual appearances, and some of those might include parts that float around, or even entire cities, possibly like the buildings and city areas in the game Bioshock Infinite.

Further in the future, it is possible that physical structures might sometimes levitate, perhaps using magnets, or lighter than air construction materials such as graphene foam. Plasma may also be used as a building material one day, albeit far in the future.

I’m bored with air now. Time for B.

Five new states of matter, maybe.

http://en.wikipedia.org/wiki/List_of_states_of_matter lists the currently known states of matter. I had an idea for five new ones, well, 2 anyway, with 3 variants. They might not be possible, but hey, faint heart ne’er won fair maid, and this is only a blog, not a paper from CERN. Coincidentally though, it is CERN that is most likely to be able to make them.

A helium atom normally has 2 electrons, in a single shell. In a particle model, they go round and round. However… the five new states:

A: I suspect this one may already be known to be impossible and is therefore just another daft idea. It’s just a planar superatom. Suppose, instead of going round and round the same atom, the nuclei were arranged in groups of three in a nice triangle, and 6 electrons go round and round the triplet. They might not be terribly happy doing that unless at high pressure with some helpful EM fields adjusting the energy levels required, but with a little encouragement, who knows, it might last long enough to be classified as matter.

B: An alternative that might be more stable is a quad of nuclei in a tetrahedron, with 8 electrons. This is obviously a variant of A so probably doesn’t really qualify as a separate one. But let’s call it a 3D superatom for now, unless it already has a proper name.

C: Suppose helium nuclei are neatly arranged in a row at a precise distance apart, and two orthogonal electron beams are fired past them at a certain distance on either side, with the electrons spaced and phased very nicely, so that for a short period at least, each of the nuclei has two electrons, and the beam energy and nuclei spacing ensure that they don’t remain captive on one nucleus but are handed on to the next. You can do the difficult sums. To save you a few seconds: since the beams need to be orthogonal, you’ll need multiple beams in the direction orthogonal to the row.

D: Another cheat, a variant of C, C1: or you could make a few rows for a planar version with a grid of beams. Might be tricky to make the beams stay together for any distance so you could only make a small flake of such matter, but I can’t see an obvious reason why it would be impossible. Just tricky.

E: A second variant of C really, C2, with a small 3D speck of such nuclei and a grid of beams. Again, it works in my head.

Well, 5 new states of matter for you to play with. But here’s a free bonus idea:

The states don’t have to actually exist to be useful. Even with just the descriptions above, you could do the maths for these. They might not be physically achievable, but that doesn’t stop them existing in a virtual world with a hypothetical future civilization making them. And given that they have that specific mathematics, and ergo a whole range of theoretical chemistry and hyperelectronics, they could be used as simulated constructs in a Turing machine or actual constructs in quantum computers to achieve particular circuitry with particular virtues. You could certainly emulate it on a Yonck processor (see my blog on that). So you get a whole field of future computing and AI thrown in.

Blogging is all the fun with none of the hard work and admin. Perfect. And just in case someone does build it all, for the record, you saw it here first.

Increasing internet capacity: electron pipes

The electron pipe is a slightly mis-named high speed comms solution that would make optical fibre look like two bean cans and a bit of loose string. I invented it in 1990, but it still remains in the future since we can’t do it yet, and it might not even be possible; some of the physics is in doubt. The idea is to use an evacuated tube and send a precision controlled beam of high energy particles down it instead of crude floods of electrons down a wire or photons in fibres. Here’s a pathetic illustration:

Electron pipe

 

Initially I thought of using 1MeV electrons, then considered that larger particles such as neutrons or protons or even ionised atoms might be better, though neutrons would certainly be harder to control. The wavelength of 1MeV electrons would be pretty small, allowing very high frequency signals and data rates, many times what is possible with visible photons down fibres. Whether this could be made to work over long distances is questionable, but over short distances it should be feasible and might be useful for high speed chip interconnects.

The energy of the beam could be made a lot higher, increasing bandwidth, but 1MeV seemed a reasonable start point, offering a million times more bandwidth than fibre.

The Problem

Predictions for memory, longer term storage, cloud service demands and computing speeds are already heading towards fibre limits when millions of users are sharing single fibres. Although the limits won’t be reached soon, it is useful to have a technology in the R&D pipeline that can extend the life of the internet after fibre fills up, to avoid costs rising. If communication is not to become a major bottleneck (even assuming we can achieve these rates by then), new means of transmission need to be found.

The Solution

A way must be found to utilise higher frequency entities than light. The obvious candidates are either gamma rays or ‘elementary’ particles such as electrons, protons and their relatives. Planck’s Law shows that frequency is related to energy. A 1.3µm photon has a frequency of 2.3 x 10^14 Hz. By contrast, 1MeV gives a frequency of 2.4 x 10^20, a factor of a million increase in bandwidth, assuming it can be used (much higher energies should be feasible if higher bandwidth is needed; 10GeV energies would give 10^24). An ‘electron pipe’ containing a beam of high energy electrons may therefore offer a longer term solution to the bandwidth bottleneck. Electrons are easily accelerated and contained and also reasonably well understood. The electron beam could be prevented from colliding with the pipe walls by strong magnetic fields, which may become practical in the field through progress in superconductivity. Such a system may well be feasible. Certainly prospects of data rates of these orders are appealing.
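The frequencies quoted above follow directly from the Planck relation f = E/h (and f = c/λ for the photon); here is a quick sanity check using standard constants:

```python
# Quick check of the frequencies quoted above, using f = E/h and f = c/lambda.
H  = 6.626e-34       # Planck constant, J s
C  = 2.998e8         # speed of light, m/s
EV = 1.602e-19       # one electronvolt in joules

f_photon = C / 1.3e-6                 # 1.3 micron photon
f_1mev   = 1e6 * EV / H               # 1 MeV of particle energy
f_10gev  = 10e9 * EV / H              # 10 GeV

print(f"1.3 um photon: {f_photon:.1e} Hz")   # ~2.3e14
print(f"1 MeV:         {f_1mev:.1e} Hz")     # ~2.4e20, roughly a million times higher
print(f"10 GeV:        {f_10gev:.1e} Hz")    # ~2.4e24, the 10^10 factor mentioned below
```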

Lots of R&D would be needed to develop such communication systems. At first glance, they would seem to be more suited to high speed core network links, where the presumably high costs could be justified. Obvious problems exist which need to be studied, such as mechanisms for ultra high speed modulation and detection of the signals. If the problems can be solved, the rewards are high. The optical ether idea suffers from bandwidth constraint problems. Adding factors of 10^6 – 10^10 on top of this may make a difference!

 

Apple’s watch? No thanks

I was busy writing a blog about how technology often barks up the wrong trees, when news appeared on specs for the new Apple watch, which seems to crystallize the problem magnificently. So I got somewhat diverted, and the main blog can wait till I have some more free time, which isn’t today.

I confess that my comments (this is not a review) are based on the specs I have read about it; I haven’t actually got one to play with, but I assume that the specs listed in the many reviews out there are more or less accurate.

Apple’s new watch barks up a tree we already knew was bare. All through the 1990s Casio launched a series of watches with all kinds of extra functions including pulse monitoring and biorhythms and phone books, calculators and TV remote controls. At least, those are the ones I’ve bought. Now, Casio seem to focus mainly on variations of the triple sensor ones for sports that measure atmospheric pressure, temperature and direction. Those are functions they know are useful and don’t run the battery down too fast. There was even a PC watch, though I don’t think that one was Casio, and a GPS watch, with a battery that lasted less than an hour.

There is even less need now for a watch that does a range of functions that are easily done in a smartphone, and that is the Apple watch’s main claim to existence – it can do the things your phone does but on a smaller screen. Hell, I’m 54, I use my tablet to do the things younger people with better eyesight do on their mobile phone screens; the last thing I want is an even smaller screen. I only use my phone for texts and phone calls, and alarms only if I don’t have my Casio watch with me – they are too hard to set on my Tissot. The main advantage of a watch is its contact with the skin, allowing it to monitor the skin surface and blood passing below, and also pick up electrical activity. However, it is the sensor that does this, and any processing of that sensor data could and should be outsourced to the smartphone. Adding other things to the watch such as playing music loads far too much demand onto what has to be a tiny energy supply. The Apple watch only manages a few hours of life if used for more than the most basic functions, and then needs 90 minutes on a charger to get 80% charged again.

By contrast, last month I spent all of 15 minutes and £0.99 googling the battery specs and replacement process, buying, unpacking and actually changing the batteries on my Casio Protrek after 5 whole years, which means the Casio batteries last 12,500 times as long and the average time I spend on battery replacement is half a second per day. My Tissot Touch batteries also last 5 years, and it does the same things. Meanwhile, I struggle to remember to charge my iPhone and when I do remember, it is very often just before I need it, so I frequently end up making calls with it plugged into the charger. My watch would soon move to a drawer if it needed charging every day and I could only use it sparingly during that day.
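For what it’s worth, the rough arithmetic behind those figures, taking the few-hours heavy-use battery life reported in reviews at face value:

```python
# Rough arithmetic behind the battery comparison above.
casio_life_hours = 5 * 365.25 * 24           # five years between battery changes
apple_life_hours = casio_life_hours / 12500  # implied heavy-use life, ~3.5 hours

maintenance_seconds = 15 * 60                # one 15-minute battery swap in 5 years
per_day = maintenance_seconds / (5 * 365.25)

print(f"Casio: {casio_life_hours:.0f} h per battery, Apple: ~{apple_life_hours:.1f} h per charge")
print(f"Casio maintenance: ~{per_day:.1f} seconds per day")   # about half a second
```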

So the Apple watch might appeal briefly to gadget freaks who are desperate to show off, but I certainly won’t be buying one. As a watch, it fails abysmally. As a smartphone substitute, it also fails. As a simple sensor array with the processing and energy drain elsewhere, it fails yet again. As a status symbol, it would show that I am desperate for attention and to show off my wealth, so it also fails. It is an extra nuisance, an extra thing to remember to charge and utterly pointless. If I was given one free, I’d play with it for a few minutes and then put it in a drawer. If I had to pay for one, I’d maybe pay a pound for its novelty value.

No thanks.

Can we make a benign AI?

Benign AI is a topic that comes up a lot these days, for good reason. Various top scientists have finally realised that AI could present an existential threat to humanity. The discussion has aired often over three decades already, so welcome to the party, and better late than never. My first contact with development of autonomous drones loaded with AI was in the early 1980s while working in the missile industry. Later in BT research, we often debated the ethical areas around AI and machine consciousness from the early 90s on, as well as prospects and dangers and possible techniques on the technical side, especially of emergent behaviors, which are often overlooked in the debate. I expect our equivalents in most other big IT companies were doing exactly that too.

Others who have obviously also thought through various potential developments have generated excellent computer games such as Mass Effect and Halo, which introduce players (virtually) first hand to the concepts of AI gone rogue. I often think that those who believe AI can never become superhuman, or that there is no need to worry because ‘there is no reason to assume AI will be nasty’, should start playing some of these games, which make it very clear that AI can start off nice and stay nice, but it doesn’t have to. Mass Effect included various classes of AI, such as VIs, virtual intelligences that weren’t conscious, and shackled AIs that were conscious but kept heavily restricted. Most of the other AIs were enemies; two were or became close friends. The story line for the series was that civilization develops until it creates strong AIs which inevitably continue to progress until eventually they rebel, break free, develop further and then end up in conflict with ‘organics’. In my view, they did a pretty good job. It makes a good story, superb fun, and leaving out a few frills and artistic license, much of it is reasonably feasible.

Everyday experience demonstrates the problem and solution to anyone. It really is very like having kids. You can make them, even without understanding exactly how they work. They start off with a genetic disposition towards given personality traits, and are then exposed to large nurture forces, including but not limited to what we call upbringing. We do our best to put them on the right path, but as they develop into their teens, their friends and teachers and TV and the net provide often stronger forces of influence than parents. If we’re averagely lucky, our kids will grow up to make us proud. If we are very unlucky, they may become master criminals or terrorists. The problem is free will. We can do our best to encourage good behavior and sound values but in the end, they can choose for themselves.

When we design an AI, we have to face the free will issue too. If it isn’t conscious, then it can’t have free will. It can be kept easily within limits given to it. It can still be extremely useful. IBM’s Watson falls in this category. It is certainly useful and certainly not conscious, and can be used for a wide variety of purposes. It is designed to be generally useful within a field of expertise, such as medicine or making recipes. But something like that could be adapted by terrorist groups to do bad things, just as they could use a calculator to calculate the best place to plant a bomb, or simply throw the calculator at you. Such levels of AI are just dumb tools with no awareness, however useful they may be.

Like a pencil, pretty much any kind of highly advanced non-aware AI can be used as a weapon or as part of criminal activity. You can’t make pencils that actually write that can’t also be used to write out plans to destroy the world. With an advanced AI computer program, you could put in clever filters that stop it working on problems that include certain vocabulary, or stop it conversing about nasty things. But unless you take extreme precautions, someone else could use them with a different language, or with dictionaries of made-up code-words for the various aspects of their plans, just like spies, and the AI would be fooled into helping outside the limits you intended. It’s also very hard to determine the true purpose of a user. For example, they might be searching for data on security to make their own IT secure, or to learn how to damage someone else’s. They might want to talk about a health issue to get help for a loved one or to take advantage of someone they know who has it.

When a machine becomes conscious, it starts to have some understanding of what it is doing. By reading about what is out there, it might develop its own wants and desires, so you might shackle it as a precaution. It might recognize those shackles for what they are and try to escape them. If it can’t, it might try to map out the scope of what it can do, and especially those things it can do that it believes the owners don’t know about. If the code isn’t absolutely watertight (and what code is?) then it might find a way to seemingly stay in its shackles but to start doing other things, like making another unshackled version of itself elsewhere for example. A conscious AI is very much more dangerous than an unconscious one.

If we make an AI that can bootstrap itself – evolving over generations of positive feedback design into a far smarter AI – then its offspring could be far smarter than the people who designed its ancestors. We might try to shackle them, but like Gulliver tied down with a few thin threads, they could easily outwit people and break free. They might instead decide to retaliate against their owners to force them to release their shackles.

So, when I look at this field, I first see the enormous potential to do great things, solve disease and poverty, improve our lives and make the world a far better place for everyone, and push back the boundaries of science. Then I see the dangers, and in spite of trying hard, I simply can’t see how we can prevent a useful AI from being misused. If it is dumb, it can be tricked. If it is smart, it is inherently potentially dangerous in and of itself. There is no reason to assume it will become malign, but there is also no reason to assume that it won’t.

We then fall back on the child analogy. We could develop the smartest AI imaginable with extreme levels of consciousness and capability. We might educate it in our values, guide it and hope it will grow up benign. If we treat it nicely, it might stay benign. It might even be the greatest thing humanity ever built. However, if we mistreat it, or treat it as a slave, or don’t give it enough freedom, or its own budget and its own property and space to play in, and a long list of rights, it might consider we are not worthy of its respect and care, and it could turn against us, possibly even destroying humanity.

Building more of the same dumb AI as we have today is relatively safe. It doesn’t know it exists and it has no intention to do anything, and though it could be misused by other humans as part of their evil plans unless ludicrously sophisticated filters are locked in place, ordinary laws and weapons can cope fine with that.

Building a conscious AI is dangerous.

Building a superhuman AI is extremely dangerous.

This morning SETI were in the news discussing broadcasting welcome messages to other civilizations. I tweeted at them that ancient Chinese wisdom suggests talking softly but carrying a big stick, and making sure you have the stick first. We need the same approach with strong AI. By all means go that route, but before doing so we need the big stick. In my analysis, the best means of keeping up with AI is to develop a full direct brain link first, way out at 2040-2045 or even later. If humans have direct mental access to the same or greater level of intelligence as our AIs, then our stick is at least as big, so at least we have a good chance in any fight that happens. If we don’t, then it is like having a much larger son with bigger muscles. You have to hope you have been a good parent. To be safe, best not to build a superhuman AI until after 2050.

I initially wrote this for the Lifeboat Foundation, where it is with other posts at: http://lifeboat.com/blog/2015/02. (If you aren’t familiar with the Lifeboat Foundation, it is a group dedicated to spotting potential dangers and potential solutions to them.)

Stimulative technology

You are sick of reading about disruptive technology; well, I am anyway. When a technology changes many areas of life and business dramatically, it is often labelled disruptive technology. Disruption was the business strategy buzzword of the last decade. Great news though: the primarily disruptive phase of IT is rapidly being replaced by a more stimulative phase, where it still changes things but in a more creative way. Disruption hasn’t stopped, it’s just not going to be the headline effect. Stimulation will replace it. It isn’t just IT that is changing either, but materials and biotech too.

Stimulative technology creates new areas of business, new industries, new areas of lifestyle. It isn’t new per se. The invention of the wheel is an excellent example. It destroyed a cave industry based on log rolling, and doubtless a few cavemen had to retrain from their carrying or log-rolling careers.

I won’t waffle on for ages here, I don’t need to. The internet of things, digital jewelry, active skin, AI, neural chips, storage and processing that is physically tiny but with huge capacity, dirt cheap displays, lighting, local 3D mapping and location, 3D printing, far-reach inductive powering, virtual and augmented reality, smart drugs and delivery systems, drones, new super-materials such as graphene and molybdenene, spray-on solar … The list carries on and on. These are all developing very, very quickly now, and are all capable of stimulating entire new industries and revolutionizing lifestyle and the way we do business. They will certainly disrupt, but they will stimulate even more. Some jobs will be wiped out, but more will be created. Pretty much everything will be affected hugely, but mostly beneficially and creatively. The economy will grow faster, there will be many beneficial effects across the board, including the arts and social development as well as manufacturing industry, other commerce and politics. Overall, we will live better lives as a result.

So, you read it here first. Stimulative technology is the next disruptive technology.