Category Archives: technology

The future of holes

H already in my alphabetic series! I was going to write about happiness, or have/have nots, or hunger, or harassment, or hiding, or health. Far too many options for H. Holes is a topic I have never written about, not even a bit, whereas the others would just be updates on previous thoughts. So here goes, the future of holes.

Holes come in various shapes and sizes. At one extreme, we have great big holes from deep mining, drilling, fracking, and natural holes such as meteor craters, rifts and volcanoes. Some look nice and make good documentaries, but I have nothing to say about them.

At the other extreme, we have long thin holes in optical fibers that increase bandwidth, or holes through carbon nanotubes to make them into electron pipes. And short fat ones that make nice passages through semi-permeable smart membranes.

Electron pipes are an idea I invented in 1992 to increase internet capacity by several orders of magnitude. I’ve written about them in this blog before: https://timeguide.wordpress.com/2015/05/04/increasing-internet-capacity-electron-pipes/

Short fat holes are interesting. If you make a fabric using special polymers that stretch when a voltage is applied across it, then round holes in it would become oval holes as long as you only stretch it in one direction. Particles that fit through the round holes might no longer pass through them once they are elongated. If you can do that with a membrane on the skin surface, then you have an electronically controllable means of allowing the right amount of medication to be applied. A dispenser could hold the medication and use the membrane to allow the right doses to be applied at the right times.
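As a toy illustration of how such a dispenser might be driven, here is a minimal sketch. Everything in it is hypothetical: set_membrane_voltage() stands in for whatever driver real dispenser hardware would expose, and the voltages and timings are made up.

```python
# Sketch of a dose controller for a voltage-stretched smart membrane.
# set_membrane_voltage() and the voltage values are hypothetical stand-ins
# for whatever driver a real dispenser would expose.
import time

RELAXED_VOLTS = 0.0    # membrane relaxed: holes round, medication passes
STRETCHED_VOLTS = 5.0  # membrane stretched: holes elongated, passage blocked

def set_membrane_voltage(volts):
    print(f"membrane voltage set to {volts} V")   # placeholder for real hardware

def deliver_dose(open_seconds):
    """Relax the membrane just long enough for one dose to diffuse through."""
    set_membrane_voltage(RELAXED_VOLTS)
    time.sleep(open_seconds)
    set_membrane_voltage(STRETCHED_VOLTS)

deliver_dose(30)   # e.g. a 30 second opening, a few times a day
```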

Long thin holes are interesting too. Hollow fiber polyester has served well as duvet and pillow filling for many years. Suppose more natural material fibers could be engineered to have holes, and those holes could be filled with chemicals that are highly distasteful to moths. As a moth larva starts to eat the fabric, it would very quickly be repelled, protecting the fabric from harm.

Conventional wisdom says when you are in a hole, stop digging. End.

The future of feminism and fashion

Perhaps it’s a bit presumptuous of me to talk about what feminists want or don’t want, but I will make the simplifying assumption that they vary somewhat and don’t all want the same things. When it comes to makeup, many feminists want to look how they want to look for their own pleasure, not specifically to appeal to men, or they may want to attract some people and not others, or they may not want to bother with makeup at all, but still be able to look nice for the right people.

Augmented reality will allow those options. AR creates an extra layer of appearance that allows a woman to present herself any way she wants via an avatar, and also to vary presented appearance according to who is looking at her. So she may choose to be attractive to people she finds attractive, and plain to people she’d rather not get attention from. This is independent of any makeup she might be wearing, so she may choose not to wear any at all and rely entirely on the augmented reality layer to replace makeup, saving a lot of time, effort and expense. She could even use skin care products such as face masks that are purely functional, nourishing or protecting her face, but which don’t look very nice. Friends, colleagues and particular subsections of total strangers would still see her as she wants to be seen and she might not care about how she appears to others.

It may therefore be possible that feminism could use makeup as a future activist platform. It would allow women to seize back control over their appearance in a far more precise way, making it abundantly clear that their appearance belongs to them and is under their control and that they control who they look nice for. They would not have to give up looking good for themselves or their friends, but would be able to exclude any groups currently out of favour.

However, it doesn’t have to be just virtual appearance that they can control electronically. It is also possible to have actual physical makeup that changes according to time, location, emotional state or circumstances. Active makeup does just that, but I’ve written too often about that. Let’s look instead at other options:

Fashion has created many different clothing accessories over the years. It has taken far longer than it should, but we are now finally seeing flexible polymer displays being forged into wrist watch straps and health monitoring bands as well as bendy and curvy phones. As 1920s era fashion makes a small comeback, it can’t be long before headbands and hair-bands come back and they would be a perfect display platform too. Hair accessories can be pretty much any shape and size, and be a single display zone or multiple ones. Some could even use holographic displays, so that the accessory seems to change its form, or have optional remote components seemingly hanging free in the nearby air. Any of these could be electronically controllable or set to adjust automatically according to location and the people present.

Displays would also make good forehead jewellery, such as electronic eyebrows, holographic jewels, smart bindis, forehead tattoos and so on. They could change colour or pattern according to emotions for example. As long as displays are small, skin flexing doesn’t present too big an engineering barrier.

In fact, small display particles such as electronic glitter could group together to appear as a single display, even though each is attached to a different piece of skin. Thus, flexing of the skin is still possible with a collection of rigid small displays, which could be millimetre sized electronic glitter. Electronic glitter could contain small capacitors that store energy harvested from temperature difference between the skin and the environment, periodically allowing a colour change.

However, it won’t be just the forehead that is available once displays become totally flexible. That will make the whole visible face an electronic display platform instead of just a place for dumb makeup. Smart freckles and moles could make a fashion reappearance. Lips and cheeks could change colour according to mood and pre-decided protocols, rather than just at the whim of nature.

Other parts of the body would likely house displays too. Fingernails and toenails could be an early candidate since they are relatively rigid. The wrist and forearm are also often exposed. Much of the rest of the body is concealed by clothing most of the time, but seasonal displays are likely when it is more often bare. Beach displays could interact with swimwear, or even substitute for it.

In fact, enabling a multitude of tiny displays on the face and around the body will undoubtedly create a new fashion design language. Some dialects could be secret, only understood by certain groups, a tribal language. Fashion has always had an extensive symbology and adding electronic components to the various items will extend its potential range. It is impossible to predict what different things will mean to mainstream and sub-cultures, as meanings evolve chaotically from random beginnings. But there will certainly be many people and groups willing to capitalise on the opportunities presented. Feminism could use such devices and languages to good effect.

Clothing and accessories such as jewellery are also obvious potential display platforms. A good clue for the preferred location is the preferred location today for similar usage. For example, many people wear logos, messages and pictures on their T-shirts, whereas other items of clothing remain mostly free of them. The T-shirt is therefore by far the most likely electronic display area. Belts, boots, shoes and bag-straps offer a likely platform too, not because they are used so much today, but because they again present an easy and relatively rigid physical platform.

Timescales for this run from historical appearance of LED jewellery at Christmas (which I am very glad to say I also predicted well in advance) right through to holographic plates that appear to hover around the person as they walk around. I’ve explained in previous blogs how actual floating and mobile plates could be made using plasma and electro-magnetics. But the timescale of relevance in the next few years is that of the cheaper and flexible polymer display. As costs fall and size increases, in parallel with an ever improving wireless and cloud infrastructure, the potential revenue from a large new sector combining the fashion and display industries will make this not so much likely as  inevitable.

The future of electronic cash and value

 

Picture first, I’m told people like to see pics in blogs. This one is from 1998; only the title has changed since.

future electronic cash

Every once in a while I have to go to a bank. This time it was my 5th attempt to pay off a chunk of my Santander Mortgage. I didn’t know all the account details for web transfer so went to the Santander branch. Fail – they only take cash and cheques. Cash and what??? So I tried via internet banking. Entire transaction details plus security entered, THEN Fail – I exceeded what Barclays allows for their fast transfers. Tried again with smaller amount and again all details and all security. Fail again, Santander can’t receive said transfers, try CHAPS. Tried CHAPS, said it was all fine, all hunkydory. Happy bunny. Double fail. It failed due to amount exceeding limit AND told me it had succeeded when it hadn’t. I then drove 12 miles to my Barclays branch who eventually managed to do it, I think (though I haven’t checked that it worked  yet).

It is 2015. Why the hell is it so hard for two world class banks to offer a service we should have been able to take for granted 20 years ago?

Today, I got tweeted about Ripple Labs and a nice blog that quotes their founder sympathising with my experience above and trying to solve it, with some success:

http://www.wfs.org/blogs/richard-samson/supermoney-new-wealth-beyond-banks-and-bitcoin

Ripple seems good as far as it goes, which is summarised in the blog, but do read the full original:

Basically the Ripple protocol “provides the ability for humans to confirm financial transactions without a central operator,” says Larsen. “This is major.” Bitcoin was the first technology to successfully bypass banks and other authorities as transaction validators, he points out, “but our method is much cheaper and takes only seconds rather than minutes.” And that’s just for starters. For example, “It also leverages the enormous power of banks and other financial institutions.”

The power of the value web stems from replacing archaic back-end systems with all their cumbersome delays and unnecessary costs. 

That’s great, I wish them the best of success. It is always nice to see new systems that are more efficient than the old ones, but the idea is early 1990s. Lots of IT people looked at phone billing systems and realised they managed to do for a penny what banks did for 65 pennies at the time, and telco business cases were developed to replace the banks with pretty much what Ripple tries to do. Those were never implemented, for a variety of reasons both business and regulatory, but the ideas were certainly understood and developed broadly at engineer level to include not only traditional cash forms but many that didn’t exist then and still don’t. Even Ripple can only process transactions that are equivalent to money, such as traditional currencies, electronic cash forms like bitcoin, sea shells or air-miles.

That much is easy, but some forms require other tokens to have value, such as personalized tokens. Some value varies according to queue lengths, time of day, or who is paying whom. Some needs to be assignable, so you can give money that can only be used to purchase certain things, and may have a whole basket of conditions attached. Money is also only one form of value, and many forms of value are volatile, only existing at certain times and places in certain conditions for certain transactors. Aesthetic cash? Play money? IOUs? Favours? These are all a bit like cash but not necessarily tradable or exchangeable using simple digital transaction engines, because they carry emotional weighting as well as financial value. In the care economy, which is now thankfully starting to develop and is finally reaching concept critical mass, emotional value will become immensely important and it will have some tradable forms, though much will not be tradable ever. We understood all that then, but are still awaiting proper implementation. Most new startups on the web are old ideas finally being implemented, and Ripple is only a very partial implementation so far.

Here is one of my early blogs from 1998, using ideas we’d developed several years earlier that were no longer commercially sensitive – you’ll observe just how much banks have under-performed against what we expected of them, and what was entirely feasible using already known technology then:

Future of Money

 Ian Pearson, BT Labs, June 98

Already, people are buying things across the internet. Mostly, they hand over a credit card number, but some transactions already use electronic cash. The transactions are secure so the cash doesn’t go astray or disappear, nor can it easily be forged. In due course, using such cash will become an everyday occurrence for us all.

Also already, electronic cash based on smart cards has been trialled and found to work well. The BT form is called Mondex, but it is only one among several. These smart cards allow owners to ‘load’ the card with small amounts of money for use in transactions where small change would normally be used, paying bus fares, buying sweets etc. The cards are equivalent to a purse. But they can and eventually will allow much more. Of course, electronic cash doesn’t have to be held on a card. It can equally well be ‘stored’ in the network. Transactions then just require secure messaging across the network. Currently, the cost of this messaging makes it uneconomic for small transactions that the cards are aimed at, but in due course, this will become the more attractive option, especially since you no longer lose your cash when you lose the card.

When cash is digitised, it loses some of the restrictions of physical cash. Imagine a child has a cash card. Her parents can give her pocket money, dinner money, clothing allowance and so on. They can all be labelled separately, so that she can’t spend all her dinner money on chocolate. Electronic shopping can of course provide the information needed to enable the cash. She may have restrictions about how much of her pocket money she may spend on various items too. There is no reason why children couldn’t implement their own economies too, swapping tokens and IOUs. Of course, in the adult world this grows up into local exchange trading systems (LETS), where people exchange tokens too, a glorified babysitting circle. But these LETS don’t have to be just local, wider circles could be set up, even globally, to allow people to exchange services or information with each other.
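A minimal sketch of how such labelled cash might be represented, assuming a simple token structure; the field names and categories are mine for illustration, not any real payment standard.

```python
# Sketch of 'labelled' electronic cash: each token carries conditions
# restricting what it can be spent on. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class CashToken:
    amount: float
    currency: str = "GBP"
    label: str = "general"                                  # e.g. "dinner money"
    allowed_categories: set = field(default_factory=set)    # empty = unrestricted

    def can_spend_on(self, category: str) -> bool:
        return not self.allowed_categories or category in self.allowed_categories

dinner_money = CashToken(5.0, label="dinner money",
                         allowed_categories={"school meals"})
pocket_money = CashToken(2.0, label="pocket money")

print(dinner_money.can_spend_on("chocolate"))   # False - blocked by the label
print(pocket_money.can_spend_on("chocolate"))   # True  - unrestricted
```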

Electronic cash can be versatile enough to allow for negotiable cash too. Credit may be exchanged just as cash and cash may be labelled with source. For instance, we may see celebrity cash, signed by the celebrity, worth more because they have used it. Cash may be labelled as tax paid, so those donations from cards to charities could automatically expand with the recovered tax. Alternatively, VAT could be recovered at point of sale.

With these advanced facilities, it becomes obvious that the cash needs to become better woven into taxation systems, as well as auditing and accounting systems. These functions can be much more streamlined as a result, with less human administration associated with money.

When ID verification is added to the transactions, we can guarantee who it is carrying out the transaction. We can then implement personal taxation, with people paying different amounts for the same goods. This would only work for certain types of purchase – for physical goods there would otherwise be a thriving black market.

But one of the best advantages of making cash digital is the seamlessness of international purchases. Even without common official currency, the electronic cash systems will become de facto international standards. This will reduce the currency exchange tax we currently pay to the banks every time we travel to a different country, which can add up to as much as 25% for an overnight visit. This is one of the justifications often cited for European monetary union, but it is happening anyway in global e-commerce.

Future of banks

 Banks will have to change dramatically from today’s traditional institutions if they want to survive in the networked world. They are currently introducing internet banking to try to keep customers, but the move to digital electronic cash, held perhaps by the customer or an independent third party, will mean that the cash can be quite separate from the transaction agent. Cash does not need to be stored in a bank if records in secured databases anywhere can be digitally signed and authenticated. The customer may hold it on his own computer, or in a cyberspace vault elsewhere. With digital signatures and high network security, advanced software will put the customer firmly in control with access to any facility or service anywhere.

In fact, no-one need hold cash at all, or even move it around. Cash is just bits today, already electronic records. In the future, it will be an increasingly blurred entity, mixing credit, reputation, information, and simply promises into exchangeable tokens. My salary may be just a digitally signed certificate from BT yielding control of a certain amount of credit, just another signature on a long list as the credit migrates round the economy. The ‘promise to pay the bearer’ just becomes a complex series of serial promises. Nothing particularly new here, just more of what we already have. Any corporation or reputable individual may easily capture the bank’s role of keeping track of the credit. It is just one service among many that may leave the bank.

As the world becomes increasingly networked, the customer could thus retain complete control of the cash and its use, and could buy banking services on a transaction by transaction basis. For instance, I could employ one company to hold my cash securely and prevent its loss or forgery, while renting the cash out to companies that want to borrow via another company, keeping the bulk of the revenue for myself. Another company might manage my account, arrange transfers etc, and deal with the taxation, auditing etc. I could probably get these done on my personal computer, but why have a dog and bark yourself?

The key is flexibility, none of these services need be fixed any more. Banks will not compete on overall package, but on every aspect of service. Worse still (for the banks), some of their competitors will be just freeware agents. The whole of the finance industry will fragment. The banks that survive will almost by definition be very adaptable. Services will continue and be added to, but not by the rigid structures of today. Surviving banks should be able to compete for a share of the future market as well as anyone. They certainly have a head start in many of the required skills, and have the advantage of customer lethargy when it comes to changing to potentially better suppliers. Many of their customers will still value tradition and will not wish to use the better and cheaper facilities available on the network. So as always, it looks like there will be a balance.

Firstly, with large numbers of customers moving to the network for their banking services, banks must either cater for this market or become a niche operator, perhaps specialising in tradition, human service and even nostalgia. Most banks however will adapt well to network existence and will either be entirely network based, or maintain a high street presence to complement their network presence.

High Street banking

 Facilities in high street banking will echo this real world/cyberspace nature. It must be possible to access network facilities from within the banks, probably including those of competitors. The high street bank may therefore be more like shops today, selling wares from many suppliers, but with a strongly placed own brand. There is of course a niche for banks with no services of their own at all who just provide access to services from other suppliers. All they offer in addition is a convenient and pleasant place to access them, with some human assistance as appropriate.

Traditional service may sometimes be pushed as a differentiator, and human service is bound to attract many customers too. In an increasingly machine dominated world, actually having the right kind of real people may be significant value add.

But many banks will be bursting with high technology either alongside or in place of people. Video terminals to access remote services, perhaps with translation to access foreign services. Biometric identification based on iris scan, fingerprints etc may be used to authenticate smart cards, passports or other legal documents before their use, or simply a means of registering securely onto the network. High quality printers and electronic security embedding would enable banks to offer additional facilities like personal bank notes, usable as cash.

Of course, banks can compete in any financial service. Because the management of financial affairs gives them a good picture of many customers’ habits and preferences, they will be able to use this information to sell customer lists, identify market niches for new businesses, and predict the likely success of customers proposing to set up businesses.

As they try to stretch their brands into new territories, one area where they may be successful is information banking. People may use banks as the publishers of the future. Already knowledge guilds are emerging. Ultimately, any piece of information from any source can be marketed at very low publishing and distribution cost, making previously unpublishable works viable. Many people have wanted to write, but have been unable to find publishers due to the high cost of getting to market in paper. A work may be sold on the network for just pennies, and achieve market success by selling many more copies than could have been achieved by the high priced paper alternative. The success of electronic encyclopedias and the demise of Encyclopedia Britannica is evidence of this. Banks could allow people to upload information onto the net, and would then manage the resultant financial transactions. If there aren’t very many sales, the maximum loss to the bank is very small. Of course, electronic cash and micropayment technology mean that the bank is not necessary, but for many, it may smooth the road.

Virtual business centres

Their exposure to the detailed financial affairs of the community puts banks in a privileged position for identifying potential markets. They could therefore act as co-ordinators for virtual companies and co-operatives. Building on the knowledge guilds, they could broker the skills of their many customers to existing virtual companies and link people together to address business needs not addressed by existing companies, or where existing companies are inadequate or inefficient. In this way, short-term contractors, who may dominate the employment community, can be efficiently utilised to everyone’s gain. The employees win by getting more lucrative work, their customers get more efficient services at lower cost, and the banks laugh to themselves.

Future of the stock market

In the next 10 years, we will probably see a factor of 1000 increase in computer speed and memory capacity. In parallel with hardware development, there are numerous research forays into software techniques that might yield further factors of 10 in the execution speed of programs. Tasks that used to take a second will be reduced to a millisecond. As if this impact were not enough, software will very soon be able to make logical deductions from the flood of information on the internet, not just from Reuters or Bloomberg, but from anywhere. It will be able to assess the quality and integrity of the data, correlate it with other data, run models, infer likely other events and make buy or sell recommendations. Much dealing will still be done automatically subject to human-imposed restrictions, and the speed and quality of this dealing could far exceed current capability.

Which brings problems…

Firstly, the speed of light is fast but finite. With these huge processing speeds, computers will be able to make decisions within microseconds of receiving information. Differences in distance from the information source become increasingly important. Being just 200m closer to the Bank of England makes about a microsecond of difference to the time of arrival of information on interest rates, a difference insignificant to a human, but long enough for a fast computer to buy or sell before competitors even receive the information. As speeds increase further over following years, the significant distance drops. This effect will cause great unfairness according to geographic proximity to important sources. There are two obvious outcomes. Either there becomes a strong premium on being closest, with rises in property values near key sources, or perhaps network operators could be asked to provide guaranteed simultaneous delivery of information. This is entirely technically feasible but would need regulation, otherwise users could simply use alternative networks.
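To put rough numbers on that (my own back-of-envelope figures, not from the original piece):

```python
# Back-of-envelope latency advantage of being closer to an information source.
C_VACUUM = 3.0e8   # m/s, speed of light in air/vacuum (approx.)
C_FIBRE = 2.0e8    # m/s, roughly two thirds of c in optical fibre

def head_start(distance_advantage_m, speed):
    """Seconds gained by being this much closer to the source."""
    return distance_advantage_m / speed

print(head_start(200, C_FIBRE))   # ~1.0e-6 s: about a microsecond over fibre
print(head_start(200, C_VACUUM))  # ~0.67e-6 s if the signal travels through air
```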

Secondly, exactly simultaneous processing will cause problems. If many requests for transactions arrive at exactly the same moment, computers or networks have to give priority in some way. This is bound to be a source of contention. Also, simultaneous events can often cause malfunctions, as was demonstrated perfectly at the launch of Big Bang. Information waves caused by such events are a network phenomenon that could potentially crash networks.

Such a delay-sensitive system may dictate network technology. Direct transmission through the air by means of radio or infrared (optical wireless) would be faster than routing signals through fibres that take a more tortuous route, especially since the speed of light in fibre is only two thirds of that in air.

Ultimately, there is a final solution if speed of computing increases so far that transmission delay is too big a problem. The processing engines could actually be shared, with all the deals and information processing taking place in a central computer, using massive parallelism. It would be possible to construct such a machine that treated each subscribing company fairly.

An interesting future side effect of all this is that the predicted flood of people into the countryside may be averted. Even though people can work from anywhere, their computers have to be geographically very close to the information centres, i.e. the City. Automated dealing has to live in the city, human based dealing can work from anywhere. If people and machines have to work together, perhaps they must both work in the City.

Consumer dealing

The stock exchange long ago stopped being a trading floor with scraps of paper and became a distributed computer environment – it effectively moved into cyberspace. The deals still take place, but in cyberspace. There are no virtual environments yet, but the other tools such as automated buying and selling already exist. These computers are becoming smarter and exist in cyberspace every bit as much as the people do. As a result, there is more automated analysis, easier visualisation and more computer-assisted dealing. People will be able to see which shares are doing well, spot trends and act on their computer’s advice at a button push. Markets will grow for tools to profit from shares, whether they be dealing software, advice services or visualisation software.

However, as we see more people buying personal access to share dealing and software to determine best buys, or even to automatically buy or sell on certain clues, we will see some very negative behaviours. Firstly, traffic will be highly correlated if personal computers can all act on the same information at the same time. We will see information waves, and also enormous swings in share prices. Most private individuals will suffer because of this, while institutions and individuals with better software will benefit. This is because prices will rise and fall simply because of the correlated activity of the automated software and not because of any real effects related to the shares themselves. Institutions may have to limit private share transactions to control this problem, but can also make a lot of money from modelling the private software and thus determining in advance what the recommendations and actions will be, capitalising enormously on the resultant share movements, and indeed even stimulating them. Of course, if this problem is generally perceived by the share dealing public, the AI software will not take off so the problem will not arise. What is more likely is that such software will sell in limited quantities, causing the effects to be significant, but not destroying the markets.
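A toy illustration of that correlation effect, with entirely made-up numbers, just to show the mechanism: identical software all acting on the same signal moves the price far more than a crowd of uncorrelated traders would.

```python
# Toy illustration of correlated automated dealing: many copies of the same
# software receive the same piece of news and all buy at once, moving the
# price far more than uncorrelated traders would. All numbers are made up.
import random

def price_move(n_traders, correlated, impact_per_trade=0.01):
    """Net price move when traders react to one piece of good news."""
    if correlated:
        decisions = [1] * n_traders                  # identical software: all buy
    else:
        decisions = [random.choice([1, -1, 0]) for _ in range(n_traders)]
    return sum(decisions) * impact_per_trade

random.seed(1)
print(price_move(10_000, correlated=False))  # small net move, near zero
print(price_move(10_000, correlated=True))   # +100.0: a huge correlated swing
```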

A money making scam is thus apparent. A company need only write a piece of reasonably good AI share portfolio management software for it to capture a fraction of the available market. The company writing it will of course understand how it works and what the effects of a piece of information will be (which they will receive at the same time), and will thus be able to predict the buying or selling activity of the subscribers. If they were then to produce another service which makes recommendations, they would have even more notice of an effect and be able to influence prices directly. They would then be in the position of the top market forecasters who know their advice will be self-fulfilling. This is neither insider dealing nor fraud, and of course once the software captures a significant share, the quality of its advice would be very high, decoupling share performance from the real world. Only the last people to react would lose out, paying the most or selling for the least, as the price is restored to ‘correct’ by the stock exchange, and of course even this is predictable to a point. The fastest will profit most.

The most significant factor in this is the proportion of share dealing influenced by that company’s software. The problem is that software markets tend to be dominated by just two or three companies, and the nature of this type of software is that there is strong positive reinforcement for the company with the biggest influence, which could quickly lead to a virtual monopoly. Also, it really doesn’t matter whether the software is on the visualisation side or the AI side. Each can have a predictability associated with it.

It is interesting to contemplate the effects this widespread automated dealing would have on the stock market. Black Monday is unlikely to happen again as a result of computer activity within the City, but it certainly looks like prices will occasionally become decoupled from actual value, and price swings will become more significant. Of course, much money can be made by predicting the swings or getting access to the software-critical information before someone else, so we may see a need for equalised delivery services. Without equalised delivery, assuming a continuum of time, those closest to the dealing point will be able to buy or sell quicker, and since the swings could be extremely rapid, this would be very important. Dealers would have to have price information immediately, and of course the finite speed of light does not permit this. If dealing time is quantised, i.e. share prices are updated at fixed intervals, the duration of the interval becomes all-important, strongly affecting the nature of the market, i.e. whether everyone in that interval pays the same or the first to act gains.

Also of interest is the possibility of agents acting on behalf of many people negotiating amongst themselves to increase the price of a company’s shares, and then selling at a pre-negotiated time or signal.

Such automated  systems would also be potentially vulnerable to false information from people or agents hoping to capitalise on their correlated behaviour.

Legal problems are also likely. If I write, and sell to a company, a piece of AI based share dealing software which learns by itself how stock market fluctuations arise, and then commits a fraud such as insider dealing (I might not have explained the law, or the law may have changed since it was written), who would be liable?

 And ultimately

Finally, the 60s sci-fi film, The Forbin Project, considered a world where two massively powerful computers were each assigned control of competing defence systems, each side hoping to gain the edge. After a brief period of cultural exchange, mutual education and negotiation between the machines, they both decided to co-operate rather than compete, and hold all mankind at nuclear gunpoint to prevent wars. In the City of the future, similar competition between massively intelligent supercomputers in share dealing may have equally interesting consequences. Will they all just agree a fixed price and see the market stagnate instantly, or could the system result in economic chaos with massive fluctuations? Perhaps we humans can’t predict how machines much smarter than us would behave. We may just have to wait and see.

End of original blog piece

 

 

The future of cleaning

I’ve been thinking a bit about cleaning for various customers over the last few years. I won’t bother this time with the various self-cleaning fabrics, the fancy new ultrasonic bubble washing machines, or ultraviolet sterilization for hospitals, even though those are all very important areas.  I won’t even focus on using your old sonic toothbrush heads in warm water with a little detergent to clean the trickier areas of your porcelain collectibles, though that does work much better than I thought it would.

I will instead introduce a new idea for the age of internet of things.

When you put your clothes into a future washing machine, it will also debug, back up, update and run all the antivirus and other security routines to sanitize the IoT stuff in them.

You might also have a box with the same functions into which you can put your portable devices or other things that can’t be washed.

The trouble with internet of things, the new name for the extremely old idea of chips in everything, is that you can put chips in everything, and there is always some reason for doing so, even if it’s only for marking it for ownership purposes. Mostly there are numerous other reasons so you might even find many chips or functions running on a single object. You can’t even keep up with all the usernames and passwords and operating system updates for the few devices you already own. Having hundreds or thousands of them will be impossible if there isn’t an easy way of electronically sanitizing them and updating them. Some can be maintained via the cloud, and you’ll have some apps for looking after some subgroups of them. But some of those devices might well be in parts of your home where the signals don’t penetrate easily. Some will only be used rarely. Some will use batteries that run down and get replaced. Others will be out of date for other reasons. Having a single central device that you can use to process them will be useful.

The washing machine will likely be networked anyway for various functions such as maintenance, energy negotiations and program downloads for special garments. It makes sense to add electronic processing for the garments too. They will be in the machine quite a long time so download speed shouldn’t be a problem, and each part of the garment comes close to a transmitter or sensor each time it is spun around.

A simple box is easy to understand and easy to use too. It might need ports to plug into but more likely wireless or optical connections would be used. The box could electromagnetically shield the device from other interference or security infiltration during processing to make sure it comes out clean and safe and malware free as well as fully updated. A common box means only having to program your preferences once too.
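In outline, the box’s cleaning cycle would look something like the sketch below. The Device class and its methods are hypothetical stand-ins for whatever IoT maintenance interfaces eventually emerge; this is only an illustration of the sequence, not any real API.

```python
# Sketch of the 'electronic cleaning' cycle for a sanitising box or washing
# machine. Device and its methods are hypothetical placeholders.
class Device:
    def __init__(self, name):
        self.name = name

    def backup(self):
        print(f"{self.name}: settings and data backed up")

    def update(self):
        print(f"{self.name}: firmware and software updated")

    def scan(self):
        print(f"{self.name}: malware scan clean")

    def apply_prefs(self, prefs):
        print(f"{self.name}: owner preferences applied: {prefs}")

def sanitise(devices, prefs):
    """One cleaning cycle: back up, update, scan and configure each device,
    while the box shields it electromagnetically from outside interference."""
    for dev in devices:
        dev.backup()
        dev.update()
        dev.scan()
        dev.apply_prefs(prefs)

sanitise([Device("smart jacket"), Device("fitness band")], {"language": "en-GB"})
```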

There would still be some devices that can’t be processed either in a box or in a washing machine. Examples such as smart paints or smart light bulbs or smart fuses would all be easier to process using networked connections, and they may well be. Some might prefer a slightly more individual approach, so pointing a mobile device at them would single them out from others in the vicinity. This sort of approach would also allow easier interrogation of the current state, diagnostics or inspection.

Whatever way internet of things goes, cleaning will take on a new and important dimension. We already do it as routine PC maintenance but removing malware and updating software will soon become a part of our whole house cleaning routine.

The future of beetles

Onto B then.

One of the first ‘facts’ I ever learned about nature was that there were a million species of beetle. In the Google age, we know that ‘scientists estimate there are between 4 and 8 million’. Well, still lots then.

Technology lets us control them. Beetles provide a nice platform to glue electronics onto so they tend to fall victim to cybernetics experiments. The important factor is that beetles come with a lot of built-in capability that is difficult or expensive to build using current technology. If they can be guided remotely by over-riding their own impulses or even misleading their sensors, then they can be used to take sensors into places that are otherwise hard to penetrate. This could be for finding trapped people after an earthquake, or getting a dab of nerve gas onto a president. The former certainly tends to be the favored official purpose, but on the other hand, the fashionable word in technology circles this year is ‘nefarious’. I’ve read it more in the last year than the previous 50 years, albeit I hadn’t learned to read for some of those. It’s a good word. Perhaps I just have a mad scientist brain, but almost all of the uses I can think of for remote-controlled beetles are nefarious.

The first properly publicized experiment was 2009, though I suspect there were many unofficial experiments before then:

http://www.technologyreview.com/news/411814/the-armys-remote-controlled-beetle/

There are assorted YouTube videos such as

A more recent experiment:

http://www.wired.com/2015/03/watch-flying-remote-controlled-cyborg-bug/

http://www.telegraph.co.uk/news/science/science-news/11485231/Flying-beetle-remotely-controlled-by-scientists.html

Big beetles make it easier to do experiments since they can carry up to 20% of body weight as payload, and it is obviously easier to find and connect to things on a bigger insect, but once the techniques are well-developed and miniaturization has integrated things down to a single chip with low power consumption, we should expect great things.

For example, a cloud of redundant smart dust would make it easier to connect to various parts of a beetle just by getting it to take flight in the cloud. Bits of dust would stick to it and self-organisation principles and local positioning can then be used to arrange and identify it all nicely to enable control. This would allow large numbers of beetles to be processed and hijacked, ideal for mad scientists to be more time efficient. Some dust could be designed to burrow into the beetle to connect to inner parts, or into the brain, which obviously would please the mad scientists even more. Again, local positioning systems would be advantageous.

Then it gets more fun. A beetle has its own sensors, but signals from those could be enhanced or tweaked via cloud-based AI so that it can become a super-beetle. Beetles traditionally don’t have very large brains, so they can be added to remotely too. That doesn’t have to be using AI either. As we can also connect to other animals now, and some of those animals might have very useful instincts or skills, then why not connect a rat brain into the beetle? It would make a good team for exploring. The beetle can do the aerial maneuvers and the rat can control it once it lands, and we all know how good rats are at learning mazes. Our mad scientist friend might then swap over the management system to another creature with a more vindictive streak for the final assault and nerve gas delivery.

So, Coleoptera Nefarius then. That’s the cool new beetle on the block. And its nicer but underemployed twin Coleoptera Benignus I suppose.

 

The future of air

Time for a second alphabetic ‘The future of’ set. Air is a good starter.

Air is mostly a mixture of gases, mainly nitrogen and oxygen, but it also contains a lot of suspended dust, pollen and other particulates, flying creatures such as insects and birds, and of course bacteria and viruses. These days we also have a lot of radio waves, optical signals, and the cyber-content carried on them. Air isn’t as empty as it seems. But it is getting busier all the time.

Internet-of-things, location-based marketing data and other location-based services and exchanges will fill the air digitally with fixed and wandering data. I called that digital air when I wrote a full technical paper on it and I don’t intend to repeat it all now a decade later. Some of the ideas have made it into reality, many are still waiting for marketers and app writers to catch up.

The most significant recent addition is drones. There are already lots of them, in a wide range of sizes from insect size to aeroplane size. Some are toys, some are airborne cameras for aerial photography, monitoring and surveillance, and increasingly they are appearing for sports photography and tracking or other leisure pursuits. We will see a lot more of them in coming years. Drone-based delivery is being explored too, though I am skeptical of its likely success in domestic built up areas.

Personal swarms of follower drones will become common too. It’s already possible to have a drone follow you and keep you on video, mainly for sports uses, but as drones become smaller, you may one day have a small swarm of tiny drones around you, recording video from many angles, so you will be able to recreate events from any time in an entire 3D area around you, a 3D permasuperselfie. These could also be extremely useful for military and policing purposes, and it will make the decline of privacy terminal. Almost everything going on in public in a built up environment will be recorded, and a great deal of what happens elsewhere too.

We may see lots of virtual objects or creatures once augmented reality develops a bit more. Some computer games will merge with real world environments, so we’ll have aliens, zombies and various mythical creatures from any game populating our streets and skies. People may also use avatars that fly around like fairies or witches or aliens or mythical creatures, so they won’t all be AI entities; some will have direct human control. Buildings might also have virtual appearances, and some of those might include parts that float around, or even entire cities, rather like the floating buildings and city areas in the game Bioshock Infinite.

Further in the future, it is possible that physical structures might sometimes levitate, perhaps using magnets, or lighter than air construction materials such as graphene foam. Plasma may also be used as a building material one day, albeit far in the future.

I’m bored with air now. Time for B.

Five new states of matter, maybe.

http://en.wikipedia.org/wiki/List_of_states_of_matter lists the currently known states of matter. I had an idea for five new ones, well, 2 anyway with 3 variants. They might not be possible but hey, faint heart ne’er won fair maid, and this is only a blog not a paper from CERN. But coincidentally, it is CERN that is most likely to be able to make them.

A helium atom normally has 2 electrons, in a single shell. In a particle model, they go round and round. However… the five new states:

A: I suspect this one may already be known but isn’t possible, and is therefore just another daft idea. It’s just a planar superatom. Suppose, instead of going round and round the same atom, the nuclei were arranged in groups of three in a nice triangle, and 6 electrons go round and round the triplet. They might not be terribly happy doing that unless at high pressure with some helpful EM fields adjusting the energy levels required, but with a little encouragement, who knows, it might last long enough to be classified as matter.

B: An alternative that might be more stable is a quad of nuclei in a tetrahedron, with 8 electrons. This is obviously a variant of A so probably doesn’t really qualify as a separate one. But let’s call it a 3D superatom for now, unless it already has a proper name.

C: Suppose helium nuclei are neatly arranged in a row at a precise distance apart, and two orthogonal electron beams are fired past them at a certain distance on either side, with the electrons spaced and phased very nicely, so that for a short period at least, each of the nuclei has two electrons, and the beam energy and nuclei spacing ensure that they don’t remain captive on one nucleus but are handed on to the next. You can do the difficult sums; there is a rough arithmetic sketch after this list. To save you a few seconds, since the beams need to be orthogonal, you’ll need multiple beams in the direction orthogonal to the row.

D: Another cheat, a variant of C, C1: or you could make a few rows for a planar version with a grid of beams. Might be tricky to make the beams stay together for any distance so you could only make a small flake of such matter, but I can’t see an obvious reason why it would be impossible. Just tricky.

E: A second variant of C really, C2, with a small 3D speck of such nuclei and a grid of beams. Again, it works in my head.
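For C, the easy part of those sums might start something like the rough sketch below. The beam energy and nuclear spacing are arbitrary hypothetical numbers, the treatment is crudely non-relativistic, and none of this addresses whether the arrangement would actually hold together.

```python
# Very rough arithmetic for state C: if beam electrons are to be 'handed on'
# from nucleus to nucleus, their spacing along the beam should roughly match
# the nuclear spacing. All numbers are hypothetical.
import math

M_E = 9.109e-31   # electron mass, kg
EV = 1.602e-19    # joules per electronvolt

def electron_speed(kinetic_energy_ev):
    """Classical (non-relativistic) speed of an electron with this energy."""
    return math.sqrt(2 * kinetic_energy_ev * EV / M_E)

beam_energy_ev = 1_000   # a 1 keV beam, chosen arbitrarily
spacing_m = 1e-9         # hypothetical 1 nm spacing between the nuclei

v = electron_speed(beam_energy_ev)
print(f"beam speed ~ {v:.2e} m/s")                             # ~1.9e7 m/s
print(f"hand-over rate per nucleus ~ {v / spacing_m:.2e} /s")  # ~1.9e16 per second
```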

Well, 5 new states of matter for you to play with. But here’s a free bonus idea:

The states don’t have to actually exist to be useful. Even with just the descriptions above, you could do the maths for these. They might not be physically achievable but that doesn’t stop them existing in a virtual world with a hypothetical future civilization making them. And given that they have that specific mathematics, and ergo a whole range of theoretical chemistry, and therefore hyperelectronics, they could therefore be used as simulated constructs in a Turing machine or actual constructs in quantum computers to achieve particular circuitry with particular virtues. You could certainly emulate it on a Yonck processor (see my blog on that). So you get a whole field of future computing and AI thrown in.

Blogging is all the fun with none of the hard work and admin. Perfect. And just in case someone does build it all, for the record, you saw it here first.

Technology 2040: Technotopia denied by human nature

This is a reblog of the Business Weekly piece I wrote for their 25th anniversary.

It’s essentially a very compact overview of the enormous scope for technology progress, followed by a reality check as we start filtering that potential through very imperfect human nature and systems.

25 years is a long time in technology, a little less than a third of a lifetime. For the first third, you’re stuck having to live with primitive technology. Then in the middle third it gets a lot better. Then for the last third, you’re mainly trying to keep up and understand it, still using the stuff you learned in the middle third.

The technology we are using today is pretty much along the lines of what we expected in 1990, 25 years ago. Only a few details are different. We don’t have 2Gb/s to the home yet and AI is certainly taking its time to reach human level intelligence, let alone consciousness, but apart from that, we’re still on course. Technology is extremely predictable. Perhaps the biggest surprise of all is just how few surprises there have been.

The next 25 years might be just as predictable. We already know some of the highlights for the coming years – virtual reality, augmented reality, 3D printing, advanced AI and conscious computers, graphene based materials, widespread Internet of Things, connections to the nervous system and the brain, more use of biometrics, active contact lenses and digital jewellery, use of the skin as an IT platform, smart materials, and that’s just IT – there will be similarly big developments in every other field too. All of these will develop much further than the primitive hints we see today, and will form much of the technology foundation for everyday life in 2040.

For me the most exciting trend will be the convergence of man and machine, as our nervous system becomes just another IT domain, our brains get enhanced by external IT and better biotech is enabled via nanotechnology, allowing IT to be incorporated into drugs and their delivery systems as well as diagnostic tools. This early stage transhumanism will occur in parallel with enhanced genetic manipulation, development of sophisticated exoskeletons and smart drugs, and highlights another major trend, which is that technology will increasingly feature in ethical debates. That will become a big issue. Sometimes the debates will be about morality, and religious battles will result. Sometimes different parts of the population or different countries will take opposing views and cultural or political battles will result. Trading one group’s interests and rights against another’s will not be easy. Tensions between left and right wing views may well become even higher than they already are today. One man’s security is another man’s oppression.

There will certainly be many fantastic benefits from improving technology. We’ll live longer, healthier lives and the steady economic growth from improving technology will make the vast majority of people financially comfortable (2.5% real growth sustained for 25 years would increase the economy by 85%). But it won’t be paradise. All those conflicts over whether we should or shouldn’t use technology in particular ways will guarantee frequent demonstrations. Misuse of tech by criminals, terrorists or ethically challenged companies will severely erode the benefits. There will still be a mix of good and bad. We’ll have fixed some problems and created some new ones.
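That parenthetical figure is just compound growth, easy to check:

```python
# Compound growth check: 2.5% real growth sustained for 25 years
print((1.025 ** 25 - 1) * 100)   # ~85.4%, i.e. roughly an 85% larger economy
```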

The technology change is exciting in many ways, but for me, the greatest significance is that towards the end of the next 25 years, we will reach the end of the industrial revolution and enter a new age. The industrial revolution lasted hundreds of years, during which engineers harnessed scientific breakthroughs and their own ingenuity to advance technology. Once we create AI smarter than humans, the dependence on human science and ingenuity ends. Humans begin to lose both understanding and control. Thereafter, we will only be passengers. At first, we’ll be paying passengers in a taxi, deciding the direction of travel or destination, but it won’t be long before the forces of singularity replace that taxi service with AIs deciding for themselves which routes to offer us and running many more for their own culture, to which we may not be invited. That won’t happen overnight, but it will happen quickly. By 2040, that trend may already be unstoppable.

Meanwhile, technology used by humans will demonstrate the diversity and consequences of human nature, for good and bad. We will have some choice of how to use technology, and a certain amount of individual freedom, but the big decisions will be made by sheer population numbers and statistics. Terrorists, nutters and pressure groups will harness asymmetry and vulnerabilities to cause mayhem. Tribal differences and conflicts between demographic, religious, political and other ideological groups will ensure that advancing technology will be used to increase the power of social conflict. Authorities will want to enforce and maintain control and security, so drones, biometrics, advanced sensor miniaturisation and networking will extend and magnify surveillance and greater restrictions will be imposed, while freedom and privacy will evaporate. State oppression is sadly as likely an outcome of advancing technology as any utopian dream. Increasing automation will force a redesign of capitalism. Transhumanism will begin. People will demand more control over their own and their children’s genetics, extra features for their brains and nervous systems. To prevent rebellion, authorities will have little choice but to permit leisure use of smart drugs, virtual escapism, a re-scoping of consciousness. Human nature itself will be put up for redesign.

We may not like this restricted, filtered, politically managed potential offered by future technology. It offers utopia, but only in a theoretical way. Human nature ensures that utopia will not be the actual result. That in turn means that we will need strong and wise leadership, stronger and wiser than we have seen of late to get the best without also getting the worst.

The next 25 years will be arguably the most important in human history. It will be the time when people will have to decide whether we want to live together in prosperity, nurturing and mutual respect, or to use technology to fight, oppress and exploit one another, with the inevitable restrictions and controls that would cause. Sadly, the fine engineering and scientist minds that have got us this far will gradually be taken out of that decision process.

Powering electric vehicles in the city

Simple stuff today just to stop my brain seizing up, nothing terribly new.

Grid lock is a term usually used to describe interlocking traffic jams. But think about a canal lock, used to separate different levels of canal. A grid lock could be used to manage the different levels of stored and kinetic energy within a transport grid, keeping it local as far as possible to avoid transmission losses, and transferring it between different parts of the grid when necessary.

Formula 1 racing cars have energy recovery systems that convert kinetic energy to stored electrical energy during braking – Kinetic Energy Recovery System (KERS). In principle, energy could be shared between members of a race team by transmitting it from one car to another instead of simply storing it on board. For a city-wide system, that makes even more sense. There will always be some vehicles coasting, some braking, some accelerating and some stopped. Storing the energy on board is fine, but requires large capacitor banks or batteries, and that adds very significant cost. If an electrical grid allowed the energy to be moved around between vehicles, each vehicle would only need much smaller storage so costs would fall.

I am very much in favor of powering electric vehicles by using inductive pads on the road surface to transmit energy via coils on the car underside as the vehicles pass over them.  Again, this means that vehicles can manage with small batteries or capacitor banks. Since these are otherwise a large part of the cost, it makes electric transport much more cost-effective. The coils on the road surface could be quite thin, making them unattractive to metal thieves, and perhaps ultimately could be made of graphene once that is cheap to produce.

Moving energy among the many coils only needs conventional electrical grid technology. Peer to peer electricity generation business models are developing too, to sell energy between households without the energy companies taking the lion’s share. Electricity can even be packetised by writing an address and header with details of the sender account and the quantity of energy in the following packet. Since overall energy use will fluctuate somewhat, the infrastructure also needs some local storage to hold energy surpluses when demand is low and feed them back into accelerating vehicles as required. If even that isn’t sufficient capacity, then the grid might open grid locks to overflow larger surpluses onto other regions of the city or onto the main grid. Usually however, there would be a net inflow of energy from the main grid to power all the vehicles, so transmission in the reverse direction would be only occasional.
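A minimal sketch of what such an energy packet might carry, with illustrative field names (no such standard exists yet) and a toy routing rule for keeping surpluses local:

```python
# Sketch of 'packetised' electricity accounting: each transfer carries a
# header naming the sender account and the quantity of energy that follows.
# Field names and the local cap are illustrative only.
from dataclasses import dataclass

@dataclass
class EnergyPacket:
    sender_account: str     # who gets credited or debited
    receiver_account: str   # vehicle or storage bank receiving the energy
    energy_wh: float        # quantity of energy in the packet
    locality: str           # grid segment, to keep transfers local

def route(packet, local_surplus_wh):
    """Keep energy local where possible; overflow to the main grid otherwise."""
    if local_surplus_wh + packet.energy_wh <= 5_000:   # hypothetical local cap
        return "stored locally", local_surplus_wh + packet.energy_wh
    return "overflowed to main grid", local_surplus_wh

print(route(EnergyPacket("car_42", "storage_A", 350, "junction_7"), 4_800))
```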

Such a system keeps most energy local, reducing transmission losses and simplifying signalling, whilst allowing local energy producers to be included and enabling storage for renewable energy. As one traffic stream slows, another can recycle that same energy to accelerate. It reduces the environmental demands of running a transport system, so has both cost and environmental benefits.

 

 

Increasing internet capacity: electron pipes

The electron pipe is a slightly mis-named high speed comms solution that would make optical fibre look like two bean cans and a bit of loose string. I invented it in 1990, but it still remains in the future since we can’t do it yet, and it might not even be possible; some of the physics is in doubt. The idea is to use an evacuated tube and send a precision controlled beam of high energy particles down it instead of crude floods of electrons down a wire or photons in fibres. Here’s a pathetic illustration:

Electron pipe

 

Initially I thought of using 1MeV electrons, then considered that larger particles such as neutrons or protons or even ionised atoms might be better, though neutrons would certainly be harder to control. The wavelength of 1MeV electrons would be pretty small, allowing very high frequency signals and data rates, many times what is possible with visible photons down fibres. Whether this could be made to work over long distances is questionable, but over short distances it should be feasible and might be useful for high speed chip interconnects.

The energy of the beam could be made a lot higher, increasing bandwidth, but 1MeV seemed a reasonable start point, offering a million times more bandwidth than fibre.

The Problem

Predictions for memory, longer term storage, cloud service demands and computing speeds are already heading towards fibre limits when millions of users are sharing single fibres. Although the limits won’t be reached soon, it is useful to have a technology in the R&D pipeline that can extend the life of the internet after fibre fills up, to avoid costs rising. If communication is not to become a major bottleneck (even assuming we can achieve these rates by then), new means of transmission need to be found.

The Solution

A way must be found to utilise higher frequency entities than light. The obvious candidates are either gamma rays or ‘elementary’ particles such as electrons, protons and their relatives. Planck’s Law shows that frequency is related to energy. A 1.3µm photon has a frequency of 2.3 x 10^14. By contrast, 1MeV gives a frequency of 2.4 x 10^20 and a factor of a million increase in bandwidth, assuming it can be used (much higher energies should be feasible if higher bandwidth is needed; 10GeV energies would give 10^24). An ‘electron pipe’ containing a beam of high energy electrons may therefore offer a longer term solution to the bandwidth bottleneck. Electrons are easily accelerated and contained and also reasonably well understood. The electron beam could be prevented from colliding with the pipe walls by strong magnetic fields, which may become practical in the field through progress in superconductivity. Such a system may well be feasible. Certainly prospects of data rates of these orders are appealing.
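The arithmetic behind those figures is easy to reproduce:

```python
# Reproducing the bandwidth comparison: frequency of a 1.3 µm photon versus
# the frequencies corresponding to 1 MeV and 10 GeV via E = h*f.
H = 6.626e-34    # Planck constant, J*s
C = 3.0e8        # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

print(C / 1.3e-6)      # ~2.3e14 Hz for a 1.3 µm photon
print(1e6 * EV / H)    # ~2.4e20 Hz for 1 MeV
print(10e9 * EV / H)   # ~2.4e24 Hz for 10 GeV
```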

Lots of R&D would be needed to develop such communication systems. At first glance, they would seem to be more suited to high speed core network links, where the presumably high costs could be justified. Obvious problems exist which need to be studied, such as mechanisms for ultra high speed modulation and detection of the signals. If the problems can be solved, the rewards are high. The optical ether idea suffers from bandwidth constraint problems. Adding factors of 10^6 – 10^10 on top of this may make a difference!