Category Archives: Computing

The future of bacteria

Bacteria have already taken the prize for the first synthetic organism. Craig Venter’s team claimed the first synthetic bacterium in 2010.

Bacteria are being genetically modified for a range of roles, such as converting materials for easier extraction (e.g. coal to gas, or concentrating elements in landfill sites to make extraction easier), making new food sources (alongside algae), carbon fixation, pollutant detection and other sensory roles, decorative, clothing or cosmetic roles based on color changing, special surface treatments, biodegradable construction or packing materials, self-organizing printing… There are many others, even ignoring all the military ones.

I have written many times on smart yogurt now, and it has to be the highlight of the bacterial future: one of the greatest hopes for human survival, as well as one of the greatest potential dangers. Here is an extract from a previous blog:

Progress is continuing to harness bacteria to make components of electronic circuits (after which the bacteria are dissolved to leave the electronics). Bacteria can also have genes added to emit light or electrical signals. They could later be enhanced so that as well as being able to fabricate electronic components, they could power them too. We might add various other features too, but eventually, we’re likely to end up with bacteria that contain electronics and can connect to other bacteria nearby that contain other electronics to make sophisticated circuits. We could obviously harness self-assembly and self-organisation, which are also progressing nicely. The result is that we will get smart bacteria, collectively making sophisticated, intelligent, conscious entities of a wide variety, with lots of sensory capability distributed over a wide range. Bacteria Sapiens.

I often talk about smart yogurt using such an approach as a key future computing solution. If it were to stay in a yogurt pot, it would be easy to control. But it won’t. A collective bacterial intelligence such as this could gain a global presence, and could exist on land, in the sea and in the air, maybe even in space. Engineering lots of different biological properties into it could allow colonization of every niche. In fact, the first few generations of bacteria sapiens might be smart enough to design their own offspring. They could probably buy or gain access to equipment to fabricate them and release them to multiply. It might be impossible for humans to stop this once it gets past a certain point. Accidents happen, as do rogue regimes, terrorism and general mad-scientist mischief.

Transhumanists seem to think their goal is the default path for humanity, that transhumanism is inevitable. Well, it can’t easily happen without going first through transbacteria research stages, and that implies that we might well have to ask transbacteria for their consent before we can develop true transhumans.

Self-organizing printing is a likely future enhancement for 3D printing. If a 3D printer can print bacteria (onto the surface of another material being laid down, as an ingredient suspended in the extrusion material itself, or even as a bacterial paste), and the bacteria can then generate or modify other materials, or use self-organisation principles to form special structures or patterns, then the range of objects that can be printed will extend. In some cases, the bacteria may be involved in the construction and then die or be dissolved away.

Estimating IoT value? Count ALL the beans!

In this morning’s news:

http://www.telegraph.co.uk/technology/news/11043549/UK-funds-development-of-world-wide-web-for-machines.html

£1.6M investment by UK Technology Strategy Board in Internet-of-Things HyperCat standard, which the article says will add £100Bn to the UK economy by 2020.

Gartner says that IoT has reached the hype peak of their adoption curve, and I agree. Connecting machines together, and especially adding networked sensors, will certainly increase technology capability across many areas of our lives, but the appeal is often overstated and the dangers often overlooked. Value should not be measured in purely financial terms either. If you value health, wealth and happiness, don’t just measure the wealth. We value other things too of course. It is too tempting just to count the most conspicuous beans. For IoT, which really just adds a layer of extra functionality onto an already technology-rich environment, that is rather like estimating the value of a chili con carne by counting the kidney beans in it.

The headline negatives of privacy and security have been addressed often, so I don’t need to explore them much more here, but let’s look at a couple of typical examples from the news article. Allowing remotely controlled washing machines will obviously impact your personal choice of laundry scheduling. The many similar shifts of control of your life to other agencies will all add up. Another one: ‘motorists could benefit from cheaper insurance if their vehicles were constantly transmitting positioning data’. Really? Insurance companies won’t want to earn less, so motorists on average will give them at least as much profit as before. What will happen is that insurance companies will enforce driving styles and car maintenance regimes that reduce your likelihood of a claim, or use that data to avoid paying out in some cases. If you have to rigidly obey lots of rules all of the time, then driving will become far less enjoyable. Having to remember to check the tyre pressures and oil level every two weeks on pain of having your insurance voided is not one of the beans listed in the article, but it is entirely analogous to the typical home insurance rule that all your windows must have locks, and they must all be locked and the keys hidden out of sight, before the insurer will pay up on a burglary.

Overall, IoT will add functionality, but it certainly will not always be used to improve our lives. Look at the way the web developed. Think about the cookies and the pop-ups and the tracking and the incessant virus protection updates needed because of the extra functions built into browsers. You didn’t want those; they were added to increase capability and revenue for the paying site owners, not for the non-paying browsers. IoT will be the same. Some things will make minor aspects of your life easier, but the price will be that you are far more controlled, with far less freedom, less privacy and less security. Most of the data collected for business use or to enhance your life will also be available to government and police. We see every day the nonsense of the statement that if you have done nothing wrong, then you have nothing to fear. If you buy all that home kit with energy monitoring etc., how long before the data is hacked and you get put on militant environmentalist blacklists because you leave devices on standby? For every area where IoT will save you time or money or improve your control, there will be many others where it does the opposite, forcing you to do more security checks, spend more money on car, home and IoT maintenance, spend more time following administrative procedures, and even follow health regimes enforced by government or insurance companies. IoT promises milk and honey, but will deliver them only as part of a much bigger and unwelcome lifestyle change. Sure, you can have a little more control, but only if you relinquish much more control elsewhere.

As IoT starts rolling out, these and many more issues will hit the press, and people will start to realise the downside. That will reduce the attractiveness of owning or installing such stuff, or subscribing to services that use it. There will be a very significant drop in the economic value from the hype. Yes, we could do it all and get the headline economic benefit, but the cost of greatly reduced quality of life is too high, so we won’t.

Counting the kidney beans in your chili is fine, but it won’t tell you how hot it is, and when you start eating it you may decide the beans just aren’t worth the pain.

I still agree that IoT can be a good thing, but the evidence of web implementation suggests we’re more likely to go through decades of abuse and grief before we get the promised benefits. Being honest at the outset about the true costs and lifestyle trade-offs will help people decide, and maybe we can get to the good times faster if that process leads to better controls and better implementation.

Ultra-simple computing: Part 4

Gel processing

One problem with making computers with a lot of cores is the wiring. Another is the distribution of tasks among the cores. Both can be solved with relatively simple architecture. Processing chips usually have a lot of connectors, letting them get data in parallel. But a beam of light can contain rays of millions of wavelengths, far more parallelism than is possible with wiring. If chips communicated using light with high-density wavelength division multiplexing, that would solve some wiring issues. Taking another simple step, processors that are freed from wiring don’t have to be on a circuit board, but could be suspended in some sort of gel. They could then use free-space interconnection to connect to many nearby chips. Line-of-sight availability would be much easier to achieve than on a circuit board. The gel can also be used to cool the chips.

Simpler chips with very few wired connections also means less internal wiring too. This reduces size still further and permits higher density of suspension without compromising line of sight.

Ripple scheduler

Process scheduling can also be done more simply with many processors; complex software algorithms are not needed. In an array of many processors, some would be idle while others are already engaged on tasks. When a job needs processing, a task request (this could be as simple as a short pulse of a certain frequency) would be broadcast and would propagate through the array. On encountering an idle processor, that processor would respond with an accept response (again, this could be a single pulse of another frequency). This would also propagate out as a wave through the array. These two waves may arrive at a given processor in quick succession.

Other processors could stand down automatically once one has accepted the job (i.e. when they detect the acceptance wave). That would be appropriate when all processors are equally able. Alternatively, if processors have different capabilities, the requesting agent would pick a suitable one from the returning acceptances, send a point-to-point message to it, and send out a cancel broadcast wave to stand the others down. It would exchange details about the task with this processor on a point-to-point link, avoiding swamping the system with unnecessary broadcast messages. An idle processor in the array would thus see a request wave, followed by a number of accept waves. It may then receive a personalized point-to-point message with task information, or, if it hasn’t been chosen, it would just see the cancel wave. Busy processors would ignore all communications except those directed specifically to them.
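To make the idea concrete, here is a toy sketch (in Python, purely for illustration; a real system would be analog physics, not code) of the net effect of the two waves on a one-dimensional array. The request wave reaches each processor after a delay proportional to its distance from the requester, so the nearest idle processor accepts first, and its acceptance wave stands the rest down:

```python
# Toy model of the ripple scheduler on a 1-D array of processors.
# Waves propagate one cell per tick, so the request reaches processor i
# after |i - origin| ticks; the nearest idle processor therefore wins.
def ripple_schedule(busy, origin):
    """Return the index of the processor that ends up with the task
    broadcast from `origin`: the nearest idle one (ties go to the
    lower index). Returns None if every processor is busy."""
    idle = [i for i, b in enumerate(busy) if not b]
    if not idle:
        return None
    # Nearest idle processor accepts first; its acceptance wave
    # causes all other idle processors to stand down.
    return min(idle, key=lambda i: (abs(i - origin), i))
```

For example, with `busy = [True, True, False, True, False]` and a request from processor 0, the idle processor at index 2 hears the request two ticks before the one at index 4, so it takes the job.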

I’m not saying ripple scheduling is necessarily the best approach; it is just an example of a very simple system for process scheduling that doesn’t need sophisticated algorithms and code.

Activator Pastes

It is obvious that this kind of simple protocol can be used with a gel processing medium populated with a suitable mixture of different kinds of processors, sensors, storage, transmission and power devices to provide a fully scalable self-organizing array that can perform a high task load with very little administrative overhead. To make your smart gel, you might just choose the volume or weight ratios of the components you want and stir them into a gel, rather like mixing a cocktail. A paste made up in this way could be used to add sensing, processing and storage to any surface just by painting some of the paste onto it.

A highly sophisticated distributed cloud sensor network, for example, could be made just by painting dabs of paste onto lamp posts. Solar power or energy harvesting devices in the paste would power the sensors to make occasional readings, pre-process them, and send them off to the net. This approach would work well for environmental or structural monitoring, surveillance, even for everyday functions like adding parking meters to the lines marking the spaces on the road, where they would interact with ID devices in the car or an app on the driver’s smartphone.

Special inks could contain a suspension of such particles and add a highly secure electronic signature onto one signed by pen and ink.

The tacky putty stuff that we use to stick paper to walls could use activator paste as the electronic storage and processing medium, letting you manage the content of an e-paper calendar or notice on a wall.

I can think of lots of ways of using smart pastes in health monitoring, packaging, smart makeup and so on. The basic principle stays the same though. It would be very cheap and yet very powerful, with many potential uses. Self-organising, and needs no set up beyond giving it a job to do, which could come from any of your devices. You’d probably buy it by the litre, keep some in the jar as your computer, and paste the rest of it all over the place to make your skin, your clothes, your work-spaces and your world smart. Works for me.

 

Ultra-simple computing part 3

Just in time v Just in case

Although the problem isn’t as bad now as it has been, a lot of software runs on your computers just in case it might be needed. Often it isn’t, and sometimes the PC is shut down or rebooted without it ever having been used. This wastes our time, wastes a little energy, and potentially adds functionality or weaknesses that can be exploited by hackers.

If it only loaded the essential pieces of software, risks would be minimised and initial delays reduced. There would be a slightly bigger delay once the code is needed, because it would have to load then, but since a lot of code is rarely used, the overall result would still be a big win. This would improve security and reliability. If all I am doing today is typing and checking occasional emails, a lot of the software currently loaded in my PC memory is not needed. I don’t even need a firewall running all the time if network access is disabled in between my email checks. If networking and the firewall are started when I want to check email or start browsing, and all network access is disabled again afterwards, then security would be a bit better. I also don’t need all the fancy facilities in Office when all I am doing is typing. I definitely don’t want any part of Office to use any kind of networking in either direction for any reason (I use Thunderbird, not Outlook, for email). So don’t load the code yet; I don’t want it running; it only adds risks, not benefits. If I want to do something fancy in a few weeks’ time, load the code then. If I want to look up a word in a dictionary or check a hyperlink, I could launch a browser and copy and paste it. Why do anything until asked? Forget doing stuff just in case it might occasionally generate a tiny time saving. Just in time is far safer and better than just in case.

So, an ultra-simple computer should only load what is needed, when it is needed. It would only open communications when needed, and then only to the specific destination required. That frees up processors and memory, reduces risks and improves speed.
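As a small illustration of the just-in-time principle (in Python, purely as an example language; the proposal here is about platform design, not any particular stack), a component can be left completely unloaded until something actually touches it:

```python
import importlib

class LazyModule:
    """Defer importing a module until an attribute is first accessed,
    a minimal sketch of 'just in time' rather than 'just in case'."""
    def __init__(self, name):
        self._name = name
        self._module = None          # nothing loaded yet

    def __getattr__(self, attr):
        if self._module is None:     # load only on first real use
            self._module = importlib.import_module(self._name)
        return getattr(self._module, attr)

json = LazyModule("json")            # no import has happened yet
# ... much later, only if the feature is actually used:
data = json.loads('{"ok": true}')    # the import happens here
```

If the feature is never used, the code never loads, and whatever attack surface it carries never exists in that session.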

Software distribution

Storing software on hard disks or in memory lets the files be changed, possibly by a virus. Suppose instead that software were to be distributed on ROM chips. They can be very cheap, so why not? No apps, no downloads. All the software on your machine would be in read-only memory, essentially part of the hardware. This would change a few things in computer design. First, you’d have a board with lots of nice slots in it, into which you plug the memory chips you’ve bought with the programs you want on them. (I’ll get to tablets and phones later; obviously a slightly different approach is needed for portable devices). Manufacturers would have a huge interest in checking their code first, because they can’t put fixes out later except on replacement chips. Updating the software to a new version would simply mean inserting a new chip. Secondly, since the chips are read-only, the software on them cannot be corrupted. There is no mechanism by which a virus or other malware could get onto the chip.

Apps could be distributed in collections – lifestyle or business collections. You could buy subscriptions to app agencies that issued regular chips with their baskets of apps on them. Or you could access apps online via the cloud. Your machine would stay clean.

It could go further. As well as memory chips, modules could include processing, controller or sensory capabilities. Main processing may still be in the main part of the computer but specialist capabilities could be added in this way.

So, what about tablets and phones? Obviously you can’t plug lots of extra chips into slots in those because it would be too cumbersome to make them with lots of slots to do so. One approach would be to use your PC or laptop to store and keep up to date a single storage chip that goes into your tablet or phone. It could use a re-programmable ROM that can’t be tampered with by your tablet. All your apps would live on it, but it would be made clean and fresh every day. Tablets could have a simple slot to insert that single chip, just as a few already do for extra memory.

Multi-layered security

If your computer is based on algorithms encoded on read only memory chips or better still, directly as hardware circuits, then it could boot from cold very fast, and would be clean of any malware. To be useful, it would need a decent amount of working memory too, and of course that could provide a short term residence for malware, but a restart would clean it all away. That provides a computer that can easily be reset to a clean state and work properly again right away.

Another layer of defense is to disallow programs access to things they don’t need. You don’t open every door and window in your home every time you want to go in or out. Why open every possible entrance that your office automation package might ever want to use just because you want to type an article? Why open the ability to remotely install or run programs on your computer without your knowledge and consent just because you want to read a news article or look at a cute kitten video? Yet we have accepted such appallingly bad practice from the web browser developers because we have had no choice. It seems that the developers’ desire to provide open windows to anyone that wants to use them outweighs the users’ desire for basic security common sense. So the next layer of defense is really pretty obvious. We want a browser that doesn’t open doors and windows until we explicitly tell it to, and even then it checks everything that tries to get through.

It may still be that you occasionally want to run software from a website, maybe to play a game. Another layer of defense that could help then is to restrict remote executables to a limited range of commands with limited scope. It is also easy to arrange a sandbox where code can run but can’t influence anything outside the sandbox. For example, there is no reason a game would need to inspect files on your computer apart from stored games or game-related files. Creating a sandbox that can run a large range of agreed functions to enable games or other remote applications, but is sealed from anything else on the computer, would enable benign remote executables without compromising security. Even if some were less safe, confining activity to the sandbox allows the machine to be sterilized by sweeping that area, without necessitating a full reset. Even without the sandbox, knowing the full capability of the range of permitted commands enables damage limitation and precision cleaning. The range of commands should be created with the end user as the priority, letting them do what they want with the lowest danger. It should not be created with application writers as the top priority, since that is where the security risk arises. Not all potential application writers are benign, and many want to exploit or harm the end user for their own purposes. Everyone in IT really ought to know that and never forget it for a minute; it really shouldn’t need to be said.
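As a very rough sketch of the command-whitelist idea (hypothetical command names, in Python purely for illustration, and obviously not a complete security boundary by itself), remote code would be limited to a small approved command set, and anything outside it refused rather than run:

```python
# Minimal sketch of a command-whitelist sandbox: remote programs are
# lists of (command, args) pairs, and only approved commands execute.
ALLOWED = {
    "add":    lambda a, b: a + b,            # example approved operations
    "concat": lambda a, b: str(a) + str(b),
}

def run_sandboxed(program):
    """Execute a list of (command, args) pairs; any command outside
    the approved set is rejected instead of being run."""
    results = []
    for cmd, args in program:
        if cmd not in ALLOWED:
            raise PermissionError(f"command {cmd!r} not permitted")
        results.append(ALLOWED[cmd](*args))
    return results
```

Because the permitted command set is known in full, you know in advance the worst that sandboxed code can do, which is exactly the damage-limitation property described above.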

Ultra-simple computing: Part 2

Chip technology

My everyday PC uses an Intel Core i7-3770 processor running at 3.4GHz. It has 4 cores running 8 threads on 1.4 billion 22nm transistors in just 160mm^2 of chip. It has an NVIDIA GeForce GTX 660 graphics card and 16GB of main memory. It is OK most of the time, but although processor and memory utilisation rarely gets above 30%, its response is often far from instant.

Let me compare it briefly with my (subjectively, at time of ownership) best ever computer, my Macintosh IIfx, RIP, which I got in 1991, the computer on which I first documented both the active contact lens and text messaging, and on which I suppose I also started this project. The Mac IIfx ran a 68030 processor at 40MHz, with 273,000 transistors, 4MB of RAM and an 80MB hard drive. Every computer I’ve used since then has given me extra function at the expense of lower performance, wasted time and frustration.

Although its OS is stored on a 128GB solid state disk, my current PC takes several seconds longer to boot than my Macintosh IIfx did – it went from cold to fully operational in 14 seconds – yes, I timed it. On my PC today, clicking a browser icon to first page usually takes a few seconds. Clicking on a Word document back then took a couple of seconds to open. It still does now. Both computers gave real-time response to typing, and both featured occasional unexplained delays. I didn’t have any need for a firewall or virus checkers back then, but now I run tedious maintenance routines a few times every week. (The only virus I had before 2000 was nVIR, which came on the Mac II system disks). I still don’t get many viruses, but the significant time I spend avoiding them has to be counted too.

Going back further still, to my first ever computer in 1981: it was an Apple II, with only 9,000 transistors running at 2.5MHz and a piddling 32kB of memory. The OS was tiny. Nevertheless, on it I wrote my own spreadsheet, graphics programs, lens design programs, and an assortment of missile, aerodynamic and electromagnetic simulations. Using the same transistors as the i7, you could make 1,000 of these in a single square millimetre!

Of course some things are better now. My PC has amazing graphics and image processing capabilities, though I rarely make full use of them. My PC allows me to browse the net (and see video ads). If I don’t mind telling Google who I am, I can also watch videos on YouTube, or I could tell the BBC or some other video provider who I am and watch theirs. I could theoretically play quite sophisticated computer games, but it is my work machine, so I don’t. I do use it as a music player or to show photos. But mostly, I use it to write, just like my Apple II and my Mac IIfx. Subjectively, it is about the same speed for those tasks. Graphics and video are the main things that differ.

I’m not suggesting going back to an Apple II or even a IIfx. However, using i7 chip tech, a 9,000-transistor processor running 1,360 times faster and taking up 1/1000th of a square millimetre would still let me write documents and simulations, but would be blazingly fast compared to my old Apple II. I could fit another 150,000 of them on the same chip space as the i7. Or I could have 5,128 Mac IIfxs, each running at 85 times the original speed, for a tiny fraction of the price. There are certainly a few promising trees in the forest that nobody seems to have barked up. As an interesting aside, that 22nm-tech Apple II chip would only be about ten times bigger than a skin cell, probably less now, since my PC is already several months old.
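For anyone who wants to check the back-of-envelope ratios, they follow directly from the transistor counts and clock speeds quoted above (the 9,000-transistor and 2.5MHz figures are the ones used in the text):

```python
# Back-of-envelope checks for the transistor-budget comparisons.
i7_transistors = 1.4e9        # Core i7-3770, per the text
i7_clock_hz = 3.4e9
fx_transistors = 273_000      # 68030 in the Mac IIfx
fx_clock_hz = 40e6
apple2_transistors = 9_000    # figure quoted in the text
apple2_clock_hz = 2.5e6

print(i7_transistors / apple2_transistors)  # Apple II-class cores per i7 die
print(i7_transistors / fx_transistors)      # Mac IIfx-class cores per i7 die
print(i7_clock_hz / fx_clock_hz)            # clock speedup over the IIfx
print(i7_clock_hz / apple2_clock_hz)        # clock speedup over the Apple II
```

The transistor budget works out to roughly 155,000 Apple II-class processors or about 5,128 IIfx-class ones, at 1,360x and 85x the original clocks respectively, matching the figures in the text.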

At the very least, that raises the question of what all this extra processing is needed for, and why there is still ever any noticeable delay in doing anything in spite of it. Each of those earlier machines was perfectly adequate for everyday tasks such as typing or spreadsheeting. All the extra speed has an impact only on some things, and most of it is being wasted by poor code. Some of the delays we had 20 and 30 years ago still affect us just as badly today.

The main point though is that if you can make thousands of processors on a standard sized chip, you don’t have to run multitasking. Each task could have a processor all to itself.

The operating system currently runs programs to check all the processes that need attention, determine their priorities, schedule processing for them, and copy their data in and out of memory. That is not needed if each process can have its own dedicated processor and memory all the time. There are lots of ways of using basic physics to allocate processes to processors, relying on basic statistics to ensure that collisions rarely occur. No code is needed at all.
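The statistics being relied on here are essentially the birthday problem: if k concurrent tasks each grab one of n processors at random, the chance of any clash stays small as long as n is much larger than k. A quick calculation (the pool sizes are illustrative, not a design spec):

```python
# Birthday-problem estimate of clash probability when k concurrent
# tasks each randomly pick one of n processors.
def collision_probability(n, k):
    """Probability that at least two of k tasks pick the same
    processor out of a pool of n."""
    p_clear = 1.0
    for i in range(k):
        p_clear *= (n - i) / n   # i-th task must avoid i taken slots
    return 1.0 - p_clear

print(collision_probability(100_000, 100))    # ~5% chance of any clash
print(collision_probability(1_000_000, 100))  # under 1%
```

So with a large enough pool, simple random allocation with a retry on the rare clash is statistically good enough, and no scheduling code is needed.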

An ultra-simple computer could therefore have a large pool of powerful, free processors, each with their own memory, allocated on demand using simple physical processes. (I will describe a few options for the basic physics processes later). With no competition for memory or processing, a lot of delays would be eliminated too.

Ultra-simple computing: Part 1

Introduction

This is first part of a techie series. If you aren’t interested in computing, move along, nothing here. It is a big topic so I will cover it in several manageable parts.

Like many people, I spent a good few hours changing passwords after the Heartbleed problem and then again after eBay’s screw-up. It is a futile task in some ways, because passwords are no longer a secure defense anyway. A decent hacker with a decent computer can crack hundreds of passwords in an hour, so unless an account is locked after a few failed attempts, and many aren’t, passwords only keep out casual observers and the most amateurish hackers.

The need for simplicity

A lot of problems are caused by the complexity of today’s software, making it impossible to find every error and hole. Weaknesses have been added to operating systems, office automation tools and browsers to increase functionality for only a few users, even though they add little for most of us most of the time. I don’t think I have ever executed a macro in Microsoft Office, for example, and I’ve certainly never used print merge or many of its other publishing and formatting features. I was perfectly happy with Word 93, and most things added since then (apart from the real-time spelling and grammar checker) have added irrelevant and worthless features at the expense of safety. I can see very little user advantage in allowing pop-ups on websites, or tracking cookies. Their primary purpose is to learn about us to make marketing more precise. I can see why they want that, but I can’t see why I should. Users generally want pull marketing, not push, and pull doesn’t need cookies; there are better ways of sending your standard data when needed, if that’s what you want to do. There are many better ways of automating logons to regular sites if that is needed.

In a world where more of the people who wish us harm are online, it is time to design an alternative platform that is designed specifically to be secure from the start, where no features are added that allow remote access or control without deliberate, explicit permission. It can be done. A machine with a strictly limited set of commands and access can be made secure and can even be networked safely. We may have to sacrifice a few bells and whistles, but I don’t think we will need to sacrifice many that we actually want or need. It may be less easy to track us and advertise at us or to offer remote machine analysis tools, but I can live with that and you can too. Almost all the services we genuinely want can still be provided. You could still browse the net, still buy stuff, still play games with others, and socialize. But you wouldn’t be able to install or run code on someone else’s machine without their explicit knowledge. Every time you turn the machine on, it would be squeaky clean. That’s already a security benefit.

I call it ultra-simple computing. It is based on the principle that simplicity and a limited command set makes it easy to understand and easy to secure. That basic physics and logic is more reliable than severely bloated code. That enough is enough, and more than that is too much.

We’ve been barking up the wrong trees

There are a few things you take for granted in your IT that needn’t be so.

Your PC has an extremely large operating system. So does your tablet, your phone, games console… That isn’t really necessary. It wasn’t always the case and it doesn’t have to be the case tomorrow.

Your operating system still assumes that your PC has only a few processing cores and has to allocate priorities and run-time on those cores for each process. That isn’t necessary.

Although you probably use some software in the cloud, you probably also download a lot of software off the net or install from a CD or DVD. That isn’t necessary.

You access the net via an ISP. That isn’t necessary. Almost unavoidable at present, but only due to bad group-think. Really, it isn’t necessary.

You store data and executable code in the same memory and therefore have to run analysis tools that check all the data in case some is executable. That isn’t necessary.

You run virus checkers and firewalls to prevent unauthorized code execution or remote access. That isn’t necessary.

Overall, we live with an IT system that is severely unfit for purpose. It is dangerous, bloated, inefficient, excessively resource and energy intensive, extremely fragile and yet vulnerable to attack via many routes, designed with the user as a lower priority than suppliers, with the philosophy of functionality at any price. The good news is that it can be replaced by one that is absolutely fit for purpose, secure, invulnerable, cheap and reliable, resource-efficient, and works just fine. Even better, it could be extremely cheap so you could have both and live as risky an online life in those areas that don’t really matter, knowing you have a safe platform to fall back on when your risky system fails or when you want to do anything that involves your money or private data.

Interfacial prejudice

This blog was prompted by an interaction with Nick Colosimo. Thanks, Nick.

We were discussing whether usage differences for gadgets were generational. I think they are but not because older people find it hard to learn new tricks. Apart from a few unfortunate people whose brains go downhill when they get old, older people have shown they are perfectly able and willing to learn web stuff. Older people were among the busiest early adopters of social media.

I think the problem is the volume of earlier habits that need to be unlearned. I am 53 and have used computers every day since 1981. I have used slide rules and log tables, an abacus, an analog computer, several mainframes, a few minicomputers, many assorted Macs and PCs and numerous PDAs, smartphones and now tablets. They all have very different ways of using them and although I can’t say I struggle with any of them, I do find the differing implementations of features and mechanisms annoying. Each time a new operating system comes along, or a new style of PDA, you have to learn a new design language, remember where all the menus, sub-menus and all the various features are hidden on this one, how they interconnect and what depends on what.

That’s where the prejudice kicks in. The many hours of experience you have on previous systems have made you adept at navigating through a sea of features, menus, facilities. You are native to the design language, the way you do things, the places to look for buttons or menus, even what the buttons look like. You understand its culture, thoroughly. When a new device or OS is very different, using it is like going on holiday. It is like emigrating if you’re making a permanent switch. You have the ability to adapt, but the prejudice caused by your long experience on a previous system makes that harder. Your first uses involve translation from the old to the new, just like translating foreignish to your own language, rather than thinking in the new language as you will after lengthy exposure. Your attitude to anything on the new system is colored by your experiences with the old one.

It isn’t stupidity that makes you slow and incompetent. It’s interfacial prejudice.

Google is wrong. We don’t all want gadgets that predict our needs.

In the early 1990s, lots of people started talking about future tech that would work out what we want and make it happen. A whole batch of new ideas came out – internet fridges, smart waste-baskets, the ability to control your air conditioning from the office or open and close curtains while you’re away on holiday. Almost 25 years on, and we still see just a trickle of prototypes, followed by a tsunami of apathy from the customer base.

Do you want an internet fridge that orders milk when you’re running out, or speaks to you all the time telling you what you’re short of, or sends messages to your phone when you are shopping? I certainly don’t. It would be extremely irritating. It would crash frequently. If I forget to clean the sensors, it won’t work. If I don’t regularly update the software, and update the security, and get it serviced, it won’t work. It will ask me for passwords. If my smart loo notices I’m putting on weight, the fridge will refuse to open, and tell the microwave and cooker too so that they won’t cook my lunch. It will tell my credit card not to let me buy chocolate bars or ice cream. It will be a week before kitchen rage sets in and I take a hammer to it. The smart waste bin will also be covered in tomato sauce from bean cans held in a hundred orientations until the sensor finally recognizes the scrap of bar-code that hasn’t been ripped off. Trust me, we looked at all this decades ago and found the whole idea wanting. A few show-off early adopters want it to show how cool and trendy they are, then they’ll turn it off when no-one is watching.

EDIT: example of security risks from smart devices (this one has since been fixed) http://www.bbc.co.uk/news/technology-28208905

If I am with my best friend, who has known me for 30 years, or my wife, who also knows me quite well, they ask me what I want, they discuss options with me. They don’t think they know best and just decide things. If they did, they’d soon get moaned at. If I don’t want my wife or my best friend to assume they know what I want best, why would I want gadgets to do that?

The first thing I did after checking out my smart TV was to disconnect it from the network so that it won’t upload anything and won’t get hacked or infected with viruses. Lots of people have complained about new adverts on TV that control their new Xboxes via the Kinect voice recognition. The ‘smart’ TV receiver might be switched off as that happens. I am already sick of things turning themselves off without my consent because they think they know what I want.

They don’t know what is best. They don’t know what I want. Google doesn’t either. Their many ideas about giving lots of information it thinks I want while I am out are also things I will not welcome. Is the future of UI gadgets that predict your needs, as Wired says Google thinks? No, it isn’t. What I want is a really intuitive interface so I can ask for what I want, when I want it. The very last thing I want is an idiot device thinking it knows better than I do.

We are not there yet. We are nowhere near there yet. Until we are, let me make my own decisions. PLEASE!

Your most likely cause of death is being switched off

This one’s short and sweet.

The majority of you reading this blog live in the USA, UK, Canada or Australia. More than half of you are under 40.

That means your natural life expectancy is over 85, so statistically, your body will probably live until after 2060.
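The arithmetic behind that claim is simple enough to sketch (a rough illustration, assuming the post’s own figures: written around 2014, readers under 40, life expectancy of about 85):

```python
# Rough sketch of the life-expectancy arithmetic above,
# using the post's approximate figures (assumptions, not exact data).
post_year = 2014        # roughly when this was written
max_reader_age = 40     # "more than half of you are under 40"
life_expectancy = 85    # "your natural life expectancy is over 85"

birth_year = post_year - max_reader_age     # oldest of the "under 40" readers
death_year = birth_year + life_expectancy   # when such a body gives out

print(death_year)  # → 2059
```

Since the expectancy figure is “over 85”, even the oldest of those readers reaches past 2059, i.e. after about 2060; younger readers go well beyond that.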

By then, electronic mind enhancement will probably mean that most of your mind runs on external electronics, not in your brain, so that your mind won’t die when your body does. You’ll just need to find a new body, probably an android, for those times you aren’t content being on the net. Most of us identify ourselves mainly as our mind, and would think of ourselves as still alive if our mind carries on as if nothing much has happened, which is likely.

Electronic immortality is not true immortality though. Your mind can only survive on the net as long as it is supported by the infrastructure. That will be controlled by others. Future technology will likely be able to defend against asteroid strikes, power surges caused by solar storms and so on, so accidental death seems unlikely for hundreds of years. However, since minds supported on it need energy to continue running and electronics to be provided and maintained, and will want to make trips into the ‘real’ world, or even live there a lot of the time, they will have a significant resource footprint. They will probably not be considered as valuable as other people whose bodies are still alive. In fact they might be considered as competition – for jobs, resources, space, housing, energy… They may even be seen as easy targets for future cyber-terrorists.

So, it seems quite likely, maybe even inevitable, that life limits will be imposed on the vast majority of you. At some point you will simply be switched off. There might be some prioritization, competitions, lotteries or other selection mechanism, but only some will benefit from it.

Since you are unlikely to die when your body ceases to work, your most likely cause of death is therefore to be switched off. Sorry to break that to you.

Future human evolution

I’ve done patches of work on this topic frequently over the last 20 years. It usually features in my books at some point too, but it’s always good to look afresh at anything. Sometimes you see something you didn’t see last time.

Some of the potential future is pretty obvious. I use the word potential, because there are usually choices to be made, regulations that may or may not get in the way, or many other reasons we could divert from the main road or even get blocked completely.

We’ve been learning genetics now for a long time, with a few key breakthroughs. It is certain that our understanding will increase, less certain how far people will be permitted to exploit the potential here in any given time frame. But let’s take a good example to learn a key message first. In IVF, we can filter out embryos that have the ‘wrong’ genes, and use their sibling embryos instead. Few people have a problem with that. At the same time, pregnant women may choose an abortion if they don’t want a child when they discover it is the wrong gender, but in the UK at least, that is illegal. The moral and ethical values of our society are on a random walk though, changing direction frequently. The social assignment of right and wrong can reverse completely in just 30 years. In this example, we saw a complete reversal of attitudes to abortion itself within 30 years, so who is to say we won’t see reversal on the attitude to abortion due to gender? It is unwise to expect that future generations will have the same value sets. In fact, it is highly unlikely that they will.

That lesson likely applies to many technology developments and quite a lot of social ones – such as euthanasia and assisted suicide, both already well into their attitude reversal. At some point, even if something is distasteful to current attitudes, it is pretty likely to be legalized eventually, and hard to ban once the door is opened. There will always be another special case that opens the door a little further. So we should assume that we may eventually use genetics to its full capability, even if it is temporarily blocked for a few decades along the way. The same goes for other biotech, nanotech, IT, AI and any other transhuman enhancements that might come down the road.

So, where can we go in the future? What sorts of splits can we expect in the future human evolution path? It certainly won’t remain as just plain old homo sapiens.

I drew this evolution path a long time ago in the mid 1990s:

human evolution 1

It was clear even then that we could connect external IT to the nervous system, eventually the brain, and this would lead to IT-enhanced senses, memory, processing, higher intelligence, hence homo cyberneticus. (No point in having had to suffer Latin at school if you aren’t allowed to get your own back on it later). Meanwhile, genetic enhancement and optimization of selected features would lead to homo optimus. Converging these two – why should you have to choose, why not have a perfect body and an enhanced mind? – you get homo hybridus. Meanwhile, in the robots and AI world, machine intelligence is increasing and eventually we get the first self-aware AI/robot (it makes little sense to separate the two since networked AI can easily be connected to a machine such as a robot) and this has its own evolution path towards a rich diversity of different kinds of AI and robots, robotus multitudinus. Since both the AI world and the human world could be networked to the same network, it is then easy to see how they could converge, to give homo machinus. This future transhuman would have any of the abilities of humans and machines at its disposal, and eventually the ability to network minds into a shared consciousness. A lot of ordinary conventional humans would remain, but with safe upgrades available, I called them homo sapiens ludditus. As they watch their neighbors getting all the best jobs, winning at all the sports, buying everything, and getting the hottest dates too, many would be tempted to accept the upgrades and homo sapiens might gradually fizzle out.

My future evolution timeline stayed like that for several years. Then in the early 2000s I updated it to include later ideas:

human evolution 2

I realized that we could still add AI into computer games long after it becomes comparable with human intelligence, so games like EA’s The Sims might evolve to allow entire civilizations living within a computer game, each aware of their existence, each running just as real a life as you and I. It is perhaps unlikely that we would allow children any time soon to control fully sentient people within a computer game, acting as some sort of a god to them, but who knows – future people will argue that they’re not really real people, so it’s OK. Anyway, you could employ them in the game to do real knowledge work, and make money, like slaves. But since you’re nice, you might run an incentive program for them that lets them buy their freedom if they do well, letting them migrate into an android. They could even carry on living in their Sims home and still wander round in our world too.

Emigration from computer games into our world could be high, but the reverse is also possible. If the mind is connected well enough, and enhanced so far by external IT that almost all of it runs on the IT instead of in the brain, then when your body dies, your mind would carry on living. It could live in any world, real or fantasy, or move freely between them. (As I explained in my last blog, it would also be able to travel in time, subject to certain very expensive infrastructural requirements.) As well as migrants coming via the electronic immortality route, it is likely that some people who are unhappy in the real world might prefer to end it all and migrate their minds into a virtual world where they might be happy. As an alternative to suicide, I can imagine that would be a popular route. If they feel better later, they could even come back, using an android. So we’d have an interesting future with lots of variants of people, AI and computer game and fantasy characters migrating among various real and imaginary worlds.

But it doesn’t stop there. Meanwhile, back in the biotech labs, progress is continuing to harness bacteria to make components of electronic circuits (after which the bacteria are dissolved to leave the electronics). Bacteria can also have genes added to emit light or electrical signals. They could later be enhanced so that as well as being able to fabricate electronic components, they could power them too. We might add various other features too, but eventually, we’re likely to end up with bacteria that contain electronics and can connect to other bacteria nearby that contain other electronics to make sophisticated circuits. We could obviously harness self-assembly and self-organisation, which are also progressing nicely. The result is that we will get smart bacteria, collectively making sophisticated, intelligent, conscious entities of a wide variety, with lots of sensory capability distributed over a wide range. Bacteria Sapiens.

I often talk about smart yogurt using such an approach as a key future computing solution. If it were to stay in a yogurt pot, it would be easy to control. But it won’t. A collective bacterial intelligence such as this could gain a global presence, and could exist in land, sea and air, maybe even in space. Allowing lots of different biological properties could allow colonization of every niche. In fact, the first few generations of bacteria sapiens might be smart enough to design their own offspring. They could probably buy or gain access to equipment to fabricate them and release them to multiply. It might be impossible for humans to stop this once it gets to a certain point. Accidents happen, as do rogue regimes, terrorism and general mad-scientist type mischief.

And meanwhile, we’ll also be modifying nature. We’ll be genetically enhancing a wide range of organisms, bringing some back from extinction, creating new ones, adding new features, in some cases even changing the basic mechanisms by which nature works. We might even create new kinds of DNA or develop substitutes with enhanced capability. We may change nature’s evolution hugely. With a mix of old and new and modified, nature evolves nicely into Gaia Sapiens.

We’re not finished with the evolution chart though. Here is the next one:

human evolution 3

Just one thing is added. Homo zombius. I realized eventually that the sci-fi ideas of zombies being created by viruses could be entirely feasible. A few viruses, bacteria and other parasites can affect the brains of the victims and change their behaviour to harness them for their own life cycle.

See http://io9.com/12-real-parasites-that-control-the-lives-of-their-hosts-461313366 for fun.

Bacteria sapiens could be highly versatile. It could make virus variants if need be. It could evolve itself to be able to live in our bodies, maybe penetrate our brains. Bacteria sapiens could make tiny components that connect to brain cells and intercept signals within our brains, or put signals back in. It could read our thoughts, and then control our thoughts. It could essentially convert people into remote controlled robots, or zombies as we usually call them. They could even control muscles directly to a point, so even if the zombie is decapitated, it could carry on for a short while. I used that as part of my storyline in Space Anchor. If future humans have widespread availability of cordless electricity, as they might, then it is far-fetched but possible that headless zombies could wander around for ages, using the bacterial sensors to navigate. Homo zombius would be mankind enslaved by bacteria. Hopefully just a few people, but it could be everyone if we lose the battle. Think how difficult a war against bacteria would be, especially if they can penetrate anyone’s brain and intercept thoughts. The Terminator films look a lot less scary when you compare the Terminator with the real potential of smart yogurt.

Bacteria sapiens might also need to be consulted when humans plan any transhuman upgrades. If they don’t consent, we might not be able to do other transhuman stuff. Transhumans might only be possible if transbacteria allow it.

Not done yet. I wrote a couple of weeks ago about fairies. I suggested fairies are entirely feasible future variants that would be ideally suited to space travel.

http://timeguide.wordpress.com/2014/06/06/fairies-will-dominate-space-travel/

They’d also have lots of environmental advantages as well as most other things from the transhuman library. So I think they’re inevitable. So we should add fairies to the future timeline. We need a revised timeline and they certainly deserve their own branch. But I haven’t drawn it yet, hence this blog as an excuse. Before I do and finish this, what else needs to go on it?

Well, time travel in cyberspace is feasible and attractive beyond 2075. It’s not the proper real-world time travel that physics doesn’t permit, but it could feel just like that to those involved, and it could go further than you might think. It certainly will have some effects in the real world, because some of the active members of the society beyond 2075 might be involved in it. It certainly changes the future evolution timeline if people can essentially migrate from one era to another. (There are some very strong caveats applicable here that I tried to explain in that blog, so please don’t misquote me as a nutter – I haven’t forgotten basic physics and logic, I’m just suggesting a feasible implementation of cyberspace that would allow time travel within it. It is really a cyberspace bubble that intersects with the real world at the real-time front, so it doesn’t cause any physics problems, but at that intersection its users can interact fully with the real world, and their cultural experiences of time travel are therefore significant to others outside it.)

What else? OK, well there is a very significant community (many millions of people) that engages in all sorts of fantasy in shared on-line worlds, chat rooms and other forums. Fairies, elves, assorted spirits, assorted gods, dwarves, vampires, werewolves, assorted furry animals, assorted aliens, dolls, living statues, mannequins, remote controlled people, assorted inanimate but living objects, plants and of course assorted robot/android variants are just some of those that already exist in principle; I’m sure I’ve forgotten some here and anyway, many more are invented every year so an exhaustive list would quickly become out of date. In most cases, many people already role play these with a great deal of conviction and imagination, not just in standalone games, but in communities, with rich cultures, back-stories and story-lines. So we know there is strong demand; we’re only waiting for implementation once technology catches up, and it certainly will.

Biotech can do a lot, and nanotech and IT can add greatly to that. If you can design any kind of body with almost any kind of properties and constraints and abilities, and add any kind of IT and sensing and networking and sharing and external links for control and access and duplication, we will have an extremely rich diversity of future forms with an infinite variety of subcultures, cross-fertilization, migration and transformation. In fact, I can’t add just a few branches to my timeline. I need millions. So instead I will just lump all these extras into a huge collected category that allows almost anything, called Homo Whateverus.

So, here is the future of human (and associates) evolution for the next 150 years. A few possible cross-links are omitted for clarity.

evolution

I won’t be around to watch it all happen. But a lot of you will.