Category Archives: AI

The future of Jelly Babies

Another frivolous ‘future of’, recycled from 10 years ago.

I’ve always loved Jelly Babies (Jelly Bears would work just as well if you prefer those), and remember that Dr Who used to eat them a lot too. Perhaps we all have a mean streak, but I’m sure most of us sometimes bite off their heads before eating the rest. But that might all change. I must stress at this point that I have never even spoken to anyone from Bassetts, who make the best ones, and I have absolutely no idea what plans they might have, and they might even strongly disapprove of my suggestions, but they certainly could do this if they wanted, as could anyone else who makes Jelly Babies or Jelly Bears or whatever.

There will soon be various forms of edible electronics. Some electronic devices can already be swallowed, including a miniature video camera that takes pictures all the way through your digestive tract (I don’t know whether they bother retrieving them though). Some plastics can be used as electronic components. We also have loads of radio frequency identity (RFID) tags around now. Some tags work in groups, recording whether they have been separated from each other at some point, for example. With nanotech, we will be able to make tags using little more than a few well-designed molecules, and few materials are so poisonous that a few molecules can do you much harm, so they should be sweet-compliant. So extrapolating a little, it seems reasonable to expect that we might be able to eat things that have specially made RFID tags in them. It would make a lot of sense. They could be used on fruit, so that someone buying an apple could ingest the RFID tag on it without concern. And as well as RFID tags, many other electronic devices can be made very small, and out of fairly safe materials too.

So I propose that Jelly Baby manufacturers add three organic RFID tags to each jelly baby (legs, head and body), some processing, and a simple communications device. When someone bites the head off a jelly baby, the jelly baby would ‘know’, because the tags would now be separated. The other electronics in the jelly baby could then come into play, setting up a wireless connection to the nearest streaming device and screaming through the loudspeakers. It could also link to the rest of the jelly babies left in the packet, sending out a radio distress call. The other jelly babies, and any other friends they can solicit help from via the internet, could then use their combined artificial intelligence to organise a retaliatory strike on the person’s home computer. They might be able to trash the hard drive, upload viruses, or post a stroppy complaint on social media about the person’s cruelty.
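To make the mechanism concrete, here is a toy sketch of the tag-separation logic. Everything in it is hypothetical (no real RFID API or product is assumed); it simply shows how a group of tags could infer a decapitation event when one of them stops responding to the group poll.

```python
# Toy simulation of the three-tag idea: a sweet 'knows' it has been
# decapitated when its head tag no longer answers the group poll.
# All class and method names here are invented for illustration.

class JellyBaby:
    PARTS = ("head", "body", "legs")

    def __init__(self):
        # Each part carries its own tag; all start attached.
        self.attached = {part: True for part in self.PARTS}

    def bite_off(self, part):
        # Biting a part off takes its tag out of range of the others.
        self.attached[part] = False

    def poll_tags(self):
        # A group poll: return the set of tags still responding.
        return {p for p, present in self.attached.items() if present}

    def distress_check(self):
        missing = set(self.PARTS) - self.poll_tags()
        if "head" in missing:
            return "DISTRESS: head separated -- alert the rest of the packet!"
        return "all parts present"

sweet = JellyBaby()
print(sweet.distress_check())   # all parts present
sweet.bite_off("head")
print(sweet.distress_check())   # DISTRESS: head separated -- ...
```

The real-world analogue is tags that record separation from their group, which the text notes already exist; the retaliation logic is left as an exercise for the jelly babies.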

This would make eating jelly babies even more fun than today. People used to spend fortunes going on safari to shoot lions. I presume it was exciting at least in part because there was always a risk that you might not kill the lion and it might eat you instead. With our environmentally responsible attitudes, it is no longer socially acceptable to hunt lions, but jelly babies could be the future replacement. As long as you eat them in the right order, with the appropriate respect and ceremony and so on, you would just enjoy eating a nice sweet. If you get it wrong, your life is trashed for the next day or two. That would level the playing field a bit.

Jelly Baby anyone?

The future of I

Me, myself, I, identity, ego, self, lots of words for more or less the same thing. The way we think of ourselves evolves just like everything else. Perhaps we are still cavemen with better clothes and toys. You may be a man, a dad, a manager, a lover, a friend, an artist and a golfer, and those are all just descendants of caveman, dad, tribal leader, lover, friend, cave drawer and stone thrower. When you play Halo as Master Chief, that is not very different from acting or putting a tiger skin on for a religious ritual. There have always been many aspects of identity and people have always occupied many roles simultaneously. Technology changes but it still pushes the same buttons that we evolved hundreds of thousands of years ago.

Will we develop new buttons to push? Will we create any genuinely new facets of ‘I’? I wrote a fair bit about aspects of self when I addressed the related topic of gender, since self perception includes perceptions of how others perceive us and attempts to project chosen identity to survive passing through such filters:

http://timeguide.wordpress.com/2014/02/14/the-future-of-gender-2/

Self is certainly complex. Using ‘I’ simplifies the problem. When you say ‘I’, you are communicating with someone (possibly yourself). The ‘I’ refers to a tailored, context-dependent blend made up of a subset of what you genuinely consider to be you and what you want to project, which may be largely fictional. So in a chat room where people often have never physically met, very often one fictional entity is talking to another fictional entity, with each side only very loosely coupled to reality. I think that is different from caveman days.

Since chat rooms started, virtual identities have come a long way. As well as acting out manufactured characters such as the heroes in computer games, people fabricate their own characters for a broad range of ‘shared spaces’, design personalities and act them out. They may run that personality instance in parallel with many others, possibly dozens at once. Putting on an act is certainly not new, and friends easily detect acts in normal interactions when they have known a real person a long time, but online interactions can mean that the fictional version is presented as the only manifestation of self that the group sees. With no means to know that person by face to face contact, the group has to take them at face value and interact with them as such, though they know that may not represent reality.

These designed personalities may be designed to give away as little as possible of the real person wielding them, and may exist for a range of reasons, but in such a case the person inevitably presents a shallow image. Probing below the surface must inevitably lead to leakage of the real self. New personality content must be continually created and remembered if the fictional entity is to maintain a disconnect from the real person. Holding the in-depth memory necessary to recall full personality aspects and history for numerous personalities and executing them is beyond most people. That means that most characters in shared spaces take on at least some characteristics of their owners.

But back to the point. These fabrications should be considered as part of that person. They are an ‘I’ just as much as any other ‘I’. Only their context is different. Those parts may only be presented to subsets of the role population, but by running them, the person’s brain can’t avoid internalizing the experience of doing so. They may be partly separated but they are fully open to the consciousness of that person. I think that as augmented and virtual reality take off over the next few years, we will see their importance grow enormously. As virtual worlds start to feel more real, so their anchoring and effects in the person’s mind must get stronger.

More than a decade ago, AI software agents started inhabiting chat rooms too, and in some cases these ‘bots’ became a sufficient nuisance that they were banned. The front that they present is shallow but can give an illusion of reality. To some degree, they are an extension of the person or people that wrote their code. In fact, some are deliberately designed to represent a person when they are not present. The experiences that they have can’t be properly internalized by their creators, so they are a very limited extension to self. But how long will that be true? Eventually, with direct brain links and transhuman brain extensions into cyberspace, the combined experiences of I-bots may be fully available to consciousness just the same as first hand experiences.

Then it will get interesting. Some of those bots might be part of multiple people. People’s consciousnesses will start to overlap. People might collect them, or subscribe to them. Much as you might subscribe to my blog, maybe one day, part of one person’s mind, manifested as a bot or directly ‘published’, will become part of your mind. Some people will become absorbed into the experience and adopt so many that their own original personality becomes diluted to the point of disappearance. They will become just an interference pattern of numerous minds. Some will be so infectious that they will spread widely. For many, it will be impossible to die, and for many others, their minds will be spread globally. The hive minds of Dr Who, then later the Borg on Star Trek are conceptual prototypes but as with any sci-fi, they are limited by the imagination of the time they were conceived. By the time they become feasible, we will have moved on and the playground will be far richer than we can imagine yet.

So, ‘I’ has a future just as everything else. We may have just started to add extra facets a couple of decades ago, but the future will see our concept of self evolve far more quickly.

Postscript

I got asked by a reader whether I worry about this stuff. Here is my reply:

It isn’t the technology that worries me so much as the fact that humanity doesn’t really have any fixed anchor to keep human nature in place. Genetics fixed our biological nature, and our values and morality were largely anchored by the main religions. We in the West have thrown our religion in the bin and are already seeing a 30-year cycle in moral judgments which puts our value sets on something of a random walk, with no destination, the current direction governed solely by media interpretation of, and political reaction to, the happenings of the day. Political correctness enforces subscription to that value set even more strictly than any bishop ever forced religious compliance. Anyone who thinks religion has gone away just because people don’t believe in God any more is blind.

Then as genetics technology truly kicks in, we will be able to modify some aspects of our nature. Who knows whether some future busybody will decree that a particular trait must be filtered out because it doesn’t fit his or her particular value set? Throwing AI into the mix as a new intelligence alongside ours will introduce another degree of freedom. So there are already several forces acting on us in pretty randomized directions that can combine to drag us quickly anywhere. Then add the stuff above that allows us to share and swap personality. Sure I worry about it. We are like young kids being handed a big chemistry set for Christmas without the instructions, not knowing that adding the blue stuff to the yellow stuff and setting it alight will go bang.

I am certainly no technotopian. I see the enormous potential that the tech can bring and it could be wonderful and I can’t help but be excited by it. But to get that you need to make the right decisions, and when I look at the sorts of leaders we elect and the sorts of decisions that are made, I can’t find the confidence that we will make the right ones.

On the good side, engineers and scientists are usually smart and can see most of the issues and prevent most of the big errors by using common industry standards, so there is a parallel self-regulatory system in place that politicians rarely have any interest in. On the other side, those smart guys will unfortunately usually follow the same value sets as the rest of the population. So we’re quite likely to avoid major accidents and blowing ourselves up or being taken over by AIs. But we’re unlikely to avoid the random walk values problem and that will be our downfall.

So it could be worse, but it could be a whole lot better too.


The future of death

This one is a cut and paste from my book You Tomorrow.

Although age-related decline can be postponed significantly, it will eventually come. But that is just biological decline. In a few decades, people will have their brains linked to the machine world and much of their mind will be online, and that opens up the strong likelihood that death is not inevitable, and in fact anyone who expects to live past 2070 biologically (and rich people who can get past 2050) shouldn’t need to face death of their mind. Their bodies will eventually die, but their minds can live on, and an android body will replace the biological one they’ve lost.

Death used to be one of the great certainties of life, along with taxes. But unless someone under 35 now is unfortunate enough to die early from accident or disease, they have a good chance of not dying at all. Let’s explore that.

Genetics and other biotechnology will work with advanced materials technology and nanotechnology to limit and even undo damage caused by disease and age, keeping us young for longer, eventually perhaps forever. It remains to be seen how far we get with that vision in the next century, but we can certainly expect some progress in that area. We won’t get biological immortality for a good while, but if you can move into a high quality android body, who cares?

With this combination of technologies locked together with IT in a positive feedback loop, we will certainly eventually develop the technology to enable a direct link between the human brain and the machine, i.e. the descendants of today’s computers. On the computer side, neural networks are already the routine approach to many problems and are based on many of the same principles that neurons in the brain use. As this field develops, we will be able to make a good emulation of biological neurons. As it develops further, it ought to be possible on a sufficiently sophisticated computer to make a full emulation of a whole brain. Progress is already happening in this direction.

Meanwhile, on the human side, nanotechnology and biotechnology will also converge so that we will have the capability to link synthetic technology directly to individual neurons in the brain. We don’t know for certain that this is possible, but it may be possible to measure the behaviour of each individual neuron using this technology and to signal this behaviour to the brain emulation running in the computer, which could then emulate it. Other sensors could similarly measure and allow emulation of the many chemical signalling mechanisms that are used in the brain. The computer could thus produce an almost perfect electronic equivalent of the person’s brain, neuron by neuron. This gives us two things.

Firstly, by doing this, we would have a ‘backup’ copy of the person’s brain, so that in principle, they can carry on thinking, and effectively living, long after their biological body and brain has died. At this point we could claim effective immortality. Secondly, we have a two way link between the brain and the computer which allows thought to be executed on either platform and to be signalled between them.

There is an important difference between the brain and computer already that we may be able to capitalise on. In the brain’s neurons, signals travel at hundreds of metres per second. In a free space optical connection, they travel at hundreds of millions of metres per second, millions of times faster. Switching speeds are similarly faster in electronics. In the brain, cells are also very large compared to the electronic components of the future, so we may be able to reduce the distances over which the signals have to travel by another factor of 100 or more. But this assumes we take an almost exact representation of brain layout. We might be able to do much better than this. In the brain, we don’t appear to use all the neurons, (some are either redundant or have an unknown purpose) and those that we do use in a particular process are often in groups that are far apart. Reconfigurable hardware will be the norm in the 21st century and we may be able to optimize the structure for each type of thought process. Rearranging the useful neurons into more optimal structures should give another huge gain.
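The round numbers above multiply out as follows. This is only a sanity check using the figures in the text (hundreds of metres per second for neural signals, free-space optics at light speed, a factor of 100 from shorter distances), not a prediction:

```python
# Back-of-envelope arithmetic behind the speed-up claim.
# The figures are the rough ones used in the text, not measured values.

neural_speed = 200.0     # m/s: signal speed along neurons, order of magnitude
optical_speed = 3.0e8    # m/s: free-space optical connection

speed_gain = optical_speed / neural_speed   # ~1.5 million times faster
distance_gain = 100                         # shorter signal paths in electronics
total_gain = speed_gain * distance_gain     # ~150 million from these two alone

print(f"signal speed gain: {speed_gain:.1e}")
print(f"combined gain:     {total_gain:.1e}")
```

Add faster switching and restructured layouts on top of these two factors and the 'billions of times faster' figure in the next paragraph is at least the right order of magnitude for the argument being made.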

This means that our electronic emulation of the brain should behave in a similar way but much faster – maybe billions of times faster! It may be able to process an entire lifetime’s thoughts in a second or two. And even then, there are several opportunities for vast improvement. The brain is limited in size by a variety of biological constraints. Even if there were more space available, it could not be made much more efficient by making it larger, because the need for cooling, energy and oxygen supply would take up ever more space and make the distances between processors larger. In the computer, these constraints are much more easily addressable, so we could add large numbers of additional neurons to give more intelligence. In the brain, many learning processes stop soon after birth or in childhood. There need be no such constraints in computer emulations, so we could learn new skills as easily as in our infancy. And best of all, the computer is not limited by the memory of a single brain – it has access to all the world’s information and knowledge, and huge amounts of processing outside the brain emulation. Our electronic brain could be literally the size of the planet – the whole internet and all the processing and storage connected to it.

With all these advances, the computer emulation of the brain could be many orders of magnitude superior to its organic equivalent, and yet it might be connected in real time to the original. We would have an effective brain extension in cyberspace, one that gives us immeasurably improved performance and intelligence. Most of our thoughts might happen in the machine world, and because of the direct link, we might experience them as if they had occurred inside our head.

Our brains are in some ways equivalent in nature to how computers were before the age of the internet. They are certainly useful, but communication between them is slow and inefficient. However, when our brains are directly connected to machines, and those machines are networked, then everyone else’s brains are also part of that network, so we have a global network of people’s brains, all connected together, with all the computers too.

So we may soon eradicate death. By the time today’s children are due to die, they will have been using brain extensions for many years, and backups will be taken for granted. Death need not be traumatic for our relatives. They will soon get used to us walking around in an android body. Funerals will be much more fun as the key participant makes a speech about what they are expecting from their new life. Biological death might still be unpleasant, but it need no longer be a career barrier.

In terms of timescales, rich people might have this capability by 2050 and the rest of us some time before 2070. Your life expectancy biologically is increasing every year, so even if you are over 35, you have a pretty good chance of surviving long enough to gain. Half the people alive today are under 35 and will almost certainly not die fully. Many more are under 50 and some of them will live on electronically too. If you are over 50, the chances are that you will be the last generation of your family ever to have a full death.

As a side-note, there are more conventional ways of achieving immortality. Some Egyptian pharaohs are remembered because of their great pyramids. A few philosophers, artists, engineers and scientists have left such great works that they are remembered millennia later. And of course, on a small scale, for the rest of us, making an impression on those around us keeps your memory going a few generations. Writing a book immortalises your words. And you may have a multimedia headstone on your grave, or one that at least links into augmented reality to bring up your old web page or social networking site profile. But frankly, I am with Woody Allen on this one: “I don’t want to achieve immortality through my work; I want to achieve immortality through not dying”. I just hope the technology arrives early enough.

The future of creativity

Another future of… blog.

I can play simple tunes on a guitar or keyboard. I compose music, mostly just bashing out some random sequences till a decent one happens. Although I can’t offer any Mozart-level creations just yet, doing that makes me happy. Electronic keyboards raise an interesting point for creativity. All I am actually doing is pressing keys, I don’t make sounds in the same way as when I pick at guitar strings. A few chips monitor the keys, noting which ones I hit and how fast, then producing and sending appropriate signals to the speakers.

The point is that I still think of it as my music, even though all I am doing is telling a microprocessor what to do on my behalf. One day, I will be able to hum a few notes or tap a rhythm with my fingers to give the computer some idea of a theme, and it will produce beautiful works based on my idea. It will still be my music, even when 99.9% of the ‘creativity’ is done by an AI. We will still think of the machines and software just as tools, and we will still think of the music as ours.

The other arts will be similarly affected. Computers will help us build on the merest hint of human creativity, enhancing our work and enabling us to do much greater things than we could achieve by our raw ability alone. I can’t paint or draw for toffee, but I do have imagination. One day I will be able to produce good paintings, design and make my own furniture, design and make my own clothes. I could start with a few downloads in the right ballpark. The computer will help me to build on those and produce new ones along divergent lines. I will be able to guide it with verbal instructions. ‘A few more trees on the hill, and a cedar in the foreground just here, a bit bigger, and move it to the left a bit’. Why buy a mass produced design when you can have a completely personal design?

These advances are unlikely to make a big dent in conventional art sales. Professional artists will always retain an edge, maybe even by producing the best seeds for computer creativity. Instead, computer assisted and computer enhanced art will make our lives more artistically enriched, and ourselves more fulfilled as a result. We will be able to express our own personalities more effectively in our everyday environment, instead of just decorating it with a few expressions of someone else’s.

However, one factor that seems to be overrated is originality. Anyone can immediately come up with many original ideas in seconds. Stick a safety pin in an orange and tie a red string through the loop. There, can I have my Turner prize now? There is an infinitely large field to pick from and only a small number have ever been realized, so coming up with something from the infinite set that still hasn’t been thought of is easy and therefore of little intrinsic value. Ideas are ten a penny. It is only when an idea is combined with judgement or skill in making it real that it becomes valuable. Here again, computers will be able to assist. Analyzing a great many existing pictures or other works of art should give some clues as to what most people like and dislike. IBM’s new neural chip is the sort of development that will accelerate this trend enormously. Machines will learn how to decide whether a picture is likely to be attractive to people or not. It should be possible for a computer to automatically create new pictures in a particular style or taste by either recombining appropriate ideas, or just randomly mixing any ideas together and then filtering the new pictures according to ‘taste’.
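The recombine-and-filter approach can be sketched in a few lines. The ‘taste’ function here is a crude hypothetical stand-in for a trained model, and the palette of elements is invented purely for illustration:

```python
# Minimal generate-and-filter sketch: produce many random recombinations,
# score each with a learned 'taste' function, keep the best.
# A real system would replace taste_score() with a trained network.
import random

PALETTE = ["trees", "hills", "cedar", "river", "safety pin", "orange"]

def generate():
    # Randomly recombine existing elements into a candidate 'picture'.
    return random.sample(PALETTE, k=3)

def taste_score(picture):
    # Hypothetical stand-in for a model of what people like:
    # here it simply prefers landscapes to found-object assemblages.
    landscape = {"trees", "hills", "cedar", "river"}
    return sum(1 for element in picture if element in landscape)

candidates = [generate() for _ in range(1000)]
best = max(candidates, key=taste_score)
print(best, taste_score(best))
```

The point of the sketch is that generation is cheap and indiscriminate; all of the value lives in the filter, which is exactly where the machine learning goes.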

Augmented reality and other branches of cyberspace offer greater flexibility. Virtual objects and environments do not have to conform to laws of physics, so more elaborate and artistic structures are possible. Adding in 3D printing extends virtual graphics into the physical domain, but physics will only apply to the physical bits, and with future display technology, you might not easily be able to see where the physical stops and the virtual begins.

So, with machine assistance, human creativity will no longer be as limited by personal skill and talent. Anyone with a spark of creativity will be able to achieve great works, thanks to machine assistance. So long as you aren’t competitive about it (someone else will always be able to do it better than you), your world will feel nicer, more friendly and personal, you’ll feel more in control and empowered, and your quality of life will improve. Instead of just making do with what you can buy, you’ll be able to decide what your world looks, sounds, feels, tastes and smells like, and design personality into anything you want too.

The future of bacteria

Bacteria have already taken the prize for the first synthetic organism. Craig Venter’s team claimed the first synthetic bacterium in 2010.

Bacteria are being genetically modified for a range of roles, such as converting materials for easier extraction (e.g. coal to gas, or concentrating elements in landfill sites to make extraction easier), making new food sources (alongside algae), carbon fixation, pollutant detection and other sensory roles, decorative, clothing or cosmetic roles based on color changing, special surface treatments, biodegradable construction or packing materials, self-organizing printing… There are many others, even ignoring all the military ones.

I have written many times on smart yogurt now and it has to be the highlight of the bacterial future, one of the greatest hopes as well as potential danger to human survival. Here is an extract from a previous blog:

Progress is continuing to harness bacteria to make components of electronic circuits (after which the bacteria are dissolved to leave the electronics). Bacteria can also have genes added to emit light or electrical signals. They could later be enhanced so that as well as being able to fabricate electronic components, they could power them too. We might add various other features too, but eventually, we’re likely to end up with bacteria that contain electronics and can connect to other bacteria nearby that contain other electronics to make sophisticated circuits. We could obviously harness self-assembly and self-organisation, which are also progressing nicely. The result is that we will get smart bacteria, collectively making sophisticated, intelligent, conscious entities of a wide variety, with lots of sensory capability distributed over a wide range. Bacteria Sapiens.

I often talk about smart yogurt using such an approach as a key future computing solution. If it were to stay in a yogurt pot, it would be easy to control. But it won’t. A collective bacterial intelligence such as this could gain a global presence, and could exist on land, at sea and in the air, maybe even in space. Allowing lots of different biological properties could allow colonization of every niche. In fact, the first few generations of bacteria sapiens might be smart enough to design their own offspring. They could probably buy or gain access to equipment to fabricate them and release them to multiply. It might be impossible for humans to stop this once it gets to a certain point. Accidents happen, as do rogue regimes, terrorism and general mad-scientist type mischief.

Transhumanists seem to think their goal is the default path for humanity, that transhumanism is inevitable. Well, it can’t easily happen without going first through transbacteria research stages, and that implies that we might well have to ask transbacteria for their consent before we can develop true transhumans.

Self-organizing printing is a likely future enhancement for 3D printing. If a 3D printer can print bacteria (onto the surface of another material being laid down, as an ingredient in a suspension used as the extrusion material itself, or even as a bacterial paste), and the bacteria can then generate or modify other materials, or use self-organisation principles to form special structures or patterns, then the range of objects that can be printed will extend. In some cases, the bacteria may be involved in the construction and then die or be dissolved away.

Estimating IoT value? Count ALL the beans!

In this morning’s news:

http://www.telegraph.co.uk/technology/news/11043549/UK-funds-development-of-world-wide-web-for-machines.html

£1.6M investment by UK Technology Strategy Board in Internet-of-Things HyperCat standard, which the article says will add £100Bn to the UK economy by 2020.

Gartner says that IoT has reached the hype peak of their adoption curve and I agree. Connecting machines together, and especially adding networked sensors, will certainly increase technology capability across many areas of our lives, but the appeal is often overstated and the dangers often overlooked. Value should not be measured in purely financial terms either. If you value health, wealth and happiness, don’t just measure the wealth. We value other things too of course. It is too tempting just to count the most conspicuous beans. For IoT, which really just adds a layer of extra functionality onto an already technology-rich environment, that is rather like estimating the value of a chili con carne by counting the kidney beans in it.

The headline negatives of privacy and security have often been addressed so I don’t need to explore them much more here, but let’s look at a couple of typical examples from the news article. Allowing remotely controlled washing machines will obviously impact on your personal choice on laundry scheduling. The many similar shifts of control of your life to other agencies will all add up. Another one: ‘motorists could benefit from cheaper insurance if their vehicles were constantly transmitting positioning data’. Really? Insurance companies won’t want to earn less, so motorists on average will give them at least as much profit as before. What will happen is that insurance companies will enforce driving styles and car maintenance regimes that reduce your likelihood of a claim, or use that data to avoid paying out in some cases. If you have to rigidly obey lots of rules all of the time then driving will become far less enjoyable. Having to remember to check the tyre pressures and oil level every two weeks on pain of having your insurance voided is not one of the beans listed in the article, but it is entirely analogous to the typical home insurance rule that all your windows must have locks, and they must all be locked and the keys hidden out of sight, before they will pay up on a burglary.

Overall, IoT will add functionality, but it certainly will not always be used to improve our lives. Look at the way the web developed. Think about the cookies and the pop-ups and the tracking and the incessant virus protection updates needed because of the extra functions built into browsers. You didn’t want those; they were added to increase capability and revenue for the paying site owners, not for the non-paying browsers. IoT will be the same. Some things will make minor aspects of your life easier, but the price of that will be that you are far more controlled, with far less freedom, less privacy and less security. Most of the data collected for business use or to enhance your life will also be available to government and police. We see every day the nonsense of the statement that if you have done nothing wrong, then you have nothing to fear. If you buy all that home kit with energy monitoring etc, how long before the data is hacked and you get put on militant environmentalist blacklists because you leave devices on standby? For every area where IoT will save you time or money or improve your control, there will be many others where it does the opposite, forcing you to do more security checks, spend more money on car and home and IoT maintenance, spend more time following administrative procedures and even follow health regimes enforced by government or insurance companies. IoT promises milk and honey, but will deliver it only as part of a much bigger and unwelcome lifestyle change. Sure you can have a little more control, but only if you relinquish much more control elsewhere.

As IoT starts rolling out, these and many more issues will hit the press, and people will start to realise the downside. That will reduce the attractiveness of owning or installing such stuff, or subscribing to services that use it. There will be a very significant drop in the economic value from the hype. Yes, we could do it all and get the headline economic benefit, but the cost of greatly reduced quality of life is too high, so we won’t.

Counting the kidney beans in your chili is fine, but it won’t tell you how hot it is, and when you start eating it you may decide the beans just aren’t worth the pain.

I still agree that IoT can be a good thing, but the evidence of web implementation suggests we’re more likely to go through decades of abuse and grief before we get the promised benefits. Being honest at the outset about the true costs and lifestyle trade-offs will help people decide, and maybe we can get to the good times faster if that process leads to better controls and better implementation.

Ultra-simple computing: Part 2

Chip technology

My everyday PC uses an Intel Core-I7 3770 processor running at 3.4GHz. It has 4 cores running 8 threads on 1.4 billion 22nm transistors on just 160mm^2 of chip. It has an NVIDIA GeForce GTX660 graphics card, and has 16GB of main memory. It is OK most of the time, but although the processor and memory utilisation rarely gets above 30%, its response is often far from instant.

Let me compare it briefly with my (subjectively at time of ownership) best ever computer, my Macintosh 2Fx, RIP, which I got in 1991, the computer on which I first documented both the active contact lens and text messaging and on which I suppose I also started this project. The Mac 2Fx ran a 68030 processor at 40MHz, with 273,000 transistors and 4MB of RAM, and an 80MB hard drive. Every computer I’ve used since then has given me extra function at the expense of lower performance, wasted time and frustration.

Although its OS is stored on a 128GB solid state disk, my current PC takes several seconds longer to boot than my Macintosh Fx did – it went from cold to fully operational in 14 seconds – yes, I timed it. On my PC today, clicking a browser icon to first page usually takes a few seconds. Clicking on a word document back then took a couple of seconds to open. It still does now. Both computers gave real time response to typing and both featured occasional unexplained delays. I didn’t have any need for a firewall or virus checkers back then, but now I run tedious maintenance routines a few times every week. (The only virus I had before 2000 was nVir, which came on the Mac2 system disks). I still don’t get many viruses, but the significant time I spend avoiding them has to be counted too.

Going back further still, to my first ever computer in 1981, it was an Apple 2, and only had 9000 transistors running at 2.5MHz, with a piddling 32kB of memory. The OS was tiny. Nevertheless, on it I wrote my own spreadsheet, graphics programs, lens design programs, and an assortment of missile, aerodynamic and electromagnetic simulations. Using the same transistors as the I7, you could make 1000 of these in a single square millimetre!

Of course some things are better now. My PC has amazing graphics and image processing capabilities, though I rarely make full use of them. My PC allows me to browse the net (and see video ads). If I don’t mind telling Google who I am I can also watch videos on YouTube, or I could tell the BBC or some other video provider who I am and watch theirs. I could theoretically play quite sophisticated computer games, but it is my work machine, so I don’t. I do use it as a music player or to show photos. But mostly, I use it to write, just like my Apple 2 and my Mac Fx. Subjectively, it is about the same speed for those tasks. Graphics and video are the main things that differ.

I’m not suggesting going back to an Apple 2 or even an Fx. However, using I7 chip tech, a 9000 transistor processor running 1360 times faster and taking up 1/1000th of a square millimetre would still let me write documents and simulations, but would be blazingly fast compared to my old Apple 2. I could fit another 150,000 of them on the same chip space as the I7. Or I could have 5128 Mac Fxs running at 85 times normal speed. Or you could have something like a Mac FX running 85 times faster than the original for a tiny fraction of the price. There are certainly a few promising trees in the forest that nobody seems to have barked up. As an interesting aside, that 22nm tech Apple 2 chip would only be ten times bigger than a skin cell, probably less now, since my PC is already several months old

At the very least, that really begs the question what all this extra processing is needed for and why there is still ever any noticeable delay doing anything in spite of it. Each of those earlier machines was perfectly adequate for everyday tasks such as typing or spreadsheeting. All the extra speed has an impact only on some things and most is being wasted by poor code. Some of the delays we had 20 and 30 years ago still affect us just as badly today.

The main point though is that if you can make thousands of processors on a standard sized chip, you don’t have to run multitasking. Each task could have a processor all to itself.

The operating system currently runs programs to check all the processes that need attention, determine their priorities, schedule processing for them, and copy their data in and out of memory. That is not needed if each process can have its own dedicated processor and memory all the time. There are lots of ways of using basic physics to allocate processes to processors, relying on basic statistics to ensure that collisions rarely occur. No code is needed at all.

An ultra-simple computer could therefore have a large pool of powerful, free processors, each with their own memory, allocated on demand using simple physical processes. (I will describe a few options for the basic physics processes later). With no competition for memory or processing, a lot of delays would be eliminated too.

Ultra-simple computing: Part 1

Introduction

This is first part of a techie series. If you aren’t interested in computing, move along, nothing here. It is a big topic so I will cover it in several manageable parts.

Like many people, I spent a good few hours changing passwords after the Heartbleed problem and then again after ebay’s screw-up. It is a futile task in some ways because passwords are no longer a secure defense anyway. A decent hacker with a decent computer can crack hundreds of passwords in an hour, so unless an account is locked after a few failed attempts, and many aren’t, passwords only manage to keep out casual observers and the most amateurish hackers.

The need for simplicity

A lot of problems are caused by the complexity of today’s software, making it impossible to find every error and hole. Weaknesses have been added to operating systems, office automation tools and browsers to increase functionality for only a few users, even though they add little to most of us most of the time. I don’t think I have ever executed a macro in Microsoft office for example and I’ve certainly never used print merge or many its other publishing and formatting features. I was perfectly happy with Word 93 and most things added since then (apart from the real time spelling and grammar checker) have added irrelevant and worthless features at the expense of safety. I can see very little user advantage of allowing pop-ups on web sites, or tracking cookies. Their primary purpose is to learn about us to make marketing more precise. I can see why they want that, but I can’t see why I should. Users generally want pull marketing, not push, and pull doesn’t need cookies, there are better ways of sending your standard data when needed if that’s what you want to do. There are many better ways of automating logons to regular sites if that is needed.

In a world where more of the people who wish us harm are online it is time to design an alternative platform which it is designed specifically to be secure from the start and no features are added that allow remote access or control without deliberate explicit permission. It can be done. A machine with a strictly limited set of commands and access can be made secure and can even be networked safely. We may have to sacrifice a few bells and whistles, but I don’t think we will need to sacrifice many that we actually want or need. It may be less easy to track us and advertise at us or to offer remote machine analysis tools, but I can live with that and you can too. Almost all the services we genuinely want can still be provided. You could still browse the net, still buy stuff, still play games with others, and socialize. But you wouldn’t be able to install or run code on someone else’s machine without their explicit knowledge. Every time you turn the machine on, it would be squeaky clean. That’s already a security benefit.

I call it ultra-simple computing. It is based on the principle that simplicity and a limited command set makes it easy to understand and easy to secure. That basic physics and logic is more reliable than severely bloated code. That enough is enough, and more than that is too much.

We’ve been barking up the wrong trees

There are a few things you take for granted in your IT that needn’t be so.

Your PC has an extremely large operating system. So does your tablet, your phone, games console… That isn’t really necessary. It wasn’t always the case and it doesn’t have to be the case tomorrow.

Your operating system still assumes that your PC has only a few processing cores and has to allocate priorities and run-time on those cores for each process. That isn’t necessary.

Although you probably use some software in the cloud, you probably also download a lot of software off the net or install from a CD or DVD. That isn’t necessary.

You access the net via an ISP. That isn’t necessary. Almost unavoidable at present, but only due to bad group-think. Really, it isn’t necessary.

You store data and executable code in the same memory and therefore have to run analysis tools that check all the data in case some is executable. That isn’t necessary.

You run virus checkers and firewalls to prevent unauthorized code execution or remote access. That isn’t necessary.

Overall, we live with an IT system that is severely unfit for purpose. It is dangerous, bloated, inefficient, excessively resource and energy intensive, extremely fragile and yet vulnerable to attack via many routes, designed with the user as a lower priority than suppliers, with the philosophy of functionality at any price. The good news is that it can be replaced by one that is absolutely fit for purpose, secure, invulnerable, cheap and reliable, resource-efficient, and works just fine. Even better, it could be extremely cheap so you could have both and live as risky an online life in those areas that don’t really matter, knowing you have a safe platform to fall back on when your risky system fails or when you want to do anything that involves your money or private data.

Switching people off

A very interesting development has been reported in the discovery of how consciousness works, where neuroscientists stimulating a particular brain region were able to switch a woman’s state of awareness on and off. They said: “We describe a region in the human brain where electrical stimulation reproducibly disrupted consciousness…”

http://www.newscientist.com/article/mg22329762.700-consciousness-onoff-switch-discovered-deep-in-brain.html.

The region of the brain concerned was the claustrum, and apparently nobody had tried stimulating it before, although Francis Crick and Christof Koch had suggested the region would likely be important in achieving consciousness. Apparently, the woman involved in this discovery was also missing some of her hippocampus, and that may be a key factor, but they don’t know for sure yet.

Mohamed Koubeissi and his the team at the George Washington university in Washington DC were investigating her epilepsy and stimulated her claustrum area with high frequency electrical impulses. When they did so, the woman lost consciousness, no longer responding to any audio or visual stimuli, just staring blankly into space. They verified that she was not having any epileptic activity signs at the time, and repeated the experiment with similar results over two days.

The team urges caution and recommends not jumping to too many conclusions. They did observe the obvious potential advantages as an anesthesia substitute if it can be made generally usable.

As a futurologist, it is my job to look as far down the road as I can see, and imagine as much as I can. Then I filter out all the stuff that is nonsensical, or doesn’t have a decent potential social or business case or as in this case, where research teams suggest that it is too early to draw conclusions. I make exceptions where it seems that researchers are being over-cautious or covering their asses or being PC or unimaginative, but I have no evidence of that in this case. However, the other good case for making exceptions is where it is good fun to jump to conclusions. Anyway, it is Saturday, I’m off work, so in the great words of Dr Emmett Brown in ‘Back to the future':  “Well, I figured, what the hell.”

OK, IF it works for everyone without removing parts of the brain, what will we do with it and how?

First, it is reasonable to assume that we can produce electrical stimulation at specific points in the brain by using external kit. Trans-cranial magnetic stimulation might work, or perhaps implants may be possible using injection of tiny particles that migrate to the right place rather than needing significant surgery. Failing those, a tiny implant or two via a fine needle into the right place ought to do the trick. Powering via induction should work. So we will be able to produce the stimulation, once the sucker victim subject has the device implanted.

I guess that could happen voluntarily, or via a court ordered protective device, as a condition of employment or immigration, or conditional release from prison, or a supervision order, or as a violent act or in war.

Imagine if government demands a legal right to access it, for security purposes and to ensure your comfort and safety, of course.

If you think 1984 has already gone too far, imagine a government or police officer that can switch you off if you are saying or thinking the wrong thing. Automated censorship devices could ensure that nobody discusses prohibited topics.

Imagine if people on the street were routinely switched off as a VIP passes to avoid any trouble for them.

Imagine a future carbon-reduction law where people are immobilized for an hour or two each day during certain periods. There might be a quota for how long you are allowed to be conscious each week to limit your environmental footprint.

In war, captives could have devices implanted to make them easy to control, simply turned off for packing and transport to a prison camp. A perimeter fence could be replaced by a line in the sand. If a prisoner tries to cross it, they are rendered unconscious automatically and put back where they belong.

Imagine a higher class of mugger that doesn’t like violence much and prefers to switch victims off before stealing their valuables.

Imagine being able to switch off for a few hours to pass the time on a long haul flight. Airlines could give discounts to passengers willing to be disabled and therefore less demanding of attention.

Imagine  a couple or a group of friends, or a fetish club, where people can turn each other off at will. Once off, other people can do anything they please with them – use them as dolls, as living statues or as mannequins, posing them, dressing them up. This is not an adult blog so just use your imagination – it’s pretty obvious what people will do and what sorts of clubs will emerge if an off-switch is feasible, making people into temporary toys.

Imagine if you got an illegal hacking app and could freeze the other people in your vicinity. What would you do?

Imagine if your off-switch is networked and someone else has a remote control or hacks into it.

Imagine if an AI manages to get control of such a system.

Having an off-switch installed could open a new world of fun, but it could also open up a whole new world for control by the authorities, crime control, censorship or abuse by terrorists and thieves and even pranksters.

 

 

Google is wrong. We don’t all want gadgets that predict our needs.

In the early 1990s, lots of people started talking about future tech that would work out what we want and make it happen. A whole batch of new ideas came out – internet fridges, smart waste-baskets, the ability to control your air conditioning from the office or open and close curtains when you’re away on holiday. 25 years on almost and we still see just a trickle of prototypes, followed by a tsunami of apathy from the customer base.

Do you want an internet fridge, that orders milk when you’re running out, or speaks to you all the time telling you what you’re short of, or sends messages to your phone when you are shopping? I certainly don’t. It would be extremely irritating. It would crash frequently. If I forget to clean the sensors it won’t work. If I don’t regularly update the software, and update the security, and get it serviced, it won’t work. It will ask me for passwords. If my smart loo notices I’m putting on weight, the fridge will refuse to open, and tell the microwave and cooker too so that they won’t cook my lunch. It will tell my credit card not to let me buy chocolate bars or ice cream. It will be a week before kitchen rage sets in and I take a hammer to it. The smart waste bin will also be covered in tomato sauce from bean cans held in a hundred orientations until the sensor finally recognizes the scrap of bar-code that hasn’t been ripped off. Trust me, we looked at all this decades ago and found the whole idea wanting. A few show-off early adopters want it to show how cool and trendy they are, then they’ll turn it off when no-one is watching.

EDIT: example of security risks from smart devices (this one has since been fixed) http://www.bbc.co.uk/news/technology-28208905

If I am with my best friend, who has known me for 30 years, or my wife, who also knows me quite well, they ask me what I want, they discuss options with me. They don’t think they know best and just decide things. If they did, they’d soon get moaned at. If I don’t want my wife or my best friend to assume they know what I want best, why would I want gadgets to do that?

The first thing I did after checking out my smart TV was to disconnect it from the network so that it won’t upload anything and won’t get hacked or infected with viruses. Lots of people have complained about new adverts on TV that control their new xBoxes via the Kinect voice recognition. The ‘smart’ TV receiver might be switched off as that happens. I am already sick of things turning themselves off without my consent because they think they know what I want.

They don’t know what is best. They don’t know what I want. Google doesn’t either. Their many ideas about giving lots of information it thinks I want while I am out are also things I will not welcome. Is the future of UI gadgets that predict your needs, as Wired says Google thinks? No, it isn’t. What I want is a really intuitive interface so I can ask for what I want, when I want it. The very last thing I want is an idiot device thinking it knows better than I do.

We are not there yet. We are nowhere near there yet. Until we are, let me make my own decisions. PLEASE!