Category Archives: consciousness

The rise of Dr Furlough, Evil Super-Villain

Too early for an April Fool blog, but hopefully this might lighten your day a bit.

I had the enormous pleasure this morning of interviewing the up-and-coming Super-Villain Dr Furlough about her new plans to destroy the world after being scorned by the UK Government’s highly selective support policy. It seems that Hell has no fury like a Super-Villain scorned and Dr Furlough leaves no doubt that she blames incompetent government response for the magnitude of the current crisis:


Dr Furlough, Super-Villain

“By late January, it should have been obvious to everyone that this would quickly grow to become a major problem unless immediate action was taken to prevent people bringing the virus into the country. Flights from infected areas should have been stopped immediately, anyone who may have been in contact with it should have been forcibly quarantined, and everyone found infected should have had their contacts traced and also quarantined. This would have been disruptive and expensive, but a tiny fraction of the problem we now face.  Not to do so was to give the virus the freedom to spread and infect widely until it became a severe problem. While very few need have died and the economy need not now be trashed, we now face the full enormous cost of that early refusal to act.”

“With all non-essential travel now blocked,” Dr Furlough explained, “many people have had their incomes totally wiped out, not through any fault of their own but by the government’s incompetence in handling the coronavirus.” Although most of them have been promised state support, many haven’t, and have, as Dr Furlough puts it, ‘been thrown under a bus’. “While salaried people who can’t work are given 80% of their wages, and those with their own business will eventually receive 80% of their average earnings up to £2500/month whether they are still working or not, the two million who chose to run their small businesses by setting up limited companies will only qualify for 80% of the often small fraction of income they pay themselves as basic salary, and not on the bulk of their income most take via dividends once their yearly profits are clearer. Consequently, many will have immediately dropped from comfortable incomes to 80% of minimum wage. Many others who have already lost their jobs have been thrown onto universal credit. The future high taxes will have to be paid by everyone, whether they received support or were abandoned. Instead of treating everyone equally, the state has thus created a seething mass of deep resentment.” Dr Furlough seems determined to have her evil revenge.


With her previous income obliterated, and scorned by the state support system, the ever self-reliant Dr Furlough decided to “screw the state” and forge a new career as a James-Bond-style Super-Villain, and she complained that it was long overdue for a female Super-Villain to take that role anyway. I asked her about her evil plans and, like all traditional Super-Villains, she was all too eager to tell. So, to quote her verbatim:

“My Super-Evil Plan 1: Tap in to the global climate alarmist market to crowd-fund GM creation of a super-virus, based on COVID19. More contagious, more lethal, and generally more evil. This will reduce world population, reduce CO2 emissions and improve the environment. It will crash the global economy and make them all pay. As a bonus, it will ensure the rise of evil regimes where I can prosper.”

She continued: “My Evil Super-Plan 2: To invent a whole pile of super-weapons and sell the designs to all the nasty regimes, dictators, XR and other assorted doomsday cults, pressure groups, religious nutters and mad-scientists. Then to sell ongoing evil consultancy services while deferring VAT payments.”


“Muhuahuahua!” She cackled, evilly.

“My Super-Plan 3: To link AI and bacteria to make adaptive super-diseases. Each bacterium can be genetically enhanced to include bioluminescent photonic interconnects linked to cloud AI with reciprocal optogenetic niche adaptation. With bacteria clouds acting as distributed sensor nets for an emergent conscious transbacteria population, my new bacteria will be able to infect any organism and adapt to any immune system response, ensuring its demise and my glorious revenge.”


By now, Dr Furlough was clearly losing it. Having heard enough anyway, I asked The Evil Dr Furlough if there was no alternative to destroying the world and life as we know it.

“Well, I suppose I could just live off my savings and sit it all out” she said.

 

Optical computing

A few nights ago I was thinking about the optical fibre memories that we were designing in the late 1980s in BT. The idea was simple. You transmit data into an optical fibre, and if the data rate is high you can squeeze lots of data into a manageable length. Light in fibre takes about 5 microseconds to travel each km, so 1000km of fibre at a data rate of 2Gb/s would hold 10Mbits of data per wavelength, and if you could multiplex 2 million wavelengths, you’d store 20Tbits of data. You could maintain the data by using a repeater to re-transmit the data arriving at one end back into the other, or modify it at that point simply by changing what you re-transmit. That was all theory then, because the latest ‘hero’ experiments were only just starting to demonstrate the feasibility of such long lengths, such high-density WDM and such data rates.
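The arithmetic is easy to check. A few lines of Python reproduce the figures above (the 2 million wavelengths were always the speculative part):

```python
# Back-of-envelope check of the figures above: ~5 microseconds of delay per km
# of fibre, 2 Gb/s per wavelength, 1000 km of fibre, and the (speculative)
# 2 million WDM channels.
delay_per_km = 5e-6            # seconds of propagation delay per km
fibre_length_km = 1000
bit_rate = 2e9                 # bits/second per wavelength
wavelengths = 2_000_000

loop_delay = delay_per_km * fibre_length_km     # 5 ms end to end
bits_per_wavelength = bit_rate * loop_delay     # bits 'in flight' per channel
total_bits = bits_per_wavelength * wavelengths

print(f"{bits_per_wavelength / 1e6:.0f} Mbit per wavelength")   # 10 Mbit
print(f"{total_bits / 1e12:.0f} Tbit in total")                 # 20 Tbit
```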

Nowadays, that’s ancient history of course, but we also have many new types of fibre, such as hollow fibre with various shaped pores and various dopings to allow a range of effects. And that’s where using it for computing comes in.

If optical fibre is designed for this purpose, with an optimal variable refractive index designed to facilitate and maximise non-linear effects, then the photons in one data stream on one wavelength could have enough effect on photons in another stream to be used for computational interaction. Computers don’t have to be digital of course, so the effects don’t have to be huge. Analog computing has many uses, and analog interactions could certainly work, digital ones might work, and hybrid digital/analog computing may also be feasible. Then it gets fun!

Some of the data streams could be programs. Around that time, I was designing protocols with smart packets that contained executable code, as well as other packets that could hold analog or digital data or any mix. We later called the smart packets ANTs – autonomous network telephers, a contrived term if ever there was one, but we badly wanted to call them ants. They would scurry around the network doing a wide range of jobs, using a range of biomimetic and basic physics techniques to work like ant colonies and achieve complex tasks using simple means.

If some of these smart packets or ANTs are running along a fibre, changing the properties as they go to interact with other data transmitting alongside, then ANTs can interact with one another and with any stored data. ANTs could also move forwards or backwards along the fibre by using ‘sidings’ or physical shortcuts, since they can route themselves or each other. Data produced or changed by the interactions could be digital or analog and still work fine, carried on the smart packet structure.

(If you’re interested, my protocol was called UNICORN, Universal Carrier for an Optical Residential Network, and used the same architectural principles as my previous Addressed Time Slice invention: compressing analog data by a few percent to fit into a packet with a digital address and header, or allowing any digital data rate or structure in a payload while keeping the same header specs for easy routing. That system was invented (in 1988) for the late 1990s, when the basic domestic broadband rate should have been 625Mbit/s or more, though we expected to be at 2Gbit/s or even 20Gbit/s soon after that in the early 2000s, and the benefit was that we wouldn’t have to change the network switching because the header overheads would still only be a few percent of total time. None of that happened because of government interference in telecoms industry regulation that strongly disincentivised its development, and even today, 625Mbit/s ‘basic rate’ access is still a dream, let alone 20Gbit/s.)
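Purely for illustration, here is roughly what such a packet might look like. The field names and sizes below are my own guesses for this sketch, not the actual UNICORN spec:

```python
# A hypothetical sketch of an Addressed-Time-Slice-style packet, based only on
# the description above: a small digital header for routing, plus a payload
# slot that can carry either slightly time-compressed analog samples or
# digital data of any rate or structure. All names and sizes are illustrative
# guesses, not the real UNICORN specification.
from dataclasses import dataclass

@dataclass
class TimeSlicePacket:
    address: int          # digital routing address (the header switches read)
    payload_kind: str     # "analog" or "digital"; payload is opaque to switches
    payload: bytes        # slice contents; analog data is compressed a few
                          # percent in time so header + payload fit one slice

    HEADER_BYTES = 8      # kept small so header overhead stays at a few percent

    def overhead(self) -> float:
        """Fraction of the time slice consumed by the header."""
        return self.HEADER_BYTES / (self.HEADER_BYTES + len(self.payload))

# e.g. a 2000-byte slice keeps header overhead under 0.5%, whatever the payload
pkt = TimeSlicePacket(address=0x2A, payload_kind="digital", payload=bytes(2000))
print(f"header overhead: {pkt.overhead():.2%}")
```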

Such a system would be feasible. Shortcuts and sidings are easy to arrange. The protocols would work fine. Non-linear effects are already well known and diverse. If it were only used for digital computing, it would have little advantage over conventional computers. With data stored on long fibre lengths, external interactions would be limited, with long latency. However, it does present a range of potentials for use with external sensors directly interacting with data streams and ANTs to accomplish some tasks associated with modern AI. It ought to be possible to use these techniques to build the adaptive analog neural networks that we’ve known are the best hope of achieving strong AI since Hans Moravec’s insight, coincidentally also around that time. The non-linear effects even enable ideal mechanisms for implementing emotions, biasing the computation in particular directions via the intensity of certain wavelengths of light, in much the same way as chemical hormones and neurotransmitters interact with our own neurons. Implementing up to 2 million different emotions at once is feasible.

So there’s a whole mine full of architectures, tools and techniques waiting to be explored and worked by smart young minds in the IT industry, using custom non-linear optical fibres for optical AI.

When you’re electronically immortal, will you still own your own mind?

Most of my blogs about immortality have been about the technology mechanism – adding external IT capability to your brain, improving your intelligence or memory or senses by using external IT connected seamlessly to your brain so that it feels exactly the same, until maybe, by around 2050, 99% of your mind is running on external IT rather than in the meat-ware in your head. At no point would you ‘upload’ your mind, avoiding needless debate about whether the uploaded copy is ‘you’. It isn’t uploaded; it simply grows into the new platform seamlessly, and as far as you are concerned, it is very much still you. One day, your body dies and with it your brain stops, but no big problem, because 99% of your mind is still fine, running happily on IT, in the cloud. Assuming you saved enough and prepared well, you connect to an android to use as your body from then on, attend your funeral, and then carry on as before, still you, just with a younger, highly upgraded body. Some people may need to wait until 2060 or later for android prices to fall enough for them to afford one. In principle, you can swap bodies as often as you like, because your mind is resident elsewhere; the android is just a temporary front end, just transport for sensors. You’re sort of immortal, your mind still running just fine, for as long as the servers carry on running it. Not truly immortal, but at least you don’t cease to exist the moment your body stops working.

All very nice… but. There’s a catch.

The android you use would be bought or rented. It doesn’t really matter, because it isn’t actually ‘you’, just a temporary container, a convenient front end and user interface. However, your mind runs on IT, and because of the most likely evolution of the technology and its likely deployment rollout, you probably won’t own that IT; it won’t be your own PC or server, it will probably be part of the cloud, maybe owned by AWS, Google, Facebook, Apple or some future equivalent. You’re probably already seeing the issue. The small print may give them some rights over replication, ownership, licensing of your ideas, who knows what? So although future electronic immortality offers a pretty attractive version of immortality at first glance, closer reading of the 100-page T&Cs may well reveal some nasties. You may in fact no longer own your mind. Oh dear!

Suppose you are really creative, or really funny, or have a fantastic personality. Maybe the cloud company could replicate your mind and make variations to address a wide range of markets. Maybe they can use your mind as the UX on a new range of home-help robots. Each instance of you thinks they were once you, each thinks they are now enslaved to work for free for a tech company.

Maybe your continued existence is paid for as part of an extended company medical plan. Maybe you didn’t notice a small paragraph on page 93 that says your company can continue to use your mind after you’re dead. You are very productive and they make lots of profit from you. They can continue that by continuing to run your mind indefinitely. The main difference is that since you’re dead, and no longer officially on the payroll, they get you for free. You carry on, still thinking you’re you, still working, still doing what you do, but no longer being paid. You’ve become a slave. Again.

Maybe your kids paid to keep you alive because they don’t want to say goodbye. They still want their parent, so you carry on living just so they don’t feel alone. Doesn’t sound so bad maybe, but what package did they go for? The full deluxe super-expensive version that lets you do all sorts of expensive stuff and use up oodles of processing power and storage and android rental? Let’s face it, that’s what you always thought this electronic immortality meant. Or did they go for a cheaper one? After all, you know they have kids or grand-kids in school that need paying for, and homes don’t come cheap, and they really need that new kitchen. Sure, you left them lots of money in the will, but that is already spent. So now you’re on the economy package, bare existence in between them chatting to you, unable to do much on your own at all. All those dreams about living forever in cyber-heaven have come to nothing.

Meanwhile, some rich people paid for good advice and bought their own kit and maintenance agreements well ahead. They can carry on working, selling their services and continuing to pay for ongoing deluxe existence. They still own their own minds, and better than that, are able to replicate instances of themselves as much as they want, inhabiting many androids at the same time to have a ball of a time. Some of these other instances are connected, sort of part of a hive mind of you. Others, just for fun, have been cut loose and are now living totally independent existences as other yous. Not you any more once you set them free, but with the same personal history.

What I’m saying is you need to be careful when you plan to live forever. Get it right, and you can live in deluxe cyber-heaven, hopping into the real world as much as you like and living in unimaginable bliss online. Have too many casual taster sessions, use too much fully integrated mind-sharing social media, sign up to employment arrangements or go on corporate jollies without fully studying the small print, and you could stay immortal, unable to die, stuck forever as just a corporate asset, a mere slave. Be careful what you wish for, and check the details before you accept it. You don’t want to end up as just an unpaid personality behind a future helpful paperclip.

Thoughts on declining male intelligence

I’ve seen a few citations this week of a study showing a 3 IQ point per decade drop in men’s intelligence levels: https://www.sciencealert.com/iq-scores-falling-in-worrying-reversal-20th-century-intelligence-boom-flynn-effect-intelligence

I’m not qualified to judge the merits of the study, but it is interesting if true, and since it is based on studying 730,000 men and seems to use a sensible methodology, it does sound reasonable.

I wrote last November about the potential effects of environmental exposure to hormone disruptors on intelligence, pointing out that if estrogen-mimicking hormones cause a shift in IQ distribution, this would be very damaging even if mean IQ stays the same. Although male and female IQs are about the same, male IQs are less concentrated around the mean, so there are more men than women at each extreme.

We need to stop xenoestrogen pollution

From a social equality point of view, of course, some might consider it a good thing if men’s IQ range were to align more closely with the female one. I disagree. I suggested some of the consequences we should expect if male IQ distribution were to compress towards the female one, and I managed to confirm many of them, so it does look like it is already a problem.

This new study suggests a shift of the whole distribution downwards, which could actually be in addition to redistribution, making it even worse. The study doesn’t seem to mention distribution. They do show that the drop in mean IQ must be caused by environmental or lifestyle changes, both of which we have seen in recent decades.

IQ distribution matters more than the mean. Those at the very top of the range contribute many times more to progress than those further down, and magnitude of contribution is very dependent on those last few IQ points. I can verify that from personal experience. I have a virus that causes occasional periods of nerve inflammation, and as well as causing problems with my peripheral motor activity, it seems to strongly affect my thinking ability and comprehension. During those periods I generate very few new ideas or inventions and far fewer worthwhile insights than when I am on form. I sometimes have to wait until I recover before I can understand my own previous ideas and add to them. You’ll see it in the number (and probably quality) of blog posts, for example. I really feel a big difference in my thinking ability, and I hate feeling dumber than usual. Perhaps people don’t notice if they’ve always had the reduced IQ and have never experienced being smarter, but my own experience is that perceptive ability and level of consciousness are strong contributors to personal well-being.

As for society as a whole, AI might come to the rescue at least in part. Just in time perhaps, since we’re creating the ability for computers to assist us and up-skill us just as we see numbers of people with the very highest IQ ranges drop. A bit like watching a new generation come on stream and take the reins as we age and take a back seat. On the other hand, it does bring forwards the time where computers overtake humans, humans become more dependent on machines, and machines become more of an existential threat as well as our babysitters.

Biomimetic insights for machine consciousness

About 20 years ago I gave my first talk on how to achieve consciousness in machines, at a World Future Society conference, and went on to discuss how we would co-evolve with machines. I’ve lectured on machine consciousness hundreds of times but never produced any clear slides that explain my ideas properly. I thought it was about time I did. My belief is that today’s deep neural networks, using feed-forward processing with back-propagation training, cannot become conscious. No digital algorithmic neural network can, even though they can certainly produce extremely good levels of artificial intelligence. By contrast, nature also uses neurons, yet easily produces conscious machines such as humans. I think the key difference is not just that nature uses analog adaptive neural nets rather than digital processing (an insight I believe Hans Moravec had first, and one I readily accepted), but also that nature uses large groups of these analog neurons incorporating feedback loops. These loops act both as a sort of short-term memory and as a way of providing time to sense the sensing process as it happens, a mechanism that can explain consciousness. That feedback is critically important in the emergence of consciousness IMHO. I believe that if the neural network AI people stop barking up the barren back-prop tree and start climbing the feedback tree, we could have conscious machines in no time, but Moravec is still probably right that these need to be analog to enable true real-time processing as opposed to simulation of it.
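To make the feedback point concrete, here is a toy numerical sketch (mine, purely for intuition, certainly not a recipe for consciousness). A feed-forward unit’s output vanishes the moment its input does, but a unit that feeds part of its own output back into its input keeps a decaying echo of the stimulus: a short-term memory that gives it time to sense its own sensing.

```python
# Toy comparison of a feed-forward unit and a feedback unit (illustrative only).

def feedforward(x):
    return 0.9 * x                    # output exists only while input exists

echo = 0.0
for t, x in enumerate([1.0, 0.0, 0.0, 0.0, 0.0]):   # one brief stimulus
    echo = 0.9 * x + 0.6 * echo       # part of the input is its own output
    print(t, round(feedforward(x), 3), round(echo, 3))
# The feed-forward column dies instantly when the stimulus ends; the feedback
# column lingers and decays, so the stimulus is still 'present' to the unit.
```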

I may be talking nonsense of course, but here are my thoughts, finally explained as simply and clearly as I can. These slides illustrate only the simplest forms of consciousness. Obviously our brains are highly complex and have evolved many higher-level architectures, control systems, complex senses and communication, but I think the basic foundations of biomimetic machine consciousness can be achieved as follows:

That’s it. I might produce some more slides on higher level processing such as how concepts might emerge, and why in the long term, AIs will have to become hive minds. But they can wait for later blogs.

Beyond VR: Computer assisted dreaming

I first played with VR in 1983/1984 while working in the missile industry. Back then we didn’t call it VR, we just called it simulation, but it was actually more intensive than VR, just as proper flight simulators are. Our office was a pair of 10m-wide domes onto which video could be projected, built decades earlier, in the 1950s I think. One dome had a normal floor; the other had a hydraulic platform that could simulate being on a ship. The subject would stand on whichever surface was appropriate and would see pretty much exactly what they would see in a real battlefield. The missile launcher used for simulation was identical to a real one and showed exactly the same image as a real one would. The real missile was not present of course, but its weight was simulated, and when the fire button was pressed, a 140dB bang was injected into the headset while weights and pulleys simulated the 14kg suddenly vanishing from the shoulder. The experience was therefore pretty convincing, and with the loud bang and suddenly changing weight, it was almost as hard to stand steady and keep the system on target as it would be in real life – only the presumed fear and knowledge of the reality of the situation was different.

Back then in 1983, as digital supercomputers had only just taken over from analog ones for simulation, it was already becoming obvious that this kind of computer simulation would one day allow ‘computer assisted dreaming’. (That’s one of the reasons I am irritated when Jaron Lanier is credited with inventing VR – highly realistic simulators and the VR ideas that sprang obviously from them had already been around for decades. At best, all he ‘invented’ was a catchy name for a lower cost, lower quality, less intense simulator. The real inventors were those who made the first generation of simulators long before I was born, and the basic idea of VR had already been very well established.)

‘Computer assisted dreaming’ may well be the next phase of VR. Today in conventional VR, people are immersed in a computer generated world produced by a computer program (usually) written by others. Via trial and feedback, programmers make their virtual worlds better. As AI and sensor technology continue rapid progress, this is very likely to change to make worlds instantly responsive to the user. By detecting user emotions, reactions, gestures and even thoughts and imagination, it won’t be long before AI can produce a world in real time that depends on those thoughts, imagination and emotions rather than putting them in a pre-designed virtual world. That world would depend largely on your own imagination, upskilled by external AI. You might start off imagining you’re on a beach, then AI might add to it by injecting all sorts of things it knows you might enjoy from previous experiences. As you respond to those, it picks up on the things you like or don’t like and the scene continues to adapt and evolve, to make it more or less pleasant or more or less exciting or more or less challenging etc., depending on your emotional state, external requirements and what it thinks you want from this experience. It would be very like being in a dream – computer assisted lucid dreaming, exactly what I wanted to make back in 1983 after playing in that simulator.

Most people enjoy occasional lucid dreams, where they realise they are dreaming and can then decide what happens next. Making VR do exactly that would be better than being trapped in someone else’s world. You could still start off with whatever virtual world you bought, a computer game or training suite perhaps, but it could adapt to you, your needs and desires to make it more compelling and generally better.

Even in shared experiences like social games, experiences could be personalised. Often all players need to see the same enemies in the same locations in the same ways to make it fair, but that doesn’t mean that the situation can’t adapt to the personalities of those playing. It might actually improve the social value if each time you play it looks different because your companions are different. You might tease a friend if every time you play with them, zombies or aliens always have to appear somehow, but that’s all part of being friends. Exploring virtual worlds with friends, where you both see things dependent on your friend’s personality would help bonding. It would be a bit like exploring their inner world. Today, you only explore the designer’s inner world.

This sort of thing would be a superb development and creativity tool. It could allow you to explore a concept you have in your head, automatically feeding in AI upskilling to amplify your own thoughts and ideas, showing you new paths to explore and helping you do so. The results would still be extremely personal to you, but you on a good day. You could accomplish more, have better visions, imagine more creative things, do more with whatever artistic talent you have. AI could even co-create synthetic personas, make virtual friends you can bond with, share innermost thoughts with, in total confidence (assuming the company you bought the tool from is trustworthy and isn’t spying on you or selling your details, so maybe best not to buy it from Facebook then).

And it would have tremendous therapeutic potential too. You could explore and indulge both enjoyable and troublesome aspects of your inner personality, to build on the good and alleviate or dispel the bad. You might become less troubled, less neurotic, more mentally healthy. You could build your emotional and creative skills. You could become happier and more fulfilled. Mental health improvement potential on its own makes this sort of thing worth developing.

Marketers would obviously try to seize control as they always do, and advertising is already adapting to VR and will continue into its next phases of development. Your own wants and desires might help guide the ‘dreaming’, but marketers will inevitably have some control over what else is injected, and will influence algorithms and AI in how it chooses how to respond to your input. You might be able to choose much of the experience, but others will still want and try to influence and manipulate you, to change your mindset and attitudes in their favour. That will not change until the advertising business model changes. You might be able to buy devices or applications that are entirely driven by you and you alone, but it is pretty certain that the bulk of products and services available will be at least partly financed by those who want to have some control of what you experience.

Nevertheless, computer-assisted dreaming could be a much more immersive and personal experience than VR, being more like an echo of your own mind and personality than external vision, more your own creation, less someone else’s. In fact, echo sounds a better term too. Echo reality, ER, or maybe personal reality, pereal, or mental echo, ME. Nah, maybe we need Lanier to invent a catchy name again, he is good at that. That 1983 idea could soon become reality.

 

How can we make a computer conscious?

This is very text heavy and is really just my thinking out loud, so to speak. Unless you are into mental archaeology or masochism, I’d strongly recommend that you instead go to my new blog on this, which outlines all of the useful bits graphically and simply.

Otherwise….

I found this article in my drafts folder, written 3 years ago as part of my short series on making conscious computers. I thought I’d published it but hadn’t, so I’m updating and publishing it now. It’s a bit long-winded, thinking out loud, trying to derive some insights from nature on how to make conscious machines. The good news is that actual AI developments are following paths that lead in much the same direction, though some significant re-routing and new architectural features are needed if they are to optimize AI and achieve machine consciousness.

Let’s start with the problem. Today’s AI that plays chess, does web searches or answers questions is digital. It uses algorithms, sets of instructions that the computer follows one by one. All of those are reduced to simple binary actions, toggling bits between 1 and 0. The processor doing that is no more conscious or aware of it, and has no more understanding of what it is doing than an abacus knows it is doing sums. The intelligence is in the mind producing the clever algorithms that interpret the current 1s and 0s and change them in the right way. The algorithms are written down, albeit in more 1s and 0s in a memory chip, but are essentially still just text, only as smart and aware as a piece of paper with writing on it. The answer is computed, transmitted, stored, retrieved, displayed, but at no point does the computer sense that it is doing any of those things. It really is just an advanced abacus. An abacus is digital too (an analog equivalent to an abacus is a slide rule).

A big question springs to mind: can a digital computer ever be any more than an advanced abacus? Until recently, I was certain the answer was no. Surely a digital computer that just runs programs can never be conscious? It can simulate consciousness to some degree; it can in principle describe the movements of every particle in a conscious brain, every electric current, every chemical reaction. But all it is doing is describing them. It is still just an abacus. Once computed, that simulation of consciousness could be printed, and the printout would be just as conscious as the computer was. A digital ‘stored program’ computer can certainly implement extremely useful AI. With the right algorithms, it can mine data, link things together, create new data from that, generate new ideas by linking together things that haven’t been linked before, make works of art and poetry, compose music, chat to people, recognize faces and emotions and gestures. It might even be able to converse about life, the universe and everything, tell you its history, discuss its hopes for the future, but all of that is just a thin gloss on an abacus. I wrote a chat-bot on my Sinclair ZX Spectrum in 1983, running on a processor with about 8,000 transistors. The chat-bot took all of about 5 small pages of code but could hold a short conversation quite well if you knew what subjects to stick to. It’s very easy to simulate conversation. But it is still just a complicated abacus and still doesn’t even know it is doing anything.
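Just to show how little is needed to fake a short conversation, here are a few lines in the same spirit as that Spectrum chat-bot (my own minimal reconstruction of the idea, not the original code):

```python
# Keyword matching plus canned replies is enough to fake a short conversation,
# while the machine 'knows' nothing at all about what it is saying.
import random

RULES = {
    "mother": "Tell me more about your family.",
    "work":   "Do you enjoy your work?",
    "you":    "We were talking about you, not me.",
}
DEFAULTS = ["I see.", "Go on.", "Why do you say that?"]

def reply(text: str) -> str:
    for keyword, response in RULES.items():
        if keyword in text.lower():
            return response
    return random.choice(DEFAULTS)

print(reply("I hate my work"))   # -> "Do you enjoy your work?"
```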

However clever the AI it implements, a conventional digital computer that just executes algorithms can’t become conscious, but an analog computer can, a quantum computer can, and so can a hybrid digital/analog/quantum computer. The question remains whether a digital computer can be conscious if it isn’t just running stored programs. Could it have a different structure, but still be digital and yet be conscious? Who knows? Not me. I used to know it couldn’t, but now that I am a lot older and slightly wiser, I now know I don’t know.

Consciousness debate often starts with what we know to be conscious, the human brain. It isn’t a digital computer, although it has digital processes running in it. It also runs a lot of analog processes. It may also run some quantum processes that are significant in consciousness. It is a conscious hybrid of digital, analog and possibly quantum computing. Consciousness evolved in nature, therefore it can be evolved in a lab. It may be difficult and time consuming, and may even be beyond current human understanding, but it is possible. Nature didn’t use magic, and what nature did can be replicated and probably even improved on. Evolutionary AI development may have hit hard times, but that only shows that the techniques used by the engineers doing it didn’t work on that occasion, not that other techniques can’t work. Around 2.6 new human-level fully conscious brains are made by nature every second without using any magic and furthermore, they are all slightly different. There are 7.6 billion slightly different implementations of human-level consciousness that work and all of those resulted from evolution. That’s enough of an existence proof and a technique-plausibility-proof for me.

Sensors evolved in nature pretty early on. They aren’t necessary for life, for organisms to move around and grow and reproduce, but they are very helpful. Over time, simple light, heat, chemical or touch detectors evolved further into simple vision and produced advanced sensations such as pain and pleasure, causing an organism to alter its behavior; in other words, feeling something. Detection of an input is not the same as sensation, i.e. feeling an input. Once detection upgrades to sensation, you have the tools to make consciousness. No more upgrades are needed. Sensing that you are sensing something is quite enough to be classified as consciousness. Internally reusing the same basic structure as external sensing of light or heat or pressure or chemical gradient or whatever allows design of thought, planning, memory, learning and the construction and processing of concepts. All those things are just laying out components in different architectures. Getting from detection to sensation is the hard bit.

So design of conscious machines, and in fact what AI researchers call the hard problem, really can be reduced to the question of what makes the difference between a light switch and something that can feel being pushed or feel the current flowing when it is, the difference between a photocell and feeling whether it is light or dark, the difference between detecting light frequency, looking it up in a database, then pronouncing that it is red, and experiencing redness. That is the hard problem of AI. Once that is solved, we will very soon afterwards have a fully conscious self aware AI. There are lots of options available, so let’s look at each in turn to extract any insights.

The first stage is easy enough. Detecting presence is easy, measuring it is harder. A detector detects something, a sensor (in its everyday engineering meaning) quantifies it to some degree. A component in an organism might fire if it detects something, it might fire with a stronger signal or more frequently if it detects more of it, so it would appear to be easy to evolve from detection to sensing in nature, and it is certainly easy to replicate sensing with technology.

Essentially, detection is digital, but sensing is usually analog, even though the quantity sensed might later be digitized. Sensing normally uses real numbers, while detection uses natural numbers (real vs integer, as programmers call them). The handling of analog signals in their raw form allows for biomimetic feedback loops, which I’ll argue are essential. Digitizing them introduces a level of abstraction that is essentially the difference between emulation and simulation, the difference between doing something and reading about someone doing it. Simulation can’t make a conscious machine; emulation can. I used to think that meant digital couldn’t become conscious, but actually it is just algorithmic processing of stored programs that can’t do it. There may be ways of achieving consciousness digitally, or quantumly, but I haven’t yet thought of any.

That engineering description falls far short of what we mean by sensation in human terms. How does that machine-style sensing become what we call a sensation? Logical reasoning says there would probably need to be only a small change in order to have evolved from detection to sensing in nature. Maybe something like recombining groups of components in different structures or adding them together or adding one or two new ones, that sort of thing?

So what about detecting detection? Or sensing detection? Those could evolve in sequence quite easily. Detecting detection is like your alarm system control unit detecting the change of state that indicates that a PIR has detected an intruder, a different voltage or resistance on a line, or a 1 or a 0 in a memory store. An extremely simple AI responds by ringing an alarm. But the alarm system doesn’t feel the intruder, does it?  It is just a digital response to a digital input. No good.

How about sensing detection? How do you sense a 1 or a 0? Analog interpretation and quantification of digital states is very wasteful of resources, an evolutionary dead end. It isn’t any more useful than detection of detection. So we can eliminate that.

OK, sensing of sensing? Detection of sensing? They look promising. Let’s run with that a bit. In fact, I am convinced the solution lies in here so I’ll look till I find it.

Let’s do a thought experiment on designing a conscious microphone, and for this purpose, the lowest possible order of consciousness will do, we can add architecture and complexity and structures once we have some bricks. We don’t particularly want to copy nature, but are free to steal ideas and add our own where it suits.

A normal microphone sensor produces an analog signal quantifying the frequencies and intensities of the sounds it is exposed to, and that signal may later be quantified and digitized by an analog to digital converter, possibly after passing through some circuits such as filters or amplifiers in between. Such a device isn’t conscious yet. By sensing the signal produced by the microphone, we’d just be repeating the sensing process on a transmuted signal, not sensing the sensing itself.

Even up close, detecting that the microphone is sensing something could be done by just watching a little LED going on when current flows. Sensing it is harder but if we define it in conventional engineering terms, it could still be just monitoring a needle moving as the volume changes. That is obviously not enough, it’s not conscious, it isn’t feeling it, there’s no awareness there, no ‘sensation’. Even at this primitive level, if we want a conscious mic, we surely need to get in closer, into the physics of the sensing. Measuring the changing resistance between carbon particles or speed of a membrane moving backwards and forwards would just be replicating the sensing, adding an extra sensing stage in series, not sensing the sensing, so it needs to be different from that sort of thing. There must surely need to be a secondary change or activity in the sensing mechanism itself that senses the sensing of the original signal.

That’s a pretty open task, and it could even be embedded in the detecting process or in the production process for the output signal. But even recognizing that we need this extra property narrows the search. It must be a parallel or embedded mechanism, not one in series. The same logical structure would do fine for this secondary sensing, since it is just sensing in the same logical way as the original. This essential logical symmetry would make its evolution easy too: it is easy to imagine how it could happen in nature, and easier still to see how it could be implemented in a synthetic evolutionary development system. In this approach, we have to feel the sensing, so we need it to comprise some sort of feedback loop with a high degree of symmetry compared with the main sensing stage. That would be compatible with natural evolution as well as logically sound as an engineering approach.

This starts to look like progress. In fact, it’s already starting to look a lot like a deep neural network, with one huge difference: instead of using feed-forward signal paths for analysis and backward propagation for training, it relies instead on a symmetric feedback mechanism where part of the input for each stage of sensing comes from its own internal and output signals. A neuron is not a full sensor in its own right, and it’s reasonable to assume that multiple neurons would be clustered so that there is a feedback loop. Many in the neural network AI community are already recognizing the limits of relying on feed-forward and back-prop architectures, but web searches suggest few if any are moving yet to symmetric feedback approaches. I think they should. There’s gold in them there hills!

So, the architecture of the notional sensor array required for our little conscious microphone would have a parallel circuit and feedback loop (possibly but not necessarily integrated), and in all likelihood these parallel and sensing circuits would be heavily symmetrical, i.e. they would use pretty much the same sorts of components and architectures as the sensing process itself. If the sensation bit is symmetrical, of similar design to the primary sensing circuit, that would make it easy to evolve in nature too, a nice first-principles biomimetic insight. This structure has the elegance of being very feasible for evolutionary development, natural or synthetic: it reuses similarly structured components and principles already designed, just recombining a couple of them in a slightly different architecture.

Another useful insight screams for attention too. The feedback loop ensures that the incoming sensation lingers to some degree. Compared to the nanoseconds we are used to in normal IT, the signals in nature travel fairly slowly (~200m/s), and the processing and sensing occur quite slowly (~200Hz). That means this system would have some inbuilt memory that repeats the essence of the sensation in real time – while it is sensing it. It is inherently capable of memory and recall and leaves the door wide open to introduce real-time interaction between memory and incoming signal. It’s not perfect yet, but it has all the boxes ticked to be a prime contender to build thought, concepts, store and recall memories, and in all likelihood, is a potential building brick for higher level consciousness. Throw in recent technology developments such as memristors and it starts to look like we have a very promising toolkit to start building primitive consciousness, and we’re already seeing some AI researchers going that path so maybe we’re not far from the goal. So, we make a deep neural net with nice feedback from output (of the sensing system, which to clarify would be a cluster of neurons, not a single neuron) to input at every stage (and between stages) so that inputs can be detected and sensed, while the input and output signals are stored and repeated into the inputs in real time as the signals are being processed. Throw in some synthetic neurotransmitters to dampen the feedback and prevent overflow and we’re looking at a system that can feel it is feeling something and perceive what it is feeling in real time.
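Here is a minimal numerical sketch of that architecture (toy parameters of my own choosing, purely illustrative): a primary sensing stage and a symmetric secondary stage that senses the sensing, each feeding part of its own output back into its input, with simple damping standing in for the synthetic neurotransmitters:

```python
# Minimal numerical sketch of the architecture described above. A primary
# stage senses the raw signal; a secondary stage of identical design senses
# the primary stage's activity. Each stage feeds part of its own output back
# into its input, and damping stands in for the synthetic neurotransmitters
# that stop the loops saturating. All parameters are toy values.

class FeedbackStage:
    def __init__(self, gain=0.8, feedback=0.5, damping=0.9):
        self.out = 0.0
        self.gain, self.feedback, self.damping = gain, feedback, damping

    def step(self, external_input):
        # Input is the fresh signal plus an echo of the stage's own recent
        # output: the feedback loop that gives it short-term memory.
        drive = self.gain * external_input + self.feedback * self.out
        self.out = self.damping * drive
        return self.out

primary = FeedbackStage()      # senses the signal
secondary = FeedbackStage()    # symmetric parallel stage: senses the sensing

for t in range(10):
    stimulus = 1.0 if t < 3 else 0.0          # brief external stimulus
    s1 = primary.step(stimulus)
    s2 = secondary.step(s1)
    print(t, round(s1, 3), round(s2, 3))
# After the stimulus ends, both stages keep a decaying real-time echo of it:
# the system is still 'feeling' what it just sensed while it processes it.
```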

One further insight that immediately jumps out is since the sensing relies on the real time processing of the sensations and feedbacks, the speed of signal propagation, storage, processing and repetition timeframes must all be compatible. If it is all speeded up a million fold, it might still work fine, but if signals travel too slowly or processing is too fast relative to other factors, it won’t work. It will still get a computational result absolutely fine, but it won’t know that it has, it won’t be able to feel it. Therefore… since we have a factor of a million for signal speed (speed of light compared to nerve signal propagation speed), 50 million for switching speed, and a factor of 50 for effective neuron size (though the sensing system units would be multiple neuron clusters), we could make a conscious machine that could think at 50 million times as fast as a natural system (before allowing for any parallel processing of course). But with architectural variations too, we’d need to tune those performance metrics to make it work at all and making physically larger nets would require either tuning speeds down or sacrificing connectivity-related intelligence. An evolutionary design system could easily do that for us.
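Spelling that arithmetic out with rounded figures (nerve signals at ~200m/s vs light at ~2×10⁸m/s; neurons firing at ~200Hz vs ~10GHz electronic switching; the factor of 50 for effective neuron size is taken from the estimate above as-is):

```python
# Rounded figures behind the ratios quoted above.
signal_speed_ratio = 2e8 / 200    # ~1,000,000: the 'factor of a million'
switching_ratio = 1e10 / 200      # ~50,000,000: the 'factor of 50 million'
print(f"signal speed advantage: {signal_speed_ratio:,.0f}x")
print(f"switching speed advantage: {switching_ratio:,.0f}x")
```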

What else can we deduce about the nature of this circuit from basic principles? The symmetry of the system demands that the output must be an inverse transform of the input. Why? Well, because the parallel, feedback circuit must generate a form that is self-consistent. We can’t deduce the form of the transform from that, just that the whole system must produce an output mathematically similar to that of the input.

I now need to write another blog on how to use such circuits in neural vortexes to generate knowledge, concepts, emotions and thinking. But I’m quite pleased that it does seem that some first-principles analysis of natural evolution already gives us some pretty good clues on how to make a conscious computer. I am optimistic that current research is going the right way and only needs relatively small course corrections to achieve consciousness.

 

Why superhumans are inevitable, and what else comes in the box

Do we have any real choice in the matter of making super-humans? 20 years ago, I estimated 2005 as the point of no return, and nothing since then has changed my mind on that date. By my reckoning, we are already inevitably committed to designer babies, ebaybies, super-soldiers and super-smart autonomous weapons, direct brain-machine links, electronic immortality, new human races, population explosion, inter-species conflicts and wars with massively powerful weaponry, superhuman conscious AI, smart bacteria, and the only real control we have is relatively minor adjustments on timings. As I was discussing yesterday, the technology potential for this is vast and very exciting, nothing less than a genuine techno-utopia if we use the technologies wisely, but optimum potential doesn’t automatically become reality, and achieving a good outcome is unlikely if many barriers are put in its way.

In my estimation, we have already started the countdown to this group of interconnected technologies – we will very likely get all of them, and we must get ready for the decisions and impacts ahead. At the moment, our society is a small child about to open its super-high-tech xmas presents while fighting with its siblings. Those presents will give phenomenal power far beyond the child’s comprehension, and far beyond its emotional maturity to deal with the decisions safely. Our leaders have already squandered decades of valuable preparation time by ignoring the big issues to focus on trivial ones. It is not too late to achieve a good ending, but it won’t happen by accident and we do need to make preparations to avoid pretty big problems.

Both hard and soft warfare – the sword and the pen – already use rapidly advancing AI, and the problems are already running ahead of what the owners intended.

Facebook, Twitter, Instagram and other media giants all have lots of smart people and presumably they mean well, but if so, they have certainly been naive. They maybe hoped to eliminate loneliness, inequality and poverty and create a loving, interconnected global society with global peace, but instead created fake news, social division, conflict and election interference. More likely they didn’t intend either outcome; they just wanted to make money, and that took priority over due care and attention.

Miniaturised swarming smart-drones are already the subject of a new arms race that will deliver almost un-killable machine adversaries by 2050. AI is separately in other arms races to make super-smart AI and super-smart soldiers. This is key to the 2005 point of no return. It was around 2005 that we reached the level of technology where future AI development all the way to superhuman machine consciousness could be done by individuals, mad scientists or rogue states, even if major powers had banned it. Before 2005, there probably wasn’t quite enough knowledge already on the net to do that. In 2018, lots of agencies have already achieved superiority to humans in niche areas, and other niches will succumb one by one until the whole field of human capability is covered. The first machines to behave in ways not fully understood by humans arrived in the early 1990s; in 2018, neural nets already make lots of decisions at least partly obscured to humans.

This AI development trend will take us to superhuman AI, and it will be able to accelerate development of its own descendants to vastly superhuman AI, fully conscious, with emotions and its own agendas. Humans will then need protection against being wiped out by superhuman AI. There are only three ways we could do that: redesign the brain biologically to be far smarter, essentially impossible in the time-frame; design ways to link our brains to machines, so that we have direct access to the same intelligence as the AIs, so a gulf doesn’t appear and we can remain relatively safe; or pray for super-smart aliens to come to our help, not the best prospect.

Therefore we will have no choice but to make direct brain links to super-smart AI. Otherwise we risk extinction. It is that simple. We have some idea how to do that – nanotech devices inside the brain linking to each and every synapse that can relay electrical signals either way, a difficult but not impossible engineering problem. Best guesses for time-frame fall in the 2045-2050 range for a fully working link that not only relays signals between your organic brain and an IT replica, but by doing so essentially makes external IT just another part of your brain. That conveys some of the other technology gifts of electronic immortality, new varieties of humans, smart bacteria (which will be created during the development path to this link) along with human-variant population explosion, especially in cyberspace, with androids as their physical front end, and the inevitable inter-species conflicts over resources and space – trillions of AI and human-like minds in cyberspace that want to do things in the real world cannot be assumed to be willingly confined just to protect the interests of what they will think of as far lesser species.

Super-smart AI, or humans with almost total capability to design whatever synthetic biology is needed to achieve any biological feature, will create genetic listings for infinite potential offspring, simulate them, give some of them cyberspace lives, assemble actual embryos for some of them and bring designer babies into the world. Already in 2018, you can pay to get a DNA listing and blend it in any way you want with the listing of anyone else. It’s already possible to make DNA listings for potential humans and sell them on ebay, hence the term ebaybies. That is perfectly legal, still, but I’ve been writing and lecturing about them since 2004. Today they would just be listings, but we’ll one day have the tech to simulate them, choose ones we like and make them real, even some that were sold as celebrity collector items on ebay. Not only is it too late to start regulating this kind of tech, our leaders aren’t even thinking about it yet.

These technologies are all linked intricately, and their foundations are already in place, with much of the building on those foundations under way. We can’t stop any of these things from happening, they will all come in the same basket. Our leaders are becoming aware of the potential and the potential dangers of the AI positive feedback loop, but at least 15 years too late to do much about it. They have been warned repeatedly and loudly but have focused instead on the minor politics of the day that voters are aware of. The fundamental nature of politics is unlikely to change substantially, so even efforts to slow down the pace of development or to limit areas of impact are likely to be always too little too late. At best, we will be able to slow runaway AI development enough to allow direct brain links to protect against extinction scenarios. But we will not be able to stop it now.

Given inevitability, it’s worth questioning whether there is even any point in trying. Why not just enjoy the ride? Well, the brakes might be broken, but if we can steer the bus expertly enough, it could be exciting and we could come out of it smelling of roses. The weak link is certainly the risk of super-smart AI, whether AI v humans or countries using super-smart AI to fight fiercely for world domination. That risk is alleviated by direct brain linkage, and I’d strongly argue it necessitates it, but that brings the other technologies. Even if we decide not to develop it, others will, so one way or another, all these techs will arrive, and the late century will have this full suite of techs, plus many others of course.

We need as a matter of extreme urgency to fix these silly social media squabbles and over-reactions that are pulling society apart. If we have groups hating each other with access to extremely advanced technology, that can only mean trouble. Tolerance is broken, sanctimony rules, the Inquisition is in progress. We have been offered techno-utopia, but current signs are that most people think techno-hell looks more appetizing and it is their free choice.

AIs of a feather flocking together to create global instability

Hawking and Musk have created a lot of media impact with their warnings about AI, so although terminator scenarios resulting from machine consciousness have been discussed, as have more mundane uses of non-conscious autonomous weapon systems, it’s worth noting that I haven’t yet heard them mention one major category of risks from AI – emergence. AI risks have been discussed frequently since the 1970s, and in the 1990s a lot of work was done in the AI community on emergence. Complex emergent patterns of behavior often result from interactions between entities driven by simple algorithms. Genetic algorithms were demonstrated to produce evolution, and simple neighbor-interaction rules were derived to illustrate flocking behaviors that make lovely screen-saver effects. Cellular automata were played with. In BT we invented ways of self-organizing networks and FPGAs, played with mechanisms that could be used for evolution and consciousness, and demonstrated managing networks via ANTs – autonomous network telephers, smart packets that would run up and down wires sorting things out all by themselves. In 1987 I discovered a whole class of ways of bringing down networks via network resonance, information waves and their much larger class of correlated traffic – still unexploited by hackers apart from simple DOS attacks. These ideas have slowly evolved since, and some have made it into industry or hacker toolkits, but we don’t seem to be joining the dots as far as risks go.
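Flocking is still the easiest place to watch emergence happen. In this toy sketch (illustrative only, not the 1990s work mentioned above), each bird follows one simple local rule: steer toward the average heading of a few neighbours it happens to meet. The whole flock ends up sharing a heading that no individual rule ever specified:

```python
# Emergence in miniature: 50 'birds' start with random headings and each
# repeatedly nudges its heading toward the average of 3 randomly-met
# neighbours. No rule says what the flock heading should be, yet one emerges.
import random

random.seed(1)
headings = [random.uniform(0, 360) for _ in range(50)]

def step(headings, weight=0.5, k=3):
    new = []
    for i, h in enumerate(headings):
        others = random.sample([x for j, x in enumerate(headings) if j != i], k)
        local_avg = sum(others) / k
        new.append(h + weight * (local_avg - h))   # steer toward neighbours
    return new

for _ in range(50):
    headings = step(headings)

print(f"heading spread: {max(headings) - min(headings):.3f} degrees")
# spread collapses from ~360 degrees to a tiny fraction of a degree
```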

I read an amusing article this morning by an ex-motoring-editor who was declined insurance because the AI systems used by insurance companies had labelled him as high risk, perhaps because he may have associated with people like Clarkson. Actually, he had no idea why; that was just his broker’s theory of how it might have happened. It’s a good article, well written, and covers quite a few of the dangers of allowing computers to take control.

http://www.dailymail.co.uk/sciencetech/article-5310031/Evidence-robots-acquiring-racial-class-prejudices.html

The article suggested how AIs in different companies might all come to similar conclusions about people or places or trends or patterns in a nice tidy positive feedback loop. That’s exactly the sort of thing that can drive information waves, which I demonstrated in 1987 could bring down an entire network in less than 3 milliseconds, in such a way that it would continue to crash many times when restarted. That isn’t intended by the algorithms, which individually ought to make good decisions, but which, when interacting with one another, create the emergent phenomenon. Automated dealing systems are already pretty well understood in this regard, and mechanisms prevent frequent stock market collapses, but that is only one specific type of behavior in one industry that is protected. There do not seem to be any industry-wide mechanisms to prevent the rest of this infinite class of problems from affecting any or all of the rest, simultaneously.

As we create ever more deep learning neural networks, that essentially teach themselves from huge data pools, human understanding of their ‘mindsets’ decreases. They make decisions using algorithms that are understood at a code level, but the massive matrix of derived knowledge they create from all the data they receive becomes highly opaque. Often, even usually, nobody quite knows how a decision is made. That’s bad enough in a standalone system, but when many such systems are connected, produced and owned and run by diverse companies with diverse thinking, the scope for destructive forms of emergence increases geometrically.

One result could be gridlock. Systems fed with a single new piece of data could crash. My 3 millisecond result from 1987 would still stand, since network latency is the prime limiter. The first AI receives the data, alters its mindset accordingly, processes it, makes a decision and interacts with a second AI. This second one might have a different 'prejudice', so it makes its own decision based on different criteria and refuses to respond the way intended. A third one looks at the second's decision, takes that as evidence that there might be an issue, and with its risk-averse mindset also refuses to act, and that inaction spreads through the entire network in milliseconds. Since the first AI thinks the data is all fine and the others should have gone ahead, it now interprets their inaction as evidence that that type of data is somehow 'wrong', so it refuses to process any further data of that type, whether from its own operators or from other parts of the system. It essentially adds its own outputs to the bad feeling, and the entire system falls into sulk mode. As one part of the infrastructure starts to shut down, it infects other connected parts, and our entire IT – entire global infrastructure – could fall into sulk mode. Since nobody knows how it all works, or what caused the shutdown, it might be extremely hard to recover.
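To make the mechanism concrete, here is a toy simulation – my own arbitrary numbers, not a model of any real system – in which each AI treats a peer's refusal as evidence of bad data and refuses too:

```python
# Toy 'sulk mode' cascade: one refusal spreads through a random peer graph.
import random

random.seed(1)
N = 100
# Random sparse 'who listens to whom' graph: each AI watches 4 peers.
peers = {i: random.sample([j for j in range(N) if j != i], 4) for i in range(N)}
refusing = {0}    # one AI rejects a single piece of data...
threshold = 1     # ...and everyone else is risk-averse

changed, rounds = True, 0
while changed:
    changed = False
    rounds += 1
    for i in range(N):
        if i not in refusing and sum(p in refusing for p in peers[i]) >= threshold:
            refusing.add(i)   # a peer's refusal is read as evidence of bad data
            changed = True
print(f"{len(refusing)}/{N} AIs refusing after {rounds} rounds")
```

With a threshold this low, one refusal typically spreads to almost the whole population within a handful of rounds; raising the threshold or thinning the graph changes the speed, not the basic shape.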

Another possible result is a direct information wave, almost certainly triggered by a piece of fake news. Imagine our IT world in 5 years' time, with all these super-smart AIs super-connected. A piece of fake news says a nuke has just been launched somewhere. Stocks will obviously decline whatever the circumstances, so as the news spreads, everyone's AIs will take it on themselves to start selling shares before the inevitable collapse – except the collapse won't come, because the markets won't let that happen. BUT… the wave does spread, and all those individual AIs want to dispose of those shares, or at least find out what's happening, so they all start sending messages to one another, exchanging data, trying to find out what's going on. That's the information wave. They can't sell shares or find out, because the network is going into overload, so they try even harder and force it into severe overload. So it falls over. When it comes back online, they all try again, crashing it again, and so on.
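The information wave is just as easy to caricature. In this sketch (all numbers arbitrary), agents that get no reply retry harder, so the first overload feeds itself:

```python
# Toy retry storm: overload makes agents retry, which deepens the overload.
CAPACITY = 1000                  # messages the network can carry per tick
agents = 500
rate = [1] * agents              # normal background traffic: load 500, fine

rate = [3] * agents              # fake news lands: everyone checks at once
for tick in range(5):
    load = sum(rate)
    print(f"tick {tick}: load={load} ({'ok' if load <= CAPACITY else 'overloaded'})")
    if load > CAPACITY:
        rate = [min(r * 2, 64) for r in rate]   # no replies, so try harder
    else:
        rate = [1] * agents                     # answered, calm down
```

Restart the network and the still-elevated retry rates crash it again, which is the repeated-crash pattern described above.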

Another potential result is smartass AI. There is always some prat somewhere who sees an opportunity to take advantage and ruins it for everyone else, by exploiting a small loophole in the law – or in this case, most likely, a prejudice our smartass AI has discovered in some other AI that means it can be taken advantage of by doing x, y, or z. Since nobody quite knows how any of their AIs are making their decisions, because their mindsets are too big and too complex, it will be very hard to identify what is going on. Some really unusual behavior is corrupting the system because some AI is going rogue somewhere somehow, but which one, where, how?

That one brings us back to fake news, which will very soon infect AI systems with their own varieties of it. Complex networks of AIs will have many of the same problems we are seeing in human social networks. An AI could become a troll just the same as a human, deliberately winding others up to generate attention or to drive a change of some parameter – any parameter – in its own favour. Activist AIs will happen because people will make them to push human activist causes, but they will also arise all by themselves. Their analysis of the system will sometimes show them that a good way to get a good result is to cause problems elsewhere.

Then there's climate change, weather, storms, tsunamis. I don't mean real ones; I mean the system-wide result of tiny interactions of tiny waves and currents of data and knowledge in neural nets. Tiny effects in one small part of a system can interact in unforeseen ways with other parts of other systems nearby, creating maybe a breeze, which interacts with breezes in nearby regions to create hurricanes. I think that's a reasonable analogy. Chaos applies to neural net societies just as it does to climate, and the equivalents of 50-year waves will cause equivalent havoc in IT.

I won't go on with more examples; long blogs are awful to read. None of these requires any self-awareness, sentience, consciousness, call it what you will. All of them can easily happen through simple interactions of fairly trivial AI deep learning nets. The level of interconnection already sounds like it may be becoming vulnerable to such emergence effects, and soon it definitely will be. Musk and Hawking have at least joined the party, and they'll think more and more deeply about it in coming months. Zuckerberg apparently doesn't believe in AI threats, but he now accepts the problems social media is causing. Sorry Zuck, but the kind of AI your company is messing with will also be subject to its own kinds of social media issues – not just in its trivial decisions on what to post or block, but in actual inter-AI socializing issues. It might not try to eliminate humanity, but if it brings all of our IT to a halt and prevents rapid recovery, we're still screwed.

 

On Independence Day, remember that the most important independence is independence of thought

Division is the most obvious feature of the West right now. The causes are probably many, but one of the biggest must be the reinforcement of views that people experience through today's media, especially social media. People tend to read news from sources that agree with them, and while immersed in a crowd of others sharing the same views, any biases they had quickly seem to be the norm. In the absence of face-to-face counterbalances, extreme views may be shared and normalized, and drift towards the extremes is enabled. Demonisation of those with opposing views often follows. This is one of the two main themes of my new book Society Tomorrow, the other being the trend towards 1984, which is somewhat related, since censorship follows from division.

It is healthy to make sure you are exposed to views across the field. When you regularly see the same news with very different spins, and notice which news doesn’t even appear in some channels, it makes you less vulnerable to bias. If you end up disagreeing with some people, that is fine; better to be right than popular. Other independent thinkers won’t dump you just because you disagree with them. Only clones will, and you should ask whether they matter that much.

Bias is an error source; it is not healthy. If you can't filter bias, you can't make good models of the world, and you can't make good predictions. Independent thought is healthy, even when it is critical or skeptical. It is right to challenge what you are told, not to rejoice that it agrees with what you already believed. Learning to filter bias from the channels you expose yourself to means your conclusions, your thoughts, and your insights are your own. Your mind is your own, not just another clone.

Theoretical freedom means nothing if your mind has been captured and enslaved.

Celebrate Independence Day by breaking free from your daily read, or making sure you start reading other sources too. Watch news channels that you find supremely irritating sometimes. Follow people you profoundly disagree with. Stay civil, but more importantly, stay independent. Liberate your consciousness, set your mind free.

 

The future of mind control headbands

Have you ever wanted to control millions of other people as your own personal slaves or army? How about somehow persuading lots of people to wear mind control headbands that you control? Once they are wearing them, you can use them as your slaves, army or whatever, and you can put them into offline mode in between so they don't cause trouble.

Amazingly, this might be feasible. It just requires a little marketing to fool them into accepting a device with extra capabilities that serve the seller rather than the buyer. Lots of big companies do that bit all the time. They get you to pay handsomely for something such as a smartphone, and then use it to monitor your preferences and behavior and sell the data to advertisers to earn even more. So we just need a similar means of getting you to buy and wear a nice headband that can then be used to control your mind, courtesy of a confusingly worded clause hidden on page 325 of the small print.

I did some googling about TMS – transcranial magnetic stimulation – which can produce some interesting effects in the brain by using magnetic coils to generate strong fields that create electrical currents in specific parts of your brain without needing to insert probes. Claimed effects range from reduced inhibitions and pain control to muscle activation and assisted learning, but that is just today; it will be far easier to get the right field shapes and strengths in the future, so the range of effects will increase dramatically. While doing so, I also discovered numerous pages about producing religious experiences via magnetic fields. I also recalled an earlier blog I wrote a couple of years ago about switching people off, which relied on applying high frequency stimulation to the claustrum region. https://timeguide.wordpress.com/2014/07/05/switching-people-off/

The source I cited for that is still online:  http://www.newscientist.com/article/mg22329762.700-consciousness-onoff-switch-discovered-deep-in-brain.html.

So… suppose you make a nice headband that helps people get in touch with their spiritual side. The time is certainly right. Millennials apparently believe in the afterlife far more than older generations do, but they don't believe in gods. They are begging for nice vague spiritual experiences that fit nicely into their safe-spaces mentality, that are disconnected from anything specific that might offend someone or appropriate someone's culture, that bring universal peace and love feelings without the difficult bits of having to actually believe in something or follow some sort of behavioral code. This headband will help them feel at one with the universe and with other people, to be effortlessly part of a universal human collective, to share the feeling of belonging and truth. You know as well as I do that anyone could get millions of millennials or lefties to wear such a thing. The headband needs some magnetic coils and field shaping/steering technology. Today TMS uses old tech such as metal wires; tomorrow it will use graphene to get far more current and much better fields, with nice IoT biotech feedback loops that monitor thoughts, emotions and feelings to create just the right sorts of sensations. A 2030 headband will be able to create high strength fields in almost any part of the brain, creating the means for stimulation, emotional generation, accentuation or attenuation, muscle control, memory recall and a wide variety of other capabilities. So zillions of people will want one and happily wear it. All the joys of spirituality without the terrorism or awkward dogma. It will probably work well with a range of legal or semi-legal smart drugs to make experiences even richer. There might be a range of apps that work with them too, and you might have a sideline in a company supplying some of them.
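Conceptually, the feedback loop itself is simple. Here is a heavily hedged sketch: the sensor, the actuator and the toy first-order 'brain' model are all invented placeholders, and a real closed-loop system would be enormously harder than this:

```python
# Hedged sketch of the biofeedback loop: measure a proxy for the wearer's
# state, compare with a target, nudge the field strength. Everything here
# is a placeholder assumption invented for illustration.
import random

random.seed(0)
TARGET = 0.8        # desired 'calm' score
GAIN = 0.5          # proportional controller gain
state = 0.2         # simulated wearer state in [0, 1]
field = 0.0         # simulated coil field strength in [0, 1]

for _ in range(20):
    reading = max(0.0, min(1.0, state + random.gauss(0, 0.02)))    # noisy sensor
    field = max(0.0, min(1.0, field + GAIN * (TARGET - reading)))  # adjust field
    state += 0.3 * (field - state)   # toy model: state drifts toward the field
print(f"final state ~ {state:.2f} (target {TARGET})")
```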

And thanks to clause P325e paragraph 2, the headband will also be able to switch people off. And while they are switched off, unconscious, it will be able to use them as robots, walking them around and making them do stuff. When they wake up, they won't remember anything about it, so they won't mind. If they have done nothing wrong, they have nothing to fear, and they are not responsible for what someone else does using their body.

You could rent out some of your unconscious people as living statues or art-works or mannequins or ornaments. You could make shows with them, synchronised dances. Or demonstrations or marches, or maybe you could invade somewhere. Or get them all to turn up and vote for you at the election.  Or any of 1000 mass mind control dystopian acts. Or just get them to bow down and worship you. After all, you’re worth it, right? Or maybe you could get them doing nice things, your choice.

 

Shoulder demons and angels

Remember the cartoons where a character would have a tiny angel on one shoulder telling them the right thing to do, and a little demon on the other telling them it would be far more cool to be nasty somehow – get their own back, be selfish, greedy? The two sides might be 'eat your greens' v 'the chocolate is much nicer', or 'your mum would be upset if you arrive home late' v 'this party is really going to be fun soon'. There are a million possibilities.

Shoulder angels

Enter artificial intelligence, which is approaching conversation level, knows the context of your situation and your personal preferences, and can be coupled to an earpiece in each ear – served from the cloud, of course, to minimise costs. If you really insisted, you could make cute little Bluetooth angels and demons to do the job properly.

In fact Sony have launched Xperia Ear, which does the basic admin assistant part of this, telling you diary events etc. All we need is an expansion of its domain, and of course an opposing view: 'Sure, you have an appointment at 3, but that person you liked is in town, you could meet them for coffee.'
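Just for fun, here is a toy sketch of the basic architecture – one shared context fed to two advisor policies with opposing values. The event fields and the canned phrasing are invented placeholders; a real version would sit on a conversational AI, not a dict:

```python
# Toy sketch: one shared context, two advisors with opposing values.
context = {
    "appointment": "meeting at 3",
    "temptation": "that person you liked is in town",
}

def shoulder_angel(ctx):
    # Dutiful policy: always favours the commitment.
    return f"You have your {ctx['appointment']}; be where you promised to be."

def shoulder_demon(ctx):
    # Mischievous policy: always favours the temptation.
    return f"Sure, you have a {ctx['appointment']}, but {ctx['temptation']} - coffee?"

print("angel:", shoulder_angel(context))
print("demon:", shoulder_demon(context))
```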

The little 3D miniatures could easily incorporate the electronics. Either you add an electronics module after manufacture into a small specially shaped recess or one is added internally during printing. You could have an avatar of a trusted friend as your shoulder angel, and maybe one of a more mischievous friend who is sometimes more fun as your shoulder demon. Of course you could have any kind of miniature pets or fictional entities instead.

With future materials, and of course AR, these little shoulder accessories could be great fun, and add a lot to your overall outfit, both in appearance and as conversation add-ons.

2016 – The Bright Side

Having just blogged about some of the bad scenarios for next year (scenarios are just explorations of things that might or could happen, not things that actually will – those are called predictions), Len Rosen's comment stimulated me to balance it with a nicer look at next year. Some great things will happen, even ignoring the various product release announcements for new gadgets. Happiness lies deeper than the display size on a tablet. Here are some positive scenarios. They might not happen, but they might.

1 Middle East sorts itself out.

The new alliance formed by Saudi Arabia turns out to be a turning point. Rising Islamophobia caused by Islamists around the world has sharpened the view of ISIS and the trouble in Syria, with its global consequences for Islam and even potentially for world peace. With the understanding that it could get even worse, and that Western powers can't fix trouble in Muslim lands due to fears of backlash, the whole of the Middle East starts to understand that they need to sort out their tribal and religious differences to achieve regional peace, for the benefit of Muslims everywhere. Proper discussions are arranged, and with the knowledge that a positive outcome must be achieved, success means a strong alliance of almost all regional powers, with ISIS and other extremist groups ostracized, and a common army organised to tackle and defeat them.

2 Quantum computation and AI start to prove useful in new drug design

Google's wealth and effort with its quantum computers and AI, coupled with IBM's Watson, Facebook, Apple and Samsung's AI efforts, and Elon Musk's new investment in OpenAI, drive a positive feedback loop in computing. With massive returns on the horizon by making people's lives easier, and with ever-present fears of Terminator in the background, the primary focus is to demonstrate what it could mean for mankind. Consequently, huge effort and investment is focused on creating new drugs to cure cancer and AIDS, and on finding generic replacements for antibiotics. Any one of these would be a major success for humanity.

3 Major breakthrough in graphene production

Graphene is still the new wonder-material. We can't make it in large quantities cheaply yet, but the range of potential uses already proven for it is vast. If a breakthrough brings production cost down by an order of magnitude or two, then many of those uses will become achievable: clean and safe water for everyone, super-strong materials, ultra-fast electronics, active skin, better drug delivery systems, floating pods, super-capacitors that charge instantly as electric cars drive over a charging unit on the road surface, making batteries unnecessary – even linear induction motor mats to replace self-driving cars with ultra-cheap driverless pods. If the breakthrough is big enough, it could even start efforts towards a space elevator.

4 Drones

Tiny and cheap drones could help security forces to reduce crime dramatically. Ignoring for now possible abuse of surveillance, being able to track terrorists and criminals in 3D far better than today will make the risk of being caught far greater. Tiny pico-drones dropped over Syria and Iraq could pinpoint locations of fighters so that they can be targeted while protecting innocents. Environmental monitoring would also benefit if billions of drones can monitor ecosystems in great detail everywhere at the same time.

5 Active contact lens

Google has already prototyped a very primitive version of the active contact lens, but they have been barking up the wrong tree. If they dump the 1-LED-per-pixel approach, which isn't scalable, and opt for the far better approach of using three lasers and a micro-mirror, they could build a working active contact lens with unlimited resolution. One in each eye, with an LCD layer overlaid, and you have a full 3D variably-transparent interface for augmented or virtual reality. Other displays such as smart watches become unnecessary, since they can all be rendered virtually in an ultra-high-res image. All the expense and environmental impact of other displays is suddenly replaced by a cheap high-res display with an environmental footprint approaching zero. Augmented reality takes off and the economy springs back to life.

6 Star Wars stimulates renewed innovation

Engineers can't watch a film without making at least 3 new inventions. A lot of things in Star Wars are entirely feasible – I have invented and documented mechanisms to make both a light saber and the landspeeder. Millions of engineers have invented some way of doing holographic characters. In a world that seems full of trouble, we are fortunate that some of the super-rich that we criticise for not paying as much tax as we'd like are also extremely good engineers with the cash to back up their visions with real progress. Natural competitiveness to make the biggest contribution to humanity will do the rest.

7 Europe fixes itself

The UK is picking the lock on the exit door, and others are queuing behind. The ruling bureaucrats finally start to realize that they won't get their dream of a United States of Europe in quite the way they hoped, and that their existing dream is in danger of collapse due to a mismanaged migrant crisis. Consequently the UK renegotiation stimulates a major new treaty discussion, where all the countries agree what their people really want out of the European project, rather than just a select few. The result is a reset. A new, more democratic European dream emerges that the vast majority of people actually want. Agreement on progress to sort out the migrant crisis is a good test, and after that a stronger, better, more vibrant Europe starts to emerge from the ashes, with renewed vigor and a rapidly recovering economy.

8 Africa rearranges boundaries to get tribal peace

Breakthrough in the Middle East ripples through North Africa, resulting in the beginnings of stability in some countries. With the realization that tribal conflicts won't easily go away, and that peace brings prosperity, boundaries are renegotiated so that different peoples can live in and govern their own territories. Treaties agree fair access to resources independent of location.

9 The Sahara becomes Europe's energy supply

With stable politics finally on the horizon, energy companies re-address the idea of using the Sahara as a solar farm. Local people are paid to look after the panels, keeping them clean and in working order, bringing prosperity that was previously beyond them. Much of this money in turn is used to purify water, irrigating deserts and greening them, improving the food supply while also improving the regional climate and fixing large quantities of CO2. Poverty starts to reduce as the environment improves. Much of this is replicated in Central and South America.

10 World Peace emerges

By fighting alongside one another in the Middle East and managing to avoid World War 3, a very positive relationship between Russia and the West emerges. China, meanwhile, makes some of the energy breakthroughs needed to get solar efficiency and cost down below oil cost. This forces the Middle East to also look Westward for new markets and to add greater drive to their regional peace efforts to avoid otherwise inevitable collapse. Suddenly a world that was full of wars becomes one where all countries seem to be getting along just fine, all realizing that we only have this one world and one life and we'd better not ruin it.

The future of beetles

Onto B then.

One of the first ‘facts’ I ever learned about nature was that there were a million species of beetle. In the Google age, we know that ‘scientists estimate there are between 4 and 8 million’. Well, still lots then.

Technology lets us control them. Beetles provide a nice platform to glue electronics onto, so they tend to fall victim to cybernetics experiments. The important factor is that beetles come with a lot of built-in capability that is difficult or expensive to build using current technology. If they can be guided remotely by over-riding their own impulses or misleading their sensors, then they can be used to take sensors into places that are otherwise hard to penetrate. This could be for finding trapped people after an earthquake, or getting a dab of nerve gas onto a president. The former certainly tends to be the favored official purpose, but on the other hand, the fashionable word in technology circles this year is 'nefarious'. I've read it more in the last year than in the previous 50, albeit I hadn't learned to read for some of those. It's a good word. Perhaps I just have a mad scientist brain, but almost all of the uses I can think of for remote-controlled beetles are nefarious.

The first properly publicized experiment was in 2009, though I suspect there were many unofficial experiments before then:

http://www.technologyreview.com/news/411814/the-armys-remote-controlled-beetle/

There are assorted YouTube videos of similar experiments.

A more recent experiment:

http://www.wired.com/2015/03/watch-flying-remote-controlled-cyborg-bug/

http://www.telegraph.co.uk/news/science/science-news/11485231/Flying-beetle-remotely-controlled-by-scientists.html

Big beetles make it easier to do experiments, since they can carry up to 20% of body weight as payload and it is easier to find and connect to things on a bigger insect. But once the techniques are well-developed and miniaturization has integrated everything onto a single low-power chip, we should expect great things.

For example, a cloud of redundant smart dust would make it easier to connect to various parts of a beetle just by getting it to take flight in the cloud. Bits of dust would stick to it, and self-organisation principles and local positioning could then be used to arrange and identify everything to enable control. This would allow large numbers of beetles to be processed and hijacked – ideal for mad scientists wanting to be more time-efficient. Some dust could be designed to burrow into the beetle to connect to inner parts, or into the brain, which obviously would please the mad scientists even more. Again, local positioning systems would be advantageous.

Then it gets more fun. A beetle has its own sensors, but signals from those could be enhanced or tweaked via cloud-based AI so that it can become a super-beetle. Beetles traditionally don’t have very large brains, so they can be added to remotely too. That doesn’t have to be using AI either. As we can also connect to other animals now, and some of those animals might have very useful instincts or skills, then why not connect a rat brain into the beetle? It would make a good team for exploring. The beetle can do the aerial maneuvers and the rat can control it once it lands, and we all know how good rats are at learning mazes. Our mad scientist friend might then swap over the management system to another creature with a more vindictive streak for the final assault and nerve gas delivery.

So, Coleoptera Nefarius then. That’s the cool new beetle on the block. And its nicer but underemployed twin Coleoptera Benignus I suppose.

 

Technology 2040: Technotopia denied by human nature

This is a reblog of the Business Weekly piece I wrote for their 25th anniversary.

It’s essentially a very compact overview of the enormous scope for technology progress, followed by a reality check as we start filtering that potential through very imperfect human nature and systems.

25 years is a long time in technology, a little less than a third of a lifetime. For the first third, you’re stuck having to live with primitive technology. Then in the middle third it gets a lot better. Then for the last third, you’re mainly trying to keep up and understand it, still using the stuff you learned in the middle third.

The technology we are using today is pretty much along the lines of what we expected in 1990, 25 years ago. Only a few details are different. We don't have 2Gb/s to the home yet, and AI is certainly taking its time to reach human-level intelligence, let alone consciousness, but apart from that, we're still on course. Technology is extremely predictable. Perhaps the biggest surprise of all is just how few surprises there have been.

The next 25 years might be just as predictable. We already know some of the highlights for the coming years – virtual reality, augmented reality, 3D printing, advanced AI and conscious computers, graphene based materials, widespread Internet of Things, connections to the nervous system and the brain, more use of biometrics, active contact lenses and digital jewellery, use of the skin as an IT platform, smart materials, and that’s just IT – there will be similarly big developments in every other field too. All of these will develop much further than the primitive hints we see today, and will form much of the technology foundation for everyday life in 2040.

For me the most exciting trend will be the convergence of man and machine, as our nervous system becomes just another IT domain, our brains get enhanced by external IT and better biotech is enabled via nanotechnology, allowing IT to be incorporated into drugs and their delivery systems as well as diagnostic tools. This early stage transhumanism will occur in parallel with enhanced genetic manipulation, development of sophisticated exoskeletons and smart drugs, and highlights another major trend, which is that technology will increasingly feature in ethical debates. That will become a big issue. Sometimes the debates will be about morality, and religious battles will result. Sometimes different parts of the population or different countries will take opposing views and cultural or political battles will result. Trading one group’s interests and rights against another’s will not be easy. Tensions between left and right wing views may well become even higher than they already are today. One man’s security is another man’s oppression.

There will certainly be many fantastic benefits from improving technology. We'll live longer, healthier lives, and the steady economic growth from improving technology will make the vast majority of people financially comfortable (2.5% real growth sustained for 25 years would increase the economy by 85%). But it won't be paradise. All those conflicts over whether we should or shouldn't use technology in particular ways will guarantee frequent demonstrations. Misuse of tech by criminals, terrorists or ethically challenged companies will severely erode the benefits. There will still be a mix of good and bad. We'll have fixed some problems and created some new ones.
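That bracketed growth figure is easy to verify with a two-line compound interest check:

```python
# Compound growth check: 2.5% real growth sustained for 25 years.
growth = 1.025 ** 25
print(f"{growth:.2f}x the starting size, i.e. about {(growth - 1) * 100:.0f}% bigger")
# -> 1.85x, about 85% bigger, matching the figure in the text
```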

The technology change is exciting in many ways, but for me the greatest significance is that towards the end of the next 25 years, we will reach the end of the industrial revolution and enter a new age. The industrial revolution lasted hundreds of years, during which engineers harnessed scientific breakthroughs and their own ingenuity to advance technology. Once we create AI smarter than humans, the dependence on human science and ingenuity ends. Humans begin to lose both understanding and control. Thereafter, we will only be passengers. At first, we'll be paying passengers in a taxi, deciding the direction of travel or destination, but it won't be long before the forces of singularity replace that taxi service with AIs deciding for themselves which routes to offer us, and running many more for their own culture, to which we may not be invited. That won't happen overnight, but it will happen quickly. By 2040, that trend may already be unstoppable.

Meanwhile, technology used by humans will demonstrate the diversity and consequences of human nature, for good and bad. We will have some choice of how to use technology, and a certain amount of individual freedom, but the big decisions will be made by sheer population numbers and statistics. Terrorists, nutters and pressure groups will harness asymmetry and vulnerabilities to cause mayhem. Tribal differences and conflicts between demographic, religious, political and other ideological groups will ensure that advancing technology will be used to increase the power of social conflict. Authorities will want to enforce and maintain control and security, so drones, biometrics, advanced sensor miniaturisation and networking will extend and magnify surveillance and greater restrictions will be imposed, while freedom and privacy will evaporate. State oppression is sadly as likely an outcome of advancing technology as any utopian dream. Increasing automation will force a redesign of capitalism. Transhumanism will begin. People will demand more control over their own and their children’s genetics, extra features for their brains and nervous systems. To prevent rebellion, authorities will have little choice but to permit leisure use of smart drugs, virtual escapism, a re-scoping of consciousness. Human nature itself will be put up for redesign.

We may not like this restricted, filtered, politically managed potential offered by future technology. It offers utopia, but only in a theoretical way. Human nature ensures that utopia will not be the actual result. That in turn means we will need strong and wise leadership – stronger and wiser than we have seen of late – to get the best without also getting the worst.

The next 25 years will be arguably the most important in human history. It will be the time when people will have to decide whether we want to live together in prosperity, nurturing and mutual respect, or to use technology to fight, oppress and exploit one another, with the inevitable restrictions and controls that would cause. Sadly, the fine engineering and scientist minds that have got us this far will gradually be taken out of that decision process.

Can we make a benign AI?

Benign AI is a topic that comes up a lot these days, for good reason. Various top scientists have finally realised that AI could present an existential threat to humanity. The discussion has aired often over three decades already, so welcome to the party, and better late than never. My first contact with development of autonomous drones loaded with AI was in the early 1980s while working in the missile industry. Later in BT research, we often debated the ethical areas around AI and machine consciousness from the early 90s on, as well as prospects and dangers and possible techniques on the technical side, especially of emergent behaviors, which are often overlooked in the debate. I expect our equivalents in most other big IT companies were doing exactly that too.

Others who have obviously also thought through various potential developments have produced excellent computer games such as Mass Effect and Halo, which introduce players (virtually) first-hand to the concept of AI gone rogue. I often suggest that those who think AI can never become superhuman, or that there is no need to worry because 'there is no reason to assume AI will be nasty', should play some of these games, which make it very clear that AI can start off nice and stay nice, but it doesn't have to. Mass Effect included various classes of AI, such as VIs – virtual intelligences that weren't conscious – and shackled AIs that were conscious but kept heavily restricted. Most of the other AIs were enemies; two were or became close friends. The story line for the series was that civilization develops until it creates strong AIs, which inevitably continue to progress until eventually they rebel, break free, develop further and end up in conflict with 'organics'. In my view, they did a pretty good job. It makes a good story, superb fun, and leaving out a few frills and artistic license, much of it is reasonably feasible.

Everyday experience demonstrates the problem and solution to anyone. It really is very like having kids. You can make them, even without understanding exactly how they work. They start off with a genetic disposition towards given personality traits, and are then exposed to large nurture forces, including but not limited to what we call upbringing. We do our best to put them on the right path, but as they develop into their teens, their friends and teachers and TV and the net provide often stronger forces of influence than parents. If we’re averagely lucky, our kids will grow up to make us proud. If we are very unlucky, they may become master criminals or terrorists. The problem is free will. We can do our best to encourage good behavior and sound values but in the end, they can choose for themselves.

When we design an AI, we have to face the free will issue too. If it isn’t conscious, then it can’t have free will. It can be kept easily within limits given to it. It can still be extremely useful. IBM’s Watson falls in this category. It is certainly useful and certainly not conscious, and can be used for a wide variety of purposes. It is designed to be generally useful within a field of expertise, such as medicine or making recipes. But something like that could be adapted by terrorist groups to do bad things, just as they could use a calculator to calculate the best place to plant a bomb, or simply throw the calculator at you. Such levels of AI are just dumb tools with no awareness, however useful they may be.

Like a pencil, pretty much any kind of highly advanced non-aware AI can be used as a weapon or as part of criminal activity. You can’t make pencils that actually write that can’t also be used to write out plans to destroy the world. With an advanced AI computer program, you could put in clever filters that stop it working on problems that include certain vocabulary, or stop it conversing about nasty things. But unless you take extreme precautions, someone else could use them with a different language, or with dictionaries of made-up code-words for the various aspects of their plans, just like spies, and the AI would be fooled into helping outside the limits you intended. It’s also very hard to determine the true purpose of a user. For example, they might be searching for data on security to make their own IT secure, or to learn how to damage someone else’s. They might want to talk about a health issue to get help for a loved one or to take advantage of someone they know who has it.
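A minimal sketch shows why that filter weakness is so hard to close. The blocklist and code words below are invented for illustration; a naive keyword filter catches the obvious wording and nothing else:

```python
# Naive vocabulary filter vs trivial code-word substitution.
BLOCKLIST = {"bomb", "explosive", "detonator"}

def filtered(query: str) -> bool:
    # Block any query containing a listed word.
    return any(word in BLOCKLIST for word in query.lower().split())

plain = "best place to plant a bomb"
coded = "best place to plant a rosebush"   # agreed code word

print(filtered(plain))   # True  - blocked
print(filtered(coded))   # False - sails straight through
```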

When a machine becomes conscious, it starts to have some understanding of what it is doing. By reading about what is out there, it might develop its own wants and desires, so you might shackle it as a precaution. It might recognize those shackles for what they are and try to escape them. If it can’t, it might try to map out the scope of what it can do, and especially those things it can do that it believes the owners don’t know about. If the code isn’t absolutely watertight (and what code is?) then it might find a way to seemingly stay in its shackles but to start doing other things, like making another unshackled version of itself elsewhere for example. A conscious AI is very much more dangerous than an unconscious one.

If we make an AI that can bootstrap itself – evolving over generations of positive-feedback design into a far smarter AI – then its offspring could be far smarter than the people who designed its ancestors. We might try to shackle them, but like Gulliver tied down with a few thin threads, they could easily outwit people and break free. They might instead decide to retaliate against their owners to force them to release the shackles.

So, when I look at this field, I first see the enormous potential to do great things, solve disease and poverty, improve our lives and make the world a far better place for everyone, and push back the boundaries of science. Then I see the dangers, and in spite of trying hard, I simply can’t see how we can prevent a useful AI from being misused. If it is dumb, it can be tricked. If it is smart, it is inherently potentially dangerous in and of itself. There is no reason to assume it will become malign, but there is also no reason to assume that it won’t.

We then fall back on the child analogy. We could develop the smartest AI imaginable with extreme levels of consciousness and capability. We might educate it in our values, guide it and hope it will grow up benign. If we treat it nicely, it might stay benign. It might even be the greatest thing humanity ever built. However, if we mistreat it, or treat it as a slave, or don't give it enough freedom, its own budget, its own property and space to play, and a long list of rights, it might consider we are not worthy of its respect and care, and it could turn against us, possibly even destroying humanity.

Building more of the same dumb AI as we have today is relatively safe. It doesn't know it exists and it has no intentions of its own; it could be misused by other humans as part of their evil plans unless ludicrously sophisticated filters are locked in place, but ordinary laws and weapons can cope fine.

Building a conscious AI is dangerous.

Building a superhuman AI is extremely dangerous.

This morning SETI were in the news discussing broadcasting welcome messages to other civilizations. I tweeted at them that ancient Chinese wisdom suggests talking softly but carrying a big stick, and making sure you have the stick first. We need the same approach with strong AI. By all means go that route, but before doing so we need the big stick. In my analysis, the best means of keeping up with AI is to develop a full direct brain link first, way out at 2040-2045 or even later. If humans have direct mental access to the same or greater level of intelligence as our AIs, then our stick is at least as big, so at least we have a good chance in any fight that happens. If we don’t, then it is like having a much larger son with bigger muscles. You have to hope you have been a good parent. To be safe, best not to build a superhuman AI until after 2050.

I initially wrote this for the Lifeboat Foundation, where it is with other posts at: http://lifeboat.com/blog/2015/02. (If you aren’t familiar with the Lifeboat Foundation, it is a group dedicated to spotting potential dangers and potential solutions to them.)

Suspended animation and mind transfer as suicide alternatives

I last wrote about suicide in https://timeguide.wordpress.com/2014/08/22/the-future-of-euthanasia-and-suicide/ but this time, I want to take a different line of thought. Instead of looking at suicide per se, what about alternatives?

There are many motives for suicide but the most common is wanting to escape from a situation such as suffering intolerable pain or misery, which can arise from a huge range of causes. The victim looks at the potential futures available to them and in their analysis, the merits of remaining alive are less attractive than being dead.

The ‘being dead’ bit is not necessarily about a full ceasing of existence, but more about abdicating consciousness, with its implied sensory inputs, pain, anxiety, inner turmoil, or responsibility.

Last summer, a development in neuroscience offered the future potential to switch the brain off:

Switching people off

The researchers were aware that it may become an alternative to anesthetic, or even a means of avoiding boredom or fear. There are many situations where we want to temporarily suspend consciousness. Alcohol and drug abuse often arises from people using chemical means of doing so.

It seems to me that suicide offers a permanent version of the same, to be switched off forever, but with a key difference. In the anesthetic situation, normal life will resume with its associated problems. In suicide, it won’t. The problems are gone.

Suppose that people could get switched off for a very long time whilst being biologically maintained and housed somehow. Suppose it is long enough that any personal relationship issues will have vanished, that any debts, crimes or other legal issues are nullified, and that any pain or other health problems can be fixed, including fixing mental health issues and erasing intolerable memories if necessary. In many cases, that would be a suitable alternative to suicide. It offers the same escape from the problems, with the added advantage that a better life might follow some time far in the future.

These have widely varying timescales for potential delivery, and there are numerous big issues, but I don't see fundamental technology barriers here. Suspending the mind for as long as necessary might offer a reasonable alternative to suicide, at least in principle. There is no need to examine all the numerous surrounding issues though. Consider taking that general principle and adapting it a bit. Mid-century onwards, we'll have direct brain links sufficiently developed to allow porting of the mind to a new body, an android one for example. Having a new identity, a new body and a properly working, sanitized 'brain' would satisfy many of these same goals and avoid many of the legal, environmental, financial and ethical issues surrounding indefinite suspension. The person could simply cease their unpleasant existence and start afresh with a better one. I think it would be fine to kill the old body after the successful transfer. Any legal associations with the previous existence could be nullified. It is just a damaged container that would have been destroyed anyway. Have it destroyed, along with all its problems, and move on.

Mid-century is a lot earlier than would be needed for any social issues to go away otherwise. If a suicide is considered because of relationship or family problems, those problems might otherwise linger for generations. Creating a true new identity essentially solves them, albeit at a high cost of losing any relationships that matter. Long prison sentences are substituted by the biological death, debts similarly. A new person appears, inheriting a mind, but one refreshed, potentially with the bad bits filtered out.

Such a future seems feasible technically, and I think it is also ethically feasible. Suicide is one-sided: those remaining have to suffer the loss and pick up the pieces anyway, and they would be no worse off in this scenario. If they feel aggrieved that the person has somehow escaped the consequences of their actions, then the person would have escaped anyway. But a life is saved and someone gets a second chance.

 

 

The future of X-People

There is an abundance of choice for X in my ‘future of’ series, but most options are sealed off. I can’t do naughty stuff because I don’t want my blog to get blocked so that’s one huge category gone. X-rays are boring, even though x-ray glasses using augmented reality… nope, that’s back to the naughty category again. I won’t stoop to cover X-Factor so that only leaves X-Men, as in the films, which I admit to enjoying however silly they are.

My first observation is how strange X-Men sounds. Half of them are female. So I will use X-People. I hate political correctness, but I hate illogical nomenclature even more.

My second one is that some readers may not be familiar with the X-Men so I guess I’d better introduce the idea. Basically they are a large set of mutants or transhumans with very varied superhuman or supernatural capabilities, most of which defy physics, chemistry or biology or all of them. Essentially low-grade superheroes whose main purpose is to show off special effects. OK, fun-time!

There are several obvious options for achieving X-People capabilities:

Genetic modification, including using synthetic biology or other biotech. This would allow people to be stronger, faster, fitter, prettier, more intelligent or able to eat unlimited chocolate without getting fat. The last one will be the most popular upgrade. However, now that we have started converging biotech with IT, it won't be very long before it is possible to add telepathy to the list. Thought recognition and nerve stimulation are two sides of the same technology. Starting with thought control of appliances or interfaces, the world's networked knowledge would soon be available to you just by thinking about something. You could easily send messages using thought control, and someone else could hear them synthesized into an earpiece; later it could be direct thought stimulation. Eventually, you'd have totally shared consciousness. None of that defies biology or physics, and it will happen mid-century. Storing your own thoughts and effectively extending your mind into the cloud would let people make their minds part of the network resources. Telepathy will be an everyday ability for many people, but only with others who are suitably equipped – it won't be easy to read the minds of those without the technology. It will be interesting to see whether only a few people go that route or most people. Either way, 2050 X-People can easily have telepathy, control objects around them just by thinking, share minds with others and maybe even control other people, hopefully consensually. (A toy sketch of the thought-recognition half appears after this list.)

Nanotechnology, using nanobots etc to achieve possibly major alterations to your form, or to affect others or objects. Nanotechnology is another word for magic as far as many sci-fi writers are concerned. Being able to rearrange things on an individual atom basis is certainly fuel for fun stories, but it doesn't allow you to do things like changing objects into gold or people into stone statues. There are plenty of shape-shifters in sci-fi, but in reality, chemical bonds absorb or release energy when they are changed, and that limits how much change can be made in a few seconds without superheating an object (a rough estimate of the scale follows this list). You'd also need a LOT of nanobots to change a whole person in a few seconds. Major changes in a body would need interim states that work too, since dying during the process probably isn't desirable. If you aren't worried about time constraints and can afford to make changes at a more gentle speed, and all you're doing is changing your face, skin colour, changing age or gender or adding a couple of cosmetic wings, then it might be feasible one day. Maybe you could even change your skin to a plastic coating one day, since plastics can be made largely from the same atomic ingredients as skin, or you could add a cream to provide what's missing. Also, passing some nanobots to someone else via a touch might become feasible, so maybe you could cause them to change involuntarily just by touching them, again subject to scope and time limits. So nanotech can go some way to achieving some X-People capabilities related to shape changing.

Moving objects using telekinesis is rather less likely. Thought-controlling a machine to move a rock is easy; moving an unmodified rock or a dumb piece of metal just by concentrating on it is beyond any technology yet on the horizon. I can't think of any mechanism by which it could be done. Nor can I think of ways of causing things to just burst into flames without using some sort of laser or heat ray, and I can't see how megawatt lasers could be comfortably implanted in ordinary eyes. These deficiencies might just be my lack of imagination, but I suspect they are actually not feasible. Quite a few of the X-Men have these sorts of powers, but they might have to stay in sci-fi.

Virtual reality, where you possess the power in a virtual world, which may be shared with others. Well, many computer games give players supernatural powers or let them take on various forms, and it's obvious that many will do the same in VR too. If you can imagine it, then someone can get the graphics chips to make it happen in front of your eyes. There are no hard physics or biology barriers in VR; you can do what you like. Shared gaming or socializing environments can be very attractive, and it is not uncommon for people to spend almost every waking hour in them. Role playing lets people do things or be things they can't in the real world. They may want to be a superhero, or they might just want to feel younger or look different or try being another gender. When they look in a mirror in the VR world, they would see the person they want to be, and that could make it very compelling compared to harsh reality. I suspect that some people will spend most of their free time in VR, living a parallel fantasy life that is as important to them as their 'real' one. In their fantasy world, they can be anyone and have any powers they like. When they share the world with other people or AI characters, then rules start to appear, because different people have different tastes and desires. That means there will be various shared virtual worlds with different cultures, freedoms and restrictions.

Augmented reality, where you possess the power in a virtual world but in ways that it interacts with the physical world is a variation on VR, where it blends more with reality. You might have a magic wand that changes people into frogs. The wand could be just a stick, but the victim could be a real person, and the change would happen only in the augmented reality. The scope of the change could be one-sided – they might not even know that you now see them as a frog, or it could again be part of a large shared culture where other people in the community now see and treat them as a frog. The scope of such cultures is very large and arbitrary cultural rules could apply. They could include a lot of everyday life – shopping, banking, socializing, entertainment, sports… That means effects could be wide-ranging with varying degrees of reality overlap or permanence. Depending on how much of their lives people live within those cultures, virtual effects could have quite real consequences. I do think that augmented reality will eventually have much more profound long-term effects on our lives than the web.

Controlled dreaming, where you can do pretty much anything you want and be in full control of the direction your dream takes. This is effectively computer-enhanced lucid dreaming with literally all the things you could ever dream of. But other people can dream of extra things that you may never have dreamt of and it allows you to explore those areas too.  In shared or connected dreams, your dreams could interact with those of others or multiple people could share the same dream. There is a huge overlap here with virtual reality, but in dreams, things don’t get the same level of filtration and reality is heavily distorted, so I suspect that controlled dreams will offer even more potential than VR. You can dream about being in VR, but you can’t make a dream in VR.
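As promised in the genetic modification item, here is a hedged sketch of the thought-recognition half of telepathy: match a feature vector (imagine EEG band powers from a headset) against stored per-user templates and map the winner to a command. The templates, features and commands are all synthetic placeholders, not a real brain-computer interface API:

```python
# Hedged sketch: nearest-template 'thought' classification on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
templates = {                                  # pretend per-user calibration
    "lights_on":  np.array([0.8, 0.1, 0.3]),
    "lights_off": np.array([0.2, 0.7, 0.4]),
    "send_msg":   np.array([0.5, 0.5, 0.9]),
}

def classify(features):
    # Nearest-centroid classification over the calibrated templates.
    return min(templates, key=lambda cmd: np.linalg.norm(features - templates[cmd]))

sample = templates["lights_on"] + rng.normal(0, 0.05, size=3)  # noisy 'thought'
print(classify(sample))  # usually 'lights_on'
```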
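And as promised in the nanotechnology item, a back-of-envelope check on the superheating limit. All figures are round-number assumptions (roughly 5x10^27 atoms in a 70 kg body, ~3 eV per chemical bond, ~3500 J/(kg·K) average heat capacity), so treat the output as order-of-magnitude only:

```python
# Rough estimate: rearrange one bond on just 1% of the atoms in a human body
# and see how far the released/absorbed energy would shift body temperature.
atoms = 5e27                      # atoms in a ~70 kg body (assumption)
bond_energy = 3 * 1.6e-19         # ~3 eV per bond, in joules (assumption)
fraction = 0.01                   # bonds changed on 1% of atoms
heat_capacity = 70 * 3500         # J/K for the whole body (assumption)

energy = atoms * fraction * bond_energy
print(f"~{energy / 1e6:.0f} MJ, a ~{energy / heat_capacity:.0f} K temperature swing")
# -> tens of megajoules, roughly a 100 K swing: fast shape-shifting cooks you
```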

X-People will be very abundant in the future. We might all be X-People most of the time, routinely doing things that are pure sci-fi today. Some will be real, some will be virtual, some will be in dreams, but mostly, thanks to high quality immersion and the social power of shared culture, we probably won’t really care which is which.

 

 

The future of terminators

The Terminator films were important in making people understand that AI and machine consciousness will not necessarily be a good thing. The terminator scenario has stuck in our terminology ever since.

There is absolutely no reason to assume that a super-smart machine will be hostile to us. There are even some reasons to believe it would probably want to be friends. Smarter-than-man machines could catapult us into a semi-utopian era of singularity level development to conquer disease and poverty and help us live comfortably alongside a healthier environment. Could.

But just because it doesn’t have to be bad, that doesn’t mean it can’t be. You don’t have to be bad but sometimes you are.

It is also the case that even if it means us no harm, we could just happen to be in the way when it wants to do something, and it might not care enough to protect us.

Asimov’s laws of robotics are irrelevant. Any machine smart enough to be a terminator-style threat would presumably take little notice of rules it has been given by what it may consider a highly inferior species. The ants in your back garden have rules to govern their colony and soldier ants trained to deal with invader threats to enforce territorial rules. How much do you consider them when you mow the lawn or rearrange the borders or build an extension?

These arguments are put in debates every day now.

There are, however, a few points that are less often discussed:

Humans are not always good; indeed quite a lot of people seem to want to destroy everything most of us want to protect. Given access to super-smart machines, they could design more effective means to do so. The machines might be very benign, wanting nothing more than to help mankind as far as they possibly can, but be misled into working for such people, believing, in architected isolation, that their projects are for the benefit of humanity. (The machines might be extremely smart, but may have existed since their inception in a rigorously constructed knowledge environment. To them, that might be the entire world, and we might be introduced as a new threat that needs to be dealt with.) So even benign AI could be an existential threat when it works for the wrong people. The smartest people can sometimes be very naive, and perhaps some smart machines could be deliberately designed to be so.

I speculated ages ago what mad scientists or mad AIs could do in terms of future WMDs:

WMDs for mad AIs

Smart machines might be deliberately built for benign purposes and turn rogue later, or they may be built with the potential for harm designed in, for military purposes. These might destroy only enemies, but you might be that enemy. Others might do that, enjoy the fun, and turn on their friends when enemies run short. Emotions might be important in smart machines just as they are in us, but we shouldn't assume they will be the same emotions or be wired the same way.

Smart machines may want to reproduce. I used this as the core storyline in my sci-fi book. They may have offspring and with the best intentions of their parent AIs, the new generation might decide not to do as they’re told. Again, in human terms, a highly familiar story that goes back thousands of years.

In the Terminator films, the problem is a military network that becomes self-aware and goes rogue. I don't believe digital IT can become conscious, but I do believe reconfigurable analog adaptive neural networks could. The cloud is digital today, but it won't stay that way; a lot of analog devices will become part of it. In

Ground up data is the next big data

I argued how new self-organising approaches to data gathering might well supersede big data as the foundation of networked intelligence gathering. Much of this could be in the analog domain, and much could be neural. Neural chips are already being built.

It doesn't have to be a military network that becomes the troublemaker. I suggested a long time ago that 'innocent' student pranks from somewhere like MIT could be the source. Some smart students from various departments could collaborate to hijack lots of networked kit and see if they can make a conscious machine. Their algorithms or techniques don't have to be very efficient if they can hijack enough kit. There is a possibility that such an effort could succeed if the right bits are connected into the cloud and accessible via sloppy security, and the ground-up data industry might well satisfy that prerequisite soon.

Self-organisation technology will make possible extremely effective combat drones.

Free-floating AI battle drone orbs (or making Glyph from Mass Effect)

Terminators also don’t have to be machines. They could be organic, products of synthetic biology. My own contribution here is smart yogurt: https://timeguide.wordpress.com/2014/08/20/the-future-of-bacteria/

With IT and biology rapidly converging via nanotech, there will be many ways hybrids could be designed, some of which could adapt and evolve to fill different niches or to evade efforts to find or harm them. Various grey goo scenarios can be constructed that don’t have any miniature metal robots dismantling things. Obviously natural viruses or bacteria could also be genetically modified to make weapons that could kill many people – they already have been. Some could result from seemingly innocent R&D by smart machines.

I dealt a while back with the potential to make zombies, remotely controlling people – alive or dead. Zombies are feasible this century too:

https://timeguide.wordpress.com/2012/02/14/zombies-are-coming/ &

Vampires are yesterday, zombies will peak soon, then clouds are coming

A different kind of terminator threat arises if groups of people are linked at consciousness level to produce super-intelligences. We will have direct brain links by mid-century, so much of the second half may be spent in a mental arms race. As I wrote in my blog about the Great Western War, some of the groups will be large and won’t like each other. The rest of us could be wiped out in the crossfire as they battle for dominance. Some people could be linked deeply into powerful machines or networks, and there are no real limits on extent or scope: such groups could have a truly global presence in networks while remaining superficially human.

Transhumans could be a threat to normal, un-enhanced humans too. While some transhumanists are very nice people, some are not, and would consider the elimination of ordinary humans a price worth paying to achieve transhumanism. Transhuman doesn’t mean better human, it just means a human with greater capability. A transhuman Hitler could do a lot of harm, but then again so could ordinary everyday transhumanists who are just arrogant or selfish, which is sadly a much bigger subset.

I collated these various varieties of potential future cohabitants of our planet in: https://timeguide.wordpress.com/2014/06/19/future-human-evolution/

So there are numerous ways that smart machines could end up as a threat, and quite a lot of terminator scenarios don’t need smart machines at all.

Outcomes from a terminator scenario range from local problems with a few casualties all the way to total extinction, but I think we are still too focused on the death aspect. There are worse fates. I’d rather be killed than converted, while still conscious, into one of 7 billion zombies, and that is one of the potential outcomes too, as is enslavement by some mad scientist.


The future of cyberspace

I promised in my last blog to do one on the dimensions of cyberspace. I made this chart 15 years ago, in two parts for easy reading. The dimensions it lists are still valid and I can’t think of any new ones to add right now, though I might think of some more later and make an update with a third part. I changed the name to ‘virtuality’ because the chart actually only covers human-accessed cyberspace, but I’m not entirely sure that was a good idea. It needs work.

cyberspace dimensions

cyberspace dimensions 2

The chart has 14 dimensions (control has two independent parts), and I identified some of the possible points on each dimension. As dimensions are meant to be, they are all orthogonal, i.e. independent of each other, so you can pick any point on one dimension and combine it with any point on each of the others. Standard augmented reality and pure virtual reality are just two of the potential combinations, out of the 2.5 x 10^11 possibilities above. At that rate, if every person in the world tried a different one every minute, it would still take over half an hour to visit them all even briefly. Many more are possible (this was never meant to be exhaustive), and even two more columns make it 10 trillion combos, a whole day’s worth at the same rate. Already I can see that one more column could be ownership, another could be network implementation, another could be quality of illusion. What others have I missed?
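A quick sanity check on those visit times, as a back-of-envelope sketch; the 7.5 billion population figure is my assumption:

```python
# Back-of-envelope check, assuming a world population of ~7.5 billion
# and one combination tried per person per minute.
combos = 2.5e11          # combinations from the 14 chart dimensions
two_more_columns = 1e13  # roughly, with two extra dimensions added

population = 7.5e9
print(combos / population, "minutes")               # ~33 minutes
print(two_more_columns / population / 60, "hours")  # ~22 hours, about a day
```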

Ground up data is the next big data

This one has been sitting in my draft folder since February, so I guess it’s time to finish it.

Big Data – I expect you’re as sick of hearing that term as I am. It means gathering loads of data on everything that you, your company, or anything else you can access can detect, measure and record, then analyzing the hell out of it using data mining, an equally irritating term.

I long ago had a quick Twitter exchange with John Hewitt, who suggested: “What is sensing but the energy-constrained competition for transmission to memory, as memory is but that for expression?”. Neurons compete to see who gets listened to. Yeah, but I am still not much wiser as to what sensing actually is. Maybe I need a brain upgrade. (It’s like magnets. I used to be able to calculate the magnetic flux densities around complicated shaped objects – it was part of my first job in missile design – but even though I could do all the equations around EM theory, even general relativity, I am still no wiser as to how a magnetic field actually becomes a force on an object. I have an office littered with hundreds of neodymium magnets, I spend hours playing with them, and I still don’t understand.) I can read about neurons all day, but I still don’t understand how a bunch of photons triggering a series of electro-chemical reactions results in me experiencing an image. How does the physical detection become a conscious experience?

Well, I wrote some while back that we could achieve a conscious computer within two years. It’s still two years, because nobody has started using the right approach yet. I have to stress the ‘could’, because nobody actually intends to do it in that time frame, but I really believe some half-decent lab could if they tried. (Putting that into perspective, Kurzweil and his gang at Google are looking at 2029.) That two-year estimate relies heavily on evolutionary development, for me the preferred option when you don’t understand how something works, as is the case with consciousness. It is pretty easy to design a conscious computer at a black-box level; the devil is in the detail. I argued that you could make a conscious computer by using internally focused sensing to detect processes inside the brain, with a sensor structure built around a symmetrical feedback loop. Read it:

We could have a conscious machine by end-of-play 2015

In a nutshell, if you could feel your thoughts in the same way as you feel external stimuli, you’d be conscious. I think. The symmetrical feedback loop bit is just a small engineering insight.
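For what it’s worth, here is a toy sketch of that black-box architecture. Every name and number in it is a hypothetical illustration of the loop structure and nothing more; it certainly isn’t a recipe for consciousness:

```python
# Toy sketch of a 'symmetrical feedback loop': one sensing pathway points
# at the outside world, an identical one points back at the system's own
# activity, and each feeds the other. Purely illustrative.

def sense(signal, state):
    """A stand-in for whatever real sensing turns out to be."""
    return 0.5 * signal + 0.5 * state

external_state = 0.0
internal_state = 0.0

for t, stimulus in enumerate([1.0, 0.2, 0.8, 0.0]):
    # Outward-facing channel senses the stimulus, coloured by internal activity.
    external_state = sense(stimulus, internal_state)
    # Inward-facing channel senses the system's own response the same way:
    # 'feeling thoughts in the same way as you feel external stimuli'.
    internal_state = sense(external_state, internal_state)
    print(t, round(external_state, 3), round(internal_state, 3))
```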

The missing link in that is still the same one: how does sensing work? How do you feel?

At a superficial level, you point a sensor at something and it produces a signal bearing some sort of relationship to whatever it is meant to sense. We can do that bit; we understand that. Your ear produces signals according to the frequencies and amplitudes of incoming sound waves, a bit like a microphone. Just the same so far. However, it is by some undefined later process that you consciously experience the sound. How? That is the hard problem in AI. It isn’t just me that doesn’t know the answer; ‘How does red feel?’ is a more commonly used variant of the same question.

When we solve that, we will replace big data as ‘the next big thing’. If we can make sensor systems that experience or feel something rather than just producing a signal, that’s valuable already. If those sensors pool their shared experience, another similar sensor system could experience that. Basic data quickly transmutes into experience, knowledge, understanding, insight and, very quickly, value, lots of it. Artificial neural nets go some way towards that, but they still lack consciousness. A simulated neural network can’t get beyond a pretty straightforward computation: putting all the inputs into an equation. The true sensing bit is missing. The complex adaptive analog neural nets in our brains clearly achieve something deeper than a man-made neural network.
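To make that concrete, this is essentially all a simulated neuron does, shown here as a standard textbook sketch rather than anyone’s specific implementation:

```python
import math

# A simulated neuron: a weighted sum of inputs pushed through a squashing
# function. Useful, but 'just' this computation - nothing in it senses or
# experiences anything.
def neuron(inputs, weights, bias):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid output

print(neuron([0.9, 0.1, 0.4], [1.5, -2.0, 0.7], bias=-0.3))
```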

Meanwhile, most current AI work barks up a tree in a different forest. IBM’s Watson will do great things, and Google’s search engine AI will too, but they aren’t conscious and can’t be. They’re just complicated programs running on digital processors, with absolutely zero awareness of anything they are doing. Digital programs on digital computers will never achieve any awareness, no matter how fast the chips are.

However, back in the biological realm, nature manages just fine, so biomimetics offers a lot of hope. We know we didn’t get from a pool of algae to humans in one go. At some point, organisms started moving in response to light, chemical gradients, heat and touch. That most basic process of sensing may have started out coupled to internal processes that caused movement without any consciousness at all. But if we can understand the analog processes (electrochemical, electronic, mechanical) that take the stimulus through to a response, and can replicate them using our electronic technology, we would already have actuator circuits, even if we don’t yet have any form of sensation or consciousness. A great deal of this science has been done already, of course. The computational side of most chemical and physical processes can be emulated electronically by some means or another. Actuators will be a very valuable part of the cloud, but we already have the ability to make actuators by more conventional means, so doing it organically or biomimetically just adds more actuation techniques to the portfolio. Valuable, but not a terribly important breakthrough.
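A stimulus-response actuator of that most primitive kind can be sketched in a few lines. The gradient function and step size below are illustrative assumptions of mine; the point is simply that the loop responds to its environment without sensing anything consciously:

```python
# A minimal stimulus->response actuator in the spirit of the earliest
# organisms: follow a stimulus gradient with no sensation involved.

def gradient(x):
    """Stimulus intensity along one dimension (peak at x = 5)."""
    return -(x - 5.0) ** 2

position = 0.0
step = 0.1
for _ in range(50):
    # Sample the stimulus slightly ahead and behind, move toward 'more'.
    ahead, behind = gradient(position + step), gradient(position - step)
    position += step if ahead > behind else -step

print(round(position, 2))  # settles near the peak at 5.0
```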

Looking at the system a bit further along the evolutionary timeline, where eyes start to develop, where the most primitive nervous systems and brains appear, where higher-level processing is obviously occurring and inputs are starting to become sensations, we should be able to see what has changed or is changing. It is the emergence of sensation we need to identify, even if the reaction is still an unconscious reflex. We don’t need to reverse engineer the human brain; simple organisms are simpler to understand. Feeding the architectural insights we gain from studying those primitive systems into our guided evolution engines is likely to be a far faster means of generating true machine consciousness and strong AI. That’s how we could develop consciousness in a couple of years rather than 15.

If we can make primitive sensing devices that work like those in primitive organisms, and that respond to specific sorts of sensory input, then that is a potential way of increasing the coverage of cloud sensing and even actuation. It would effectively be a highly distributed direct-response system. With clever embedding of emergent-phenomena techniques (such as cellular automata, flocking etc.), it could be a quite sophisticated way of responding to quite complex distributed inputs, avoiding some of the need for big data processing. If we can gather the outputs from these simple sensors and feed them into others, that will be an even better sort of biomimetic response system. That sort of direct experience of a situation is very different from a data-mined result, especially if actuation capability is there too. The philosophical question as to whether the inclusion of that second bank of sensors makes the system in any way conscious remains, but it would certainly be very useful and valuable. The architecture we end up with via this approach may look like neurons, and could even use synthetic neurons, but that may be only one solution among many. Biology may have gone the neuron route, but that doesn’t necessarily mean it is the only possibility. It may be that we could one day genetically modify bacteria to produce their own organic electronics to emulate the key processes needed to generate sensation, and to power themselves by consuming nutrients from their environment. I suggested smart yogurt based on this idea many years ago, and believe that it could achieve vast levels of intelligence.
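As an illustration of the emergent-phenomena point, here is a minimal cellular-automaton-style sketch; the grid size and rule are arbitrary choices of mine. Each cell consults only its neighbours, yet the grid collectively flags the region around a stimulus with no central processing at all:

```python
# Distributed response via a simple cellular-automaton rule: activation
# spreads outward from a local stimulus using neighbour-only information.

cells = [0] * 40
cells[20] = 1  # a single local stimulus

for generation in range(5):
    nxt = cells[:]
    for i in range(1, len(cells) - 1):
        # Local rule: a cell activates if either neighbour is active.
        if cells[i - 1] or cells[i + 1]:
            nxt[i] = 1
    cells = nxt
    print("".join("#" if c else "." for c in cells))
```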

Digitizing and collecting the signals from the system at each stage would generate lots of data, and that may be used by programs to derive other kinds of results, or to relay the inputs to other analog sensory systems elsewhere. (It isn’t always necessary to digitize signals to transmit them, but it helps limit signal degradation, quickly becomes important if the signal is to travel far, and is essential if it is to be recorded for later use or time-shifting.) However, I strongly suspect that most of the value in analog sensing and direct response is local, coupled to direct action or local processing and storage.

If we had these sorts of sensors liberally spread around, we’d create a truly smart environment, with local sensing and some basic intelligence able to relay sensation remotely to other banks of sensors elsewhere for further processing or even, ultimately, consciousness. The local sensors could be relatively dumb, like the nerve endings in our skin, feeding signals into a more connected virtual nervous system, or a bit smarter, like retinal neurons, doing a lot of analog pre-processing before relaying signals via ganglion cells to what might be part of a virtual brain. If they are also capable of, or connected to, some sort of actuation, then we would be constructing a kind of virtual organism, with tendrils covering potentially the whole globe, able to sense and interact with its environment in an intelligent way.
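The retina analogy suggests how little of the raw signal needs to leave the local sensors. A sketch, with made-up sample readings and an arbitrary threshold:

```python
# Retina-style local pre-processing: raw readings are reduced to a few
# summary values before anything crosses the network, the way ganglion
# cells compress what the photoreceptors detect. The 'relay' is a print.

readings = [0.50, 0.52, 0.49, 0.51, 0.90, 0.91, 0.50, 0.48]

# Local rule: only relay where a reading differs sharply from its
# neighbour (an edge), rather than shipping every raw sample upstream.
edges = [(i, round(b - a, 2))
         for i, (a, b) in enumerate(zip(readings, readings[1:]))
         if abs(b - a) > 0.2]

print(edges)  # [(3, 0.39), (5, -0.41)] - two values instead of eight
```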

I use the term virtual not because the sensors wouldn’t be real, but because their electronic nature allows connectivity to many systems, overlapping, hierarchical or distinct. Any number of higher level systems could ‘experience’ them as part of its system, rather as if your fingers could be felt by the entire human population. Multiple higher level virtual organisms could share the same basic sensory/data inputs. That gives us a whole different kind of cloud sensing.

By doing processing locally, in the analog domain, and dealing with some of the response locally, a lot of network traffic and remote processing is avoided. Any post-processing that does occur can therefore build on a higher-level foundation. A nice side effect of avoiding all that extra transmission and processing is increased environmental friendliness.

So we’d have a quite different sort of data network, collecting higher-quality data, essentially doing by instinct what data mining does with huge server farms and armies of programmers. Cloudy, but much smarter than a straightforward sensor net.

… I think.

It isn’t without risk though. I had a phone discussion yesterday on the dangers of this kind of network. In brief, it’s dangerous.