
Towards the Vulcan Mind Meld – Interfacing Biology and Technology for Experiential Telepathy

The science fiction concept of the Vulcan mind meld has long captured the imagination of viewers – the ability to directly share one’s subjective experiences, memories and emotional states in a profound telepathic joining of consciousness with another being. While such psychic links remain in the realm of fantasy for now, recent developments at the convergence of neuroscience, biosensing, brain stimulation and artificial intelligence are charting an ambitious path to realize elements of this mind-melding capability through an intricate fusion of biological and technological interfaces.

Neural Encoding of Subjective Experiences
The first key enabler is the ability to decode the neural correlates of human subjective experiences from brain activity patterns. By implanting high-density electrode arrays or ultrafine neural lace meshes into strategically targeted brain regions, it is possible to sample and digitize the spatiotemporal neural firing patterns underlying specific cognitive processes with high fidelity. This could include:

  • Visual and auditory perceptual processes encoded in sensory cortices
  • Patterns of memory recall and reinstatement encoded in the hippocampus and associated circuits
  • Encoding of emotional qualia and valence in limbic and frontal regions
  • Motor intent and action planning represented in premotor and parietal areas

Leveraging machine learning techniques like deep neural networks trained on massive multi-modal brain data, computational models can effectively learn the neural code underlying these cognitive modalities. This allows real-time decoding of the precise sights, sounds, emotions and memories being experienced by the subject at any given point in time.
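
To make this concrete, here is a minimal sketch of what such a decoder could look like in code. Everything in it – the electrode count, time bins, state labels and network shape – is an invented illustration for this post, not a description of any real brain-computer interface:

    # Toy decoder: simulated multi-electrode firing patterns -> experience labels.
    # All sizes and labels below are assumptions made up for this sketch.
    import torch
    import torch.nn as nn

    N_ELECTRODES = 256   # assumed channels in the implanted array
    N_TIMEBINS = 50      # assumed spike-count bins per recording window
    STATES = ["visual", "auditory", "memory_recall", "emotion"]  # toy labels

    decoder = nn.Sequential(
        nn.Flatten(),
        nn.Linear(N_ELECTRODES * N_TIMEBINS, 512),
        nn.ReLU(),
        nn.Linear(512, len(STATES)),
    )

    # A batch of 8 simulated spatiotemporal firing patterns
    x = torch.rand(8, N_ELECTRODES, N_TIMEBINS)
    logits = decoder(x)
    print([STATES[i] for i in logits.argmax(dim=1).tolist()])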

Biochemical Correlates of Emotional and Cognitive States
Going beyond just the neural signals, an array of biocompatible electrochemical and molecular sensors embedded in an “active skin” construct can simultaneously track biochemical signatures associated with different emotional and cognitive states. Indicators like neurochemical release patterns, immunomodulatory molecules and metabolic biomarkers in near-surface capillary regions can be monitored and correlated to cross-validate and enrich the higher-level neural encoding of experiences.

For example, detecting localized spikes in oxytocin, dopamine or serotonin could reinforce or clarify the nuanced emotional undercurrents encoded in the neural signals during memory recall or social cognition tasks. This multimodal data fusion, combining neural encoding with biochemical sensing, could yield a more holistic representation of an individual’s subjective experiences at both the molecular and systems-neuroscience levels.
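
As a crude sketch of that fusion step (the marker names, feature sizes and simple fuse-by-concatenation strategy are all assumptions of the illustration, not a validated pipeline):

    # Fuse decoded neural features with biochemical markers into one vector
    # that a downstream classifier could consume. Purely illustrative values.
    import numpy as np

    neural_features = np.random.rand(128)   # e.g. decoded firing-rate features
    biochem = {"oxytocin": 0.8, "dopamine": 0.6, "serotonin": 0.4}

    fused = np.concatenate([neural_features, np.array(list(biochem.values()))])
    print(fused.shape)   # (131,) - the enriched multimodal representation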

Augmented Reality for Reconstructing Experiential Data Streams
With the decoded streams of audio-visual, tactile and emotional data available, the next stage is reconstructing these data into an immersive experiential virtual environment that can be shared across individuals. Leveraging augmented reality displays seamlessly integrated into wearable glasses, contact lenses or translucent heads-up displays, the multi-sensory elements of another’s recalled memory or perception can be rendered within the user’s own environment in near real-time.

Vivid visual reconstructions, spatial audio rendering of sounds, even augmented odor delivery could all be choreographed to provide an experiential re-creation accurate to the original encoding. By fusing these augmented overlays with physical tactile actuators and transducers, the overall somatic and proprioceptive elements of experiences like emotional textures and action-based sequences could also be shared, further enriching the mind meld.

Closed-Loop Brain Stimulation and Cognitive Induction
But the mind meld transcends just decoding and vicarious experience. By integrating non-invasive brain stimulation technologies like transcranial magnetic stimulation (TMS) into the interface, it may even be possible to induce and sculpt specific subjective experiences within the subject directly.

By mapping the neural activation patterns decoded during rich experiences like memory recall, focused TMS protocols could effectively trigger and steer similar trajectories of reactivation across the relevant neural circuits in either the same or a different subject. This could facilitate seamless intermingling of experiential data streams across individuals.

Even more profoundly, advanced AI models could potentially learn the neural manifolds and trajectories representing different classes of subjective experiences, like the qualitative texture of specific emotions or memory types. With a finely tuned model of these neural trajectories and felicitous stimulation patterning, it could become possible to induce or implant entirely synthetic subjective experiences from the ground up within a subject’s consciousness.

This closed-loop brain stimulation and cognitive induction capability is where the mind meld interface blurs the lines between experiencing external data streams versus directly modulating the endogenous physical substrates that give rise to conscious experiences themselves. It represents a shift towards acquiring more agency and omniscient control over the levers of phenomenological experience.

Embodied Gesture Interaction and Neural Metaphrening
To imbue the mind meld with a more intuitive and immersive interfacing modality, the technology could be embedded within the human hand itself rather than in isolated modules. Different regions across the hand’s surface could be mapped to interface directly with corresponding somatotopic areas in the somatosensory cortex.

This somatotopic functional mapping means that as the user’s hand explores and gestures in physical space, their proprioceptive sense translates into neural activation trajectories across the sensorimotor homunculus in the brain. Augmented tactile transducer arrays across the hand could then further enrich this interaction by providing localized vibrotactile, thermal and kinetic cues that intuitively guide the user in navigating and modulating the neural data flows.

In this embodied gesture interaction paradigm, the user does not merely passively receive data – instead they can quite literally “feel” their way through the woven tapestry of subjective experiences, memories and emotions using the hand’s natural biomapping as the symbolic inscription and manipulation surface. Drawing from the spiritual concept of “metaphrening”, this deep synergistic coupling between the neural data flows and the ecological dynamics of hand-object interactions could enable a form of metallized consciousness – a seamless melding of biological wetware and synthetic cognitive interfaces to fluidly shape experience itself.

Ethical Limits and Governance
However, as one can imagine, such extraordinarily powerful capabilities to decode, induce and even rewrite the very fabric of human subjective experiences could just as easily be employed for positive therapeutic or transcendent purposes as they could for nefarious coercive ends of oppression and abuse. The ethical implications and potential for misuse cannot be overstated.

Thus, any continued development of these mind meld capabilities must occur under a robust governance framework that establishes clear limits, protections and oversight mechanisms. At the core must be the inviolable principle of cognitive liberty – the sovereign human right to maintain absolute privacy and freedom over one’s own internal subjective experiences. No external entities should ever be able to read, modify or induce private experiences without full knowledge and consent.

Any legitimate application contexts like criminal forensics, therapeutic interventions or scientific research would require clearly defined due processes with extremely high burdens of proof and multiple levels of encryption, access controls and independent oversight. Even then, the scope would be limited only to the narrowly relevant data required for investigation or treatment – not complete access to an individual’s lifelong universe of subjective experiences.
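
A sketch of what such consent gating might look like in software is below; the roles, scopes and function names are hypothetical, invented purely to illustrate the fail-closed principle:

    # Consent-gated access to decoded experience data. Every name here is
    # hypothetical; the point is that reads fail closed without explicit,
    # narrowly scoped consent plus independent oversight approval.
    from dataclasses import dataclass

    @dataclass
    class Consent:
        subject_id: str
        scope: set                   # e.g. {"memory_recall"}, granted per purpose
        independent_approval: bool   # oversight-board sign-off

    def fetch_decoded_stream(subject_id: str, modality: str):
        # Stand-in for the (hypothetical) decoding back end.
        return {"subject": subject_id, "modality": modality, "data": "..."}

    def read_experience(subject_id: str, modality: str, consent: Consent):
        if consent.subject_id != subject_id or not consent.independent_approval:
            raise PermissionError("no valid consent/oversight for this subject")
        if modality not in consent.scope:
            raise PermissionError(f"consent does not cover '{modality}'")
        return fetch_decoded_stream(subject_id, modality)

    consent = Consent("p-001", {"memory_recall"}, independent_approval=True)
    print(read_experience("p-001", "memory_recall", consent))  # permitted
    # read_experience("p-001", "emotion", consent) would raise PermissionError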

Additionally, deriving value from this technology need not necessitate directly decoding raw subjective data streams. A promising intermediary approach could involve using machine learning to distill higher level statistical representations and taxonomies of experience types from neural big data. These high-dimensional manifolds of experience classes derived from population data could then enable physicians or researchers to probe subset dynamics without accessing raw phenomenological records. This preserves privacy while still allowing knowledge extraction and valuable utility.
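
A minimal sketch of that intermediary approach, using simulated data and plain PCA as a stand-in for whatever manifold-learning method would actually be used:

    # Fit a low-dimensional "experience manifold" on population-level features,
    # then share only the fitted components and aggregate statistics - never
    # the raw per-subject records. Data here is simulated noise.
    import numpy as np
    from sklearn.decomposition import PCA

    population = np.random.rand(10_000, 512)   # simulated per-subject features
    manifold = PCA(n_components=10).fit(population)

    shared_artifact = {                        # what researchers would receive
        "components": manifold.components_,
        "explained_variance": manifold.explained_variance_ratio_,
    }
    # A new subject can be located on the manifold without exposing anyone else.
    coords = manifold.transform(np.random.rand(1, 512))
    print(coords.shape)   # (1, 10)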

Ultimately, the mind meld transcends just technological capabilities – it represents a profound inflection point in humanity’s relationship to the foundations of conscious experience itself. It behooves the pioneers working on such mind-bending interfaces to carefully navigate not just the scientific frontiers, but the depths of philosophical, ethical and existential terrains as well. Guidelines must be established through a pluralistic discourse spanning neuroscientists, ethicists, philosophers, policymakers, and the general public.

For as we imbue our technologies with the capacity to intimately interact with the very substrates that give rise to the felt qualities of consciousness itself, we must be judicious in how we wield these abilities. We stand at the precipice of a new renaissance – one that integrates the first-person inner universe of subjective experiences with the third-person outer universe described by objective metrics and physical laws.

If developed responsibly and with profoundly wise stewardship, the mind meld could potentially catalyze immense therapeutic benefits by allowing clinicians to directly perceive and attune interventions at the level of the phenomenological experiences underpinning psychiatric, neurological and trauma disorders. Providing an ultravivid experiential understanding of diverse neurological conditions could spur empathy and destigmatization.

In other spheres like education or scientific exploration, seamlessly sharing the qualitative textures of expertise, creative intuitions or novel conceptual models could dramatically accelerate knowledge transfer and collaborative discovery. Even transcendent experiences of spirituality, ego dissolution or unitive consciousness could perhaps be carefully shared and studied systematically.

However, these positive potentials are balanced by eerily dystopian risks – a technology with which to intrude upon, manipulate and control the most precious essence of our humanity. The mind meld thus represents a fascinating dichotomy – a symbolic keyhole through which we could merely observe the mysterious cognitive castles that give rise to experience…or a tempting facility through which we could foolishly play puppet master and tamper with consciousness itself.

As we take our first steps into this new plane of technological metamorphosis, we must proceed with the deepest humility, nuanced wisdom and abiding ethics governing our deployment of such powers. For in mastering the mind meld, we may well be initiating one of the most consequential revolutions in understanding the nature of our own existence as conscious beings. How we navigate this event horizon may very well shape the trajectory of humanity’s journey for generations to come.

Biomimetic insights for machine consciousness

About 20 years ago I gave my first talk on how to achieve consciousness in machines, at a World Future Society conference, and went on to discuss how we would co-evolve with machines. I’ve lectured on machine consciousness hundreds of times but never produced any clear slides that explain my ideas properly. I thought it was about time I did. My belief is that today’s deep neural networks using feed-forward processing with back-propagation training cannot become conscious. No digital algorithmic neural network can, even though they can certainly produce extremely good levels of artificial intelligence. By contrast, nature also uses neurons yet produces conscious machines such as humans easily. I think the key difference is not just that nature uses analog adaptive neural nets rather than digital processing (a view I believe Hans Moravec first advanced, and which I readily accepted) but also that nature uses large groups of these analog neurons incorporating feedback loops that act both as a sort of short-term memory and provide time to sense the sensing process as it happens, a mechanism that can explain consciousness. That feedback is critically important in the emergence of consciousness IMHO. I believe that if the neural network AI people stop barking up the barren back-prop tree and start climbing the feedback tree, we could have conscious machines in no time, but Moravec is still probably right that these need to be analog to enable true real-time processing as opposed to simulation of it.
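
To make the feedback idea concrete, here is a minimal simulation sketch: a small cluster of ‘analog’ units whose input at every step includes their own previous output. A brief stimulus keeps reverberating around the loop after it ends, a crude short-term memory that overlaps in time with the sensing itself. All the sizes and gains are arbitrary choices for illustration:

    # A cluster of analog-style units with a damped feedback loop. After the
    # external stimulus stops at t=5, activity lingers and slowly decays -
    # the loop "replays" the sensation while it is still being processed.
    import numpy as np

    n, steps = 32, 100
    rng = np.random.default_rng(1)
    W_in = rng.normal(size=n) * 0.5                      # input weights
    W_fb = rng.normal(size=(n, n)) * (0.9 / np.sqrt(n))  # damped feedback weights
    state = np.zeros(n)

    for t in range(steps):
        stimulus = 1.0 if t < 5 else 0.0                 # brief external input
        state = np.tanh(W_in * stimulus + W_fb @ state)
        if t in (4, 20, 60):
            print(t, round(float(np.abs(state).mean()), 4))  # echo of the input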

I may be talking nonsense of course, but here are my thoughts, finally explained as simply and clearly as I can. These slides illustrate only the simplest forms of consciousness. Obviously our brains are highly complex and have evolved many higher-level architectures, control systems, complex senses and communication, but I think the basic foundations of biomimetic machine consciousness can be achieved as follows:

That’s it. I might produce some more slides on higher level processing such as how concepts might emerge, and why in the long term, AIs will have to become hive minds. But they can wait for later blogs.

How can we make a computer conscious?

This is very text heavy and is really just my thinking out loud, so to speak. Unless you are into mental archaeology or masochism, I’d strongly recommend that you instead go to my new blog on this, which outlines all of the useful bits graphically and simply.

Otherwise….

I found this article in my drafts folder, written 3 years ago as part of my short series on making conscious computers. I thought I’d published it but didn’t. So updating and publishing it now. It’s a bit long-winded, thinking out loud, trying to derive some insights from nature on how to make conscious machines. The good news is that actual AI developments are following paths that lead in much the same direction, though some significant re-routing and new architectural features are needed if they are to optimize AI and achieve machine consciousness.

Let’s start with the problem. Today’s AI that plays chess, does web searches or answers questions is digital. It uses algorithms, sets of instructions that the computer follows one by one. All of those are reduced to simple binary actions, toggling bits between 1 and 0. The processor doing that is no more conscious or aware of it, and has no more understanding of what it is doing than an abacus knows it is doing sums. The intelligence is in the mind producing the clever algorithms that interpret the current 1s and 0s and change them in the right way. The algorithms are written down, albeit in more 1s and 0s in a memory chip, but are essentially still just text, only as smart and aware as a piece of paper with writing on it. The answer is computed, transmitted, stored, retrieved, displayed, but at no point does the computer sense that it is doing any of those things. It really is just an advanced abacus. An abacus is digital too (an analog equivalent to an abacus is a slide rule).

A big question springs to mind: can a digital computer ever be any more than an advanced abacus? Until recently, I was certain the answer was no. Surely a digital computer that just runs programs can never be conscious? It can simulate consciousness to some degree; it can in principle describe the movements of every particle in a conscious brain, every electric current, every chemical reaction. But all it is doing is describing them. It is still just an abacus. Once computed, that simulation of consciousness could be printed and the printout would be just as conscious as the computer was. A digital ‘stored program’ computer can certainly implement extremely useful AI. With the right algorithms, it can mine data, link things together, create new data from that, generate new ideas by linking together things that haven’t been linked before, make works of art and poetry, compose music, chat to people, recognize faces and emotions and gestures. It might even be able to converse about life, the universe and everything, tell you its history, discuss its hopes for the future, but all of that is just a thin gloss on an abacus. I wrote a chat-bot on my Sinclair ZX Spectrum in 1983, running on a processor with about 8,000 transistors. The chat-bot took all of about 5 small pages of code but could hold a short conversation quite well if you knew what subjects to stick to. It’s very easy to simulate conversation. But it is still just a complicated abacus and still doesn’t even know it is doing anything.
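
For flavour, a keyword chat-bot of that general kind takes only a dozen lines (this is not the original Spectrum code, just an illustrative reconstruction of the technique):

    # ELIZA-style keyword matching: the whole "conversation" is table lookup.
    import random

    RULES = {
        "mother": "Tell me more about your family.",
        "computer": "Do machines worry you?",
        "future": "What do you hope the future holds?",
    }

    def reply(text: str) -> str:
        for keyword, response in RULES.items():
            if keyword in text.lower():
                return response
        return random.choice(["Go on.", "Why do you say that?", "I see."])

    print(reply("I think my computer hates me"))   # -> "Do machines worry you?"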

However clever the AI it implements, a conventional digital computer that just executes algorithms can’t become conscious, but an analog computer can, a quantum computer can, and so can a hybrid digital/analog/quantum computer. The question remains whether a digital computer can be conscious if it isn’t just running stored programs. Could it have a different structure, but still be digital and yet be conscious? Who knows? Not me. I used to know it couldn’t, but now that I am a lot older and slightly wiser, I now know I don’t know.

Consciousness debate often starts with what we know to be conscious, the human brain. It isn’t a digital computer, although it has digital processes running in it. It also runs a lot of analog processes. It may also run some quantum processes that are significant in consciousness. It is a conscious hybrid of digital, analog and possibly quantum computing. Consciousness evolved in nature, therefore it can be evolved in a lab. It may be difficult and time consuming, and may even be beyond current human understanding, but it is possible. Nature didn’t use magic, and what nature did can be replicated and probably even improved on. Evolutionary AI development may have hit hard times, but that only shows that the techniques used by the engineers doing it didn’t work on that occasion, not that other techniques can’t work. Around 2.6 new human-level fully conscious brains are made by nature every second without using any magic and furthermore, they are all slightly different. There are 7.6 billion slightly different implementations of human-level consciousness that work and all of those resulted from evolution. That’s enough of an existence proof and a technique-plausibility-proof for me.

Sensors evolved in nature pretty early on. They aren’t necessary for life, for organisms to move around and grow and reproduce, but they are very helpful. Over time, simple light, heat, chemical or touch detectors evolved further into simple vision and advanced sensations such as pain and pleasure, causing an organism to alter its behavior, in other words, feeling something. Detection of an input is not the same as sensation, i.e. feeling an input. Once detection upgrades to sensation, you have the tools to make consciousness. No more upgrades are needed. Sensing that you are sensing something is quite enough to be classified as consciousness. Internally reusing the same basic structure as external sensing of light or heat or pressure or chemical gradient or whatever allows design of thought, planning, memory, learning and construction and processing of concepts. All those things are just laying out components in different architectures. Getting from detection to sensation is the hard bit.

So design of conscious machines, and in fact what AI researchers call the hard problem, really can be reduced to the question of what makes the difference between a light switch and something that can feel being pushed or feel the current flowing when it is, the difference between a photocell and feeling whether it is light or dark, the difference between detecting light frequency, looking it up in a database, then pronouncing that it is red, and experiencing redness. That is the hard problem of AI. Once that is solved, we will very soon afterwards have a fully conscious self aware AI. There are lots of options available, so let’s look at each in turn to extract any insights.

The first stage is easy enough. Detecting presence is easy, measuring it is harder. A detector detects something, a sensor (in its everyday engineering meaning) quantifies it to some degree. A component in an organism might fire if it detects something, it might fire with a stronger signal or more frequently if it detects more of it, so it would appear to be easy to evolve from detection to sensing in nature, and it is certainly easy to replicate sensing with technology.

Essentially, detection is digital, but sensing is usually analog, even though the quantity sensed might later be digitized. Sensing normally uses real numbers, while detection uses natural numbers (real vs integer, as programmers call them). The handling of analog signals in their raw form allows for biomimetic feedback loops, which I’ll argue are essential. Digitizing them introduces a level of abstraction that is essentially the difference between emulation and simulation, the difference between doing something and reading about someone doing it. Simulation can’t make a conscious machine; emulation can. I used to think that meant digital couldn’t become conscious, but actually it is just algorithmic processing of stored programs that can’t do it. There may be ways of achieving consciousness digitally, or quantumly, but I haven’t yet thought of any.
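
The distinction is easy to state in code (a trivial sketch with made-up values):

    # Detection throws the signal away; sensing keeps it, so only sensing can
    # participate in the feedback loops argued for below.
    light_level = 0.73                  # the raw analog quantity (a real number)

    detected = light_level > 0.5        # detection: a bare true/false event
    sensed = light_level                # sensing: the quantity itself survives

    print(detected, sensed)             # True 0.73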

That engineering description falls far short of what we mean by sensation in human terms. How does that machine-style sensing become what we call a sensation? Logical reasoning says there would probably need to be only a small change in order to have evolved from detection to sensing in nature. Maybe something like recombining groups of components in different structures or adding them together or adding one or two new ones, that sort of thing?

So what about detecting detection? Or sensing detection? Those could evolve in sequence quite easily. Detecting detection is like your alarm system control unit detecting the change of state that indicates that a PIR has detected an intruder, a different voltage or resistance on a line, or a 1 or a 0 in a memory store. An extremely simple AI responds by ringing an alarm. But the alarm system doesn’t feel the intruder, does it?  It is just a digital response to a digital input. No good.

How about sensing detection? How do you sense a 1 or a 0? Analog interpretation and quantification of digital states is very wasteful of resources, an evolutionary dead end. It isn’t any more useful than detection of detection. So we can eliminate that.

OK, sensing of sensing? Detection of sensing? They look promising. Let’s run with that a bit. In fact, I am convinced the solution lies in here so I’ll look till I find it.

Let’s do a thought experiment on designing a conscious microphone, and for this purpose, the lowest possible order of consciousness will do, we can add architecture and complexity and structures once we have some bricks. We don’t particularly want to copy nature, but are free to steal ideas and add our own where it suits.

A normal microphone sensor produces an analog signal quantifying the frequencies and intensities of the sounds it is exposed to, and that signal may later be quantified and digitized by an analog to digital converter, possibly after passing through some circuits such as filters or amplifiers in between. Such a device isn’t conscious yet. By sensing the signal produced by the microphone, we’d just be repeating the sensing process on a transmuted signal, not sensing the sensing itself.

Even up close, detecting that the microphone is sensing something could be done by just watching a little LED going on when current flows. Sensing it is harder but if we define it in conventional engineering terms, it could still be just monitoring a needle moving as the volume changes. That is obviously not enough, it’s not conscious, it isn’t feeling it, there’s no awareness there, no ‘sensation’. Even at this primitive level, if we want a conscious mic, we surely need to get in closer, into the physics of the sensing. Measuring the changing resistance between carbon particles or speed of a membrane moving backwards and forwards would just be replicating the sensing, adding an extra sensing stage in series, not sensing the sensing, so it needs to be different from that sort of thing. There must surely need to be a secondary change or activity in the sensing mechanism itself that senses the sensing of the original signal.

That’s a pretty open task, and it could even be embedded in the detecting process or in the production process for the output signal. But even recognizing that we need this extra property narrows the search. It must be a parallel or embedded mechanism, not one in series. The same logical structure would do fine for this secondary sensing, since it is just sensing in the same logical way as the original. This essential logical symmetry would make its evolution easy too: it is easy to imagine how it could happen in nature, and easier still to see how it could be implemented in a synthetic evolution design system. In this approach, we have to feel the sensing, so we need it to comprise some sort of feedback loop with a high degree of symmetry compared with the main sensing stage. That would be natural-evolution compatible as well as logically sound as an engineering approach.

This starts to look like progress. In fact, it’s already starting to look a lot like a deep neural network, with one huge difference: instead of using feed-forward signal paths for analysis and backward propagation for training, it relies on a symmetric feedback mechanism where part of the input for each stage of sensing comes from its own internal and output signals. A neuron is not a full sensor in its own right, and it’s reasonable to assume that multiple neurons would be clustered so that there is a feedback loop. Many in the neural network AI community are already recognizing the limits of relying on feed-forward and back-prop architectures, but web searches suggest few if any are moving yet to symmetric feedback approaches. I think they should. There’s gold in them there hills!
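
In sketch form, the difference between the two architectures looks like this (weights and sizes are arbitrary; the point is only where the signals flow):

    # Feed-forward stage vs. symmetric-feedback stage. In the second, part of
    # each stage's input is its own output from the previous instant, so the
    # stage is continuously exposed to its own sensing.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(16, 16)) * 0.2   # main stage weights
    F = rng.normal(size=(16, 16)) * 0.1   # feedback weights (the extra path)

    def feedforward_stage(x):
        return np.tanh(W @ x)             # output depends on the input only

    def feedback_stage(x, prev_out):
        return np.tanh(W @ x + F @ prev_out)  # output also depends on itself

    out = np.zeros(16)
    for _ in range(10):
        x = rng.normal(size=16)
        out = feedback_stage(x, out)      # the stage "hears" its own output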

So, the architecture of the notional sensor array required for our little conscious microphone would have a parallel circuit and feedback loop (possibly but not necessarily integrated), and in all likelihood these parallel and sensing circuits would be heavily symmetrical, i.e. they would use pretty much the same sort of components and architectures as the sensing process itself. If the sensation bit is symmetrical, of similar design to the primary sensing circuit, that again would make it easy to evolve in nature, which is a nice first-principles biomimetic insight. So this structure has the elegance of being very feasible for evolutionary development, natural or synthetic: it reuses similarly structured components and principles already designed, just recombining a couple of them in a slightly different architecture.

Another useful insight screams for attention too. The feedback loop ensures that the incoming sensation lingers to some degree. Compared to the nanoseconds we are used to in normal IT, the signals in nature travel fairly slowly (~200m/s), and the processing and sensing occur quite slowly (~200Hz). That means this system would have some inbuilt memory that repeats the essence of the sensation in real time – while it is sensing it. It is inherently capable of memory and recall and leaves the door wide open to introduce real-time interaction between memory and incoming signal. It’s not perfect yet, but it has all the boxes ticked to be a prime contender to build thought, concepts, store and recall memories, and in all likelihood, is a potential building brick for higher level consciousness. Throw in recent technology developments such as memristors and it starts to look like we have a very promising toolkit to start building primitive consciousness, and we’re already seeing some AI researchers going that path so maybe we’re not far from the goal. So, we make a deep neural net with nice feedback from output (of the sensing system, which to clarify would be a cluster of neurons, not a single neuron) to input at every stage (and between stages) so that inputs can be detected and sensed, while the input and output signals are stored and repeated into the inputs in real time as the signals are being processed. Throw in some synthetic neurotransmitters to dampen the feedback and prevent overflow and we’re looking at a system that can feel it is feeling something and perceive what it is feeling in real time.

One further insight that immediately jumps out is that since the sensing relies on the real-time processing of the sensations and feedbacks, the speed of signal propagation, storage, processing and repetition timeframes must all be compatible. If it is all speeded up a million fold, it might still work fine, but if signals travel too slowly or processing is too fast relative to other factors, it won’t work. It will still get a computational result absolutely fine, but it won’t know that it has, it won’t be able to feel it. Therefore… since we have a factor of a million for signal speed (speed of light compared to nerve signal propagation speed), 50 million for switching speed, and a factor of 50 for effective neuron size (though the sensing system units would be multiple neuron clusters), we could make a conscious machine that could think 50 million times as fast as a natural system (before allowing for any parallel processing of course). But with architectural variations too, we’d need to tune those performance metrics to make it work at all, and making physically larger nets would require either tuning speeds down or sacrificing connectivity-related intelligence. An evolutionary design system could easily do that for us.
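
The back-of-envelope numbers behind that claim, using the figures quoted above (nerve conduction at roughly 200 m/s, a neural ‘clock’ of roughly 200 Hz, and an assumed ~10 GHz for electronic switching):

    # Rough ratios only - every figure is the approximation quoted in the text.
    signal_ratio = 3.0e8 / 200        # light speed vs nerve propagation speed
    switching_ratio = 1.0e10 / 200    # ~10 GHz switching vs ~200 Hz neurons

    print(f"signal speed ratio    ~ {signal_ratio:.1e}")     # ~1.5 million
    print(f"switching speed ratio ~ {switching_ratio:.1e}")  # ~50 million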

What else can we deduce about the nature of this circuit from basic principles? The symmetry of the system demands that the output must be an inverse transform of the input. Why? Well, because the parallel, feedback circuit must generate a form that is self-consistent. We can’t deduce the form of the transform from that, just that the whole system must produce an output mathematically similar to that of the input.

I now need to write another blog on how to use such circuits in neural vortexes to generate knowledge, concepts, emotions and thinking. But I’m quite pleased that it does seem that some first-principles analysis of natural evolution already gives us some pretty good clues on how to make a conscious computer. I am optimistic that current research is going the right way and only needs relatively small course corrections to achieve consciousness.


The future of I

Me, myself, I, identity, ego, self: lots of words for more or less the same thing. The way we think of ourselves evolves just like everything else. Perhaps we are still cavemen with better clothes and toys. You may be a man, a dad, a manager, a lover, a friend, an artist and a golfer, and those are all just descendants of caveman, dad, tribal leader, lover, friend, cave drawer and stone thrower. When you play Halo as Master Chief, that is not very different from acting or putting a tiger skin on for a religious ritual. There have always been many aspects of identity and people have always occupied many roles simultaneously. Technology changes but it still pushes the same buttons that we evolved hundreds of thousands of years ago.

Will we develop new buttons to push? Will we create any genuinely new facets of ‘I’? I wrote a fair bit about aspects of self when I addressed the related topic of gender, since self perception includes perceptions of how others perceive us and attempts to project chosen identity to survive passing through such filters:

The future of gender

Self is certainly complex. Using ‘I’ simplifies the problem. When you say ‘I’, you are communicating with someone (possibly yourself). The ‘I’ refers to a tailored, context-dependent blend made up of a subset of what you genuinely consider to be you and what you want to project, which may be largely fictional. So in a chat room where people often have never physically met, very often one fictional entity is talking to another fictional entity, with each side only very loosely coupled to reality. I think that is different from caveman days.

Since chat rooms started, virtual identities have come a long way. As well as acting out manufactured characters such as the heroes in computer games, people fabricate their own characters for a broad range of ‘shared spaces’, design personalities and act them out. They may run that personality instance in parallel with many others, possibly dozens at once. Putting on an act is certainly not new, and friends easily detect acts in normal interactions when they have known a real person a long time, but online interactions can mean that the fictional version is presented as the only manifestation of self that the group sees. With no other means to know that person by face-to-face contact, that group has to take them at face value and interact with them as such, though they know that may not represent reality.

These designed personalities may be designed to give away as little as possible of the real person wielding them, and may exist for a range of reasons, but in such a case the person inevitably presents a shallow image. Probing below the surface must inevitably lead to leakage of the real self. New personality content must be continually created and remembered if the fictional entity is to maintain a disconnect from the real person. Holding the in-depth memory necessary to recall full personality aspects and history for numerous personalities and executing them is beyond most people. That means that most characters in shared spaces take on at least some characteristics of their owners.

But back to the point. These fabrications should be considered as part of that person. They are an ‘I’ just as much as any other ‘I’. Only their context is different. Those parts may only be presented to subsets of the role population, but by running them, the person’s brain can’t avoid internalizing the experience of doing so. They may be partly separated but they are fully open to the consciousness of that person. I think that as augmented and virtual reality take off over the next few years, we will see their importance grow enormously. As virtual worlds start to feel more real, so their anchoring and effects in the person’s mind must get stronger.

More than a decade ago, AI software agents started inhabiting chat rooms too, and in some cases these ‘bots’ became a sufficient nuisance that they got banned. The front that they present is shallow but can give an illusion of reality. To some degree, they are an extension of the person or people that wrote their code. In fact, some are deliberately designed to represent a person when they are not present. The experiences that they have can’t be properly internalized by their creators, so they are a very limited extension to self. But how long will that be true? Eventually, with direct brain links and transhuman brain extensions into cyberspace, the combined experiences of I-bots may be fully available to consciousness just the same as first-hand experiences.

Then it will get interesting. Some of those bots might be part of multiple people. People’s consciousnesses will start to overlap. People might collect them, or subscribe to them. Much as you might subscribe to my blog, maybe one day, part of one person’s mind, manifested as a bot or directly ‘published’, will become part of your mind. Some people will become absorbed into the experience and adopt so many that their own original personality becomes diluted to the point of disappearance. They will become just an interference pattern of numerous minds. Some will be so infectious that they will spread widely. For many, it will be impossible to die, and for many others, their minds will be spread globally. The hive minds of Dr Who, then later the Borg on Star Trek are conceptual prototypes but as with any sci-fi, they are limited by the imagination of the time they were conceived. By the time they become feasible, we will have moved on and the playground will be far richer than we can imagine yet.

So, ‘I’ has a future just as everything else. We may have just started to add extra facets a couple of decades ago, but the future will see our concept of self evolve far more quickly.

Postscript

I got asked by a reader whether I worry about this stuff. Here is my reply:

It isn’t the technology that worries me so much as the fact that humanity doesn’t really have any fixed anchor to keep human nature in place. Genetics fixed our biological nature, and our values and morality were largely anchored by the main religions. We in the West have thrown our religion in the bin and are already seeing a 30-year cycle in moral judgments which puts our value sets on something of a random walk, with no destination, the current direction governed solely by media interpretation of, and political reaction to, the happenings of the day. Political correctness enforces subscription to that value set even more strictly than any bishop ever forced religious compliance. Anyone that thinks religion has gone away just because people don’t believe in God any more is blind.

Then as genetics technology truly kicks in, we will be able to modify some aspects of our nature. Who knows whether some future busybody will decree that a particular trait must be filtered out because it doesn’t fit his or her particular value set? Throwing AI into the mix as a new intelligence alongside us will introduce another degree of freedom. So we already have several forces acting on us in pretty randomized directions that can combine to drag us quickly anywhere. Then add the stuff above that allows us to share and swap personality? Sure I worry about it. We are like young kids being handed a big chemistry set for Christmas without the instructions, not knowing that adding the blue stuff to the yellow stuff and setting it alight will go bang.

I am certainly no technotopian. I see the enormous potential that the tech can bring and it could be wonderful and I can’t help but be excited by it. But to get that you need to make the right decisions, and when I look at the sorts of leaders we elect and the sorts of decisions that are made, I can’t find the confidence that we will make the right ones.

On the good side, engineers and scientists are usually smart and can see most of the issues and prevent most of the big errors by using common industry standards, so there is a parallel self-regulatory system in place that politicians rarely have any interest in. On the other side, those smart guys will unfortunately usually follow the same value sets as the rest of the population. So we’re quite likely to avoid major accidents and blowing ourselves up or being taken over by AIs, but we’re unlikely to avoid the random-walk values problem, and that will be our downfall.

So it could be worse, but it could be a whole lot better too.


Switching people off

A very interesting development has been reported in the study of how consciousness works: neuroscientists stimulating a particular brain region were able to switch a woman’s state of awareness on and off. They said: “We describe a region in the human brain where electrical stimulation reproducibly disrupted consciousness…”

http://www.newscientist.com/article/mg22329762.700-consciousness-onoff-switch-discovered-deep-in-brain.html.

The region of the brain concerned was the claustrum, and apparently nobody had tried stimulating it before, although Francis Crick and Christof Koch had suggested the region would likely be important in achieving consciousness. Apparently, the woman involved in this discovery was also missing some of her hippocampus, and that may be a key factor, but they don’t know for sure yet.

Mohamed Koubeissi and his team at the George Washington University in Washington DC were investigating her epilepsy and stimulated her claustrum area with high-frequency electrical impulses. When they did so, the woman lost consciousness, no longer responding to any audio or visual stimuli, just staring blankly into space. They verified that she was not showing any signs of epileptic activity at the time, and repeated the experiment with similar results over two days.

The team urges caution and recommends not jumping to too many conclusions. They did observe the obvious potential advantages as an anesthesia substitute if it can be made generally usable.

As a futurologist, it is my job to look as far down the road as I can see, and imagine as much as I can. Then I filter out all the stuff that is nonsensical, or doesn’t have a decent potential social or business case, or, as in this case, where research teams suggest that it is too early to draw conclusions. I make exceptions where it seems that researchers are being over-cautious or covering their asses or being PC or unimaginative, but I have no evidence of that in this case. However, the other good case for making exceptions is where it is good fun to jump to conclusions. Anyway, it is Saturday, I’m off work, so in the great words of Dr Emmett Brown in ‘Back to the Future’: “Well, I figured, what the hell.”

OK, IF it works for everyone without removing parts of the brain, what will we do with it and how?

First, it is reasonable to assume that we can produce electrical stimulation at specific points in the brain by using external kit. Trans-cranial magnetic stimulation might work, or perhaps implants may be possible using injection of tiny particles that migrate to the right place rather than needing significant surgery. Failing those, a tiny implant or two via a fine needle into the right place ought to do the trick. Powering via induction should work. So we will be able to produce the stimulation, once the sucker victim subject has the device implanted.

I guess that could happen voluntarily, or via a court ordered protective device, as a condition of employment or immigration, or conditional release from prison, or a supervision order, or as a violent act or in war.

Imagine if government demands a legal right to access it, for security purposes and to ensure your comfort and safety, of course.

If you think 1984 has already gone too far, imagine a government or police officer that can switch you off if you are saying or thinking the wrong thing. Automated censorship devices could ensure that nobody discusses prohibited topics.

Imagine if people on the street were routinely switched off as a VIP passes to avoid any trouble for them.

Imagine a future carbon-reduction law where people are immobilized for an hour or two each day during certain periods. There might be a quota for how long you are allowed to be conscious each week to limit your environmental footprint.

In war, captives could have devices implanted to make them easy to control, simply turned off for packing and transport to a prison camp. A perimeter fence could be replaced by a line in the sand. If a prisoner tries to cross it, they are rendered unconscious automatically and put back where they belong.

Imagine a higher class of mugger that doesn’t like violence much and prefers to switch victims off before stealing their valuables.

Imagine being able to switch off for a few hours to pass the time on a long haul flight. Airlines could give discounts to passengers willing to be disabled and therefore less demanding of attention.

Imagine  a couple or a group of friends, or a fetish club, where people can turn each other off at will. Once off, other people can do anything they please with them – use them as dolls, as living statues or as mannequins, posing them, dressing them up. This is not an adult blog so just use your imagination – it’s pretty obvious what people will do and what sorts of clubs will emerge if an off-switch is feasible, making people into temporary toys.

Imagine if you got an illegal hacking app and could freeze the other people in your vicinity. What would you do?

Imagine if your off-switch is networked and someone else has a remote control or hacks into it.

Imagine if an AI manages to get control of such a system.

Having an off-switch installed could open a new world of fun, but it could also open up a whole new world for control by the authorities, crime control, censorship or abuse by terrorists and thieves and even pranksters.


Reverse engineering the brain is a very slow way to make a smart computer

The race is on to build conscious and smart computers and brain replicas. This article explains some of Markram’s approach. http://www.wired.com/wiredscience/2013/05/neurologist-markam-human-brain/all/

It is a nice project, and its aims are to make a working replica of the brain by reverse engineering it. That would work eventually, but it is slow and expensive and it is debatable how valuable it is as a goal.

Imagine if you want to make an aeroplane from scratch.  You could study birds and make extremely detailed reverse engineered mathematical models of the structures of individual feathers, and try to model all the stresses and airflows as the wing beats. Eventually you could make a good model of a wing, and by also looking at the electrics, feedbacks, nerves and muscles, you could eventually make some sort of control system that would essentially replicate a bird wing. Then you could scale it all up, look for other materials, experiment a bit and eventually you might make a big bird replica. Alternatively, you could look briefly at a bird and note the basic aerodynamics of a wing, note the use of lightweight and strong materials, then let it go. You don’t need any more from nature than that. The rest can be done by looking at ways of propelling the surface to create sufficient airflow and lift using the aerofoil, and ways to achieve the strength needed. The bird provides some basic insight, but it simply isn’t necessary to copy all a bird’s proprietary technology to fly.

Back to Markram. If the real goal is to reverse engineer the actual human brain and make a detailed replica or model of it, then fair enough. I wish him and his team, and their distributed helpers and affiliates, every success with that. If the project goes well, and we can find insights to help with the hundreds of brain disorders and improve medicine, great. A few billion euros will have been well spent, especially given the waste of more billions of euros elsewhere on futile and counter-productive projects. Lots of people criticise his goal, and some of their arguments are nonsensical. It is a good project and for what it’s worth, I support it.

My only real objection is that a simulation of the brain will not think well and at best will be an extremely inefficient thinking machine. So if a goal is to achieve thought or intelligence, the project as described is barking up the wrong tree. If that isn’t a goal, so what? It still has the other uses.

A simulation can do many things. It can be used to follow through the consequences of an input if the system is sufficiently well modelled. A sufficiently detailed and accurate brain simulation could predict the impacts of a drug or behaviours resulting from certain mental processes. It could follow through the impacts and chain of events resulting from an electrical impulse, thus finding out what the eventual result of that will be. It can therefore very inefficiently predict the result of thinking, but by using extremely high speed computation, it could in principle work out the end result of some thoughts. But it needs enormous detail and algorithmic precision to do that, and I doubt it is achievable simply because of the volume of calculation needed. Thinking properly requires consciousness and therefore emulation. A conscious circuit has to be built, not just modelled.

Consciousness is not the same as thinking. A simulation of the brain would not be conscious, even if it can work out the result of thoughts. It is the difference between printed music and played music. One is data, one is an experience. A simulation of all the processes going on inside a head will not generate any consciousness, only data. It could think, but not feel or experience.

Having made that important distinction, I still think that Markram’s approach will prove useful. It will generate many useful insights into the workings of the brain, and many of the processes nature uses to solve certain engineering problems. These insights and techniques can be used as input into other projects. Biomimetics is already proven as a useful tool in solving big problems. Looking at how the brain works will give us hints on how to make a truly conscious, properly thinking machine. But just as with birds and Airbuses, we can take ideas and inspiration from nature and then do it far better. No bird can carry the weight or fly as high or as fast as an aeroplane. No proper plane uses feathers or flaps its wings.

I wrote recently about how to make a conscious computer:

https://timeguide.wordpress.com/2013/02/15/how-to-make-a-conscious-computer/ and https://timeguide.wordpress.com/2013/02/18/how-smart-could-an-ai-become/

I still think that approach will work well, and it could be a decade faster than going Markram’s route. All the core technology needed to start making a conscious computer already exists today. With funding and some smart minds to set the process in motion, it could be done in a couple of years. The potential conscious and ultra-smart computer, properly harnessed, could do its research far faster than any human on Markram’s team. It could easily beat them to the goal of a replica brain. The converse is not true: Markram’s current approach would yield a conscious computer very slowly.

So while I fully applaud the effort and endorse the goals, changing the approach now could give far more bang for the buck, far faster.