Tag Archives: consciousness

42: the answer to life, the universe, and everything

Douglas Adams wrote ‘The Hitchhiker’s Guide to the Galaxy’, for my introduction to which I am grateful to my friend Padraig McKeag.

He listed 42 as the answer to The Question of Life, the Universe and Everything. A good choice I think.

Optional waffle: (I almost met Adams once since we were booked for the same book launch debate, but sadly he had to withdraw on the day so it never happened, and I never got a chance to be one of the many who asked him. On the other hand, the few personal idols I have actually met have confirmed that you should never meet your idols, and mentioning no names, it can be an extremely disappointing experience, so maybe it’s best that I can keep Douglas Adams as one of my favorite authors of all time.)

Speculation on Adams’ use of 42 is well documented: 42 is 101010 in binary; in base 13, 6 x 9 = 42; 42 is the ASCII code for the wildcard symbol *; and so on. Adams denied all of these, saying 42 had just been a random choice. See http://en.wikipedia.org/wiki/Phrases_from_The_Hitchhiker%27s_Guide_to_the_Galaxy for more speculation and commentary. Having picked 42, the 6 x 9 joke is exactly what I suspect I would have written to justify it. It is probably the most common multiplication error for the mathematically differently gifted. I don’t believe the base 13 or asterisk explanations. They are amusing but don’t hold water as The True Answer. I can happily accept he just picked it at random, but that doesn’t mean it is wrong.
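For anyone who wants to check the arithmetic behind those claims, here is a minimal Python sanity check (purely illustrative; none of it comes from Adams himself):

```python
# Quick sanity check of the numerological "explanations" for 42.

# 42 written in binary is 101010
assert format(42, "b") == "101010"

# In base 13, the digits "42" mean 4*13 + 2 = 54, which is 6 x 9 in decimal
assert 6 * 9 == 4 * 13 + 2

# Character code 42 in ASCII is the asterisk, commonly used as a wildcard
assert chr(42) == "*"

print("All three curiosities check out.")
```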

101010 has a nice symmetry: a single number with two digits in three groups. That 1-2-3 symmetry is itself a fact of life, the universe and everything. It is universally present. It is the basis of fractals, a sort of recursive symmetry which governs many aspects of the development of life, even a key foundation of consciousness.

101010

Suppose 1 and 0 represent the most fundamental things we observe about nature – wave-particle duality, on or off, life or death, existence or non-existence. Above that, we start to see all sorts of 3-way symmetry:

[Figure: Nature]

So if you have a lazy day and no boss breathing down your neck, it’s entirely possible to see at least some aspects of The Question of Life the Universe and Everything that might lead to an answer of 42.

The future of I

Me, myself, I, identity, ego, self: lots of words for more or less the same thing. The way we think of ourselves evolves just like everything else. Perhaps we are still cavemen with better clothes and toys. You may be a man, a dad, a manager, a lover, a friend, an artist and a golfer, and those are all just descendants of caveman, dad, tribal leader, lover, friend, cave drawer and stone thrower. When you play Halo as Master Chief, that is not very different from acting, or from putting on a tiger skin for a religious ritual. There have always been many aspects of identity, and people have always occupied many roles simultaneously. Technology changes, but it still pushes the same buttons that we evolved hundreds of thousands of years ago.

Will we develop new buttons to push? Will we create any genuinely new facets of ‘I’? I wrote a fair bit about aspects of self when I addressed the related topic of gender, since self-perception includes perceptions of how others perceive us, and attempts to project a chosen identity that survives passing through such filters:

https://timeguide.wordpress.com/2014/02/14/the-future-of-gender-2/

Self is certainly complex. Using ‘I’ simplifies the problem. When you say ‘I’, you are communicating with someone (possibly yourself). The ‘I’ refers to a tailored, context-dependent blend made up of a subset of what you genuinely consider to be you and what you want to project, which may be largely fictional. So in a chat room where people have often never physically met, very often one fictional entity is talking to another fictional entity, with each side only very loosely coupled to reality. I think that is different from caveman days.

Since chat rooms started, virtual identities have come a long way. As well as acting out manufactured characters such as the heroes in computer games, people fabricate their own characters for a broad range of ‘shared spaces’, designing personalities and acting them out. They may run that personality instance in parallel with many others, possibly dozens at once. Putting on an act is certainly not new, and friends easily detect acts in normal interactions when they have known the real person a long time, but online interactions can mean that the fictional version is presented as the only manifestation of self that the group sees. With no face-to-face contact as another means of knowing that person, the group has to take them at face value and interact with them as such, though they know that may not represent reality.

These designed personalities may be designed to give away as little as possible of the real person wielding them, and may exist for a range of reasons, but in such a case the person inevitably presents a shallow image. Probing below the surface must inevitably lead to leakage of the real self. New personality content must be continually created and remembered if the fictional entity is to maintain a disconnect from the real person. Holding the in-depth memory necessary to recall full personality aspects and history for numerous personalities and executing them is beyond most people. That means that most characters in shared spaces take on at least some characteristics of their owners.

But back to the point. These fabrications should be considered as part of that person. They are an ‘I’ just as much as any other ‘I’. Only their context is different. Those parts may only be presented to a subset of the people they deal with, but by running them, the person’s brain can’t avoid internalizing the experience of doing so. They may be partly separated, but they are fully open to the consciousness of that person. I think that as augmented and virtual reality take off over the next few years, we will see their importance grow enormously. As virtual worlds start to feel more real, so their anchoring and effects in the person’s mind must get stronger.

More than a decade ago, AI software agents started inhabiting chat rooms too, and in some cases these ‘bots’ became enough of a nuisance that they were banned. The front that they present is shallow, but it can give an illusion of reality. To some degree, they are an extension of the person or people who wrote their code. In fact, some are deliberately designed to represent a person when they are not present. The experiences that they have can’t be properly internalized by their creators, so they are a very limited extension to self. But how long will that be true? Eventually, with direct brain links and transhuman brain extensions into cyberspace, the combined experiences of I-bots may be fully available to consciousness just the same as first-hand experiences.

Then it will get interesting. Some of those bots might be part of multiple people. People’s consciousnesses will start to overlap. People might collect them, or subscribe to them. Much as you might subscribe to my blog, maybe one day, part of one person’s mind, manifested as a bot or directly ‘published’, will become part of your mind. Some people will become absorbed into the experience and adopt so many that their own original personality becomes diluted to the point of disappearance. They will become just an interference pattern of numerous minds. Some will be so infectious that they will spread widely. For many, it will be impossible to die, and for many others, their minds will be spread globally. The hive minds of Dr Who, then later the Borg on Star Trek are conceptual prototypes but as with any sci-fi, they are limited by the imagination of the time they were conceived. By the time they become feasible, we will have moved on and the playground will be far richer than we can imagine yet.

So ‘I’ has a future, just like everything else. We may only have started adding extra facets a couple of decades ago, but the future will see our concept of self evolve far more quickly.

Postscript

I got asked by a reader whether I worry about this stuff. Here is my reply:

It isn’t the technology that worries me so much as the fact that humanity doesn’t really have any fixed anchor to keep human nature in place. Genetics fixed our biological nature, and our values and morality were largely anchored by the main religions. We in the West have thrown our religion in the bin and are already seeing a 30-year cycle in moral judgments, which puts our value sets on something of a random walk with no destination, the current direction governed solely by media interpretation of, and political reaction to, the happenings of the day. Political correctness enforces subscription to that value set even more strictly than any bishop ever forced religious compliance. Anyone who thinks religion has gone away just because people don’t believe in God any more is blind.

Then as genetics technology truly kicks in, we will be able to modify some aspects of our nature. Who knows whether some future busybody will decree that a particular trait must be filtered out because it doesn’t fit his or her particular value set? Throwing AI into the mix as a new intelligence alongside us will introduce another degree of freedom. So there are already several forces acting on us in fairly randomized directions that can combine to drag us quickly anywhere. Then add the stuff above that allows us to share and swap personality? Sure, I worry about it. We are like young kids being handed a big chemistry set for Christmas without the instructions, not knowing that adding the blue stuff to the yellow stuff and setting it alight will go bang.

I am certainly no technotopian. I see the enormous potential that the tech can bring; it could be wonderful, and I can’t help but be excited by it. But to get that you need to make the right decisions, and when I look at the sorts of leaders we elect and the sorts of decisions that are made, I can’t find the confidence that we will make the right ones.

On the good side, engineers and scientists are usually smart and can see most of the issues and prevent most of the big errors by using common industry standards, so there is a parallel self-regulatory system in place that politicians rarely have any interest in. On the other side, those smart guys will unfortunately usually follow the same value sets as the rest of the population. So we’re quite likely to avoid major accidents and blowing ourselves up or being taken over by AIs. But we’re unlikely to avoid the random walk values problem, and that will be our downfall.

So it could be worse, but it could be a whole lot better too.

 

Switching people off

A very interesting development has been reported in the discovery of how consciousness works, where neuroscientists stimulating a particular brain region were able to switch a woman’s state of awareness on and off. They said: “We describe a region in the human brain where electrical stimulation reproducibly disrupted consciousness…”

http://www.newscientist.com/article/mg22329762.700-consciousness-onoff-switch-discovered-deep-in-brain.html.

The region of the brain concerned was the claustrum, and apparently nobody had tried stimulating it before, although Francis Crick and Christof Koch had suggested the region would likely be important in achieving consciousness. Apparently, the woman involved in this discovery was also missing some of her hippocampus, and that may be a key factor, but they don’t know for sure yet.

Mohamed Koubeissi and his team at the George Washington University in Washington DC were investigating her epilepsy and stimulated her claustrum area with high-frequency electrical impulses. When they did so, the woman lost consciousness, no longer responding to any audio or visual stimuli, just staring blankly into space. They verified that she was not showing any signs of epileptic activity at the time, and repeated the experiment with similar results over two days.

The team urges caution and recommends not jumping to too many conclusions. They did note the obvious potential advantage as an anesthesia substitute, if it can be made generally usable.

As a futurologist, it is my job to look as far down the road as I can see, and imagine as much as I can. Then I filter out all the stuff that is nonsensical, or that doesn’t have a decent potential social or business case, or where, as in this case, the research team suggests that it is too early to draw conclusions. I make exceptions where it seems that researchers are being over-cautious or covering their asses or being PC or unimaginative, but I have no evidence of that in this case. However, the other good case for making exceptions is where it is good fun to jump to conclusions. Anyway, it is Saturday, I’m off work, so in the great words of Dr Emmett Brown in ‘Back to the Future’: “Well, I figured, what the hell.”

OK, IF it works for everyone without removing parts of the brain, what will we do with it and how?

First, it is reasonable to assume that we can produce electrical stimulation at specific points in the brain by using external kit. Trans-cranial magnetic stimulation might work, or perhaps implants may be possible using injection of tiny particles that migrate to the right place rather than needing significant surgery. Failing those, a tiny implant or two via a fine needle into the right place ought to do the trick. Powering via induction should work. So we will be able to produce the stimulation, once the sucker victim subject has the device implanted.

I guess that could happen voluntarily, or via a court ordered protective device, as a condition of employment or immigration, or conditional release from prison, or a supervision order, or as a violent act or in war.

Imagine if government demands a legal right to access it, for security purposes and to ensure your comfort and safety, of course.

If you think 1984 has already gone too far, imagine a government or police officer that can switch you off if you are saying or thinking the wrong thing. Automated censorship devices could ensure that nobody discusses prohibited topics.

Imagine if people on the street were routinely switched off as a VIP passes to avoid any trouble for them.

Imagine a future carbon-reduction law where people are immobilized for an hour or two each day during certain periods. There might be a quota for how long you are allowed to be conscious each week to limit your environmental footprint.

In war, captives could have devices implanted to make them easy to control, simply turned off for packing and transport to a prison camp. A perimeter fence could be replaced by a line in the sand. If a prisoner tries to cross it, they are rendered unconscious automatically and put back where they belong.

Imagine a higher class of mugger that doesn’t like violence much and prefers to switch victims off before stealing their valuables.

Imagine being able to switch off for a few hours to pass the time on a long haul flight. Airlines could give discounts to passengers willing to be disabled and therefore less demanding of attention.

Imagine a couple or a group of friends, or a fetish club, where people can turn each other off at will. Once off, other people can do anything they please with them – use them as dolls, as living statues or as mannequins, posing them, dressing them up. This is not an adult blog, so just use your imagination – it’s pretty obvious what people will do and what sorts of clubs will emerge if an off-switch is feasible, making people into temporary toys.

Imagine if you got an illegal hacking app and could freeze the other people in your vicinity. What would you do?

Imagine if your off-switch is networked and someone else has a remote control or hacks into it.

Imagine if an AI manages to get control of such a system.

Having an off-switch installed could open a new world of fun, but it could also open up a whole new world for control by the authorities, crime control, censorship or abuse by terrorists and thieves and even pranksters.

 

 

Reverse engineering the brain is a very slow way to make a smart computer

The race is on to build conscious and smart computers and brain replicas. This article explains some of Markram’s approach: http://www.wired.com/wiredscience/2013/05/neurologist-markam-human-brain/all/

It is a nice project, and its aims are to make a working replica of the brain by reverse engineering it. That would work eventually, but it is slow and expensive and it is debatable how valuable it is as a goal.

Imagine if you want to make an aeroplane from scratch.  You could study birds and make extremely detailed reverse engineered mathematical models of the structures of individual feathers, and try to model all the stresses and airflows as the wing beats. Eventually you could make a good model of a wing, and by also looking at the electrics, feedbacks, nerves and muscles, you could eventually make some sort of control system that would essentially replicate a bird wing. Then you could scale it all up, look for other materials, experiment a bit and eventually you might make a big bird replica. Alternatively, you could look briefly at a bird and note the basic aerodynamics of a wing, note the use of lightweight and strong materials, then let it go. You don’t need any more from nature than that. The rest can be done by looking at ways of propelling the surface to create sufficient airflow and lift using the aerofoil, and ways to achieve the strength needed. The bird provides some basic insight, but it simply isn’t necessary to copy all a bird’s proprietary technology to fly.

Back to Markram. If the real goal is to reverse engineer the actual human brain and make a detailed replica or model of it, then fair enough. I wish him and his team, and their distributed helpers and affiliates, every success with that. If the project goes well, and we can find insights to help with the hundreds of brain disorders and improve medicine, great. A few billion euros will have been well spent, especially given the waste of more billions of euros elsewhere on futile and counter-productive projects. Lots of people criticise his goal, and some of their arguments are nonsensical. It is a good project and, for what it’s worth, I support it.

My only real objection is that a simulation of the brain will not think well and at best will be an extremely inefficient thinking machine. So if a goal is to achieve thought or intelligence, the project as described is barking up the wrong tree. If that isn’t a goal, so what? It still has the other uses.

A simulation can do many things. It can be used to follow through the consequences of an input if the system is sufficiently well modelled. A sufficiently detailed and accurate brain simulation could predict the impacts of a drug, or the behaviours resulting from certain mental processes. It could follow through the impacts and chain of events resulting from an electrical impulse, thus finding out what the eventual result will be. It can therefore very inefficiently predict the result of thinking, and by using extremely high-speed computation it could in principle work out the end result of some thoughts. But it needs enormous detail and algorithmic precision to do that, and I doubt it is achievable simply because of the volume of calculation needed. Thinking properly requires consciousness, and therefore emulation. A conscious circuit has to be built, not just modelled.

Consciousness is not the same as thinking. A simulation of the brain would not be conscious, even if it can work out the result of thoughts. It is the difference between printed music and played music. One is data, one is an experience. A simulation of all the processes going on inside a head will not generate any consciousness, only data. It could think, but not feel or experience.

Having made that important distinction, I still think that Markram’s approach will prove useful. It will generate many useful insights into the workings of the brain, and into many of the processes nature uses to solve certain engineering problems. These insights and techniques can be used as input into other projects. Biomimetics is already proven as a useful tool in solving big problems. Looking at how the brain works will give us hints on how to make a truly conscious, properly thinking machine. But just as with birds and Airbuses, we can take ideas and inspiration from nature and then do it far better. No bird can carry the weight or fly as high or as fast as an aeroplane. No proper plane uses feathers or flaps its wings.

I wrote recently about how to make a conscious computer:

https://timeguide.wordpress.com/2013/02/15/how-to-make-a-conscious-computer/ and https://timeguide.wordpress.com/2013/02/18/how-smart-could-an-ai-become/

I still think that approach will work well, and it could be a decade faster than going Markram’s route. All the core technology needed to start making a conscious computer already exists today. With funding and some smart minds to set the process in motion, it could be done in a couple of years. The resulting conscious and ultra-smart computer, properly harnessed, could do its research far faster than any human on Markram’s team. It could easily beat them to the goal of a replica brain. The converse is not true: Markram’s current approach would yield a conscious computer only very slowly.

So while I fully applaud the effort and endorse the goals, changing the approach now could give far more bang for the buck, far faster.