Category Archives: IT

The future of cyberspace

I promised in my last blog to do one on the dimensions of cyberspace. I made this chart 15 years ago, in two parts for easy reading. The dimensions it lists are still valid and I can’t think of any new ones to add right now, but if I think of more I might make an update with a third part. I changed the name to virtuality because the chart actually only covers human-accessed cyberspace, but I’m not entirely sure that was a good idea. Needs work.

cyberspace dimensions

cyberspace dimensions 2

The chart has 14 dimensions (control has two independent parts), and I identified some of the possible points on each dimension. As dimensions should be, they are all orthogonal, i.e. independent of each other, so you can pick any point on one dimension and combine it with any point on each of the others. Standard augmented reality and pure virtual reality are just two of the potential combinations, out of the 2.5 x 10^11 possibilities above. At that rate, if every person in the world tried a different one every minute, it would still take over half an hour to visit them all even briefly. There are many more possible; this was never meant to be exhaustive, and even two more columns would push it to around 10 trillion combos. Already I can see that one more column could be ownership, another could be network implementation, another could be quality of illusion. What others have I missed?
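For anyone who wants to check my arithmetic, here is a quick back-of-envelope calculation. The 2.5 x 10^11 figure comes from the chart; the population, the one-try-per-minute rate and the ~6 options per extra column are rough assumptions of mine:

```python
# Back-of-envelope check on the combination counts above.
# Assumed values: ~2.5e11 combinations from the chart, 7 billion people,
# one try per minute, and ~6 options in each hypothetical extra column.
combos = 2.5e11
population = 7e9

tries_per_person = combos / population       # tries each person must make
minutes_needed = tries_per_person            # at one try per minute
print(f"{tries_per_person:.0f} tries each, about {minutes_needed:.0f} minutes")

# Two more columns of ~6 options each multiply the space by ~36:
expanded = combos * 6 * 6
print(f"expanded space: {expanded:.1e}")     # roughly 10 trillion
```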

The Future of IoT – virtual sensors for virtual worlds

I recently acquired a point-and-click thermometer for Futurizon, which gives an instant reading when you point it at something. I will soon know more about the world around me, though any personal discoveries I make are quite likely to be well known to science already. I don’t expect to win a Nobel prize by discovering breaches of the second law of thermodynamics, but that isn’t the point. The thermometer just measures the emission from a particular point in a particular frequency band, which indicates what temperature it is. It cost about £20, a pretty cheap stimulation tool to help me think about the future by understanding new things about the present. I already discovered that my computer screen doubles as a heater, but I suspected that already. Soon, I’ll know how much my head warms when I think hard, and for the futurology bit, where the best locations are to put thermal IoT stuff.

Now that I am discovering the joys of remote sensing, I want to know so much more. Sure, you can buy satellites for a billion pounds that will monitor anything anywhere, and for a few tens of thousands you can buy quite sophisticated lab equipment. For a few tens, not so much is available, and I doubt the tax man will agree that Futurizon needs a high-end oscilloscope or mass spectrometer, so I have to set my sights low. The results of this blog justify the R&D tax offset for the thermometer. But the future will see drops in costs for most high technologies, so I also expect to get far more interesting kit cheaply soon.

Even starting with the frequent assumption that in the future you can do anything, you still have to think what you want to do. I can get instant temperature readings now. In the future, I may also want a full absorption spectrum, color readings, texture and friction readings, hardness, flexibility, sound absorption characteristics, magnetic field strength, chemical composition, and a full range of biological measurements, just for fun. If Spock can have one, I want one too.

But that only covers reality, and reality will only account for a small proportion of our everyday life in the future. I may also want to check on virtual stuff, and that needs a different kind of sensor. I want to be able to point at things that only exist in virtual worlds. It needs to be able to see virtual worlds that are (at least partly) mapped onto real physical locations, and those that are totally independent and separate from the real world. I guess that is augmented reality ones and virtual reality ones. Then it starts getting tricky because augmented reality and virtual reality are just two members of a cyberspace variants set that runs to more than ten trillion members. I might do another blog soon on what they are, too big a topic to detail here.

People will be most interested in sensors to pick up geographically linked cyberspace. Much of the imaginary stuff is virtual worlds in computer games or similar, and many of those have built-in sensors designed for their spaces. So, my character can detect caves or forts or shrines from about 500m away in the virtual world of Oblivion (yes, it is from ages ago but it is still enjoyable). Most games have some sort of sensors built-in to show you what is nearby and some of its properties.

Geographically linked cyberspace won’t all be augmented reality because some will be there for machines, not people, but you might want to make sensors for it all the same, for many reasons, most likely for navigating it, debugging, or for tracking and identifying digital trespass. The last one is interesting. A rival company might well construct an augmented reality presence that allows you to see their products alongside ones in a physical shop. It doesn’t have to be in a properly virtual environment, a web page is still a location in cyberspace and when loaded, that instance takes on a geographic mapping via that display so it is part of that same trespass. That is legal today, and it started many years ago when people started using Amazon to check for better prices while in a book shop. Today it is pretty ubiquitous. We need sensors that can detect that. It may be accepted today as fair competition, but it might one day be judged as unfair competition by regulators for various reasons, and if so, they’ll need some mechanism to police it. They’ll need to be able to detect it. Not easy if it is just a web page that only exists at that location for a few seconds. Rather easier if it is a fixed augmented reality and you can download a map.

If for some reason a court does rule that digital trespass is illegal, one easy (though expensive) way of solving it would be to demand that all packets carry a geographic location, which of course the site would know when the person clicks on that link. To police that, turning off location would need to be blocked, or if it is turned off, sites would not be permitted to send you certain material that might not be permitted at that location. I feel certain there would be better, cheaper and more effective solutions.
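As a toy illustration of how such a policy might work (every name here is invented, and this is certainly not a real protocol, just the decision logic sketched out):

```python
# Toy sketch of the geo-tagged-packet policy: a site refuses to serve
# location-restricted content unless the request carries a location,
# and refuses when the location itself is on the restricted list.
# All names are invented for illustration.
def serve(restricted_locations, packet):
    location = packet.get("location")
    if location is None:
        return "blocked: no location attached"
    if location in restricted_locations:
        return "blocked: not permitted at this location"
    return "content delivered"

print(serve({"rival_shop"}, {"location": None}))
print(serve({"rival_shop"}, {"location": "rival_shop"}))
print(serve({"rival_shop"}, {"location": "home"}))
```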

I don’t intend to spend any longer exploring details here, but it is abundantly clear from just inspecting a few trees that making detectors for virtual worlds will be a very large and diverse forest full of dangers. Who should be able to get hold of the sensors? Will they only work in certain ‘dimensions’ of cyberspace? How should the watchers be watched?

The most interesting thing I can find though is that being able to detect cyberspace would allow new kinds of adventures and apps. You could walk through a doorway that also happens to double as a portal between many virtual universes, and you might not be able to make that jump in any other physical location. You might see future high street outlets that are nothing more than teleport chambers for cyberspace worlds. They might be stuffed with virtual internet-of-things things, and not one of them physical. Now that’s fun.


Ground up data is the next big data

This one sat in my draft folder since February, so I guess it’s time to finish it.

Big Data – I expect you’re as sick of hearing that term as I am. Gathering loads of data on everything that you, your company, or anything else you can access can detect, measure or record, then analyzing the hell out of it using data mining, an equally irritating term.

I long ago had a quick twitter exchange with John Hewitt, who suggested “What is sensing but the energy-constrained competition for transmission to memory, as memory is but that for expression?”. Neurons compete to see who gets listened to. Yeah, but I am still not much wiser as to what sensing actually is. Maybe I need a brain upgrade. (It’s like magnets. I used to be able to calculate the magnetic field densities around complicated-shaped objects – it was part of my first job in missile design – but even though I could do all the equations around EM theory, even general relativity, I still am no wiser how a magnetic field actually becomes a force on an object. I have an office littered with hundreds of neodymium magnets and I spend hours playing with them and I still don’t understand.) I can read about neurons all day but I still don’t understand how a bunch of photons triggering a series of electro-chemical reactions results in me experiencing an image. How does the physical detection become a conscious experience?

Well, I wrote some while back that we could achieve a conscious computer within two years. It’s still two years because nobody has started using the right approach yet. I have to stress the ‘could’, because nobody actually intends to do it in that time frame, but I really believe some half-decent lab could if they tried.  (Putting that into perspective, Kurzweil and his gang at Google are looking at 2029.) That two years estimate relies heavily on evolutionary development, for me the preferred option when you don’t understand how something works, as is the case with consciousness. It is pretty easy to design conscious computers at a black box level. The devil is in the detail. I argued that you could make a conscious computer by using internally focused sensing to detect processes inside the brain, and using a sensor structure with a symmetrical feedback loop. Read it:

http://timeguide.wordpress.com/2013/12/28/we-could-have-a-conscious-machine-by-end-of-play-2015/

In a nutshell, if you can feel thoughts in the same way as you feel external stimuli, you’d be conscious. I think. The symmetrical feedback loop bit is just a small engineering insight.
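To make the black-box idea concrete – and this is only a toy sketch with invented names and arbitrary numbers, not a design – the essence is that a single sensing pathway handles both external stimuli and the system’s own internal state, so a ‘thought’ is felt the same way the outside world is:

```python
# Toy sketch of the 'symmetrical feedback loop' idea: the same sense()
# pathway processes external stimuli and the system's own internal state.
# Class name, weights and update rule are all invented for illustration.
class SymmetricSensor:
    def __init__(self):
        self.internal_state = 0.0

    def sense(self, signal):
        # one pathway for everything: external or internal, same treatment
        return 0.5 * signal + 0.5 * self.internal_state

    def step(self, external_stimulus):
        felt_outside = self.sense(external_stimulus)   # feeling the world
        felt_inside = self.sense(self.internal_state)  # 'feeling' a thought
        self.internal_state = felt_outside + 0.1 * felt_inside
        return felt_outside, felt_inside

s = SymmetricSensor()
for stimulus in [1.0, 0.0, 0.0]:
    outside, inside = s.step(stimulus)
```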

The missing link in that is still the same one: how does sensing work? How do you feel?

At a superficial level, you point a sensor at something and it produces a signal in some sort of relationship to whatever it is meant to sense. We can do that bit. We understand that. Your ear produces signals according to the frequencies and amplitudes of incoming sound waves, a bit like a microphone. Just the same so far. However, it is by some undefined processes later that you consciously experience the sound. How? That is the hard problem in AI. It isn’t just me that doesn’t know the answer. ‘How does red feel?’ is a more commonly used variant of the same question.

When we solve that, we will replace big data as ‘the next big thing’. If we can make sensor systems that experience or feel something rather than just producing a signal, that’s valuable already. If those sensors pool their shared experience, another similar sensor system could experience that. Basic data quickly transmutes into experience, knowledge, understanding, insight and very quickly, value, lots of it. Artificial neural nets go some way to doing that, but they still lack consciousness. Simulated neural networks can’t even get beyond a pretty straightforward computation, putting all the inputs into an equation. The true sensing bit is missing. The complex adaptive analog neural nets in our brain clearly achieve something deeper than a man-made neural network.
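To see what I mean by ‘putting all the inputs into an equation’, here is a simulated neuron in a few lines. It produces a perfectly good signal, but nothing in it feels anything; the values here are arbitrary examples:

```python
import math

# A simulated neuron really is just an equation: a weighted sum of
# inputs squashed through a sigmoid. It outputs a signal; it does not
# experience anything.
def neuron(inputs, weights, bias):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))   # sigmoid squashing

out = neuron([0.2, 0.9], [1.5, -0.5], 0.1)
```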

Meanwhile, most current AI work barks up a tree in a different forest. IBM’s Watson will do great things; Google’s search engine AI will too. But they aren’t conscious and can’t be. They’re just complicated programs running on digital processors, with absolutely zero awareness of anything they are doing. Digital programs on digital computers will never achieve any awareness, no matter how fast the chips are.

However, back in the biological realm, nature manages just fine. So biomimetics offers a lot of hope. We know we didn’t get from a pool of algae to humans in one go. At some point, organisms started moving according to light, chemical gradients, heat, touch. That most basic process of sensing may have started out coupled to internal processes that caused movement without any consciousness. But if we can understand the analog processes (electrochemical, electronic, mechanical) that take the stimulus through to a response, and can replicate it using our electronic technology, we would already have actuator circuits, even if we don’t have any form of sensation or consciousness yet. A great deal of this science has been done already of course. The computational side of most chemical and physical processes can be emulated electronically by some means or another. Actuators will be a very valuable part of the cloud, but we already have the ability to make actuators by more conventional means, so doing it organically or biomimetically just adds more actuation techniques to the portfolio. Valuable but not a terribly important breakthrough.

Looking at the system a bit further along the evolutionary timeline, where eyes start to develop, where the most primitive nervous systems and brains start, where higher-level processing is obviously occurring and inputs are starting to become sensations, we should be able to see what has changed or is changing. It is the emergence of sensation we need to identify, even if the reaction is still an unconscious reflex. We don’t need to reverse engineer the human brain. Simple organisms are simpler to understand. Feeding the architectural insights we gain from studying those primitive systems into our guided evolution engines is likely to be far faster as a means of generating true machine consciousness and strong AI. That’s how we could develop consciousness in a couple of years rather than 15.

If we can make primitive sensing devices that work like those in primitive organisms, and can respond to specific sorts of sensory input, then that is a potential way of increasing the coverage of cloud sensing and even actuation. It would effectively be a highly distributed direct response system. With clever embedding of emergent phenomena techniques (such as cellular automata, flocking etc), it could be a quite sophisticated way of responding to quite complex distributed inputs, avoiding some of the need for big data processing. If we can gather the outputs from these simple sensors and feed them into others, that will be an even better sort of biomimetic response system. That sort of direct experience of a situation is very different from a data-mined result, especially if actuation capability is there too. The philosophical question as to whether the inclusion of that second bank of sensors makes the system in any way conscious remains, but it would certainly be very useful and valuable. The architecture we end up with via this approach may look like neurons, and could even be synthetic neurons, but that may be only one solution among many. Biology may have gone the neuron route but that doesn’t necessarily mean it is the only possibility. It may be that we could one day genetically modify bacteria to produce their own organic electronics to emulate the key processes needed to generate sensation, and to power them by consuming nutrients from their environment. I suggested smart yogurt based on this idea many years ago, and believe that it could achieve vast levels of intelligence.
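As a toy illustration of that emergent-response idea (the rule here is invented for the sketch), a row of dumb sensor cells, each reacting only to its immediate neighbours – a one-dimensional cellular automaton – can spread an alert outwards with no central processing at all:

```python
# A 1-D cellular automaton as a minimal distributed response system.
# Each 'sensor cell' sees only its two immediate neighbours (with
# wraparound); a cell fires next step if either neighbour fired.
# The rule is invented purely to illustrate local emergent behaviour.
def step(cells):
    n = len(cells)
    return [1 if cells[(i - 1) % n] or cells[(i + 1) % n] else 0
            for i in range(n)]

cells = [0, 0, 0, 1, 0, 0, 0]   # one sensor detects something...
for _ in range(2):
    cells = step(cells)          # ...and the alert spreads locally
```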

Digitizing and collecting the signals from the system at each stage would generate lots of data, and that may be used by programs to derive other kinds of results, or to relay the inputs to other analog sensory systems elsewhere. (It isn’t always necessary to digitize signals to transmit them, but it helps limit signal degradation and quickly becomes important if the signal is to travel far and is essential if it is to be recorded for later use or time shifting). However, I strongly suspect that most of the value in analog sensing and direct response is local, coupled to direct action or local processing and storage.

If we have these sorts of sensors liberally spread around, we’d create a truly smart environment, with local sensing and some basic intelligence able to relay sensation remotely to other banks of sensors elsewhere for further processing or even ultimately consciousness. The local sensors could be relatively dumb like nerve endings on our skin, feeding in signals to a more connected virtual nervous system, or a bit smarter, like neural retinal cells, doing a lot of analog pre-processing before relaying them via ganglia cells, and maybe part of a virtual brain. If they are also capable of or connected to some sort of actuation, then we would be constructing a kind of virtual organism, with tendrils covering potentially the whole globe, and able to sense and interact with its environment in an intelligent way.

I use the term virtual not because the sensors wouldn’t be real, but because their electronic nature allows connectivity to many systems, overlapping, hierarchical or distinct. Any number of higher level systems could ‘experience’ them as part of its system, rather as if your fingers could be felt by the entire human population. Multiple higher level virtual organisms could share the same basic sensory/data inputs. That gives us a whole different kind of cloud sensing.

By doing processing locally, in the analog domain, and dealing with some of the response locally, a lot of traffic across the network is avoided, along with a lot of remote processing. Any post-processing that does occur can therefore build on a higher-level foundation. A nice side effect of avoiding all the extra transmission and processing is increased environmental friendliness.

So, we’d have a quite different sort of data network, collecting higher quality data, essentially doing by instinct what data mining does with huge server farms and armies of programmers. Cloudy, but much smarter than a straightforward sensor net.

… I think.

It isn’t without risk though. I had a phone discussion yesterday on the dangers of this kind of network. In brief, it’s dangerous.

The future of prying

Prying is one side of the privacy coin, hiding being the other side.

Today, lots of Snapchat photos have been released, and no doubt some people are checking to see if there are any of people they know, and it is a pretty safe bet that some will send links to compromising pics of colleagues (or teachers) to others who know them. It’s a sort of push prying, isn’t it?

There is more innocent prying too. Checking out Zoopla to see how much your neighbour got for their house is a little bit nosy but not too bad, or at the extremely innocent end of the line, reading someone’s web page is the sort of prying they actually want some people to do, even if not necessarily you.

The new security software I just installed lets parents check up on their kids’ online activity. Protecting your kids is good, but monitoring every aspect of their activity just isn’t. It doesn’t give them the privacy they deserve, and it probably gets them used to being snooped on so that they accept state snooping more easily later in life. Every parent has to draw their own line, but kids do need to feel trusted as well as protected.

When adults install tracking apps on their partner’s phones, so they can see every location they’ve visited and every call or message they’ve made, I think most of us would agree that is going too far.

State surveillance is increasing rapidly. We often don’t even think of it as such. For example, when speed cameras are linked ‘so that the authorities can make our roads safer’, the incidental monitoring and recording of our comings and goings is collected without any social debate. Add that to the replacement of tax discs by number plate recognition systems linked to databases, and even more data is collected. Also ‘to reduce crime’, video from millions of CCTV cameras is stored, and some is of high enough quality to be analysed by machine to identify people’s movements and social connectivity. Then there are our phone calls, text messages, and all our web and internet accesses, all of which need to be stored, either in full or at least the metadata, so that ‘we can tackle terrorism’.

The state already has a very full picture of your life, and it is getting fuller by the day. When it is a benign government, it doesn’t matter so much, but if the data is not erased after a short period, then you also need to worry about future governments, and whether they will be benign too, or whether you will be one of the people they want to start oppressing. You also need to worry that access to your data is being granted to a growing number of public sector workers, for a widening range of reasons, with seemingly lower security competence, meaning that a good number of people around you will be able to find out rather more about you than they really ought. State prying is always sold to the electorate via assurances that it is to make us safer and more secure and reduce crime, but the state is staffed by your neighbors, and in the end, that means that your neighbors can pry on you.

Tracking cookies are a fact of everyday browsing, but mostly they are just trying to get data to market to us more effectively. Reading every email to get data for marketing may be stretching the relationship with the customer to the limits, but many of us gmail users still trust Google not to abuse our data too much, and certainly not to sell on our business dealings to potential competitors. It is still prying though, however automated it is, and a wider range of services are being linked all the time. The internet of things will provide data collection devices all over homes and offices too. We should ask how much we really trust global companies to hold so much data, much of it very personal, which, as we’ve seen several times this year, may be made available to anyone via hackers or forced to be handed over to the authorities. Almost certainly, bits of your entire collected and processed electronic activity history could get you higher insurance costs, or into trouble with family or friends or neighbors or the boss or the tax-man or the police. Surveillance doesn’t have to be real time. Databases can be linked, mashed up, and analysed with far-future software or AI too. In the ongoing search for crimes and taxes, who knows what future governments will authorize? If you wouldn’t make a comment in front of a police officer or tax-man, it isn’t safe to make it online or in a text.

Allowing email processing to get free email is a similar trade-off to using a supermarket loyalty card. You sell personal data for free services or vouchers. You have a choice to use that service or another supermarket or not use the card, so as long as you are fully aware of the deal, it is your lifestyle choice. The lack of good competition does reduce that choice though. There are not many good products or suppliers out there for some services, and in a few there is a de-facto monopoly. There can also be a huge inconvenience and time loss or social investment cost in moving if terms and conditions change and you don’t want to accept the deal any more.

On top of that state and global company surveillance, we now have everyone’s smartphones and visors potentially recording anything and everything we do and say in public, and we rarely have a say in what happens to that data or whether it is uploaded and tagged in some social media.

Some companies offer detective-style services where they will do thorough investigations of someone for a fee, picking up all they can learn from a wide range of websites they might use. Again, there are variable degrees that we consider acceptable according to context. If I apply for a job, I would think it is reasonable for the company to check that I don’t have a criminal record, and maybe look at a few of the things I write or tweet to see what sort of character I might be. I wouldn’t think it appropriate to go much further than that.

Some say that if you have done nothing wrong, you have nothing to fear, but none of them has a 3 digit IQ. The excellent film ‘Brazil’ showed how one man’s life was utterly destroyed by a single letter typo in a system scarily similar to what we are busily building.

Even if you are a saint, do you really want the pervert down the road checking out hacked databases for personal data on you or your family, or using their public sector access to see all your online activity?

The global population is increasing, and every day a higher proportion can afford IT and know how to use it. Networks are becoming better and AI is improving so they will have greater access and greater processing potential. Cyber-attacks will increase, and security leaks will become more common. More of your personal data will become available to more people with better tools, and quite a lot of them wish you harm. Prying will increase geometrically, according to Metcalfe’s Law I think.
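Metcalfe’s Law in miniature: the number of possible pairwise links, and hence potential prying relationships, grows roughly with the square of the number of connected people:

```python
# Metcalfe's Law intuition: n connected people form n*(n-1)/2 possible
# pairwise links, so the space of potential prying relationships grows
# roughly quadratically as more of the world comes online.
def possible_links(n):
    return n * (n - 1) // 2

for n in [10, 100, 1000]:
    print(n, possible_links(n))   # 45, 4950, 499500
```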

My defense against prying is having an ordinary life: not being famous or a major criminal, not being rich, and being reasonably careful about security, so there are lots of easier and more lucrative targets. But there are hundreds of millions of busybodies and jobsworths and nosy parkers and hackers and blackmailers out there with unlimited energy to pry, as well as anyone who doesn’t like my views on a topic and wants to throw some mud, and their future computers may be able to access and translate and process pretty much anything I type, as well as much of what I say and do anywhere outside my home.

I find myself self-censoring hundreds of times a day. I’m not paranoid. There are some people out to get me, and you, and they’re multiplying fast.


The future of obsolescence

My regular readers will know I am not a big fan of ‘green’ policies. I want to protect the environment, but green policies invariably end up damaging it. These policies normally arise from taking too simplistic a view – that all parts of the environmental system are independent of each other, so each part can be addressed in isolation to improve the environment as a whole. As a systems engineer since graduation, I always look at the whole system over the whole life cycle, and when you do that, you can see why green policies usually don’t work.

Tackling the problem of rapid obsolescence is one of the big errors in environmentalism. The error here is that rapid obsolescence is not necessarily a problem. Although at first glance it may appear to cause excessive waste and unnecessary environmental damage, on deeper inspection it is very clear that it has actually driven technology through very rapid change to the point where the same function can often be realized now with less material, less energy use, less pollution and less environmental impact. As the world gets richer and more people can afford to buy more things, it is a direct result of rapid obsolescence that those things have a better environmental impact than they would if the engineering life cycle had run through fewer times.

A 150g smart-phone replaces 750kg of 1990s IT. If the green policy of making things last longer and not replacing them had been in force back then, some improvement would still have arisen, but the chances are you would not have the smart phone or tablet, would still use a plasma TV, still need a hi-fi, camera and you’d still have to travel in person to do a lot of the things your smartphone allows you to do wherever you are. In IT, rapid obsolescence continues, soon all your IT will be replaced by active contact lenses and a few grams of jewelry. If 7Bn people want to have a good quality of digitally enabled lifestyle, then letting them do so with 5 grams of materials and milliwatts of power use is far better than using a ton of materials and kilowatts of power.

Rapid engineering progress lets us build safer bridges and buildings with less material, make cars that don’t rust after 3 years and run on less fuel, and gives us fridges and washing machines that use less energy. Yes, we throw things away, but thanks again to rapid obsolescence, the bits are now easily recyclable.

Whether greens like it or not, our way of throwing things away after a relatively short life cycle has been one of the greatest environmental successes of our age. Fighting against rapid obsolescence doesn’t make you a friend of the earth, it makes you its unwitting enemy.

Alcohol-free beer goggles

You remember that person you danced with and thought was wonderful, and then you met them the next day and your opinion was less favorable? That’s what people call beer goggles. Alcohol impairs judgment. It makes people chattier and improves their self confidence, but also makes them think others are more physically attractive and more interesting too. That’s why people get drunk apparently, because it upgrades otherwise dull people into tolerable company, breaking the ice and making people sociable and fun.

Augmented reality visors could double as alcohol-free beer goggles. When you look at someone while wearing the visor, you wouldn’t have to see them warts and all. You could filter the warts. You could overlay their face with an upgraded version, or indeed replace it with someone else’s face. They wouldn’t even have to know.

The arms of the visor could house circuits to generate high intensity oscillating magnetic fields – trans-cranial magnetic stimulation. This has been demonstrated as a means of temporarily switching off certain areas of the brain, or at least reducing their effects. Among areas concerned are those involved in inhibitions. Alcohol does that normally, but you can’t drink tonight, so your visor can achieve the same effect for you.

So the nominated driver could be more included in drunken behavior on nights out. The visor could make people more attractive and reduce your inhibitions, basically replicating at least some of what alcohol does. I am not suggesting for a second that this is a good thing, only that it is technologically feasible. At least the wearer can set alerts so that they don’t declare their undying love to someone without at least being warned of the reality first.

The future of Jelly Babies

Another frivolous ‘future of’, recycled from 10 years ago.

I’ve always loved Jelly Babies (Jelly Bears would work as well if you prefer those), and remember that Dr Who used to eat them a lot too. Perhaps we all have a mean streak, but I’m sure most of us sometimes bite off their heads before eating the rest. But that might all change. I must stress at this point that I have never even spoken to anyone from Bassetts, who make the best ones, and I have absolutely no idea what plans they might have, and they might even strongly disapprove of my suggestions, but they certainly could do this if they wanted, as could anyone else who makes Jelly Babies or Jelly Bears or whatever.

There will soon be various forms of edible electronics. Some electronic devices can already be swallowed, including a miniature video camera that can take pictures all the way as it proceeds through your digestive tract (I don’t know whether they bother retrieving them though). Some plastics can be used as electronic components. We also have loads of radio frequency identity (RFID) tags around now. Some tags work in groups, recording whether they have been separated from each other at some point, for example. With nanotech, we will be able to make tags using little more than a few well-designed molecules, and few materials are so poisonous that a few molecules can do you much harm so they should be sweet-compliant. So extrapolating a little, it seems reasonable to expect that we might be able to eat things that have specially made RFID tags in them.  It would make a lot of sense. They could be used on fruit so that someone buying an apple could ingest the RFID tag on it without concern. And as well as work on RFID tags, many other electronic devices can be made very small, and out of fairly safe materials too.

So I propose that Jelly Baby manufacturers add three organic RFID tags to each jelly baby (legs, head and body), some processing, and a simple communications device. When someone bites the head off a jelly baby, the jelly baby would ‘know’, because the tags would now be separated. The other electronics in the jelly baby could then come into play, setting up a wireless connection to the nearest streaming device and screaming through the loudspeakers. It could also link to the rest of the jelly babies left in the packet, sending out a radio distress call. The other jelly babies, and any friends they can solicit help from via the internet, could then use their combined artificial intelligence to organise a retaliatory strike on the person’s home computer. They might trash the hard drive, upload viruses, or post a stroppy complaint on social media about the person’s cruelty.
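Just for fun, the detection logic is simple enough to sketch. This is a purely hypothetical toy model (all names invented, no real RFID protocol involved): each sweet carries three paired tags, a tag that stops responding is treated as separated, and a missing head with the body still present triggers the distress call.

```python
# Toy model of the tag-separation idea: each jelly baby carries three
# paired tags (head, body, legs). A tag that no longer responds is
# treated as separated; a missing head with the body still present is
# read as a bitten-off head. Everything here is hypothetical.

class JellyBaby:
    def __init__(self, name):
        self.name = name
        self.tags = {"head", "body", "legs"}  # tags currently responding

    def bite(self, part):
        self.tags.discard(part)  # that tag no longer responds

    def head_bitten_off(self):
        # Distress condition: body tag responds but the head tag doesn't
        return "head" not in self.tags and "body" in self.tags

def distress_calls(packet):
    # The packet polls its members and lists any decapitated victims
    return [jb.name for jb in packet if jb.head_bitten_off()]

packet = [JellyBaby("red"), JellyBaby("green"), JellyBaby("yellow")]
packet[1].bite("head")        # someone bites the green one's head off
print(distress_calls(packet)) # -> ['green']
```

A sweet eaten whole (all three tags gone together) would not match the distress condition, which is exactly the “eat them in the right order” rule described below.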

This would make eating jelly babies even more fun than today. People used to spend fortunes going on safari to shoot lions. I presume it was exciting at least in part because there was always a risk that you might not kill the lion and it might eat you instead. With our environmentally responsible attitudes, it is no longer socially acceptable to hunt lions, but jelly babies could be the future replacement. As long as you eat them in the right order, with the appropriate respect and ceremony and so on, you would just enjoy eating a nice sweet. If you get it wrong, your life is trashed for the next day or two. That would level the playing field a bit.

Jelly Baby anyone?

The future of I

Me, myself, I, identity, ego, self, lots of words for more or less the same thing. The way we think of ourselves evolves just like everything else. Perhaps we are still cavemen with better clothes and toys. You may be a man, a dad, a manager, a lover, a friend, an artist and a golfer, and those are all just descendants of caveman, dad, tribal leader, lover, friend, cave drawer and stone thrower. When you play Halo as Master Chief, that is not very different from acting or putting a tiger skin on for a religious ritual. There have always been many aspects of identity and people have always occupied many roles simultaneously. Technology changes but it still pushes the same buttons that we evolved hundreds of thousands of years ago.

Will we develop new buttons to push? Will we create any genuinely new facets of ‘I’? I wrote a fair bit about aspects of self when I addressed the related topic of gender, since self perception includes perceptions of how others perceive us and attempts to project chosen identity to survive passing through such filters:

http://timeguide.wordpress.com/2014/02/14/the-future-of-gender-2/

Self is certainly complex. Using ‘I’ simplifies the problem. When you say ‘I’, you are communicating with someone (possibly yourself). The ‘I’ refers to a tailored, context-dependent blend made up of a subset of what you genuinely consider to be you and what you want to project, which may be largely fictional. So in a chat room, where people often have never physically met, one fictional entity is very often talking to another fictional entity, with each side only loosely coupled to reality. I think that is different from caveman days.

Since chat rooms started, virtual identities have come a long way. As well as acting out manufactured characters such as the heroes in computer games, people fabricate their own characters for a broad range of ‘shared spaces’, design personalities and act them out. They may run that personality instance in parallel with many others, possibly dozens at once. Putting on an act is certainly not new, and friends easily detect acts in normal interactions when they have known a real person a long time, but online interaction can mean that the fictional version is presented as the only manifestation of self that the group sees. With no means of knowing that person through face to face contact, the group has to take them at face value and interact with them as such, though they know that may not represent reality.

These designed personalities may be designed to give away as little as possible of the real person wielding them, and may exist for a range of reasons, but in such a case the person inevitably presents a shallow image. Probing below the surface must inevitably lead to leakage of the real self. New personality content must be continually created and remembered if the fictional entity is to maintain a disconnect from the real person. Holding the in-depth memory necessary to recall full personality aspects and history for numerous personalities and executing them is beyond most people. That means that most characters in shared spaces take on at least some characteristics of their owners.

But back to the point. These fabrications should be considered as part of that person. They are an ‘I’ just as much as any other ‘I’. Only their context is different. Those parts may only be presented to subsets of the role population, but by running them, the person’s brain can’t avoid internalizing the experience of doing so. They may be partly separated but they are fully open to the consciousness of that person. I think that as augmented and virtual reality take off over the next few years, we will see their importance grow enormously. As virtual worlds start to feel more real, so their anchoring and effects in the person’s mind must get stronger.

More than a decade ago, AI software agents started inhabiting chat rooms too, and in some cases these ‘bots’ became a sufficient nuisance that they were banned. The front that they present is shallow but can give an illusion of reality. To some degree, they are an extension of the person or people that wrote their code. In fact, some are deliberately designed to represent a person when they are not present. The experiences that they have can’t be properly internalized by their creators, so they are a very limited extension to self. But how long will that be true? Eventually, with direct brain links and transhuman brain extensions into cyberspace, the combined experiences of I-bots may be fully available to consciousness just the same as first hand experiences.

Then it will get interesting. Some of those bots might be part of multiple people. People’s consciousnesses will start to overlap. People might collect them, or subscribe to them. Much as you might subscribe to my blog, maybe one day, part of one person’s mind, manifested as a bot or directly ‘published’, will become part of your mind. Some people will become absorbed into the experience and adopt so many that their own original personality becomes diluted to the point of disappearance. They will become just an interference pattern of numerous minds. Some will be so infectious that they will spread widely. For many, it will be impossible to die, and for many others, their minds will be spread globally. The hive minds of Dr Who, then later the Borg on Star Trek are conceptual prototypes but as with any sci-fi, they are limited by the imagination of the time they were conceived. By the time they become feasible, we will have moved on and the playground will be far richer than we can imagine yet.

So, ‘I’ has a future just as everything else. We may have just started to add extra facets a couple of decades ago, but the future will see our concept of self evolve far more quickly.

Postscript

I got asked by a reader whether I worry about this stuff. Here is my reply:

It isn’t the technology that worries me so much as the fact that humanity doesn’t really have any fixed anchor to keep human nature in place. Genetics fixed our biological nature, and our values and morality were largely anchored by the main religions. We in the West have thrown our religion in the bin and are already seeing a 30-year cycle in moral judgments which puts our value sets on something of a random walk, with no destination, the current direction governed solely by media interpretation of, and political reaction to, the happenings of the day. Political correctness enforces subscription to that value set even more strictly than any bishop ever enforced religious compliance. Anyone who thinks religion has gone away just because people don’t believe in God any more is blind.

Then, as genetics technology truly kicks in, we will be able to modify some aspects of our nature. Who knows whether some future busybody will decree that a particular trait must be filtered out because it doesn’t fit his or her particular value set? Throwing AI into the mix as a new intelligence alongside us will introduce another degree of freedom. So there are already several forces acting on us in fairly random directions that can combine to drag us quickly anywhere. Then add the stuff above that allows us to share and swap personality. Sure I worry about it. We are like young kids being handed a big chemistry set for Christmas without the instructions, not knowing that adding the blue stuff to the yellow stuff and setting it alight will go bang.

I am certainly no technotopian. I see the enormous potential that the tech can bring and it could be wonderful and I can’t help but be excited by it. But to get that you need to make the right decisions, and when I look at the sorts of leaders we elect and the sorts of decisions that are made, I can’t find the confidence that we will make the right ones.

On the good side, engineers and scientists are usually smart and can see most of the issues and prevent most of the big errors by using common industry standards, so there is a parallel self-regulatory system in place that politicians rarely take any interest in. On the other side, those smart guys will unfortunately usually follow the same value sets as the rest of the population. So we’re quite likely to avoid major accidents, blowing ourselves up or being taken over by AIs, but we’re unlikely to avoid the random-walk values problem, and that will be our downfall.

So it could be worse, but it could be a whole lot better too.


The future of high quality TV

I occasionally do talks on future TV and I generally ignore current companies and their recent developments, because people can read about them anywhere. If it is already out there, it isn’t the future. Companies make announcements of technologies they expect to bring in soon, which is the future, but they don’t tend to announce things until they’re almost ready for market, so tracking those is of little use for long-term futurology.

Thanks to Pauline Rigby on Twitter, I saw the following article about Dolby’s new High Dynamic Range TV:

http://www.redsharknews.com/technology/item/2052-the-biggest-advance-in-video-for-ten-years-and-it-s-nothing-to-do-with-resolution

High dynamic range allows light levels to be reproduced across a high dynamic range. I love tech, terminology is so damned intuitive. So hopefully we will see the darkest blacks and the brightest lights.

It looks like a good idea! But it won’t be their last development. They say the best way to predict the future is to invent it, so here’s my idea: textured pixels.

As they say, there is more to vision than just resolution. There is more to vision than just light too, even though our eyes can only make images from incoming photons and human eyes can’t even differentiate their polarisation. Eyes are not just photon detectors, they also do some image pre-processing, and the brain does a great deal more processing, using all sorts of clues from the image context.

Today’s TV displays mostly use red, blue and green LCD pixels back-lit by LEDs, fluorescent tubes or other lighting. Some newer ones use LEDs as the actual pixels, demonstrating just how stupid it was to call LCD TVs with LED back-lighting LED TVs. Each pixel that results is a small light source that can vary in brightness. Even with the new HDR that will still be the case.

Having got HDR, I suggest that textured pixels should be the next innovation. Texture is a hugely important context for vision. Micromechanical devices are becoming commonplace, and some proteins are moving into nano-motor technology territory. It would be possible to change the direction of a small plate that makes up the area of the pixel. At smaller scales, ridges could be created on the pixel, or peaks and troughs. Even reversible chemical changes could be made. Technology can go right down to nanoscale, far below the ability of the eye to perceive it, so matching the eye’s capabilities to discern texture should be feasible in the near future. If a region of the display has a physically different texture than other areas, that is an extra level of reality that the eye can perceive. It could appear glossy or matt, rough or smooth, warm or cold. Linking pixels together across an area, it could convey movement far better than jerky video frames. Sure, you can emulate texture to some degree using just light, but that loses the natural subtlety.

So HDR good, Textured HDR better.


The future of gardens

It’s been weeks since my last blog. I started a few but they need some more thought so as a catch-up, here is a nice frivolous topic, recycled from 1998.

Surely gardens are a place to get back to nature, to escape from technology? Well, when journalists ask to see really advanced technology, I take them to the garden. Humans still have a long way to go to catch up with what nature does all the time. A dragonfly catching smaller flies is just a hint of future warfare, and every flower is an exercise in high precision marketing, let alone engineering. But we will catch up, and even the stages between now and then will be fun.

Advanced garden technology today starts and ends with robotic lawn trimmers. I guess you could add the special materials used in garden tools, advanced battery tech, security monitoring, plant medications and nutrition. OK, there are already lots of advanced technologies in gardens, they just aren’t very glamorous. The fact is that our gardens already use a wide range of genetically enhanced plants and flowers, state of the art fertilizers and soil conditioners, fancy lawnmowers and automatic sprinkler systems. So what can we expect next?

Fiber optic plants already add a touch of somewhat tacky enchantment to a garden and can be a good substitute for more conventional lighting. Home security uses video cameras and webcams, and some rather fun documentaries have resulted from videoing pets and wild animals during the night. There will soon be many other appliances in the future garden, including armies of robots and micro-bots doing a range of jobs: cutting the grass every time a blade gets more than 3 cm long, weeding, watering, pollinating, or carrying individual grains of fertilizer to the plants that need it. Others will fight with bugs or tidy up debris, or remove dying flowers to keep the garden looking pristine. They could even assist in propagation, burying seeds in just the right places and tending them while they become established. The garden pond may have robot ducks or fish just for fun.

Various sensors may be inserted into the ground around the garden, or smart dust just sprinkled randomly. These would warn when the ground is getting too dry and perhaps co-ordinate automatic sprinklers. They could also monitor the chemical composition, advising the gardener where to add which type of fertilizer or conditioner. In fact, when the price and size fall sufficiently, electronic sensors might well be mixed in with fertilizer and other garden care products.

With all this robot assistance, the human may design the garden and then just let the robots get on with the construction and maintenance. Or maybe just download a garden plan if they’re really lazy, or get the AI to download one.

Another obvious potential impact comes in the shape of genetic engineering. While designing the genome for custom plants is not quite as simple as assembling Lego blocks, we will nevertheless be able to pick and choose from a wide variety of characteristics available from anywhere in the plant and animal kingdoms. We are promised blue roses that smell of designer perfumes, grass that only needs cutting once a year and ground cover plants that actually grow faster than weeds. By messing about with genes we can thus change the appearance and characteristics of plants enormously, and while getting a company logo to appear on a flower petal might be beyond us, the garden could certainly look much more kaleidoscopic than today’s. We are already in the era where genetics has become a hobbyist club activity, but so far the limits are pretty simple gene transfers to add fun things like fluorescence or light emission. Legislation will hopefully prevent people using such clubs to learn how to make viruses or bacteria for terrorist use.

In the long term we are not limited by the Lego bricks provided by nature. Nanotechnology will eventually allow us to produce inorganic ‘plants’. You might buy a seed and drop it in the required place, and it would grow into a predetermined structure just like an organic seed, taking the materials from the soil or air, or perhaps from some additives. However, there is almost no theoretical limit to the type of ‘plant’ that could be produced this way. Flowers with logos are possible, but so are video displays built into the flowers, and so are garden gnomes that wander around or that actually fish in the pond. A wide range of static and dynamic ornamentation could add fun to every garden. Nanotechnology has so many possibilities that there are almost no ultimate limits to what can be done, apart from the fundamental physics of materials. Power supplies for these devices could use solar, wind or thermal power.

On the patio, there is more scope for video displays in the paving and walls, to add color or atmosphere, and also to provide a recharging base for the robots without their own independent power supplies. Flat speakers could also be built into the walls, providing birdsong or other natural sounds that are otherwise declining in our gardens. Appropriately placed large display panels could simulate being on a beach while sunbathing in Nottingham (for non-Brits, Nottingham is a city not renowned for its sunshine, and very far from a beach).

All in all, the garden could become a place of relaxation, getting back to what we like best in nature, without all the boring bits of looking after it in our few spare hours. Even before we retire, we will be able to enjoy the garden, instead of just weeding and cutting the grass.

1998 is a long time ago and I have lots of new ideas for the garden now, but time demands I leave them for a later blog.