Category Archives: interfaces

The future of cyberspace

I promised in my last blog to do one on the dimensions of cyberspace. I made this chart 15 years ago, in two parts for easy reading. The dimensions it lists are still valid, and I can’t think of any new ones to add right now, though I may think of more later and make an update with a third part. I changed the name to virtuality because the chart actually only covers human-accessed cyberspace, but I’m not entirely sure that was a good thing to do. Needs work.

cyberspace dimensions

cyberspace dimensions 2

The chart has 14 dimensions (control has two independent parts), and I identified some of the possible points on each dimension. As dimensions should be, they are all orthogonal, i.e. independent of each other, so you can pick any point on any dimension and combine it with any point from each of the others. Standard augmented reality and pure virtual reality are just two of the roughly 2.5 x 10^11 possible combinations above. Even if every person in the world tried a different one every minute, it would still take over half an hour to visit them all even briefly. Many more are possible; this was never meant to be exhaustive, and just two more columns takes it to around 10 trillion combinations, enough to keep the whole world sampling for a day. Already I can see that one more column could be ownership, another network implementation, another quality of illusion. What others have I missed?
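The arithmetic behind those numbers is just a product of the point counts on each dimension. Since the chart itself isn’t reproduced here, the counts below are purely illustrative guesses (the real chart’s counts differ), but the sketch shows how the combination total and the whole-world sampling time fall out:

```python
from math import prod

# Hypothetical point counts for the 14 dimensions - illustrative only,
# not the counts from the actual chart.
points_per_dimension = [6, 7, 5, 8, 6, 7, 6, 5, 8, 7, 6, 6, 7, 5]

combinations = prod(points_per_dimension)
print(f"{combinations:.2e} combinations")   # order of 10^11

# Time for the whole world to sample them, one combination per person per minute
world_population = 7_000_000_000
minutes_each = combinations / world_population
print(f"about {minutes_each:.0f} minutes each")
```

Adding two more columns multiplies the product by their point counts, which is how a further pair of dimensions pushes the total towards 10 trillion.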

Ground up data is the next big data

This one sat in my draft folder since February, so I guess it’s time to finish it.

Big Data – I expect you’re as sick of hearing that term as I am. It means gathering loads of data on everything that you, your company, or anything else you can access can detect, measure and record, then analyzing the hell out of it using data mining, an equally irritating term.

I long ago had a quick Twitter exchange with John Hewitt, who suggested “What is sensing but the energy-constrained competition for transmission to memory, as memory is but that for expression?”. Neurons compete to see who gets listened to. Yeah, but I am still not much wiser as to what sensing actually is. Maybe I need a brain upgrade. (It’s like magnets. I used to be able to calculate the magnetic field densities around objects with complicated shapes – it was part of my first job in missile design – but even though I could do all the equations of EM theory, even general relativity, I am still no wiser as to how a magnetic field actually becomes a force on an object. I have an office littered with hundreds of neodymium magnets, I spend hours playing with them, and I still don’t understand.) I can read about neurons all day but I still don’t understand how a bunch of photons triggering a series of electro-chemical reactions results in me experiencing an image. How does the physical detection become a conscious experience?

Well, I wrote some while back that we could achieve a conscious computer within two years. It’s still two years, because nobody has started using the right approach yet. I have to stress the ‘could’, because nobody actually intends to do it in that time frame, but I really believe some half-decent lab could if they tried. (Putting that into perspective, Kurzweil and his gang at Google are looking at 2029.) That two-year estimate relies heavily on evolutionary development, which for me is the preferred option when you don’t understand how something works, as is the case with consciousness. It is pretty easy to design conscious computers at a black-box level; the devil is in the detail. I argued that you could make a conscious computer by using internally focused sensing to detect processes inside the brain, using a sensor structure with a symmetrical feedback loop. Read it:

http://timeguide.wordpress.com/2013/12/28/we-could-have-a-conscious-machine-by-end-of-play-2015/

In a nutshell, if you can feel thoughts in the same way as you feel external stimuli, you’d be conscious. I think. The symmetrical feedback loop bit is just a small engineering insight.

The missing link in that is still the same one: how does sensing work? How do you feel?

At a superficial level, you point a sensor at something and it produces a signal in some sort of relationship to whatever it is meant to sense. We can do that bit. We understand that. Your ear produces signals according to the frequencies and amplitudes of incoming sound waves, a bit like a microphone. Just the same so far. However, it is by some undefined processes later that you consciously experience the sound. How? That is the hard problem in AI. It isn’t just me that doesn’t know the answer. ‘How does red feel?’ is a more commonly used variant of the same question.

When we solve that, we will replace big data as ‘the next big thing’. If we can make sensor systems that experience or feel something rather than just producing a signal, that’s valuable already. If those sensors pool their shared experience, another similar sensor system could experience that. Basic data quickly transmutes into experience, knowledge, understanding, insight and very quickly, value, lots of it. Artificial neural nets go some way to doing that, but they still lack consciousness. Simulated neural networks can’t even get beyond a pretty straightforward computation, putting all the inputs into an equation. The true sensing bit is missing. The complex adaptive analog neural nets in our brain clearly achieve something deeper than a man-made neural network.
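To illustrate the ‘straightforward computation’ point: a simulated neuron, stripped of the biological vocabulary, is just a weighted sum pushed through a squashing function. A minimal sketch, with arbitrary illustrative inputs and weights (no claim that this resembles real neural sensing):

```python
import math

def neuron(inputs, weights, bias):
    """A simulated neuron: a weighted sum pushed through a squashing function."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid activation

# Arbitrary illustrative values - there is no sensing here, only arithmetic
output = neuron(inputs=[0.5, 0.9, 0.1], weights=[0.4, -0.2, 0.7], bias=0.1)
print(output)
```

However sophisticated the network built from such units, each step is still just an equation being evaluated, which is exactly the gap between computation and sensation argued above.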

Meanwhile, most current AI work barks up a tree in a different forest. IBM’s Watson will do great things; Google’s search engine AI will too. But they aren’t conscious and can’t be. They’re just complicated programs running on digital processors, with absolutely zero awareness of anything they are doing. Digital programs on digital computers will never achieve any awareness, no matter how fast the chips are.

However, back in the biological realm, nature manages just fine. So biomimetics offers a lot of hope. We know we didn’t get from a pool of algae to humans in one go. At some point, organisms started moving according to light, chemical gradients, heat, touch. That most basic process of sensing may have started out coupled to internal processes that caused movement without any consciousness. But if we can understand the analog processes (electrochemical, electronic, mechanical) that take the stimulus through to a response, and can replicate it using our electronic technology, we would already have actuator circuits, even if we don’t have any form of sensation or consciousness yet. A great deal of this science has been done already of course. The computational side of most chemical and physical processes can be emulated electronically by some means or another. Actuators will be a very valuable part of the cloud, but we already have the ability to make actuators by more conventional means, so doing it organically or biomimetically just adds more actuation techniques to the portfolio. Valuable but not a terribly important breakthrough.

Looking at the system a bit further along the evolutionary timeline, where eyes start to develop, where the most primitive nervous systems and brains appear, where higher-level processing is obviously occurring and inputs are starting to become sensations, we should be able to see what has changed or is changing. It is the emergence of sensation we need to identify, even if the reaction is still an unconscious reflex. We don’t need to reverse engineer the human brain. Simple organisms are simpler to understand. Feeding the architectural insights we gain from studying those primitive systems into our guided evolution engines is likely to be a far faster means of generating true machine consciousness and strong AI. That’s how we could develop consciousness in a couple of years rather than fifteen.

If we can make primitive sensing devices that work like those in primitive organisms, and that respond to specific sorts of sensory input, then that is a potential way of increasing the coverage of cloud sensing and even actuation. It would effectively be a highly distributed direct-response system. With clever embedding of emergent-phenomena techniques (such as cellular automata, flocking and so on), it could be quite a sophisticated way of responding to complex distributed inputs, avoiding some of the need for big data processing. If we can gather the outputs from these simple sensors and feed them into others, that will be an even better sort of biomimetic response system. That sort of direct experience of a situation is very different from a data-mined result, especially if actuation capability is there too. The philosophical question as to whether the inclusion of that second bank of sensors makes the system in any way conscious remains, but it would certainly be very useful and valuable. The architecture we end up with via this approach may look like neurons, and could even use synthetic neurons, but that may be only one solution among many. Biology may have gone the neuron route but that doesn’t necessarily mean it is the only possibility. It may be that we could one day genetically modify bacteria to produce their own organic electronics to emulate the key processes needed to generate sensation, and to power themselves by consuming nutrients from their environment. I suggested smart yogurt based on this idea many years ago, and believe that it could achieve vast levels of intelligence.
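The cellular automata idea can be sketched very simply: a line of dumb sensors, each obeying one local rule, can propagate a response across the whole array with no central processing at all. This toy is purely illustrative (real deployments would need richer local rules), but it shows the emergent-propagation principle:

```python
# A toy 1-D array of dumb sensors. Each applies one local rule; an alert
# spreads across the array with no central coordinator - the distributed
# response emerges from purely local behavior.

def step(cells):
    """Each cell becomes active if it or either neighbour was active."""
    n = len(cells)
    return [
        1 if cells[i] or cells[(i - 1) % n] or cells[(i + 1) % n] else 0
        for i in range(n)
    ]

cells = [0] * 11
cells[5] = 1          # one sensor detects a local event
for _ in range(3):    # three update cycles
    cells = step(cells)
print(cells)          # the alert has rippled three cells in each direction
```

Feeding the output of one such array into a second bank of sensors, as suggested above, would just mean using one array’s cell states as another array’s inputs.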

Digitizing and collecting the signals from the system at each stage would generate lots of data, which could be used by programs to derive other kinds of results, or to relay the inputs to other analog sensory systems elsewhere. (It isn’t always necessary to digitize signals to transmit them, but digitization helps limit signal degradation, quickly becomes important if the signal is to travel far, and is essential if it is to be recorded for later use or time-shifting.) However, I strongly suspect that most of the value in analog sensing and direct response is local, coupled to direct action or local processing and storage.

If we have these sorts of sensors liberally spread around, we’d create a truly smart environment, with local sensing and some basic intelligence able to relay sensation remotely to other banks of sensors elsewhere for further processing or even, ultimately, consciousness. The local sensors could be relatively dumb, like nerve endings on our skin, feeding signals into a more connected virtual nervous system; or a bit smarter, like retinal neurons, doing a lot of analog pre-processing before relaying signals via ganglion cells to what might be part of a virtual brain. If they are also capable of or connected to some sort of actuation, then we would be constructing a kind of virtual organism, with tendrils covering potentially the whole globe, able to sense and interact with its environment in an intelligent way.

I use the term virtual not because the sensors wouldn’t be real, but because their electronic nature allows connectivity to many systems, overlapping, hierarchical or distinct. Any number of higher level systems could ‘experience’ them as part of its system, rather as if your fingers could be felt by the entire human population. Multiple higher level virtual organisms could share the same basic sensory/data inputs. That gives us a whole different kind of cloud sensing.

By doing processing locally, in the analog domain, and dealing with some of the response locally, a lot of network traffic and remote processing is avoided. Any post-processing that does occur can therefore build on a higher-level foundation. A nice side effect of avoiding all that extra transmission and processing is increased environmental friendliness.

So, we’d have a quite different sort of data network, collecting higher quality data, essentially doing by instinct what data mining does with huge server farms and armies of programmers. Cloudy, but much smarter than a straightforward sensor net.

… I think.

It isn’t without risk though. I had a phone discussion yesterday on the dangers of this kind of network. In brief, it’s dangerous.

Alcohol-free beer goggles

You remember that person you danced with and thought was wonderful, and then you met them the next day and your opinion was less favorable? That’s what people call beer goggles. Alcohol impairs judgment. It makes people chattier and improves their self confidence, but also makes them think others are more physically attractive and more interesting too. That’s why people get drunk apparently, because it upgrades otherwise dull people into tolerable company, breaking the ice and making people sociable and fun.

Augmented reality visors could double as alcohol-free beer goggles. When you look at someone while wearing the visor, you wouldn’t have to see them warts and all. You could filter out the warts. You could overlay their face with an upgraded version, or indeed replace it with someone else’s face. They wouldn’t even have to know.

The arms of the visor could house circuits that generate high-intensity oscillating magnetic fields – transcranial magnetic stimulation. This has been demonstrated as a means of temporarily switching off certain areas of the brain, or at least reducing their effects. Among the areas concerned are those involved in inhibition. Alcohol does that normally, but if you can’t drink tonight, your visor can achieve the same effect for you.

So the nominated driver could join in the drunken behavior on nights out. The visor could make people look more attractive and reduce your inhibitions, basically replicating at least some of what alcohol does. I am not suggesting for a second that this is a good thing, only that it is technologically feasible. At least the wearer could set alerts so that they don’t declare their undying love to someone without being warned of the reality first.

The future of high quality TV

I occasionally do talks on future TV and I generally ignore current companies and their recent developments because people can read about them anywhere. If it is already out there, it isn’t the future. Companies make announcements of technologies they expect to bring in soon, which is the future, but they don’t tend to announce things until they’re almost ready for market so tracking those is no use for long term futurology.

Thanks to Pauline Rigby on Twitter, I saw the following article about Dolby’s new High Dynamic Range TV:

http://www.redsharknews.com/technology/item/2052-the-biggest-advance-in-video-for-ten-years-and-it-s-nothing-to-do-with-resolution

High dynamic range allows light levels to be reproduced across a high dynamic range. I love tech, terminology is so damned intuitive. So hopefully we will see the darkest blacks and the brightest lights.

It looks a good idea! But it won’t be their last development. We hear that the best way to predict the future is to invent it, so here’s my idea: textured pixels.

As they say, there is more to vision than just resolution. There is more to vision than just light too, even though our eyes can only make images from incoming photons and human eyes can’t even differentiate their polarisation. Eyes are not just photon detectors, they also do some image pre-processing, and the brain does a great deal more processing, using all sorts of clues from the image context.

Today’s TV displays mostly use red, blue and green LCD pixels back-lit by LEDs, fluorescent tubes or other lighting. Some newer ones use LEDs as the actual pixels, demonstrating just how stupid it was to call LCD TVs with LED back-lighting LED TVs. Each pixel that results is a small light source that can vary in brightness. Even with the new HDR that will still be the case.

Having got HDR, I suggest that textured pixels should be the next innovation. Texture is a hugely important context for vision. Micromechanical devices are becoming commonplace, and some proteins are moving into nano-motor technology territory. It would be possible to change the direction of a small plate that makes up the area of the pixel. At smaller scales, ridges could be created on the pixel, or peaks and troughs. Even reversible chemical changes could be made. Technology can go right down to the nanoscale, far below the eye’s ability to perceive it, so matching the eye’s capability to discern texture should be feasible in the near future. If a region of the display has a physically different texture from other areas, that is an extra level of reality that the eye can perceive. It could appear glossy or matt, rough or smooth, warm or cold. Linking pixels together across an area could convey movement far better than jerky video frames. Sure, you can emulate texture to some degree using just light, but that loses the natural subtlety.

So HDR good, Textured HDR better.

 

 

Estimating IoT value? Count ALL the beans!

In this morning’s news:

http://www.telegraph.co.uk/technology/news/11043549/UK-funds-development-of-world-wide-web-for-machines.html

£1.6M investment by UK Technology Strategy Board in Internet-of-Things HyperCat standard, which the article says will add £100Bn to the UK economy by 2020.

Gartner says that IoT has reached the peak of its hype cycle and I agree. Connecting machines together, and especially adding networked sensors, will certainly increase technology capability across many areas of our lives, but the appeal is often overstated and the dangers often overlooked. Value should not be measured in purely financial terms either. If you value health, wealth and happiness, don’t just measure the wealth. We value other things too, of course, but it is too tempting just to count the most conspicuous beans. For IoT, which really just adds a layer of extra functionality onto an already technology-rich environment, that is rather like estimating the value of a chili con carne by counting the kidney beans in it.

The headline negatives of privacy and security have been addressed often enough that I don’t need to explore them much more here, but let’s look at a couple of typical examples from the news article. Allowing remotely controlled washing machines will obviously impact your personal choice on laundry scheduling. The many similar shifts of control of your life to other agencies will all add up. Another one: ‘motorists could benefit from cheaper insurance if their vehicles were constantly transmitting positioning data’. Really? Insurance companies won’t want to earn less, so motorists on average will give them at least as much profit as before. What will happen is that insurance companies will enforce driving styles and car maintenance regimes that reduce your likelihood of a claim, or use that data to avoid paying out in some cases. If you have to rigidly obey lots of rules all of the time then driving will become far less enjoyable. Having to remember to check the tyre pressures and oil level every two weeks on pain of having your insurance voided is not one of the beans listed in the article, but it is entirely analogous to the typical home insurance rule that all your windows must have locks, and that they must all be locked and the keys hidden out of sight before the insurer will pay up on a burglary.

Overall, IoT will add functionality, but it certainly will not always be used to improve our lives. Look at the way the web developed. Think about the cookies and the pop-ups and the tracking and the incessant virus protection updates needed because of the extra functions built into browsers. You didn’t want those; they were added to increase capability and revenue for the paying site owners, not for the non-paying browsers. IoT will be the same. Some things will make minor aspects of your life easier, but the price will be that you are far more controlled, with far less freedom, less privacy and less security. Most of the data collected for business use or to enhance your life will also be available to government and police. We see every day the nonsense of the statement that if you have done nothing wrong, then you have nothing to fear. If you buy all that home kit with energy monitoring etc, how long before the data is hacked and you get put on militant environmentalist blacklists because you leave devices on standby? For every area where IoT saves you time or money or improves your control, there will be many others where it does the opposite, forcing you to do more security checks, spend more money on car, home and IoT maintenance, spend more time following administrative procedures and even follow health regimes enforced by government or insurance companies. IoT promises milk and honey, but will deliver it only as part of a much bigger and unwelcome lifestyle change. Sure, you can have a little more control, but only if you relinquish much more control elsewhere.

As IoT starts rolling out, these and many more issues will hit the press, and people will start to realise the downside. That will reduce the attractiveness of owning or installing such stuff, or subscribing to services that use it. There will be a very significant drop in the economic value from the hype. Yes, we could do it all and get the headline economic benefit, but the cost of greatly reduced quality of life is too high, so we won’t.

Counting the kidney beans in your chili is fine, but it won’t tell you how hot it is, and when you start eating it you may decide the beans just aren’t worth the pain.

I still agree that IoT can be a good thing, but the evidence of web implementation suggests we’re more likely to go through decades of abuse and grief before we get the promised benefits. Being honest at the outset about the true costs and lifestyle trade-offs will help people decide, and maybe we can get to the good times faster if that process leads to better controls and better implementation.

Ultra-simple computing: Part 4

Gel processing

One problem with making computers with a lot of cores is the wiring. Another is the distribution of tasks among the cores. Both can be solved with relatively simple architecture. Processing chips usually have a lot of connectors, letting them get data in parallel. But a beam of light can contain rays of millions of wavelengths, far more parallelism than is possible with wiring. If chips communicated using light with high-density wavelength division multiplexing, that would solve some wiring issues. Taking another simple step, processors freed from wiring don’t have to be on a circuit board, but could be suspended in some sort of gel. They could then use free-space interconnection to connect to many nearby chips. Line-of-sight availability would be much easier than on a circuit board. The gel could also be used to cool the chips.

Simpler chips with very few wired connections also means less internal wiring too. This reduces size still further and permits higher density of suspension without compromising line of sight.

Ripple scheduler

Process scheduling can also be done more simply with many processors; complex software algorithms are not needed. In an array of many processors, some would be idle while others are already engaged on tasks. When a job needs processing, a task request (this could be as simple as a short pulse at a certain frequency) would be broadcast and would propagate through the array. On encountering an idle processor, it would trigger an accept response (again, this could be a single pulse at another frequency), which would also propagate out as a wave through the array. These two waves may arrive at a given processor in quick succession.

Other processors could stand down automatically once one has accepted the job (i.e. when they detect the acceptance wave). That would be appropriate when all processors are equally able. Alternatively, if processors have different capabilities, the requesting agent would pick a suitable one from the returning acceptances, send a point-to-point message to it, and send out a cancel broadcast wave to stand the others down. It would exchange details about the task with this processor on a point-to-point link, avoiding swamping the system with unnecessary broadcast messages. An idle processor in the array would thus see a request wave, followed by a number of accept waves. It might then receive a personalized point-to-point message with task information, or, if it hasn’t been chosen, it would just see the cancel wave. Busy processors would ignore all communications except those directed specifically to them.

I’m not saying ripple scheduling is necessarily the best approach; it is just an example of a very simple system for process scheduling that doesn’t need sophisticated algorithms and code.
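The scheme above can be simulated in a few lines. This toy model treats each wave as an arrival time proportional to distance from the requesting node, so the nearest idle processor’s acceptance arrives first and the rest stand down; the processor positions and busy flags are invented for illustration:

```python
import random

# Toy model of the ripple scheduler described above. A wave is modelled
# simply as an arrival time proportional to distance from the requester.
random.seed(1)
processors = [
    {"id": i, "pos": i, "busy": random.random() < 0.5} for i in range(10)
]
requester_pos = 0

# An acceptance arrives after the request wave travels out and the reply
# travels back: round-trip time = 2 * distance (arbitrary units).
acceptances = [
    (2 * abs(p["pos"] - requester_pos), p["id"])
    for p in processors
    if not p["busy"]
]
arrival, chosen = min(acceptances)  # first acceptance wave to arrive wins
print(f"processor {chosen} accepted after {arrival} time units; others stand down")
```

In the capability-aware variant described above, the requester would instead wait for several acceptances and pick from them by capability rather than simply taking the first arrival.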

Activator Pastes

It is obvious that this kind of simple protocol can be used with a gel processing medium populated with a suitable mixture of different kinds of processors, sensors, storage, transmission and power devices to provide a fully scalable self-organizing array that can perform a high task load with very little administrative overhead. To make your smart gel, you might just choose the volume or weight ratios of the components you want and stir them into a gel, rather like mixing a cocktail. A paste made up in this way could be used to add sensing, processing and storage to any surface just by painting some of the paste onto it.

A highly sophisticated distributed cloud sensor network for example could be made just by painting dabs of paste onto lamp posts. Solar power or energy harvesting devices in the paste would power the sensors to make occasional readings, pre-process them, and send them off to the net. This approach would work well for environmental or structural monitoring, surveillance, even for everyday functions like adding parking meters to lines marking the spaces on the road where they interact with ID devices in the car or an app on the driver’s smartphone.

Special inks could contain a suspension of such particles and add a highly secure electronic signature onto one signed by pen and ink.

The tacky putty stuff that we use to stick paper to walls could use activator paste as the electronic storage and processing medium, letting you manage the content of an e-paper calendar or notice on a wall.

I can think of lots of ways of using smart pastes in health monitoring, packaging, smart makeup and so on. The basic principle stays the same though. It would be very cheap and yet very powerful, with many potential uses. Self-organising, and needs no set up beyond giving it a job to do, which could come from any of your devices. You’d probably buy it by the litre, keep some in the jar as your computer, and paste the rest of it all over the place to make your skin, your clothes, your work-spaces and your world smart. Works for me.

 

Interfacial prejudice

This blog was prompted by an interaction with Nick Colosimo – thanks, Nick.

We were discussing whether usage differences for gadgets were generational. I think they are but not because older people find it hard to learn new tricks. Apart from a few unfortunate people whose brains go downhill when they get old, older people have shown they are perfectly able and willing to learn web stuff. Older people were among the busiest early adopters of social media.

I think the problem is the volume of earlier habits that need to be unlearned. I am 53 and have used computers every day since 1981. I have used slide rules and log tables, an abacus, an analog computer, several mainframes, a few minicomputers, many assorted Macs and PCs and numerous PDAs, smartphones and now tablets. They all have very different ways of using them and although I can’t say I struggle with any of them, I do find the differing implementations of features and mechanisms annoying. Each time a new operating system comes along, or a new style of PDA, you have to learn a new design language, remember where all the menus, sub-menus and all the various features are hidden on this one, how they interconnect and what depends on what.

That’s where the prejudice kicks in. The many hours of experience you have on previous systems have made you adept at navigating through a sea of features, menus, facilities. You are native to the design language, the way you do things, the places to look for buttons or menus, even what the buttons look like. You understand its culture, thoroughly. When a new device or OS is very different, using it is like going on holiday. It is like emigrating if you’re making a permanent switch. You have the ability to adapt, but the prejudice caused by your long experience on a previous system makes that harder. Your first uses involve translation from the old to the new, just like translating foreignish to your own language, rather than thinking in the new language as you will after lengthy exposure. Your attitude to anything on the new system is colored by your experiences with the old one.

It isn’t stupidity that’s making you slow and incompetent. It’s interfacial prejudice.

Smart fuse

This may already exist – I couldn’t find it right away on Google. It is an idea I had a very long time ago, but with all the stuff coming from Apple and Google now, it would make an easier and cheaper way to make most appliances smart without adding huge cost or locking owners into a corporate ecosystem.

Most mains powered appliances come with plugs that have fuses in them. Here is a UK plug, pic courtesy of BBC.

fuse

If the fuse in the plug is replaced by a smart fuse that has an internet address, then this presents a means to switch things on and off automatically. A signal could be sent over the mains from a plug-in controller somewhere in the house, or via radio, wireless LAN, even voice command. The appliance therefore becomes capable of being turned on and off remotely at minimal cost.

At slightly higher expense, with today’s miniaturisation levels, smart fuses would be a cheap way of adding other functions. They could contain ROM loaded with software for the appliance, giving security via an easy upgrade that can’t be tampered with. They could also contain timers, sensors, usage meters, and talk to other devices, such as a phone or PC, or enable appliances for cheaper electricity by letting power companies turn them on and off remotely.
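The control side needs nothing more than addressed on/off messages. A minimal sketch of the kind of command handling a smart fuse might expose, where the message format, field names and fuse IDs are all invented for illustration:

```python
import json

# Hypothetical command handler for a smart fuse. The message format,
# field names and fuse IDs are invented for illustration only.
class SmartFuse:
    def __init__(self, fuse_id):
        self.fuse_id = fuse_id
        self.on = True            # mains passes through by default

    def handle(self, raw_message):
        msg = json.loads(raw_message)
        if msg.get("fuse_id") != self.fuse_id:
            return "ignored"      # addressed to a different fuse
        if msg["command"] == "off":
            self.on = False
        elif msg["command"] == "on":
            self.on = True
        return "on" if self.on else "off"

fuse = SmartFuse("kitchen-kettle-01")
print(fuse.handle('{"fuse_id": "kitchen-kettle-01", "command": "off"}'))
print(fuse.handle('{"fuse_id": "lounge-lamp-02", "command": "on"}'))
```

Whether the message arrives over the mains, radio or wireless LAN doesn’t change the handler; only the transport layer differs, which is what keeps the per-fuse cost so low.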

There really is no need to add heavily to appliance cost to make it smart. A smart fuse could cost pennies and still do the job.

Google is wrong. We don’t all want gadgets that predict our needs.

In the early 1990s, lots of people started talking about future tech that would work out what we want and make it happen. A whole batch of new ideas came out – internet fridges, smart waste-baskets, the ability to control your air conditioning from the office or open and close curtains when you’re away on holiday. Almost 25 years on, we still see just a trickle of prototypes, followed by a tsunami of apathy from the customer base.

Do you want an internet fridge, that orders milk when you’re running out, or speaks to you all the time telling you what you’re short of, or sends messages to your phone when you are shopping? I certainly don’t. It would be extremely irritating. It would crash frequently. If I forget to clean the sensors it won’t work. If I don’t regularly update the software, and update the security, and get it serviced, it won’t work. It will ask me for passwords. If my smart loo notices I’m putting on weight, the fridge will refuse to open, and tell the microwave and cooker too so that they won’t cook my lunch. It will tell my credit card not to let me buy chocolate bars or ice cream. It will be a week before kitchen rage sets in and I take a hammer to it. The smart waste bin will also be covered in tomato sauce from bean cans held in a hundred orientations until the sensor finally recognizes the scrap of bar-code that hasn’t been ripped off. Trust me, we looked at all this decades ago and found the whole idea wanting. A few show-off early adopters want it to show how cool and trendy they are, then they’ll turn it off when no-one is watching.

EDIT: example of security risks from smart devices (this one has since been fixed) http://www.bbc.co.uk/news/technology-28208905

If I am with my best friend, who has known me for 30 years, or my wife, who also knows me quite well, they ask me what I want and discuss options with me. They don't assume they know best and just decide things for me. If they did, they'd soon get moaned at. If I don't want my wife or my best friend assuming they know best what I want, why would I want gadgets to do that?

The first thing I did after checking out my smart TV was to disconnect it from the network so that it won't upload anything and won't get hacked or infected with viruses. Lots of people have complained about new TV adverts that trigger their Xboxes via Kinect voice recognition. The 'smart' TV receiver might be switched off as that happens. I am already sick of things turning themselves off without my consent because they think they know what I want.

They don’t know what is best. They don’t know what I want. Google doesn’t either. Their many ideas about giving lots of information it thinks I want while I am out are also things I will not welcome. Is the future of UI gadgets that predict your needs, as Wired says Google thinks? No, it isn’t. What I want is a really intuitive interface so I can ask for what I want, when I want it. The very last thing I want is an idiot device thinking it knows better than I do.

We are not there yet. We are nowhere near there yet. Until we are, let me make my own decisions. PLEASE!

Time – The final frontier. Maybe

It is very risky naming the final frontier. A frontier is just the far edge of where we’ve got to.

Technology has a habit of opening new doors to new frontiers, so naming the final one is a fast way of losing face. When Star Trek named space as the final frontier, it was thought to be so. We'd go off into space and keep discovering new worlds and new civilizations, long after we'd mapped the ocean floor. Space will keep us busy for a while. In thousands of years we may have gone beyond even our own galaxy, if we've somehow developed faster-than-light travel, but that just takes us to more space. It's big, and maybe we'll never get to explore all of it, but it is just a physical space with physical things in it. We can imagine more than just physical things. That means there is stuff to explore beyond space, so space isn't the final frontier.

So… not space. Not black holes or other galaxies.

Certainly not the ocean floor, however fashionable that might be to claim. We'll have mapped that in detail long before the rest of space. Not the centre of the Earth, for the same reason.

How about cyberspace? Cyberspace physically includes all the memory in all our computers, but also the imaginary spaces represented in it. The entire physical universe could be simulated as just a tiny bit of cyberspace, since it only needs to be rendered when someone looks at it. All the computer game environments and virtual shops are part of it too. The cyberspace tree doesn't have to make a sound unless someone is there to hear it, but it could. The memory in computers is limited, but cyberspace's limits come from the imagination of those building or exploring it. It is sort of infinite, but really its outer limits are just a function of our minds.

Games? Dreams? Human imagination? Love? All very new-agey and sickly sweet, but no. Just like cyberspace, these are all different products of the human mind, so all of them can be replaced by 'the human mind' as a frontier. I'm still not convinced that is the final one though. Even if we extend that to a greatly AI-enhanced future human mind, it still won't be the final frontier. When we AI-enhance ourselves, and connect to the smart AIs too, we get a sort of global consciousness, linking everyone's minds together as far as each allows. That's a bigger frontier, since the individual minds and AIs add up to more cooperative capability than they can achieve individually. The frontier is getting bigger and more interesting. You could explore other people directly, share and meld with them. Fun, but still not the final frontier.

Time adds another dimension. We can't do physical time travel, and even if it can be done in physics labs with tiny particles over tiny time periods, that won't necessarily translate into a practical machine for travelling in the physical world. We can time travel in cyberspace though, as I explained in

http://timeguide.wordpress.com/2012/10/25/the-future-of-time-travel-cheat/

and when our minds are fully networked and everything is recorded, you'll be able to travel back in time and genuinely interact with people in the past, back to the point where the recording started. You would also be able to travel forwards in time, as far as the recording stops and future laws allow (I didn't fully realise that when I wrote my time travel blog, so I ought to update it soon). You'd be able to inhabit other people's bodies, share their minds, share consciousness and feelings and emotions and thoughts. The frontier suddenly jumps out a long way once we start that recording, because you can go into the future as far as is continuously permitted. Going into that future lets you get hold of all the future technologies and bring them back home, short-circuiting the future, as long as the time police don't stop you. No, I'm not nuts – if you record everyone's minds continuously, you can time travel into the future using cyberspace, and the effects extend beyond cyberspace into the real world you inhabit, so although it is certainly a cheat, it is effectively real time travel, backwards and forwards. It needs some security sorted out around warfare, banking and investments, procreation, gambling and so on, as well as a lot of other causality issues, but to quote from Back to the Future: 'What the hell?' [IMPORTANT EDIT: in my following blog, I revise this a bit and conclude that although time travel to the future in this system lets you do pretty much what you want outside the system, time travel to the past only lets you interact with people and other things supported within the system platform, not the physical universe outside it. This does limit the scope for mischief.]

So, time travel in fully networked fully AI-enhanced cosmically-connected cyberspace/dream-space/imagination/love/games would be a bigger and later frontier. It lets you travel far into the future and so it notionally includes any frontiers invented and included by then. Is it the final one though? Well, there could be some frontiers discovered after the time travel windows are closed. They’d be even finaller, so I won’t bet on it.