Category Archives: technology

The future of sky

The S installment of this ‘future of’ series. I have done streets, shopping, superstores, sticks, surveillance, skyscrapers, security, space, sports, space travel and sex before, some several times. I haven’t done sky before, so here we go.

Today when you look up during the day you typically see various weather features, the sun, maybe the moon, a few birds, insects or bats, maybe some dandelion or thistle seeds. As night falls, stars, planets, seasonal shooting stars and occasional comets may appear. To those we can add human contributions such as planes, microlights, gliders and helicopters, drones, occasional hot air balloons and blimps, helium party balloons, kites and at night-time, satellites, sometimes the space station, maybe fireworks. If you’re in some places, missiles and rockets may be unfortunate extras too, as might be the occasional parachutist or someone wearing a wing-suit or on a hang-glider. I guess we should add occasional space launches and returns too. I can’t think of any more but I might have missed some.

Drones are the most recent addition, and their numbers will increase quickly, mostly for surveillance purposes. When I sit out in the garden, since we live in a quiet area, the noise from occasional microlights and small planes is especially irritating because they fly low. I am concerned that most discussions of drones don’t mention the potential noise nuisance they might bring. With nothing between them and the ground, sound will travel well, and although some are reasonably quiet, others might not be, and the noise might add up. Surveillance, spying and prying will become the biggest nuisances though, especially as miniaturization continues to bring us many insect-sized drones that aren’t noisy and may be visually almost undetectable. Privacy in your back garden, or in the bedroom with unclosed curtains, could disappear. They will make effective distributed weapons too:

http://timeguide.wordpress.com/2014/07/07/drones-it-isnt-the-reapers-and-predators-you-should-worry-about/

Adverts don’t tend to appear in the sky except on blimps, and those are rare visitors. A drone was used this week to drag a national flag over a football game. In the Batman films, Batman is occasionally summoned by shining a spotlight with a bat symbol onto the clouds, and I forget which film used the moon to show an advert. A range of technologies could soon make adverts a feature of the sky, day and night, just like in Blade Runner. In the UK, we are now getting used to roadside ads, however unwelcome they were when they first arrived, though they haven’t yet reached US proportions. It will be very sad if the sky is hijacked as an advertising platform too.

I think we’ll see some high altitude balloons being used for communications. A few companies are exploring that now. Solar powered planes are a competing solution to the same market.

As well as tiny drones, we might have bubbles. Kids make bubbles all the time, but they burst quickly. A graphene skin could stop the helium escaping, or the bubble could even be filled with graphene foam; then it would float and stay there. We might have billions of tiny bubbles floating around with tiny cameras, microphones or other sensors. The cloud could be an actual cloud.

And then there’s fairies. I wrote about fairies as the future of space travel.

http://timeguide.wordpress.com/2014/06/06/fairies-will-dominate-space-travel/

They might have a useful role here too, and even if they don’t, they might still want to be here, useful or not.

As children, we used to call thistle seeds fairies, our mums thought it was cute to call them that. Biomimetics could use that same travel technique for yet another form of drone.

With all the quadcopter, micro-plane, bubble, balloon and thistle seed drones, the sky might soon be rather fuller than today. So maybe there is a guaranteed useful role for fairies, as drone police.


The future of cyberspace

I promised in my last blog to do one on the dimensions of cyberspace. I made this chart 15 years ago, in two parts for easy reading. The dimensions it lists are still valid, and I can’t think of any new ones to add right now, but if I do think of more I’ll make an update with a third part. I later changed the name to virtuality, because the chart really only covers human-accessed cyberspace, but I’m not entirely sure that was a good thing to do. Needs work.

cyberspace dimensions

cyberspace dimensions 2

The chart has 14 dimensions (control has two independent parts), and I identified some of the possible points on each dimension. As dimensions are meant to be, they are all orthogonal, i.e. they are independent of each other, so you can pick a point on any dimension and combine it with any point from each of the others. Standard augmented reality and pure virtual reality are just two of the 2.5 x 10^11 possible combinations above. At that rate, if every person in the world tried a different one every minute, it would still take over half an hour to visit them all even briefly. There are many more possible; this was never meant to be exhaustive, and even two more columns would make it 10 trillion combos. Already I can see that one more column could be ownership, another could be network implementation, another could be quality of illusion. What others have I missed?
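Taking the chart’s 2.5 × 10^11 figure and a world population of roughly 7 billion at face value, the arithmetic is a quick back-of-envelope job:

```python
combos = 2.5e11      # combinations claimed for the chart's 14 dimensions
population = 7e9     # roughly everyone on Earth, each trying one per minute

# Collectively, at one combination per person per minute:
minutes_to_visit_all = combos / population
print(round(minutes_to_visit_all))   # 36 (minutes, i.e. a bit over half an hour)

# Two extra columns with around 6-7 options each multiplies the total by ~40:
print(f"{combos * 40:.0e}")          # 1e+13, i.e. ten trillion combos
```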

The Future of IoT – virtual sensors for virtual worlds

I recently acquired a point-and-click thermometer for Futurizon, which gives an instant reading when you point it at something. I will soon know more about the world around me, but any personal discoveries I make are quite likely to be well known to science already. I don’t expect to win a Nobel prize by discovering breaches of the second law of thermodynamics, but that isn’t the point. The thermometer just measures the infrared emission from a particular point in a particular frequency band, which indicates its temperature. It cost about £20, a pretty cheap stimulation tool to help me think about the future by understanding new things about the present. I already discovered that my computer screen doubles as a heater, but I suspected that already. Soon, I’ll know how much my head warms when I think hard, and for the futurology bit, where the best locations are to put thermal IoT stuff.
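As an aside on how such a device turns radiation into a temperature, here is an illustrative sketch only, not the device’s actual method: real IR thermometers sense a narrow band and apply Planck’s law plus factory calibration, but the total-emission Stefan–Boltzmann law stands in as a simplification.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def brightness_temperature(radiated_w_per_m2, emissivity=0.95):
    """Invert P = emissivity * SIGMA * T^4 to estimate surface temperature in K."""
    return (radiated_w_per_m2 / (emissivity * SIGMA)) ** 0.25

# A surface radiating ~418 W/m^2 at emissivity 0.95 sits near room temperature:
print(brightness_temperature(418.0))  # roughly 297 K, i.e. about 24 degrees C
```

Emissivity is the big error source in practice, which is why shiny metal surfaces read badly on real IR thermometers.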

Now that I am discovering the joys of remote sensing, I want to know so much more. Sure, you can buy satellites for a billion pounds that will monitor anything anywhere, and for a few tens of thousands you can buy quite sophisticated lab equipment. For a few tens of pounds, not so much is available, and I doubt the tax man will agree that Futurizon needs a high-end oscilloscope or mass spectrometer, so I have to set my sights low. (The results of this blog justify the R&D tax offset for the thermometer.) But the future will see drops in costs for most high technologies, so I also expect to get far more interesting kit cheaply soon.

Even starting with the frequent assumption that in the future you can do anything, you still have to think what you want to do. I can get instant temperature readings now. In the future, I may also want a full absorption spectrum, color readings, texture and friction readings, hardness, flexibility, sound absorption characteristics, magnetic field strength, chemical composition, and a full range of biological measurements, just for fun. If Spock can have one, I want one too.

But that only covers reality, and reality will only account for a small proportion of our everyday life in the future. I may also want to check on virtual stuff, and that needs a different kind of sensor. I want to be able to point at things that only exist in virtual worlds. It needs to be able to see virtual worlds that are (at least partly) mapped onto real physical locations, and those that are totally independent of and separate from the real world; I guess that means augmented reality ones and virtual reality ones. Then it starts getting tricky, because augmented reality and virtual reality are just two members of a set of cyberspace variants that runs to more than ten trillion members. I might do another blog soon on what they are; it’s too big a topic to detail here.

People will be most interested in sensors to pick up geographically linked cyberspace. Much of the imaginary stuff is virtual worlds in computer games or similar, and many of those have built-in sensors designed for their spaces. So, my character can detect caves or forts or shrines from about 500m away in the virtual world of Oblivion (yes, it is from ages ago but it is still enjoyable). Most games have some sort of sensors built-in to show you what is nearby and some of its properties.

Geographically linked cyberspace won’t all be augmented reality, because some will be there for machines, not people, but you might want to make sensors for it all the same, for many reasons, most likely for navigating it, for debugging, or for tracking and identifying digital trespass. The last one is interesting. A rival company might well construct an augmented reality presence that lets you see their products alongside ones in a physical shop. It doesn’t have to be a properly virtual environment; a web page is still a location in cyberspace, and when it is loaded, that instance takes on a geographic mapping via the display showing it, so it is part of the same trespass. That is legal today, and it started many years ago when people began using Amazon to check for better prices while standing in a book shop. Today it is pretty ubiquitous. We need sensors that can detect that. It may be accepted today as fair competition, but it might one day be judged unfair competition by regulators for various reasons, and if so, they’ll need some mechanism to police it. They’ll need to be able to detect it. Not easy if it is just a web page that only exists at that location for a few seconds; rather easier if it is a fixed augmented reality and you can download a map.

If for some reason a court does rule that digital trespass is illegal, one easy (though expensive) way of solving it would be to demand that all packets carry a geographic location, which of course the site would know when the person clicks on that link. To police that, turning off location would need to be blocked, or if it is turned off, sites would not be permitted to send you material that is not permitted at that location. I feel certain there would be better, cheaper and more effective solutions.
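To make that concrete, here is a toy sketch of how location-tagged packets and such a delivery policy might look. Everything here, the class names, the bounding-box zones, the “no location, no restricted material” rule, is invented for illustration; no such protocol exists.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Zone:
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float
    def contains(self, lat: float, lon: float) -> bool:
        return (self.lat_min <= lat <= self.lat_max
                and self.lon_min <= lon <= self.lon_max)

@dataclass
class GeoTaggedPacket:
    payload: bytes
    location: Optional[Tuple[float, float]]  # (lat, lon), or None if withheld

def may_deliver(packet: GeoTaggedPacket, restricted_zones) -> bool:
    """Refuse restricted content when location is withheld or inside any zone."""
    if packet.location is None:
        return False  # policy: no location, no restricted material
    lat, lon = packet.location
    return not any(z.contains(lat, lon) for z in restricted_zones)

shop = Zone(51.50, 51.51, -0.13, -0.12)  # a hypothetical shop's footprint
print(may_deliver(GeoTaggedPacket(b"rival ad", (51.505, -0.125)), [shop]))  # False
print(may_deliver(GeoTaggedPacket(b"rival ad", None), [shop]))              # False
print(may_deliver(GeoTaggedPacket(b"rival ad", (48.85, 2.35)), [shop]))     # True
```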

I don’t intend to spend any longer exploring details here, but it is abundantly clear from just inspecting a few trees that making detectors for virtual worlds will be a very large and diverse forest full of dangers. Who should be able to get hold of the sensors? Will they only work in certain ‘dimensions’ of cyberspace? How should the watchers be watched?

The most interesting thing I can find though is that being able to detect cyberspace would allow new kinds of adventures and apps. You could walk through a doorway that also happens to double as a portal between many virtual universes, and you might not be able to make that jump in any other physical location. You might see future high street outlets that are nothing more than teleport chambers for cyberspace worlds. They might be stuffed with virtual internet-of-things things, and not one of them physical. Now that’s fun.


Ground up data is the next big data

This one sat in my draft folder since February, so I guess it’s time to finish it.

Big Data – I expect you’re as sick of hearing that term as I am: gathering loads of data on everything that you, your company, or anything else you can access can detect, measure or record, then analyzing the hell out of it using data mining, an equally irritating term.

I long ago had a quick Twitter exchange with John Hewitt, who suggested: “What is sensing but the energy-constrained competition for transmission to memory, as memory is but that for expression?”. Neurons compete to see who gets listened to. Yeah, but I am still not much wiser as to what sensing actually is. Maybe I need a brain upgrade. (It’s like magnets. I used to be able to calculate the magnetic field densities around complex-shaped objects – it was part of my first job in missile design – but even though I could do all the equations of EM theory, even general relativity, I am still no wiser how a magnetic field actually becomes a force on an object. I have an office littered with hundreds of neodymium magnets, I spend hours playing with them, and I still don’t understand.) I can read about neurons all day but I still don’t understand how a bunch of photons triggering a series of electro-chemical reactions results in me experiencing an image. How does the physical detection become a conscious experience?

Well, I wrote some while back that we could achieve a conscious computer within two years. It’s still two years because nobody has started using the right approach yet. I have to stress the ‘could’, because nobody actually intends to do it in that time frame, but I really believe some half-decent lab could if they tried. (Putting that into perspective, Kurzweil and his gang at Google are looking at 2029.) That two-year estimate relies heavily on evolutionary development, for me the preferred option when you don’t understand how something works, as is the case with consciousness. It is pretty easy to design conscious computers at a black box level. The devil is in the detail. I argued that you could make a conscious computer by using internally focused sensing to detect processes inside the brain, and using a sensor structure with a symmetrical feedback loop. Read it:

http://timeguide.wordpress.com/2013/12/28/we-could-have-a-conscious-machine-by-end-of-play-2015/

In a nutshell, if you can feel thoughts in the same way as you feel external stimuli, you’d be conscious. I think. The symmetrical feedback loop bit is just a small engineering insight.

The missing link in that is still the same one: how does sensing work? How do you feel?

At a superficial level, you point a sensor at something and it produces a signal in some sort of relationship to whatever it is meant to sense. We can do that bit. We understand that. Your ear produces signals according to the frequencies and amplitudes of incoming sound waves, a bit like a microphone. Just the same so far. However, it is by some undefined processes later that you consciously experience the sound. How? That is the hard problem in AI. It isn’t just me that doesn’t know the answer. ‘How does red feel?’ is a more commonly used variant of the same question.

When we solve that, we will replace big data as ‘the next big thing’. If we can make sensor systems that experience or feel something rather than just producing a signal, that’s valuable already. If those sensors pool their shared experience, another similar sensor system could experience that. Basic data quickly transmutes into experience, knowledge, understanding, insight and very quickly, value, lots of it. Artificial neural nets go some way to doing that, but they still lack consciousness. Simulated neural networks can’t even get beyond a pretty straightforward computation, putting all the inputs into an equation. The true sensing bit is missing. The complex adaptive analog neural nets in our brain clearly achieve something deeper than a man-made neural network.

Meanwhile, most current AI work barks up a tree in a different forest. IBM’s Watson will do great things; Google’s search engine AI will too. But they aren’t conscious and can’t be. They’re just complicated programs running on digital processors, with absolutely zero awareness of anything they are doing. Digital programs on digital computers will never achieve any awareness, no matter how fast the chips are.

However, back in the biological realm, nature manages just fine. So biomimetics offers a lot of hope. We know we didn’t get from a pool of algae to humans in one go. At some point, organisms started moving according to light, chemical gradients, heat, touch. That most basic process of sensing may have started out coupled to internal processes that caused movement without any consciousness. But if we can understand the analog processes (electrochemical, electronic, mechanical) that take the stimulus through to a response, and can replicate it using our electronic technology, we would already have actuator circuits, even if we don’t have any form of sensation or consciousness yet. A great deal of this science has been done already of course. The computational side of most chemical and physical processes can be emulated electronically by some means or another. Actuators will be a very valuable part of the cloud, but we already have the ability to make actuators by more conventional means, so doing it organically or biomimetically just adds more actuation techniques to the portfolio. Valuable but not a terribly important breakthrough.

Looking at the system a bit further along the evolutionary timeline, where eyes start to develop, where the most primitive nervous systems and brains start, and where higher-level processing is obviously occurring and inputs are starting to become sensations, we should be able to see what is changed or changing. It is the emergence of sensation we need to identify, even if the reaction is still an unconscious reflex. We don’t need to reverse engineer the human brain. Simple organisms are simpler to understand. Feeding the architectural insights we gain from studying those primitive systems into our guided evolution engines is likely to be a far faster means of generating true machine consciousness and strong AI. That’s how we could develop consciousness in a couple of years rather than 15.

If we can make primitive sensing devices that work like those in primitive organisms, and can respond to specific sorts of sensory input, then that is a potential way of increasing the coverage of cloud sensing and even actuation. It would effectively be a highly distributed direct response system. With clever embedding of emergent phenomena techniques (such as cellular automata, flocking etc), it could be a quite sophisticated way of responding to quite complex distributed inputs, avoiding some of the need for big data processing. If we can gather the outputs from these simple sensors and feed them into others, that will be an even better sort of biomimetic response system. That sort of direct experience of a situation is very different from a data-mined result, especially if actuation capability is there too. The philosophical question remains as to whether including that second bank of sensors makes the system in any way conscious, but it would certainly be very useful and valuable. The architecture we end up with via this approach may look like neurons, and could even be synthetic neurons, but that may be only one solution among many. Biology may have gone the neuron route but that doesn’t necessarily mean it is the only possibility. It may be that we could one day genetically modify bacteria to produce their own organic electronics to emulate the key processes needed to generate sensation, and to power themselves by consuming nutrients from their environment. I suggested smart yogurt based on this idea many years ago, and believe that it could achieve vast levels of intelligence.
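To show what I mean by embedding emergent phenomena techniques, here is a deliberately tiny cellular-automaton sketch: a ring of dumb sensors, each updating only from itself and its two neighbours, so that a coherent response to a real distributed stimulus emerges while isolated noise is suppressed, with no central processing at all. The majority rule is my own invention for illustration.

```python
def step(cells):
    """Each cell activates if at least two of {left, self, right} are active."""
    n = len(cells)
    return [1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
            for i in range(n)]

# An isolated blip (noise) is suppressed; a coherent patch of stimulus persists:
print(step([0, 0, 1, 0, 0, 0, 0, 0]))  # [0, 0, 0, 0, 0, 0, 0, 0]
print(step([0, 1, 1, 1, 0, 0, 0, 0]))  # [0, 1, 1, 1, 0, 0, 0, 0]
```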

Digitizing and collecting the signals from the system at each stage would generate lots of data, and that may be used by programs to derive other kinds of results, or to relay the inputs to other analog sensory systems elsewhere. (It isn’t always necessary to digitize signals to transmit them, but doing so helps limit signal degradation, quickly becomes important if the signal is to travel far, and is essential if it is to be recorded for later use or time shifting.) However, I strongly suspect that most of the value in analog sensing and direct response is local, coupled to direct action or local processing and storage.

If we have these sorts of sensors liberally spread around, we’d create a truly smart environment, with local sensing and some basic intelligence able to relay sensation remotely to other banks of sensors elsewhere for further processing or even ultimately consciousness. The local sensors could be relatively dumb, like nerve endings on our skin, feeding in signals to a more connected virtual nervous system, or a bit smarter, like retinal neural cells, doing a lot of analog pre-processing before relaying signals via ganglion cells, and maybe forming part of a virtual brain. If they are also capable of or connected to some sort of actuation, then we would be constructing a kind of virtual organism, with tendrils covering potentially the whole globe, able to sense and interact with its environment in an intelligent way.

I use the term virtual not because the sensors wouldn’t be real, but because their electronic nature allows connectivity to many systems, overlapping, hierarchical or distinct. Any number of higher level systems could ‘experience’ them as part of its system, rather as if your fingers could be felt by the entire human population. Multiple higher level virtual organisms could share the same basic sensory/data inputs. That gives us a whole different kind of cloud sensing.
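A minimal sketch of that sharing idea, with class names that are purely illustrative: one physical sensor publishes its readings, and any number of higher-level “virtual organisms” subscribe, each experiencing the same stream independently.

```python
class Sensor:
    def __init__(self):
        self.subscribers = []
    def attach(self, organism):
        self.subscribers.append(organism)
    def reading(self, value):
        for organism in self.subscribers:  # fan one reading out to every system
            organism.sense(value)

class VirtualOrganism:
    def __init__(self, name):
        self.name, self.felt = name, []
    def sense(self, value):
        self.felt.append(value)

finger = Sensor()
a, b = VirtualOrganism("A"), VirtualOrganism("B")
finger.attach(a)
finger.attach(b)
finger.reading(21.5)     # like a fingertip felt by two bodies at once
print(a.felt, b.felt)    # [21.5] [21.5]
```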

By doing processing locally, in the analog domain, and dealing with some of the response locally, a lot of network traffic and remote processing is avoided. Any post-processing that does occur can therefore build on a higher-level foundation. A nice side effect of avoiding all the extra transmission and processing is increased environmental friendliness.

So, we’d have a quite different sort of data network, collecting higher quality data, essentially doing by instinct what data mining does with huge server farms and armies of programmers. Cloudy, but much smarter than a straightforward sensor net.

… I think.

It isn’t without risk though. I had a phone discussion yesterday on the dangers of this kind of network. In brief, it’s dangerous.

The future of obsolescence

My regular readers will know I am not a big fan of ‘green’ policies. I want to protect the environment and green policies invariably end up damaging it. These policies normally arise by taking too simplistic a view – that all parts of the environmental system are independent of each other so each part can be addressed in isolation to improve the environment as a whole. As a systems engineer since graduation, I always look at the whole system over the whole life cycle and when you do that, you can see why green policies usually don’t work.

Tackling the problem of rapid obsolescence is one of the big errors in environmentalism. The error is that rapid obsolescence is not necessarily a problem. Although at first glance it may appear to cause excessive waste and unnecessary environmental damage, on deeper inspection it is very clear that it has driven technology through very rapid change, to the point where the same function can often be realized now with less material, less energy use, less pollution and less environmental impact. As the world gets richer and more people can afford to buy more things, it is a direct result of rapid obsolescence that those things have a lower environmental impact than they would if the engineering life cycle had run through fewer times.

A 150g smartphone replaces 750kg of 1990s IT. If the green policy of making things last longer and not replacing them had been in force back then, some improvement would still have arisen, but the chances are you would not have the smartphone or tablet; you would still use a plasma TV, still need a hi-fi and camera, and you’d still have to travel in person to do many of the things your smartphone now lets you do wherever you are. In IT, rapid obsolescence continues; soon all your IT will be replaced by active contact lenses and a few grams of jewelry. If 7Bn people want a good quality of digitally enabled lifestyle, then letting them do so with 5 grams of materials and milliwatts of power is far better than using a ton of materials and kilowatts of power.

Rapid engineering progress lets us build safer bridges and buildings with less material, make cars that don’t rust after three years and run on less fuel, and gives us fridges and washing machines that use less energy. Yes, we throw things away, but thanks again to rapid obsolescence, the bits are now easily recyclable.

Whether greens like it or not, our way of throwing things away after a relatively short life cycle has been one of the greatest environmental successes of our age. Fighting against rapid obsolescence doesn’t make you a friend of the earth, it makes you its unwitting enemy.

The future of levitation

Futurologists are often asked about flying cars. There already are one or two, and one day there might be more, but they’ll probably only become as common as helicopters are today. Levitating cars will be more common, hovering just above the ground like the landspeeders in Star Wars, or just above a lower layer of cars. I need to be careful here: hovercraft were supposed to be the future, but they are hard to steer and to stop quickly, and that is probably why they didn’t take over as some people expected. Levitating cars won’t work either if we can’t solve that problem.

Maglev trains have been around for decades. Levitating cars won’t use anti-gravity in my lifetime, so magnetic levitation is the only obvious non-hovercraft means. They don’t actually need metal roads to fly over, although that is one mechanism. It is possible to contain a cushion of plasma and ride on that. OK, it is a bit hovercrafty, since it uses a magnetic skirt to keep the plasma in place, but at least it won’t need big fans and drafts. The same technique could work for a skateboard too.

Once we have magnetic plasma levitation working properly, we can start making all sorts of floating objects. We’ll have lots of drones by then anyway, but drones could levitate using plasma instead of using rotor blades. With plasma levitation, compound objects can be formed using clusters of levitating component parts. This can be quieter and more elegant than messy air jets or rotors.

Magnetic levitation doesn’t have very many big advantages over using wheels, but it still seems futuristic, and sometimes that is reason enough to do it. More than almost anything else, levitating cars and skateboards would bring the unmistakable message that the future has arrived. So we may see the levitating robots and toys and transport that we have come to expect in sci-fi.

To do it, we need strong magnetic fields, but they can be produced by high electrical currents in graphene circuits. Plasma is easy enough to make too. Electron pipes could do that and could be readily applied as a coating to the underside of a car or any hard surface rather like paint. We can’t do that bit yet, but a couple of decades from now it may well be feasible. By then most new cars will be self-driving, and will drive very closely together, so the need to stop quickly or divert from a path can be more easily solved. One by one, the problems with making levitating vehicles will disappear and wheels may become obsolete. We still won’t have very many flying cars, but lots that float above the ground.

All in all, levitation has a future, just as we’ve been taught to expect by sci-fi.


Alcohol-free beer goggles

You remember that person you danced with and thought was wonderful, and then you met them the next day and your opinion was less favorable? That’s what people call beer goggles. Alcohol impairs judgment. It makes people chattier and improves their self confidence, but also makes them think others are more physically attractive and more interesting too. That’s why people get drunk apparently, because it upgrades otherwise dull people into tolerable company, breaking the ice and making people sociable and fun.

Augmented reality visors could double as alcohol-free beer goggles. When you look at someone while wearing the visor, you wouldn’t have to see them warts and all. You could filter the warts. You could overlay their face with an upgraded version, or indeed replace it with someone else’s face. They wouldn’t even have to know.

The arms of the visor could house circuits to generate high-intensity oscillating magnetic fields – transcranial magnetic stimulation. This has been demonstrated as a means of temporarily switching off certain areas of the brain, or at least reducing their effects. Among the areas concerned are those involved in inhibition. Alcohol does that normally, but if you can’t drink tonight, your visor can achieve the same effect for you.

So the nominated driver could be more included in drunken behavior on nights out. The visor could make people more attractive and reduce your inhibitions, basically replicating at least some of what alcohol does. I am not suggesting for a second that this is a good thing, only that it is technologically feasible. The wearer could at least set alerts so that they don’t declare their undying love to someone without being warned of the reality first.

The future of karma

This isn’t about Hinduism or Buddhism, just in case you’re worried. It is just about the cultural principle borrowed from them that your intent and actions now can influence what happens to you in future, or your luck or fate, if you believe in such things. It is borrowed in some computer games, such as Fallout.

We see it every day now on Twitter. A company or individual almost immediately suffers the full social consequences of their words or actions. Many of us are occasionally tempted to shame companies that have wronged us by tweeting our side of the story, or writing a bad review on TripAdvisor. One big thing is still missing, but I suspect not for much longer: who’s keeping score?

Where is the karma being tracked? When you do shame a company or write a bad review, was it an honest write-up of a genuine grievance, or way over the top compared to the magnitude of the offense, or just pure malice? If you could have written a review and didn’t, should your forgiving attitude be rewarded or punished, because now others might suffer similar bad service? I haven’t checked but I expect there are already a few minor apps that do bits of this. But we need the Google and Facebook of Karma.

So, we need another 17-year-old in a bedroom to bring out the next blockbuster mash-up site linking the review sites, the tweets and the blogs, doing an overall assessment not just of the companies being commented on, but of those doing the commenting; one that gives people and companies a karma score. As the machine-readable web continues to improve, it will even be possible to get some clues on average rates of poor service, and therefore identify those of us who are probably more forgiving, and those of us who deserve a little more tolerance when it’s our own mistake. (I am allegedly closer to the grumpy old man end of the scale.)
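For illustration only, the core of such a karma score could be as simple as comparing each reviewer against the crowd’s averages for the same businesses; the scale and weighting here are invented.

```python
def karma(reviews, crowd_avg):
    """reviews: {business: rating}; crowd_avg: {business: average rating}.
    Positive karma means kinder than the crowd; negative means harsher."""
    deltas = [rating - crowd_avg[biz] for biz, rating in reviews.items()]
    return round(sum(deltas) / len(deltas), 2) if deltas else 0.0

grumpy = {"cafe": 1, "hotel": 2}
kind = {"cafe": 5, "hotel": 4}
avg = {"cafe": 3.5, "hotel": 3.0}
print(karma(grumpy, avg))  # -1.75 (harsher than average)
print(karma(kind, avg))    # 1.25 (more forgiving)
```

A real system would obviously need to weigh review volume, genuine grievance versus malice, and sentiment extracted from free text, which is where the machine-readable web comes in.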

I just did a conference talk on corporate credit assessment and have previously done others on private credit assessment. Financial trustworthiness is important, but when you do business, you also want to know whether it’s a nice company or one that walks all over people. That’s karma.

So, are you someone who presents a sweet and cheerful face, only to say nasty things about someone as soon as their back is turned? Do you always see the good side of everyone, or go to great efforts to point out their bad points to everyone on the web? Well, it won’t be all that long before your augmented reality visor shows a karma score floating above people’s heads as you chat to them.

The future of walled gardens

In the physical world, walled gardens are pretty places we visit, pay an entry fee, then enjoy the attractions therein. It is well understood that people often only value what they have to pay for and walled gardens capitalise on that. While there, we may buy coffees or snacks from the captive facilities at premium prices and we generally accept that premium as normal practice. Charging an entry fee ensures that people are more likely to stay inside for longer, using services (picnic areas, scenery, toilets etc) they have already paid for rather than similar ones outside that may be free and certainly instead of paying another provider as well.

In the content industry, the term applies to bundles of services from a particular supplier or available on a particular platform. There is some financial, psychological, convenience, time or other cost to enter and then to leave. Just as with the real thing, they have a range of attractions within that make people want to enter, and once there, they will often access local service variants rather than pay the penalty to leave and access perhaps better ones elsewhere. Our regulators started taking notice of them in the early days of cable TV, addressed the potential abuses and sometimes took steps to prevent telecoms or cable companies from locking customers in. More recently, operating system and device manufacturers have also fallen under the same inspection.

Commercial enterprises have an interest in keeping customers within their domain so that they can extract the most profit from them. What is less immediately obvious is why customers allow it. If people want to use a particular physical facility, such as an airport, or a particular tourist attraction such as a city, or indeed a walled garden, then they have to put up with the particular selection of shops and restaurants there, and are vulnerable to exploitation such as higher prices because of the lack of local choice. There is a high penalty in time and expense to find an alternative. A device manufacturer is similarly in an excellent position to force customers to use services from suppliers it has selected, and that enables it to skim charges from transactions, sometimes from both ends. The customer can only avoid that by using multiple devices, which incurs a severe cost penalty. There may be some competition among apps within the same garden, but all are subject to the rules of the garden. Operating systems are also walled gardens, but the OS usually just goes with the choice of device. It may be possible to swap to an alternative, but few users bother; most just accept the one the device comes with.

Walled gardens in the media are common but easier to avoid. With free satellite and terrestrial TV as well as online video and TV services, there is now abundant choice, though each provider still tries to make cute little walled gardens if they can. Customers can’t get access to absolutely all content unless they pay multiple subscriptions, but can minimize outlay by choosing the most appropriate garden for their needs and staying in it.

The web has disappointed though. When it was young, many imagined it would become a perfect market, in which suppliers would offer services, everyone would see all the offerings and all the prices, and buyers could make free decisions about where to buy and deal direct without having to pay intermediaries. It has so badly missed the target that Berners-Lee and others are now thinking about how it can be redesigned to achieve the original goals. Users can theoretically browse freely, but the services they actually want to use often become natural monopolies, and can then expand organically into other territories, becoming walled gardens. The salvation is that new companies can always emerge that provide an alternative. It’s impossible to monopolize cyberspace. Only bits of it can be walled off.

Natural monopolies arise when people have free access to everything but one supplier offers something unique and thus becomes the only significant player. Amazon wasn’t a walled garden when it started so much as a specialist store that grew into a small mall and is now a big cyber-city. Because it is so dominant and facilitates buying from numerous suppliers, it certainly qualifies as a walled garden now, but it is still possible to easily find many other stores. By contrast, Facebook has been a walled garden since its infancy, with a miniature web-like world inside its walls with its own versions of popular services. It can monitor and exploit the residents for as long as it can prevent them leaving. The primary penalties for leaving are momentarily losing contact with friends and losing interface familiarity, but I have never understood why so many people spend so much of their time locked within its walls rather than using the full range of web offerings available to them. The walls seem very low, and the world outside is obviously attractive, so the voluntary confinement is beyond my comprehension.

There will remain a big incentive for companies to build walled gardens, plenty of scope for assembling diverse collections of unique content and functions, and plenty of companies wanting to make theirs as attractive as possible and keep people inside. However, artificial intelligence may well change the way that networked material is found, so the inconvenience wall may vanish, along with the OS and interface familiarity walls. Deliberate barriers and filters may prevent an AI agent from gaining access to some things, but without deliberate obstruction, many walled gardens may have only one wall left standing: the price of unique content. If that is all a garden has to lock people in, then it may really be no different conceptually from a big store. Supermarkets offer this in the physical world, but many other shops remain.

If companies try to lock too much content into one place, others will offer competing packages; hoarding content makes it easier for competitors to position themselves against you, and that is a disincentive. If a walled garden becomes too greedy, its suppliers and customers will go elsewhere. The key to managing walled gardens is to ensure diversity by preserving the capability to compete. Diversity keeps them naturally in check.

Network competition may well be key. If users have devices that can make their own nets or access many externally provided ones, the scope for competition is high, and the ease of communicating and dealing directly is also high. It will be easy for producers to sell content direct and avoid middlemen taking a cut. That won’t eliminate walled gardens, because some companies will still do exclusive deals and not want to deal direct. There are many attractive business models available to potential content producers and direct selling is only one. Also, as new streams of content become attractive, they are sometimes bought, and this can be the intended exit strategy for start-ups.

Perhaps that is where we are already at. Lots of content that isn’t in walled gardens exists and much is free. Much is exclusive to walled gardens. It is easy to be influenced by recent acquisitions and market fluctuations, but the nature of the market hasn’t fundamentally changed; it just adapts to new physical platforms. In the physical world, we are free to roam but walled gardens offer attractive destinations. The same applies to media. Walled gardens won’t go away, but there is also no reason to expect them to take over completely. With new networks, new business models, new entrepreneurs, new content makers and new viewing platforms, the same business diversity will continue. Fluctuating degrees of substitution rather than full elimination will continue to be the norm.

Or maybe I’m having an off-day and just can’t see something important. Who knows?


The future of Jelly Babies

Another frivolous ‘future of’, recycled from 10 years ago.

I’ve always loved Jelly Babies (Jelly Bears would work as well if you prefer those) and remember that Dr Who used to eat them a lot too. Perhaps we all have a mean streak, but I’m sure most of us sometimes bite off their heads before eating the rest. But that might all change. I must stress at this point that I have never even spoken to anyone from Bassetts, who make the best ones, and I have absolutely no idea what plans they might have, and they might even strongly disapprove of my suggestions, but they certainly could do this if they wanted, as could anyone else who makes Jelly Babies or Jelly Bears or whatever.

There will soon be various forms of edible electronics. Some electronic devices can already be swallowed, including a miniature video camera that can take pictures all the way as it proceeds through your digestive tract (I don’t know whether they bother retrieving them though). Some plastics can be used as electronic components. We also have loads of radio frequency identity (RFID) tags around now. Some tags work in groups, recording whether they have been separated from each other at some point, for example. With nanotech, we will be able to make tags using little more than a few well-designed molecules, and few materials are so poisonous that a few molecules can do you much harm, so they should be sweet-compliant. So extrapolating a little, it seems reasonable to expect that we might be able to eat things that have specially made RFID tags in them. It would make a lot of sense. They could be used on fruit so that someone buying an apple could ingest the RFID tag on it without concern. And beyond RFID tags, many other electronic devices can be made very small, and out of fairly safe materials too.

So I propose that Jelly Baby manufacturers add three organic RFID tags to each jelly baby (legs, head and body), some processing, and a simple communications device. When someone bites the head off a jelly baby, the jelly baby would ‘know’, because the tags would now be separated. The other electronics in the jelly baby could then come into play, setting up a wireless connection to the nearest streaming device and screaming through its loudspeakers. It could also link to the rest of the jelly babies left in the packet, sending out a radio distress call. The other jelly babies, and any other friends they can solicit help from via the internet, could then use their combined artificial intelligence to organise a retaliatory strike on the person’s home computer. They might be able to trash the hard drive, upload viruses, or post a stroppy complaint on social media about the person’s cruelty.
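The decapitation-detection logic itself is trivial, which is rather the point: grouped tags that notice when a member goes missing are all the ‘brain’ a vengeful sweet needs. The toy sketch below invents the tag names, the ping model and the distress call for illustration; real grouped RFID tags only report whether members can still detect each other.

```python
# Toy model of the decapitation detector: an intact jelly baby carries
# three tags, and any tag that stops answering a ping means a bite.
INTACT = {"head", "body", "legs"}

def check_integrity(tags_in_range, alert=print):
    """Compare the tags still answering a ping with the full set;
    any missing tag means part of the jelly baby has been bitten off."""
    missing = INTACT - set(tags_in_range)
    if missing:
        alert(f"DISTRESS: {', '.join(sorted(missing))} separated from group!")
    return missing

# An intact sweet stays quiet; a headless one raises the alarm.
check_integrity(["head", "body", "legs"])  # no output
check_integrity(["body", "legs"])          # prints the distress call
```

Everything beyond this check, from recruiting the rest of the packet to the retaliatory strike, is just what the distress call triggers downstream.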

This would make eating jelly babies even more fun than today. People used to spend fortunes going on safari to shoot lions. I presume it was exciting at least in part because there was always a risk that you might not kill the lion and it might eat you instead. With our environmentally responsible attitudes, it is no longer socially acceptable to hunt lions, but jelly babies could be the future replacement. As long as you eat them in the right order, with the appropriate respect and ceremony and so on, you would just enjoy eating a nice sweet. If you get it wrong, your life is trashed for the next day or two. That would level the playing field a bit.

Jelly Baby anyone?