
The future of publishing

There are more information channels now than ever. These include thousands of new TV and radio channels enabled by the internet, millions of YouTube videos, new electronic book and magazine platforms such as tablets and mobile devices, talking books, easy print-on-demand, 3D printing, holograms, games platforms, interactive books, augmented reality and even AI chatbots, all in parallel with blogs, websites and social media such as Facebook, LinkedIn, Twitter, Pinterest, Tumblr and so on. It has never been easier to publish something. It no longer has to cost money, and many avenues can even be anonymous, so it needn’t even cost reputation if you publish something you shouldn’t. In terms of means and opportunity, there is plenty of both. Motive is built into human nature. People want to talk, to write, to create, to be looked at, to be listened to.

That doesn’t guarantee fame and fortune. Tens of millions of electronic books are written by software every year – mostly just themed copy-and-paste collections of content found online – which already makes it hard for any one book to be noticed, even before you consider the millions of other human authors. There are hundreds of times more new books published every year now than when we all had to go via ‘proper publishers’.

The limiting factor is attention. There are only so many eyeballs; they have only a certain amount of available time each day, and they are very spoiled for choice. Sure, we’re making more people, but the population has roughly doubled over the last four decades, whereas the volume of published material doubles every few months. That means ever more competition for the attention of those eyeballs.
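Taking those two doubling times at face value, the mismatch compounds brutally. A throwaway sketch (the doubling periods are only the rough figures quoted above, not measured data):

```python
# Rough comparison of audience growth vs content growth over a few decades.
# Doubling times are ballpark assumptions: ~40 years for population,
# and 'every few months' read here as ~6 months for published material.
years = 40
population_growth = 2 ** (years / 40)    # population doubles roughly once
content_growth = 2 ** (years / 0.5)      # content doubles ~80 times
print(f"readers: x{population_growth:.0f}, published material: x{content_growth:.1e}")
# -> readers: x2, published material: x1.2e+24; attention per item collapses.
```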

When there is a glut of material available for consumption, potential viewers must somehow decide what to look at to make the most of their own time. Conventional publishing had that sorted very well. Publishers only published things they knew they could sell, made sure the work was done to a high standard – something it is all too easy to skip when self-publishing – and devoted the largest marketing budgets to the products with the greatest potential. That was mostly determined by how well known the author was and how well liked their work was. So when you walked through a bookshop door, you were immediately faced with the books most people wanted. New authors took years of effort to get to those places, and most never did. Now, it is harder still. Self-publishing authors can hit the big time, but it is very hard to do, and very few make it.

Selling isn’t the only motivation for writing. Writing helps me formulate ideas, flesh them out, debug them, and tidy them up into cohesive arguments or insights. It helps me maintain a supply of fresh and original content that I need to stay in business. I write even when I have no intention of publishing, and a large fraction of my writing stays as drafts, never published, having served its purpose during the act of writing. (Even so, when I do bother to write a book, it is still very nice if someone wants to buy it.) It is also fun to write, and rewarding to see a finished piece appear. My sci-fi novel Space Anchor was written entirely for the joy of writing. I had a fantastic month writing it. I started on 3 July and published on the 29th. I woke every night with ideas for the next day and couldn’t wait to get up and start typing. When I ran out of ideas, I typed the final paragraphs, lightly edited it and published.

The future of writing looks even more fun. Artificial intelligence is nowhere near the level yet where you can explain an idea to a computer in ordinary conversation and tell it to get on with it, but it will be one day, fairly soon. Interactive writing, using AI to do the legwork, will be reward-rich and creativity-rich, a highly worthwhile experience in itself regardless of any market. Today, it takes forever to write and tidy up a piece. If AI does most of that, you could concentrate on the ideas and the story, the fun bits. AI could also make suggestions to improve your work. We could all write fantastic novels. With better AI, it could even make a film based on your ideas. We could all write sci-fi films to rival the best blockbusters of today. But when there are a billion fantastic films to watch, the same attention problem applies. If nobody is going to see your work because of simple statistics, that is only a problem if your motivation is to be seen or to sell. If you are doing it for your own pleasure, it could be just as rewarding, maybe even more so. A lot of works would be produced simply for pleasure, but that still dilutes the marketplace for those hoping to sell.

An AI could just write all by itself and cut you out of the loop completely. It could see what topics are currently fashionable and instantaneously make works to tap that market. Given the volume of computer-produced books we already have, adding high level AI could fill the idea space in a genre very quickly. A book or film would compete against huge numbers of others catering to similar taste, many of which are free.

AI also extends the market for cooperative works. Groups of people could collaborate, with AI doing all the boring admin and organisation as well as production and value-add. The same conversational interface would work just as well for producing software, apps or websites, or for setting up a company. Groups of friends could formulate ideas together and produce works for their own consumption. Books or films made together are shared experiences and help bind the group, giving its members shared stories that each has contributed to. Such future publication could therefore be part of socialisation: a tribal glue, a tribal identity.

This future glut of content doesn’t mean we won’t still have best sellers. As the market supply expands towards infinity, the attention problem means that people will be even more drawn to proven content suppliers. Brands become more important. Production values and editorial approach become more important. People who really understand a market sector and have established a strong presence in it will do even better as the market expands, because customers will seek out trusted suppliers.

So the future publishing market may be a vast sea of high quality content, attached to even bigger oceans of low quality content. In that world of virtually infinite supply, the few islands where people can feel on familiar ground and have easy access to a known and trusted quality product will become strong attractors. Supply and demand equations normally show decreasing price as supply rises, but I suspect that starts to reverse once supply passes a critical point. Faced with an infinite supply of cheap products, people will actually pay more to narrow the choice. In that world, self-publishing will primarily be self-motivated, for fun or self-actualization with only a few star authors making serious money from it. Professional publishing will still have most of the best channels with the most reliable content and the most customers and it will still be big business.

I’ll still do both.

Forehead 3D mist projector

Another simple idea. I was watching the 1920s period drama Downton Abbey and Lady Mary was wearing a headband with a large jewel in it. I had an idea based on linking mist projection systems to headbands. I couldn’t find a pic of Lady Mary’s band on Google but many other designs would work just as well and the one from ASOS would be just as feasible. The idea is that a forehead band (I’m sure there is a proper fashion name for them) would have a central ‘jewel’ which is actually just an ornamental IT capsule containing a misting device and a projector as well as the obvious power supply, comms, processing, direction detectors etc. A 3D image would be projected onto water mist emitted from the reservoir in the device. A simple illustration might help:

forehead projector

 

Many fashion items make comebacks and a lot of 1920s things seem to be in fashion again now. This could be a nice electronic update to a very old fashion concept. With a bit more miniaturisation, smart bindis would also be feasible. It could be used with direction sensing to enable augmented reality use, or simply to display the same image regardless of gaze direction. Unlike visor based augmented reality, others would be able to see the same scene visualised for the wearer.

OLED fashion contact lenses

A self-explanatory concept, and not connected to my original active contact lens direct retinal projection concept. This one is just fashion and could easily be done tomorrow. I allowed a small blank central area so that you aren’t blinded if you wear them. This version doesn’t project onto the retina, though future versions could also house and power devices to do so.

Fashion contacts

OK, the illustration is crap, but I’m an engineer, not a fashion designer. Additional functionality could be to display a high-res one-time code to iris recognition systems for high-security access.

The future of rubbish quality art

Exhibit A: Tracey Emin – anything at all from her portfolio will do.

Exhibit B: What I just knocked up in 5 minutes:

Exploration of the real-time gravitational interaction of some copper atoms

A recent work, I can Cu Now

As my obvious artistic genius quickly became apparent to me, I had a huge flash of inspiration and produced this:

Investigating the fundamental essence of futurology and whether the process of looking into the future can be fully contained within a finite cultural bottle.

Trying to bottle the future

I have to confess that I didn’t make the beautiful bottle, but even Emin only has a little personal input into some of the works she produces, and it is surely obvious that my talent in arranging this so beautifully is vastly greater than that of the mere sculptor who produced the vase, or bottle, or whatever. Then I produced my magnum opus, well, so far, towards the end of my five minutes of exploration of the art world. I think you’ll agree I ought immediately to be appointed Professor of Unified Arts at the Royal Academy. Here it is, if I can see well enough to upload it through my tears of joy at having produced such insight.

Can we measure the artistic potential of a rose?

This work needs no further explanation. I rest my case.

The future of creativity

Another future of… blog.

I can play simple tunes on a guitar or keyboard. I compose music, mostly just bashing out random sequences until a decent one happens. Although I can’t offer any Mozart-level creations just yet, doing it makes me happy. Electronic keyboards raise an interesting point about creativity. All I am actually doing is pressing keys; I don’t make sounds in the same way as when I pick at guitar strings. A few chips monitor the keys, noting which ones I hit and how fast, then produce and send the appropriate signals to the speakers.

The point is that I still think of it as my music, even though all I am doing is telling a microprocessor what to do on my behalf. One day, I will be able to hum a few notes or tap a rhythm with my fingers to give the computer some idea of a theme, and it will produce beautiful works based on my idea. It will still be my music, even when 99.9% of the ‘creativity’ is done by an AI. We will still think of the machines and software just as tools, and we will still think of the music as ours.

The other arts will be similarly affected. Computers will help us build on the merest hint of human creativity, enhancing our work and enabling us to do much greater things than we could achieve by our raw ability alone. I can’t paint or draw for toffee, but I do have imagination. One day I will be able to produce good paintings, design and make my own furniture, design and make my own clothes. I could start with a few downloads in the right ballpark. The computer will help me to build on those and produce new ones along divergent lines. I will be able to guide it with verbal instructions. ‘A few more trees on the hill, and a cedar in the foreground just here, a bit bigger, and move it to the left a bit’. Why buy a mass produced design when you can have a completely personal design?

These advances are unlikely to make a big dent in conventional art sales. Professional artists will always retain an edge, maybe even by producing the best seeds for computer creativity. Instead, computer assisted and computer enhanced art will make our lives more artistically enriched, and ourselves more fulfilled as a result. We will be able to express our own personalities more effectively in our everyday environment, instead of just decorating it with a few expressions of someone else’s.

However, one factor that seems to be overrated is originality. Anyone can come up with many original ideas in seconds. Stick a safety pin in an orange and tie a red string through the loop. There, can I have my Turner Prize now? There is an infinitely large field to pick from and only a small number of ideas have ever been realised, so coming up with something from the infinite set that still hasn’t been thought of is easy and therefore of little intrinsic value. Ideas are ten a penny. An idea only becomes valuable when it is combined with the judgement or skill to make it real. Here again, computers will be able to assist. Analysing a great many existing pictures or works of art should give some clues as to what most people like and dislike. IBM’s new neural chip is the sort of development that will accelerate this trend enormously. Machines will learn how to decide whether a picture is likely to be attractive to people or not. It should be possible for a computer to automatically create new pictures in a particular style or taste, either by recombining appropriate ideas or by randomly mixing any ideas together and then filtering the results according to ‘taste’.
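As a thought experiment, the ‘recombine, then filter by taste’ loop is easy to caricature in a few lines. A minimal sketch, with a toy scoring function standing in for a properly trained taste model (the motifs and weights are invented purely for illustration):

```python
import random

def generate_candidates(motifs, n=1000):
    """Randomly recombine a pool of visual motifs into candidate 'pictures'
    (each picture is just a set of motifs here - a stand-in for real rendering)."""
    return [random.sample(motifs, k=random.randint(2, 5)) for _ in range(n)]

def taste_score(picture, learned_weights):
    """Score a candidate against learned preferences. In a real system this
    would be a model trained on many existing works; here it is a toy
    weighted sum over motifs."""
    return sum(learned_weights.get(m, 0.0) for m in picture)

def curate(motifs, learned_weights, keep=3):
    """Generate-and-filter: make lots of random combinations, keep the few
    that best match the learned notion of 'taste'."""
    candidates = generate_candidates(motifs)
    return sorted(candidates, key=lambda p: taste_score(p, learned_weights),
                  reverse=True)[:keep]

motifs = ["hill", "cedar", "lake", "mist", "safety pin", "orange", "red string"]
weights = {"hill": 0.9, "cedar": 0.8, "lake": 0.7, "mist": 0.6,
           "safety pin": -0.5, "orange": -0.2, "red string": -0.4}
print(curate(motifs, weights))
```

The interesting part is, of course, the learned scoring function; the generation step really is as cheap as it looks.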

Augmented reality and other branches of cyberspace offer greater flexibility. Virtual objects and environments do not have to conform to laws of physics, so more elaborate and artistic structures are possible. Adding in 3D printing extends virtual graphics into the physical domain, but physics will only apply to the physical bits, and with future display technology, you might not easily be able to see where the physical stops and the virtual begins.

So, human creativity will no longer be as limited by personal skill and talent. Anyone with a spark of creativity will be able to achieve great works, thanks to machine assistance. So long as you aren’t competitive about it (someone else will always be able to do it better than you), your world will feel nicer, more friendly and personal, you’ll feel more in control and empowered, and your quality of life will improve. Instead of just making do with what you can buy, you’ll be able to decide what your world looks, sounds, feels, tastes and smells like, and design personality into anything you want too.

Future fashion fun – digital eyebrows

I woke in the middle of the night with another idea not worth patenting, and I’m too lazy to do it, so any entrepreneur who’s too lazy to think of ideas can have it, unless someone already has.

If you make an app that puts a picture of an eyebrow on a phone screen and moves it according to some input (e.g. voice, touch, or networked control by your friends or the venue), you could use phones to do fun eyebrowy things at parties, concerts, night clubs and so on. You need two phones, or a mid-sized tablet, unless your eyes are very close together. The phones have accelerometers that know which way up they are and can therefore keep the eyebrows balanced in the right positions. So you can make lots of funny expressions on people’s faces using your phones.
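The orientation part is about one line of trigonometry. A minimal sketch, assuming we can read the two in-plane accelerometer components (a real app would get these from the phone SDK’s sensor API, and sign conventions vary by platform; the function name is my own):

```python
import math

def eyebrow_rotation(ax, ay):
    """Given the phone's accelerometer reading in the screen plane
    (ax, ay in m/s^2), return the angle in degrees to rotate the
    eyebrow sprite so it stays level however the phone is held."""
    roll = math.degrees(math.atan2(ax, ay))   # tilt of the phone relative to gravity
    return -roll                              # counter-rotate the sprite to compensate

# Example: phone tilted 30 degrees one way -> sprite counter-rotated 30 degrees.
g = 9.81
print(round(eyebrow_rotation(g * math.sin(math.radians(30)),
                             g * math.cos(math.radians(30)))))  # -> -30
```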

Not a Facebook-level idea you’ll agree, but I can imagine some people doing it at parties, especially if they are all controlled by a single app, so that everyone’s eyebrows make the same expression.

You could do it for the whole eye/eyebrow, but then of course you can’t see your friends laughing, since you’re holding a screen in front of your eyes.

You could have actual physical eyebrows that attach to the tops of your glasses, also controlled remotely.

You could use e-ink/e-paper and make small patches to stick on the skin that do the same function, or a headband. Since they don’t need much power, you won’t need big batteries.

You could do the same for your nose or mouth, so that you have a digitally modifiable face controlled by your friends.

I’m already bored.

Time – The final frontier. Maybe

It is very risky naming the final frontier. A frontier is just the far edge of where we’ve got to.

Technology has a habit of opening new doors to new frontiers so it is a fast way of losing face. When Star Trek named space as the final frontier, it was thought to be so. We’d go off into space and keep discovering new worlds, new civilizations, long after we’ve mapped the ocean floor. Space will keep us busy for a while. In thousands of years we may have gone beyond even our own galaxy if we’ve developed faster than light travel somehow, but that just takes us to more space. It’s big, and maybe we’ll never ever get to explore all of it, but it is just a physical space with physical things in it. We can imagine more than just physical things. That means there is stuff to explore beyond space, so space isn’t the final frontier.

So… not space. Not black holes or other galaxies.

Certainly not the ocean floor, however fashionable that might be to claim. We’ll have mapped that in detail long before the rest of space. Not the centre of the Earth, for the same reason.

How about cyberspace? Cyberspace physically includes all the memory in all our computers, but also the imaginary spaces that are represented in it. The entire physical universe could be simulated as just a tiny bit of cyberspace, since it only needs to be rendered when someone looks at it. All the computer game environments and virtual shops are part of it too. The cyberspace tree doesn’t have to make a sound unless someone is there to hear it, but it could. The memory in computers is limited, but the cyberspace limits come from imagination of those building or exploring it. It is sort of infinite, but really its outer limits are just a function of our minds.

Games? Dreams? Human imagination? Love? All very new agey and sickly sweet, but no. Just like cyberspace, these are all just different products of the human mind, so all of these can be replaced by ‘the human mind’ as a frontier. I’m still not convinced that is the final one though. Even if we extend that to the greatly AI-enhanced future human mind, it still won’t be the final frontier. When we AI-enhance ourselves and connect to the smart AIs too, we get a sort of global consciousness, linking everyone’s minds together as far as each allows. That’s a bigger frontier, since the individual minds and AIs add up to more cooperative capability than they can achieve individually. The frontier is getting bigger and more interesting. You could explore other people directly, share and meld with them. Fun, but still not the final frontier.

Time adds another dimension. We can’t do physical time travel, and even if we can do so in physics labs with tiny particles for tiny time periods, that won’t necessarily translate into a practical time machine to travel in the physical world. We can time travel in cyberspace though, as I explained in

https://timeguide.wordpress.com/2012/10/25/the-future-of-time-travel-cheat/

and when our minds are fully networked and everything is recorded, you’ll be able to travel back in time and genuinely interact with people in the past, back to the point where the recording started. You would also be able to travel forwards in time, as far as the recording extends and future laws allow (I didn’t fully realise that when I wrote my time travel blog, so I ought to update it, soon). You’d be able to inhabit other people’s bodies, share their minds, share consciousness and feelings and emotions and thoughts. The frontier suddenly jumps out a long way once we start that recording, because you can go into the future as far as is continuously permitted. Going into that future allows you to get hold of all the future technologies and bring them back home, short-circuiting the future, as long as the time police don’t stop you. No, I’m not nuts – if you record everyone’s minds continuously, you can time travel into the future using cyberspace, and the effects extend beyond cyberspace into the real world you inhabit, so although it is certainly a cheat, it is effectively real time travel, backwards and forwards. It needs some security sorted out on warfare, banking and investments, procreation, gambling and so on, as well as a lot of other causality issues, but to quote from Back to the Future: ‘What the hell?’ [IMPORTANT EDIT: in my following blog, I revise this a bit and conclude that although time travel to the future in this system lets you do pretty much what you want outside the system, time travel to the past only lets you interact with people and other things supported within the system platform, not the physical universe outside it. This does limit the scope for mischief.]

So, time travel in fully networked fully AI-enhanced cosmically-connected cyberspace/dream-space/imagination/love/games would be a bigger and later frontier. It lets you travel far into the future and so it notionally includes any frontiers invented and included by then. Is it the final one though? Well, there could be some frontiers discovered after the time travel windows are closed. They’d be even finaller, so I won’t bet on it.

 

 

And another new book: You Tomorrow, 2nd Edition

I wrote You Tomorrow two years ago. It was my first ebook, and pulled together a lot of material I’d written on the general future of life, with some gaps then filled in. I was quite happy with it as a book, but I could see I’d allowed quite a few typos to get into the final work, and a few other errors too.

However, two years is a long time, and I’ve thought about a lot of new areas in that time. So I decided a few months ago to do a second edition. I deleted a bit, rearranged it, and then added quite a lot. I also wrote the partner book, Total Sustainability. It includes a lot of my ideas on future business and capitalism, politics and society that don’t really belong in You Tomorrow.

So, now it’s out on sale on Amazon

http://www.amazon.co.uk/You-Tomorrow-humanity-belongings-surroundings/dp/1491278269/ in paper, at £9.00 and

http://www.amazon.co.uk/You-Tomorrow-Ian-Pearson-ebook/dp/B00G8DLB24 in ebook form at £3.81 (guessing the right price to get a round number after VAT is added is beyond me. Did you know that paper books don’t have VAT added but ebooks do?)
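For what it’s worth, the sum works more easily the other way round: pick the round VAT-inclusive figure you want and divide by one plus the VAT rate. A toy example, assuming a 20% rate purely for illustration (the rate actually applied to ebooks at the time may well have differed):

```python
def list_price_for_round_total(target_inc_vat, vat_rate=0.20):
    """Pre-VAT price to list so the VAT-inclusive total lands on a round number.
    The 20% rate is an assumption for illustration only."""
    return round(target_inc_vat / (1 + vat_rate), 2)

print(list_price_for_round_total(4.00))  # -> 3.33, which sells for ~£4.00 inc. VAT
```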

And here’s a pretty picture:

You Tomorrow cover (Kindle edition)

The future of music creation

When I was a student, I saw people around me who could play musical instruments and, since I couldn’t, I felt a bit inadequate, so I went out and bought a £13 guitar and taught myself to play. Later, I bought a keyboard and learned to play that too. I’ve never been much good at either, and can’t read music, but if I know a tune, I can usually play it by ear, and sometimes I compose, though I never record any of my compositions. Music is highly rewarding, whether you are listening or creating. I play well enough for my own enjoyment, and there are plenty of others who can play far better to entertain audiences.

Like almost everyone, I mostly listen to music created by others, and today you can access music by a wide range of means. It does seem to me, though, that the music industry is stuck in the 20th century. Even concerts seem primitive compared to what is possible, and so do streaming and download services. For some reason, new technology seems mostly to have escaped the industry’s attention, apart from the efforts of a few geeks. There are a few innovative musicians and bands out there, but they represent a tiny fraction of the music industry. Mainstream music is decades out of date.

Starting with the instruments themselves: even electronic instruments produce sound that appears to come from a single location. An electronic violin or guitar is just an electronic version of a violin or guitar; the sound still appears to come from a single point all the way through. It doesn’t throw sound all over the place or use a wide range of dynamic effects to embrace the audience in surround sound. Why not? Why can’t a musician or a technician make the music meander around the listener, creating additional emotional content by getting up close, whispering right into an ear, like a violinist picking out an individual woman in a bar and serenading her? High quality surround sound systems have been in home cinemas for yonks, and they are certainly easy to arrange in a high budget concert. Audio shouldn’t stop with stereo. It is surprising just how little use current music makes of existing surround sound capability. It is as if they think everyone only ever listens on headphones.
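To be clear about how little exotic kit this needs: moving a sound source around the audience is just time-varying amplitude panning across the speakers. A minimal sketch for a four-speaker layout (the speaker angles, orbit speed and so on are arbitrary illustrative choices):

```python
import numpy as np

def circle_pan(mono, sample_rate=44100, orbit_seconds=4.0):
    """Pan a mono signal around a 4-speaker square (FL, FR, RL, RR)
    using constant-power amplitude panning, so the sound appears to
    orbit the listener once every `orbit_seconds`."""
    t = np.arange(len(mono)) / sample_rate
    angle = 2 * np.pi * t / orbit_seconds          # current direction of the source
    speakers = np.array([np.pi/4, -np.pi/4, 3*np.pi/4, -3*np.pi/4])  # FL, FR, RL, RR
    # Gain per speaker falls off with angular distance; normalise for constant power.
    gains = np.maximum(np.cos(angle[:, None] - speakers[None, :]), 0.0)
    gains /= np.linalg.norm(gains, axis=1, keepdims=True) + 1e-12
    return mono[:, None] * gains                   # shape: (samples, 4 channels)

# Example: a 440 Hz tone slowly circling the listener.
tone = 0.2 * np.sin(2 * np.pi * 440 * np.arange(44100 * 8) / 44100)
quad = circle_pan(tone)
```

A live rig would presumably do this per instrument, under the musician’s control, rather than on the whole mix.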

Of course, there is no rule that electronic instruments have to be just electronic derivatives of traditional ones, and to be fair, many sounds and effects on keyboards and electric guitars do go a lot further than just emulating traditional variants. But there still seems to be very little innovation in new kinds of instrument to explore dynamic audio effects, especially any that make full use of the space around the musician and audience. With the gesture recognition already available even on an Xbox or PS3, surely we should have a much more imaginative range of potential instruments, where you can make precise gestures, wave or throw your arms, squeeze your hands, make an emotional facial expression or delicately pinch, bend or slide fingers to create effects. Even multi-touch on phones or pads should have made a far bigger impact by now.

(As an aside, ever since I was a child, I have thought that there must be a visual equivalent to music. I don’t know what it is, and probably never will, but surely, there must be visual patterns or effects that can generate an equivalent emotional response to music. I feel sure that one day someone will discover how to generate them and the field will develop.)

The human body is a good instrument itself. Most people can sing to a point, or at least hum or whistle a tune, even if they can’t play an instrument. A musical instrument is really just an unnecessary interface between your brain, which knows what sound you want to make, and an audio production mechanism. Up until the late 20th century, the instrument made the sound; today, outside of a live concert at least, it is usually a computer with a digital-to-analogue converter and a speaker attached. Links between computers and people are far better now though, so we can bypass the hard-to-learn instrument bit. With thought recognition, nerve monitoring, humming, whistling, gesture and expression recognition and so on, there is a very rich output from the body that could be used far more intuitively and directly to generate the sound. You shouldn’t have to learn how to play an instrument in the 21st century. The sound creation process should interface almost as directly and intuitively to your brain as your body does. If you can hum it, you can play it. Or you should be able to, if the industry were keeping up.
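The ‘if you can hum it, you can play it’ pipeline is not exotic either: estimate the pitch of the hum, snap it to the nearest note, and hand that to a synth. A crude sketch using plain autocorrelation (a real system would track pitch continuously and far more robustly):

```python
import numpy as np

def hummed_pitch(frame, sample_rate=44100, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency (Hz) of a hummed note in one
    audio frame using simple autocorrelation."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sample_rate / fmax)           # shortest period of interest
    hi = int(sample_rate / fmin)           # longest period of interest
    lag = lo + np.argmax(corr[lo:hi])
    return sample_rate / lag

def nearest_midi_note(freq_hz):
    """Snap a frequency to the nearest MIDI note number (A4 = 440 Hz = 69)."""
    return int(round(69 + 12 * np.log2(freq_hz / 440.0)))

# Example: a hummed A3 (220 Hz) should map to MIDI note 57.
t = np.arange(2048) / 44100
print(nearest_midi_note(hummed_pitch(np.sin(2 * np.pi * 220 * t))))
```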

Going a bit further, most of us have some idea of the sort of music or effect we want to create, but lack the experience or skill to pin down exactly what it is or how to achieve it. A skilled composer may be able to write something down right away to achieve a musical effect that the rest of us would struggle to imagine. So, add some AI. Most music is based on fairly straightforward mathematical principles; even symphonies are mostly combinations of effects and sequences that fit well within AI-friendly guidelines. We use calculators to do calculations, so why not use AI to help compose music? Any of us should be able to compose great music with tools we could build now. It shouldn’t be the future; it should be the present.
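Even the composition help can start embarrassingly simply. A toy sketch of the ‘learn from examples, then generate something in the same style’ idea, using a first-order Markov chain over notes (real tools would model rhythm, harmony and structure too; the training phrases here are invented):

```python
import random
from collections import defaultdict

def train_markov(melodies):
    """Collect each note's observed successors from example melodies
    (each melody is a list of MIDI note numbers). Choosing uniformly
    from these lists later weights transitions by how often they occur."""
    table = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            table[a].append(b)
    return table

def compose(table, start, length=16):
    """Generate a new melody by random-walking the learned transitions."""
    note, out = start, [start]
    for _ in range(length - 1):
        note = random.choice(table.get(note, list(table)))  # fall back to any known note
        out.append(note)
    return out

# Example: 'train' on two toy phrases in C major and generate a new one.
phrases = [[60, 62, 64, 65, 67, 65, 64, 62, 60],
           [60, 64, 67, 72, 67, 64, 60]]
print(compose(train_markov(phrases), start=60))
```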

Let’s look at music distribution. When we buy a music track or stream it, why do we still only get the audio? Why isn’t the music video included by default? Sure, you can watch on YouTube but then you generally get low quality audio and video. Why isn’t purchased music delivered at the highest quality with full HD 3D video included, or videos if the band has made a few, with all the latest ones included as they emerge? If a video is available for music video channels, it surely should be available to those who have bought the music. That it isn’t reflects the contempt that the music industry generally shows to its customers. It treats us as a bunch of thieves who must only ever be given the least possible access for the greatest possible outlay, to make up for all the times we must of course be stealing off them. That attitude has to change if the industry is to achieve its potential. 

Augmented reality is emerging now. It already offers some potential to add overlays at concerts, but in a few years, when video visors are commonplace, we should expect to see band members playing up in the air, flying around the audience, virtual band members, cartoon and fantasy creations all over the place doing all sorts of things, and visual special effects overlaying the sound effects. Concerts will be a spectacular opportunity to blend the best of the visual, audio, dance, storytelling, gaming and musical arts together. Concerts could be much more exciting if they used the technology’s potential. Will they? I guess we’ll have to wait and see. Much of this could be done already, but only a little is.

Now let’s consider the emotional connection between a musician and the listener. We are all very aware of the intense (though one-sided) relationship teens can often build with their pop idols. They may follow them on Twitter and other social networks as well as listening to their music and buying their posters. Augmented reality will let them go much further still. They could have their idol with them pretty much all the time, virtually present in their field of view, maybe even walking hand in hand, maybe even kissing them. The potential spectrum extends from distant listening to intimate cuddles. Bearing in mind especially the ages of many fans, how far should we allow this to go, and how could it be policed?

Clothing adds potential to the emotional content of listening too. Headphones are fine for the information part of audio, but the lack of stomach-throbbing sound limits the depth of the experience. Music is more than information. Some music is only half there if it isn’t at the right volume. I know from personal experience that not everyone seems to understand this, but turning the volume down (or indeed up) sometimes destroys the emotional content. Sometimes you have to feel the music, sometimes let it fully conquer your senses. Already, people are experimenting with clothes that can house electronics, some that flash on and off in sync with the music, and some that will be able to contract and expand their fibres under electronic control. You will be able to buy clothes that give you the same vibration you would otherwise get from a sub-woofer or at a rock concert.

Further down the line, we will be able to connect IT directly into the nervous system. Active skin is not far away. Inducing voltages and currents in nerves via tiny implants, or ‘onplants’ on patches of skin, will allow computers to generate sensations directly.

This combination of augmented reality and a link to the nervous system gives another whole dimension to telepresence. Band members at a concert will be able to play right in front of audience members, shake them, cuddle them. The emotional connection could be a lot stronger.

Picking up electrical clues from the skin allows automated music selection according to the wearer’s emotional state. Even simple properties like skin conductivity can give clues about emotional state. Depending on your stress level, for example, music could be played that soothes you, or if you feel calm, more stimulating tracks could be played. Playlists would thus adapt to how you feel.
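The selection logic itself is trivial once you have the sensor reading. A toy sketch, in which the conductance thresholds and mood labels are pure guesses rather than calibrated physiology:

```python
def pick_track(skin_conductance_uS, library):
    """Toy playlist selector: map a galvanic skin response reading
    (microsiemens) to a mood bucket and pick a matching track.
    Thresholds and labels are illustrative assumptions only."""
    if skin_conductance_uS > 10.0:        # high arousal: play something calming
        mood = "soothing"
    elif skin_conductance_uS > 4.0:       # moderate arousal: keep things steady
        mood = "neutral"
    else:                                 # low arousal: try something stimulating
        mood = "energetic"
    return next(t for t in library if t["mood"] == mood)

library = [{"title": "Adagio", "mood": "soothing"},
           {"title": "Mid-tempo groove", "mood": "neutral"},
           {"title": "Up-tempo anthem", "mood": "energetic"}]
print(pick_track(12.3, library)["title"])  # -> Adagio
```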

Finally, music is a social thing too. It brings people together in shared experiences. This is especially true for the musicians, but audience members often feel some shared experience too: atmosphere. Social networking already sees some people sharing what music they are listening to (I don’t want to share my tastes, but I recognise that some people do, and that’s fine). Where shared musical taste is important to a social group, it could be enhanced by providing tools to enable shared composition. AI can already write music in particular styles – you can feed Mozart or Beethoven into some music generators and they will produce music that sounds as if it had been composed by that person, and they can compose it as fast as it comes out of the speakers. AI could take style preferences from a small group of people and produce music that fits across those styles. The result is a sort of tribal music, representative of the tribe that generated it. In this way, music could become even more of a social tool in the future than it already is.

Vampires are yesterday, zombies will peak soon, then clouds are coming

Most things that you can imagine have been the subject of sci-fi or fantasy at some point. There is certainly a large element of fashion in deciding what to make the next film about, and it is fun trying to spot what will come next.

Witches went out of fashion a decade ago, even while other sword-and-sorcery, dungeons-and-dragons stuff remained stable and recurrent, albeit as a niche. Vampires and werewolves accounted for far too many films and became boring, though admittedly some of them were very good fun, so it’s safe to bury them for a decade, or hopefully two.

Zombies are among the current leaders (as I predicted several years ago, in spite of being laughed at back then). It is still hard to find a computer game that doesn’t have some sort of zombies in it, so they have a good while to go yet. The zombie apocalypse is scientifically and technologically feasible (see https://timeguide.wordpress.com/2012/02/14/zombies-are-coming/), and that makes zombies far more disturbing than vampires and dragons, though the parasites in Alien are arguably even scarier.

Star Trek and the Terminator series introduced us to shape-shifters. Avatar and Star Trek enthused over futuristic Indians. Symbionts and proxies are interesting, but that’s really quite a shallow seam; there is really only one idea and it’s been used already. Religion and New Age trash have polluted sci-fi and fantasy throughout, but people are getting tired of them – American Indians and Australian Aborigines have been apologised to now. The recent Muslim backlash, however, suggests that the days are numbered for Star Wars, Dune, Mk 1 Klingons and others tapping into Middle Eastern stereotypes, so maybe that will force other exotic cultures into the sci-fi limelight. The Cold War has already been done in overdose. South America has been fully mined too. It’s a good while since the Chinese and Japanese cultures had a decent turn, and I suspect they will come back strongly soon, whereas Africa doesn’t hold enough cultural identification points yet. Homophilia is having recurrent effects from Star Wars to Dr Who, but apart from gender-hopping there isn’t really very far it can go. You can’t make many films from it.

So if those are the areas already showing signs of exhaustion, what comes after zombies? Gay zombies? Chinese zombies? Virtual zombies? Time-travel zombies? Yeah, but after that?

Here’s my guess. Clouds.

Clouds are the IT Zeitgeist. They are the mid-term future for sci-fi. There are a few possible manifestations, and some tap well into other things we are getting to like. Clouds are a deep seam too; not just one idea there. We have self-organisation, distribution, virtualisation, hybridisation, miniaturisation, self-replication, adaptation and evolution. We have AI, biomimetics, symbiosis, parasitic and commensal relationships. We have new kinds of gender, new kinds of intelligence, new physical and electronic forms. We have new kinds of materials, new ways of reproduction, new forms of attack and defence. I could write dozens of sci-fi books based on clouds. So could other people, and some of them will. Books, games, films, lots of them. About clouds.

You heard it here first. Clouds are the future of sci-fi.