The future of music creation

When I was a student, I saw people around me who could play musical instruments and, since I couldn't, I felt a bit inadequate, so I went out and bought a £13 guitar and taught myself to play. Later, I bought a keyboard and learned to play that too. I've never been much good at either, and can't read music, but if I know a tune I can usually play it by ear, and sometimes I compose, though I never record any of my compositions. Music is highly rewarding, whether listening or creating. I play well enough for my own enjoyment, and there are plenty of others who can play far better to entertain audiences.

Like almost everyone, most of the music I listen to is created by others, and today you can access music by a wide range of means. It does seem to me, though, that the music industry is stuck in the 20th century. Even concerts seem primitive compared to what is possible, and so do streaming and download services. For some reason, new technology seems mostly to have escaped the industry's attention, apart from a few geeks. There are a few innovative musicians and bands out there, but they represent a tiny fraction of the music industry. Mainstream music is decades out of date.

Starting with the instruments themselves, even electronic instruments produce sound that appears to come from a single location. An electronic violin or guitar is just an electronic version of a violin or guitar; the sound appears to come from a single point all the way through. It doesn't throw sound all over the place or use a wide range of dynamic effects to embrace the audience in surround sound. Why not? Why can't a musician or a technician make the music meander around the listener, creating additional emotional content by getting up close, whispering right into an ear, like a violinist picking out an individual woman in a bar and serenading her? High quality surround sound systems have been in home cinemas for yonks, and they are certainly easy to arrange in a high budget concert. Audio shouldn't stop with stereo. It is surprising just how little use current music makes of existing surround sound capability. It is as if the industry thinks everyone only ever listens on headphones.
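Making a sound meander around a listener is not even hard. As a minimal sketch (an illustrative equal-power gain law, not any real concert system's algorithm), here is how a source could be panned smoothly around a ring of speakers surrounding the audience:

```python
import math

def ring_gains(angle, num_speakers=4):
    """Equal-power gains for a sound source at `angle` radians on a ring
    of equally spaced speakers around the listener. Toy sketch only --
    real systems use techniques such as VBAP or ambisonics."""
    gains = []
    spread = 2 * math.pi / num_speakers
    for i in range(num_speakers):
        spk_angle = spread * i
        # angular distance from source to this speaker, wrapped to [0, pi]
        d = abs((angle - spk_angle + math.pi) % (2 * math.pi) - math.pi)
        # cosine taper: full gain at the speaker, zero beyond a neighbour
        gains.append(math.cos(min(d / spread, 1.0) * math.pi / 2))
    # normalise so total power stays constant as the source moves
    norm = math.sqrt(sum(g * g for g in gains)) or 1.0
    return [g / norm for g in gains]
```

Sweeping `angle` over time would send the sound circling the audience; the same idea extends to height with more speaker rings.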

Of course, there is no rule that electronic instruments have to be just electronic derivatives of traditional ones, and to be fair, many sounds and effects on keyboards and electric guitars do go a lot further than just emulating traditional variants. But there still seems to be very little innovation in new kinds of instrument to explore dynamic audio effects, especially any that make full use of the space around the musician and audience. With the gesture recognition already available even on an Xbox or PS3, surely we should have a much more imaginative range of potential instruments, where you can make precise gestures, wave or throw your arms, squeeze your hands, make an emotional facial expression or delicately pinch, bend or slide fingers to create effects. Even multi-touch on phones or pads should have made a far bigger impact by now.

(As an aside, ever since I was a child, I have thought that there must be a visual equivalent to music. I don’t know what it is, and probably never will, but surely, there must be visual patterns or effects that can generate an equivalent emotional response to music. I feel sure that one day someone will discover how to generate them and the field will develop.)

The human body is a good instrument itself. Most people can sing to a point, or at least hum or whistle a tune, even if they can't play an instrument. A musical instrument is really just an unnecessary interface between your brain, which knows what sound you want to make, and an audio production mechanism. Up until the late 20th century, the instrument made the sound; today, outside of a live concert at least, it is usually a computer with a digital-to-analog converter and a speaker attached. Links between computers and people are far better now, so we can bypass the hard-to-learn instrument bit. With thought recognition, nerve monitoring, humming, whistling, gesture and expression recognition and so on, there is a very rich output from the body that could be used far more intuitively and directly to generate the sound. You shouldn't have to learn how to play an instrument in the 21st century. The sound creation process should interface almost directly to your brain, as intuitively as your body does. If you can hum it, you can play it. Or you should be able to, if the industry were keeping up.
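The hum-to-play part of this is already routine signal processing. A toy sketch (autocorrelation pitch tracking on a synthesized hum; real systems use more robust estimators) shows how a hummed note becomes a playable pitch:

```python
import math

def detect_pitch(samples, sample_rate, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency of a hummed note by picking the
    autocorrelation peak within a plausible vocal range. Toy sketch,
    not production DSP."""
    n = len(samples)
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, min(lag_max, n - 1)):
        corr = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

# Synthesize a 220 Hz "hum" and recover its pitch
rate = 8000
hum = [math.sin(2 * math.pi * 220 * t / rate) for t in range(1024)]
pitch = detect_pitch(hum, rate)
```

Feed the recovered pitch to any synthesizer voice and the hum has become an instrument.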

Going a bit further, most of us have some idea what sort of music or effect we want to create, but don't know quite enough about music, or have the experience or skill, to know quite how. A skilled composer may be able to write something down right away to achieve a musical effect that the rest of us would struggle to imagine. So, add some AI. Most music is based on fairly straightforward mathematical principles; even symphonies are mostly combinations of effects and sequences that fit well within AI-friendly guidelines. We use calculators to do calculations, so why not use AI to help compose music? Any of us should be able to compose great music with tools we could build now. It shouldn't be the future, it should be the present.

Let’s look at music distribution. When we buy a music track or stream it, why do we still only get the audio? Why isn’t the music video included by default? Sure, you can watch on YouTube but then you generally get low quality audio and video. Why isn’t purchased music delivered at the highest quality with full HD 3D video included, or videos if the band has made a few, with all the latest ones included as they emerge? If a video is available for music video channels, it surely should be available to those who have bought the music. That it isn’t reflects the contempt that the music industry generally shows to its customers. It treats us as a bunch of thieves who must only ever be given the least possible access for the greatest possible outlay, to make up for all the times we must of course be stealing off them. That attitude has to change if the industry is to achieve its potential. 

Augmented reality is emerging now. It already offers some potential to add overlays at concerts, but in a few years, when video visors are commonplace, we should expect to see band members playing up in the air, flying around the audience, virtual band members, cartoon and fantasy creations all over the place doing all sorts of things, and visual special effects overlaying the sound effects. Concerts will be a spectacular opportunity to blend the best of visual, audio, dance, storytelling, gaming and musical arts together. Concerts could be much more exciting if they used the technology's potential. Will they? I guess we'll have to wait and see. Much of this could be done already, but only a little is.

Now let's consider the emotional connection between a musician and the listener. We are all very aware of the intense (though one-sided) relationship teens can often build with their pop idols. They may follow them on Twitter and other social networks as well as listening to their music and buying their posters. Augmented reality will let them go much further still. They could have their idol with them pretty much all the time, virtually present in their field of view, maybe even walking hand in hand, maybe even kissing them. The potential spectrum extends from distant listening to intimate cuddles. Bearing in mind especially the ages of many fans, how far should we allow this to go, and how could it be policed?

Clothing adds potential to the emotional content during listening too. Headphones are fine for the information part of audio, but the lack of stomach-throbbing sound limits the depth of the experience. Music is more than information. Some music is only half there if it isn't at the right volume. I know from personal experience that not everyone seems to understand this, but turning the volume down (or indeed up) sometimes destroys the emotional content. Sometimes you have to feel the music, sometimes let it fully conquer your senses. Already, people are experimenting with clothes that can house electronics, some that flash on and off in sync with the music, and some that will be able to contract and expand their fibres under electronic control. You will be able to buy clothes that give you the same vibration you would otherwise get from a sub-woofer or a rock concert.
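Driving such clothing is simple: isolate the sub-bass and turn it into an intensity envelope for the actuators. A crude sketch (one-pole low-pass plus per-frame RMS; the cutoff and frame size are illustrative choices, not a real product's spec):

```python
import math

def bass_envelope(samples, sample_rate, cutoff_hz=100.0, frame=256):
    """One-pole low-pass isolates the sub-bass, then per-frame RMS gives an
    intensity envelope a vibration-actuator driver could follow. Toy sketch."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)
    low = 0.0
    lowpassed = []
    for s in samples:
        low += alpha * (s - low)     # one-pole low-pass filter step
        lowpassed.append(low)
    envelope = []
    for i in range(0, len(lowpassed), frame):
        chunk = lowpassed[i:i + frame]
        envelope.append(math.sqrt(sum(x * x for x in chunk) / len(chunk)))
    return envelope
```

A 50 Hz kick drum produces a strong envelope; a 2 kHz vocal barely registers, so the fibres throb with the bass line rather than the whole mix.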

Further down the line, we will be able to connect IT directly into the nervous system. Active skin is not far away. Inducing voltages and current in nerves via tiny implants or onplants on patches of skin will allow computers to generate sensations directly.

This augmented reality, combined with a link to the nervous system, gives another whole dimension to telepresence. Band members at a concert will be able to play right in front of individual audience members, shake them, cuddle them. The emotional connection could be a lot stronger.

Picking up electrical clues from the skin allows automated music selection according to the wearer's emotional state. Even simple properties like skin conductivity can give clues about emotional state. Depending on your stress level, for example, music could be played that soothes you, or if you feel calm, more stimulating tracks could be played. Playlists would thus adapt to how you feel.
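The selection logic itself is trivial once you have a stress estimate. A toy heuristic (the track library, `energy` ratings and the 0–1 stress scale are all invented for illustration):

```python
def pick_track(stress, library):
    """Map a 0-1 stress estimate (e.g. derived from skin conductance) to the
    track whose 'energy' rating counterbalances it: calming music when
    stressed, livelier tracks when relaxed. Purely illustrative heuristic."""
    target_energy = 1.0 - stress
    return min(library, key=lambda t: abs(t["energy"] - target_energy))

library = [
    {"title": "Ambient Drift", "energy": 0.1},
    {"title": "Mid-tempo Groove", "energy": 0.5},
    {"title": "Festival Banger", "energy": 0.9},
]
```

Re-running the selection as the skin readings change gives exactly the adaptive playlist described above.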

Finally, music is a social thing too. It brings people together in shared experiences. This is especially true for the musicians, but audience members often feel some shared experience too: atmosphere. Social networking already sees some people sharing what music they are listening to (I don't want to share my tastes, but I recognise that some people do, and that's fine). Where shared musical taste is important to a social group, it could be enhanced by providing tools to enable shared composition. AI can already write music in particular styles: you can feed Mozart or Beethoven into some music generators and they will produce music that sounds as if it had been composed by that person, as fast as it comes out of the speakers. Such a system could take style preferences from a small group of people and produce music that fits across those styles. The result is a sort of tribal music, representative of the tribe that generated it. In this way, music could become even more of a social tool in the future than it already is.


4 responses to "The future of music creation"

  1. Here’s a British instrument maker that you might find interesting …

    http://www.engadget.com/2013/03/12/roli-seaboard-piano/

    ~ Mark

  2. a) Sitting by the woofers, sub or not, is not advisable for anyone with head injuries.
    b) The rhythm of life is seismic;
    the answer lies with a musician/oceanographer/seismologist.

  3. c) You'll not do it right if you don't learn music.

  4. This may be true from a listener's standpoint, but from the point of view of making music, it feels much more rewarding to play an actual instrument than to tinker with a computer (at least for me). Digital music has been with us for a long time already, and styles like techno are probably completely computer-generated as it is. Anybody who has some ear, a knowledge of music theory and the right software can compose electronic music, but I still don't think it's quite the same as playing personally or singing.
