This blog was written for primeloops.com – check them out for some ace samples and to see my blogs 2 months earlier!
I’ve done several articles on the histories of various aspects of music, but what about the future?
While you can’t predict it, you can make some well-educated guesses about what’s coming next… So let’s do that with electronic music!
Several revolutionary advances in computer music technology are currently in development. Let’s take a look…
Direct Note Access
The aptly named DNA lets you edit the inner workings of an audio file – changing a single note within a chord, where previously you could only transpose the chord as a whole. Play one chord on a piano, for example, and you can re-voice it into any other chord. It also lets you quantize individual notes within a chord, tidying up sloppy performances that were previously impossible to fix.
Celemony has already beta-tested this technology in its Melodyne software – it won’t be long before it’s as commonplace as conventional pitch-correction plug-ins.
Physical Modelling
With live instrument multi-sampling firmly cemented into studios all over the world, the next step is to generate the sounds directly on the computer, giving even more control over timbre and tone. This is called physical modelling.
With instruments like guitar, where the tone is a really complex collection of elements, this is no mean feat, but the future surely holds a synthesized solution for guitarists who have big ideas but whose fingers aren’t quite fast enough to realise them.
Until now, accurate physical modelling hasn’t been possible in real time, so sampling has been substituted for the sake of convenience, or where a live setting demands it. Faster computers will no doubt make real-time reproduction of physically modelled instruments possible. Sweet!
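To give a flavour of how simple physical modelling can be, here’s a minimal sketch (in Python, with function and parameter names of my own choosing) of the classic Karplus–Strong plucked-string algorithm – one of the earliest physical models cheap enough to run in real time:

```python
import random
from collections import deque

def karplus_strong(frequency, duration, sample_rate=44100, decay=0.996):
    """Synthesize a plucked-string tone via the Karplus-Strong algorithm."""
    # The delay-line length sets the pitch; filling it with
    # random noise models the initial "pluck" of the string.
    n = int(sample_rate / frequency)
    buf = deque(random.uniform(-1.0, 1.0) for _ in range(n))

    out = []
    for _ in range(int(duration * sample_rate)):
        out.append(buf[0])
        # Averaging adjacent samples low-pass filters the feedback loop,
        # mimicking how a real vibrating string loses energy over time.
        buf.append(decay * 0.5 * (buf[0] + buf[1]))
        buf.popleft()
    return out

tone = karplus_strong(440.0, 0.5)  # half a second of a plucked A4
```

A burst of noise in a feedback delay line, gently filtered on every pass, is enough to produce a surprisingly convincing plucked string – richer models of guitars and pianos build on the same feedback-loop idea, which is why they’re so hungry for processing power.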
Vocaloids
Auto-Tune apparently not being enough, Japanese developers have created programs that generate human-sounding singing voices from scratch. Dubbed “Vocaloids”, these programs will surely push the envelope further for virtual acts such as Gorillaz, no longer even requiring voice actors.
One of the best-known vocaloids, Hatsune Miku (released in 2007 by Crypton Future Media, built on Yamaha’s Vocaloid engine – the technology has developed considerably since), takes her name from Hatsu (first), Ne (sound), and Miku (future), which really does speak volumes – this is ground-breaking stuff that could change the world forever.
Imagine boybands that don’t even exist… TV shows where the production team doesn’t have to pay for voice acting… Before long, people will have to decide whether they respect real voice actors enough to dismiss the vocaloids, whether they don’t mind, or whether they even prefer the computer-generated alternatives.
Technology aside, the music itself is changing too! Styles are constantly evolving, but what will the next steps be? Let’s take some more educated guesses…
One emerging genre combines high-end synthesis with lo-fi sounds to create a very synthetic-sounding take on hip hop. With artists like the Black Eyed Peas making songs like Boom Boom Pow, this style has already had an effect on the mainstream, whether the mainstream knows it or not.
Perhaps this genre will further induct itself into the mainstream pop hall of fame in the not so distant future?
With pop music using electro more and more, what is the future for the mainstream? Further blurring of the lines between electronica and rock, as many artists have done over the past decade? Some sort of retro revival? More likely a combination of the two – pop goes in so many different directions that if there’s a niche, chances are someone will fill it.
It’s not just creating music – it’s performing it. What’s around the corner for artists looking to push the envelope on the stage?
Robot drummers, violinists and trumpet players have already been developed; how long before these turn into more streamlined, affordable units that can match the tone of a live kit and give the performance of a live player?
It has already been done – 3D images displayed on stage so that performers seem to actually be there, with computer-generated characters performing alongside real people. Will this become more commonplace? What awesome special effects will be possible with this technology? Sitting at the lighting desk just became a whole lot more interesting.
A lot of these technologies work towards replicating live music – perhaps the future of electronic music lies in the past – as electronic and acoustic music become less distinguishable, perhaps the genres will too.
While the future isn’t certain, one thing is – it’s seriously exciting to think what will happen in our lifetimes!