# AI and the art of unemployment



## mikeh375 (Sep 7, 2017)

This is either a depressing or an exhilarating vision of the future, depending on your age, outlook and musical education, I guess. I listened to the piece below and thought the music was as good as, and often better than, an awful lot of the kids' DAW work one hears online.
The day is coming when a film/media producer will not even consider looking for composers with a view to hiring them, and why should they? It's also conceivable that the tune you bop along to in the future will be one Alexa makes up for you on the spot, given your digitally stored musical proclivities....and why not?

We can talk about 'soul', humanity and all the other attributes we associate with 'real' music, but the interesting point here for me is that, in all honesty, if I'd heard this without knowing it was an AI piece I would have encouraged the composer into a media career. The 'stuff' of music and the age-old notions of what it actually is and does are called into question by AI imv, suggesting that Stravinsky's view that music expresses only itself might be very accurate indeed.

If you want to buy this software and delude yourself into thinking that you are being fully creative and reaching your full potential, here's the link...https://aiva.ai...oh, and good luck 'competing' with all the other AIs in the media and popular market if that's where you are headed. I hope this does not become a 'thing', but I do fear the genie is out.

I also get the feeling that Schoenberg's dictum that there is plenty more to be said in C major will cease to apply sometime soon, as the curse of popular ubiquity becomes exponential and aesthetically pointless.


----------



## pkoi (Jun 10, 2017)

It's amazing what one can do with artificial intelligence these days. A couple of years ago I spent 10 months training a machine-learning algorithm to recognise chords in popular songs, and while its results were not perfect, it was really fascinating to watch it go from a tabula rasa to an application that could recognise basic triads quite efficiently by the end of the project. When I started the job, the algorithm had already been trained on around 5000 songs annotated by various university projects over the previous 20 years. I contributed around 1000 song analyses and reviewed the quality of the earlier material, which varied a great deal. I think this kind of trainer/quality-inspector role will be a very common profession among musicians of the future, especially in music for media.

However, I don't think humans will be replaced by machines in the more high-end compositional jobs yet. The example you posted, while beautiful on the surface, seems to have the same problem as many AI-based compositions: a lack of understanding of how to build dramatic structure. While the piece had dynamic variance, it was still pretty much noodling around with generically pretty A-minor sounds, without really producing anything especially meaningful.
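For readers curious what "recognising basic triads" can look like at its simplest, here is a minimal sketch of template matching against a 12-bin chroma vector. This is purely illustrative (the names and approach are my assumption of a common textbook method, not the actual project's code):

```python
# Hypothetical sketch: match a 12-bin chroma vector against
# major/minor triad templates. Not the project's real algorithm.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def triad_template(root, intervals):
    """Build a 12-bin template with 1s on the triad's pitch classes."""
    t = [0.0] * 12
    for iv in intervals:
        t[(root + iv) % 12] = 1.0
    return t

# One major and one minor template per root: 24 candidates in total.
TEMPLATES = {}
for root in range(12):
    TEMPLATES[NOTE_NAMES[root]] = triad_template(root, (0, 4, 7))        # major
    TEMPLATES[NOTE_NAMES[root] + "m"] = triad_template(root, (0, 3, 7))  # minor

def classify_chroma(chroma):
    """Return the triad label whose template best correlates with the chroma."""
    def score(t):
        return sum(c * x for c, x in zip(chroma, t))
    return max(TEMPLATES, key=lambda name: score(TEMPLATES[name]))

# A clean A-minor chroma (energy on pitch classes A, C, E) comes back as "Am".
am = [0.0] * 12
for pc in (9, 0, 4):  # A, C, E
    am[pc] = 1.0
print(classify_chroma(am))  # → Am
```

Real systems of course work from noisy audio-derived chroma rather than clean one-hot vectors, which is exactly where the years of annotated training data come in.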


----------



## mikeh375 (Sep 7, 2017)

pkoi said:


> It's amazing what one can do with artificial intelligence these days. A couple of years ago I spent 10 months training a machine-learning algorithm to recognise chords in popular songs, and while its results were not perfect, it was really fascinating to watch it go from a tabula rasa to an application that could recognise basic triads quite efficiently by the end of the project. When I started the job, the algorithm had already been trained on around 5000 songs annotated by various university projects over the previous 20 years. I contributed around 1000 song analyses and reviewed the quality of the earlier material, which varied a great deal. I think this kind of trainer/quality-inspector role will be a very common profession among musicians of the future, especially in music for media.
>
> However, I don't think humans will be replaced by machines in the more high-end compositional jobs yet. The example you posted, while beautiful on the surface, seems to have the same problem as many AI-based compositions: a lack of understanding of how to build dramatic structure. While the piece had dynamic variance, it was still pretty much noodling around with generically pretty A-minor sounds, without really producing anything especially meaningful.


I agree, pkoi, that the example is not quite there yet, but from what I've heard online in demos and SoundCloud pages, the results are a lot further on in expression than many kids' DAW offerings, at least to my ears. It already seems to have a burgeoning know-how that has perhaps been gleaned from the sort of programming you were doing. As always with composing, one's limitations sometimes hamper creativity, and it strikes me that eventually AI will have no expressive limitations, being able to borrow from and manipulate the whole canon of music once it has been made aware of it (don't read too much into my use of the word 'aware'..).

Given that you're more up on this than me: I read somewhere that introducing randomness into an algorithm is what has the potential to produce more of what's considered art rather than artifice, a less predictable result perhaps. Can you shed any more light on developments here? I know programmers are sometimes hard pressed to trace output back to input. Also, I'm curious to learn what applications your 10 months of input are being used for?
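One common way randomness is introduced in generative systems is a sampling "temperature": rather than always taking the model's most probable next note, its distribution is flattened or sharpened and then sampled. A toy illustration, with entirely made-up weights standing in for a real model's output (this is a general technique, not anything from the software discussed here):

```python
import math
import random

def sample_with_temperature(weights, temperature, rng):
    """Draw an index from unnormalised weights, softened by temperature.

    Temperature near 0 approaches always-pick-the-likeliest (pure artifice);
    higher temperatures let unlikely choices through more often.
    """
    logits = [math.log(w) / temperature for w in weights]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]      # stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Hypothetical distribution over four candidate next notes.
next_note_weights = [0.6, 0.25, 0.1, 0.05]

rng = random.Random(0)
cool = [sample_with_temperature(next_note_weights, 0.1, rng) for _ in range(1000)]
hot = [sample_with_temperature(next_note_weights, 2.0, rng) for _ in range(1000)]

# At low temperature nearly every draw is the most likely note;
# at high temperature the rarer notes appear far more often.
print(cool.count(0), hot.count(0))
```

The "art rather than artifice" intuition maps roughly onto that dial: too cold and the output is predictable, too hot and it is incoherent, so generative systems tend to live somewhere in between.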


----------



## pkoi (Jun 10, 2017)

I've heard something about the randomness factor, but to be honest I'm not super well versed in the technical side of the process. Also, I worked specifically on chord recognition from audio, not AI composition, so it was slightly different. The way it worked for me was that each day I transcribed a set of songs in Logic Pro X. It started with me tapping in the tempo of the piece (in most cases I just googled the BPM, as 99% of modern pop music is quantised) to lock the piece to the grid. Then I would make a MIDI map of all the harmonies in the piece, export it as a MIDI file, and run a simple Python script that sent my analysis to the main server. About once a week they ran the "training" with the data and then we would do testing. The engineers evaluated the more technical aspects, such as the latency of the analysis relative to the audio; I would pick random songs from YouTube and check the quality of the final product, and then we would do it all again.

I remember that the software worked really well with root-position triads but struggled with inversions and with chords larger than four notes. The training data I created was very accurate (I analysed the full harmony of those pieces), but they extracted only the triadic information from it for the algorithm. They explained that it is better to train the algorithm this way, since the more information it is given, the harder it is to train properly. I also remember them talking often about avoiding over-training the algorithm, as that would produce worse results.
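The triad-extraction step described above, reducing a fully annotated chord to the plain triad the training used, can be sketched roughly like this. Everything here is hypothetical reconstruction on my part, not the project's actual code:

```python
# Hypothetical sketch: boil an annotated chord (given as MIDI note
# numbers) down to the major or minor triad it contains, if any.

MAJOR, MINOR = (0, 4, 7), (0, 3, 7)
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def triad_of(midi_notes):
    """Return an 'A'/'Am'-style label for a triad contained in the notes.

    The bass note's pitch class is tried first as the root, so an
    extended voicing still resolves to its nominal triad when the
    bass carries the root.
    """
    pcs = {n % 12 for n in midi_notes}
    candidate_roots = [min(midi_notes) % 12] + sorted(pcs)
    for root in candidate_roots:
        for intervals, suffix in ((MAJOR, ""), (MINOR, "m")):
            if all((root + iv) % 12 in pcs for iv in intervals):
                return NOTE_NAMES[root] + suffix
    return None  # no plain major/minor triad found

# An Am7 voicing (A2, E3, G3, C4, E4) reduces to the A-minor triad.
print(triad_of([45, 52, 55, 60, 64]))  # → Am
```

This also hints at why inversions were harder for the recogniser: once the bass is not the root, the mapping from sounding pitch classes back to a single nominal triad is no longer so clean.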

The app they used it in was designed to pick up chords from YouTube videos so that users could play along with their favourite songs. Even though this particular app was an experiment, I'm 100% sure that commercial apps like this will emerge in the future.


----------



## pkoi (Jun 10, 2017)

Also, not AI as such, but algorithm-based composition is common in the field of CCM (OpenMusic, for example). I would imagine AI-based compositional tools will become a normal part of composition in the future.


----------






## mikeh375 (Sep 7, 2017)

pkoi said:


> Also, not AI as such, but algorithm-based composition is common in the field of CCM (OpenMusic, for example). I would imagine AI-based compositional tools will become a normal part of composition in the future.


IRCAM springs to mind here, as do Ferneyhough, Harvey et al. But as we know, CCM requires a high degree of training for sincere and brilliant writing.

AI, used as it is in the OP, has the potential imv to be a double-edged threat to a) the compositional development of the younger DAW generation, who might come to rely on it too heavily to the detriment of their own potential, and b) any subsequent career they may contemplate within media composition. I feel sad about that, but one's mileage may vary, as they say. Some kids might find it a real boon to spur themselves on; I do hope so.


----------



## mikeh375 (Sep 7, 2017)

pkoi said:


> I've heard something about the randomness factor, but to be honest I'm not super well versed in the technical side of the process. Also, I worked specifically on chord recognition from audio, not AI composition, so it was slightly different. The way it worked for me was that each day I transcribed a set of songs in Logic Pro X. It started with me tapping in the tempo of the piece (in most cases I just googled the BPM, as 99% of modern pop music is quantised) to lock the piece to the grid. Then I would make a MIDI map of all the harmonies in the piece, export it as a MIDI file, and run a simple Python script that sent my analysis to the main server. About once a week they ran the "training" with the data and then we would do testing. The engineers evaluated the more technical aspects, such as the latency of the analysis relative to the audio; I would pick random songs from YouTube and check the quality of the final product, and then we would do it all again.
>
> I remember that the software worked really well with root-position triads but struggled with inversions and with chords larger than four notes. The training data I created was very accurate (I analysed the full harmony of those pieces), but they extracted only the triadic information from it for the algorithm. They explained that it is better to train the algorithm this way, since the more information it is given, the harder it is to train properly. I also remember them talking often about avoiding over-training the algorithm, as that would produce worse results.
> 
> The app they used it in was designed to pick up chords from YouTube videos so that users could play along with their favourite songs. Even though this particular app was an experiment, I'm 100% sure that commercial apps like this will emerge in the future.


I'm sure programmers will overcome details like chord inversions and harmonic complexity at some stage.
As an educational tool, AI will no doubt be very beneficial. Sadly, though, it'll be of no use to the kid wanting to make a mark in media. Instead it will be an almost unbeatable competitor, and one can even imagine certain successful algorithms being hired out by their programmers rather than sold to everyone: the new A-listers, specialising in pumping out music in certain genres.


----------



## pkoi (Jun 10, 2017)

mikeh375 said:


> I'm sure programmers will overcome details like chord inversions and harmonic complexity at some stage.
> As an educational tool, AI will no doubt be very beneficial. Sadly, though, it'll be of no use to the kid wanting to make a mark in media. Instead it will be an almost unbeatable competitor, and one can even imagine certain successful algorithms being hired out by their programmers rather than sold to everyone: the new A-listers, specialising in pumping out music in certain genres.


Yes, for sure. The technology in that field develops super fast. I think it will alter the music scene in the future, but I still don't believe it will replace humans altogether. I can certainly see it becoming the main source of background music in YouTube videos (the ukulele, glockenspiel and clapping-hands type of music) and some small-budget ads, but we'll see in 10 years or so how it has developed and how the field has learned to cope with it.


----------



## adrien (Sep 12, 2016)

Pretty sure PreSonus' Studio One has been able to deduce chords from audio since version 4 (the current version is 5).


----------



## BabyGiraffe (Feb 24, 2017)

Don't worry, our civilization is unsustainable, so we will never see such a thing, because everything will collapse in the next decade or two (overfishing and loss of populations, pollution, deforestation, climate, etc.).
And if computers really do become capable of creating real music, being unemployed will be the least of your concerns.


----------



## JAS (Mar 6, 2013)

That is actually pretty impressive, although, as noted, it feels as if it isn't really going anywhere. At 2:50, it feels like a very long introduction to something that never quite arrives. Given the Zimmer school, I suspect it could fill that niche in films. I also wonder how many pieces come out like this (perhaps a bit too much like this), as opposed to those that just don't work at all.


----------

