# My personal development of Algorithmic Orchestration! (with video demonstration)



## Oscar South (Aug 6, 2020)

Hi all! 
I've been enjoying joining in discussions on this message board recently. Today I've got some cool art to share!

Over the last year I've been making a specific effort to expand my skills in orchestration, as well as to bring new developments into my personal application of the craft through integration with methodologies drawn from my experience as a performer of 'Live Coding' -- the art of live performance with programming languages.

This is firstly a tech demo presenting a structural concept that I think of as 'Algorithmic Orchestration'. In the interest of demonstration I've included an example piece of music -- a transcription of Igor Stravinsky's 'Suite No. 1 for small orchestra, I. Andante'.

_*"My music of today is so much based on the new musical technology. We use the technology as a material for our musical art"*
Igor Stravinsky, 1957_






In terms of creative application, I see three immediate directions of approach:

1: Transcription of existing/traditional orchestral scores into algorithmic representation. This is primarily an analytic process which provides a deep insight into the structural properties of an existing work.

2: Transcription of algorithmically created works into playable orchestral scores for performance by a contemporary orchestra. This represents a personal compositional process/methodology alongside existing/traditional cognitive 'technologies' of composition. Close attention to principles of practical orchestration will be necessary throughout both the algorithmic and notational phases of composing.

3: Direct performance of the orchestral work via an interface of pure text ('Live Coding'). As code is typed and executed in real time, the musical composition will build up and develop. Material could be memorised and rehearsed or improvised. Principles of practical orchestration could be adhered to or ignored under this approach.

Approaches 2 & 3 logically suggest a fourth application:

4: A combination of 2 & 3 in performance at the same time.
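For a flavour of approach 3, here's a minimal TidalCycles-style sketch (the instrument names and patterns are placeholders of my own, not material from the demo): each executed block replaces or layers a running pattern, so the piece accumulates as lines are typed and evaluated.

```haskell
-- executed first: a string ostinato starts looping immediately
d1 $ n "0 4 7 4" # s "strings"

-- executed a few cycles later: flutes layered an octave above
d2 $ n "12 16 19 16" # s "flute"

-- re-executing d1 with a variation replaces the running
-- pattern at the next cycle boundary
d1 $ every 4 rev $ n "0 4 7 4" # s "strings"

-- silence everything
hush
```

Each of these lines only makes sense inside a live TidalCycles session connected to a sound engine; the point is the workflow, not the specific material.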

I'm planning to create future demonstrations showcasing all of these approaches.

I'd love to hear any questions or opinions that anyone has about this concept! I've been working hard on it and I have a lot of fascinating and exciting (to me at least!) additions in development to further enrich the scope of the project.

Oscar South

_GitHub for my own music experimentation codebase:
https://github.com/OscarSouth/theHarmonicAlgorithm_


----------



## Phil loves classical (Feb 8, 2017)

I feel the melody in the 1st violins of the original is lost in the first section (it doesn't sound like it's in the high treble range, but more in the alto range), and also when the melody goes to the flutes at the beginning of the 2nd section. I hear the bass too much. Does the algorithm also dictate the balance between the parts?


----------



## Oscar South (Aug 6, 2020)

Phil loves classical said:


> I feel the melody in the 1st violins of the original is lost in the first section (it doesn't sound like it's in the high treble range, but more in the alto range), and also when the melody goes to the flutes at the beginning of the 2nd section. I hear the bass too much. Does the algorithm also dictate the balance between the parts?


Hey! Thanks very much for your thoughts. I listened through with your comments in mind. All aspects of performance are defined in the code, with particular focus placed on both long-range and short-range dynamics.

I hear what you mean about the early violin (real orchestral recordings sound brighter in the high overtones here), although I'm not too concerned in this case -- I personally incline toward a damped or 'dark' (in timbre, not mood) sound that allows a pleasing accumulation of resonance (which this piece delivers in the final section!). I feel that the balance here achieves the effect I intended. It is indeed a delicate balance -- for example, in the first section the melody is first stated by vn1, vn2 and fl 1+2 in octaves, then all drop out with only vn1 continuing. The melody seems to sink into the texture while remaining audible, and seems to 'pull' your attention into the background texture as your focus stays fixed on that element.

I was never quite satisfied with the balance of the clarinets in the first part of section 2, where the bass instruments drop out, but I also wanted to move forward into new territory, so I got it to an 'ok' place for this demo and left it there. I did raise the bassoon dynamic a little at the point where they re-enter in mid section 2, because I liked the effect of their entrance, and in my opinion the following swell, with flutes carrying the higher overtones, was not diminished by that change.

It must be noted that any transcription from score TO algorithm is purely for the purpose of study or demonstration. A piece written with the sole intent of being performed by a real orchestra will likely never be fully realised in 'algorithmic' form (though I did feel that the essentials of Stravinsky's orchestration transferred to the medium). New works for orchestra composed through this methodology will be a different story!

I'm copying Stravinsky's quote from above here again, because I feel it's so relevant to the motivations of this concept:

_*"My music of today is so much based on the new musical technology. We use the technology as a material for our musical art"*
Igor Stravinsky, 1957_

Thanks again for listening and for your comments.


----------



## Phil loves classical (Feb 8, 2017)

Actually, relistening to the original and looking at the score, I was hearing the 2nd violins in your version. I don't hear the 1st violins in the first part at all. I feel the accompaniment is much more prominent than the melody, as also in the clarinets/flutes. I thought the register and even the parts were auto-generated by an algorithm, but it seems you intentionally or manually change the dynamics?


----------



## Oscar South (Aug 6, 2020)

Phil loves classical said:


> I thought the register and even the parts were auto-generated by an algorithm, but it seems you intentionally or manually change the dynamics?


Correct -- there's no 'AI', 'autogeneration' or any other related buzzword involved. The code is a structural construct that extends, augments or coerces the conceptual capabilities of my mind and the skills contained within! The cliché would be the tired old 'bicycle for the mind' analogy.



Phil loves classical said:


> Actually, relistening to the original and looking at the score, I was hearing the 2nd violins in your version. I don't hear the 1st violins in the first part at all. I feel the accompaniment is much more prominent than the melody, as also in the clarinets/flutes.


I agree on the clarinets, and in particular I'm not a huge fan of the bite of a few of the higher notes. I filed that under 'close enough for the context' and moved on in that instance. Everything else you mention here is a personal choice of the 'conceptual conductor' (myself). Thanks for noticing -- critique of creative decisions is equally valid.

To provide some additional context (unrelated to this specific example) -- in performance, the 'algorithmic' orchestra usually sits behind or alongside a modular synthesizer which is often blended 'seamlessly' into the orchestral texture, rather than carrying its own voice all the time. For this reason I deliberately darken the sound of the orchestra to facilitate the most musically pleasing blend with the higher overtones of the modular array. In this case that's achieved mainly through dynamics -- an original piece of course has a lot more scope for flexibility. There was no particular reason to incline that way with this example besides habit and preference.

Also, the second violins drop out very quickly after entering (after 4 bars). After that only the first violins are in the foreground -- the second is silent. I can definitely hear what you mean about the balance of the texture against the references; this is just my personal choice, though. To me, the impact of the orchestral reference versions is present, and the rest is a matter of taste.

Thanks again for your comments, for engaging in debate and providing good quality food for thought!


----------



## Oscar South (Aug 6, 2020)


By the way, I'll add a postscript: while I'm engaging in discussion (and enjoying it) on the nuance of your comments (which are all 100% valid, true and appreciated), in truth we are debating real-life minutiae of an incredibly highly developed and time-honed craft, when the object of our present debate is a construct generated by computer code and auralised (in real time) by a small black box produced in the 90s, holding over 600 different instruments inside 8MB of ROM, with no additional post-processing.

*takes breath*

I feel that this is remarkable, and that this observation is a bold validation of success within the scope of this early venture into the concept. The poignant observation I've made through this exploratory process is that transcribing the music into algorithmic representation *DOES* effectively transfer (to some degree) the 'soul' or 'life' of the music -- it is all flowing through me, after all. As an instrumental performer, I find that listening back to it, it feels 'alive' in the same way that listening to myself performing someone's music on an instrument does.

I've additionally observed that in this process my own voice is becoming embedded in the sound, in the same way that the personality of a conductor becomes embedded in the works they conduct. This was not something I'd considered prior to this transcription experiment.

Again, thanks for engaging in debate and facilitating a productive discussion. I also took your thoughts on the aural impact of orchestration into consideration and will keep them in mind for future works.


----------



## Phil loves classical (Feb 8, 2017)

Ok, so I completely misunderstood your meaning of 'algorithmic orchestration'. I thought you wrote some code for a certain procedure (I did some basic programming in Pascal myself a couple of decades ago -- surprised they still use it?) with the input of maybe a few parameters, press Enter, and the computer comes up with everything, as in an orchestral arrangement of parts based on a certain algorithm. But you're actually writing out code to do specifics? I wouldn't call that algorithmic, as in algorithmic composition of music. More like code-generated or code-specified music?

Actually, it's not that different from LilyPond, where you write code to generate music. The advantage of LilyPond is that you can print out a really nicely engraved score.


----------



## Oscar South (Aug 6, 2020)

Phil loves classical said:


> Ok, so I completely misunderstood your meaning of 'algorithmic orchestration'. I thought you wrote some code for a certain procedure (I did some basic programming in Pascal myself a couple of decades ago -- surprised they still use it?) with the input of maybe a few parameters, press Enter, and the computer comes up with everything, as in an orchestral arrangement of parts based on a certain algorithm. But you're actually writing out code to do specifics? I wouldn't call that algorithmic, as in algorithmic composition of music. More like code-generated or code-specified music?


This is a little bit of a silly juggling of synonyms, actually, so I'll try not to go too deep and confuse things further. I'll just say that it is the original thing that you mentioned, and it's also the other thing, all at once -- as well as a ton of other things that we've not mentioned yet.

If you're interested to learn more deeply about the technical aspects, you can browse (the underlying part of) my personal codebase here:
https://github.com/OscarSouth/theHarmonicAlgorithm

As well as this project, which underpins all performance elements and on which the performance aspects of my own project build:
https://github.com/TidalCycles/Tidal

I'm not sure whether you'll have run into it, but it's built inside the framework of 'Functional Programming' -- there's no concept of linear execution (as in the traditional languages you mentioned); it's more a declaration of what the state of the universe 'is' at a given moment, which can be modified while those rules are being enforced, thus facilitating live performance as code is typed, executed and accumulated in real time. There's actually an entire artform and cultural movement called 'Algorave', in which most artists create repetitive, evolving dance music live while the code is projected onto a screen (also guilty -- it's quite fun). I'm taking it in a somewhat different direction with 'Algorithmic Orchestration'.
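To give a flavour of that declarative style, here's a toy sketch in plain Haskell (this is my own simplification for illustration -- TidalCycles' real `Pattern` type is far richer): a pattern is modelled as a pure function from a position in the cycle to the notes sounding there, so changing 'what the music is' just means swapping in a new definition while the clock keeps querying it.

```haskell
-- A toy model of a declarative pattern: a pure function from a
-- cycle position (0 <= t < 1) to the notes sounding at that point.
type Time    = Rational
type Pattern = Time -> [Int]   -- MIDI note numbers, for simplicity

-- "The state of the universe": four quarter-note events per cycle.
melody :: Pattern
melody t = [[60, 62, 64, 67] !! floor (t * 4)]

-- Transformations are ordinary functions on patterns.
transpose :: Int -> Pattern -> Pattern
transpose n p = map (+ n) . p

-- Layering two patterns just combines their query results.
stack :: Pattern -> Pattern -> Pattern
stack p q t = p t ++ q t

main :: IO ()
main = print (stack melody (transpose 12 melody) (1/4))
```

Because patterns are values, layering and transforming them composes freely, and re-evaluating a definition mid-performance simply changes what subsequent queries return.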

I'm happy to debate this side of things, but it gets incredibly deep, very mathematical and highly theoretical very quickly, so it might not make for the best reading material for other members of this (music) forum.

Rather than simply saying "trust me", here's a link to a paper I wrote a few years ago that accompanies the codebase I linked and outlines a lot of the fundamental principles that the computer science side of the concept is built on: https://github.com/OscarSouth/theHa...Data_Science_In_The_Creative_Process_2018.pdf

_*"My music of today is so much based on the new musical technology. We use the technology as a material for our musical art"*
Igor Stravinsky, 1957_


----------



## Phil loves classical (Feb 8, 2017)

Ah ok. I've used procedural and a bit of OOP, but never got into Functional Programming. Looking into your code, it's mainly in that 'mapM' on line 24 where you implement your interpretation of the music.

Ya, that is definitely more structuralized than coding in LilyPond, although not quite as 'black-boxed' as I imagined.


----------



## Oscar South (Aug 6, 2020)

Phil loves classical said:


> Ah ok. I've used procedural and a bit of OOP, but never got into Functional Programming. Looking into your code, it's mainly in that 'mapM' on line 24 where you implement your interpretation of the music.
> 
> Ya, that is definitely more structuralized than coding in LilyPond, although not quite as 'black-boxed' as I imagined.


Yeah, `mapM_` is a function that triggers multiple events of type `IO ()` (or in layman's terms -- makes real things happen) in immediate sequence. The structure defined inside the `mapM_` is essentially a pattern of patterns. The blocks of code below it are the 'detail' contained in each section, while the `seqP` function inside the `mapM_` organises them into a structure and executes them.
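Outside of a live-coding session, the same idiom can be shown in plain Haskell (a minimal sketch -- the section names here are invented): `mapM_` takes a list, runs an `IO ()` action for each element in immediate sequence, and discards the results.

```haskell
-- Each section name stands for a block of musical 'detail'.  In the
-- real setup these would trigger seqP-organised patterns; here each
-- one just prints, to show the sequencing idiom itself.
sections :: [String]
sections = ["intro", "sectionA", "sectionB", "coda"]

playSection :: String -> IO ()
playSection name = putStrLn ("now playing: " ++ name)

-- mapM_ runs the actions one after another, discarding results.
main :: IO ()
main = mapM_ playSection sections
```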

Here's an older example using synth sounds rather than orchestral ones, played in a more improvisatory way with some other musicians:





You can see me loading the code blocks into 'active memory' (using that term metaphorically here) and then executing them together with the `mapM_` block manually at transition points.

Also, what you see (whether written during the transcription process or in real time in a live performance) is just the 'front end' code that actually initialises the sound. There is vastly more code working on the back end to provide an environment where executing a simple line like `d1 $ horn "pp" +| note "[[12 . 14 16] [[email protected] 11 . 12] [17 . 14 16] [19 . 14 16]]/8"` will perform the opening bars of Schubert's 9th Symphony.

Here's a clip of that! (Sadly the audio quality is poor, and it's also from a much earlier stage of development, so the sound is less refined):


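As an illustration of the kind of back-end plumbing that sits behind a front-end marking like `"pp"`, here's a hypothetical sketch in plain Haskell (`dynToVel` and its velocity values are my own invention, not the actual codebase): somewhere behind the scenes, each dynamic marking has to resolve to a concrete control value.

```haskell
-- Hypothetical back-end helper: resolve a dynamic marking to a MIDI
-- velocity.  The marking names are standard; the velocity values are
-- assumed for illustration only.
dynToVel :: String -> Int
dynToVel dyn = maybe 64 id (lookup dyn table)  -- unknown marking: mezzo
  where
    table = [ ("pp", 31), ("p", 47), ("mp", 63)
            , ("mf", 79), ("f", 95), ("ff", 111) ]

main :: IO ()
main = print (map dynToVel ["pp", "mf", "ff"])
```

The real environment layers many such mappings (articulation, instrument routing, timing feel) so that one short front-end line can stand for a fully specified performance.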
----------

