In my first post in this series, I gave the following three points as the primary motivators of my personal ideology in relation to Live Coding performance:
- An academic and professional background in session musicianship, where technical mastery (incorporating musicality) of ‘the instrument’ is valued above all other concerns.
- A long-term personal interest in both the discussion of machine learning methodology and its applications in creative practices. (https://github.com/OscarSouth/theHarmonicAlgorithm)
- A long-term professional collaboration with Saydyy-Kuo Fedorova, who is inspired heavily by historical cultural material from the indigenous heritage of her home nation, the Republic of Sakha (North Siberia). (https://UDAGANuniverse.com)
So far in parts one and two of this series, I’ve discussed the first point from both conceptual and practical perspectives:
Part 1 (conceptual) https://toplap.org/thoughts-on-live-coding-as-a-session-musician-1-of-3/
Part 2 (experiential) https://toplap.org/thoughts-on-live-coding-as-a-session-musician-2-of-3/
Point two (machine learning methodology) is out of scope for this article; I plan to discuss it more deeply in future posts. In this post, I will conclude the discussion of point one within the scope of this series (‘where I’m at right now’). Point three is also somewhat present here: you can observe the cultural collaboration with Saydyy-Kuo in the included clips.
Last Friday, we performed a set at an Algorave event in Sheffield, UK, consisting of three compositions, which can be viewed below:
‘Aurora Polaris’
‘Electric Khomus’
‘Uluu Kuday Kakhsy’
This performance represents the culmination of our performance development through 2019, which will continue through 2020. Reflecting on it and reviewing the documentation footage above, I’ve managed to clarify my position on a key decision in Live Coding performance: my internal debate over ‘start with a blank screen or write code in advance’, which has evolved through the last two articles.
In this concert, I found that the methodology discussed in article two, moving down a commented file, was very fluid from my perspective as a performer. It allowed for much better reproducibility of repertoire, both mentally and technically, and it offers a great workflow for code archiving: prepared repertoire is hosted on GitHub (or an alternative) and cloned onto the local machine in advance of a performance. To avoid unexpected behaviour in future performances from changes made during a prior performance and forgotten about, the entire directory for that repertoire can simply be deleted and a fresh copy cloned. This workflow has solved a lot of problems for me!
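To make the ‘commented file’ idea concrete, here is a minimal sketch of what a prepared repertoire file might look like, assuming a TidalCycles setup (the section labels and patterns are hypothetical, not taken from our actual repertoire). In performance, the performer simply works down the file, evaluating one block at a time:

```haskell
-- Hypothetical repertoire file for one piece (TidalCycles assumed).
-- Work down the file in performance, evaluating one block at a time.

-- ## Intro: sparse kick under the khomus
d1 $ slow 2 $ sound "bd ~ ~ ~"

-- ## Section A: bass enters with the vocal verse
d2 $ note "0 ~ 3 7" # sound "bass" # gain 0.9

-- ## Section B: full groove (cue the other performers first)
d1 $ sound "bd sn bd sn"
d3 $ sound "hh*8" # gain 0.8

-- ## Outro: silence all channels
hush
```

Because the comments double as a score, the file itself becomes the reproducible artefact: committed, cloned fresh before each show, and discarded afterwards.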
Additional notes:
- I feel that it would further enhance the performance to include commented sections relating not only to my own musical elements but also to the other musicians on the stage, and to arrange the visuals throughout the performance to accentuate moments when there is cultural information available to read. There is a lot of room to develop this aspect from where I am right now.
- I do not necessarily consider myself to be an ‘Algorave’ performer in a pure sense, though I share many tools. To me, Live Coding is not a genre, but an instrument that can be applied as a major or minor part of performance in any genre. I see a growing number of Live Coding performers taking the tools and pulling in different directions. I think that this is amazing and I love how everything can still come together, falling under the larger ‘Algorave’ umbrella of ‘different people making art with code’. I also love performing artsy/cultural repertoire earlier in the evening so that I can then relax and dance to the gods of Algorave who continue through the night!
- I found myself looking at the code much less than in earlier performances, where I improvised more freely with my codebase. In fact, I had time to spare that I could have used on any number of technical or performative tasks to further enhance the performance.
In relation to my quandary of ‘start with a blank screen or write code in advance’, I have realised that the answer is “both”.
As is often the case in life, I have somewhat ended up back where I started, but with better perspective (and a more clearly defined workflow). In earlier performances, I would primarily structure pieces by slowly introducing different elements, then dropping things out/in and playing with the dynamics, or inserting stops and structural ‘punctuation’ (etc.). I found this method expressive but limiting in scope. In the recent performances documented above, I’ve conceptually zoomed out and approached the composition as more of a ‘list of sections’, allowing movement through a larger-form piece of music with more dynamic changes and the possibility of a degree of free-form manipulation inside each section. In zooming out, I DO feel that I’ve lost some expressiveness and musical sensitivity. I also feel that, with the perspective of experience, I can re-introduce this ‘intimate’ musicality without losing any of the advantages of composing with a ‘zoomed out’ larger-form structure in mind.
I feel that I can achieve this by combining both methodologies into one, where a number of highly varied scenes are moved through inside a performance. This would give me a lot of flexibility in how to incorporate a wide array of performance elements into each piece of music. For example, one piece of music could feature a combination of scenes like the following:
Scene 1:
A blank screen where core motifs of the composition are typed and introduced in real time, building a feeling of authenticity between the coder and the Algorave audience.
Scenes 2-4:
Moving through some pre-prepared rapid sequential changes that carry the composition logically forward from Scene 1 and allow the structure to develop in a more dynamic fashion than would be possible by hand-coding them live.
Scene 5:
A more ‘open’, sparsely textured groove that repeats for some time and leaves space for other performers to develop their own voices, while freeing up time for the coder to address other aspects of the performance (for example, making changes to visuals or preparing an instrument to be played).
Scene 6:
A dramatic change where the other instrumentalists/vocalists pull back and a dynamic beat is coded into pre-prepared empty space. Other live coding elements drop out. As the coder executes the beat, he also begins to play a lead melody on an instrument, which he prepared during the previous scene. A lengthy dynamic ramp of 32 bars (for example) is coded into the beat (sketched just after this list).
Scene 7:
Crescendo hits as all previous musical elements re-enter together (pre-prepared) and instrumental performance by the coder continues.
Scene 8:
Sequenced music drops back to a pre-written simple chord section while a vocalist performs an intimate 16-bar section. The coder adjusts visuals in the ‘breathing room’ provided.
Scene 9:
Visuals evolve while all sequenced music drops out and the composition continues instrumentally/vocally.
I’ll cut myself off here before I go too far into imagination and start air-guitaring (or ‘air-coding’?).
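To give a flavour of Scene 6’s ‘coded-in’ dynamic ramp, here is a minimal sketch, assuming a TidalCycles setup (the beat itself is hypothetical): `envL` rises from 0 to 1 over one cycle, so slowing it across 32 cycles bakes a 32-bar crescendo directly into the beat.

```haskell
-- Hypothetical Scene 6 beat with a 32-bar crescendo coded in (TidalCycles).
-- envL ramps from 0 to 1 over one cycle; slow 32 stretches that ramp across
-- 32 cycles, and range rescales it into a gain rising from 0.6 to 1.2.
d1 $ sound "bd*4 [~ sn]*2"
   # gain (range 0.6 1.2 $ slow 32 envL)
```

A single evaluation sets the whole ramp in motion, leaving the coder’s hands free for the instrumental lead.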
By thinking in this way, the ‘blank page or pre-written code’ conflict doesn’t really exist any more — every scene of the ‘zoomed out’ form can be designed differently with a different performance ethos in mind for the performer to ‘zoom in’ to. A composition thus becomes a series of ‘playgrounds’, each providing different musical elements and alternative technical machinery for the Live Coding performers to explore and express themselves through.
This approach does require some advance preparation to develop the underlying machinery that facilitates ‘activating’ each scene in turn (my own highly unorganised codebase for live performance can be viewed here: https://github.com/OscarSouth/liveCode, much of which relies on pre-compiled code from the MusicData.hs module here: https://github.com/OscarSouth/theHarmonicAlgorithm).
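As a rough illustration of such machinery (a hypothetical sketch assuming a TidalCycles session, not an excerpt from the codebase linked above), each scene can be wrapped up as a single action that reprograms every channel at once, so activating it takes one evaluation:

```haskell
-- Hypothetical scene-activation machinery (TidalCycles assumed): each scene
-- is one action that sets every channel, so a whole section changes at once.
let scene5 = do                        -- Scene 5: open, sparse groove
      d1 $ slow 2 $ sound "bd ~ sn ~"
      d2 silence                       -- bass drops out
      d3 $ sound "hh*4" # gain 0.7
    scene7 = do                        -- Scene 7: crescendo, everything re-enters
      d1 $ sound "bd sn bd sn" # gain 1.2
      d2 $ note "0 7 3 10" # sound "bass"
      d3 $ sound "hh*8"
```

Moving through the form then becomes: evaluate scene5, play inside it, and fire scene7 on the downbeat.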
So, that’s where I’m at right now! We (Saydyy-Kuo and I) are going to spend some time refining these tracks and producing studio versions (I’m thinking of publishing an article or two on ‘Live Coding In The Studio’ to reflect on that process). We’ll then carry the ideas discussed here forward into performance development through 2020.
I’m sure as time goes by I will have plenty more to think and talk about on this topic! Please share any thoughts or ask any questions you might have in the comments section.
Oscar South