Making decisions in time
Material – Decisions dealing with manipulations of local, sonic materials. These can come in the form of instrumental behaviours or general development, but are open to context and interpretation.
Each of the first three versions of the ‘Everything’ videos was analyzed in this manner. I felt that for the analysis to be meaningful, I would have to be brutally honest with my thinking, even if it was not flattering. Here is a segment of the raw, unedited analysis for Everything. Everything at once. Once. (1b):
0:05 – decide on ‘soft’ entry
After coming up with the moment-by-moment list of decisions, I separated them into individual streams. This was occasionally difficult, as some decisions could fall into multiple categories. When this was the case, I went with the most pertinent stream. Here is the same section of the analysis with the streams attached and the wording cleaned up slightly:
0:05 – Material : Decide on ‘soft’ entry.
Feel free to follow along with the video. Everything. Everything at once. Once. (1b):
Generating these analyses has given me some insight into how my decision making apparatus works in time. I can see explicit patterns and tendencies in the way decisions are structured, but more importantly, tuning in to that decision framework has let me draw a conceptual circle around that creative plane, and let me articulate on it. This is very similar to how focusing on the curation of instruments let me articulate on that creative plane in the Everything. Everything at once. Once. pieces.
Additionally, this kind of analytical thinking led to the composition an amplifier a mirror an explosion an intention, though in that composition it is used as the conceptual and compositional framework, rather than an after-the-fact analysis tool.
After conceiving the analytical framework, I decided that a visualisation of some manner would be helpful in understanding how my improvisatory thinking operated. I initially experimented with the SubRip file format (.srt), with the idea of using video subtitles as the main way to view the analyses. A fellow PhD candidate, Braxton Sherouse, helped me create a Ruby script that would take my text files and convert them into suitably formatted .srt files. This approach proved to be problematic, as anything beyond a low density of simultaneous events quickly became unreadable. It also did not allow for any statistical analysis of the analyses.
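To give a sense of the conversion, here is a minimal sketch of that kind of text-to-.srt transformation (not Braxton Sherouse's actual script). It assumes input lines in the "m:ss – Stream : Comment" format shown above, and displays each subtitle for a fixed three seconds:

```ruby
# Hypothetical sketch: convert "m:ss – Stream : Comment" lines into SubRip entries.
# The fixed 3-second display duration is an assumption for illustration.

def to_timestamp(seconds)
  format('%02d:%02d:%02d,000', seconds / 3600, (seconds % 3600) / 60, seconds % 60)
end

def lines_to_srt(lines, duration: 3)
  lines.each_with_index.map do |line, i|
    time, rest = line.split(/\s*[–-]\s*/, 2)        # split "0:05" from the comment
    mins, secs = time.split(':').map(&:to_i)
    start = mins * 60 + secs
    [i + 1, "#{to_timestamp(start)} --> #{to_timestamp(start + duration)}", rest].join("\n")
  end.join("\n\n")
end

puts lines_to_srt(['0:05 – Material : Decide on ‘soft’ entry.'])
```

Each analysis line becomes one numbered subtitle block, which is exactly why density was a problem: overlapping decisions all compete for the same few lines of screen space.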
I created a spreadsheet using the analysis data shown above (time, stream, comment), and then began creating all kinds of graphs and charts from the data. The most useful of these was the static “constellation” view, showing the streams on one axis and time on the other. Here is the analysis for Everything. Everything at once. Once. (1a):
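The pivot behind a constellation view like this is simple: group the (time, stream, comment) rows by stream, so each stream yields the list of times at which a decision occurred. A sketch, with invented rows for illustration:

```ruby
# Invented example rows in the (time-in-seconds, stream, comment) shape
# described above; stream names follow the ones mentioned in the text.
rows = [
  [5,  'Material',  "Decide on 'soft' entry."],
  [12, 'Material',  'Develop bowing gesture.'],
  [20, 'Formal',    'Commit to opening section.'],
  [31, 'Interface', 'Reach for crossfader.'],
]

# One row of the constellation per stream: the times of its decisions.
constellation = rows.group_by { |_t, stream, _c| stream }
                    .transform_values { |rs| rs.map(&:first) }

p constellation
# => {"Material"=>[5, 12], "Formal"=>[20], "Interface"=>[31]}
```

Plotting each stream's time list as a row of points gives the scatter of "stars" per stream across the duration of the piece.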
There are some remarkable things in this analysis. The first is that Material decisions stop halfway through the piece, when the sonic material shifts towards being more electronic. This coincides with an increase in activity in the Interface/Interaction streams. More insights like this come from being able to visualize the analysis data this way.
I also produced activity rates within each stream, along with a trend line showing the overall trajectory of activity in the piece (or within each stream).
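One way to derive such a trend line (a sketch of the general technique, not necessarily the spreadsheet's method) is to bucket decision times into fixed windows, count events per window, and fit a least-squares line through the counts. Window size and event times here are illustrative:

```ruby
# Sketch: activity trend as a least-squares line through per-window event counts.
# window: bucket size in seconds (an assumption for illustration).
def trend_line(times, piece_length, window: 30)
  counts = Array.new((piece_length.to_f / window).ceil, 0)
  times.each { |t| counts[t / window] += 1 }       # events per window
  n  = counts.length
  xs = (0...n).to_a
  mean_x = xs.sum.to_f / n
  mean_y = counts.sum.to_f / n
  slope  = xs.zip(counts).sum { |x, y| (x - mean_x) * (y - mean_y) } /
           xs.sum { |x| (x - mean_x)**2 }
  [slope, mean_y - slope * mean_x]                 # [slope, intercept]
end

# Invented decision times over a 120-second piece.
slope, intercept = trend_line([5, 12, 20, 31, 95, 100], 120)
```

A negative slope indicates decision activity tapering off over the piece (or within a stream), a positive one that it is ramping up.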
In addition to the static, graph-based analyses, I did some general number crunching, calculating the minimum, maximum, and mean for each stream. I also produced a co-occurrence matrix, showing how often each stream went to each other stream (i.e. Material going to Formal six times in the piece). These static analyses proved insightful, and I imagine will provide even more insight once I have enough analysis data to correlate between individual analyses. This will allow me to notice tendencies that I may have on a subconscious, or even physiological, level, if my language-based rate of Material decisions theory is correct.
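Both kinds of metric fall out of the stream sequence directly. A hypothetical sketch, with an invented sequence of streams in decision order: per-stream counts (from which min/max/mean across streams follow), and a co-occurrence matrix counting how often one stream is immediately followed by another:

```ruby
# Invented sequence of streams, in the order the decisions occurred.
streams = %w[Material Material Formal Material Interface Formal]

counts = streams.tally                         # decisions per stream
mean   = counts.values.sum.to_f / counts.size  # mean decisions per stream

# Co-occurrence: count transitions from each decision's stream to the next.
co_occurrence = Hash.new(0)
streams.each_cons(2) { |a, b| co_occurrence[[a, b]] += 1 }

p counts         # => {"Material"=>3, "Formal"=>2, "Interface"=>1}
p co_occurrence[%w[Material Formal]]   # => 1
```

Doing this by hand in a spreadsheet for every analysis is exactly the tedious, error-prone step the interactive tool below automates.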
So with these static, spreadsheet-based analyses in hand, I decided I wanted something more interactive and musically useful. Not to mention something easier to produce: many of the metrics (such as co-occurrence) I had to calculate manually for each analysis, which is both tedious and error-prone.
I contacted Tom and asked him if he would be interested in putting together a better version of what I had built. Luckily he was. Tom, in addition to being a programmer, is an improvising saxophonist, so he was able not only to understand the motivations and functionality of this kind of framework, but to contribute to how it could best be displayed.
Working back and forth over a few weeks, Tom put together something interactive, compact, and significantly better than what I had cobbled together in a spreadsheet. On top of all of that, it uses only open-source technologies, fitting in well with my sharing ethos.
It is largely built around the D3 library, but uses some additional web technologies to allow for linked audio playback and dynamic recalculation of zoomed-in data.
Here is the playback/zoom section, along with the no-longer-static “constellation” view.
You can hover over each point to see the comment. You can click on any point (or the waveform) to begin playback from there. You can zoom in to a specific section of the piece using the selection bars at the top. Everything dynamically adjusts when you resize the viewable area.
There is a large trend chart view which allows you to see the trends for the overall piece, or within any given stream. It looks like this.
Everything, including the trend lines, recalculates when a new selection is made in the top part of the window.
Finally, there are the static metrics of minimum/maximum/mean/standard deviation/co-occurrence. All of these are calculated automatically, and they dynamically recalculate whenever a new selection is made.
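The recalculation-on-selection idea itself is straightforward: filter the rows to the selected time window, then recompute any metric over just that slice. A sketch with invented rows (the real tool does this in the browser via D3):

```ruby
# Invented (time-in-seconds, stream) rows.
rows = [[5, 'Material'], [12, 'Material'], [20, 'Formal'], [95, 'Interface']]

# Recompute metrics over only the rows inside the selected window.
def metrics_for_selection(rows, from, to)
  slice = rows.select { |t, _s| t.between?(from, to) }
  { events: slice.length, counts: slice.map(&:last).tally }
end

p metrics_for_selection(rows, 0, 30)
# => {:events=>3, :counts=>{"Material"=>2, "Formal"=>1}}
```

Every chart and metric in the tool is, in effect, a function of the current selection, which is what makes the whole display update when the selection bars move.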
So far I have analyzed three performances: all three videos from my initial Everything (1) set. In analyzing these videos I quickly learned that the ability to empathize with one’s own decision making, while watching a video, dissolves very quickly. I analyzed (1a) the day after the performance, then (1b) the day after that, and finally (1c) a day later. By the time I got to the third day, I found that I could only really infer what I was thinking by observing the results of that thinking (i.e. what I was doing). As a result, the data for the third analysis is quite different from the first two. I have kept it, and will likely use it as a control, showing what ‘bad analysis’ looks like.
I plan on analyzing all of my upcoming videos/performances and feeding them into this system, and have asked some other performers to contribute their own analyses. Once I have enough data, I, with the help of Tom, will come up with some more graphs/metrics to correlate between the data sets, showing tendencies over time and between different performers. Eventually, this will expand to have a page where one can upload their own analysis (as a .csv file, an mp3, and an optional video file), and the system will add it to a database of existing analyses. The user will then be able to view their analysis, or any analysis in the database, and then view correlated data between a selection of these analyses.
In addition to the analysis and correlation of solo performances, I plan on analyzing duo and trio improvisations, where each performer will analyze their own playing ‘blindly’, so that the data from each performer can then be correlated within the same performance. This will, undoubtedly, provide tremendous insight into the group dynamic and interplay happening between performers. It will, of course, require some completely different visualization tools.
You can follow the developments of this improvisation analysis framework on its static page here. As I add more analyses, and come up with new approaches to visualizing the data, I will update the static page and add any relevant links there.