About Headless Monkey Attack
Headless Monkey Attack is an electronic (and sometimes also acoustic) music project founded by composer Ryan Carter. At its core, Headless Monkey Attack performs live electronic music that is synthesized in real time from code that responds to input from a video game controller. This controller (the "Gametrak") features two retractable tethers that can be pulled in any direction. By connecting the controller to my computer (this is Ryan speaking), I can manipulate whatever aspects of the sound I've coded to be interactive during the performance. The code also incorporates some randomized elements, so my performance is partially in response to events that I can't entirely predict. The music is coded to ensure certain features are consistent (the duration of each track is predetermined, as are the duration and order of each section within a track), and the randomized features are kept within ranges of possible values that I've planned in advance. Each performance sounds in some ways the same and in some ways different.
Aesthetically, the music draws from different genres of electronic dance music (there's some vaguely dubstep-y and glitch-y stuff) with a global sense of form more inspired by the long history of the Western classical tradition than the world of EDM.
More Information
Too long; didn't read.
I wave my arms around and the computer makes weird/pretty/destroyed sounds.
Sure! Well, some of it is pretty danceable. Some of it, not so much.
I live in upstate New York most of the time and Cleveland, Ohio some of the time.
Almost all of the sound is synthesized from code written in RTcmix, which I embed in Max/MSP using the rtcmix~ object. I pipe data from the Gametrak controller into the RTcmix code so that I can manipulate aspects of the sound (at the level of the synthesis) during performance. For some of the synthesis, I use SPEAR to analyze a sound (say, my own voice) and I take data from one FFT frame to generate an RTcmix wavetable (using a command-line utility I programmed, for which the source code is available on this site). Some of the DSP and audio file handling takes place in my Max patch, which also functions as a composing and audio analysis platform.
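The FFT-frame-to-wavetable step can be sketched roughly like this. To be clear, this is a hypothetical re-implementation, not the actual utility (whose source is available on this site): it assumes the frame's partials arrive as (frequency, amplitude) pairs, snaps each partial to the nearest harmonic of the lowest one, and prints an RTcmix-style maketable("wave", ...) line of harmonic amplitudes.

```python
# Hypothetical sketch of an FFT-frame-to-RTcmix-wavetable converter.
# All names and the (frequency, amplitude) input format are illustrative
# assumptions; the real command-line utility may work differently.

def partials_to_wavetable(partials, n_harmonics=16, table_size=1000):
    """Map (frequency, amplitude) partials from one analysis frame onto
    harmonic slots and emit an RTcmix maketable("wave", ...) line."""
    if not partials:
        raise ValueError("need at least one partial")
    # Treat the lowest-frequency partial as the fundamental.
    f0 = min(freq for freq, _ in partials)
    amps = [0.0] * n_harmonics
    for freq, amp in partials:
        h = round(freq / f0)          # nearest harmonic number
        if 1 <= h <= n_harmonics:
            amps[h - 1] += amp        # sum partials landing on one slot
    peak = max(amps) or 1.0
    amps = [a / peak for a in amps]   # normalize so the table peaks at 1.0
    args = ", ".join(f"{a:.4f}" for a in amps)
    return f'wavetable = maketable("wave", {table_size}, {args})'

# Example: three partials near the first three harmonics of 220 Hz.
print(partials_to_wavetable([(220.0, 0.9), (440.0, 0.5), (661.0, 0.2)]))
```

Snapping partials to integer harmonics is lossy (a wavetable oscillator can only reproduce a harmonic spectrum), which is part of the appeal: the resynthesized sound is a pitched caricature of the analyzed one.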
There are music programming languages in which you can write code that makes or processes sound. I write code in one of these languages, and that code generates the music. The sound isn't synthesized until I press the space bar; it's computed live, in real time. This allows me to plug a video game controller into my computer so that I can control certain aspects of the sound as it is generated.
My background (and about half of my current work) is in acoustic composition (writing things like string quartets and music for orchestra), but I've also had an interest in electronic dance music for years. In early 2011, I was taking a seminar on programming musical apps, taught by Brad Garton at the Computer Music Center at Columbia University, and I had an idea for an iPhone app: I wanted to compose an album of interactive electronic music that would respond to the way the listener held or moved the phone. I had no programming experience, though, and the project took a year and a half to build. In the summer of 2012, I released iMonkeypants on the App Store. In developing the app, I frequently ran into resource constraints; the musical ideas I had demanded too much from mobile devices, and the app would crash unless I simplified some of the music. As soon as I released iMonkeypants, I began experimenting with making music with the patch I had used to prototype the app, running on my laptop, which is much more computationally powerful than a phone. In the fall of 2012, I composed a piece for the Princeton Laptop Orchestra, and during this collaboration I discovered the Gametrak controller. I had been thinking for years that I'd like to form a collaborative music project, and I began working on music that would be written in the same programming language as my app, but would expand both the interactivity and the musical complexity of the app into a live act that could be performed alone or in collaboration with other musicians.
In early 2013, I finished producing Headless Monkey Attack's debut album, Latency in the System, which I performed in April 2013 with percussionist Dennis Sullivan. The project is under continuous development and more shows are in the works.
I'm glad you asked. Perhaps my favorite writing genre is the half-baked manifesto that I make no attempt to defend.
Much of my work as a composer concerns the impact of new technologies on how we experience music. This has, over the years, taken many forms, from a tongue-in-cheek Luddism to my current preoccupation with fragmented listening practices. The music of Headless Monkey Attack responds to two concerns I have at the moment: 1) as we rely more and more on automated systems, users increasingly accept, uncritically, the easiest paths those systems present, which may not be the best choices; and 2) the same technologies responsible for the ubiquity of music in our culture encourage fragmented, distracted, and incidental listening practices.
1) I built Headless Monkey Attack from scratch. I didn't invent the RTcmix programming language or the Max/MSP programming environment, but all of the code and patches are developed from the ground up. Even this not-particularly-fancy website is more or less made from scratch, each function hand-coded with love. [ + Like this interactive "footnote." ] One of the advantages of designing a system is that you define its capabilities and limitations. Every system makes some things easy to do, some things hard to do, and some things impossible. That's fine; there's no way around it. But uncritical acceptance of a system allows the technology to dictate what is possible and - more importantly - what is not. A MIDI-based commercial DAW (digital audio workstation) may, for example, assert that quintuplets don't exist. [ + They do. ] When you design your own system, you're forced to choose what will be possible, which leads to an appreciation of the necessity of making these decisions and an understanding of their sometimes arbitrary nature. [ + Many of these ideas either resonate with (though developed independently of) or are directly influenced by the views Jaron Lanier propounds in You Are Not a Gadget: A Manifesto. ]
2) For a little over a century, the development of technologies for recording and distributing sound has fundamentally altered how we hear and create music. Before audio recording existed, the only way to hear music was through live performance, which meant listeners were likely to hear an entire piece (or at least an entire movement) at once. Someone in the time of Beethoven would only have heard his Fifth Symphony in a concert hall and would probably have listened to the whole thing. [ + Sure, someone could have heard a fragment by dropping in on a rehearsal, and the domestic practice of playing piano reductions of major orchestral works became popular in the 19th century. ] The music was constructed to really make sense only when experienced in its entirety. The first four notes of Beethoven's Fifth Symphony are - in isolation - famous, but trivial and meaningless. Today, we have unlimited access to most of the music ever made at the push of a button, but it's as easy to stop listening as it is to continue. People who make music tend to feel (consciously or not) a pressure to make everything sound immediately gratifying. [ + Theodor W. Adorno foreshadowed this concern in Current of Music, a study of radio music conducted while he was living as a refugee in New York City from 1938 to 1941. He referred to this listening as "gustatory," in that it feels immediately and momentarily good. ] Ultimately, though, much more satisfying listening experiences can come from music in which each detail is only truly meaningful in the context of the entire work. This "integral listening" may require a time commitment, but it is worthwhile.