The Making of Semi-Conducted

This page serves both as an introduction to Field (software) and as a look at the programming and compositional process behind Semi-Conducted, a piece for quartet and interactive video.

About Field

Field is an open-source digital-art-making environment that was originally created by Marc Downie at the MIT Media Lab and has since been developed and used by OpenEnded Group. They have worked with Field to create all of their recent artworks, which include 3-D animations, processed video, high-resolution displays, algorithmic art, dance collaborations, and more. The following statement is taken from OpenEnded Group's Artist Statement:

More than an authoring system, Field is in fact a system for creating authoring systems (a "meta authoring system," if you like). It allows users to fashion their own authoring environment for any given project; and as they work they can continue to adjust and even to recast this environment as the need arises.

If this statement doesn't make sense to you, or if it does but you don't see why it's important, don't worry. Hopefully by the end of this page you will understand how powerful this idea is, and how it sets Field apart from almost all other digital authoring environments out there (certainly any that I've used). It is perhaps the most central philosophical concept behind Field, because it allows the software to be both incredibly powerful and incredibly flexible and agile. Because Field is so flexible, I was able to design my own "authoring environment" that allowed me to construct a custom graphic score and to create a meaningful timeline for laying out animation keyframes. Neither of these features comes "pre-packaged" with Field, but by combining enough small elements, very flexible and powerful systems can be built from scratch.


Not Just Code, Not Just GUI

Field is not just another text-based programming environment. The canvas provides a strong visual metaphor that allows you to draw, arrange code, and execute code in ways that would be very difficult with pure text-based code. On the other hand, because you can (and must) write code (Python) to do most of the heavy lifting within each of your objects, you have much more fine-grained control than in purely "graphical" programming environments such as Max/MSP/Jitter, Pure Data, and Isadora, just to name a few. Having both paradigms available can give the right balance of rapid prototyping (not easy with pure code) and customizability (not easy with graphical programming).

The creators of Field took the idea of integrating code and GUI elements even further by allowing sliders, graphs, and other graphical elements to be embedded directly into the code. In the example below, I am using a slider (which returns a float from 0.0 to 1.0) to determine the value of stroke(). While the code is executing (in this case, drawing a single line that changes positions), I can drag the slider to different positions and see the resulting color change in real time. This is what is meant by rapid prototyping.
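
As a rough sketch of what that slider does, here is the equivalent in Processing's Python mode, with a plain constant standing in for Field's embedded slider widget (the live, draggable widget itself can't be reproduced in text):

    # A plain constant standing in for Field's embedded slider widget;
    # in Field you would drag it while this code runs.
    slider_value = 0.75

    def setup():
        size(300, 200)

    def draw():
        background(255)
        stroke(slider_value * 255)           # map 0.0-1.0 to grayscale
        line(20, 20, 280, frameCount % 200)  # a line that changes position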

Below is one example (of many) of how you can execute code using the canvas. In this example, each object draws a specific pattern of lines and colors, and after a certain amount of time, it passes control to the next object in the chain. For a bit of additional fun, I added two conditional "switches" into the path: when the slider is less than 0.5, control passes to the first object; when it is greater than 0.5, control passes to the second object. Finally, note that towards the end of the video, two separate chains of execution are started, both drawing to the same animation screen. This demonstration is somewhat contrived, but it is easy to imagine scenarios in which this type of complicated behavior is much more desirable than traditional control structures such as for and while loops.
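
As a rough plain-Python illustration of that control flow (the names here are hypothetical; in Field the "boxes" are objects on the canvas and the switch reads a real slider):

    # Each "box" runs for a while, then hands control onward; a
    # conditional switch picks the branch from a slider's value.
    def box(name, frames):
        for f in range(frames):
            print(name, "drawing frame", f)  # stands in for line drawing

    def run_chain(slider_value):
        box("opening", 30)
        if slider_value < 0.5:
            box("first branch", 30)    # slider below 0.5
        else:
            box("second branch", 30)   # slider above 0.5
        box("closing", 30)

    run_chain(0.25)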


Drawing on the Canvas

Being able to draw on the canvas was very important for the creation of Semi-Conducted, because this is how I eventually created my graphical score. By simply drawing my shapes onto the canvas, I could sweep the timeline 'playhead' over them to indicate when the events were to happen.

This example video demonstrates how Field uses Python to draw directly to the canvas. Next, the drop-down menu shows all the GUI elements that can be embedded directly into the Python code (just as they can be embedded on the canvas itself). These elements can be set to update the code in real time as they are used. Lastly, anything drawn to the canvas can be exported as a PDF by right-clicking on it.
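
In outline, drawing to the canvas looks something like the following; the FLine idiom is taken from Field's drawing tutorials, but treat the exact attribute names as approximate:

    # Build a filled red triangle and attach it to this box's drawing
    # (FLine idiom from Field's tutorials; details approximate).
    triangle = FLine()
    triangle.moveTo(0, 0)
    triangle.lineTo(40, -20)
    triangle.lineTo(0, -40)
    triangle.lineTo(0, 0)
    triangle.filled = 1
    triangle.color = Vector4(1, 0, 0, 0.8)  # RGBA
    _self.lines.add(triangle)               # show it on the canvas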


Support for Processing

If you are familiar with using Processing for video animation, you may be interested to know that Field supports full integration with Processing. That is, you can write the same (or very similar) code in Field and have it render in the Processing applet. In fact, that is exactly what I did for Semi-Conducted: the 3-D OpenGL spheres and lines are all rendered within the Processing applet. As demonstrated on Field's Processing Integration page, Field is an even better environment for using Processing than the Processing environment itself, since it lets you break free from the edit-compile-run cycle. (Know how you have to press that little "play" button every time you make a change to your code?) Within Field, you can be running some Processing code in an animation applet, change the code, and see it update live. Better yet, you can add a GUI slider directly into your Processing code and control some parameter, say rotation or size, on the fly as it runs. This type of workflow is ideally suited to rapid prototyping and is very much a part of Field's flexibility.
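
A sketch of that workflow, assuming the applet variable p that Field's Processing tutorials use (the rotation literal would again be an embedded slider in Field):

    # Processing code running live inside Field; 'p' is the applet
    # exposed by the Processing plugin (per Field's tutorials).
    import math

    rotation = 0.25  # in Field, an embedded slider dragged while running

    p.background(0)
    p.translate(p.width / 2, p.height / 2)
    p.rotate(rotation * 2 * math.pi)  # adjust on the fly via the slider
    p.rect(-40, -40, 80, 80)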


Keyframing Animations

To animate the motion of my 3-D spheres and cubes in space, I designed a keyframe editor within Field that worked with the built-in timeline plugin. Essentially, I designated a specific rectangle on the canvas in which my animator object would look for keyframes, which it could then order by their x positions. Each keyframe could be edited internally by opening it and adjusting GUI sliders corresponding to different parameters (such as sphere size, camera zoom, rotation amount, etc.). I could control how quickly a keyframe would be reached from the previous keyframe by moving the keyframe object closer to or further from its predecessor. Lastly, I had control over the "tween curve" between successive keyframes through graph objects attached to each keyframe. The curve drawn indicated how each of the parameter values would change from the start of the current keyframe box to the start of the next keyframe box.
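
In pseudocode terms, the animator's core job amounts to sorting keyframes by x position and interpolating parameters through each keyframe's tween curve. A minimal plain-Python sketch (the data layout is hypothetical, not Field's internals):

    # Sketch of the keyframe logic described above (hypothetical layout).
    def sample(keyframes, playhead_x):
        """Return interpolated parameters at the playhead position."""
        kfs = sorted(keyframes, key=lambda k: k["x"])  # order by x
        for a, b in zip(kfs, kfs[1:]):
            if a["x"] <= playhead_x < b["x"]:
                # Box spacing sets how fast b is reached from a.
                t = (playhead_x - a["x"]) / (b["x"] - a["x"])
                t = a["curve"](t)  # per-keyframe tween curve, 0..1 -> 0..1
                return {k: (1 - t) * a["params"][k] + t * b["params"][k]
                        for k in a["params"]}
        return kfs[-1]["params"]

    # Example: two keyframes, one linear curve, one ease-in (t*t).
    kfs = [
        {"x": 0,   "curve": lambda t: t,     "params": {"size": 1.0, "zoom": 2.0}},
        {"x": 100, "curve": lambda t: t * t, "params": {"size": 3.0, "zoom": 1.0}},
    ]
    print(sample(kfs, 50))  # halfway: {'size': 2.0, 'zoom': 1.5}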

The video below demonstrates most of the editing process. Notice that I started from totally flat keyframe curves, meaning there was no "tween curve" between keyframes. By adding just two such curves, I was able to go from three unconnected stills to a smoothly interpolated animation.


The Graphical Score

The score works as follows: there are four "staves" that represent the parts for the different players: red for flute, green for clarinet, blue for violin, and yellow for cello (get it? yellow cello). The four players follow their parts on computer screens and play when the timeline crosses one of their triangular "notes". The vertical overlap between a triangle and the time slider at any moment indicates how loudly the player should be playing that note. Thus, each triangular note inherently represents an attack and release envelope (see ADSR envelopes for more). During any given section, each player has four possible pitches they could be asked to play by the computer. The different pitches are represented as vertical translations of the triangle within the player's "staff space", so a higher triangle corresponds to a higher pitch.
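
Since each triangle doubles as its note's dynamic envelope, the loudness at any instant is just the triangle's height where the time slider crosses it. A small worked example (plain Python, with a hypothetical representation of a note):

    # Loudness of a triangular "note" at time t is the triangle's
    # vertical extent where the time slider crosses it.
    def loudness(note, t):
        start, apex, end, peak = note  # attack to apex, release to end
        if t <= start or t >= end:
            return 0.0
        if t < apex:
            return peak * (t - start) / (apex - start)  # attack ramp
        return peak * (end - t) / (end - apex)          # release ramp

    note = (0.0, 1.0, 4.0, 0.8)  # fast attack, slow release
    print(loudness(note, 0.5))   # 0.4, halfway up the attack
    print(loudness(note, 2.5))   # 0.4, halfway down the release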

When I was composing the piece, I needed to develop a method for drawing notes that didn't involve dragging them in one by one with the mouse, or adding them line by line with code. This is where Field's "meta-authoring" capability kicked in. By programming how I wanted the built-in GUI elements to affect my notes, I was able to prototype large numbers of phrase structures and note patterns simply by dragging boxes around and adjusting slider values.

This demo shows some of the details involved in creating the graphic score notation used in Semi-Conducted. Keyframes were used to store the configurations of "phrase structures" and morph between them. These "phrase structures" consisted of up to four notes per player, each of which could take one of four pitches. Other details included note length, start position, and note envelope shape. All the data within each of these "phrase structures" (and there is quite a lot of it) was set using GUI sliders embedded in the text.

This video shows how the compositional process worked once a few phrase keyframes had already been made. Edits performed within this demo include changing the phrase duration slider, moving the left-right (time) position of a keyframe, and adjusting the "length" of a keyframe. (In traditional keyframing, keyframes don't have a "length", as they represent discrete instants in time. However, in my formulation, the length of a keyframe box determines how many times the given phrase structure repeats unchanged before morphing into the next phrase.)

For this use, I found it unnecessary to include customizable keyframe curves (as seen above in Keyframing Animations), so all "curves" between keyframes are linear.
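
Concretely, the hold-then-morph behavior amounts to something like the following (plain Python, with a hypothetical data layout; only the continuous parameters are interpolated):

    # Hold a phrase for the keyframe box's "length" (repeats), then
    # morph linearly toward the next phrase (hypothetical layout).
    def morph(a, b, t):
        return {k: (1 - t) * a[k] + t * b[k] for k in a}

    def phrase_at(kf_a, kf_b, x):
        hold_end = kf_a["x"] + kf_a["length"]
        if x < hold_end:
            return kf_a["params"]  # repeat unchanged
        t = (x - hold_end) / (kf_b["x"] - hold_end)
        return morph(kf_a["params"], kf_b["params"], t)

    kf_a = {"x": 0,   "length": 40, "params": {"note_len": 1.0, "start": 0.0}}
    kf_b = {"x": 100, "length": 20, "params": {"note_len": 2.0, "start": 0.5}}
    print(phrase_at(kf_a, kf_b, 70))  # halfway through the morph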


Pitch Tracking with Max/MSP

Hopefully, when you watch the video on the main Semi-Conducted page, you can tell that the rotating spheres and shapes respond to the players' notes by lighting up with different colors. This was done by listening to the audio output of each instrument and performing pitch detection to determine what note it was playing. For any given section, each player has up to five shapes they can trigger by playing different notes. Each player's notes are colored (red, green, blue, yellow) in the same way their scores are colored.

The Max/MSP patch I constructed takes four audio inputs (each instrument is individually miked), runs each signal through its own fiddle~ object, and compares the detected MIDI pitch to the five expected pitches for the given section and instrument. If a pitch matches, the patch sends the amplitude of that pitch (also reported by the fiddle~ object), along with the instrument number and pitch number, to Field via OSC. The Max/MSP patch also receives OSC messages from Field whenever the piece reaches a new section, telling Max/MSP to step to the next preset, which contains a different set of notes to listen for.
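
For reference, the matching-and-forwarding step amounts to something like this Python sketch using the python-osc library (an illustration of the patch's logic only; the real version is a Max patch built around fiddle~, and the port and OSC address here are hypothetical):

    # Python sketch of the Max/MSP patch's matching logic.
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)  # hypothetical Field port

    # Expected pitches per instrument for the current section.
    expected = {1: [60, 62, 65, 67, 70]}

    def on_fiddle(instrument, midi_pitch, amplitude):
        """Called per detected note; fiddle~ supplies pitch and amplitude."""
        if midi_pitch in expected[instrument]:
            pitch_number = expected[instrument].index(midi_pitch)
            # Forward instrument, pitch slot, and loudness to Field.
            client.send_message("/note", [instrument, pitch_number, amplitude])

    on_fiddle(1, 62, 0.7)  # e.g. one instrument plays D4 at moderate volume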


The Compositional Process

The process of composing and creating all aspects of Semi-Conducted took place gradually over the course of my senior year at Oberlin. I was working closely with my private composition teacher within the TIMARA department and was also taking an independent study with my advisor from the Computer Science department for help with the OpenGL and graphics components of the piece.

Since this was my first piece made using Field, I spent a good deal of my time learning how to program within the environment and how to construct various structures such as keyframing, OSC send/receive, canvas scrolling, etc. However, I felt the need to work simultaneously on both the software and the music composition. This was a delicate balance: in order to compose something, I needed to know whether I could ultimately create the framework for it within the software, and to create the software, I needed to know how it was going to be used within the composition.

Once I had developed the notion of creating "phrase structures" (see The Graphical Score above), I set to work composing a whole lot of these with pencil and paper. Generally, these were about one or two bars of notes composed for the four instruments, although they did not need to conform to any particular time signature. In addition to "phrase structures", the music in the piece is also made up of several sections that are notated in a more traditional manner. Once I had a collection of phrases and sections notated on paper, I cut them all out so I could arrange them in whatever order I liked.

Once I had arranged my composed units into an overall structure, it was time to implement the phrase structures within Field, write out the score with notation software (I used Sibelius), define the right pitches to track within Max/MSP, and construct animations to go with each section. Again, it was necessary to work simultaneously on all of these fronts in order to make sense of the final output. Since working on the piece during this phase involved such an elaborate setup, I found myself taking large chunks of time to set up and work in TIMARA Studio 2.


Future Documentation

Many of the details of actually constructing the code that puts this work together are omitted from this page. Soon, I hope to create a set of small tutorial files, to be opened within Field, that contain working versions of all aspects of the piece.