
PREPARED MUSIC FIELD

As part of an ongoing collaboration between Scott Christian of Fresh Ink and the Digital Arts Center at UNC Charlotte, composer Ian Dicke has written User Agreement, an original piece for performance in a selected number of museums and public settings.

 

These performances will occur in a prepared music field that will allow the audience to move through the space, using their smartphones to engage both with live performers and digitally delivered augmented sound. Each member of the audience will have a unique listening experience depending on their position and movement.

The spatial setting informs the prepared music field both through the diffuse spacing of the musicians and by the configuration of performances in multiple settings using topological instruction. The movement through the space affords a new method of spatial engagement for the audience.

 

The historical constellation of music engages the prepared music field by developing the range of instrumental, found, and manufactured sounds as media for the piece. This project fits within a tradition of innovation and inclusion that stretches back at least a century.

 

The technological matrix provides a new medium of engagement with the prepared music field through the use of smartphone technology to provide precise location information and to supplement or alter the live acoustics. This project uses technology to actively prepare and interact with the space. The presentation of this piece will begin with a premiere in Charlotte, followed by performances at sites in Rome, New York, Los Angeles, and Boston.

 

 
SPATIAL SETTING

To understand the role that space might play in this project, we traced the role of the performance venue in the history of music. The slow pace and single musical line of Gregorian chant are acoustically matched to the long reverberation time of medieval cathedrals. The polyphonic Baroque work of Bach and Vivaldi coincides with musical performance migrating into smaller rooms with harder surfaces and more limited reverberation times. An even more direct connection between space and performance was struck by Wagner, who wrote Parsifal specifically for the Festspielhaus in Bayreuth, and by Berlioz, whose Requiem was intended for performance in Les Invalides.

 

While the acoustics of concert halls and other musical performance spaces are still relevant, the advent of audio recording has altered the relationship between musical performance and audience experience. The ability to record and manipulate sound has become increasingly sophisticated with the use of digital processing, but at the cost of disengaging from a spatial setting.

 

One of the intentions of this project, then, is to re-engage the space of the performance as an active part of the composition: not as a return to the historical settings of music, but to make space once again an important part of a composer’s intentions.

 

For our project, the engagement with space has two aspects. First, players will be distributed in space such that the audience can move freely amongst the players, rather than being fixed in a rigid dialectic. This generates a variable and individualized performance, depending on the position of listeners and on the nature of the digital manipulations.

 

The project engages space not only at the level of the individual listener, but also through the venue of the performance. After considering a variety of settings, we chose the museum: though it is not configured as a conventional performance space (although musical performances at museums are not at all uncommon), it retains presence as a venue in which particular and framed attention is paid to whatever is within. The audience will see and hear whatever occurs with heightened expectations.

 

Second, we have redefined the museum as a spatial matrix, which affords a fresh interpretation in each venue. Individual players are positioned based on a set of instructions that describe the topological relationship of each player to the next in the form of a recursive rule set:

Player A will be able to see Player B

Player B will be able to see Player C

Player A will not be able to see Player C

Etc.

Player A will not be able to hear Player B

Player B will be able to hear Player C

Etc.
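
The rule set above can be read as a list of constraints against which a candidate placement of players in a venue is tested. A minimal sketch of that reading follows; the data structures and helper names are our own illustration, not part of the piece:

```python
# Sketch: encoding the topological rule set as constraints that a proposed
# placement of players in a venue can be checked against.

# Each rule is (relation, player_a, player_b, required): required=True means
# "A must see/hear B"; False means "A must not see/hear B".
RULES = [
    ("see", "A", "B", True),
    ("see", "B", "C", True),
    ("see", "A", "C", False),
    ("hear", "A", "B", False),
    ("hear", "B", "C", True),
]

def satisfies(rules, can_see, can_hear):
    """Check a candidate placement against the rule set.

    can_see / can_hear are predicates over player pairs, reflecting the
    actual geometry and acoustics of the chosen museum."""
    checks = {"see": can_see, "hear": can_hear}
    return all(checks[rel](a, b) == req for rel, a, b, req in rules)

# Example venue: sightlines and audibility observed on site.
sightlines = {("A", "B"), ("B", "C")}
audibility = {("B", "C")}

ok = satisfies(
    RULES,
    can_see=lambda a, b: (a, b) in sightlines,
    can_hear=lambda a, b: (a, b) in audibility,
)
print(ok)  # True: this placement satisfies all five rules
```

Because the rules are relative rather than tied to fixed coordinates, the same rule set yields a different placement in each venue.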

This idea of space is also found in the work of the painter and sculptor Sol LeWitt. His wall drawings are presented as a series of instructions with a set of drawing operations. LeWitt chose not to execute these drawings himself, leaving the actual drawings to other artists. His descriptions are clear, but leave room for interpretation, not unlike the score of a piece of music.

 

 

HISTORICAL CONSTELLATION

To better place our project in an historical context, we have created a model for interpreting experimental music works in the last 100 years, all of which vary in terms of historical merit, intention, performance, and use of technology. First, we distinguish between three basic understandings of experimental music: 1) Traditional Instrumentation, characterized by acoustic orchestration for conventional instruments; 2) Electronic/Manufactured Sounds, characterized by electronically generated and manipulated sounds that do not naturally occur; and 3) Natural/Found Sounds, characterized by observed or recorded sounds (noises) in their natural state.

 

We then position precedent works on the resulting 3-point field. Some works strongly favor one specific idea of experimental music, while most fall between two or three categories. For example, the 2009 performance of Henry Brant’s Orbits at the Guggenheim—involving a spatial arrangement of 80 trombonists, sopranos, and an organist—is firmly placed under Traditional Instrumentation. However, the 1995 performance of Karlheinz Stockhausen’s Helikopter-Streichquartett—which involved a live quartet inside four flying helicopters, with video streamed to a concert hall—is halfway between Traditional Instrumentation and Natural/Found Sounds.

 

Most centrally positioned, Edgard Varèse’s 1958 Poème électronique—an abstract combination of synthetic music, found sounds, and instrumentation, all presented through 200-plus speakers with accompanying video projections—demonstrates parts of all three categories.

 

Next, we layer in five accompanying elements that reflect additional information on each piece: Audience Participation, Human Computer Interaction, Visual Merged Media, Spatial Augmentation, and Abstracted Musical Score. We find that three of the Traditional Instrumentation pieces have abstracted musical scores, while only one piece outside of that category does. Likewise, all works noted for Visual Merged Media seem to be influenced by Electronic/Manufactured Sounds.

 

Another data layer shows a loose interpretation of “compositional styles,” or the intent of the composer. We see that orchestral music is centered on Traditional Instrumentation, and Sound Art stems from Natural/Found Sounds. Technology is a common theme that flows through most of the chart but barely touches Traditional Instrumentation. Most fascinatingly, Musique Concrète—the theory that found sounds and technologically manipulated sounds can be combined to create a new musical synthesis—moves across the chart in malleable fashion.

 

We also organize the precedents on a timeline so as to follow the lineage of different experimental music theories, technologies, and historical events affecting music. We begin with the Italian Futurist Luigi Russolo’s Macchina Tipografica (1913), one of the first works for noise generators, sound recordings, and live performance. Russolo’s manifesto The Art of Noises extensively categorized types of sounds and included instructions on how they should be mechanically generated as well as combined and utilized for future orchestration. Russolo concludes, “...one day we will be able to distinguish among ten, twenty, or thirty thousand different noises. We will not have to imitate these noises but rather to combine them according to our artistic fantasy.”

 

Next, we map the loose overlap of compositional approaches. We find a close correlation between technology and sound art, while orchestration only makes the occasional leap into a different field. Musique Concrète almost disappears after the invention of the Moog synthesizer and other sound-generating technology, but returns in the 2000s. Most interestingly, all music forms deviate, split, and overlap wildly within the last decade.

 

On both diagrams, we also include a past Digital Arts collaboration. In 2013, the Fresh Ink Music Series and our Digital Arts Center prepared a merged-media performance of Morton Feldman’s Crippled Symmetry in the Storrs Hall salon. Audience members reclined on beach chairs on either side of the centered musicians while watching manipulated visuals projected onto the length of the salon ceiling. 

 

 
TECHNOLOGICAL MATRIX

If mobile computing applications are to be part of a prepared music field, we need to know both the technical possibilities and their relationship to the music performance. While location-aware applications immediately call to mind the Global Positioning System, GPS becomes widely inaccurate and unreliable indoors because of the effects of building materials on the signal. Indoor positioning is a current research field in computer science, and there are a number of approaches that are feasible for this project.

 

Bluetooth Beacons

Bluetooth beacons are a simple way to emulate the characteristics of GPS. Acting like a single satellite, each beacon allows individuals to know their rough proximity to it (close, medium, or far). This allows two types of interaction: when a phone is within range of the signal we can trigger an on/off effect, or we can modulate an effect based on the phone’s proximity to the beacon.
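
These two interactions can be sketched as follows, assuming a hypothetical scanning layer that reports each beacon’s received signal strength (RSSI, in dBm); the thresholds here are illustrative and would need on-site calibration:

```python
# Sketch: beacon proximity interactions. RSSI thresholds are illustrative.

def zone(rssi_dbm):
    """Bucket a beacon's signal strength into the three proximal zones."""
    if rssi_dbm > -60:
        return "close"
    if rssi_dbm > -80:
        return "medium"
    return "far"

def on_off_effect(rssi_dbm):
    """Interaction 1: the effect is simply on whenever the beacon is heard."""
    return rssi_dbm is not None

def modulated_gain(rssi_dbm, floor=-90, ceiling=-50):
    """Interaction 2: map proximity onto a 0.0-1.0 effect level."""
    clamped = max(floor, min(ceiling, rssi_dbm))
    return (clamped - floor) / (ceiling - floor)

print(zone(-55))            # close
print(modulated_gain(-70))  # 0.5
```

In practice the raw signal fluctuates, so a real implementation would smooth the RSSI readings before classifying them.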

 

To further emulate GPS, we can combine three of these beacons to find an approximate location in space by identifying the proximal zone for each beacon (e.g., far, close, far). This allows us to do one of two things: we can either have three different effects that are modulated simultaneously, or we can have a single complex effect with several variable qualities we can adjust.
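
A sketch of this three-beacon combination follows; the mapping from zone signature to effect parameters (reverb, delay, pitch) is purely illustrative:

```python
# Sketch: combining the proximal zones of three beacons into a coarse
# position signature, then driving one complex effect from it.

def signature(zone_a, zone_b, zone_c):
    """A coarse fingerprint of the listener's location, e.g. (far, close, far)."""
    return (zone_a, zone_b, zone_c)

def effect_parameters(sig):
    """Option 2 from the text: one complex effect whose variable qualities
    (here reverb, delay, pitch shift) are each driven by one beacon."""
    level = {"close": 1.0, "medium": 0.5, "far": 0.0}
    reverb, delay, pitch = (level[z] for z in sig)
    return {"reverb": reverb, "delay": delay, "pitch": pitch}

params = effect_parameters(signature("far", "close", "far"))
print(params)  # {'reverb': 0.0, 'delay': 1.0, 'pitch': 0.0}
```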

Smartphone Accelerometers & Gyroscopes

 

Accelerometers and gyroscopes are prominent features of nearly all smartphones. While Bluetooth beacons work only with the last few generations of phones, these sensors will work on every smartphone, starting with the first generation. Measurements from these sensors let us track people’s relative movement within a space, which could be used to make small adjustments to the sounds people are hearing. They can also form a backup system in case the beacons’ signal is intermittent. Accelerometers could further be used to modify the sounds people hear within a proximal zone (such as “far”), so that small movements will impact the music along with large movements.
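
One way the accelerometer could nudge the sound inside a proximal zone is sketched below. Rather than attempt true indoor dead reckoning, it accumulates recent motion energy and maps it onto a modulation depth; the sample values and scaling are illustrative, not a real sensor API:

```python
# Sketch: recent accelerometer energy drives a 0.0-1.0 modulation depth.

from collections import deque

class MotionModulator:
    def __init__(self, window=50, scale=0.2):
        self.samples = deque(maxlen=window)  # recent motion magnitudes
        self.scale = scale

    def feed(self, ax, ay, az):
        # Magnitude of acceleration minus gravity (~9.81 m/s^2 when still).
        magnitude = (ax * ax + ay * ay + az * az) ** 0.5
        self.samples.append(abs(magnitude - 9.81))

    def modulation(self):
        """Near 0.0 when still; grows toward 1.0 as the listener moves."""
        if not self.samples:
            return 0.0
        energy = sum(self.samples) / len(self.samples)
        return min(1.0, energy * self.scale)

m = MotionModulator()
m.feed(0.0, 0.0, 9.81)   # phone at rest: essentially no modulation
m.feed(3.0, 0.0, 9.81)   # a small movement: modulation rises slightly
print(m.modulation())
```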

 

Creating Mobile Effects for the Audience

There are two options available to us when thinking about effects we can give the audience through a mobile app. First, we can program a MIDI controller in the app to create sounds, or use prerecorded music that will be layered on top of the live sounds. Second, we can use the built-in microphone to record the live sound and modify it using effects on the phone. The advantage of modifying live sound is that it will always be synchronized. Synchronization with a MIDI controller is discussed in the next section, but should not be any more complicated.

 

When using prerecorded sounds, the two options are including the sounds in the app, or connecting to a central server over WiFi to stream or download the music. While not incredibly hard, streaming presents the most risk for complication since it relies on the quality of the existing wireless at the exhibition location.

Synchronizing Musicians & Mobile Sounds

 

To synchronize all of the music in the exhibition when using a MIDI controller, we have decided to include a metronome in the mobile app. Audience members will be unaware of this option unless they browse through the settings; musicians will simply go to the options and flip a switch so that they hear the metronome instead. By linking the metronome to the clock on each smartphone, we can easily synchronize musicians spread across the exhibition space without the need for visual or auditory contact.
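
The clock-linked metronome can be sketched as follows. This assumes the phones’ clocks are already synchronized (e.g., via network time), which is our assumption rather than a detail settled by the piece; every device then computes the same beat from an agreed start time and tempo:

```python
# Sketch: a shared metronome derived from each phone's (synchronized) clock.

def beat_position(now_s, start_s, bpm):
    """Return (current beat index, seconds until the next beat)."""
    beat_len = 60.0 / bpm
    elapsed = now_s - start_s
    beat = int(elapsed // beat_len)
    until_next = beat_len - (elapsed % beat_len)
    return beat, until_next

# All devices agree on a start time and a tempo of 120 BPM.
beat, until_next = beat_position(now_s=100.25, start_s=100.0, bpm=120)
print(beat)                   # 0 (we are within the first beat)
print(round(until_next, 2))   # 0.25 (the next click is a quarter second away)
```

Because each phone evaluates the same function against the same reference time, no click signal ever needs to travel between devices.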

The use of prerecorded or streaming music with the option to synchronize as an alternative to a metronome is also possible.

 

Visualization

To the audience, the existence of sensors and triggers that can change sound is largely irrelevant; how the project is technically implemented does not need to be, and should not be, completely evident to them. The audience will, however, want some control over, or ability to predict, changes to the sounds as they move through the physical space. This can be thought of as a visualization of the abstract space of the beacons and sound effects. Naively, this could be a simple map of the space with the sensing areas around each beacon. From an artistic perspective, however, a map is not a good visualization. We will need to develop some abstraction of the map, or of the fields involved in sound generation and change. This visualization should recognize that we will not always know the position of the audience member (such as when they are outside of a beacon’s sensing area) and should not fully predict changes to the sound, in order to preserve a sense of discoverability inside the system.
