Introduction:

Problems With Liveness in Laptop Performance

Liveness in electronic music is a topic that has been researched and written about extensively (Emmerson 2007; Croft 2007; Sanden 2013). It is not within the scope of this project to address the full spectrum of issues relating to liveness. Rather, I will focus on specific problems that I have encountered in my performance practice, and which have affected and interested me. I will then propose the personal solutions to these problems that I have arrived at through research.

Over the last three years I have gradually transitioned from performing with acoustic guitar to electric guitar, no-input mixer, and finally laptop. With each change of instrument I have noticed a significant drop in both personal and audience engagement. Primarily, this appeared to be due to a reduction and simplification of my own physical interaction with each instrument, combined with a lack of obvious causality between gesture and resultant sound.

Opaque Gestures

(The Email Problem)

"How do I know you aren't just checking your emails?" will likely be a question familiar to any musician who regularly performs live music with a laptop as their sole instrument. As observed by Bown et al. (2014), this common criticism stems from the visual opacity of the performer's actions; there is often little perceptible link between gesture and resultant sonic event. This can be unsatisfying to watch, especially for the concert-goer who is used to seeing performers interact with more traditional instruments. In his notion of 'Corporeal Liveness', Sanden (2013) posits that "Music is live when it demonstrates a perceptible connection to an acoustic sounding body." The use of a standard laptop interface obscures this connection.


The Usual Solutions:

- Gestural Controllers

A common solution to this disconnect is the incorporation of gestural controllers. Gestural controllers such as the Myo Armband explicate the causality between gesture and sound. This area of human-computer interaction (HCI) has been explored at length in the work of computational artists such as Atau Tanaka.


Many of these interfaces mimic traditional acoustic instrumentation. This regressive attitude surely limits the potential of computer music. Computer technology has the capacity to transcend "traditional" instrumentation, to be sui generis (Koenig 1980, p. 111); why pantomime pre-existing tools?

- Live Coding

The practice of live coding deals with this disconnect by projecting the performer's screen, making the process of sound creation transparent to the audience.


Instrument/Speaker Dislocation

Another issue which arose as I began to play with instruments requiring external amplification was that of dislocation between the instrument (laptop) and the sound source (speaker). It is typical for laptop performance to be amplified through a PA system, which will usually be a significant distance from the laptop. This creates a surreal and jarring disconnect which makes a performance seem inauthentic. This is clearly not an issue with acoustic instruments, where the instrument is itself an amplifier. In some circumstances this dislocation can be so extreme as to affect gestural causality: in large venues, the speakers can be so far from the performer that a perceptible delay is experienced between instrumental gesture and resultant sound. This issue is in some rare cases solved by the inclusion of built-in speakers; an example of this is the EMS Synthi.


Finding My Own Solution

I am not interested in adopting a readymade solution to these issues, as this would likely not be appropriate to my individual practice. Instead I will undertake experimental research (De Assis 2018) in order to discover a personal solution that is relevant to my own practice.