Category: News


Ludo2018, the Seventh European Conference on Video Game Music and Sound, will take place April 13th–15th at HMT Leipzig, Germany.

This seventh annual conference is focused on the theme of ‘Soundscapes and Interfaces’. The conference will feature keynote addresses and sessions by:

  • Michael Austin (Howard University), editor of Music Video Games (Bloomsbury, 2016)
  • Adele Cutting, BAFTA-winning audio professional whose credits include the Harry Potter and The Room franchises
  • Kristine Jørgensen (University of Bergen), author of Gameworld Interfaces (MIT, 2013)

Draft programme and abstracts available here!

The conference is hosted by HMT Leipzig and the University of Leipzig. We are grateful for further sponsorship from Stiftung Digitale Spielekultur and EA Blog Digitale Spielkultur.


Registration is now open. Please fill out the Google form here.

Location & Travel

The conference will take place at the recently founded Zentrum für Musikwissenschaft, which is a collaboration of the Hochschule für Musik und Theater „Felix Mendelssohn Bartholdy“ (or HMT for short) and the University of Leipzig.

The conference venue is the HMT, which is within 15 minutes' walking distance of Leipzig Hauptbahnhof (main station) and located on the edge of the city centre. The address is:

Hochschule für Musik und Theater „Felix Mendelssohn Bartholdy“
Dittrichring 21
04109 Leipzig
(HMT Homepage)

The HMT is barrier-free and provides disabled parking spaces. Please let us know in advance if you need any assistance, so we can make sure that a parking spot will be available.

By airplane
You can travel to Leipzig/Halle airport directly if you find a good connection from your home airport. There is a direct S-Bahn connection from the airport to Leipzig Hauptbahnhof (ca. 15 minutes travel time).

Other options are flying in via Berlin (Tegel or Schönefeld airport) or Frankfurt International Airport.

  • Berlin Tegel has a direct bus connection to Berlin Hauptbahnhof (main station, Bus TXL).
  • Berlin Schönefeld is connected to Berlin Südkreuz by bus, S-Bahn or local train.

Both railway stations, Berlin Hauptbahnhof and Berlin Südkreuz, have direct high-speed train connections (ICE/IC) to Leipzig Hauptbahnhof. For information regarding travelling by train see the item By train below. Estimated travel time from the airports to Leipzig Hauptbahnhof is ca. 2 hours.

Frankfurt International Airport has a train station and is connected to the high-speed rail network (ICE). There are some direct connections to Leipzig. For information regarding travelling by train see the item By train below. Estimated travel time from Frankfurt Airport Railway Station to Leipzig Hauptbahnhof is between 3½ and 4 hours.

By train
Leipzig Hauptbahnhof (main station) is located on the northern edge of the city centre. Information about schedules, train tickets etc. is available on the Deutsche Bahn website.

Regarding travelling by train in Germany: when buying your ticket via the Deutsche Bahn website in advance, there are usually two rates offered:

  • The cheaper rate (saver fares) is bound to the specific train connection you booked – so if you miss that specific train, the ticket is invalidated. You can also make seat reservations for 9 € extra on ICE/IC trains, but the same applies here: if you miss your train, the reservation is gone as well.
  • The standard rate (Flexpreis) is usually more expensive, but this ticket allows you to take any connection, so in case you miss your train, just jump on the next one.

For more information consult the Deutsche Bahn website or information desks at the train station you are starting from.

By car
Leipzig is connected to the Autobahnen A9 (Berlin – Munich), A14 (Wismar – Nossen) and A38 (Leipzig – Göttingen). Directions can be found on the City of Leipzig website.

Please note: as the HMT is located right next to the city centre, parking can be very difficult, if not impossible. The HMT only provides disabled parking spaces. Please let us know in advance if you need any assistance, so we can make sure that a parking spot will be available.


There are many hostels, hotels, apartments and bed and breakfasts available in the city centre, suiting all budgets and – like the entire city centre – within walking distance of the conference venue. Here are a few suggestions (please note: prices may vary with occupancy, and breakfast is not always included in the basic price!):

Seaside Park Hotel Leipzig (4*)
Standard Single Room with Private Bathroom from 90 €/ night

Meininger Hotel Leipzig Hauptbahnhof (3*)
Standard Single Room with Private Bathroom from 60 €/ night
Bed in small mixed dormitory (4 person) from 21 €/ night

IBIS Budget Leipzig City (1*)
Standard Single Room with Private Bathroom from 53 €/ night

B&B Hotel Leipzig City (1*)
Standard Single Room with Private Bathroom from 56 €/ night

5 Elements Hostel
Bed in mixed dormitory (10 person) from 12,50 €/ night;
many other options available (4 / 6 / 8 bed dorms, 4 bed women dorms, 4 bed private dorms, single rooms, private apartments and more)

Preliminary Programme

This draft programme is subject to change.

Day 1: 13th April 2018

9:00 – 09:30 Registration, Coffee & Welcome
09:30 – 11:00 Session 1 – Interfaces and Performance
“GAPPP: Gamified Audiovisual Performance and Performance Practice” is an arts-based research project based at the University of Music and Performing Arts Graz. It was conceived by composer, audiovisual artist and project leader Dr. Marko Ciciliani, who, together with his team – artistic researcher and performer Dr. Barbara Lüneburg and musicologist Andreas Pirchner – investigates the combination of game elements and performer interactions for their artistic potential in contemporary audiovisual artworks. This paper offers a perspective on performers’ agencies in (musical) meaning-making, and in the creative and strategic shaping of the gamified audiovisual works of the artistic research project GAPPP. Lüneburg will introduce her model of ‘performative involvement’, which describes how the agencies afforded to the performer (through the introduction of game elements, software design and control devices for musical or visual interaction) in GAPPP’s works may influence the player’s range of expression and artistic and emotional involvement and meaning-making during a live concert performance. She investigates how agencies and affordances translate firstly into game-related involvement, and can secondly be transformed into ‘performative involvement’ that ideally transfers to the audience. Her model bears in mind that in a performance situation the gamified interactive musical systems concern not only performer, instrument or composer: the spectator, too, is part of the performance ecosystem. Following Gurevich’s discussion of a performer’s ‘skill’, Lüneburg states that ‘meaning’ – like ‘skill’ – “emerges from a performance ecosystem that includes a performer, instrument, and spectator, all as active participants that also exist within a society and draw upon cultural knowledge” (Gurevich 2017). This investigation is based on three case studies of the artistic research project GAPPP.

Accounts of the work of Japanese game designer Tetsuya Mizuguchi usually mention Russian artist and theorist Wassily Kandinsky and his idea of synaesthesia as his main inspiration. But this is just one piece of the puzzle. In order to explore and extend his own overarching concept of what he calls “music interactives”, which includes games as well as his music project Genki Rockets, Mizuguchi has always used the latest state-of-the-art gaming technology as his experimental playground. While pursuing his main goal of finding new kinds of interactive musical experience and forms of musical expression, he has also processed diverse musical cultures and aesthetics in his projects. Whereas, for example, his “Space Channel 5” series drew strongly on music video aesthetics as promoted by MTV and others, “Rez” and its successor “Child of Eden” were mainly influenced by European techno culture as encountered by Mizuguchi himself.
In this presentation the particular case of “Child of Eden”, the latest culmination of Mizuguchi’s design approach, will be carefully examined. It will be shown that the game offers a form of active critical involvement with techno by being closely designed around the underlying “social practices” (Stefani 1987, Strötgen 2014) of techno culture, which are in turn interwoven with aspects of Japanese idol culture. In this way, a coherent music-based gameplay gestalt (Lindley 2002, Fritsch 2014) is created during play that allows the player to experience “techno” not just as a sounding phenomenon, but as an idea, via a body that “expands into technology” (Klein 2001) – a “human interface”.
With the release of Connected Gaming: What Making Video Games Can Teach Us about Learning and Literacy (Kafai, Burke & Steinkuehler, 2016) and the edited collection Exploding the Castle: Rethinking How Video Games & Game Mechanics Can Shape the Future of Education (Young & Slota, 2017), the topic of video games and game technology serving as tools for education has very much been propelled into the pedagogical zeitgeist.
One of the tools I experiment with in my thesis is a virtual instrument created in Minecraft that affords students a “constructivist” learning experience, in that learners may complete set tasks in a playful, constructive and exploratory manner. These tools emphasise embodied cognition and pattern recognition in an attempt to ‘quantify’ musical concepts, and allow learners to bridge their tacit musical knowledge with practical knowledge. In doing so, topics such as intervals and note relationships can be seen in vivid detail and interacted with. This method combines goal-based, ‘gamified’ teaching approaches that allow autonomy over learning outcomes, as opposed to more conventional methods such as reading and traditional lesson situations. In my paper I discuss the ways in which the tools I developed can elucidate chosen aspects of music theory in an interactive setting in which learners can engage with the learning experience at their own pace.
11:00 – 11:30 Tea & Coffee Break
11:30 – 13:00 Session 2 – Virtual Worlds
The steady development of virtual reality has increased academics’ awareness of the issues associated with understanding both its derivation and its originality. One of the primary questions becomes: how do we contextualise VR in a way that pays attention to both its antecedence and its uniqueness? Answering this question is made more complex by a focus on the soundscapes of VR: attention continues to be placed on visual novelty rather than on the sonic or the audio-visual relationship. While links between VR and gaming have dominated the former’s discourse, in this paper I want to place VR soundscapes within the wider filmic audio-visual experiences of the early 1900s, such as Hale’s Tours, which would develop into VR experiences at Universal Studios and other such amusement parks. By returning to the early years of film we can draw parallels between the novelty of audio-visual soundscapes in film and VR, as well as understand how VR soundscapes are being sculpted upon an audience sonic literacy developed over the last one hundred or so years of film experience. Rick Altman and Ian Christie have written extensively about the early soundscapes of film and how audiences negotiated them for the first time. What needs to develop now, however, is a discussion of how these early filmic experiences created a way of understanding audio-visual experiences that today dictates VR soundscapes.
The recent boom in virtual reality is firmly rooted in video game culture and technology through its devices’ connection to gaming platforms (HTC Vive and Valve’s Steam) and consoles (PlayStation VR). Not all of the applications offered on these devices can be called ‘games,’ however, and both HTC/Valve and Oculus prefer the label ‘experiences’ on their websites. Many of these experiences are VR versions or ‘ports’ of products from older media, like Minecraft and Superhot VR, or genres in older media, such as space simulation or racing simulation games. As a consequence, there is a certain amount of remediation (Bolter and Grusin 1999) in terms of audiovisual aspects and protocols for interaction (Gitelman 2006), including soundtrack elements and their affordances (Clarke 2005; Grimshaw 2008; Kamp 2014).
This paper looks at a non-game, Google Earth VR, which in its porting from Windows desktop and Chrome browser application to VR experience gained a soundtrack that features, in addition to environmental and interface sounds, dynamic music (Collins 2008; Käser et al. 2017). GEVR’s score resembles aspects of film and video game music, in particular games with comparable visual perspectives such as Cities: Skylines and SimCity (2013). I will interpret this not as GEVR becoming more game-like by virtue of its score but, taking a cue from Eric Clarke’s ecological approach to listening, as the experience’s soundtrack acquiring certain affordances remediated from those older audiovisual media.
Set on a post-apocalyptic, alien-ravaged Earth, NieR: Automata (2017, Platinum Games) is a video game that encompasses diverse environments and gameplay styles, from 3D open worlds to 2D side-scrolling platforming to shoot-’em-up and bullet-hell styles. The player traverses changing environmental visual spaces whilst shifting between these different styles of combat, accompanied by a soundtrack that adapts to the in-game environments. The significance of this adaptive soundscape in NieR is its intense focus upon the location and status of the player-character, which determines the various, individually altering aspects of the soundscape. James Cook speaks of the medieval soundscape in his case study of The Witcher 3: Wild Hunt, identifying that the game addresses ‘not only the musical score but also wider aspects of soundscape such as vocal accent, foley, and manipulation of the aural field.’ This paper will discuss these wider aspects of NieR’s audio world, following the introduction of various languages within the music that incorporate an android/human culture within the soundscape of the game. It will identify the significance of the composer’s decision to incorporate quiet, medium, and dynamic variations of each area theme, building the soundscape alongside the player’s progress within the game’s narrative. It is the identification of that progression which triggers the introduction, and intensity, of vocality and song within the soundscape.
13:00 – 14:00 Lunch
14:00 – 15:30 Session 3 – Music During and After Play
This paper explores music in and around the digital game Dota 2 (2013, Valve). I draw on a range of methods and sources, including the analysis of musical style and reception, an anonymous online survey, and three months of immersion in the game-world and its surrounding social spaces such as Reddit and the broadcast platform Twitch. Firstly, it is shown how the game’s modularly-constructed soundtrack is affective and functionally successful for players. The music facilitates strategic gameplay and group sociality despite its use of often-criticised stylistic conventions of contemporary film scoring, such as loud synthesised bass lines and repetitive ostinati as primary vehicles of thematic development. Secondly, when a large portion (70%) of players choose to instead listen to their own personally-curated playlists while playing Dota 2, I suggest they employ music as a tool for managing their affective state. This brings into question the relatively new cultural assumption that music and mood are naturally connected, and allows for broader reflection on music’s relationship with individual agency and modern technology. Thirdly, considering Dota 2’s broader social world, I show how music for gameplay melds into the soundscapes of spectacular, Olympic Games-style performance events in the professional esports arena, while professional players also broadcast their musical taste to a global audience through Twitch. The paper ultimately aims to highlight Dota 2’s place within existing musical traditions, bring its music into dialogue with contemporary musicology, and set the stage for further scholarly investigation of music in similarly popular competitive games like League of Legends (2009, Riot).
Game preservation has risen up the research agenda in recent years (e.g. Lowood 2009; Guttenbruner et al. 2010; McDonough et al. 2011), with much emphasis centring on emulation and the original experience (Serbicki 2016; Swalwell 2017) and the importance of documentation (Lowood 2013; Newman 2012). However, comparatively little work has been conducted on the theory and practice of game sound archiving (though note the commentary in McDonough et al. 2010 on the limitations of videogame sound emulation). This paper reports on one recently founded project that seeks to tackle this issue. The National Videogame Foundation ‘Game Sound Archive’ (GSA) operates in partnership with the British Library and centres on the creation and curation of archival-quality recordings of the distinctive sounds of digital games and gameplay.
Importantly, the GSA is not a collection of abstracted music files or sound effects and does not focus only on capturing the raw output of hardware systems and sound chips. Rather, the scope of the project extends to actuality recordings of games being played. This decision has two immediate consequences. In the first instance, it means that the recordings account for the totality of game audio emanating from systems and games at play. As such, music and effects intermingle, sometimes complementing one another and sometimes competing for sonic space depending on the design of the audio engine. However, more than this, the interest in documenting the actuality of gameplay brings the sounds of player interactions and the operation of the physical interface within the scope of the project.
Giving examples of some of the recordings, the paper explores the rationale and development of the GSA in the context of extant formal and informal game preservation projects and (game) sound collections such as the HVSC (High Voltage SID Collection) and VGMRips. The paper continues by considering the implications of these various projects’ approaches, the (patchy) state of current game sound emulation and its role in preservation, and use-cases for archival recordings of game sound and actuality recordings such as those in the GSA. The paper concludes with an outline of the curatorial development plan for the GSA and an invitation to collaborate in the recording process.
Travellers in Japan, whether strolling amongst skyscrapers in the large urban centers of the country or passing through train stations in remote rural areas, are sure to encounter sonic environments colored by the sounds of gaming. Handheld devices, the solitary machines ubiquitous in public spaces, and the full-fledged gaming parlors that populate the country have changed the Japanese soundscape drastically.
My paper is an examination of a music genre whose primary points of sonic reference are these aspects of the Japanese sound environment. An onomatopoetic phrase used in the Japanese language to signify the sound of video games, “pico-pico” now also signifies this new musical genre. Pico-pico has reinvented its other major point of reference, the popular underground Shibuya-kei style, for a younger generation of listeners, the first generation that grew up with video games as a pastime and an important source of cultural reference.
A reading of the ways in which the aesthetics of video game sound has affected pico-pico and other recently emergent musical genres will lead into questions of how sound and music can serve as the means through which the virtual travel central to the experiential aspects of gaming is written into the bodies of listeners outside of game space. Making reference to fieldwork I completed in Tokyo, I will discuss how certain forms of game sound have become codified for certain circles of listeners in manners that allow the sounds to afford kinds of imagined travel that have profound effects on how time, space, distance, and place are experienced by listeners.
15:30 – 16:00 Tea & Coffee Break
16:00 – 17:00 Keynote Address: Adele Cutting
Evening – Evening out in Leipzig at a local Kneipe

Day 2, 14th April 2018

9:30 – 10:30 Keynote Address: Kristine Jørgensen
10:30 – 11:00 Tea & Coffee Break
11:00 – 12:30 Session 4 – Information from Music
Payday 2 seems superficially similar to many other first-person shooters and stealth games. The Graphical User Interface (GUI) contains typical shooter indicators for health and ammunition alongside typical stealth-game indicators for suspicious and alerted enemies. However, Payday 2 also omits or limits a number of elements found in GUIs common to these genres, such as player radars, objective markers and ability timers. Instead, these commonplace GUIs are replaced with auditory interfaces throughout the game.
This paper deconstructs two levels from the co-operative first-person stealth-shooter Payday 2 to demonstrate how auditory elements can be used within interactive media to replace elements of user interface that are conventionally visual. It examines music, dialogue and sound to build an understanding of how players must interact with the audio of the game.
To successfully navigate the game world and find ludic success, players must develop an understanding of the game audio in what seems similar to the knowledge described by Bourgonjon as “video game literacy”. This may help to immerse players more completely within the game following principles of Grimshaw and Ward, and allow us to establish a basis for examination of immersive audiovisual environments such as those found in virtual reality.
This paper focuses on how video game audio can be transformed into new auditory information systems (Fritsch 99; Jørgensen 168; Summers 130) in the act of speedrunning. Here, audio is taken to subsume music, sound (effects) and dubbing (Fritsch 96), and speedrunning is conceptualized as the goal of finishing a game as fast as possible under certain rules. Since perceivable visual information tends to be reduced in speedruns, such new auditory information systems allow players to compensate for the loss of visual information (Jørgensen 164). This paper argues that linear or reactive music can become proactive music (Liebe 47) during speedrunning. Existing video game audio can be recontextualized as audio cues for tricks and glitches, which constitutes new auditory information systems that evoke actions from the players. Audio cues can be realized in two different fashions: either they are intended and properly designed by the developer, or they are made up by runners and communities through the aforementioned recontextualization, which cuts through the intended sound design of the game. In this sense, it is plausible to say that speedruns are not just non-narrative interventions, but can also be non-auditory interventions. All these aspects are analyzed with examples from actual speedruns in contrast to the normally played games. Blindfolded speedruns will also be analyzed as extreme cases in which the newly created auditory information systems have to compensate for the complete loss of visual information.
In musical aesthetics it was widely accepted, from German Romanticism until the early 20th century, that music carried extra-musical meaning by virtue of its expressiveness. This notion seemed particularly appealing in cases where several arts met in a synthesis of visual, lyric, dramatic and audible elements, thus forming a Gesamtkunstwerk. E. T. A. Hoffmann, Richard Wagner, Hans Pfitzner and Ferruccio Busoni are representatives of this common view, lasting more than a century, whereby opera and music drama were capable of expression beyond speech. Video games resemble the concept of the Gesamtkunstwerk: they, too, are constituted by a combination of multiple arts. However, what is the role of music in such a composition? Ever since the linguistic turn in philosophy, not only is the idea of music expressing a “higher truth” beyond speech counted as obsolete, but its ability to impart semantic meaning of any kind is also generally doubted. Yet music and sound in video games can be crucial carriers of information: they support the atmosphere, anticipate events or provide feedback to the player. In this way they are, to a greater or lesser extent, an important part of the game interface. How exactly do music and sound perform this function? What is their relationship to the other arts involved? This paper approaches these questions from a music-aesthetic viewpoint, using methods of semantics, semiotics, musicology and music theory.
12:30 – 13:30 Lunch
13:30 – 15:00 Session 5 – Music and Personal Experience
‘Ludomusicality’ refers to the processes and gestures involved in musical play (Mosley 2016), but not strictly in relation to instrumentality. Through such musical play, chiptune – as a genre and (sub)culture – is in continual expansion beyond its origins in technological necessity (Collins 2008; Jenkins, Ford and Green 2013). At the ludomusical hands of its creative participants, chiptune has spread into the fannish ‘margins’ (Jenkins 2013) of existing media and musical texts. Whether as a nostalgic ‘miracle fuel’ (cf. Cheng 2014) or a display of multiple tastes, composers fuse chiptune with – among many other genres – jazz and reggae, and transform television and film soundtracks. If music is integral to ‘producing’ identity (Frith 2002), then how is intertextual chip-ludomusicality – which moves through cultural boundaries towards interconnection – contingent on the fannish identity of the chiptune composer? Through Rosi Braidotti’s framework of nomadic subjectivity (2011), my paper approaches this question by theorizing chiptune fan identity – as a fannish persona – as a nomadic play of subjectivities. I contend that chiptune fan personas are temporary and dynamic amalgams of heterogeneous and posthuman (Braidotti 2013) elements: not an unchanging, fixed or lone ‘unity’ (cf. Stanyek and Piekut 2010), but a temporary and fluid persona reliant on a nomadic interplay of fannish yearning for identification (Sandvoss 2005) and the agency of musical/non-musical, human/non-human ‘actors’ (Blake and Van Elferen 2015; cf. Latour 1996). In addition, I propose not only that chiptune and non-chiptune elements allow the participant to synthesize a stance of fan identity – as subjectivity – through musical ‘encounters’ (Massumi 2002), but also that chiptune composers can in turn influence and tailor these encounters – for specific desires and senses of “self” – through ludomusical gestures.
Websites like AutismGames offer online ‘serious games’ or ‘educational games’ for people (especially children) with Autism Spectrum Disorders (ASD). But all respondents in my survey indicated that they prefer to play casual games. Research on neurotypical persons has shown that casual video games also improve mood and decrease stress (Russoniello, Fish, O’Brien, Pougatchev, & Zirnov, 2011; Russoniello, O’Brien, K., & Parks, 2009a, 2009b; Ryan, Rigby, & Przybylski, 2006). Could this also be the case for people with ASD?
My own experiences as a high-functioning Aspergirl, in researching autism and in teaching the piano to autistic children, have sparked my wish to understand what is so soothing about casual games, especially in the variant that can be described as ‘sound toys’. Combining anecdotal evidence with survey research, embedded in the literature, this article will answer the question: what is so appealing about sound toys, that autistic people like to play (with) them? All respondents were asked to play Ariel’s Symphony, a Disney mini-game in which the player can combine various musical fragments. The little mermaid Ariel is happy with everything the player does, and the game cannot end by itself. That could mean that Ariel’s Symphony is not a game, for a game should have an in-game goal (Suits, 2005). But if it is not a game, then what is it? And why would people with ASD be so happy to play (with) it? In order to answer these questions, this paper will explore the personal goals of “playing” this “game” for people with ASD.
Musical entrainment is widely recognised as occurring in a variety of situations (Merker, 1999), including music listening (Large & Kolen, 1994), ensemble performance (Huron, 2001), armies marching in step (McNeill, 1995), and mother-infant bonding (Feldman, 2007). As a result it appears to be a universal and fundamental human trait (Phillips-Silver, 2009).
Entrainment has been linked to feelings of enjoyment and pleasure (McNeill, 1995), the ability to perceive time (Clayton et al, 2005), and the loss of awareness of surroundings (Woody & McPherson, 2010). The implications of this are that entrainment can be linked to experiences of flow (Csíkszentmihályi, 1992), thought to be a key motivational factor for engagement in video games (Stevens & Raybould, 2014).
There is anecdotal evidence to suggest that musical entrainment occurs during video game play, but to date very little work has been conducted in this area (Phillips-Silver, 2009). However, links can be drawn between the rhythmic interplay of game and player, and the overall play experience (Costello, 2016). The majority of entrainment studies tend to focus on relatively simple movements, such as finger tapping (Repp, 2006), rather than more complex whole-body movements. As a result their findings are directly applicable to video games, where the input movement tends to be a button press or joystick movement.
So, if entrainment does indeed occur within video games, then its successful facilitation should lead to a more enjoyable experience for the player.
This presentation will provide an overview of current research into musical entrainment and the parallels that can be drawn with the field of video game music. It will also discuss a study that aims to investigate the extent to which entrainment occurs within video games and what effect the phenomenon has on playing style, as well as offer some ideas as to what may influence the likelihood of entrainment occurring.
15:00 – 15:30 Tea & Coffee Break
15:30 – 17:30 Session 6 – Aesthetics and Ethics (or, Drinking and Thinking)
Games are an integral aspect of human culture and they are always an aesthetic experience (Mandoki, 2016), understanding aesthetics in relation to the subject’s condition of openness to her context, whether natural or social. Thus, videogames are a more recent iteration of a cultural-aesthetic human activity that, like all human activity, has a normative moment and is expressive of a particular ethics (Dussel, 2014).
To study the relationship between music and ethics, the present work will explore how the musicalization of three common situations (introduction, first overworld presentation and regular combat) in the games Chrono Trigger (1995) and Final Fantasy VI (1994) is a relevant component in the characterization of videogame space (Roth, 2017). I will review how these characterizations are expressive of certain qualities of a particular ethical-mythical nucleus (Ricoeur, 1990), which Enrique Dussel explains as “el complejo orgánico de posturas concretas de un grupo ante la existencia” [the organic complex of a group’s concrete stances towards existence] (1975, ix).
To that effect, I will carry out an aesthetic analysis of these three situations, applying a model proposed by Katya Mandoki (2001), stressing the acoustic register and the tonic and proximity modalities, and framing it within the general perspective that she establishes for the relations between aesthetics and games (Mandoki, 2006) – all of this in the general narrative context of both games.
Finally, I will also comment on potential conflicts that could stem from the musical immersion that Chilean players may experience in videogame worlds created by foreign designers, applying the ALI model (Isabella van Elferen, 2017) and stressing, from a local perspective, the dimensions of musical affect and literacy.
The authoritarian metropolis with its rampant social inequalities plays an important part in video games. From the Midgar slums of Final Fantasy VII (1997) to the elvish ghettoes in the Dragon Age series (2009-2014), visions of poverty act as confirmation of the need for a player-hero to intervene. This paper seeks to contextualise the class structures in Revolution Software’s Beneath a Steel Sky (1994) within the wider depictions of class in videogames (and dystopias more specifically) and their relationship to Jameson’s idea of the ‘political unconscious’, and to analyse how the class-based environments in the game are characterised musically. The 1994 cyberpunk adventure game takes place largely in a Ballardian, literally stratified metropolis; the three levels of the city reflect the social classes of its inhabitants, with the working class occupying the industrial top layer, the middle class the middle level, and the upper classes the ground floor, away from the pollution of the upper levels. Each of these levels has its own individual sound, from the harsh rhythm of the working-class level, which blends seamlessly with the industrial sounds of its factories, to the much more diversified soundscape of the city’s upper-class level, which also boasts a bar with a live band, as well as a jukebox. This paper will analyse these soundscapes from a semiotic perspective (e.g. Tagg, 2012), while also drawing on Henry Jenkins’s ideas surrounding narrative architecture and environmental storytelling (2004), finally explaining how music here is not an expression of class identity but a sonic characterisation of class structure.
This paper aims to define and historicize aleatoric music composition in relation to terms such as chance music, dice music, open form, mobile form, procedural music and indeterminate music, among others, and to discuss possibilities in the computer games field using audio middleware such as FMOD Studio, Wwise and Pure Data to create a more immersive audio experience for the player, avoiding excessive musical repetition while using less musical material and computer memory. During the presentation I will demonstrate the different compositional processes applied in the audio game Breu, a thriller that uses only audio resources, implemented in FMOD Studio, a middleware that allows audio to adapt according to game parameters. Given the particular way music for games is created from non-linear materials in the form of loops, unlike linear media such as film and animation, the continuous growth of the games industry, and the new audio technologies brought by VR devices, it is important to investigate new ways of creating and delivering music for games in order to offer the player a more immersive experience.
This paper develops a framework for associations between sounds, video games, and alcohol. Some recent studies (Kowert and Quandt 2016; Cranwell et al. 2016) examine concerns regarding representations of drugs and alcohol in video games, while others (Montgomery 2006; Schultheis and Mourant, 2001) use Virtual Reality to simulate intoxication. These studies primarily focus on the presence and stereotypical use of alcohol, but offer little attention to related sounds and music, or the increased integration between sound design and game-play. This paper lays historical, cultural, and music-theoretical groundwork for creating an associative soundscape of alcohol in multimedia experiences, particularly video games.
I first define four primary areas of inquiry into sonic representations of alcohol in multimedia: 1) Sound Iconography, which highlights representative sounds of objects and personal behaviors; 2) Sound Environments, or unique sonic locations and settings; 3) Musical Depictions of Drunkenness, such as the use of specific orchestrations and cultural influences; and 4) Simulation of Intoxication, which looks specifically at altered sonic perceptions and experiences. I then demonstrate attributes of these four features through examples from the following video games (and other media): Bioshock; Red Dead Redemption; the Final Fantasy franchise; Warner Brothers’ cartoon High Note; World of Warcraft; a 2017 advertisement series for Busch beer; and others. I conclude the paper by considering a larger context of sound and music studies related to alcohol, drugs, and addiction.
Evening Ludo2018 Presents: Bonus Levels

Day 3, 15th April 2018

9:30 – 11:00 Session 7 – Sonic Engagement
In the age of participatory and convergence paradigms, videogame music has its own networked culture, with cybercommunities that discuss, share and create, allowing an open space of creativity and artistic activity in a constant digital flow. One of these practices is music composition and production for the medium, available on several platforms such as SoundCloud and YouTube, and specifically in the format of modification files (or mods). In my Masters dissertation, I demonstrated the existence of a new model of online artistic production and circulation of musical mods composed and shared on the Nexus Mods platform for the videogames The Elder Scrolls IV: Oblivion and The Elder Scrolls V: Skyrim. These mods consist of new musical material added in the style of the pre-existing soundtracks of both titles; however, the majority of the files in the platform’s audio category relate to sound only. Using titles such as “better sounds” or “immersive sounds”, many modders aim to provide a more immersive experience to other gamers through the application of their mods in the game(s). In this case, ‘immersive’ relates not only to the sound quality of the aural effects, but especially to a plausible construction of reality in which gamers are living, playing and negotiating meaning within their own social context. Intersecting playbour, fandom, aural immersion and audiovisual literacy, these audio modders add new layers to the soundscapes and ambiances of the virtual worlds presented in these two objects, placing immersion as a key aspect of design and playability, and using this material as a way of building their social capital and visibility on online platforms.
Sound and ambience have been widely considered as major factors of player engagement for casino games (Dixon et al., 2013; Marmurek et al., 2010). From 20th century slot machines to present-day video bingo games, both composers and sound designers have provided the medium with stimulating soundscapes as a means to keep players aroused. Arousal, according to Brown (1986), is the major reinforcer of gambling behavior.
This paper aims to outline some of the techniques used to increase arousal, such as anticipation (Inouye, 2013) and rolling sounds (Collins et al., 2011), in casino-themed video games from a composer’s perspective, by analyzing selected cases and discussing the existing bibliography on the subject. Considering the environmental influence on players’ listening and psychological responses (Collins, 2013; Griffiths & Parke, 2005; Marmurek et al., 2007), this research also intends to discuss differences between the soundtracks of gambling computer games and physical betting machines.
Our presentation focuses on sound and music as key, though often neglected, points of interface in VR experiences, and on how the technology of VR gaming might be used to reconstruct historic performances and spaces, situating both audiences and performers in a shared virtual auditorium to connect and share the ephemeral elements of music performance that might otherwise be lost.

In the last few years, Early Music has grown in popularity. With audiences increasingly demanding ‘authenticity’, there has also been a concerted effort to create historically accurate performances, featuring musicians in period dress performing on period instruments, and on occasion performing in physical reconstructions of period venues. While this approach has clear benefits (it offers new experiential perspectives on Early Music and its performance), it also has its limitations: physical spaces are expensive to build and very difficult to modify and investigate systematically, and performing in venues custom-built for these concerts imposes geographical limits on potential audiences.

This is where VR technologies have real potential. Our project explores how they might be used as a platform for investigating historical performance spaces and the music that was performed within them. Using a mixed-methods approach combining 3D modelling, acoustic modelling, ambisonics and immersive interfaces, we are recreating two virtual auditoria, St. Cecilia’s Hall in Edinburgh and the Chapel at Linlithgow Palace, and recreating performances from historical records. In our presentation, we will discuss in detail our approach to modelling, highlighting the key psychophysical cues that encourage and inhibit presence and immersion within the virtual space, the implementation of different aspects of the virtual auditorium, and some of our preliminary findings. We will conclude by discussing emerging lines of enquiry and how these have shaped the next phase of the project.
11:00 – 11:30 Tea & Coffee Break
11:30 – 12:30 Keynote Address: Michael Austin
12:30 – 13:30 Lunch
13:30 – 15:00 Session 8 – Composition and Design
Research has shown that peak emotional responses to music are often associated with expectation (Huron, 2008), or, to borrow Chion’s terminology, music that is highly vectorized: “…sound vectorizes or dramatizes shots, orienting them toward a future, a goal, and creation of a feeling of imminence and expectation” (Chion, 1994, pp. 13-14). Outside of linear cut scenes or predestined sequences, the use of such vectorized music in video games is highly problematic given the aesthetic preference for smoothness (Medina-Gray, 2016), since the temporal indeterminacy that is a consequence of interactivity is likely to lead to musically jarring transitions (Munday, 2007). This paper attempts to examine the stylistic characteristics of active game music, revealing harmonic stasis, metrical ambiguity and ‘shortness’ (motivic rather than phrase-based structures) as products of indeterminacy, and noting parallels in the work of film composers facing similar indeterminacy through the rise of non-linear editing and compressed production schedules. Finally, the paper will discuss how the ‘infinite riser’ (the Shepard-Risset glissando) heard in the recent film work of Hans Zimmer, and in games such as Wolfenstein: The New Order (2014) and Doom (2016), might point the way towards the compositional holy grail – music that is both vectorized and temporally ambiguous.
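For readers unfamiliar with the Shepard-Risset glissando mentioned above, the effect can be sketched quite simply: octave-spaced partials climb continuously while a fixed loudness envelope over log-frequency fades each partial in at the bottom of the stack and out at the top, so the rise never resolves. The following plain-Python sketch is my own illustration (the function name and parameters are assumptions, not drawn from the talk):

```python
import math

def shepard_risset(duration=5.0, sr=8000, n_partials=6, base=55.0, cycle=4.0):
    """Render a Shepard-Risset glissando as a list of samples in [-1, 1].
    Partials rise one octave every `cycle` seconds and wrap around, while a
    raised-cosine envelope over log-frequency keeps the wrap inaudible."""
    n = int(duration * sr)
    out = [0.0] * n
    span = n_partials              # octaves covered by the partial stack
    phases = [0.0] * n_partials    # running phase per partial, for continuity
    for i in range(n):
        t = i / sr
        for k in range(n_partials):
            pos = (k + t / cycle) % span        # position in octaves above base
            freq = base * 2.0 ** pos
            # silent at the edges of the stack, loudest in the middle
            amp = 0.5 * (1.0 - math.cos(2.0 * math.pi * pos / span))
            phases[k] += 2.0 * math.pi * freq / sr
            out[i] += amp * math.sin(phases[k])
    peak = max(abs(s) for s in out) or 1.0
    return [s / peak for s in out]              # normalize to [-1, 1]
```

Because the envelope silences each partial exactly where it wraps down an octave, the ear hears only a perpetual ascent, which is why the figure is so useful for temporally ambiguous tension-building.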
The manner in which soundscapes evolve and change during gameplay can have many implications for player experience. Playdead’s 2016 release INSIDE features a number of gameplay sections in which rhythmic audio cues loop continuously both during gameplay and player death. During these sections the game will wait to respawn the player at an opportune moment during the loop. This paper uses one such section as a case study, building on the ideas put forth in Bash (2014) regarding spectromorphology in games to examine the effects of transitioning from diegetic sound effects to abstract musical cues on player immersion, mastery and narrative cohesion. The “musical suture” (Kamp 2016) created by continuously looping audio during death and respawn is also examined with regard to immersing the player within an evolving soundscape.
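The respawn behaviour described above, waiting for an opportune moment in the loop, amounts to quantizing the respawn to musically marked cue points. A hypothetical sketch of the scheduling logic (this is my own illustration, not Playdead’s implementation; the function and parameter names are invented):

```python
def next_respawn_time(death_time, loop_length, cue_points):
    """Return the next moment at which to respawn the player so that the
    respawn lands on one of the loop's musically 'opportune' cue points.

    death_time  -- seconds elapsed on the audio loop's timeline at death
    loop_length -- duration of the audio loop in seconds
    cue_points  -- sorted offsets (seconds into the loop) where respawning
                   is musically acceptable
    """
    pos = death_time % loop_length                 # where in the loop we died
    upcoming = [c for c in cue_points if c > pos]  # cues left in this pass
    if upcoming:
        wait = min(upcoming) - pos
    else:                                          # wrap to next loop iteration
        wait = loop_length - pos + min(cue_points)
    return death_time + wait
```

The game simply holds the death state for `wait` seconds while the loop keeps playing, which is what makes the audio feel continuous across death and respawn.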
Scoring interactive experiences, such as video games and VR, is remarkably different from
creating a film soundtrack. In interactive content, users can choose their own path through the story, whereas in linear content there is only a single way of progressing. This linear/non-linear dichotomy has a major impact on music: for music in interactive content to be effective, it has to adapt dynamically to the interactions and decisions of the users.
Video game composers have developed a number of techniques, such as vertical layering and horizontal resequencing, which allow the music to respond to the non-linear nature of a game. This is generally referred to as adaptive music. Although at times highly effective, these techniques are limited both in (musical) scope and in the extent to which they can match the musical content to in-experience events on a granular basis.
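Vertical layering, as mentioned above, keeps all stems playing in sync and fades individual stem gains in or out as the game state changes, so transitions are crossfades rather than hard cuts. A minimal sketch of the bookkeeping involved (class and method names are my own; middleware such as FMOD Studio or Wwise handles this internally):

```python
class VerticalLayers:
    """Track per-stem gains for a set of synchronized music stems and fade
    them towards state-dependent targets over `fade_time` seconds."""

    def __init__(self, stems, fade_time=2.0):
        self.gains = {name: 0.0 for name in stems}    # current gain per stem
        self.targets = {name: 0.0 for name in stems}  # where each gain is heading
        self.fade_time = fade_time

    def set_state(self, state_to_stems, state):
        """Enable the stems mapped to the new game state; fade out the rest."""
        active = state_to_stems[state]
        for name in self.targets:
            self.targets[name] = 1.0 if name in active else 0.0

    def update(self, dt):
        """Advance every fade by dt seconds. All stems keep playing in sync;
        only their gains move, which is what makes the transition seamless."""
        step = dt / self.fade_time
        for name, gain in self.gains.items():
            target = self.targets[name]
            if gain < target:
                self.gains[name] = min(target, gain + step)
            elif gain > target:
                self.gains[name] = max(target, gain - step)
```

A game loop would call `set_state` on events (e.g. entering combat) and `update` every frame, multiplying each stem’s audio by its current gain.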
In this paper, we argue that in order to create video game music that scores the events of interactive content as granularly as linear music does for film, a collaboration between composers and Artificial Intelligence (AI) is necessary. To support this thesis, we introduce the concept of Deep Adaptive Music (DAM), wherein music is generated in real time directly within the experience. The resulting collaboration between an AI and a composer augments the possibilities of traditional adaptive music by enabling infinite variation and complex musical adaptation. We present some examples of DAM and also discuss preliminary results of a psychological experiment, which indicate that DAM can significantly increase the engagement of VR players.
15:00 – 15:30 Tea & Coffee Break
15:30 – 17:00 Session 9 – Soundscapes
Spatial characteristics of music have been recognised and exploited in the West since the
early Christian antiphony and became clearly defined as parametric elements during the 20th century. American composers began experimenting with the use of space in composition during the early 20th century; these experiments were eventually encouraged by a national drive to create participatory, democratic forms of art, in opposition to fascist, authoritarian modes of communication. Fred Turner coined the term ‘Democratic Surround’ to describe these new media models: multi-image, multi-sound-source environments created by artists associated with the 1960s counterculture and designed to model and produce a more democratic society. Contemporary mass media is directly connected to the media practices that emerged during the 1950s and ‘60s, but it is also the expression of something different: a ‘Commercial Surround’. We are surrounded by media and music, but not in the way that John Cage or La Monte Young envisioned. Video game music surrounds the listener too, but the impetus to design these enveloping audiovisual environments does not come from the confrontation with fascism; it comes from an overarching media and consumer culture. This paper explores the spatial deployment of music in video games, its connections with the ‘Democratic Surround’, and how it can be analysed in the context of contemporary algorithmic culture. Examples from contemporary games and from my own spatial music experiments in Unreal Engine will be used as illustration.
The use of virtual-reality (VR) technologies is rising in computer games, but also in research fields such as psychology and cognitive science. One reason is the wide range of new possibilities this technology opens up. Recently, VR has been tested for new ways of learning, for playing with spatial perception, and for new approaches in psychotherapy. Common ground for all these applications is their visual possibilities. Not yet explored is the wide range of options within audio-visual projects that explicitly focus on audio for VR applications. From daily life, we know that our spatial orientation is influenced by what we hear. From film studies, we know that audio can help to increase immersion in audio-visual works.
In our project, we ask whether it is possible to come close enough to reality with VR technology and 360° audio to develop a serious game for ear and perception training in VR. The content was filmed and recorded at several indoor and outdoor locations in Berlin and Bayreuth with a 360° GoPro rig and a 360° microphone prototype developed by TU Berlin. Each level is bound to a new location in which the player has to solve perception tasks, such as matching the direction of the audio to the visuals, or dismissing filtered audio in order to find the ‘real’ sound. The game design is therefore based on an escape-room concept: it is meant to be motivating and entertaining, but at the same time to train participants in differentiated auditory perception.
Based on this game, which is still in development, we also want to raise some questions that go beyond the scope of the game itself: What can and/or should we achieve with spatial audio in VR? How do spatial audio and VR visuals interact? How can we influence the perception of space through audio and visuals in VR? How can we use VR for training and knowledge transfer?
The Dark Souls and Bloodborne series are notorious for the difficulty and challenge produced by the unique mechanics and concepts behind their peculiar narratives. Through the application of a dichotomy of sound and silence, the player is presented with a constant need to carefully inspect the environment through sound and, at key moments, through music. For that reason, the ‘Soulsborne’ soundscape and music are important pieces of the puzzle of the online fan community phenomenon that has built up around the series in recent years.
The crossover between Victorian, Gothic and Medieval archetypes is conveyed through the music that sets the mood of several locations and characters, mainly in boss fights, laying the foundation for the sound-versus-silence argument in the games. Furthermore, the eerie aesthetics are carried over into the strategic sounds of footsteps, weapons and general creature vocalisations heard by the player.
As such, my proposal is based on two related hypotheses. The first is that the employment of the sound and silence dichotomy works as a way of absorbing the player into the character’s standpoint, allied to Daniel Vella’s “ludic sublime” (2015), the application of Lovecraft’s Cosmicism, and how it affects the gamer’s agency within that digital world. The second is that Soulsborne music, while based on specific tropes within the horror, gothic and epic musical imageries, deconstructs the usual functions of music in videogames, defining itself as a crucial element of the overall gameplay and as the establishing factor of key segments in the narrative arc.



Ludo 2018 Call for Papers

We are excited to announce that Ludo2018, the Seventh European Conference on Video Game Music and Sound, will take place April 13th – 15th at HMT Leipzig in Germany.

Please share our Call for Papers poster online and around your institutions.

The organizers of Ludo2018 are accepting proposals for research presentations. This year, we are particularly interested in papers that support the conference theme of ‘Soundscapes and Interfaces’. We also welcome all proposals on sound and music in games.

Proposed papers might be presented as part of planned sessions on:

  • Auditory Interfaces
  • Crossmedia Soundscapes
  • Soundscapes in AR/VR
  • Arcade Soundscapes
  • Interfacing with Other Cultures in Video Game Music
  • Soundscapes and Class in Games
  • Sound in Casual Games

Presentations should last twenty minutes, to be followed by questions. The conference language is English. Please submit your paper proposal (c. 250 words) plus provisional bibliography by email to the organizers by February 14th 2018.

Practitioners and composers may submit proposals to present work. We also welcome session proposals from organizers representing two to four individuals; the organizer should submit an introduction to the theme and c.200 word proposals for each paper.

The conference will feature the following keynote speakers:

Michael Austin (Howard University), editor of Music Video Games (Bloomsbury, 2016)
Adele Cutting, BAFTA-winning audio professional whose credits include the Harry Potter and The Room franchises
Kristine Jørgensen (University of Bergen), author of Gameworld Interfaces (MIT, 2013)

Hosted by Christoph Hust (HMT Leipzig, Department of Musicology) and Martin Roth (Leipzig University, Department for Japanese Studies)
Organized by Melanie Fritsch, Michiel Kamp, Tim Summers & Mark Sweeney.

Ludo2017 Conference Review by Ivan Mouraviev

Ludo17 Conference Report: Highlights and Themes

Ivan Mouraviev [1] reviews Ludo2017 for us, offering his thoughts on the experience.

Ivan is a student at the University of Auckland, New Zealand, where he specializes in game music. Ludo2017 was his first Ludo conference, where he presented a very well-received paper ‘Textual Play: Music as Performance in the Ludomusicological Discourse’.

Independent scholar Mark Benis, writing in his report for the 2017 North American Conference on Video Game Music, recently remarked that “video games have a way of bringing people together.” Indeed they do. This is how I felt at the sixth annual Ludomusicology conference held over 20-22 April at Bath Spa University. The event was hosted by Professor James Newman and organised by Ludomusicology Research Group members Michiel Kamp, Tim Summers, Mark Sweeney, and Melanie Fritsch. [2] As a student and newcomer to the world of academic conferences, I did not entirely know what to expect at Ludo17. However, delegates and organisers alike were superbly welcoming. Being at the conference was a fun and intellectually stimulating experience from start to finish. With 40+ attendees across three days, the 30 diverse papers presented ranged from musicological and music-theoretical investigations of music in video games to studies of game music history, composition, technology, and performance. Indeed, diversity of approach and subject matter was a hallmark of the event. In what follows I report on the conference via a series of personal highlights, summarising what I found to be among the most significant research presentations; I also tease out emerging trends, questions, and possible points of departure for future research. The report is organised loosely around four themes rather than chronologically by presentation. Please forgive my inevitable omissions.


  1. Constraints and affordances: game music technology and composition

Blake Troise opened the conference by presenting his research on the technological and creative affordances of 1-bit music: a sub-category of chiptune based on a single square wave. [3] As the name suggests, the synthesis process of 1-bit music imposes binary limitations: a square wave can only be produced at either a high or a low amplitude (that is, an on or off signal). However, through a live demonstration, Troise showed how sophisticated polyphony, timbral variation, and even supra-binary amplitudes can be achieved with 1-bit chiptune, for example by exploiting the limits of perception (since discrete transients less than 100 milliseconds apart are perceived by the human brain as a single sound), and by using techniques like pin-pulse modulation (which can help avoid the mutual cancellation of two overlapping signals).
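The binary constraint Troise describes can be made concrete with a toy example: every output sample is literally 0 or 1, yet crude polyphony is possible by OR-combining several square-wave voices. This is my own deliberately simple sketch of the constraint, not one of Troise’s actual techniques such as pin-pulse modulation:

```python
def one_bit_mix(freqs, duration=0.5, sr=8000):
    """Render several square-wave voices into a strictly 1-bit signal.
    Each voice is a phase accumulator whose high/low state is OR-combined:
    the speaker is either fully on (1) or fully off (0), yet the ear can
    still resolve the component pitches."""
    n = int(duration * sr)
    phases = [0.0] * len(freqs)
    out = []
    for _ in range(n):
        bit = 0
        for v, f in enumerate(freqs):
            phases[v] = (phases[v] + f / sr) % 1.0
            if phases[v] < 0.5:   # first half of the cycle: voice is 'on'
                bit = 1          # OR-mix: any active voice drives the output high
        out.append(bit)
    return out
```

The cost of OR-mixing is exactly the mutual-cancellation problem mentioned above: when two voices overlap, their combined duty cycle distorts, which is why more refined methods interleave very short pulses instead.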

Composer Ricardo Climent offered a different flavour of research on a similar theme, also on the first day. Climent presented his fascinating use of the freely available game-design software Unreal Engine to unfold musical narratives ludically. [4] Specifically, this took the form of an interactive work titled s.laag, which serves as a game-level replica of the World’s Fair held in Brussels in 1958; primarily the player-character takes on the role of a bass clarinet to navigate through various mini-games and around architectural icons. Kevin Burke’s presentation was also retrospective but took a different, more analytical approach, examining how composer Hitoshi Sakimoto—of Final Fantasy and Valkyria Chronicles fame—utilised a custom Terpsichorean sound driver in the 1990s to produce musical results that significantly surpassed late-twentieth-century expectations for 16-bit sound synthesis. Come day three, Richard Stevens and Nikos Stavropoulos dealt with video game music from a more explicitly design- and implementation-focussed perspective, presenting some valuable techniques for manipulating and performing pre-composed sound in games (also using, like Climent, Unreal Engine). [5]

Ultimately it was Kenneth ‘Kenny’ McAlpine, though, the first of Ludo17’s three keynote speakers, who most compellingly synthesised the many diverse strands making up this broad theme of game music technology, composition, and affordances/constraints. [6] McAlpine showcased some of the research behind his forthcoming Bits and Pieces: A History of Chiptunes (Oxford University Press). He discussed the various affordances of technologies like the ZX Spectrum, Commodore 64, and more, presenting a broad range of historical and conceptual themes in a captivatingly personal way. Especially memorable was McAlpine’s emphasis on the idea that the near-total freedom of musical production available to us today, not least through digital audio workstations such as Apple’s Logic Pro, can be “crippling”. The goal of contemporary artistic practice—both within and beyond the realm of video game music—may not so much be a matter of “freedom of choice” as “freedom from choice”.


  2. Rule-bound musical play

What defines the “game” in “video game”? This was a question addressed by James Saunders, who highlighted Jesper Juul’s work on the topic (2003) as well as Huizinga’s important theorisation of play (1955), to pinpoint some insightful correspondences between rules in games and indeterminate music. [7] Saunders also noted how the structuring, constraining, and sometimes not immediately perceptible effects of video game rules can (re)present models of social interaction, and facilitate players’ agency by offering both choices and goals for game and music play. Two-way engagement between Twitch streamers and their often expansive audiences was raised as an example of such interaction in the discussion following Saunders’ presentation. Indeed, web audiences can significantly influence – and at times even determine – the structure and content of a streamer’s gameplay. Many streamers also publicly perform their musical taste by playing popular music as a kind of trans-diegetic underscore that can be structured by audience interaction and be experienced as both external (non-diegetic) and integral (diegetic) to the streamer’s ludic performance. The 2015 article “From Mixtapes to Multiplayers” by Michael Austin (who also presented a fascinating paper on the participatory musical culture of “Automatic Mario Music Videos” on Day 2) certainly comes to mind, for Austin’s examination of how different kinds of social video gaming can serve as gamified “transmutation[s] of the mixtape” and displays of curatorial control. [8] As the professional players of multiplayer online battle-arena (MOBA) games like Dota 2 continue to attract large streaming audiences, and video games become increasingly formidable icons in popular culture more generally, the realm of game-like musical interactions in virtual spaces seems ripe for further scholarly investigation.
How, for example, are streamer-audience musical interactions shaped by the (in)formal rules that moderators enforce on platforms such as Twitch, perhaps contributing in turn to a broader fostering of online community?

On the broader theme of music and rule-bounded play it is hard not to mention the work of Roger Moseley. [9] On Day 3 Moseley presented a superb keynote that resonated with the approach and several themes of his recently published and open-access monograph Keys to Play (University of California Press, 2016). [10] The keynote was titled “Recursive Representations of Musical Recreation”, placing “recursion” – signalling basic repetition and looping, the successive executions that occur in computation, and more specifically a kind of historical ludomusical praxis – in the critical spotlight. One particular argument was for “recreation” as a potentially more critically rewarding notion than “reproduction” when dealing with the recursive nature of ludomusicality, since “reproduction” has been historically more closely associated with a decidedly “serious” “phonograph ideology” rather than intrinsically creative and performative action (an association no doubt spurred by, or at least reflected in, Adorno’s and Walter Benjamin’s famous twentieth-century critiques of commercial culture). The first known use of the term “ludomusicology” can be traced to digital-game researcher and music theorist Guillaume Laroche in 2007; nevertheless, Moseley’s contributions to our understanding of the implications of the term “ludomusicology” – broadly construed as the study of music and play – have been seminal. [11] This is evident not only in Keys to Play, but also in the 2013 chapter “Playing Games with Music”, which elaborates play theory by Huizinga and Roger Caillois in the context of Guitar Hero after a much-needed historicization of work and play. [12] Indeed, central to Moseley’s work has been the goal of putting “play on display” in historical terms within a “media archaeology” framework, illuminating the possibility that “notions and terminology associated with digital games are capable of enlightening historical ludomusical praxis, just as the latter informs the former.” [13]


  3. Video game music as performance and/or culture

Several papers dealt with video game music and broader notions of culture, performance, or both. Presenting on the first day, Donal Fullam discussed how video game music can be understood as an expression of “algorithmic culture”. [14] For Fullam this cultural expression is a relatively recent incarnation of a more long-standing impulse, one that “treats music as an algorithmically determined system” and can be traced to the twentieth-century avant-garde and even further, to the foundations of functional harmony (which in turn represents a more basic tendency to systematise musical sound as a “cultural articulation”). A similar theoretical view of music as performing cultural and aesthetic functions was explored on Day 2 by Edward Spencer. His study investigated the bass-music signification and broader sociopolitical implications of Major League Gaming Montage Parodies, or MLGMPs. These represent a specific music video genre that employs audiovisual memes and “canonic” dubstep tracks by the likes of Skrillex to parody montages of skillful first-person shooter gameplay. [15] As Spencer convincingly showed through a critique of recent postmodern theory around notions of meaninglessness in contemporary culture, MLGMPs should not be automatically dismissed simply because they may, at first glance, seem to represent “ultimate” instances of “media convergence and ludic semiotic excess”.

On Day 2, Melanie Fritsch presented and applied a theoretical platform for the analysis of music in video games. She principally argued that music in video games may be, but so far largely has not been, studied through the lens of interdisciplinary performance studies, which generally favours an ontology of music that is necessarily behavioural and social. [16] Fritsch did note, however, that scholars such as Tim Summers, Kiri Miller, and Karen Collins (and, I would add, William Cheng) have started to investigate music in video games beyond the basic paradigm of musicological close reading; both Miller and Cheng have favoured ethnographic paradigms, while Summers is broadly interdisciplinary and Collins has tended towards embodied cognition and performance analysis. [17] Fritsch also introduced the German terms Aufführung and Leistung for understanding performance in a novel and more multi-dimensional way, the former referring to presentation, aesthetics, and artistry and the latter encapsulating notions of skilful display, effort, and efficiency. Fritsch’s transnational perspective resonates with Moseley’s valuable historicisation of work and play in that both serve as a reminder that fundamental terms in music scholarship like “performance” and “play” are historically and socially contingent. Indeed, what one group of gamers or scholars regards as “play”, whether ludically or musically or both, may take on dramatically different meanings across different times, spaces, or sociocultural settings. Put differently, the somewhat taken-for-granted idea that both games and music are inherently playful may be more thoroughly examined in a more empirically grounded, historically and socially (and perhaps even politically) specific way.

This last question may apply equally to video game music—that which is produced, performed, and listened to beyond conventional gameplay, such as in the concert hall. Video game music in this sense was explored in a concentrated and lively manner across four back-to-back presentations in Session 7, titled “In Concert”. In the first half of the session, Joana Freitas and Elizabeth Hunt drew attention to how notable organisations like Video Games Live have sought to “gamify” the concert hall in order to achieve “collaborative immersion and experience”. [18] James S. Tate and Ben Hopgood then dealt more specifically with music associated with Japanese Role-Playing Games (JRPGs) and Final Fantasy respectively; Tate presented convincing evidence for, and hypotheses to explain, the widespread popularity of JRPG soundtracks in concert performance, while Hopgood’s study was more analytical in discussing the easy-to-forget but nevertheless prominent “classical music identifiers” that video game music often carries as part of its dense semiotic baggage. [19] Though it was only mentioned in passing, an exciting and potentially highly rewarding direction for future research in this area is the ongoing global concert tour of thatgamecompany’s broadly well-received PS3/4 title Journey; the tour features Chicago’s Fifth House ensemble performing the game’s soundtrack in real time in response to the actions of four-to-six players on stage. [20]


  1. Learning music through games and vice versa: video game pedagogy

Talks on the role of video games—and principles of play more generally—in education made up only a small portion of Ludo17; however, the quality of the research presented and the potential for growth in this theme certainly warrant their own sub-heading. On Day 3 Meghan Naxer brought to light how video game principles and practices can be fruitfully manifested in the classroom. [21] A personal anecdote in this regard was especially revealing: after Naxer responded to student email queries with indirect pointers to literature and other resources, her students interpreted the interaction as a game-like “side quest” and subsequently became all the more excited to engage in independent study. Jan Torge Claussen next presented his ongoing research with 18 students learning to play guitar through Rocksmith, the decidedly more education-oriented competitor of Guitar Hero and Rock Band. [22] Claussen’s students have been video recorded and have completed journals detailing their experiences with the game; early findings tentatively suggest that Rocksmith may be a useful means of learning to play guitar through Rocksmith itself, rather than of gaining guitar proficiency in general.


Concluding remarks

In closing, I would like to draw attention to three talks that were especially intellectually stimulating, but do not fall neatly under any of the thematic categories I use above. Firstly, Stephen Tatlow and George Marshall expertly examined complex questions of music and diegesis, through voice communication in the science-fiction MMORPG EVE Online and popular music in the racing title Forza respectively. [23] Implicit in Tatlow’s discussion was the possibility for in-game diegetic voices to function musically, or rather for music to function as a player’s in-game diegetic voice—as music arguably already does in Journey, where the only means of direct communication involves performing short musical pulses in the absence of conventional text- and voice-based chat. Secondly, James Tate discussed the problematic potential of developing a video game music studies canon, an especially important issue that we need not inherit from popular music studies and the Western ‘art’ music realm. [24] As Tate’s research showed, though, nostalgia is already something of a potent structuring force in steering which titles are most prominent in the game studies discourse. How, going forward, will we negotiate our personal tastes with academic integrity and maintain a field driven by egalitarian values that emphasise the embracing of diversity? Thirdly, Michiel Kamp’s “Ludo-musical Kuleshov?” drew much-needed attention to the importance of understanding the psychology of video game music perception and affect, including how strongly our interpretations of music can be guided by on-screen visuals and vice versa. [25] Kamp also presented the exciting potential of his ludo-musical (practice-led) research paradigm, whereby a relatively simple game design allowed flexible and iterative reformulation of research questions as uncertainties were clarified or new questions arose. In turn this brought to light how empirically grounded musicological study tends to exist at the broader intersection of the ‘hard’ sciences and the humanities, drawing on the principles and techniques of both.

Finally, it is worth highlighting the concert curated by Professor James Saunders (with thanks to Alex Glyde-Bates) held at the end of Day 2. Some of the performed works were playful and aesthetically engaging, as was the case with Troise’s chiptune piece “FAMIFOOD” and Clement’s live play-through of s.laag. More overtly unconventional and thought-provoking compositions by Louis d’Heudieres and Ben Jameson explored, by ludic means, the ontological boundaries of “authentic” live performance—through a rule-based approach and Guitar Hero respectively. [26] Jameson’s piece in particular stood out as a novel compositional and performative elaboration of the seminal Guitar Hero research carried out by Kiri Miller. I believe our broad and fast-growing field of video game music studies should continue to feature, and therefore encourage, more work in this vein of artistic practice as research, which includes the studies by Clement and Kamp mentioned above; it is an emerging paradigm that has long been accepted in the visual and dramatic arts as a valid means of producing knowledge but remains relatively under-theorised and under-developed in music. [27]

In summary, Ludo17 was diverse, fun, and intellectually stimulating; it featured student and early-career researchers alongside established scholars; and it did what arguably most ‘good’ scholarship should do: open up, rather than close off, new and exciting lines of inquiry. To the curious reader I highly recommend visiting the #Ludo2017 Twitter feed as well as the booklet of abstracts for a more comprehensive look into the diversity of research presented beyond what I have been able to discuss here. I very much look forward to next year’s conference and wish to thank the organisers for a fantastic event.



  1. Ivan Mouraviev, BMus/BSc in musicology and biological sciences; currently undertaking a BMus (hons) in musicology at the University of Auckland, New Zealand.
  2. For the organisers’ biographies please see
  3. Blake Troise, University of Southampton.
  4. laag was composed especially for Dutch bass clarinettist Marij Van Gorkom, as part of the project started in 2015. For more information see
  5. Richard Stevens, Leeds Beckett University; Nikos Stavropoulos, Leeds Beckett University. Stevens has co-authored with David Raybould the monograph Game Audio Implementation: A Practical Guide Using the Unreal Engine (Waltham, MA: Focal Press, 2015).
  6. Kenneth McAlpine, University of Abertay, Dundee.
  7. James Saunders, Professor of Music, Bath Spa University.
  8. Austin, “From mixtapes to multiplayers: sharing musical taste through video games,” The Soundtrack 8/1–2 (2015), 77–88.
  9. Roger Moseley, Assistant Professor in Musicology, Cornell University.
  10. Keys to Play is freely accessible at
  11. See Tasneem Karbani, “Summer research project was music to student’s ears,” folio, University of Alberta, published 7 September 2007, accessed 23 May 2017,
  12. In Nicholas Cook and Richard Pettengill (eds), Taking it to the Bridge: Music as Performance (Ann Arbor: University of Michigan Press, 2013), 279–318.
  13. Keys to Play, 7.
  14. Donal Fullam, PhD Candidate, University College Dublin.
  15. Edward Spencer, DPhil Music student, University of Oxford.
  16. Melanie Fritsch M.A., PhD Candidate, University of Bayreuth.
  17. See, for example: Miller, Playing Along (Oxford: Oxford University Press, 2012); chapter four of Cheng, Sound Play (Oxford: Oxford University Press, 2014); and Tim Summers, “Communication for Play,” in Understanding Video Game Music (Cambridge: Cambridge University Press, 2016), 116–142.
  18. Joana Freitas, MMus, Universidade NOVA de Lisboa; Elizabeth Hunt, University of Liverpool.
  19. James S. Tate, PhD Candidate in Musicology, Durham University; Ben Hopgood, Musicology, Goldsmiths, University of London.
  20. A recent review with Fifth House performers by CBC news is particularly illustrative of the unique challenges and interactive components of Journey: Live. See
  21. Meghan Naxer, Assistant Professor of Music Theory at Kent State University.
  22. Jan Torge Claussen, PhD Candidate, University of Hildesheim.
  23. Stephen Tatlow, MMus, Royal Holloway, University of London; George Marshall, Musicology, University of Hull.
  24. James Tate, BMus University of Surrey.
  25. Michiel Kamp, Junior Assistant Professor in Musicology, University of Utrecht.
  26. Louis d’Heudieres, Bath Spa University; Ben Jameson, composition PhD Candidate, University of Southampton.
  27. For an up-to-date account of artistic practice as research in music in both theoretical and practical terms, see: Mine Dogantan-Dack (ed.), “Introduction,” in Artistic Practice as Research in Music: Theory, Criticism, Practice (Farnham and Burlington: Ashgate, 2015).


Ludomusicology Conference Alumni Contribute to New Collection

A new book of essays has been published, featuring a number of contributions on game music.

The Routledge Companion to Screen Music and Sound

Some of these chapters have been written by scholars who have joined us for the Ludo conference in previous years.

The book is The Routledge Companion to Screen Music and Sound, edited by Miguel Mera, Ronald Sadoff and Ben Winters and published by Routledge. The essays include:

  • ‘Musical Dreams and Nightmares: An Analysis of Flower’ by Elizabeth Medina-Gray (Ludo2013),
  • ‘Music, Genre, and Nationality in the Postmillennial Fantasy Role-Playing Game’ by William Gibbons (Ludo2013 keynote),
  • ‘Drive, Speed, and Narrative in the Soundscapes of Racing Games’ by Karen Collins (Ludo2015 keynote) and Ruth Dockwray, 
  • ‘Simulation: Squaring the Immersion, Realism, and Gameplay Circle’ by Stephen Baysted (Ludo2014 host and conference regular),
  • ‘Dimensions of Game Music History’ by Tim Summers (Ludo regular),
  • ‘Roundtable: Current Perspectives on Music, Sound, and Narrative in Screen Media’, featuring Anahid Kassabian (Ludo2012 keynote and Ludo2013 host) and Roger Moseley (Ludo2017 keynote).

There are also essays by Kevin Donnelly (Ludo2014 keynote and Ludo2016 host) and other essays that include game sound:

  • ‘Emphatic and Ecological Sound in Gameworld Interfaces’ by Kristine Jørgensen (eminent game sound scholar),
  • ‘Idolizing the Synchronized Score: Studying Indiana Jones Hypertexts’ by Ben Winters (noted film music scholar).

The table of contents, listing all 46 chapters, is available on the publisher’s website here.

Congratulations to Miguel, Ron and Ben on their achievement, and for producing a fascinating volume!

Spread the word and tell any interested libraries or other parties.
