Ludo 2024 – Programme and Schedule

For more details and registration link, see the main conference page.

Day 1: Thursday, July 11th

9:00–9:30 “Loading”
Welcome and Registration
9:30–11:00 Session 1 – New Horizons of Game Audio
Rocket League, developed by Psyonix and first launched in 2015, has been known in the gaming scene both for its imaginative combination of racing cars and football and for its thoroughly curated soundtrack. The game originally included music by the (now former) Audio Director Mike Ault, who combined his original work with that of Hollywood Principle, a band he shared with other members of the company, to give the title a dynamic musical foundation. These early compositions helped define the sonic identity of Rocket League as an EDM-sounding game, an association further perpetuated by the inclusion of featured artists such as TheFatRat or Drunk Girl in later soundtracks. The incorporation of record label Monstercat as a partner in 2017, in response to the title’s growing popularity, allowed the game to expand in this very direction, progressing into a platform for the discovery of new artists.

This paper examines the intersection between Rocket League and Monstercat as a way to exemplify the growing interdependence of the gaming and music industries. I highlight how the success of the game was key in securing the presence of the record label within the music market, and how the music curation by the label has, in return, contributed to expanding the game’s popularity among wider audiences. For this purpose, I follow the classic value chain of the music industry and how it intersects with that of the gaming industry, in line with Ivănescu’s analysis (2021), while contrasting digital-ethnographic findings with the testimony of participants from both brands.
The purpose of this paper is to describe the technological process of creating a simple test application for mobile devices. The application is part of a longer-term process of developing a digital audio game designed for individuals with visual impairments; the author of this article is the lead sound designer and one of the sound implementers on the team. The Unity 3D engine is used in conjunction with the FMOD middleware. The attenuation capabilities of FMOD were used in the previous audio game Via Echo (scheduled for release in the first half of 2024), and some of its mechanics will be incorporated into the current project. The main goal of this app, however, is to find the most effective way to guide visually impaired players through open 3D space. This article focuses on the player’s ability to accurately identify the direction of specific sound objects, primarily distinguishing between front and back, as well as the game’s ability to communicate the borders of a given space. The aim is to determine the most effective limitations on the number of sounds and local ambient atmospheres that can be used while still allowing players to navigate the space easily.

The application primarily examines the combination of FMOD and the Resonance Audio plugin. It focuses on the direction parameter and on attenuation based on distance, directivity, and continuous modulation of the low-pass filter, volume, and reverberation. The aim is to test all parameters with voice and standard foley to create maximum immersion while minimizing the use of artificial sounds. The app will be tested with both sighted and visually impaired players, and the results will be used to develop the audio game.
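As a purely illustrative sketch of the kind of parameter mapping described above (this is not the team’s FMOD/Resonance Audio implementation; the constants, names, and curves below are invented assumptions), distance-based attenuation, front/back low-pass damping, and distance-driven reverb for a single sound object might be modelled as follows:

```python
import math

# Illustrative ranges only; FMOD exposes comparable per-event controls.
MIN_DIST, MAX_DIST = 1.0, 30.0         # attenuation start/end (metres)
LPF_OPEN, LPF_CLOSED = 20000.0, 800.0  # low-pass cutoff range (Hz)

def spatial_params(listener_pos, listener_fwd, source_pos):
    """Return (volume, lowpass_hz, reverb_mix) for one sound object."""
    dx = [s - l for s, l in zip(source_pos, listener_pos)]
    dist = math.sqrt(sum(c * c for c in dx)) or 1e-6
    # Linear distance attenuation between MIN_DIST and MAX_DIST.
    vol = max(0.0, min(1.0, (MAX_DIST - dist) / (MAX_DIST - MIN_DIST)))
    # Cosine of the angle between the listener's forward vector and
    # the source direction: 1 in front, -1 directly behind.
    cos_a = sum(f * c / dist for f, c in zip(listener_fwd, dx))
    behind = max(0.0, -cos_a)
    # Close the low-pass filter for distant and rear sources, mimicking
    # head shadowing so front and back can be told apart by ear.
    farness = 1.0 - vol
    damp = min(1.0, 0.6 * behind + 0.7 * farness)
    lowpass = LPF_OPEN + (LPF_CLOSED - LPF_OPEN) * damp
    # More reverb with distance helps communicate room boundaries.
    reverb = 0.2 + 0.6 * farness
    return vol, lowpass, reverb

print(spatial_params((0, 0, 0), (0, 0, 1), (0, 0, -10)))  # source behind
```

In FMOD itself such values would typically be driven by event parameters (distance and direction) rather than computed by hand; the sketch only makes the mapping explicit.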

In my proposed presentation, I will discuss a prototype approach to generating melodies in open-world games, guided by practice-based research inspired by genetic programming techniques. The system implements agent-based modelling to simulate the interactions of NPCs within a game, using these interactions to guide the process of genetic crossover between agents. Each agent, which represents an NPC or NPC faction, carries a composer-written melody based on a leitmotif. The system analyses each melody, identifying its motivic features, and encodes it as a series of beat-by-beat melodic units. The relationships between these units, and between the motif and the units, are relatively encoded, allowing part of the melody to be modified whilst maintaining an overall contour and cohesive motivic construction. When agents interact, parts of their melodies are genetically crossed over and, after a number of successful interactions, a new agent is spawned based on a combination of the parents’ melodies.

By modelling agential interaction in this way, the melodies in the world naturally propagate through the space, representing trade networks and NPC relationships through the transmission of melodic material. In future versions of the system, there is potential for a player who navigates the space to be able to infer which NPCs have interacted with each other, and to interact with and manipulate these relationships through the transmission of their own melody. This novel approach explores the potential for music to represent dynamic, emergent behaviour in games such as immersive simulators, whilst prioritising the composer’s overall creative agency.
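As a purely illustrative sketch of the crossover idea (the encoding and operators here are invented stand-ins under stated assumptions, not the presenter’s actual system), two relatively encoded melodies might be crossed as follows:

```python
import random

# Each melody is a list of beat-by-beat units encoded *relatively*:
# semitone offsets from the previous note, so a spliced segment keeps
# the overall contour rather than absolute pitches.
parent_a = [0, 2, 2, -1, 1, -4, 0, 2]    # hypothetical leitmotif A
parent_b = [0, -2, 3, 1, -1, -1, 2, -2]  # hypothetical leitmotif B

def crossover(a, b, rng=random):
    """One-point genetic crossover: the child takes a prefix of one
    parent and the suffix of the other, as when two agents interact."""
    point = rng.randrange(1, min(len(a), len(b)))
    return a[:point] + b[point:]

def realise(units, start_pitch=60):
    """Turn relative units back into absolute MIDI pitches."""
    pitches, current = [], start_pitch
    for step in units:
        current += step
        pitches.append(current)
    return pitches

child = crossover(parent_a, parent_b)
print(child, realise(child))
```

A spawned agent would then carry such a child melody, so that repeated interactions gradually propagate melodic material through the game world.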
Chair: Michiel Kamp
11:00–11:30 Break
11:30–13:00 Session 2 – “Not that way”: Spaces and Negotiation
The player’s voice can conjure and alter worlds in ludic spaces; it “has an impact on the virtual world and the figures acting within it” (Stingel-Voight, 2020, p.39), possessing a force normally ascribed to ritual, or magic (Ghosh, 2011). Voice is noted by Agrippa to hold “so great a power, that oftentimes they change not only the hearers, but also the other bodies, and things that have no life” (Tyson, 1993, p.211). This investigation presents escape rooms as ludic spaces in which players possess the power to conjure and manipulate worlds solely with their voice.

While scholars have discussed the role of the player’s voice in virtual ludic spaces (Cheng, 2013; Tatlow, 2020), there remains a gap in research addressing voice in the non-virtual, where the player’s voice affords them control over play. Our investigation identifies four parties shaping the soundscape of escape rooms: The Game, Game Masters, Actors, and Players, as shown on the included model of the communicative chain of an escape room.
The outcome of this ongoing research is a framework delineating the contributions of each party to the soundscape of escape rooms. Additionally, our framework identifies the player’s performative speech (Austin, 1955; Derrida, 1988) as the guiding force. These speech acts, defined by Austin and Derrida as ‘performatives’, are of a form distinct from other types of speech: they possess an ability to “accomplish something through speech itself” and to “produce or transform a situation” (Derrida, 1988, p.13). They allow participants to exert control over the space and manipulate the ludic experience, communicating upward to the game itself, as demonstrated visually through the dotted arrows on the model. This research contributes to the understanding of escape rooms as playful, participatory environments and aims to offer insight into the power of performative speech and the voice within ludic spaces.
In an increasingly digital age, the fusion of technology and education presents novel opportunities for engaging students in musical learning. This paper explores the integration of video game music, particularly from the Nintendo universe, into jazz education as a means of fostering improvisational skills and enriching pedagogical spaces. Drawing upon the rich improvisational elements found in Nintendo’s ludic soundscapes, the study analyzes three selected solos from three Nintendo games: Super Mario Odyssey (2017), Dr. Mario (1990), and Mario Kart 8 Deluxe (2017). The central thesis posits that video game music offers a unique and valuable resource for teaching jazz theory and improvisation within educational spaces. Through detailed case studies and analysis, the paper demonstrates the pedagogical affordances of incorporating video game music into jazz education, ultimately advocating for the establishment of a database of transcriptions to facilitate further exploration of jazz improvisation within spaces of learning through the lens of video game music. By building on the work of scholars, including Andrew Lesser, Brent Ferguson, T.J. Laws-Nicola, Stefano Marino, and Alan Elkins, this research contributes to the evolving discourse on jazz education by offering innovative approaches to teaching improvisation and analysis while celebrating the cultural significance of video game music in contemporary music studies.
As early as 2008, Collins identified the sonic ‘logjam’ sometimes associated with audio mixing in video games. Combined with the absence of the spatial cues that enable selective attention in the real world (Hofman, 1998), the unpredictable, non-linear experience of gameplay frequently leads to sounds overlapping and competing in the spatial, frequency and time domains. The many technical and pragmatic solutions to these sometimes-crowded spaces have typically taken a top-down approach, based on a necessary supposition about what the player needs to hear at any given moment (Bridgett, 2021).
 
Inspired by recent developments in video game audio technologies, this paper revisits an idea first posited in Grimshaw’s seminal 2007 paper: that we could consider video game audio as an acoustic ecology (Grimshaw, 2007). Bernie Krause’s (2012) concepts of the Biophony (sound produced by organisms), Geophony (sound produced by natural non-biological causes) and Anthropophony (sound of human origin) are used, developed, and adapted as a series of lenses through which to examine the sonic domains of video game audio. Looking in turn at how nature adapts in the domains of space (territorial and acoustic adaptation), frequency (the niche hypothesis), and time (temporal distinction), the paper examines the extent to which game audio does, and does not, replicate acoustic ecologies in the natural world. Finally, in light of current technological trends, we suggest how future solutions may involve a more bottom-up, agent-based approach in which sound finally begins to enter the once-silent world of the NPC.
Chair: Jennifer Smith
13:00–15:00 Lunch & siesta
15:00–16:30 Session 3 – Performing Traditions
Piano arrangements and performances of video game music have never received more attention. In 2023, solo piano concerts devoted entirely to Zelda and Final Fantasy music were featured all over the globe. The Super Mario Bros. Movie (2023) contains a scene with a piano duet between Bowser and Kamek of music and sound effects from the 1985 NES game Super Mario Bros. Online communities celebrating performances and pedagogy of this repertoire are more accessible and expansive than ever. Companies like Materia Collective have sprung up specializing in publishing game music scores, particularly arrangements for the piano.

But where and when did the idea of “video game piano” originate? Answering this question takes a journey through first printings of video game sheet music, the earliest video game soundtracks, the pioneering video game concert performances, nascent appearances of the piano in video games, and performance videos from early game audio composers. The presentation will focus particularly on video game piano publications, performances, soundtracks, and in-game appearances of the piano from mid-1985 to 1993, but will also briefly contrast these with current examples. The history of printed video game music publications will be explored in depth, as will the ways in which early publication conventions continue to influence “video game piano” publications today. The idea of an origin for all things video game piano will be appreciated in its multifaceted complexity.
This paper presents an overview of what I refer to as Live Accompaniment concerts – performances in which an orchestra or smaller ensemble plays accompaniment live to a screening of moving-image media – of video games, with particular focus on case studies of Journey Live, Untitled Goose Game Live and Undertale Live. The paper is informed by interviews with Dan Pinchbeck (Dear Esther Live), Dan Visconti (Journey Live, Undertale Live), and Dan Golding (Untitled Goose Game Live). I discuss the impact of recontextualising video games in the new space of the live concert environment, given the dynamic music systems of the source media. Considerations of interactivity and spontaneity in Live Accompaniment video games will lead me to propose the terms audience-performer and player-performer. These considerations demonstrate the difficulties of translating dynamic music to the live environment, and I will discuss how this is achieved through different methods within each case study. Furthermore, classical music sensibilities will be questioned as video games cross into the new space of the concert hall. Through this paper, I will demonstrate the transformations undergone by video game music and the video game experience in moving from their usual space of reception (the home, for the individual) to a new collective space with different sensibilities. To conclude, these ideas culminate in the demonstration that Live Accompaniment video games are a new and unique hybrid art form.
Historically, play and music have intersected in myriad ways, from evocations of play (Robert Schumann’s piano pieces “Scenes from Childhood”) to gamified forms of composition (the famed Viennese Musikalisches Würfelspiel, or musical dice games). In the past several decades, a new category of pieces involving play and games through musical performance has emerged, allowing for musicians to partake in the play process during the course of a piece’s performance. In my doctoral dissertation, I coined the term “ludic piece” to describe and highlight this new cross-section of play and music, from the perspective of a classical musician studying ludology.

Ludic pieces contain play structures and game mechanics so that a play event and a musical work unfold simultaneously. This is a multi-level experience from either side of the stage: the musicians interact spontaneously within the rules of play (sometimes to win a game) while aiming to deliver a compelling musical performance, and the audience spectates a game while receiving musical information. During my presentation, I would like to introduce, define, and explore the musical, theatrical, social, technological, and game possibilities of ludic pieces, including the conception of the term and the two broad categories into which they fall, after Caillois’s categorization of play (ludus and paidia). I would also like to introduce some pieces (scores and/or performance videos) in the genre representing both categories, such as the ludic pieces of Iannis Xenakis, John Zorn’s Cobra, Remy Siu’s Foxconn Frequency (no.2), and Aidan Gold’s I’m Actually Just Making Stuff Up.
Chair: Raymond Sookram
16:30–17:00 Break
17:00–18:00 Session 4 – Worldbuilding and Identity
From film and television to video games and beyond, more and more creative projects engage ethnomusicologists to advise their worldbuilding efforts and soundtracks based in non-Western settings and traditions. Scholarship has shown such projects are rife with political issues and disagreements (e.g., Bryant 2012; Cheng 2012; Stock 2021). This paper will discuss how one such project deals with these tensions between appropriateness and appropriation, through the lens of video game modding (editing existing video games as individuals or volunteer teams, usually released for free online). In particular, it presents the case of the community-made mod Esroniet: Domain of Lost Unity for the 2011 game The Elder Scrolls V: Skyrim, which seeks to implement gamelan into the soundtrack of the dark Nordic fantasy game, as more befitting of our Southeast Asian-inspired setting. As one of the project’s composers, I will discuss several issues the project has encountered thus far and the conclusions and compromises we have reached, such as our concerns with potential cultural appropriation, accessibility, the potential of gamelan communicating misinformation to the player during gameplay, and how this compares with scholarly understandings of representation and appropriation (e.g. Robinson 2020). I also discuss how our concerns sometimes mirror those in high-profile projects, like James Cameron’s 2009 film Avatar and the 2024 video game Diets and Deities, despite our project’s volunteer, leaderless nature. Through these cases, I will discuss how ethnomusicologists can approach such situations when they are called upon to advise, be it on a volunteer project or a multi-million-dollar blockbuster.
In Hades (Supergiant Games, 2020), the player-character, Zagreus, has the chance to encounter an important side character, Eurydice, on his journey to escape from the Underworld. Of note is Eurydice’s portrayal in this video game: she appears distinctly nonhuman while also being racially coded as Black, a significant departure from how Western European art has portrayed her. In the first and subsequent meetings with Eurydice, the player-character will often hear her singing the song “Good Riddance,” which is revealed to be something she composed sometime in the aftermath of Orpheus’s attempt to save her, reflecting her decision to move on from her mortal past and its failures toward something more for herself.

In this paper, I perform a close reading of the music and lyrics of “Good Riddance” to consider Eurydice’s worldmaking in the afterlife. I do so while attending to her racialization and the largely absent conversation around her Blackness, despite its importance to understanding her song. I consider Eurydice’s autonomy after her death primarily through the lens of Black worldmaking and aliveness as theorized by Jayna Brown (2021), Saidiya Hartman (2019), and Kevin Quashie (2021), all of whom have discussed Black people—particularly Black women—imagining an otherwise and elsewhere surpassing our current spacetime. Ultimately, I argue that “Good Riddance” symbolizes and celebrates Black worldmaking that is untethered from human expectations, a realized dream demanding something more than what has been given to Black people.
    Chair: Michael Austin
18:00 Break
18:30 Evening Concert
I worked as a composer of game music in the 1980s, as part of a broader interest in electronic and computer music in the age of 8-bit home computers. Some of my pieces can still be found in archives of ‘classic’ games for the Commodore 64. I also wrote two books on the use of the Commodore 64 as a musical instrument and started the first course on this subject in an Italian music school. At the Ludomusicology conference I would like to present this work, framing it within the context of my other activities during that period: as a composer and member of a rock band, as a performer of electronic music and author of computer-assisted audio-visual projects, and also as a musicologist (my early articles on genre theory and my participation in the establishment of popular music studies belong to the same years).

The topics covered in my presentation would be: 1) a brief, general overview of the sound capabilities offered by 8-bit computers in the early 1980s (mainly the Belgian-built DAI, the Commodore 64, the Apple II+, and Acorn’s BBC Micro); 2) an aesthetic and technical coverage of my works with the DAI and the Commodore 64; 3) a brief presentation of the resources used for game music composing, and of the requests made by game programmers; 4) a comment on the integration of home computing into my other musical work based on more traditional electronic instruments (synthesizers, sequencers, etc.). Music examples would be included.
    While from seemingly disparate disciplines, designing a game and composing a musical work for improvisers have much in common. In both circumstances, a framework is provided for the achievement of goals. The designer/composer will communicate to everyone involved what that framework is in some manner, whether that be verbally, graphically, in writing, or perhaps even by demonstration. Meanwhile, all participants internalize these rules and realize them in time, yielding a result that is born out of the moment. All minds are voluntarily present and actively in play.
I hypothesize that the exploratory nature of gameplay can be seen as a framework for improvisation-based musical composition, and I will demonstrate the result of the experiment during this talk: a duet where one person performs live on a flute (the presenter’s primary instrument) and the other plays a video game. The video game is designed around an original framework that considers musical improvisation as a core guiding principle. The dynamic interactions between the flutist and changes in the game environment, between the game player and the game environment, and between the flutist and game player are critical elements of this project.
The technology used in this version of the project comprises the Unity game engine and Cycling ’74 Max, with bidirectional communication handled via OSC messaging. A volunteer from the audience can act as the duet partner for the live instrumentalist.
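As a language-agnostic illustration of such bidirectional OSC messaging (a sketch using the python-osc library rather than Unity’s C# side, with invented addresses and port numbers), the round trip between game and Max might look like this:

```python
# Hypothetical addresses and ports; in Max the counterparts would be
# [udpsend] and [udpreceive] objects pointed at the same ports.
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

MAX_HOST, MAX_PORT, GAME_PORT = "127.0.0.1", 7400, 7401

client = SimpleUDPClient(MAX_HOST, MAX_PORT)  # game -> Max

def on_flute_pitch(address, pitch_hz, amplitude):
    """Max -> game: analysed flute pitch steers the game environment."""
    print(f"{address}: {pitch_hz:.1f} Hz at {amplitude:.2f}")
    # Echo a game-state change back to Max so the patch can respond.
    client.send_message("/game/player/zone", int(pitch_hz // 100))

dispatcher = Dispatcher()
dispatcher.map("/flute/pitch", on_flute_pitch)

server = BlockingOSCUDPServer(("127.0.0.1", GAME_PORT), dispatcher)
server.serve_forever()  # blocks; a real game loop would poll instead
```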
In a masterclass with John Corigliano, I asked him, “What is the most difficult instrument to compose for?” He answered, “The guitar. The guitar hands down is the most difficult due to the techniques and tuning” (Corigliano 2019). This is likely because the guitar is not taught to composers as much as bowed string instruments in orchestration classes (Harrison 2010; Noble and Cowan 2023). This is especially the case for flamenco guitar, which involves demanding techniques and virtuosity unique to each performer or school of performance (Kalkan and Sazak 2022). The influence of flamenco music has been a mainstay in popular music from rock to anime, and it has been pivotal in my musical development as a guitarist and composer, in addition to influencing video game composers (Summers 2011; López Gómez 2021; Laws-Nicola and Ferguson 2022). In my recent album recording, Guitar for Everyone’s Souls, I merge flamenco concepts with existing video game music for an album of arrangements. In this presentation, I will provide a guide to arranging video game music classics for flamenco guitar, including common tuning schemes, chord structures, and techniques associated with the style. I will also exhibit how to put these concepts together with melodic treatment, using examples from my album and transcription work. Finally, I will make this guide available to help further arrangements and compositions for the instrument in the 21st century.

    Day 2: Friday, July 12th

9:30–11:00 Session 5 – Playing with Representation
China has become the largest online video game market in the world: in 2023, the number of online game users reached 550 million (CNNIC, 2023). Of all types of Chinese games, 71.4% were based on Chinese martial arts chivalry, historical and mythological stories, and Chinese classical literature (iResearch, 2015). I refer to this game genre as the Chinese Ancient-Style Fantasy Game, characterized by imagining a fantasy world from a modern perspective: one based on elements of the past, but which may blend historical periods and geographic areas, or present a fictional world inspired by ancient Chinese elements.
     
In this paper, I will define the genre of Chinese Ancient-Style Fantasy Games, including the common elements and features characteristic of the genre and how Chinese histories, classical literature and mythical stories are treated in the games. I will then investigate these questions: What are the features of music in this type of game? How does music interpret both ‘constructive authenticity’ and ‘objective authenticity’ (Lind, 2022)? How does it effectively bring players into the setting of an ancient Chinese world?
     
I address these questions through studies of medievalist fantasy game music (Cook, 2020; Collyer, 2023; Cook, K. M., 2019) in tandem with musical analysis, player comments and composer interviews. I will use games from different periods as case studies for discussion, such as Fantasy Westward Journey (2003), Honor of Kings (2016), and Genshin Impact (2020), to show how Chinese game music finds a balance between the global market and local culture in the context of globalization.
After several chapters set in hi-tech worlds, Final Fantasy XVI (2023) has brought the players of the well-known JRPG franchise back to a classic (dark) fantasy setting, explicitly inspired by the European Middle Ages. Just as the developers were encouraged to take inspiration from Game of Thrones, the main composer Masayoshi Soken was asked to adapt his usual style to the Hollywood conventions of music for fantasy cinema. But Final Fantasy XVI cannot be defined as a plain “pseudo-European” fantasy story, as it features a gradual shift towards narrative topics more typical of JRPGs and shōnen manga and anime – and I argue this passage from West to East is subtly reflected in the music as well. In this paper, I offer a thematic and topical analysis of some relevant compositions from the soundtrack of Final Fantasy XVI, to show how Soken and his fellow composers and arrangers have injected “foreign” (mainly electronic) musical elements into a generally symphonic context, thus evoking an idea of “otherness” that points in two directions: the (quite literally) alien spaces and characters that gradually become central to the story – which are musically accompanied by unexpected genre synecdoches – and the related “alien” narrative topics that are typical of JRPGs and anime and end up largely replacing the classic fantasy setting in the second half of the game – thus allowing into the music elements that betray the authenticity of the “Western(ised)” soundtrack, in favour of eclectic choices more typical of other Japanese media.
Perhaps it goes without saying that queer spaces are an important part of the LGBTQIA+ community. As a group that finds itself on the margins of society, there is not only a need for public places in which queer people feel free to be themselves; queer spaces also offer community members opportunities to meet others, share experiences, and fully express themselves without fear of discrimination.
     
Although there are not many examples, gay bars, clubs, and other queer spaces are sometimes featured in video games. Even the earliest queer video games used music and sound to represent queer spaces: Caper in the Castro (Macintosh 1989), for example, included sound effects and short samples of popular dance music played from a jukebox to represent some of the sounds heard in gay bars. Although there is still a clear tendency to rely heavily on queer tropes and stereotypically “gay” styles and genres, music plays an important role in queer placemaking in later games too, as in the scenes at the Hercules gay club in Grand Theft Auto: The Ballad of Gay Tony (Rockstar Games, 2009), which exclusively plays disco music from the 1970s. Other games, such as 2064: Read Only Memories (MidBoss, LLC, 2015), use original music to create a more true-to-life sonic experience of queer club space. In this paper, I investigate the role that music plays in queer placemaking in video games and explore the various ways that spaces for the LGBTQIA+ community are represented sonically.
    Chair: Beth Hunt
11:00–11:30 Break
11:30–12:30 Keynote: Stephanie Lind, associate professor at the Dan School of Drama and Music, Queen’s University
12:30–15:00 Lunch & siesta
15:00–16:00 Session 6 – Comfort and Unease
A Little to the Left, by the Canadian duo of developers Max Inferno (Annie Macmillan and Lukas Steinman), is a puzzle-solving game in which players have to order household items in pleasing (but not always obvious) ways while a naughty cat tries to cause disorder. Both the game and its DLC Cupboards and Drawers (2023) have built a solid fandom that appreciates their aesthetic qualities and subdued charm, in which the music by fellow Canadian composer Justin Karas plays a central role. Drawing on written interviews with Justin Karas and Max Inferno, who have provided invaluable insights into their creative processes, this paper will analyze particularly interesting moments in the different levels of the game to discover how the music seeks to construct a safe and relaxing mental space for puzzle solving. At the same time, I will show through musical analyses how Karas’ music manages to create interest and variety through rhythm, timbre, and different compositional strategies. I will also pay attention to the combination of the music with different types of (more or less realistic) sound effects, mainly resulting from the interaction of players with everyday objects, which convey a sense of domestic space.
    Little scholarly analysis has been done on how family-friendly video games—especially those with simplistic plots—portray complex emotions. I demonstrate how Super Mario Bros. music contextualizes its portrayal of fear within its jovial identity, presenting fear as elements of unease within its light-hearted musical formulae.

    First, I build on prior work to establish some characteristics of the broad Mario style including easily singable melodies frequently using chromatic lower neighbors (8-bit Music Theory 2017), mostly diatonic harmonies with some chromatic embellishing chords (Lerner 2014; 8-bit Music Theory 2018), extensive use of dance topics (Lerner 2014; Reale 2021), and juxtaposition of different styles or moods (Grasso 2020; Schartmann 2015; Lavengood and Williams 2023).

    Next, following Schartmann (2015) and Hatten (1994) and content by 8-bit Music Theory (2016), I establish elements of “spooky” Mario music including emphasis on the note a tritone away from tonic, temporary blurring of the meter, playing with expectations of resolution, chromatic chords that temporarily destabilize tonic, sonorities containing multiple dissonant intervals, and use of “spooky” timbres.

    Then, building upon research by Cheng (2013), Roberts (2014), and van Elferen (2016), I compare Mario music’s approach to fear to characteristics of horror game music. These characteristics include the potential absence of melodic and harmonic components, dissonant and/or atonal sounds, industrial or other noises, use of instruments as frightening diegetic sounds, and use of horror film tropes.

    The examples in this paper portray fear as unease within the broader Mario cartoonish sound. When Mario’s style veers “spooky” and occasionally uses tropes from horror game soundtracks, Mario’s fearful music remains more subtly unsettling than “anxiety”-inducing or “potentially threatening” (Roberts 2014, 149; van Elferen 2016, 41).
    Chair: Millicent Gunn
16:00–16:30 Break
16:30–17:30 Session 7 – Soul(s) Music
While the dichotomy between exploration and combat is present in most video game genres, and is often emphasised by the role of music, certain titles take a different approach to the structural role of their sonic language. The Soulsborne franchise, for example, is known for its difficulty, cryptic narrative, and repetitive gameplay, and for using music almost exclusively in certain moments of the game, namely boss encounters and cutscenes. Although it shares many characteristics with its predecessors, 2022’s Game of the Year, Elden Ring, took a musical approach closer to that of a standard RPG, offering an open-world experience with non-diegetic music present in almost every situation rather than exclusively during boss encounters. Nevertheless, the musical language, the eerie atmosphere and the inhuman sound valleys are elements common to both instalments and are fundamental to the distinctive features of these games.

Taking these aspects into consideration, I aim to examine Elden Ring and Dark Souls by placing their aural differences in dialogue with their symbolic synergies, analysing the role of sound in the spaces of the game worlds while attending to the uniqueness of each narrative moment alongside its immersive qualities. With stylistically and functionally similar soundtracks, but distinct world-building, both games use carefully designed soundscapes to convey fantastic environments, whether in a vast green land or a dark dungeon. In a remediation of medieval and gothic horror tropes, both games create their own dark fantasy universe through the ergodicity of the soundtrack and the sonic thresholds of each virtual space, informing the player’s agency and affective engagement during their journey through the narrative.
    Fantasy inverts modernity’s linear narrative temporality, invoking the imaginative precedents of medievalism to dream of a different reality, albeit in decline. FromSoftware’s Soulsborne games transform this already vexed politics of time further still: abandoning the player beyond decline’s cataclysm—after history—to pick up the pieces. Amidst its signature ‘environmental’ storytelling, in a world cursed by endless (difficult) repetition, players recreate an uncanny reality in Demon’s Souls. With this in mind, there is something doubly strange about its otherwise slavishly ‘authentic’ 2020 remake.

    The paper, then, proposes to unpack the uncanniness of this ‘authenticity’ focussing on the soundtrack’s ‘recomposition’: imagining both the new and original as ‘(in)authentic’ reproductions of a third, lost, soundscape. That is, as musical-material emanations and (re)re-creations of an absent and enigmatic (ludic) past awaiting discovery. Like its layered fantasy-medieval visuals, the soundtrack is both rigorously ‘authentic’ to its source material (often reproducing melody and harmony verbatim) and (to purists’ dismay) radically different. Indeed, the remake embraces a provocative excessive maximalism of ‘epic’ orchestration and dynamic intensity, which only, in turn, reimbues the 2009 score with something newly anachronistic and creepy. This dynamic, I argue, has its precedent in discourses around historical performance authenticity.

This play of excess and restraint—presence and absence—invokes, then, Mark Fisher’s dialectical conception of horror: the Weird and the Eerie. Brought into dialogue along these lines, Demon’s Souls’ now-double (musical) incarnation dramatises a pertinent and wide-ranging politics of authenticity and, in turn, actualises new and profound experiences for players and listeners.
    Chair: James Heazlewood-Dale
17:30–18:00 Break
18:00–19:00 Publishing Workshop

      Day 3: Saturday, July 13th

9:30–11:00 Session 8 – Ways of Listening
Twitch is a virtual performance venue and a space for relational and playful listening. It positions sound and play as events and activities that gather physically isolated listeners. In this paper I study four ethnographic vignettes that highlight varied genres of Twitch streaming. Observing these vignettes, I identify key elements of the Twitch space: the Streamer, the Chat, the Listening, and the Venue of Twitch itself. Drawing on Matthew Rahaim’s ethnography of vocal relationality, I map the relational circuit between Streamer and Chat: the Streamer initiates and the Chat reacts; the Streamer internalizes the reaction, then adjusts the performance in answer to Chat’s feedback. Each circuit of sound is idiosyncratic to every stream, since every Chat gathers differently and the Streamer administers what audial and non-audial “sounds” Chat can make. Considering Rajni Shah’s performance study Experiments in Listening, I interpret Twitch Listening as compassionate “besideness” akin to theater. Furthermore, Karen Collins’ theory of interactive spectatorship and Melanie Fritsch’s position on games and music as playful performance practice explain Chat’s tendency to engage in declamatory and participatory listening. Finally, I use Jacques Attali and Shah to understand Twitch the Venue: both a corporate entity that oppresses and censors performance and an online invitation to listen to new voices. Analyzing Twitch through these key elements demonstrates how Twitch allows Small’s “musicking” to take place even in physical isolation. Please note that this paper contains discussions of isolation and illness from the ongoing COVID-19 pandemic, as well as brief commentary on surveillance and online harassment.
It could be argued that old arcades were places where unwritten rules, socially established around the games’ own rules, emerged, giving rise to particular and noisy soundscapes. In this respect, one question is to what extent the acoustic characteristics of those spaces affected the design and mixing of game audio. A few years ago, the celebrated composer Yoko Shimomura confessed to Karen Collins her frustration at the difficulty of the Street Fighter II (Capcom, 1991) soundboard in producing sounds at an audible volume for players in spaces where sound perception is constantly challenged. Our proposal for this paper is a phenomenological study of how the musical material of three of the main titles of the Street Fighter saga is configured and how the player perceives it: Street Fighter II (Capcom, 1991), which incorporated Capcom’s CPS1 board; Super Street Fighter II Turbo (Capcom, 1994), executed on a CPS2 board with the QSound 3D audio system; and, finally, Street Fighter III: Third Strike (Capcom, 1999), which ran on the less widespread CPS3 board. To achieve this, we have access to a private game center where we will recreate the typical soundscape of these kinds of spaces, including sounds such as loud conversations, shouts of frustration, hard button presses, and the cacophonous sound produced by several machines operating simultaneously. We will take acoustic intensity measurements and audio recordings from the player’s position while the games are running on the original boards in Japanese candy-type cabinets, to determine how this multiplicity of factors affects the user’s musical experience.
Videogame music involves players in audiovisual playgrounds, establishing virtual spaces for play. This paper will outline how music can act incongruently with the movement of the player/avatar, breaking down this ludomusical illusion.

Graphically mapping musical contours will reveal how moments of disjunct audiovisual incongruity can alter the affective experience of individual players. This phenomenon, a concept I term ‘gestural rupture’, actively operates against the desired feeling of immersive play. Two forms of gestural rupture will be identified: one occurring through a lack of player skill in the face of a ludic focus on momentum, the other through a disjunct interpretation of perceived authentic experience arising when discordant game mechanics and musics are utilised. Through an analytical theory of musical gesture, influenced by musicology (Hatten 2004; 2018), dance pedagogy (Albright, 2013) and ludomusicology (Lind, 2022), we can precisely examine videogame music to reveal how players can come to feel a disconnect from the sensation of playful abandon and the virtual world they inhabit.

       By analysing the ludomusical content of Sonic the Hedgehog (1991) and Sonic CD (1993), games that focus on player skill and momentum, this paper reveals how jarring musical gestures and a sudden lack of momentum can impact player engagement, resulting in the potential shattering of the illusion of play and fantasy of speed. In a world where audiovisual media forms are vying for our prolonged engagement within their virtual worlds, this paper’s analysis of how that immersion can be broken is contemporarily apt. 
      Chair: Richard Stevens
11:00–11:30 Break
11:30–13:00 Session 9 – “When You Say Nothing at All”: Voices Heard and Unheard
      I am currently undertaking a PhD investigating the role of performative play in presenting game sound to digital audiences and communities, with a focus on imagined voices in voiceless video games (specifically Undertale). The proposed paper is based on one of my thesis chapters, covering text ‘completion’ and interactivity in the YouTube video series Undertale: The Cinematic Dub.
       
      Undertale: The Cinematic Dub is a series of YouTube videos that reimagines Undertale with voice acting and more ‘cinematic’ qualities (e.g. a non-looped musical score). I will investigate text ‘completion’ by focusing on the inclusion of voice acting in Undertale. Imagined voices are very important in Undertale fan communities, with fans being quite protective of their chosen voices for the characters. In a project like Undertale: The Cinematic Dub, can fan preferences for specific character voices be satisfied, and if they can, does this lead to a more ‘complete’ text for fans of the game?
       
Secondly, I will investigate the interactive qualities of the series through ludomusicology, specifically through Karen Collins’s and Iain Hart’s work proposing that interactive texts like video games are incomplete without a player. When the player is removed from Undertale entirely, what happens to the interactive qualities of the original text? From here, I will explore the series’ interactivity through fan studies: in Undertale: The Cinematic Dub, many fan ‘traditions’ are present or at least referenced in the final text. Does this make the series interactive through a fan studies lens, if not a game studies one?
      This paper explores the aesthetics of non-verbality in gaming and the related notion of sonic spatiality. By drawing attention to a series of recent debates and case studies that question the centrality of speech as a communicative apparatus, I seek to highlight the perceived relationship between dialogue and mediatised notions of “reality”, as well as the influence of fan communities in encouraging developers to rethink vococentric approaches.
      In the last few years, there has been a marked increase in debate regarding gaming’s apparent (over-)reliance on spoken dialogue, particularly in filling spaces that could otherwise have been perceived as purposefully and meaningfully absent. For instance, following the release of God of War: Ragnarök (2022), many media outlets remarked upon the arguably superfluous verbosity of its NPCs. More recently, Super Mario Bros. Wonder (2023) sparked impassioned debate about the apparent vexatiousness of its Talking Flower characters, with Nintendo allowing players to selectively mute them, or even to translate their speech into other languages.
Relatedly, in the Dead Space remake (2023), newly-recorded dialogue is appended to the game’s previously silent protagonist, whose muteness, according to Sweeney, had enabled ‘a symbiotic relationship between player and avatar’ (2016, p.172). This decision was divisive – some critics argued that it was a logical consequence of gaming’s continued drive to emulate “reality”, but others suggested that the original game’s oppressive absences (and the unfilled space it forced players to confront) were more affectively impactful.
      By exploring the aforementioned examples of perceived vocal extraneity in depth, this paper will highlight often-overlooked tensions regarding the implementation of speech in gaming. The proposal also relates closely to the conference theme by construing “space” – and non-verbal spatiality – as a phenomenon that offers valuable insights into the purposeful absence, omission, and elision of sound in audiovisual media.
League of Legends (Riot Games, 2009) hosts several alternate universes that relocate the MOBA’s champions into new settings and storylines for the purposes of market diversification, transmedia expansion, and increased community engagement. Commercial strategies implemented by Riot Games include novel musical acts comprising fictional characters voiced by living artists. The latest of these virtual groups, HEARTSTEEL, which debuted in October 2023, spawned a surge in fan content set within the music industry. One notable fanfiction, Steel Your Heart, I’ll Be Your Guide by independent musician Marcus Skeen, inspired the companion fansong “Lost in Silence”, performed by Skeen impersonating the canonically selectively mute character Aphelios. The acclaim garnered by this track compelled Skeen to develop the creative project into a fully-fledged EP titled North Star, which expanded on Riot’s IP by crafting a wholly original musical persona and artistic journey for the voiceless Aphelios, while simultaneously providing Skeen with a profitable musical alter ego.
Following previous sociological and ethnographic work on virtual artistry, this paper adheres to Auslander’s (2006, 2020) performer-centered theory of musical performance in examining HEARTSTEEL Aphelios’ musical persona as fleshed out and enacted by Skeen, and as negotiated with Riot Games as IP holder, on the one hand, and the LoL fandom as target audience, on the other. To accomplish this, the paper explores the tensions arising from the interaction between artist and persona concerning identity. It also delves into the conflicts resulting from Riot Games’ fanwork exploitation rights and investigates parasocial relations between artist-persona and the LoL fandom at large. The overarching goal is to assess aesthetic and semiotic awareness and continuity from canon to fanon in musical LARP as a unique form of performance, shaped by a myriad of interlinked agents and interests in the context of fandom.
      Chair: Alex Kolassa
13:00–15:00 Lunch & siesta
15:00–16:30 Session 10 – Soundscapes: Stables, Fields and Gardens
R. Murray Schafer’s Soniferous Garden describes a concept that entails the harmonious and intentional design and auditory augmentation of environmental soundscapes to enrich our sonic experience. Modern game engines allow for several ways of implementing sound events and controlling environmental parameters. Soundgarden is a research-creation project that uses the Unreal 5 engine to adapt Schafer’s concept into a playful soundscape game utilizing generative music. It explores virtual acoustic ecologies and the role of sound as a secondary object within such settings, examining the relationship between natural and artificial sounds, the meaning of nature, how ‘natural’ sounds are represented, and how they can be augmented to enhance a musical gameplay experience. The project presents a possible implementation of player interactions with sound, and of sound interacting with its environment. Soundgarden applies principles of acoustic design as described by Schafer, with the intention of assessing their impact on gameplay and player affect. Further, the project aims to generate insights into the acoustic design of virtual spaces, as well as possible implications for the design of real-world sound-based experiences. Grounded in Schafer’s soundscape theory and the concept of acousmatic sound, the research aims to bridge digital and physical acoustic design, seeking to offer insights into the integration of sound in games, contributing to practices in auditory game scenography, and generating awareness of the sonic environment of games and beyond.
The main research questions of my dissertation (Mauch 2024b [in press]) – how to conduct ethnomusicological field research in video game spaces, and how virtual reality spaces are created, manipulated, and experienced through sound and music – offer two different but strongly intertwined perspectives:
In theoretical terms, I propose the broader concept of the constant conversation (Jacobsen 2016), further developed and concretized, as a clearer and more adequate understanding of the ergodic process (Aarseth 1997). Towards a comprehensive understanding of the coded, multilayered audiovisual (and haptic) language between player and video game, I introduce the sound-strand model to describe the basic sonic vocabulary a player must interpret and understand to play a game; the model is understood as a contextualized synthesis of existing theories of video game sound and music.
Methodologically, I regard video game spaces as heterotopias, which reflect, mirror, or stand in some other direct relation to the (social, cultural, and physical) space we live in (Foucault 1986) and in which fieldwork is conducted. To examine a more focused perception of in-game sounds and the individual, subjective connection to a specific soundscape (Schafer 1969), I follow a rather experimental approach by conducting Soundwalks (Westerkamp 2007) in different case studies.
The insights gained demonstrate a highly subjective perception of game soundscapes. Referring to the concepts of Promenadology (Burckhardt 2006) and Walkscapes (Careri 2002), I argue for a more subjective research approach in game studies in general, which is also applied within my current subproject Gamescapes in the SNF project CH-Ludens.
Following our hack in the forest, I return to the stable to untack Darby. Using a hoof pick, I remove mud and small stones compacted in the underside of her hoof and groom her coat with a curry comb, using firm circular motions. These actions produce soothing, ASMR-like haptic sounds. I contextualize the affective coziness of the Horse Club Adventures series’ (Wild River Games) environmental and musical soundscapes within the pony fiction genre: stories set at a pony club or riding school that center the coming-of-age of young women and emphasize the importance of friendship, teamwork, cross-species empathy, and human-horse communication and relationality (e.g., Misty of Chincoteague, The Saddle Club). In Horse Club Adventures—a contemporary approach to pony fiction—Winifred Phillips’ score aids in the repair and reclamation of a genre historically “relegated firmly to the sidelines” (Haymonds 2004: 360). I offer three scenes of affective horse-human relations: a forest hack, a jumping course, and untacking and grooming in the stable. Each scene provides a path for encountering the question: how do players internalize the affective sonic coziness and horse-human relations of pony fiction in experiential media? Games that are perceived as cozy are hardly ever only cozy. In Horse Club Adventures, the sonic coziness of the environmental sound design and Phillips’ score offers players a strong sense of place and a supportive space to explore empathetic gameplay as they listen to and sense the interactions negotiated between young women and their horses in and beyond the saddle.
      Chair: Mattia Merlini
16:30–17:00 Break
17:00–18:00 Session 11 – Performing in Virtual Spaces
      Karaoke and singing games enjoy an enduring, unbroken popularity all around the world. They invite players to perform in various kinds of spaces, as the games provide a framework for their musical performance. After a brief outline of the development of singing games, I’d like to focus my presentation on singing games in virtual spaces. XR is opening up new opportunities for the interaction between and experimentation with space, voice and body. Impressive examples like interactive soundscapes, controlled and animated with the pitch, vibration and intensity of the participant’s voice, show that the success story of singing games is far from over.

I would like to present an ongoing project at the University of Bayreuth that deals with music games and theatrical performance in XR spaces. The aim of the project is to support students in their personal development by reflecting on their voice and posture: How do young adults learn to accept their body and their voice? How can we simultaneously strengthen our self-awareness, do something for our health and improve our ability to concentrate – all while playing?

      The didactic benefits of this body-activating practice within university teaching range from reducing the fear of presentations and oral exams to increasing motivation and willingness to learn. Let us explore what singing games can do for the body and mind!
Over the past five years, the world has experienced a surge of in-game concerts. This tendency was heralded by Marshmello’s concert in Fortnite (February 2019) and accelerated dramatically with the beginning of the Covid-19 pandemic, leading to over 70 in-game concerts being organised in less than five years. One such concert stood out for its organisers’ professed intention to challenge the boundaries of social interactivity previously set by other, mainstream in-game events: Aurora’s concert in Sky: Children of the Light, launched in December 2022. ThatGameCompany, the studio responsible for Sky, used this concert to launch technology that allowed up to 4,000 players to interact simultaneously, whereas other commercial in-game concerts typically held no more than 40 players per game room. Furthermore, several declarations by ThatGameCompany’s CEO indicated an intention to create an in-game concert experience that was more socially meaningful than its predecessors. This paper therefore investigates how successful ThatGameCompany was in implementing this vision, and in what ways the concert was able to shape players’ perceptions of liveness, sociability and inclusivity during the event. It does so by drawing on a virtual ethnography of the community surrounding the game and concert, combined with liveness theory (Auslander, 2008), Social Inclusion Theory (Bailey, 2005) and Social Dominance Theory (Sidanius & Pratto, 1999). It argues that the effects demonstrated by this concert call for heightened attention to the social aspects of liveness theory.
      Chair: Andra Ivănescu