Ludo 2025 – Programme and Schedule

For more details and registration link, see the main conference page.

Asynchronous Talks

Back to Black(wood): Remaking Music in Branching Narrative Media
Andrew S. Powell
Easy-Listening and the Musical Imagery of Gambling in Balatro (2024)
Michiel Kamp

Day 1: Thursday, July 10th

9:00–9:30 “Loading”
Welcome and Registration
9:30–11:00 Session 1 – Modalities of Musical Interaction
Minigame collections such as Super Mario Party (2018) and Nintendo Switch Sports (2022) are popular social games.  However, minigames also occur in more narrative-centric games such as the Legend of Zelda series, and in these sorts of games music and sound are essential in demarcating minigames from their overworlds.  This paper will examine how sound sonically separates the minigame from regular game space, defining a shift of gameplay frame (a specially demarcated time and space, as defined by Salen Tekinbas and Zimmerman 2004, 99).

[Figure 1: The mirrored gameplay frame in minigames. A graph with three vertical segments separated by white space: the middle segment contains an inverted U-shaped curve while those on either side are flat lines; the left column is marked ‘prior gameplay’ and the right ‘subsequent gameplay’.]

As represented in Figure 1 (inspired by Summers 2016b, 14), minigames often feature a mirrored gameplay frame:  1) silence followed by an instructional phase introduces the minigame, 2) a sonic cue begins the minigame, 3) gameplay and musical rhythms intensify, 2) a sonic cue ends the minigame, and 1) a brief moment of silence transitions back to regular gameplay.  As Grasso outlines in RPGs, musical transitions function as framing devices to distinguish different gameplay modalities. Ludic temporalities are established by these musical events, and those “commingle with these other ludic temporalities in games” (Grasso 2020, 98).  This is certainly true of minigames as well:  the grey zones on the figure represent player-engaged gameplay (versus the white zones of more passive engagement), but they also correspond to the shift from more active sound to silence.  This paper will further pull from methodologies established by Hart 2014 (278; gameplay rhythm) and Lind 2023 (47 and 144-145; the interaction of sound and gameplay modality) to analyse such sonic frames.
Animals and audio are often discussed together when examining the soundscape and environments of a video game space, explored considerably in the book Music and Sonic Environments in Video Games (Galloway & Hambleton, 2024). Animals in video games, however, also have a large part to play in our interaction with rhythmic and musical gameplay in non-rhythm games. Games like Roots of Pacha specifically give the player a flute tool which they use to befriend animals, and when the player encounters a wild animal to tame, they must engage in a rhythm game.
Animal interaction and befriending can often be a specific musical process, such as in The Legend of Zelda: Ocarina of Time, where the player specifically performs Epona’s song on an ocarina. From catching fish in the Animal Crossing series to the horse flute in Stardew Valley (v.1.5), there has been a distinct link between animals and interactive gameplay – leading to the successful and socially important campaign of the ‘Can You Pet the Dog?’ social media account.
The aim of this paper is to examine why interacting with animals and rhythmic gameplay is a large part of non-musical games, gameplay that is generally positively received by players. Is it our innate need as humans to interact with and (musically) play with animals, even if they are virtual? And does this have some relationship with our ability to survive and thrive as humans through our shared love and use of animals and music?
Videogame music can connect players to a game’s virtual reality and may help players build a sense of their identity and role within that world. This paper analyses an example where musical gestures, especially aspects of timbre, melodic contour and harmonic movement, are used to articulate the player’s role and identity in the world of a game. 
In Outer Wilds (2019), players perform as the silent protagonist, ‘The Hatchling,’ exploring the end of the universe with a cast of characters who come together to perform a song at the game’s conclusion. Over the course of the game, players collect instruments and help nonplayer characters, culminating in a cathartic concluding performance as their friends assemble to play as the world ends. Players forge their in-game identity through facilitating this musical performance, where the legacy of their deeds is given musical, specifically gestural, sounding. The concluding song is an archive of the identity they have created through playing the game.   
Through ludomusical gestures, players build an identity in a performative sense. Drawing on musical gesture theories (Hatten, 2004; 2018), ludomusicology (Lind, 2023), identity scholarship (Butler, 1990) and game studies (Isbister, 2016), this analysis reveals how overlapping musical gestures can sonically articulate community, prioritizing human connection despite the game’s apocalyptic outer-space environment. The music aids the sense of player identity as they perform/enact that identity through play. With games increasingly recognized for the complex play of identity they provide, this paper explores the role of musical discourse and gestures in such critical issues.
Chair: TBC
11:00–11:30 Break
11:30–13:00 Session 2 – Player Cultures
This paper considers the modding of custom sounds into Counter-Strike: Source (CSS) (Valve Software, 2004) dedicated servers as an example of participatory culture (Jenkins et al., 2009; Sihvonen, 2011). While the modding of custom audio into a game has been subject to research before (Freitas, 2021), the motivation to do so was identified as a desire for the game to better satisfy a set of players’ idea of immersion. With the nature of the typical sound modded into CSS being totally incongruous with the theme of the game itself, the same explanation cannot apply here. The custom sounds in CSS were typically internet memes or sound effects from other games, the use of which can be considered examples of textual poaching (Jenkins, 2012). Nor does the remixing of the game’s soundscape complement the gameplay, as the existing sounds it obscures are important for skillful play (Reeves et al., 2006).
Postigo (2007) argues that a key motivation for modders is to be able to create a game that they more strongly identify with. Sotamaa (2010) agrees, adding that “modding is obviously just one example of the ways of acquiring ‘gaming capital’”. Following these approaches to modding, I argue that the use of these custom sounds was to set a social vibe and to demonstrate familiarity with a genre, rather than to serve a ludic purpose; or, that players modified the soundscape of the game to strengthen and confirm their identities as gamers.
This paper also seeks to gain a better understanding of the context in which mods for CSS were created and the distinct practices of the community, and will perform a technical analysis of the mods themselves and the process of getting them onto a custom dedicated server.
One of many creative fan practices carried out by the Minecraft community in the 2010s was popular music parodies. These were a prolific YouTube phenomenon, with a strong emphasis on the animated component to their music video format. Though initially an amateur pursuit for many content creators — reflected in the machinimatic production of visuals — this gradually became an increasingly produced, polished, and narrative form of ludomusical fan work.
There are two primary aims to this paper: framing the musical parody as it manifests in the Minecraft YouTube tradition against previous research; and contrasting those understandings against the playful, although satirically unconcerned, process by which fans both write and improvise parodies in Minecraft. This requires examination of the corpus of Minecraft parodies as well as instances (and surrounding critical literature) of musical improvisation spontaneously performed in Minecraft, as evidenced through YouTube videos, and analysing how parody manifests within them.
Many scholars have explored the parody in a plethora of musical contexts, historical and modern, through the lens of satire and/or irony. This research proposes that, due to insubstantial musical transformation, the gradual shift away from humour, and their frequently non-satirical relationship with their source material, Minecraft parodies function differently. The Minecraft parody places a playful emphasis on the game itself and the paratextual fan structures surrounding it, often uncritically distancing itself from satirising its original works. The key argument highlights that parodic relationships between the Minecraft parody and its source text manifest in form, but not always function.
In the last three years, there has been a growing body of literature dealing with the recent formation of Hyperpop as a music genre stemming from vibrant LGBTQ+ online communities (Bates, Delphis, Moraes, Santoli, 2024; March, 2022; Miller, 2023; Montgomery, 2024). So far, other than myself (Moritzen, 2022), no other researcher dealing with Hyperpop has paid attention to the role that Minecraft Music Festivals (MMFs) played in providing a stage to showcase independent artists to a larger online audience. Arguably the most successful project signed to a major label frequently described as Hyperpop, 100gecs started performing under that moniker at an in-game concert hosted by producer Open Pit in Minecraft in 2018, years before the Covid-19 pandemic redirected media and game developers’ attention towards the many possibilities afforded by videogames as a place for music performances.
This proposal asks the following question: what actors and actants are present in the controversies surrounding Hyperpop? In order to answer it, I have conducted 11 semi-structured interviews collected between 2023 and 2024 with artists and event producers connected to MMFs, including Umru, a DJ and producer who has worked with artists such as 100gecs and Charli XCX; and Gavin Johnson, Head of Gaming at Monstercat Records, the organiser of the first MMF in 2013. I have asked the research participants about their personal relationship with videogames and music, as well as their personal opinions on the term Hyperpop.
Inspired by Actor-Network Theory (Latour, 2005), I analyse the human and non-human elements in their stories to track down the socio-technical networks connecting musicians, music lovers, DAWs, mash-ups, YouTube and Minecraft which resulted in what is now known as Hyperpop.
Chair: TBC
13:00–15:00 Lunch break
15:00–16:30 Session 3 – Interactive Interfaces
Soulless is a series of five songs composed by ‘ExileLord’, a prominent member of the online Guitar Hero (GH), and later Clone Hero (CH), community, which newly flourished under pandemic conditions in the early 2020s. They are original compositions released alongside incredibly challenging levels (charts) for the games. They are notorious among players for their long runtimes, relentlessness, and meta-redefining technical demands.
Since GH’s inception, research has paid attention to the many ways in which the GH player-as-performer challenges understandings of musical performance, the various music-pedagogical opportunities associated with the practice of playing the game, and the new ways in which gamers can do ‘musicking’ through playing (in many senses) GH.
Much analysis understands GH as designed for players to perform existing music, predominantly by notable figureheads of the rock and metal scenes. In these understandings of GH and similar games, players emulate real or imagined rock legends through a process of mimicry. Currently, there is little discussion surrounding the compositional strategies of individuals like ExileLord and others championing the charting of original compositions written expressly for the game.
Soulless is just as, arguably more, concerned with the challenge of its performance as it is with its musical appeal. Music and gameplay challenge are intertwined not retroactively, but throughout the composition process. This paper examines what songs like Soulless tell us about music written to be performed by players using the game-as-instrument, and the particular ludic and musical decisions made by the composer. It challenges certain existing understandings of rhythm-game play as a process rooted in mimicry, presenting instead one which merges familiar tropes of both video game and traditional musical performance traditions, such as the adversarial relationship between player and challenge, and the cultivation/adulation of the virtuoso.
Videogames are disappearing; so too are experiences of play. Where preservation exists, Western perspectives are emphasised and global narratives excluded. This is significant in Japanese contexts. While large companies in Japan have shaped gaming history, there is a gap in English-language work that explores Japanese indie games.
This talk outlines the findings of a research project that documented the sound of Japanese indie games and their playable contexts. This work consisted of conducting interviews and recording game play at BitSummit 2024. The project examined the experiences of Japanese indie developers working on alternative-controller games. While commercial developers are rooted in game history narratives, there is little work on vulnerable non-commercial games. The project’s interviews captured the everyday challenges of maintaining alt-controller games as playable entities – many of which were foundationally held together by electrical tape.
Through audio documentation, the project also captures the fleeting sound of grassroots Japanese indie games, demonstrating an approach to how these experimental and experiential videogames can be preserved. This is particularly significant as these game experiences cannot be easily accessed by audiences outside of Japan.
The project captures the sound of diverse collaborative play and spectating at BitSummit 2024. As these experiences immediately disappear once the festival closes, there is an urgency for preservation practitioners to engage in creative ways of preserving distinct moments of play, rather than play itself. By documenting audio of Japanese-made indie games within this context, the project presents a valuable approach to preserving the underexplored sound of play of grassroots and vulnerable independently-made videogames.
There has recently been more research emphasis on embodiment and musical performance (Fisher and Lochhead 2002). This analysis is often done with respect to the affordances and constraints of a particular instrument, such as analysis of “touch” on piano keys (Doğantan-Dack 2011).
Such focus on the affordances and constraints of an interface is closely tied to the construction of new musical instruments (Magnusson and Mendieta 2007): an instrument that is ergonomic in all uses might be desirable. For composers, knowing what is comfortable to perform on an instrument can be used to decrease physical discomfort in a piece. To what extent, however, has physical discomfort been actively sought out as a compositional device, or by instrument makers as intentional constraint? Are there reasons for which physical discomfort is purposefully sought out in musical performance, other than virtuosity (Howard 1997)?
I approach these questions through rhythm games and their controllers as interfaces. For rhythm games, the physical embodiment on a controller is not only a byproduct of music; instead, the embodiment is a central product delivered by the games (Lind 2024). As an illustrative example of this belief, I present charts or maps that intentionally ask for physically uncomfortable motions as motifs, thereby also dictating form alongside the music. Since such physicality is not captured solely by the chart visualization or music, I conclude that rhythm game analysis should focus on embodiment on the controller as “music” itself, as has been suggested with analysis of choreo-musical dance (Leaman 2022).
Chair: TBC
16:30–17:00 Break
17:00–18:00 Session 4 – Black Myth: Wukong
Black Myth: Wukong (2024, hereafter BMW) is widely considered the first Chinese AAA game. Echoing the protagonist’s name, “the Destined One,” I argue that BMW’s soundtrack manifests a tripartite “destined” implication. First, to address interpretive controversies over BMW’s fragmentary storytelling, I present a musical-dramatic analysis of its soundtrack, revealing how BMW’s hidden storylines related to the theme of “confronting destiny” are powerfully revealed through sonic means. Besides showing how techniques like eponymous omission and thematic hybridization (Anatone 2023) help convey BMW’s hidden storylines, I argue that its organic synthesis of Chinese and non-Chinese instruments and genres attests to Chinese cosmopolitanism by “creatively realizing coherence out of diversity” (Xiang 2024).
Second, I address the trope of rebellion both in the original novel, Journey to the West (1592), and in BMW, where instrumentation and genre can be understood as retelling the story from a post-CER (Chinese Economic Reform) intercultural perspective. I conclude by examining BMW’s engagement with the soundtrack of the 1986 TV series Journey to the West (hereafter JTTW) through the lenses of intertextuality and collective memory. I demonstrate that BMW’s “Main Title” is an intertextual reference to Celestial Symphony, the main title for JTTW, which is a collective memory held by contemporary Chinese people. I also show that the intercultural approach, particularly in instrumentation and genre, is passed down from Celestial Symphony to BMW. Contextualizing the impact of CER, I argue that BMW’s soundtrack is destined to be cosmopolitan and intertextually intertwined with JTTW.
The Chinese video game industry has experienced rapid expansion, driven by globalization, government policies, and player communities.
Chinese game history can be understood through games inspired by martial arts, history, myths, and classical literature (what I call Chinese Ancient-Style Fantasy Games). This paper proposes four periods of Chinese video games: “Chaotic,” “Budding,” “Developing,” and “Nationalism Revival,” reflecting a shift from imitation to nationalism, internationalization, and reinvented nationalism. This history reveals the Chinese gaming industry’s changing responses to globalization, navigating the twin demands of globalization and Chinese culture. Music in Chinese games is an important aspect that reflects these issues.
Music reveals the changing dynamics of Chinese games and globalization. A “Developing period” game like Genshin Impact caters to international markets by presenting a simplified version of Chinese culture through musical stereotypes and “Alienated Orientalism,” diminishing the complexity of Chinese cultural expression. Contrastingly, Black Myth: Wukong, representing a new approach to cultural representation (“nationalism revival”), refers to a variety of Chinese musical elements, including folk and religious musics, constructing a game world firmly rooted in the context of traditional Chinese culture while still using a musical language communicative to global audiences.
In this paper, I offer specific musical analyses from Black Myth: Wukong to explore the features and functions of the music, the image of Chinese culture it provides, and how it builds and reshapes Chinese cultural contexts within games.
Chair: TBC
18:00 Break
18:30 Evening Session – Game Sound in Practice (Informal style)
I will present a recent audiovisual composition which playfully pays tribute to the ways in which pioneering Japanese video game composers of the 1980s managed to circumvent the limitations of the relatively basic sound chips of the time, most notably by using rapidly-arpeggiated ‘pseudochords’ to create an illusion of polyphony, and ultra-short, percussive ‘blips’ to form primitive but distinctive beats. Here, the sounds (and sights) being treated in this way are recordings of a bass clarinettist playing long, sustained notes; although we never hear or see more than one of these at a time (a stricter limitation than even the likes of Koji Kondo faced), they are rapidly intercut to create not only polyphonic musical textures but also imitated loading sequences and bursts of noise created by random successions of pitches. In this way, the piece leans into the evocative style of the era to which it refers.
I am the composer and audio director of City Of Beats, an indie music-shooter game that was nominated in 2024’s Game Audio Network Guild Awards for Best Audio in an Indie Game, Technical and Creative Achievement in Music, and Technical and Creative Achievement in Audio.
Our aim was to create a sense of flow state within the player, something that can be easily achieved through rhythm-action game mechanics.  But we didn’t want to exclude the large number of gamers who feel they lack a good sense of rhythm. Our key philosophy is that music should not be the main focus of the game, but instead can be a powerful tool to achieve the immersion and involvement that we want to provide, for all players.
We created a world in which all enemy actions and movements are synced and driven by the music, in which everything that happens in the environment is linked to different elements of the soundtrack, but the player is not required to hit buttons in time to that music. In theory, learning the music would help the player navigate and time their actions accordingly.  But to truly connect the player to that environment, we provided weapons that effortlessly participate in that music without requiring anything more than holding down a trigger, creating a sense that those weapons are musical instruments that automatically contribute to a musical score, without requiring any understanding of the music that the player is now a part of.
In this paper I will detail the approach we took, the software we used, and how these techniques can be universally applied to any game to increase immersion and reward, while avoiding the limitations of the rhythm-action genre.
One of the challenges around the use of music in video games is the inherent conflict that occurs between the player’s ability to instigate events at any time and time-based musical structures. Indeed, it could be argued that the stylistic traits of much action music in games, such as the use of static tonal centres and rhythmic and diegetic ambiguity, are a product of trying to mitigate potentially jarring transitions (Stevens, 2021) in an effort to maintain musical smoothness (Medina-Gray, 2016).
An additional opportunity for a more truly interactive, bi-directional relationship between video game events and music has been discussed for some time from both a theoretical (Stevens & Raybould, 2011; Stevens & Raybould, 2014) and programmatic perspective (Walder, 2018) and yet, with a few notable exceptions, it remains a rarity in modern video game production. For ‘Zombie Dog’ (currently in development) this interactive relationship is one of the core design pillars of the game. In this practitioner showcase the creative director and audio designer of ‘Zombie Dog’ will demonstrate a number of ‘integrated music design’ features from the game. Through a series of live practical examples they will show how these techniques were implemented, and will discuss how the approach is motivated by the desire to heighten the comedic aspects of the game, and to support rhythmic entrainment.
Chair: TBC

Day 2: Friday, July 11th

9:30–11:00 Session 5 – Listening Between Realities and Fictions
Music plays a significant role in adding depth and detail to world-building in open-world adventure video games, transmitting crucial vibes—essential yet intangible aesthetic information—from the moment the game loads through the entire gameplay experience. In the action role-playing game Cyberpunk 2077 (CD Projekt Red, 2020), the game’s sound designers were confronted with a critical challenge when selecting the music that would fill the sonic landscape of Night City, an independent megacity on the west coast of an alternate-history North America: how is it possible to create a comprehensible music of the future when we are limited to knowledge of the genres and styles of our real-world past and present?
While the communicative potential of real-world music present on in-game radio stations has been examined by other ludomusicologists such as Kiri Miller and Will Cheng, the music in Cyberpunk 2077 stands out from that in games like Grand Theft Auto: San Andreas and Fallout 3 because Cyberpunk 2077’s radio is full of hundreds of unique pop songs composed specifically for the game. Remixing literary theorist Mikhail Bakhtin’s concept of the chronotope (lit. time-space) and applying it to these original songs provides insight into how the curated vibes evoked by specific musical styles and genres communicate a wealth of texturing and narrative information, made effective by the way that intentionally deployed references to our real-world music cultures invite players to participate in world-building as they forge connections between what they see and hear and their own associations.
The writhing hellscapes of Konami’s Silent Hill video game franchise are animated by Akira Yamaoka’s soundscapes, which have fascinated players and ludomusicologists alike for their ability to subvert, terrify, and tear at the boundaries between the real and the virtual (see Cheng 2014; Summers 2016). Where scholarly focus on the music of Silent Hill has predominantly assessed its in-game effects and mechanics, less recognised are the ways in which cues from this nightmarish (other)world have become a nostalgic refuge. As with other game franchises, music from the Silent Hill series has found its way into paraludical usage (cf. Diaz-Gasca 2022, p. 46), beyond the game world and into remix cultures among other kinds of fan activity. One such usage has given rise to ‘Silent Chill’ (see Atkinson 2020) and ‘Fogcore Playlists’ on YouTube, promising listeners melancholia, soothing emptiness, and dreamy suspension. At times running up to 10 hours in length – looping abandoned, foggy Silent Hill visuals against selected cues from the series – these audio-visuals have been experienced by many fans as nostalgia for an escapist nothing, a liminal void. This paper will theorise these experiences as a form of ‘zerofuturism,’ exploring the audio-visual affects that induce participation in the chronotopes (cf. Van Elferen and Weinstock 2016, pp. 78-86) between here, there, and – perhaps most prominently – nowhere. In doing so, it not only considers the specific contexts of Silent Hill’s paraludical connections and fandom, but also the broader implications relating to this form of escapist nostalgia and late capitalist strata.
As children, we often fantasised about putting ourselves in the shoes of the athletes we admired. We dressed like them, behaved like them, and narrated our goals as if we were someone else. These behaviours were shaped by audiovisual products that extended beyond sports performance (advertisements, mainly), whose perception was in turn influenced by conventional gestures and narratives in television and radio broadcasting. Games like Tony Hawk’s Pro Skater (Activision, 1999) not only built a bridge between the user and these representations, but also helped establish a musical aesthetic around a subculture, placing players in the role of skateboarding superstars. In FIFA (now EA Sports FC, Electronic Arts, 1993), this shift came with the introduction of the practice mode in its 07 release, which positioned the player in scenarios akin to everyday play. Both games created connections between music and the virtual representation of daily settings, potentially triggering associations and reminiscences when listening to the soundtrack during real-life sports performance. In this paper, we reflect on the capability of musical listening to enhance sports performance (Ergogenesis), encouraging the self-perception of the player as “someone else” or “in another state of feeling/being” (Schechner 1985, 37) – what we call Egogenesis. Using Tony Hawk’s Pro Skater and FIFA 07 to 10 as case studies, we delve into the role played by audiovisual productions related to football and skateboarding in evoking these thoughts, the music industry’s capability in driving consumption trends associated with the phenomenon, and the impact of gaming as a mediating factor between audiovisual representation and real-life performance.
Chair: TBC
11:00–11:30 Break
11:30–13:00 Session 6 – Gender and Vocalities
The monstrous feminine is an old trope in literature, film and video games (Creed 2007 [1993], Appleton Aguiar 2001, Ivănescu 2024). In recent times, the monstrous femme has continued, but there is also a section of the population seeking to reclaim monstrosity as a source of empowerment. Musical artists such as Jill Janus from Huntress and Janelle Monáe use gore and afrofuturism, respectively, to challenge feminine humanist perspectives, turning the strange and unusual into agential strength. In this paper, we examine the Elder Brain (Baldur’s Gate 3, 2023) and Meredith (Dragon Age II, 2011) as powerful video game figures bearing complicated agency and moralities.
We examine vocal timbre, trends in their respective sonic treatment, and how each of these two characters changes and transforms into her final monstrous form. At the forefront, we analyze power—how power materializes through sound, how representation affects power, and how critically examining multimedia can redistribute power. The Sonic Gaze (Laws-Nicola, 2025) provides a foundational framework to deconstruct and analyze the sounds, noises, and musics within the games and surrounding both characters. The paper serves as a concentrated examination of body horror, white feminism, and the sounds of the monstrous feminine. We use the sounds and music in these two games as representative of how society views women with power—as monsters.
But monsters are powerful. They are arbiters of change and force; they necessitate revolution (Rhoades and McCorkle 2018; Laws-Nicola 2025). Especially because these characters are removed from human femininity, their voices, actions, and surrounding musics are an important part of their narrative role and representation.
Elden Ring (2022), developed by FromSoftware, is an open-world action RPG that builds upon the studio’s successful formula for environmental storytelling, atmospheric world design, and punishing difficulty. As players traverse the Lands Between, they encounter many enigmatic figures. Among them, the Chanting Winged Dames stand out as enemies whose diegetic singing transforms them into embodied performers as well as deceptive threats. In this paper, I examine these creatures from a ludomusicological perspective to show how Elden Ring uses their singing voice to subvert player expectations and reinforce its themes of loss and betrayal.
Building on Rebecca Roberts’ work on sonic signifiers in horror games, I examine how the Dames’ alluring Latin chants serve as auditory cues that lull players into a false sense of security before being violently subverted. Unlike traditional horror games, where unseen sounds cultivate fear, the Dames destabilize the expected relationship between sound and safety, using their enchanting voices as a deceptive lure. I also examine their intertextual ties to sirens, harpies, and Shakespearean witches, drawing on Diane Purkiss’ work on monstrous femininity to argue that these figures embody both remnants of lost grandeur and the horror of transformation.
Further, this paper explores how Elden Ring disrupts the diegetic/non-diegetic divide through its sound design. Drawing on Jennifer Smith’s ideas on vocal disruptions in game soundscapes, I explore how the Dames’ chanting acts as both a storytelling device and a mechanic of environmental interaction. Finally, through comparisons to the Milfanito of Dark Souls II, I argue that Elden Ring takes advantage of player familiarity to create an affective experience of betrayal by turning nostalgia into dread.
This paper explores the cultural convergence of nu metal and digital gaming during the early to mid 2000s, investigating the music genre’s influence on the masculine and white default hegemonic gamer identity and how this impacted inclusion and exclusion in broader gaming cultures. Nu metal, characterized by its fusion of heavy metal, alternative rock, and rap, offered games of this period a rebellious and aggressive soundtrack that complemented the dystopian landscapes of Twisted Metal and Fight Club and the violent reimaginings of real-world sports in NHL Hitz 2002 and Tony Hawk’s Pro Skater. As a genre, nu metal is described as “alienation incorporated,” a commodification of (predominantly) white, male anger (Halnon 2005, p. 441; Duncombe 1997). Nu metal embraced a politics of otherness built on appropriation (Hoad 2023; Middleton and Beebe 2002), one that game scholars have located in gaming discourses from that period (Leonard 2004; Gray 2012). This synergistic hailing of white, male anger connected gaming to wider popular cultures, but it also reinforced problematic cultural norms. Both nu metal and gaming of the era emphasized hyper-masculinity, whiteness, and fragility, framing anger and rebellion as central to their appeal. This convergence excluded diverse voices and reinforced toxic behaviors, including gatekeeping and resistance to inclusivity within early gamer subcultures. By examining the overlap of nu metal and game culture in a post-9/11, pre-Gamergate world, we highlight how this “loser-y attitude” (cf. Suits 2005/1978) shaped a predominantly white, male gaming identity and consider the impact of these “vibes” on contemporary gaming cultures.
Chair: TBC
13:00–15:00 Lunch break
15:00–16:30 Session 7 – Rhythm
Games scholar Jesper Juul writes that “we tend to believe that games should make players fail, at least some of the time” (Juul 2013). Puzzle games are among the most naked instantiations of games as structured failure: a puzzle game with either no obstacles or inscrutable mechanics would quickly lose a player’s interest.
In this paper, I present a theory of puzzle games as rhythmic. I posit that the puzzle game experience comprises thinking, experimenting, and feedback time containers. Though the lengths of these time containers vary by player, I maintain that the puzzle game developer speculatively distributes positive feedback in a compositional fashion, stymieing and rewarding the player at a pace that encourages but does not patronize. A well-constructed pace facilitates entrainment as a “phase-locking of the [player’s] attentional rhythms with temporal regularities” of the game (London 2012).
Indeed, a satisfying puzzle experience repeatedly features “something new and unforeseen that introduces itself into the repetitive: difference” (Lefebvre 1992). Each successive puzzle must balance the introduction of novel difficulty (difference) with the utilization of prior learned knowledge (repetition). This overlap is crucial: “since repetition can be perceived in an unfamiliar style, innovations… can appeal to repetition to clarify their vocabulary and procedures” (Lidov 1979).
I then apply the rhythmic puzzle framework to GNOG (KO_OP 2018), a puzzle game featuring distinctly musical puzzle-box deities. Through a temporal analysis of thinking, experimenting, and feedback containers in two recorded playthroughs, I deduce an attentional meter (London 2012) in the opening of GNOG’s god-boxes. I conclude by examining GNOG’s preemptive and confirmational sounds (Collins 2013) as ways of establishing or transitioning between different time containers. I hope this case study can be a fruitful model for rhythmic analyses of other puzzle games.
Contemporary videogame music is commonly composed for “kinesonic synchresis,” the matching of player action to musical moment. These systems can often misstep, musically announcing an enemy before the player has visually located them. These mismatches can also be intentional, for comedic or narrative effect. Applying Erick Verran’s work on silence in a videogame’s musical soundscape and perceptions of affordance, I argue that games which use music in antagonistic, confusing, or mis-choreographed moments develop an affective silence. These moments of kinesonic dissonance – friction between a player’s actions and the music they hear – can be deafeningly loud and entirely musical, but the structure of affordance they convey remains ambiguous. Inappropriate chimes, poorly executed crossfades, and jarringly balanced earcons resist the tonal epistemologies that Robin James critiques as reinforcing the neoliberal, eurocentric forms and formalisms sonic scholarship often claims to upend. Drawing on examples from Indika (Odd Meter, 2024) and The Elder Scrolls IV: Oblivion (Bethesda Game Studios, 2006), this paper explores moments of play, intentional and incidental, where a player’s actions and the videogame’s soundtrack fail to reach kinesonic synchresis, resulting in arrhythmic forms of play. Using Michiel Kamp’s analysis of background music as equipment, arrhythmic play demonstrates the network of literacies – haptic, sonic, and visual – that allow us to understand our choreographed role within a given game. By grappling with sensory literacies beyond moments of sound-event harmony and attending to instances of noisy silence, this research contributes to scholarship developing sonic understandings of games that prioritize the range and diversity of perceptions and experiences of those who play them.
One of the most powerful and easily recognisable genre synecdoches is undoubtedly the rhythm section. Despite subtle timbral differences between a “dancey,” “rocky,” or “symphonic” beat, its strong connotative power allows it to import distinct genre-specific nuances into any stylistic context. This quality has made rhythm an essential tool in the “enrichment” or “augmentation” of symphonic sound, as often heard in film and video game soundtracks. By integrating beats from diverse stylistic backgrounds, composers introduce unexpected vibes into their music and, by doing so, they speak of their own cultural and industrial background. This paper identifies two primary values that drive this search for peculiar rhythmic stimuli in video game music: “coolness” and “opulence”. Seeking these ideals, composers craft music that sounds catchier – closer to popular traditions, thus creating a “middlebrow” product appealing to broad audiences – and/or denser in its arrangement – big, epic, “maximalist” and blatantly not cheap. My research explores how this compositional manifesto takes shape across different industrial and cultural contexts, comparing two main approaches. The first, shaped by the transmedia Hollywoodian industry and typical of Western productions, incorporates (mostly) discrete electronic beats into an overall cohesive orchestral sound. The second, emerging from the anime-manga media mix system and typical of Eastern productions, favours rhythm sections tied to rock music, in a sonic environment where stylistic contradictions remain largely unresolved. These different approaches to rhythm and beats reflect broader industrial and cultural paradigms (and hegemonies), ultimately contributing to a rich and diverse global soundscape in video game music.
Chair: TBC
16:30–17:00 Break
17:00–18:00 Keynote address by Tharcisio Vaz

Day 3: Saturday, July 12th

9:30–11:00 Session 8 – Playing in Time: Synchronizations and Asynchronization
Video games are distinguished by their non-linear interactive elements (Collins, 2008), yet most games utilise only very basic interactive audio systems. Interaction is frequently present only in small ways, with simple transitions or states controlling the music; however, deeper levels of musical interactivity can promote stronger levels of engagement through synchronicity (Kagan, 2020). Some games take elements of interactivity further and connect gameplay and music at a fundamental game design level. These mechanisms are often approached through overt musical connections, such as beat synchronisation locked to core gameplay elements – shooting and reloading in BPM: Bullets Per Minute (Awe Interactive, 2020) – or covert connections where the music functions either in a supporting or hidden fashion (Kagan, 2024; Stevens et al., 2015). When composing and designing for these integrated covert/overt game-music interactions, it is imperative to consider how they connect the gameplay and musical experience (Awe Interactive, 2022; Phillips, 2014). While overt musical systems are more common and easier for players to recognise, covert systems can support games in numerous ways. Through the development of my practice-based research game Elementalis (Kagan, 2023), I was able to explore how covert musical synchronicity can be integrated throughout the gameplay experience. Music was directly mapped to the gameplay experience, but in an almost inverted fashion to traditional rhythm games, which circumvented issues of playability (Chen & Lo, 2016). By breaking down the techniques that worked, and didn’t work, along with some discussion of middleware programs and how they can help bridge gaps in the design process, we can explore further the benefits of more hidden, yet intrinsic, compositional design decisions.
Whether pressing buttons in time with colorful bars in Guitar Hero (2006) or knowing when to jump over obstacles in Super Mario Bros. (1985), the ability to internalize and execute an action in time is a foundational gameplay mechanic. Prior research on timed mechanics has focused on their use in rhythm games and their relation to aural cues (see Austin, Kagan, and Lind). I argue that analysis of timed mechanics can be extended beyond rhythm games to examine how they encourage musical thought and rhythmic entrainment in players. In this paper, I explore how gameplay in Soulsborne games can be analyzed musically, using Dark Souls III (2016) as a case study. Unlike rhythm games, Soulsborne games encourage entrainment to unsounding rhythms, cueing player input visually. Adopting Anabel Maler’s definition of music as “culturally defined, intentionally organized movement,” I argue the act of play should be considered musical, especially as players become familiar with the “rhythm” of a fight or action. Drawing on research on rhythmic entrainment and perception by Martin Clayton and Mariusz Kozak, I suggest the gameplay in Soulsborne games can be analyzed rhythmically. Beyond my analysis, I further illustrate that players themselves conceive of their play as musical. Close readings of player discussions and gameplay show that players describe their play in musical terms, using metaphors of dance or rhythm to interpret their actions. These players, I argue, are engaging in a form of “musicking,” à la Christopher Small. I therefore read gameplay as an embodied, performed implementation of what Olivia Lucas calls “vernacular music theory.”
Fighting games and rhythm games are two popular genres of action games, games in which real-time input is tracked and assessed by the game system. While not all are programmed to a musical track, signal and singalong listening (Huron, 2002) are essential to how the player interacts with and understands the game world in both genres. This paper seeks to provide a technical discussion of collision detection (how virtual objects intersect, and thus, interact) alongside Lefebvre’s (2004) rhythmanalysis, which looks at how we use musical listening and rhythmic understanding in everyday life, to unify discussion of how players experience and interpret both rhythm (time) and collision detection (space) in their understanding of play.
The discussion will be investigated through case studies of several action games: Thumper (2016), Hi-Fi Rush (2023), and Street Fighter 6 (2023); a rhythm game, rhythm-based action game, and fighting game respectively. Together they represent a cross-section of action games and a variety of ways in which a key challenge of the game (timed button presses that rely on knowledge of precise collision detection events) is understood and expressed through rhythm, requiring the use of signal and singalong listening. Each has a diegesis through which rhythm and collision detection interact to make clear both the fictional world and underlying structure of each game, working with or subverting the role of audio as an interface into the game world. This paper will thus be useful for discussing how rhythm and collision operate in terms of game design, storytelling, and learning game goals.
Chair: TBC
11:00–11:30 Break
11:30–13:00 Session 9 – Timbres: Soggy Synths and 8-bit Bangers
Hirokazu Tanaka is seminal in the history of video game music and sound, having contributed to the earliest games published by Nintendo, such as Donkey Kong (1981) and Metroid (1986). Tanaka is particularly renowned for the latter soundtrack, which defied the lighthearted conventions of earlier soundtracks for the Nintendo Entertainment System (NES) and the Famicom Disk System (FDS). His artistic vision for Metroid is illuminated in a 2014 interview, where he described his aim to evoke a sense of “ultimate catharsis” by composing the soundtrack in a darker aesthetic, to brighten up at the very end. The catharsis is encapsulated in the game’s “Ending” track, where Tanaka employs a disco style.
Technical analyses of the noise channel reveal Tanaka’s meticulous effort to emulate three core drumkit instruments essential to a disco groove: the snare drum, kick drum, and hi-hat. This track is not an isolated example, however, and can be contextualized within Tanaka’s broader corpus of software published for the Family Computer (1984–86). Analyzing earlier soundtracks which employ disco, such as Balloon Fight (1985), Wrecking Crew, Stack-Up, and Gyromite (1985), reveals Tanaka’s gradual development in percussion orchestration for the genre.
Using technology-based methods from Caskel, Vollmer, and Wozonig (2023) reveals intricacies extending to Tanaka’s hexadecimal modifications. Frequency spectrum analysis aids in differentiating the timbres of the three instruments, while waveform analysis establishes the lengths of percussion sounds. The frequency settings and lengths are often identical across multiple software titles, further validating Tanaka’s gradual development in percussion orchestration. Cross-referencing between liner notes, historical advertisements, and interviews will confirm the technical analysis and highlight his contributions leading up to Metroid.
Koji Kondo’s ‘Dire, Dire Docks’ theme from Super Mario 64 (1996) is one of the most well-loved pieces of videogame music (Davis 2021). In addition to analyses of the reactivity, fluidity and soothing qualities of the original (Balmont 2023), YouTube overflows with countless cover versions and recreations. In this paper, I am particularly interested in the assumptions made about the composition and sound design of the original piece, the ways these notions are reflected in cover artists’ choices of instruments and patches, and even in the ways these aesthetic and technical decisions have been subsequently codified in the design and voicing of commercial instruments, that, in turn, are used to create more cover versions of ‘Dire, Dire Docks’… In particular, this paper seeks to explore the origin of the shimmering electric piano that is the signature sound of Kondo’s masterpiece.
‘Tell me that electric piano sound doesn’t just sound like water’ (Cornell 2024)
The general consensus is that the sound has the hallmarks of digital ‘Frequency Modulation’ (FM) synthesis (itself, a staple of computer and videogame music). However, with the Nintendo 64 console’s sound engine being based around samples rather than a built-in FM sound chip, this paper joins the search for the source of the much-loved watery keys.
Most commentators and cover artists unequivocally plump for the Yamaha DX7 as the source of Kondo’s samples (reverbmachine 2023). Certainly, while the DX7 had an extremely broad sound palette, its electric piano sounds quickly became its signature sound (and key selling point). In particular, Factory Preset 11 ‘E PIANO 1’, which sought to digitally recreate (and replace) the electromechanical Fender Rhodes piano, became a staple of genres like R&B and the power ballad, with The Economist reporting that 40% of country number ones and 60% of R&B number ones in 1986 featured the sound (B.R. 2020).
With Kondo being well-known for explicitly drawing on the production techniques, instruments and palettes of contemporary music production, it should hardly surprise us to find the glassy, crystalline sound of the best-selling DX7 taking centre stage. To compound the connection between Kondo’s underwater masterpiece and Yamaha’s best-selling keyboard, French instrument maker Arturia include a patch in their software recreation of the DX7 called ‘Dire Dreams’. This patch has also spawned numerous sound files and online cover versions of the theme using this authentic software recreation of the authentic hardware instrument.
Except that neither this nor the DX7 original is exactly the sound we hear in ‘Dire, Dire Docks’. Mario and Kondo’s electric piano is fuller, with a washy chorused sound creating swirling movement and life, so perhaps what we hear also benefits from the addition of some external processing to colour the DX7? Certainly, such processing was – and remains – extremely common and lent the extra body that might be seen to be missing in Yamaha’s attempt to recreate the richness of the original instrument.
However, the truth is that the ‘Dire, Dire Docks’ piano actually comes from a different source. As this paper will demonstrate, Kondo’s sound is, in fact, built from samples of a preset from a flagship synthesizer released by Japanese manufacturer Roland in 1993 and presciently named the Super JD. A decade after the Yamaha DX7 launched (and four years after it had been discontinued), Roland’s sound design unquestionably builds on the (in)famous ‘E PIANO 1’, but reimagines it with the trappings of 1990s sound design. Lush reverb and rich, swirling chorus combine with the underlying samples and digital filters to create the distinctively warm yet pristine and glassy tonality that Roland had perfected in its digital synthesisers since the late 1980s. Ultimately, this paper argues that, just as David Wise had done a few years earlier with his use of the Korg Wavestation synthesizer in the underwater ‘Aquatic Ambience’ of Donkey Kong Country for the SNES (Wise 2021), ‘Dire, Dire Docks’ sees Kondo taking advantage of the N64’s sample-based sound engine to draw on the instruments and studio techniques of the time and connect Nintendo’s videogame soundtracks to contemporary popular music aesthetics and production.
Perhaps the best-known and most-beloved theme from Donkey Kong Country (Nintendo, 1994) is “Aquatic Ambience,” composed by David Wise. In his article “Chill out with Donkey Kong Country,” Ignatiy Vishnevetsky describes “Aquatic Ambiance” as “a placid piece of music that uses a sophisticated palette of synthesized instruments and futuristic sound effects to create a mood of calm that’s very different from the sped-up themes usually associated with platform games.” With its undulating synths, rippling arpeggios, and copious reverb, “Aquatic Ambience” represents a “wet” aesthetic present in fashion, design, and various popular media released around the same time, such as the album cover for Nirvana’s Nevermind (DGC Records, 1991), the TV programme Liquid Television (MTV, 1991-1995), and the hairstyle known as “the wet look.” “Aquatic Ambience” reflects broader consumer aesthetic trends and styles such as Y2K Futurism, Memphis Design, and Frutiger Aero, and continues to impact more contemporary media; for example, the song was sampled by Childish Gambino (“Eat Your Vegetables,” 2018) and inspired several TikTok videos in 2023-24 in which creators discuss how the track ironically elicits both “nostalgic” and “creepy” vibes in present-day listeners.
In this paper, I investigate water-themed music from early-to-mid-1990s games such as Donkey Kong Country, Ecco the Dolphin (Sega, 1992), and other related titles, both as a reflection of the 1990s “wet” aesthetic at the time of their release and as an influence on the development of internet-based, mood-themed, retro-futurist microgenres of music and visual art such as chillwave, vaporwave, and seapunk, and on other media since the late 2000s and early 2010s.
Chair: TBC
13:00–15:00 Lunch break
15:00–16:30 Session 10 – Pitching and Glitching: Musical Mechanics
The Super Mario LEGO system offers a unique convergence of sonic play and musical meaning, combining the Lego Group’s “system of play” (Wolf 2014), sonic inclusion in toys (Dolphin 2014), and Nintendo Co., Ltd.’s history of musical play (Moseley 2014, 2016). This presentation demonstrates how the music and sounds of the Super Mario LEGO system inhabit multiple spheres of interactivity (Packwood 2020, Schefcik 2020). This multidimensional, modular approach to analyzing game sounds provides a framework for deriving exchanges of sonic meaning when these sounds migrate between virtual and actual environments (Smucker 2024). Central to this investigation are questions regarding whether the Super Mario LEGO system is a game or a toy, and how the meanings of these sounds shift through realizations in different media.
I use three primary axes of interactivity to address these questions: 1) interplay in a ludic-paidic spectrum (Caillois 2001, Kendrick 2011, Huizinga 2016); 2) distinction between virtual and actual environments (Galloway and Hambleton 2024); and 3) blurred lines between music and sound effects (Medina-Gray 2021). Collectively, these axes express shifting semiotic relationships between the “gamer/player” and the sound design of the Super Mario LEGO system. I show how this specific LEGO system can inhabit both spheres of games and toys by expanding Smucker’s “ludic-consumer exchange” of sonic value between virtual and actual environments. Through examples of “gameplay” and “play sessions,” I further show how initial associative meanings of these sounds may semiotically evolve (Hart 2021, Pozderac-Chenevey 2014), based on how users engage with the system.
With gambling prohibitions sweeping across the USA in the late nineteenth and early twentieth centuries, manufacturers of gambling machines turned to loopholes to stay in business: you were not gambling if your nickel also bought a small commodity each turn, like chewing gum, or so it was argued. Stretching this logic, some made gambling machines with music attachments, the “commodity” here being a short mechanical song (Collins 2016). Stretching it further still, from ca. 1925 the Mills Novelty Company offered a “Toy Race Course”—the “toy” designation not-so-subtly concealing that users would bet on the miniature horses traversing its track—which attached to player pianos and piggybacked on their internal mechanisms. With such an attachment, paying for the piano was simply a pretence for having a gamble.
Engaging with extensive period sources—patents, judicial opinions, advertisements, and a surviving example of a Mills Toy Race Course—I explore the strange, triangular tug-of-war between competing musical ontologies which emerges from this object. “Music” here is at once an abstract idea, leveraged to make gambling moral; a discrete commodity (Taylor 2007), as if it were gum; and a realized performance which must be decoupled from the gaming act for the legitimizing foil to be sustained.
No specialist music was composed for this gambling “instrument,” and multiple races could be initiated during one “performance”; there was no synchronisation beyond music and play commencing together. Music was therefore legally vital but sonically incidental—music was equipment (Kamp 2024). Given these generative ambiguities, the Toy Race Course piano invites reflection on music-as-sound, music-as-concept, and the mobilising of this division through music-as-play.
When players encounter imaginary worlds (paracosms) in video games, technological failures known as glitches can occur. From repetitive audio clipping to missing dialogue, glitch audio can happen whilst the visual world remains unbroken, or alongside fragmented visuals on-screen. Audiovisual glitching within video games has featured in several ludomusicological texts, whether in passing or in discussions of technological and compositional work (Farrell 2022; Collins 2013; Reid 2020; Délécraz 2023). These studies demonstrate how important and commonplace sonic glitching is to video game experiences. However, there has been minimal focus on glitch audio aesthetics and their potential to break gamers’ immersion in virtual open worlds. This paper examines the role glitch sound plays in player explorations and receptions of open-world gameplay. With consideration of glitch aesthetics (Cascone 2000; Sangild 2004; Russell 2020) and their relation to sound, I pose two questions about glitch game sound and virtual worlds. Can game sound represent communities, national identities, and historical events when immersion in the paracosm itself appears to have become fragmented?
Furthermore, can gamers engage in critical reflection on virtual soundworlds when their experience is disrupted by moments of audiovisual glitch? I approach these questions through my proposed framework of paracosmic multimedia, derived and adapted from the paracosm and worldplay theories of childhood imaginary play (Cohen and MacKeith 1991; Root-Bernstein 2014). This paper culminates in an examination of narrative-based and deliberately triggered glitch audio, drawing on the online speedrunning community and on self-analytical gameplay in Stray (2022).
    Chair: TBC
    16:30–17:00Break
17:00–18:30Session 11 – Styles, Fusions and Topics
    When turning on Mario Kart 8 Deluxe (2017) on the Nintendo Switch, players are met with a virtuosic slap bass line, a driving drum groove, and a flurry of distorted guitar lines—hallmarks of jazz fusion. Nintendo’s music blends diverse styles and traditions. In commenting on Nintendo’s ludomusical diversity, composers such as Kenta Nagata, Soyo Oka, and Hirokazu Tanaka have highlighted Japanese jazz fusion groups as a key influence on their work. Jazz fusion, a subgenre emerging in 1970s North America, combines electronic instrumentation and dance grooves with jazz improvisation, emphasizing the virtuosity of rhythm section musicians. While Weather Report and Return to Forever defined the genre in North America, Japanese bands such as Naniwa Express, Casiopea, and T-Square adapted and expanded its reach—some members even contributing to video game soundtracks. This paper situates Japanese jazz fusion as a key influence on rhythm section writing in the Mario Kart series. It examines the role of the rhythm section in shaping the franchise’s sound, focusing on Super Mario Kart (1992) and Mario Kart 8 (2014). This research contributes to discussions of transpacific musical exchange, highlighting how Japan’s integration of North American popular music into its own musical landscape is represented in video game music. Drawing from scholarship on video game sound technology (Newman 2021; Summers 2024; McAlpine 2019) and jazz in Japanese culture (Atkins 2001; Pronko 2018, 2021; Wright 2020; Bridges 2017; Asaba 2025), this study explores jazz fusion’s role in shaping Nintendo’s rhythm sections, the driving force behind Mario Kart’s frantic, upbeat underscoring.
    Video games possess the distinct property of formal malleability; each person’s experience of a game—and therefore, their interactions with its music—is unique. This effect is amplified when the game itself is nonlinear; a track’s topical associations may change entirely. Studies in ludic form typically highlight musical changes as a result of player action (Medina-Gray, 2019; Collins, 2008). Relatedly, studies in topic theory often identify topical musical associations as multipartite juxtapositions (Lavengood & Williams, 2023; Atkinson, 2019) of many unmarked phonemic musical elements that together form unique semiotic information (Johnson, 2017; Neumeyer, 2015). My study combines ludic and topical perspectives, showing that formal positioning is yet another such element.                
For instance, in Octopath Traveler II (2023), players may complete stories in any order. One city, New Delsta, evokes 1920s American city life within an otherwise pastoral fantasy setting through the jazzy style of a track titled “A Sensational City.” When Agnea, a young woman from a small village, visits New Delsta, the music communicates hope and opportunity. However, for Throné, an enslaved thief operating within that same city, the same music instead communicates corruption and classism. Hope, opportunity, corruption, and classism are all historical elements of 1920s American culture (Currell, 2009); however, which elements are musically foregrounded is determined by player input and the resulting altered story context. The track’s initially unmarked placement in the open form is yet another semiotic component, demonstrating that our reactions to music are dictated by extramusical time in addition to extramusical space.
In 2021, Melanie Fritsch noted a Nintendo-centric and Western-centric slant in Anglophone ludomusicological research, arguing that this has often resulted in the exclusion of Japanese perspectives and of the sociocultural background behind game music from Japan.
Inspired by the existing literature’s discussion of the “prog-rock” aesthetic of the Megadrive, I show how the technical affordances and sound capabilities of the Megadrive’s sound hardware facilitated complex baroque and related progressive-rock architectures in terms of rhythm, melody, and harmony. I connect these compositions to contemporary Japanese notions of Bach as the “father of music” (ongaku no chichi), at a time when the national music curriculum included Bach’s music as mandatory repertory for every child to “appreciate” (kanshō). I argue that the music of the Megadrive shows these composers to have made creative musical decisions, playing with baroque formal conventions to fit on-screen 16-bit game aesthetics.
This paper also aims to complement previous discussions of the use of Bach’s music and baroque atmospheres (or “vibes”) in the 8-bit music of Nintendo games, offering my own analysis of the music of the Sega Megadrive. I include insights from interviews and my own discussions (in Japanese) with composers Iwadare Noriyuki and Tamiya Junko. By analysing their compositions, among other musical examples, I hope to further previous work on gothic, sublime, and baroque musical techniques in game music beyond their well-documented connections to anglophone cinematic tropes.
    Chair: TBC