9:30–11:00 | Session 8 – Playing in Time: Synchronization and Asynchronization
Video games are distinguished by their non-linear interactive elements (Collins, 2008), yet most games utilise only very basic interactive audio systems. Interaction is frequently present only in small ways, with simple transitions or states controlling the music; however, deeper levels of musical interactivity can promote stronger engagement through synchronicity (Kagan, 2020). Some games take interactivity further and connect gameplay and music at a fundamental game design level. These mechanisms are often approached through overt musical connections, such as beat synchronisation locked to core gameplay elements (shooting and reloading in BPM: Bullets Per Minute (Awe Interactive, 2020)), or covert connections, where the music functions in a supporting or hidden fashion (Kagan, 2024; Stevens et al., 2015). When composing and designing for these integrated covert/overt game-music interactions, it is imperative to consider how they connect the gameplay and musical experience (Awe Interactive, 2022; Phillips, 2014). While overt musical systems are more common and easier for players to recognise, covert systems can support games in numerous ways. Through developing the practice-based research game Elementalis (Kagan, 2023), I was able to explore how covert musical synchronicity can be integrated throughout the gameplay experience. Music was mapped directly to the gameplay experience, but in an almost inverted fashion to traditional rhythm games, which circumvented issues of playability (Chen & Lo, 2016). By breaking down the techniques that worked and those that didn't, alongside a discussion of middleware programs and how they can help bridge gaps in the design process, we can further explore the benefits of hidden, yet intrinsic, compositional design decisions.
Whether pressing buttons in time with colorful bars in Guitar Hero (2006) or knowing when to jump over obstacles in Super Mario Bros. (1985), the ability to internalize and execute an action in time is a foundational gameplay mechanic. Prior research on timed mechanics has focused on their use in rhythm games and their relation to aural cues (see Austin, Kagan, and Lind). I argue that analysis of timed mechanics can be extended beyond rhythm games to examine how they encourage musical thought and rhythmic entrainment in players. In this paper, I explore how gameplay in Soulsborne games can be analyzed musically, using Dark Souls III (2016) as a case study. Unlike rhythm games, Soulsborne games encourage entrainment to unsounding rhythms, cueing player input visually. Adopting Anabel Maler’s definition of music as “culturally defined, intentionally organized movement,” I argue the act of play should be considered musical, especially as players become familiar with the “rhythm” of a fight or action. Drawing on research on rhythmic entrainment and perception by Martin Clayton and Mariusz Kozak, I suggest the gameplay in Soulsborne games can be analyzed rhythmically. Beyond my analysis, I further illustrate that players themselves conceive of their play as musical. Close readings of player discussions and gameplay show that players describe their play in musical terms, using metaphors of dance or rhythm to interpret their actions. These players, I argue, are engaging in a form of “musicking,” à la Christopher Small. I therefore read gameplay as an embodied, performed implementation of what Olivia Lucas calls “vernacular music theory.”

Fighting games and rhythm games are two popular genres of action games, games in which real-time input is tracked and assessed by the game system.
While not all are programmed to a musical track, signal and sing-along listening (Huron, 2002) are essential to how the player interacts with and understands the game world in both genres. This paper provides a technical discussion of collision detection (how virtual objects intersect, and thus interact) alongside Lefebvre’s (2004) rhythmanalysis, which examines how we use musical listening and rhythmic understanding in everyday life, to unify discussion of how players experience and interpret both rhythm (time) and collision detection (space) in their understanding of play. The discussion will be investigated through case studies of several action games: Thumper (2016), Hi-Fi Rush (2023), and Street Fighter 6 (2023); a rhythm game, a rhythm-based action game, and a fighting game, respectively. Together they represent a cross-section of action games and a variety of ways in which a key challenge of the game (timed button presses that rely on knowledge of precise collision detection events) is understood and expressed through rhythm, requiring the use of signal and sing-along listening. Each has a diegesis through which rhythm and collision detection interact to clarify both the fictional world and the underlying structure of each game, working with or subverting the role of audio as an interface into the game world. This paper will thus be useful for discussing how rhythm and collision operate in terms of game design, storytelling, and learning game goals. Chair: TBC |
11:00–11:30 | Break |
11:30–13:00 | Session 9 – Timbres: Soggy Synths and 8-bit Bangers
Hirokazu Tanaka is seminal in the history of video game music and sound, having contributed to some of the earliest games published by Nintendo, such as Donkey Kong (1981) and Metroid (1986). Tanaka is particularly renowned for the latter soundtrack, which defied the lighthearted conventions of earlier soundtracks for the Nintendo Entertainment System (NES) and the Famicom Disk System (FDS). His artistic vision for Metroid is illuminated in a 2014 interview, where he described his aim to evoke a sense of “ultimate catharsis” by composing the soundtrack in a darker aesthetic that brightens up at the very end. That catharsis is encapsulated in the game’s “Ending” track, where Tanaka employs a disco style. Technical analyses of the noise channel reveal Tanaka’s meticulous effort to emulate the three core drumkit instruments essential to a disco groove: the snare drum, kick drum, and hi-hat. This track is not an isolated example, however, and can be contextualized within Tanaka’s broader corpus of software published for the Family Computer (1984-86). Analyzing earlier soundtracks which employ disco, such as Balloon Fight (1985), Wrecking Crew, Stack-Up, and Gyromite (1985), reveals Tanaka’s gradual development in percussion orchestration for the genre. Using technology-based methods from Caskel, Vollmer, and Wozonig (2023) reveals intricacies reaching down to Tanaka’s hexadecimal modifications. Frequency spectrum analysis aids in differentiating the timbres of the three instruments, while waveform analysis establishes the lengths of the percussion sounds. The frequency settings and lengths are often identical across multiple software titles, further validating Tanaka’s gradual development in percussion orchestration. Cross-referencing liner notes, historical advertisements, and interviews confirms the technical analysis and highlights his contributions leading up to Metroid.
Koji Kondo’s ‘Dire, Dire Docks’ theme from Super Mario 64 (1996) is one of the most well-loved pieces of videogame music (Davis 2021). In addition to analyses of the reactivity, fluidity and soothing qualities of the original (Balmont 2023), YouTube overflows with countless cover versions and recreations. In this paper, I am particularly interested in the assumptions made about the composition and sound design of the original piece, the ways these notions are reflected in cover artists’ choices of instruments and patches, and even the ways these aesthetic and technical decisions have subsequently been codified in the design and voicing of commercial instruments that, in turn, are used to create yet more cover versions of ‘Dire, Dire Docks’. In particular, this paper seeks to explore the origin of the shimmering electric piano that is the signature sound of Kondo’s masterpiece: ‘Tell me that electric piano sound doesn’t just sound like water’ (Cornell 2024). The general consensus is that the sound has the hallmarks of digital ‘Frequency Modulation’ (FM) synthesis (itself a staple of computer and videogame music). However, with the Nintendo 64 console’s sound engine being based around samples rather than a built-in FM sound chip, this paper joins the search for the source of the much-loved watery keys. Most commentators and cover artists unequivocally plump for the Yamaha DX7 as the source of Kondo’s samples (reverbmachine 2023). Certainly, while the DX7 had an extremely broad sound palette, its electric piano sounds quickly became its signature (and key selling point). In particular, Factory Preset 11 ‘E PIANO 1’, which sought to digitally recreate (and replace) the electromechanical Fender Rhodes piano, became a staple of genres like R&B and the power ballad, with The Economist reporting that 40% of country number ones and 60% of R&B number ones in 1986 featured the sound (B.R. 2020).
With Kondo well-known for explicitly drawing on the production techniques, instruments and palettes of contemporary music production, it should hardly surprise us to find the glassy, crystalline sound of the best-selling DX7 taking centre stage. To compound the connection between Kondo’s underwater masterpiece and Yamaha’s best-selling keyboard, French instrument maker Arturia includes a patch called ‘Dire Dreams’ in its software recreation of the DX7. This patch has spawned numerous sound files and online cover versions of the theme using this authentic software recreation of the authentic hardware instrument. Except that neither this nor the DX7 original is exactly the sound we hear in ‘Dire, Dire Docks’. Mario and Kondo’s electric piano is fuller, with a washy, chorused sound creating swirling movement and life, so perhaps what we hear also benefits from the addition of some external processing to colour the DX7? Certainly, such processing was, and remains, extremely common, and lent the extra body that might be seen to be missing in Yamaha’s attempt to recreate the richness of the original instrument. However, the truth is that the ‘Dire, Dire Docks’ piano actually comes from a different source. As this paper will demonstrate, Kondo’s sound is, in fact, built from samples of a preset from a flagship synthesizer released by Japanese manufacturer Roland in 1993 and presciently named the Super JD. A decade after the Yamaha DX7 launched (and four years after it had been discontinued), Roland’s sound design unquestionably builds on the (in)famous ‘E PIANO 1’, but reimagines it with the trappings of 1990s sound design. Lush reverb and rich, swirling chorus combine with the underlying samples and digital filters to create the distinctively warm yet pristine and glassy tonality that Roland had perfected in its digital synthesisers since the late 1980s.
Ultimately, this paper argues that, just as David Wise had done a few years earlier with his use of the Korg Wavestation synthesizer in the underwater ‘Aquatic Ambience’ of Donkey Kong Country for the SNES (Wise 2021), ‘Dire, Dire Docks’ sees Kondo taking advantage of the N64’s sample-based sound engine to draw on the instruments and studio techniques of the time and connect Nintendo’s videogame soundtracks to contemporary popular music aesthetics and production.

Perhaps the best-known and most-beloved theme from Donkey Kong Country (Nintendo, 1994) is “Aquatic Ambience,” composed by David Wise. In his article “Chill out with Donkey Kong Country,” Ignatiy Vishnevetsky describes “Aquatic Ambiance” as “a placid piece of music that uses a sophisticated palette of synthesized instruments and futuristic sound effects to create a mood of calm that’s very different from the sped-up themes usually associated with platform games.” With its undulating synths, rippling arpeggios, and copious reverb, “Aquatic Ambience” represents a “wet” aesthetic present in fashion, design, and various popular media released around the same time, such as the album cover for Nirvana’s Nevermind (DGC Records, 1991), the TV programme Liquid Television (MTV, 1991-1995), and the hairstyle known as “the wet look.” “Aquatic Ambience” reflects broader consumer aesthetic trends and styles such as Y2K Futurism, Memphis Design, and Frutiger Aero, and continues to impact more contemporary media; for example, the song was sampled by Childish Gambino (“Eat Your Vegetables,” 2018) and inspired several TikTok videos in 2023-24 in which creators discuss how the track ironically elicits both “nostalgic” and “creepy” vibes in present-day listeners.
In this paper, I investigate water-themed music from early-to-mid-1990s games such as Donkey Kong Country, Ecco the Dolphin (Sega, 1992), and other related titles, both as a reflection of the 1990s “wet” aesthetic at the time of their release and as an influence on the development of internet-based, mood-themed, retro-futurist microgenres of music and visual art, such as chillwave, vaporwave, and seapunk, and other media since the late 2000s and early 2010s. Chair: TBC
|
13:00–15:00 | Lunch break |
15:00–16:30 | Session 10 – Pitching and Glitching: Musical Mechanics
The Super Mario LEGO system offers a unique convergence of sonic play and musical meaning, combining the Lego Group’s “system of play” (Wolf 2014), sonic inclusion in toys (Dolphin 2014), and Nintendo Co., Ltd.’s history of musical play (Moseley 2014, 2016). This presentation demonstrates how the music and sounds of the Super Mario LEGO system inhabit multiple spheres of interactivity (Packwood 2020, Schefcik 2020). This multidimensional, modular approach to analyzing game sounds provides a framework for deriving exchanges of sonic meaning when these sounds migrate between virtual and actual environments (Smucker 2024). Central to this investigation are questions of whether the Super Mario LEGO system is a game or a toy, and how the meanings of these sounds shift through realizations in different media. I use three primary axes of interactivity to address these questions: 1) interplay along a ludic-paidic spectrum (Caillois 2001, Kendrick 2011, Huizinga 2016); 2) the distinction between virtual and actual environments (Galloway and Hambleton 2024); and 3) the blurred lines between music and sound effects (Medina-Gray 2021). Collectively, these axes express shifting semiotic relationships between the “gamer/player” and the sound design of the Super Mario LEGO system. I show how this specific LEGO system can inhabit both the sphere of games and that of toys by expanding Smucker’s “ludic-consumer exchange” of sonic value between virtual and actual environments. Through examples of “gameplay” and “play sessions,” I further show how the initial associative meanings of these sounds may semiotically evolve (Hart 2021, Pozderac-Chenevey 2014), depending on how users engage with the system.

With gambling prohibitions sweeping across the USA in the late nineteenth and early twentieth centuries, manufacturers of gambling machines turned to loopholes to stay in business: you were not gambling if your nickel also bought a small commodity each turn, like chewing gum, or so it was argued.
Stretching this logic, some made gambling machines with music attachments, the “commodity” here being a short mechanical song (Collins 2016). Stretching it further still, from ca. 1925 the Mills Novelty Company offered a “Toy Race Course” (the “toy” designation not-so-subtly concealing that users would bet on the miniature horses traversing its track), which attached to player pianos and piggybacked on their internal mechanisms. With such an attachment, paying for the piano was simply a pretence for having a gamble. Engaging with extensive period sources (patents, judicial opinions, advertisements, and a surviving example of a Mills Toy Race Course), I explore the strange, triangular tug-of-war between competing musical ontologies which emerges from this object. “Music” here is at once an abstract idea, leveraged to make gambling moral; a discrete commodity (Taylor 2007), as if it were gum; and a realized performance which must be decoupled from the gaming act for the legitimizing foil to be sustained. No specialist music was composed for this gambling “instrument,” and multiple races could be initiated during one “performance”; there was no synchronisation beyond music and play commencing together. Music was therefore legally vital but sonically incidental: music was equipment (Kamp 2024). Given these generative ambiguities, the Toy Race Course piano invites reflection on music-as-sound, music-as-concept, and the mobilising of this division through music-as-play.

When encountering imaginary worlds (paracosms) in video games, technological failures known as glitches can occur. From repetitive audio clipping to missing dialogue, glitch audio can occur whilst the visual world remains unbroken, or alongside fragmented visuals on-screen. Audiovisual glitching within video games has featured in several ludomusicological texts, in passing or in discussions of technological and compositional work (Farrell 2022; Collins 2013; Reid 2020; Délécraz 2023).
These studies demonstrate how commonplace and significant sonic glitching is within video game experiences. However, there has been minimal focus on glitch audio aesthetics and their potential to break gamers’ immersion in virtual open worlds. This paper examines the role glitch sound plays in player explorations and receptions of open-world video gameplay. With consideration of glitch aesthetics (Cascone 2000; Sangild 2004; Russell 2020) and their relation to sound, I pose two important questions about glitch game sound and virtual worlds. Can game sound represent communities, national identities and historical events when immersion in the paracosm itself appears to have become fragmented? Furthermore, can gamers engage in critical reflection on virtual soundworlds when their experience is disrupted by moments of audiovisual glitch? I approach these questions through my proposed framework of paracosmic multimedia, derived and adapted from the paracosm and worldplay theories of childhood imaginary play (Cohen and MacKeith 1991; Root-Bernstein 2014). This paper culminates in an examination of narrative-based and deliberately triggered glitch audio by the online speedrunning community and through self-analytical gameplay in Stray (2022). Chair: TBC |
16:30–17:00 | Break |
17:00–18:30 | Session 11 – Styles, Fusions and Topics
When turning on Mario Kart 8 Deluxe (2017) on the Nintendo Switch, players are met with a virtuosic slap bass line, a driving drum groove, and a flurry of distorted guitar lines—hallmarks of jazz fusion. Nintendo’s music blends diverse styles and traditions. In commenting on Nintendo’s ludomusical diversity, composers such as Kenta Nagata, Soyo Oka, and Hirokazu Tanaka have highlighted Japanese jazz fusion groups as a key influence on their work. Jazz fusion, a subgenre emerging in 1970s North America, combines electronic instrumentation and dance grooves with jazz improvisation, emphasizing the virtuosity of rhythm section musicians. While Weather Report and Return to Forever defined the genre in North America, Japanese bands such as Naniwa Express, Casiopea, and T-Square adapted and expanded its reach—some members even contributing to video game soundtracks. This paper situates Japanese jazz fusion as a key influence on rhythm section writing in the Mario Kart series. It examines the role of the rhythm section in shaping the franchise’s sound, focusing on Super Mario Kart (1992) and Mario Kart 8 (2014). This research contributes to discussions of transpacific musical exchange, highlighting how Japan’s integration of North American popular music into its own musical landscape is represented in video game music. Drawing from scholarship on video game sound technology (Newman 2021; Summers 2024; McAlpine 2019) and jazz in Japanese culture (Atkins 2001; Pronko 2018, 2021; Wright 2020; Bridges 2017; Asaba 2025), this study explores jazz fusion’s role in shaping Nintendo’s rhythm sections, the driving force behind Mario Kart’s frantic, upbeat underscoring.

Video games possess the distinct property of formal malleability; each person’s experience of a game—and therefore, their interactions with its music—is unique. This effect is amplified when the game itself is nonlinear; a track’s topical associations may change entirely.
Studies in ludic form typically highlight musical changes as a result of player action (Medina-Gray, 2019; Collins, 2008). Relatedly, studies in topic theory often identify topical musical associations as multipartite juxtapositions (Lavengood & Williams, 2023; Atkinson, 2019) of many unmarked phonemic musical elements that together form unique semiotic information (Johnson, 2017; Neumeyer, 2015). My study combines ludic and topical perspectives, showing that formal positioning is yet another such element. For instance, in Octopath Traveler II (2023), players may complete stories in any order. One city, New Delsta, evokes 1920s American city life in an otherwise pastoral fantasy setting with a jazzy style employed in a track titled “A Sensational City.” When Agnea, a young woman from a small village, visits New Delsta, the music communicates hope and opportunity. However, for Throné, an enslaved thief operating within that same city, the same music instead communicates corruption and classism. Hope, opportunity, corruption, and classism are all historical elements of 1920s American culture (Currell, 2009); however, which elements are musically foregrounded is determined by player input and the resulting altered story context. The track’s initially unmarked placement in the open form is yet another semiotic component, demonstrating that our reactions to music are dictated by extramusical time in addition to extramusical space.

In 2021, Melanie Fritsch noted a Nintendo-centric and Western-centric slant in Anglophone ludomusicological research. She argues that this has often resulted in the exclusion of Japanese perspectives and of the sociocultural background behind game music coming from Japan.
Inspired by the previous literature’s discussion of the “prog-rock” aesthetic of the Megadrive, I show that the technical affordances and sound capabilities of the Megadrive’s sound hardware facilitated complex baroque and related progressive rock architectures in terms of rhythm, melody, and harmony. I connect these compositions to contemporary Japanese notions of Bach as the “father of music” (ongaku no chichi) at a time when the national music curriculum included Bach’s music as mandatory repertory for every child to “appreciate” (kanshō). I argue that the music of the Megadrive shows these composers to have taken creative musical decisions, playing with baroque formal conventions to fit on-screen 16-bit game aesthetics. This paper also aims to complement previous discussions of the use of Bach’s music and baroque atmospheres (or “vibes”) in 8-bit Nintendo game music with my own analysis of the music of the Sega Megadrive. I include insights from interviews and my own discussions (in Japanese) with composers Iwadare Noriyuki and Tamiya Junko. By analysing their compositions, among other musical examples, I hope to further previous work on the gothic, sublime, and baroque musical techniques in game music beyond their well-documented connections to anglophone cinematic tropes. Chair: TBC
|