Ludo2023 Programme and Schedule

For more details and registration link, see the main conference page.

Day 1: Thursday, March 23rd

Welcome and Registration
9:30–11:00 Session 1 – Performing for Virtual Worlds
On June 18th 2022, I attended pop diva Charli XCX’s performance on the popular Massively Multiplayer Online Game (MMOG) Roblox. Charli’s Roblox appearance was one of several in-game concerts presented in the game since 2020; Lil Nas X, Lizzo, PinkPantheress, Zara Larsson and David Guetta, among others, have also explored the in-game concert experience. As a format, in-game concerts have received growing attention from audiences as well as from the games and music industries since the Covid-19 pandemic halted the traditional live music circuit, forcing musicians and industry professionals to turn towards the digital in the hope of connecting with their audiences at a time when physical proximity was not a possibility.
After in-person events returned, in-game concerts like Charli’s continue to be produced and experienced by players. This suggests that in-game concerts were not merely compensation for the lack of traditional live music, but a new format of entertainment aimed at the intersection of gamers and music fans. By embedding game mechanics within musical experiences, in-game concerts provide sonic fun while allowing closer engagement between fans and idols, even if through avatars.
For a pop diva, the embodied musical performance is one of the most important aspects of their spectacle (Soares, 2020). What, then, happens when the embodied musical performance of a pop diva is translated to the gaming environment, through the now-digital performance of an avatar? The present paper addresses possible answers to this question by drawing on Performance Studies, Game Studies, Ludomusicology and writings on virtuality. It is interested in the digital performance of the pop diva in in-game concerts, taking Charli XCX’s Roblox event as its example.
Twilight Imperium Fourth Edition is a critically acclaimed board game published by Fantasy Flight Games. This epic space opera features 24 factions vying to conquer the galaxy. Each faction offers not only a unique play style but also a brief description of its history and motivations for galaxy-wide domination. As is typical for this genre, and for science fiction more generally, the game’s worldbuilding balances familiar and strange elements. Accordingly, the factions include a diverse set of species: human-like aliens, enigmatic entities made of fire, life-threatening viruses, and monsters from other dimensions. A few factions also include music in their descriptions.
In this presentation, I examine how depictions of music are used to create strange and unfamiliar entities or, conversely, to render otherwise alien beings more akin to human subjects. Adopting Tim Summers’s (2013) Spectrum of Alterity, and further developing the notion into a multidimensional analytical tool with the semiotic square, I categorize the functions of music. The results are then discussed in the light of a posthumanist ontology of music.
The Pathless tells the story of the Hunter, the player character, on a mission to purify the corrupted spirits of an island and stop the antagonist, the Godslayer, from destroying the world. The setting of the game relies heavily on the virtual environment and the location of the player. Game composer Austin Wintory used music he recorded with the traditional Tuvan music group Alash Ensemble: he held a jam session with the ensemble and later orchestrated around the music he recorded with them. In the game, there are four bosses – the corrupted spirits – before the final boss, each of which Wintory represents with a different instrument. This instrument is tied to the spirit’s domain, so as the player traverses the map, the solo instrument carrying the melody of the environmental music changes with their location. As the player defeats each boss, that instrument and the music of that domain are removed from the game, leaving only sounds of nature. This removal can be read as associating the music with danger. Wintory has thereby participated – as Theodore Levin has explored – in a history of the West exporting Tuvan music for Western media, a history that has stereotyped Tuvans as barbaric and that can be read into some applications of Wintory’s music. Despite this complicated history, however, this paper argues that Wintory seeks to maintain the connection Tuvan music has to the natural and supernatural by emulating that relationship in the virtual environment of The Pathless.
Chair: Michael Austin
11:30–12:30 Session 2 – Non-Human Encounters
With this paper I draw a correlation between the use of sound in the 1996 video game Duke Nukem 3D, the philosopher Michel Foucault’s discourse on discipline and punishment, and Gilbert Simondon’s notion of individuation. I observe that sound in Duke Nukem acts, to paraphrase Foucault, as an operator of punishment. Foucault’s discourse on the various technologies of power resonates with Duke Nukem’s implied formula of jurisdiction, restraint, and control. Sound texturises the temporality of the game by punctuating, and very often anticipating, the presence of antagonist alien creatures: a constellation of different species emitting distinctive and peculiar sounds. The virtual space unfolds through a soundography (defined as the mapping of space through sound) of creatures to be met, fought and punished for having invaded planet Earth. Here a Simondonian principle of individuation is brought about by the specificity of the sounds uttered by the aliens encountered: univocal and repetitive sounds that define each extraterrestrial species – its individuation, its alterity – by juxtaposition with the voice and sounds of Duke Nukem, the only human protagonist of this classic first-person shooter.
Through an analysis of the sounds and music utilised in the video game, I maintain that the sonic environment of Duke Nukem 3D does not simply cooperate with the visual design and the overall gaming experience, but manifestly establishes principles of authority, discipline and individuation of what is alien to us – different, and therefore to be restrained, punished and/or rejected.
In my paper, I will focus on presenting research findings concerning non-human ludomusicological spaces in an analysis of In Other Waters, a minimalist narrative walking simulator created in 2020 by the one-person studio Jump Over the Age, run by Gareth Damian Martin. The game’s retro mechanics and aesthetics are combined with modern ideologies (or ways of thinking) and unconventional narrative strategies (Byrd 2020).
During gameplay, the player unfolds the plot through carefully crafted narration and learns about the consequences of failed attempts to manipulate the planet’s ecosystem. This process can be cathartic and thought-provoking. Beneath its simple yet engaging gameplay, In Other Waters thus smuggles in a weighty discourse on vital questions facing contemporary humanity. These problems, however, are not new; in fact, the author’s choice of a retro ’80s entourage may carry additional meaning.
Researching the humanistic and spatial aspects of the game, I will focus on the connections between the contemporary sensibilities present in the game and the preoccupations of ’80s new wave: apocalypse predictions (Janssen and Whitelock 2009), the search for identity (Cvejić 2009), and extra-terrestrial lifeforms (McLeod 2003). I will also show the clash between the terraforming urge of humans (Bratton 2019) and the mesmerising beauty of The Other, especially in terms of the spatial features and aquaticity of the music.
The game touches on, explores and problematises multiple topics: space exploration and exploitation; ecological catastrophe, terraforming and humans playing god; corporatism and financial primacy; the identities of humans, AI and aliens. My analytical methods will be interdisciplinary, with particular emphasis on existing musicological research on aliens and aquaticity by, among others, Stock (2021), Summers (2013) and Szulakowska-Kulawik (2008).
Chair: Raymond Sookram
13:30–15:00 Session 3 – Interactions with Alterity
This paper stems from a key section of my PhD thesis, which explores the concept of the “neo-silent” aesthetic in gaming – in other words, the selective omission (or muting) of diegetic sound and dialogue, and the foregrounding of non-diegetic music, in a similar vein to modern silent cinema. The paper considers the sonic lineage which links the development of game sound to the practices, limitations, and perceived universalism of silent cinema, and the engagement with this lineage by contemporary developers, sound designers, and composers. It also considers the particularities of avoiding diegetic sound in a gaming context, and the varying degrees of muteness adopted by practitioners, including the near-total diegetic suppression of games like Undertale and Fez (with continuous underscoring and quasi-musical approximations of speech), the entirely wordless (and textless) soundscapes of Limbo and Journey, the non-verbal (and non-human) “speech” of Stray and Untitled Goose Game, and the silent protagonists of Half-Life and The Stanley Parable (whose muteness exists, often inexplicably, in an otherwise “realistic” diegetic environment).
Throughout the paper, I will draw on the existing (somewhat disparate) scholarly literature on game silence to examine the prevalence, impact, and modalities of diegetic absence and wordlessness in gaming. I will also link this selective, self-constraining impulse to wider academic debates around meaningful absence, creative limitation, media obsolescence, and nostalgia. Overall, the interdisciplinary themes raised in this paper draw attention to an often-overlooked aspect of game sound, tying together several strands of existing research and contemplating games as an evocative counterfactual to the vococentricity of other audiovisual media. The paper also links to the ‘beyond human’ themes of Ludo2023 by exploring the idea that wordlessness in gaming extends beyond typical conceptions of human speech and offers valuable insights into paralinguistic communication.
How do we advocate for those who cannot speak? Can we sonically create meaningful “conversations” between the player and non-human entities? What techniques can we employ to promote deep listening as a gameplay mechanic? What does a future without humans sound like? This talk examines an approach to these questions through recent projects, focusing specifically on the experimental game installation TIKATMOS.
TIKATMOS is a deeply speculative game that explores gaps in conversation, sustainability, the future of humanity, and what it means to help. In this interactive installation, it is the distant future: humanity has wiped itself out, but every single entity on Earth has become sentient. You run the info booth at the only mall on Spaceship ATMOS, the ark that has left Earth carrying one of every single being in search of another place to call home. The installation has the player sit down at a physical workstation complete with a life-size “window” into the mall, a boutique other-worldly computer station, and a microphone through which to speak to the customers. Players tune into these unique languages, helping customers find the information they are looking for. While many voice-controlled games place the player in a role of commanding authority, TIKATMOS positions the player as a helper, trying their best to communicate with these NPCs despite the lack of a common language. TIKATMOS won the Live Action Game award at the 2022 IndieCade awards.
Through its narrative, Bloodborne (2015), in typically Lovecraftian fashion, thoroughly problematizes the “virtue of knowledge”: the otherworldly nightmare that has fallen upon Yharnam stems directly from the irresponsible application of eldritch knowledge by those who have been tempted by its power, and the noble figures who seek to rectify those mistakes inevitably succumb to the temptation of power as well. Bloodborne’s gameplay systems, by contrast, establish the protagonist as an incorruptible, infinitely persevering hero, with the “Insight” system explicitly quantifying and rewarding (never penalizing) their accumulation of eldritch knowledge. This paper attempts to situate Bloodborne’s score among these contradictory thematizations, ultimately constructing two analytic interpretations – one “allied” with the narrative’s thematization and another with the gameplay’s – which the player can hear simultaneously, thus reconciling the apparent dilemma.
I first consider the perceptual properties of Bloodborne’s score: I identify a musical topic of “innocence” and demonstrate how Bloodborne mutates and parodies this topic in its soundtrack (especially in “Lullaby for Mergo”) in order to show various narrative-musical parallels on the subject of innocence. I then examine the function of Bloodborne’s score in three important contexts: first, the “boss battle” context, where music sounds as an extension of a monster’s materiality (Kolassa 2020) and as a frame for a scene in which the player’s knowledge is particularly rewarded; next, the “Insight” context, where sound acts simultaneously as a herald of and a reward for the protagonist’s knowledge; finally, the “community” context, where sound acts as evidence in a collaborative effort to solve the opaque mystery that is Bloodborne’s story.
Chair: Karina Moritzen
15:30–16:30 Session 4 – Embodiments
Near the end of Fire Emblem: Three Houses, ostensibly a fantasy game, players enter the technological realm of Shambhala and are greeted with an intense dubstep track, a radical change from the primarily orchestral instrumentation throughout the rest of the game. The realm is populated with difficult robotic enemies called Titanus, and with Viskam, indestructible turrets that attack from out of range. Similarly, in Astral Chain, after defeating “Noah, Soul of Ambition,” underscored by epic orchestral music, players battle the incorporeal “Noah Core,” the game’s penultimate challenge, in a visually foreign, technological arena accompanied by an electronic track. Other notable examples, such as the guardians from Breath of the Wild (Bradford, 2020), fit this category.
These tracks accompany a technological foe and are preceded by music that is embodied (Cox, 2016) as “playable,” in that players recognize it as stemming from human action. The music of Shambhala and of Noah Core feels generated or synthesized – in a sense, less “human.” Beyond identifying this as a clear filmic trope, this paper expands Bradford’s (2020) ludic examination of the “antagonistic mode” of the “mechanistic topic.” Composers can discernibly change a game’s musical style to amplify a robotic encounter’s perceived difficulty through the use of relatively less human or more noise-like timbres (Wallmark, 2014). This encompasses a specific embodied form of musical mechanism I label the “abrupt mode,” which incorporates the ludic trope of suddenly-technological enemies as threatening encounters. I encourage discussion of topical, embodied musicality and the connection between difficulty and immersion in games.
Videogame music engages players, summoning us into the magical, virtual world it soundscapes, encouraging us to adhere to the ludic parameters at play. In this paper, I outline a new gestural analytical framework better suited to the playful audiovisual individualities of videogame design in order to reveal how players might become immersed in games.
I will present a new analytical theory, graphically mapping gestures to determine the ways in which videogame music can successfully engage players to feel part of the ludo-narrative journey, through a concept I term the ‘gestural potential’ of music. The paper presents a remapped recontextualisation of the musical gesture theories of Robert Hatten (2004; 2018), combined with a further fusion of scholarship synthesising concepts from film and media studies, dance pedagogy and art – fields that have, so far, been marked by their limited contact. By exploring ideas of design (Isbister, 2017), culture (Kassabian, 2013), and analysis (Summers, 2016; Middleton, 1993), we can identify how best to examine videogame music to reveal how players engage with games.
By analysing the intriguing, juxtapositional ludomusical content of the videogames Super Mario World (1990) and Super Metroid (1994) side by side, this paper reveals how musical gestures can immerse players in disparate game worlds, leading to an audiovisual phenomenon I term ‘ludomusical cocooning.’ In a rapidly changing world, in which primarily audiovisual technologies of virtual entertainment and escape compete for our attention, this paper’s analysis of how that very attention can be grasped is a timely one.
Chair: Milly Gunn
19:00 Escape room game at ‘Escape’, 26-28 Morrison Street, Edinburgh.

Day 2: Friday, March 24th

9:30–11:00 Session 5 – Sounds of Fantasy
The Elder Scrolls V: Skyrim (Bethesda, 2011) remains a titan of video game history, regularly ranking in critics’ lists of the greatest games of all time. In a game full of enchanted swords and magical spells, any Skyrim player knows that the most powerful weapon in your character’s arsenal is the ancient incantation “Fus Ro Dah,” which sends enemies flying to their doom. In the lore of the Elder Scrolls, these abilities position the player character of Skyrim within the long history of mythological beast-heroes: mortals whose animal qualities lift them to superhuman status. This vocal power is rare in the wider genre of role-playing video games, giving Skyrim’s silent protagonist the ability to exercise it as a diegetic voice, literally speaking the language of the dragons – a power central both to the protagonist’s identity as the Dovahkiin and hero of Tamriel and to the ludic design of the game. Drawing on scholarship in both voice studies and ludomusicology, including Yvonne Stingel-Voigt’s work on the multiple functions of the voice in video games and Federica Buongiorno’s and Helena De Preester’s discussions of re-embodiment in the digital space, I argue that the Dragonborn, through the use of these vocalic-ludic commands, acts not only as an anchor between the worlds of animal and man, and mortal and divine, but also as a third link – between the physical and virtual worlds – through the game’s re-voicing of its silent protagonist.
Dante Alighieri’s Divina Commedia and the nine circles of hell, in which the damned are eternally doomed to hellish torments, have influenced generations of artists for centuries. Whether in visual arts, music or contemporary pop culture, the list of works in which the Divina Commedia has been processed or reinterpreted is long and constantly growing. It is therefore not surprising that quotes and references to Dante’s poetry can also be found repeatedly throughout the history of video games, when hell, demons or Satan himself are part of the game. One of the most famous and controversial adaptations in video games is Dante’s Inferno (2010).
Compared to other video games in which hell is represented by a soundtrack based on heavy metal, rock or EDM, audio director Paul Gorman wanted “… a score that leans more towards 20th century academic music than a Hollywood film score” for Dante’s Inferno (Mitchell 2014, p. 21). Compositions by Krzysztof Penderecki and György Ligeti were used for the temp-tracks and, compared to Dead Space, Gorman wanted a different form of aleatoric and melodic treatment. Together with the composer Garry Schyman, who had already worked with avant-garde techniques in his compositions for Bioshock, the idea of a musical duality was born. While the choir represents “sacred ethereality” and “after-worldliness”, the orchestra and percussion supply the action and chaos.
But what does hell sound like in Dante’s Inferno? Based on selected compositions such as “Dies Irae”, “Donasdogama Micma”, “Beatrice Taken”, “Babalon Ors” and the original scores provided by Garry Schyman, the paper aims to explore this question and show which techniques the composer used to translate the descent to hell into music. In what way does he adapt the anguished cries of the damned in his compositions? How does he stage the fight against Satan in his music? And finally, what information and interpretative possibilities can be derived from original scores and musical analysis in the research on video game music?
There are a great number of film-to-video game adaptations, ranging from the monumentally catastrophic E.T. (1982) to expansive, beautifully crafted open-world games depicting the Star Wars and Lord of the Rings universes. It is the latter that this paper will focus on. Both Star Wars and The Lord of the Rings, in book and film format, are rich, diverse fictional universes in which the inclusion of the non-human is essential for narrative purpose. As is often the case in any narrative, the non-human is frequently associated with danger, threat, unease or, at the very least, ‘the other’. However, whilst these associations will be mentioned for brief context, it is the non-human allies that provide arguably the most fruitful analytical discussion.
Musical exoticism and cultural appropriation are two ongoing sources of controversy in film scoring, with retrospective and contemporary film scores being subjected to higher levels of scrutiny than in the past. Video games are not immune to such discussions, and the case study in this paper – The Lord of the Rings Online (2007–present) – has many instances where ‘real world’ musical styles have been ‘othered’ for non-human species and locales in Tolkien’s fictional fantasy world.
We will question (for example) the appropriateness and effectiveness of composer Chance Thomas depicting the Elves of Rivendell and Lothlorien with ethereal ‘far eastern’, Asian-inspired harmonies, whilst the humans of Rohan receive a strait-laced Western, Nordic-imbued underscore (all of which was inspired by Howard Shore’s film scores of 2001–03). We will conclude by asking: is this the only way, or can gamer understanding of the non-human through music lose its culturally-loaded, ‘exotic’ baggage?
Chair: Dean Chalmers
11:30–12:00 Session 5.5
12:00–12:30 Community Engagement Discussion
13:30–15:00 Session 6 – Rise of the Robots?
In 1976, the term ‘fembot’ first appeared in the science-fiction vernacular in the television series The Bionic Woman, and was further popularised by the adventures of Austin Powers (1997) as a term describing feminine-presenting robots or mechanical beings. 47 years later, research and discourse surrounding fembots and feminine AI within our pockets and everyday life is increasingly common (Ivy, 2012; Sutton, 2020). Virtual assistants such as Siri, Cortana (whose namesake is an AI from the Halo franchise), and Alexa, alongside the voices of our SatNav devices and even the self-checkouts at local supermarkets, are vocalised by disembodied female voices. These help us navigate everyday life and provide advice and support, reproducing ‘stereotypical representations of gendered labour’ (Natale, 2020). Fembots, or ‘gynoids’ (Asimov, 1979), have maintained their popularity as character tropes within the genre of sci-fi games for decades, although little research has cross-examined the relationships between the fembots in our everyday life and those we encounter in our gaming lives.
This paper will highlight how the female voice is used in games to sonically portray either subservience or the murderous and sinister, yet often seductive, voice of a mechanical she-devil. It draws on research on speech within games, gender studies, and user interface design, alongside case studies from the franchises and games of Halo (343 Industries, 2001), Portal (Valve, 2007), System Shock (Nightdive Studios, 1994), Mass Effect (EA, 2007), and NUDE – Natural Ultimate Digital Experiment (Red Entertainment, 2003). From this methodology, a framework will be developed outlining the uses of vocal synthesis as a means to dehumanise the feminine voice and to accompany the descent into machinic madness of the once-subservient AI gynoids players are familiar with from everyday life.
System Shock 2 remains one of the most lauded and influential video games of all time. While its genre is generally characterised as horror and action role-play, I suggest that this is an insufficient description. Rather, the game is a tension between horror and ecstasy, agency and helplessness; these paradoxes are exemplified by the audio design of character dialogue. Our entanglement with the game operates in two directions: our “limit-experience” of the game world, as characterised by the multitudinous, medusoid sound of The Many and the cyborg connectivity of SHODAN’s emotionless affect; and our “limit-experience” as the player, our agency in building our character and our ability to give voice to these entities through our interaction with the UI. Yet the protagonist remains silent, and even as we project into the game world we are faced with certain expressive impossibilities. By contrasting the voicefulness of the non-player characters with the voicelessness of the player character, I identify the true “horror” of the game as lying in a tension between the desire to join the chorus of The Many and the fact that we cannot choose to do so. If The Many is comprised of multiple voices, then is our silence another expression of these voices or their antithesis? If The Many is a distinct, Gestalt voice, then is our external silence more closely related to SHODAN’s simultaneous cyborg interconnectedness and singular god-like megalomania? Does the audio design include our silence, or is the player outside of the game’s sound?
In August 2022, Gameloft and FreshPlanet issued a press release announcing that their music quiz game SongPop was now available for Amazon’s Alexa-enabled devices in the United States, Canada, and the United Kingdom. SongPop was initially released as a social game for smartphones and Facebook in 2012 and shares mechanics with “Name That Tune,” a radio and television programme from the 1950s. By 2015, the game had been downloaded 100 million times and continues to maintain its popularity via newer iterations and a growing number of playlists and platforms on which the game can be played.
Because a variety of “smart home” devices are Alexa-enabled, as are many headphones, speakers, watches, phones, and even cars, the press release boasts that “this means players are never far from some music trivia!” While this may be a selling point for some, it is disconcerting to others. Aside from long-standing concerns about privacy and corporate surveillance of users of virtual assistants (such as Alexa and Siri), SongPop itself has faced recent criticism over the possible use of AI opponents (or bots) that encourage players to make in-game purchases in order to increase their chances of winning. In addition to addressing these issues, this paper will discuss how this voice-forward game raises further questions of accessibility as players negotiate the positive and negative aspects of this gamified sonic technology.
Chair: Ben Major
15:30–16:30 Session 7 – Displaced and Recontextualized Voices
By studying anime voice actors and actresses (“seiyuu” in Japanese) as a Japanese cultural export from the 1990s to the present day, I discuss the negotiation of transnational and national desires for a Cool Japaneseness through a “transmedia listening” of “monstrous seiyuu.”
A response towards the “Lost Decades” of Japanese economic stagnation (1990s–present) was the policy of “Cool Japan”, which aimed to restore national pride among Japanese citizens and promote it to a global audience. This policy led to the seiyuu’s “maturation period” (Yamasaki 2014), which featured the commodification of the seiyuu’s body along with their corresponding anime characters. Listening to seiyuu, therefore, includes the multimedia consumption of voice, body, and affect, which I call “transmedia listening,” a term derived from “transmedia storytelling” (Jenkins 2006). The international scale of “Cool Japan” creates this transmedia encounter of an imagined Other, culturally constructed through imaginal bodies.
Through virtual ethnography of Chinese- and English-speaking fan communities, I show how seiyuu are imagined as superior voice actors and actresses whose voice-acting skills break bodily limitations. Fans attribute these abilities to a perceived Japaneseness, which is the basis on which they often describe seiyuu as monsters. This perception constructs seiyuu as culturally “authentic,” emphasizing their affective and diverse vocal ability compared to voice actors of other countries. I argue that the myth of the seiyuu’s Japaneseness positions them as essential to transmedia listening, as seiyuu are considered the only ones qualified to connect their own bodies to anime characters through voice.
In Remedy Entertainment’s Control (2019), the player joins the protagonist Jesse Faden as she is faced with a corrupting and malevolent enemy: the Hiss. As a form of parasite, this entity spreads by invading human hosts and seizing control of their bodies from within. Strikingly, victims possessed by the Hiss can be recognized thanks to a poem, which all hosts recite, their distorted voices composing a haunting litany for the player to listen to.
This proposal is a case study of Control’s Hiss, understood here as a vocal apparatus central to the game’s sound design. We will demonstrate how this antagonist and its behaviour within Control’s acoustic ecology set up a rhetoric of bodies, in which the litany is at once a signifier of potential danger, a narrative element, and a catalyst for player immersion. We will mainly analyse two game sequences: first, the cutscene of Jesse’s first encounter with the Hiss; then, a documented series of interactions with possessed victims.
    To study these sequences, we will focus on vocality (Bossis, 2005) as a critical component of Control’s rhetoric of bodies. This will allow us to draw on voice studies, and especially on Steven Connor’s cultural history of ventriloquism (2000). Listening to the Hiss as a ventriloquist without a body but with a multiplicity of dummies, we will demonstrate how the vocal aesthetics of Control rely on a poetics of possession (Connor, 2000; Dolar, 2006) to subvert the usual processes through which video game characters are granted voices and interact with players through them.
    Chair: Jennifer Smith
      19:00 Evening: Mortal Kocktail, Edinburgh EH1

      Day 3: Saturday, March 25th

      9:30–11:00Session 8Prickly Questions of Sonic Experiences
      This paper explores two different versions of Sega’s flagship Sonic the Hedgehog platform game. Both versions were official Sega releases, both were released in 1991, both were distributed on cartridge and designed to run on the Sega Mega Drive console platform and both versions of the game cartridge contain exactly the same code. Both titles were advertised and marketed on the basis of the almost unbelievable speed of their gameplay. ‘Once you’ve played Sonic the Hedgehog, everything else seems a bit… slow’ ran Sega’s UK TV campaign. Yet, for all these similarities, one version of this flagship game ran 17.5% slower than the other.
      And, to be clear, this was not simply a change in the speed of the gameplay. It was not merely that Sonic ran slower; everything in the game ran slower, including the music. The same score, the same melodies, harmonies and drum patterns, were all performed on the same instruments, but with a nearly 20% difference in tempo. The surprising truth is that Sonic the Hedgehog ran faster in Japan than in Europe. As such, depending on where you were in the world, Sonic looked, played and, most noticeably, sounded different even though the same program was being executed. And it was not just Sonic the Hedgehog: throughout the 1990s and beyond, games as varied as Super Mario 64 and Tekken exhibited dramatic variations in performance and playback speed.
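      The frame-rate dependence described above can be sketched as a small calculation. This is an editorial illustration rather than part of the abstract: it assumes idealized 60 Hz (NTSC/Japan) and 50 Hz (PAL/Europe) refresh rates and a music driver ticked once per frame; the abstract’s measured 17.5% figure presumably reflects further hardware-clock differences.

```python
# Illustrative sketch: how a frame-locked soundtrack's tempo scales
# with display refresh rate. Figures are idealized assumptions.

def scaled_tempo(tempo_bpm: float, native_hz: float, actual_hz: float) -> float:
    """Tempo of a per-frame music driver when run at a different refresh rate."""
    return tempo_bpm * (actual_hz / native_hz)

ntsc, pal = 60.0, 50.0
slowdown = (1 - pal / ntsc) * 100  # ~16.7% slower on an idealized PAL display
print(f"PAL slowdown: {slowdown:.1f}%")
print(f"120 BPM tune on PAL: {scaled_tempo(120, ntsc, pal):.0f} BPM")  # 100 BPM
```

Under these idealized assumptions the slowdown is about 16.7%, close to (but not identical with) the abstract’s measured figure.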
      By exploring the inseparability of home videogame console platforms, domestic audiovisual display technologies, and international variations in television display standards and formats, this paper analyses the reasons for the existence of these multiple instances of ‘the same game’ (Giordano 2011) and, following Swalwell et al. (2017), asks what might constitute the ‘original’ game in such circumstances. To complicate matters further, the paper notes that, like many other publishers who have mined their back catalogues, Sega’s subsequent re-releases of Sonic the Hedgehog have left the European and Japanese versions unequally available to players in 2023. Through these commercial processes that effectively canonise one version of the game, not only are complex, local histories erased (see Wade and Webber 2016), but also the experiences of millions of players are selectively forgotten.
      It is known that people develop attachments to certain pieces of music with which they have had meaningful experiences. As many of today’s adults have grown up in a culture of playing digital games, games arguably constitute a viable musical resource to engage with both within and outside gameplay, and one to which people may develop music-based attachments they cherish in their lives. By focusing on fond memories of game music, the Game Music Everyday Memories (GAMEM) project investigates how game music becomes meaningfully embedded in the experiential, psychological and sociocultural processes of people’s lives. The present study provides an overview of the four branches of exploratory research conducted in the project. In particular, the focus here is on discovering the potential of each type of research as a different approach for gaining an understanding of the more general phenomenon of game music attachment. In our proposed account, attachment to a particular piece of game music is outlined (1) as a degree of emotional involvement in reminiscing related to the game music, (2) as cognitive-linguistic structures, incorporating the music and the self, disclosed in verbal reminiscing, (3) as personal motives (i.e., psychological functions) for coming back to the music, and (4) as varieties of situated gameplay experiences aesthetically entangled with the personally valued game music. Our hypothesis is that these four dimensions provide complementary views for utilizing interdisciplinary research in future studies on the personal meaningfulness of game music.
      Attending a concert in Animal Crossing: New Horizons to get a commemorative recording, picking up cassettes in Metal Gear, unlocking and purchasing songs in Hades, or finding all the hidden jukeboxes in Final Fantasy VII Remake are only some examples of how music can become a collectable and a reward in video games. From a game studies perspective, rewards have been defined as “a positive return that serves to reinforce player behavior within a videogame” (Philips et al., 2013). Thus, the pursuit of completing a collection is one of creators’ customary strategies for securing the player’s commitment to the game. Reward is a concept that consistently appears in academic articles, but – mainly due to its complexity and varied typologies – only a few scholars have attempted to create a general taxonomy of reward types in which music and sensorial elements are taken into consideration (Philips et al., 2013, 2015; Hallford & Hallford, 2001). Ludomusicology, on the other hand, has tended to consider music as a reward in itself (Summers, 2016: 58; Kamp, 2010: 138) or as a direct recompense for performing a specific action (Wood, 2009: 133; Summers, 2016: 166, 193). In order to bring together these two seemingly unconnected perspectives, in this communication I will apply the existing reward-system taxonomies to collectable music objects, demonstrating their restricted applicability, mainly due to the twofold nature of the analysed element: a collectable object in the game and an intangible product that fulfils functions of aesthetic enjoyment (Merriam, 1964: 223).
      Chair: James Ellis
      11:30–12:30Keynote: Yann van der Cruyssen, Composer and Sound Designer for games including Stray (2022), Game of Thrones (2012) and Test Drive Unlimited 2 (2011).
      13:30–15:00Session 9 – Expressive Technological Timbres
      Since the IBM 7094 surprised its audience in 1961 with the first computerized singing voice, many engineers have searched for technologies to (re)produce human speech with electronic aids. Because of the enormous cost and effort of digitizing sounds at the time, an alternative had to be found. For this, the insights of phonetics and techno-historical experiments with voders and vocoders were combined. From the mid-1970s, hardware and software solutions for computerized speech output came to market – a market that had just been conquered by microprocessors and home computers, which then got their own voice.
      Our talk will recapitulate this genealogy of digital-synthetic voice generation and its epistemological sources to show a specific application: first in hardware-based games (arcades and pinball machines), then in software-based games (home-computer games). The essential technologies (formant synthesis, pulse-width modulation and low-resolution sampling) will be explained and performed by the apparatuses and programs of that time.
      We will discuss how those games benefit aesthetically from speech synthesis, and how the somewhat creepy synthetic voices of Q*bert, S.A.M., and low-resolution sampling became more common through their application within games. To argue this, we will show how specific speech sounds enrich game soundscapes and thus the gaming experience. With the help of brought-in hardware and software speech synthesizers (Votrax SC-01, 1-bit voice sampling, S.A.M.), this will be exemplified audibly. For this, the technologies shall enter the stage and explain themselves as co-speakers in a lecture performance.
      Through decades of (sub)cultural appropriation, chiptune has come a long way from vitalising the pixelated geometries of in-game worlds. Yet the hypermediacies of its micro-audio technologies remain the consistently distinctive core of its ludomusicality (cf. Bolter and Grusin 2000, pp. 31-44; Hodkinson 2002, pp. 30-31). Chip-musicians unabashedly foreground anachronistic digital aesthetics; timbres are unmistakably synthetic, and obsolescence is celebrated beyond commercial abandonment. Consequently, questions regarding nostalgia’s role in chiptune’s longevity are common, often dramatically dividing opinion: phenomenologically, nostalgia is either relegated to gamers and demoscene veterans of a certain age or dismissed entirely as bearing no influence on chiptune’s progression as a scene (see Yabsley 2007, pp. 13, 27; Scheraga 2007; Carlsson 2010, pp. 11, 42-50; McAlpine 2018, pp. 256-7).
      Overlooked in this discourse are younger participants who did not live through late twentieth-century video game and demoscene culture, yet who cite nostalgia as integral to their love of chiptune. Far from nostalgia’s typically ascribed bittersweet pessimism, some have paradoxically highlighted the liminality and uncanniness of nostalgia itself as a means of identification and belonging – playing within the hazy interstices of cultural memory, time, and place. Why is this phenomenon occurring for these demographics? How do such experiences of nostalgia become an affirmative form of identification, and how do chip-musical agencies engender these responses? Through interdisciplinary critical theory and autoethnographic reflections on chiptune and queer (listening) subjectivity, my paper sheds light on these questions by exploring the non/human encounters of chip-musicking (cf. Small 1998, pp. 1-8). In doing so, it will not only illuminate the much-overlooked ‘drastic’ of chip-musical performativity (cf. Abbate 2004, pp. 505-36; Van Elferen 2020, pp. 103-5), but also show how nostalgia for fictional times and places becomes an affirmative source of ludomusical self-expression.
      For as long as video games have been presenting characters, game designers have been faced with the question of how to give those characters a voice. Voice acting a full game is a resource-intensive undertaking, both technologically (especially in early games) and financially – the matter of fair pay for video game voice actors remains an issue of great concern for the industry (Plant 2022). Moreover, full voice acting may not always be the best option for communicating the right kind of connection players should have with characters. Simlish is one example: the developers at Maxis deliberately chose to have their hapless characters speak in gibberish to make them seem relatable but not attached to any one locality, though with debatable success (Lam 2018; Stoeber 2020; Adams 2011).
      One way in which this dilemma has been tackled by designers is to indicate speech using synthesized “beeps”, synchronized with text appearing onscreen. By combining the visual and audio cue, the impression of spoken dialogue is created: the beeps semiotically and, critically, non-verbally indicate that speech is occurring, while the accompanying text box provides the verbal content of that speech. This approach can be seen in games including Earthbound (Ape & HAL Laboratory 1995), Undertale (Fox 2015) and Deltarune (Fox 2018), the Ace Attorney series (Capcom 2001-2021), and the Animal Crossing series (Nintendo 2001-2020). Stoeber has described this as “Beep Speech” (2020). In this paper, we will explore the many ways in which beep speech manifests and has been developed upon, and the ways in which game designers have implemented it to convey the emotion and personality of their characters, as well as to mediate the relationship between player and characters.
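      The text/beep synchronization described above can be sketched in a few lines. This is an editorial illustration, not drawn from any of the cited games: the function name, the characters-per-second parameter, and the rule of silencing whitespace and punctuation are all illustrative assumptions.

```python
# Hypothetical sketch of "beep speech" timing: text is revealed one
# character at a time, with a beep scheduled for each voiced character
# and silence for spaces and punctuation. All names and parameters
# here are illustrative assumptions.

import string

def beep_schedule(text: str, chars_per_second: float = 20.0):
    """Return (time_in_seconds, char, beep?) events for a text box."""
    interval = 1.0 / chars_per_second
    events = []
    for i, ch in enumerate(text):
        voiced = ch not in string.whitespace and ch not in string.punctuation
        events.append((round(i * interval, 3), ch, voiced))
    return events

events = beep_schedule("Hi there!", chars_per_second=10.0)
beeps = sum(1 for _, _, voiced in events if voiced)
print(beeps)  # 7 beeps: the space and the "!" stay silent
```

In an actual game engine, each voiced event would trigger a short synthesized tone (often pitch-shifted per character to suggest a speaking voice), which is one way the technique conveys character personality.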
      Chair: Lidia López Gómez
      15:30–16:30Session 10 – Perceiving Worlds
      Games disrupt our sense of being in the world, allowing players to become, play as, and communicate as things, characters, and nonhuman actors. By playing as the nonhuman animal in ecologically tenuous situations, play is paired with precarity. In Shelter (2013), for instance, you play as a mother badger in a constant state of risk, guiding her five cubs as they search for food and shelter while protecting them from immediate harm inflicted by floods, forest fires, and other such natural disasters. For example, the cubs are occasionally startled by unfamiliar noises, running away from the player/mother outside their radius of safety. As their mother you must listen for and anticipate these noises while also chasing after the cubs, gathering them together to protect them from potential danger. Shelter’s posthumanist sensory elements call into question the human desire to connect with, play as, sense through, and even control the nonhuman animal across digital spaces. And like Alexandra Daisy Ginsberg’s video installation The Substitute (2019), these digital objects interrogate humanity’s “preoccupation with creating new life forms, while neglecting existing ones” (Ginsberg 2019). Operating at the intersection of multispecies ethnography, digital culture, and interface studies, this presentation offers a posthumanist reading of multispecies sonic cultural phenomena in digital games. I argue that in-game multispecies listening, sounding, and playing along “as” and “with” the nonhuman avatar articulates the complexity of human-animal relationships, displaces the boundaries between human and other, and opens up ways of listening beyond the human to actual and virtual sensory ecologies.
      This paper explores the ontological horror of virtual rats, and the relationship of European epidemiological, historical and animal imaginaries in game soundscapes of the Black Death. Contributing to Green Game Studies’ recent turns to ecocriticism (Abraham 2018; Chang 2019; Redder & Schott 2022) and early music studies (Cook et al. 2018; Cook 2019; Meyer & Yri 2020), I take A Plague Tale: Innocence (2019) and A Plague Tale: Requiem (2022) as case studies to put pressure on director Choteau’s assertion that: “If we have no rats, we have no game” (2022: n.p.). Rendering thousands of rats audio-visual-haptically (Keogh 2018) through tense cello sautillé and the feedback of repeating sampled screeches, the games embody plague as a ‘fluid’. This homogenization of rats reinforces the medium’s problematic tendency to enumerate rather than individuate animals in games (Tyler 2022: 29-40), as well as contemporary and modern elisions of species as ‘rodents’ (McCormick 2003) and ‘vermin’ (Holmberg 2014).
      If the medieval often signifies the abject in relation to modernity, from which game music promises safe distance/‘authenticity’ (Cook et al. 2018), here the alternately messy/clean audio mix instead blends traces of folk and noisy feedback to conjure abject ‘vermin’ that both constitute and trouble relations (Cole 2016). I connect Serres’ work on the parasitic interdependence of signal and noise (2007), Connor’s thesis that animals mediate medieval and modern phenomenology (2006: 6) and theories of atmospherics (Böhme 2017; Griffero 2017) to explore how writhing swarms of rats play with the senses. I argue that their tense and disorienting affects demonstrate the power of game atmospheres to enact affective forms of Chang’s mesocosm – experimental spaces between lab and nature (2019) – which here horrifyingly confront us with our inability either to hear the animal or to escape it.
      Chair: Liam Clark