Ludo2021 Programme

The programme below is subject to change. Information on registration and how to join online can be found here. Attendance is free of charge.

Please note: All times in the schedule are in UTC!

Day 1: 23rd April

11:45–12:00 “Loading”
Welcome
12:00–13:30 Session 1 – Links to the Past: Histories and Remediation
My proposal for Ludo2021 is about one of game sound’s newest Others: neo-medieval game music covers. With the popularity of neo-medieval covers on YouTube – such as the bardcore genre – many musics get a make-over in these styles, including game music. Often influenced by styles such as folk rock, British folk rock and neofolk, many cover artists follow the popular strategy of combining medieval music with electronic music, in the vein of bands such as Corvus Corax. Sonemic, Inc. (2021) notes that neo-medieval music originated in Germany with the neo-medieval movement. As Tanatarova (2020) explains, the music represents nostalgia towards an older civilization: a mythology reminding people of a more peaceful, natural time.

According to scholars like Kreutziger-Herr (1998), neo-medieval music reduces complex ideas about/from the Middle Ages into a decorative function that incorporates the period into contemporary vocabulary. But I think there is more to these neo-medieval Game Musics than meets the ear. By creating their own versions of a piece of video game music, the musicians often also add a local flavour to the global musics, from the pentatonic accompaniments of Asian covers to the different tuning achieved by playing the games’ themes on the Indian bansuri. As these covers are often covered themselves, a whole new spectrum of “Droste”/“Spiegel im Spiegel” interactions across game-musical and cultural contexts arises. In my paper for Ludo2021, I would like to explore this phenomenon by combining musicological analyses with a fan studies approach, concluding with a neo-medieval game music cover of my own.
So-called “copyright strikes” over commercial music have long been a bane of many content creators on platforms such as YouTube, but have rarely been a problem for gaming content creators – until now. With more and more broadcasters on YouTube and live-streaming platform Twitch creating game content that doesn’t use in-game music as a background but instead uses music streaming services such as Spotify, this has suddenly become an issue in gaming content creation.
This paper examines how music copyright strikes against gaming videos – primarily on YouTube, but increasingly on live-streaming platform Twitch – are reshaping how we think about the soundscape of digital play and changing how gaming content is created. In the first case, my talk will examine how the soundscape of contemporary gaming has been altered by streaming and video content creation, in which creators often disregard packaged soundtracks and produce soundtracks of their own. How are games and play being altered by this change in audio preferences? How are content creators thus changing the kind of music we associate with games?
In the second case I will address the industry / corporate response to this (via copyright strikes) and the various responses game content creators have made to these challenges, such as deleting videos immediately after recording, talking so much that a copyright strike cannot be lodged, or returning to “intended” game music. These show a new impact that corporate actors are having on gamers and gaming, and point towards future research addressing the relationship between music and gaming industries.
Warhorse Studios’ 2018 action role-playing game Kingdom Come: Deliverance has been described as a ‘peasant simulator’. Its stated focus is on historical accuracy and realistic gameplay, often at the expense of other aspects such as accessibility for new players. The level of realism extends not only to the need to eat, drink, and sleep – but also to the need to learn how to read (vernacular and Latin separately) if the player wishes to glean any information from written texts.
The highly effective musical score of this game, written by Jan Valta and Adam Sporka, is interactive and adaptive, making use of a new engine, called the ‘sequence music engine’, developed specifically for the game. There is already a neat tension here between the stated aim of historical fidelity and the modern expectation of an adaptive score; no pre-existent ‘authentic’ music can be used in this manner. As with all games, and indeed all screen media, composers and studios must work hard to balance authenticity to the medium and the genre against authenticity to the period. We expect fairly ubiquitous non-diegetic music in screen media – and its absence carries very particular semantics – but 15th-century Bohemia was conspicuous in its absence of invisible, itinerant symphony orchestras roaming its landscape.
When writing about their approach to scoring, the composers have stated that the producer requested music in the style of ‘traditional’ film scoring, following composers such as John Williams – as well as noting an approach based on ‘Ravel-style voicing’. Clearly, then, authenticity to the medium was at the forefront of the composers’ and studios’ minds too. Interestingly, both composers and producers also spoke about influence from Czech New Wave cinema composers such as Zdeněk Liška, William Stromberg, and Luboš Fišer, as well as influence from other ‘Bohemian’ Art Music composers such as Bartók. This raises another kind of authenticity – national authenticity – drawing influence from composers writing in the countries that once were the Bohemian lands in which the game is set.
This paper focuses on the interplay between fantasy and authenticity, especially considering the ways in which the composers work creatively to give an immersive and coherent soundworld, within the limits of what I term an ‘aesthetic of authenticity’. It explores the creative interplay between the ‘real’ historical environment and that which is understood to be real in the popular consciousness, as well as the interconnected web of demands set by the medium, the interactive and narrative genres, and the historical setting.
Chair: Karen Cook
13:30–14:00 Break
14:00–15:00 Keynote Session – Hillegonda Rietveld
Professor of Sonic Culture at London South Bank University
“Digital Muse: Game Culture Enters the Dancefloor”
15:00–15:30 Break
15:30–17:00 Session 2 – Resources, Environments and Production
I’m catching bugs and digging up fossils in Animal Crossing: New Horizons and donating them to Blathers at the Natural History Museum on my recently colonized island, flying as the thunderbird reviving animals harmed by the extractive industries and sabotaging pipelines across the Albertan tar sands in Thunderbird Strike, and clearing rocks and grass for my expansive farm outside Pelican Town in Stardew Valley. Each of these entertaining casual games uses animation, simulation, sound, and music to engage with the actual world issues of settler-colonial capitalist resource extraction and labour. I ask: How is actual world resource extraction animated, scored, and represented through sound effects and design in games? The “sounds of extraction” and “extractive music” refer to music where compositional and listening practices ambiguously serve as an ecological remedy while also inflicting environmental harm. In these contexts, I’m evoking the removal of industrial contaminants embedded in the nonhuman environment by human industry, but I’m also referencing traumatic acts of natural resource removal by settler-colonial extraction industries. For example, this includes the sonic environments of animated “foreigners” discovering remote islands, settling them, and exploiting their natural resources in games with narratives focused on community settlement, agricultural development, or the energy and extractive industries. It also includes the extraction of sound from a site using field recording equipment and relocating it into the sound design of an animated environment. These are instances where animated representations of actual world environmental issues and human-nonhuman-natural resource relations/power dynamics are played out in interactive audiovisual environments.
Real-time strategy (RTS) games have layers to their gameplay and mechanics that encourage the player to focus on achieving victory by any means. Existing scholarship on the genre primarily explores topics such as AI programming, opponent behavior, and game theory (see Ontañón et al. 2007; Dereszynski et al. 2011; and Tavares et al. 2016). However, the socio-political ramifications of RTS, and particularly its colonialist mechanics and narratives, have been largely overlooked in game studies scholarship.
We focus on Pikmin 3 (2013), an RTS in which the player must acquire resources from the Pikmin’s planet for their starving homeworld. Due to its charming audiovisual aesthetics and user-friendly gameplay, the game’s colonialist undercurrents are easily overlooked. Our analysis centres on the game’s audio and the ways it transforms the Pikmin into the subaltern. Drawing on post-colonial theory, including Gayatri Chakravorty Spivak’s seminal work “Can the Subaltern Speak?”, we consider how the Pikmin are oppressed through limited forms of vocality which, combined with sedative music, numb the player into ignoring their colonialist enactments.
We explore the game’s soundtrack by rooting it in science fiction narratives and musical tropes that displace the colonial voice with a playful, exotic one. We also demonstrate how the player suppresses the Pikmin’s vocality through a weaponized colonial aurality, which hears the Pikmin as expendable resources. Ultimately, we trouble the charm and playful experiences of the Pikmin series to address its violence that is furthered with the use of audio to suppress the subaltern.
Considering the relevance of the interactive aspect of listening in the video game experience, we have researched the intersection between a) the music and sound production conditions, b) the characteristics of the musicians’ poetics, and c) the public’s reception of the music of a set of video games scored by Chilean producers and composers, which includes:
– Rock of Ages 2 (Ace Team, 2017), Patricio Meneses.
– Headsnatchers (Iguanabee, 2018), Ronny Antares.
– Jamestown: Legend of the Lost Colony (Final Form Games, 2011), Francisco Cerda.
– Defenders of Ekron (InVitro Games, 2017), René Romo.
– Omen of Sorrow (AOne, 2018), Francisco Cerda.
To this end, we have consolidated an analytical framework that includes methodologies and categories that allow us to approach the phenomenon from different perspectives. First of all, we carried out an analysis of the experience of listening to and playing these titles, in an exercise that draws on theoretical perspectives of Latin American aesthetics (Mandoki, 2006), through an autoethnographic (Ellis, 2019) process that included self-recording. A semiotic analysis was also carried out, applying the methodologies proposed by Philip Tagg (2013), and finally we conducted interviews with the composers, which were analyzed through the framework of music as a socio-affective communication matrix (Martínez, 2017).
In this presentation, we will give an account of the most relevant findings of this process, which include a) continuities and discontinuities in poetics, b) theoretical limitations of the models used and our strategies to overcome them, and c) a comment on the national scene and its dialogue with videogame production internationally.
Chair: Jennifer Smith
17:00–18:00 Break
18:00–19:30 Evening Session – Tuning into Chiptune
Composing a soundtrack for a game that fits retro specifications requires technical knowledge as well as compositional techniques linked to both counterpoint and popular music. I recently completed the score for Mystiqa: The Trials of Time, a roguelike dungeon crawler made to Game Boy specifications that will be released for the Nintendo Switch in 2021. There are two different versions of the game, each of which presented its own challenges and opportunities. I worked with its sole creator, Julian Creutz, through an 8-bit game jam. Creating a soundtrack within the limited specifications of the Game Boy was a challenge. For the game jam version, Tower of Time (2020), I relied heavily on three tracks – two square waves and a noise channel for percussive effects. This quickly became an exercise in two-voice counterpoint, and many of my compositions for this game are in neo-classical or progressive rock styles. For the full game, Julian proposed that I could alter the music slightly by adding effects or using more than the two square waves. While all but two cues utilize the three-track texture, I do add effects as the player progresses through the game, such as reverb and delay. The penultimate boss features a minor-mode arrangement of the “Queen of the Night” aria from Mozart’s The Magic Flute, but the other 22 cues are original. In this presentation, I describe the contrapuntal composition process and the retro technical aspects of creating the music for this game.
The migration of chiptune concerts to online spaces brought on by the COVID-19 pandemic has proven to be something of a blessing in disguise for underrepresented chiptune artists, as some organizers became more inclined to book diverse lineups, a consequence both of the rise of voices urging a change of practices in lineup curation and of the attenuation of risky expenses such as travel or venue costs. As a gateway to documenting chiptune with broader gender and cultural diversity, and building on my work at Ludo 2020, I would like to argue that framing chiptune as historical music within a defined period of existence, with a clear turning point in 2020, allows us to create a rupture between subject and object, facilitating a reframing of the study of the chiptune medium, including its shifting definition and denomination. Chiptune is dead, long live chiptune.
Reviewing past academic literature on chiptune, mostly centered on male protagonists, artefacts, and/or hardware, and comparing it with the recent production of vernacular discourse highlights the importance of documenting this historical paradigm shift regarding the music made with – or sounding like it was made with – videogame platforms. In this paper, I address the history of the discourse around chiptune as exactly that – history – and build on certain points of discussion around the medium which have been pushed to the forefront by the creation, or existence, of inclusive and diverse online spaces.
The soundscape of many early video game systems and computers was extremely limited. Some of these (Apple IIe, Sinclair ZX Spectrum, &c.) produced sound in a very primitive way: a single digital CPU pin wired directly to a speaker or audio jack. Traditionally, these “1-bit” audio systems (also sometimes called “beepers” or “PC beepers”) have used only a small palette of timbres: mainly square waves, pulse waves, and impulse trains. Accordingly, working with 1-bit sound poses technical, practical, and artistic challenges. Although video game audio systems long ago moved on to dedicated programmable sound generator chips and later to full PCM audio, the constraints and charm of 1-bit music are still explored today by composers including Tristan Perich, Shiru, utz, and Blake Troise (Protodome).
    My own contribution to this small corner of video game audio has been to design novel sound synthesis and audio effect algorithms, specially adapted to the 1-bit domain, to expand the range of timbres that can be created in this unique idiom. In this talk, I will present several advanced (by 1-bit standards!) synthesizers and audio effects that have been devised through my research and compositional practice, including mathematical analyses and sound demos. Special emphasis will be given to non-standard noise synthesis algorithms such as “generalized binary sequences” and “crushed velvet noise,” and resonance-based audio effects including “1-bit resonant filters,” a special form of hard-sync, and a variant that mimics vocal formant filtering.
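To make the constraint described above concrete, here is a minimal sketch of naive 1-bit pulse synthesis. It is not drawn from the presenter’s own algorithms; the function name, duty-cycle parameter and sample rate are illustrative assumptions. The point is simply that every output value is either 0 or 1, so pitch and timbre arise entirely from the timing of the pin flips.

```python
# Hypothetical sketch of naive 1-bit pulse-wave synthesis (illustrative only).
# Every output sample is 0 or 1; timbre comes purely from when the pin flips.

SAMPLE_RATE = 44100  # assumed rendering rate for this illustration


def one_bit_pulse(freq_hz, duration_s, duty=0.5):
    """Return a list of 0/1 samples approximating a pulse wave."""
    samples = []
    period = SAMPLE_RATE / freq_hz          # samples per cycle of the tone
    for n in range(int(duration_s * SAMPLE_RATE)):
        phase = (n % period) / period       # position within the cycle, 0..1
        samples.append(1 if phase < duty else 0)
    return samples


# A 440 Hz "beeper" tone with a narrow 25% duty cycle (a thinner timbre).
tone = one_bit_pulse(440, 0.5, duty=0.25)
```

Richer 1-bit techniques of the kind discussed in the talk work within exactly this limitation, shaping noise and resonance effects out of nothing more than the ordering of these binary flips.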
    Chair: George Reid
      – End of Day 1 –

      Day 2: 24th April

11:45–12:00 “Loading”
      Welcome
12:00–13:30 Session 3 – Dancing and Moshing: Performing in and with Games
Non-narrative video games – like those of the fighting, racing or platform genres – usually change the spatial setting of their levels in order to avoid visual monotony throughout the gameplay. In some cases, the selection of locations includes real places, as happens in OutRunners (Sega, 1992) or Super Pang (Capcom, 1990), which have levels set in various cities around the globe. These location changes come along with adaptations of the game music that try to illustrate the new atmospheres with different rhythms, melodies and musical clichés, in an effort to represent the identity of the place while keeping the general musical and sonic style of the game.
The present proposal will examine a selection of non-narrative video games with levels set in Spain. We will analyze the fragments that represent the country, applying a methodology that brings together musical and audiovisual languages and studies the relevant socio-cultural elements that permeate the music as clichés. This analysis will also test whether the musical aesthetic of the Spanish-set levels is based on the customary audiovisual commonplace: the extrapolation of traditional and folk Andalusian music and flamenco as a generalization of all Spain.
      Introduction
This project seeks to facilitate the process of preserving intangible cultural heritage through an enjoyable performance for youngsters and new generations. To pursue our plan, we benefited from a multidisciplinary approach that brings music composition, choreography, and computer gaming together.
The combination of choreography, sonification, and gaming in the aid of cultural heritage does not have a long history, but it has drawn rapid attention in the past decade, including work on (a) sonification [1–5], (b) gesture and choreography analysis [6–8], and (c) cultural heritage [9–11]. We tried to merge and benefit from the various disciplines mentioned above to achieve our goal.
      Materials and Methods
Our goal can be attained by using folk dance movement data in the creation of melodic and rhythmic patterns according to the culture’s music and its structural elements [12]. In this case, we have focused on the nature of Azerbaijani folk music and dance, particularly a folk dance called Tərəkəmə. We used MoCap data in this study [13].
We used a choreography of the Tərəkəmə dance, arranged by a master performer. In the first step, we decomposed the entire piece of dance to find how the performer sets up each body gesture and to figure out the pattern of her body movements based on the music. A set of movement categories was established, such as walking, circulating, on-place actions, single-hand motions, and compound moves. These patterns of body movements were then given as a list of game components from which players can choose to set a desired order of choreography. Other choices were offered to players, such as repeating patterns and the desired number of selections.
      Results and Conclusion
Finally, the selected order is performed by the avatar within the 3D environment, and a generated piece of music is audible synchronously. The generated music stays entirely within the frame of Azerbaijani music and its structural elements, such as motivic structure using modal scales, variations, and two core phrases [14]. The proposed method can both help the new generation learn folk dance moves and let them sense the melody and rhythm within the cultural frame. For future work, more features will be added to improve the game-based learning approach, and engagement will be measured.
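A minimal sketch of the kind of mapping described above, assuming (hypothetically) that each movement category is paired with a short pre-composed phrase and that the player’s chosen order simply concatenates those phrases; the phrase contents and function names are invented placeholders, not the authors’ implementation:

```python
# Hypothetical sketch: mapping a player's chosen sequence of movement
# patterns to pre-composed musical phrases (placeholder data, not the
# project's actual Azerbaijani material).

PHRASES = {
    "walking":     ["A4", "B4", "C5", "B4"],
    "circulating": ["D5", "C5", "B4", "A4"],
    "on_place":    ["A4", "A4", "E5", "D5"],
    "single_hand": ["C5", "D5", "E5", "C5"],
    "compound":    ["E5", "D5", "C5", "B4", "A4"],
}


def build_accompaniment(choreography, repeats=1):
    """Concatenate the phrase assigned to each selected move, in order."""
    sequence = []
    for move in choreography:
        sequence.extend(PHRASES[move] * repeats)
    return sequence


# The player picks an order of moves in the game; the avatar performs them
# while a note sequence like this is rendered synchronously.
print(build_accompaniment(["walking", "circulating", "compound"], repeats=2))
```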
Travis Scott’s Fortnite appearance on April 23rd, 2020, set a new standard for in-game concerts. More than 12 million players stopped their tasks to watch an animated avatar of the famous rapper perform a 10-minute live concert on a virtual beach, which is to date Fortnite’s biggest event. However, this experience was a costly one and could not have been organised by players themselves: a gamer’s role in an in-game concert in Fortnite is that of the audience. On the other hand, in Minecraft, a volunteer-run collective called OpenPit has been producing music festivals since 2018. New but quickly developing artists such as Charli XCX, A.G. Cook and 100 gecs are frequent attractions in their lineups.
This paper aims to raise questions about the social processes related to in-game concerts by popular music and independent artists inside the environment of the Massively Multiplayer Online Games (MMOs) Fortnite and Minecraft. Is it possible to identify virtual music scenes around in-game concerts in Fortnite and Minecraft? How is their social aspect articulated? How do the categories of race, class and gender influence the way these players interact with each other? Is there any particular music genre developing inside these music scenes? Can the concept of music scenes as defined by Will Straw (2004) be adapted to the online games’ environment? What are the main differences between musical events in Fortnite and Minecraft?
      Chair: Costantino Oliva
      13:30–14:00Break
14:00–15:00 Keynote Session – Markus Zierhofer
      Composer of The Wagadu Chronicles and founder of AudioCreatures
“Afrofantasy Game Music: Discoveries, possibilities and difficulties while defining the sonic world of the African MMO The Wagadu Chronicles”
      15:00–15:30Break
15:30–17:00 Session 4 – Starts, Stops and the In-Between
      Released in October 2020 by Green Tile Digital, Strobophagia is a first-person rave horror game that tasks its players with one objective: survive. Set in a dimly lit forest, players experience sound and music as orienting forces as they navigate between different dance floors with no visually discernible path to guide them. The dynamic relationship between the rave music of the dance floors and the sounds of the forest direct players as they move around the game space. By requiring players to navigate primarily by ear, Strobophagia’s soundscape uses sound effects as affects: I argue that the tension between sound and music invites players to feel aurally vulnerable.
      I divide my presentation into two sections. I first offer a phenomenology of gameplay that focuses on Strobophagia’s use of queer affects (e.g., rave iconography and androgynous NPCs) and transformational disorientations. I read players’ disorientations as a form of affectively charged queer time that facilitates musical disorientation (Halberstam 2005; Ahmed 2006). Then, I use a black box approach to approximate the transformational limits of the game’s musical and sonic system (Medina-Gray 2019). Through the black box approach’s emphasis on audition as a form of navigation, I show that the relationship between sound and music is one of the most compelling sources of horror in the game (Whittington 2014; Perron 2018). I conclude by reflecting on the broader implications of sonic and musical disorientation as a generative source of affect in gaming experiences.
Our world is informed by boundaries. But humans have also always tried to find ways to cross these boundaries. One of the most effective boundary-transgressors is music. This is no less the case with music in Digital Games. In this paper, I will argue that Game Music can be considered a boundary object. The term was first coined by Susan Leigh Star and James R. Griesemer in their 1989 article Institutional Ecology, ‘Translations’ and Boundary Objects. Christine Hanke first proposed applying the idea of boundary objects to Games in 2008 (ibid., 8), interpreting boundary objects simply as “objects that are boundaries, objects that are at boundaries, borderline things” (Bergermann and Hanke, 2017, p. 117). If we go one step further, not only can we consider Digital Games as a whole to be boundary objects but also one of their core components – Game Music.
It likewise transgresses boundaries as well as marking them, emphasizing compositional techniques and aesthetics that were less explored before (aleatoric composition, vertical orchestration etc.), while remediating (see Bolter and Grusin 1996) already ‘established’ conventions from earlier media forms (leitmotif technique, underscoring, several mood and ambient techniques etc.) to further weave a web of aesthetic and cultural knowledge of society. Throughout this paper, I will first explain what Star and Griesemer meant when they coined the term boundary object and how it is applicable to the objects of media science, i.e., Digital Games. I will then give two examples of why it might be beneficial to view Game Music as a boundary object: the first being a possible explanation for the complex relationship between sound effects and music; the second being its ability to work as a vanishing point in the transdisciplinary, multidivergent and often chaotic nature (see Juul 2005, n.p.) of Game Studies, marking but also transgressing boundaries at the same time.
      Much has been written about videogame ‘platforms’ (e.g. Montfort and Bogost’s influential volume (2009) and MIT series) with important work such as Altice’s (2015) drawing attention to the role of hardware and software in shaping distinctive sonic identity. Where this work tends to concentrate on the implementation of in-game sound, this paper seeks to move forward by rewinding in order to focus on perhaps the most iconic, identifiable and most oft-heard sound of a gaming platform – the system startup chime.
The particular focus here centres on the Sony PlayStation (1994) boot sound designed by Takafumi Fujisawa (Cork 2019a). The paper begins with an analysis of the design and function of the sound. This might be presumed to stream from the CD-ROM drive so typically understood as a defining feature of the PlayStation platform. However, the sound is actually the product of a highly complex, highly efficient combination of code and composition that is performed in real time using a custom sequencer and three extremely short samples stored in the PlayStation’s BIOS. In addition to providing the PlayStation with an immediately recognisable sonic fingerprint and acting as an anticipatory cue for the forthcoming gameplay, the sound also has important communicative and diagnostic functions that are signalled by the sequential playback of different audio elements (Cork 2019b). Just as crucial is the potentially agonising pause as the PlayStation performs disc region, readability and compatibility checks and exercises its inestimable power as the gatekeeper of gameplay.
      The paper concludes by exploring the ‘afterlife’ (Guins 2014) of the PlayStation startup sequence and how recent player/hacker practices have transformed it into an unexpectedly creative site of audiovisual expression and experimentation. With specific configurations of glitched startup sounds documented, codified and given hauntingly evocative names such as ‘Personified Fear’ and ‘Fearful Harmony’ (Llamas 2018) perhaps recalling the dreaded possibility of startup failure, these re/decompositions are the result of the injection of malformed data into the PlayStation BIOS and the deliberate and playful use of incompatible or damaged discs.
      Chair: Raymond Sookram
17:00–18:00 Break
18:00–19:30 Evening Session – Learning from Practice
1. Dragica Kahlina – Tutorial/Practice Session: Procedural Music with Unity and Csound
        2. Christof Ressi – Lecture Performance
        Chair: Elizabeth Hambleton
          – End of Day 2 –

Day 3: 25th April

11:45–12:00 “Loading”
          Welcome
12:00–13:30 Session 5 – Japan, Music and Culture
Kōichi Sugiyama (b. 1931), the composer of the Dragon Quest series, is something of an oddball among prominent game music composers. Unlike most composers who debuted in the 1980s, such as Nobuo Uematsu and Kōji Kondō, Sugiyama had already established his reputation as an influential composer and producer of popular songs and film music when he wrote his first game score at the age of 55. Apart from composing, Sugiyama has been active in producing live performances of game music; in fact, he organized and conducted the first orchestral game music concert at Suntory Hall in 1987. Although orchestral performances of game music have since become a global trend, the idea of arranging game music for a live orchestra was rather curious in the Japan of that time, where video games were regarded as a mere pastime for kids. What made Sugiyama come up with the idea of organizing an orchestral performance of game music, and how did this idea relate to Japanese music history at large? In this presentation, I will review Sugiyama’s musical career and argue that it was precisely his established status and previous activities that significantly influenced not only his musical work but also his decision to organize the first orchestral game music concert. By highlighting this aspect, I propose that, although Japanese game music historiography tends to discuss game music as a genre independent of the wider musical sphere, game music history should also be contextualized in the broader trajectory of modern Japanese music.
Among all new media, video games are the one in which Japan especially excels. If we listen to the musical aspects of these products, we can easily notice some consistencies, for instance their frequent eclecticism – horizontal (different styles in different moments of the soundtrack) and vertical (different style flags acting at the same moment of the soundtrack). Basing my argumentation on case studies taken from different video game series, I will try to offer an interpretation of the persistence of such a phenomenon, which is not as frequent in Western counterparts. Why Japan, and why video games? Is it something that we can find in Japanese anime and live-action movies, too? What role could Western instances of eclecticism have had on such an approach? And what about Japanese-based genres such as Visual Kei or J-pop, which were apparently often interested in eclecticism? What is the role of Japan’s postmodern culture in all of this, and what about the technologies involved in the creation of these soundtracks? Different paths can be taken to understand this phenomenon, situated as it is at the crossroads between local and global, but I especially aim at understanding why and how new technologies and media have in this case worked as agents of postmodernity, by fostering hybridization and eclecticism, and by spreading them to a very wide and popular audience that was previously most likely not much into similar kinds of music – and, possibly, making such an approach even more popular.
Touhou Project, a shoot-’em-up game series, is notorious for its musical fan arrangements. While the phenomenon of fan-made remixes is not novel, the scale of sustained fan engagement directed at a single-developer game series spanning almost two decades is remarkable and worthy of study.
By merging exhaustive findings on analogous fan behavior, game music theory, and dōjin cultural background, this essay aims to comprehensively enrich the available scholarship on how Touhou Project encourages the production of derivative musical arrangements.
The paper will investigate the motivations prevalent in existing Japanese Dōjin Soft culture (Kobayashi and Koyama, 2020), the prominence of Touhou music in-game (Phillips, 2014), and the common practices and processes of the Touhou Project fandom, by exploring past interviews with developers, previous research on the non-economic drivers present in dōjin game development (Hichibe and Tanaka, 2016), the politics of peer production (Galbraith and Karlin, 2016), the role of music in the shooting game genre (Newman, 2013), and media convergence in derivative fan works.
An anticipated problem of this investigation is the scarcity of scholarly resources on Touhou fandom and dōjin music production available in English, as well as the lack of complete data regarding derivative musical works from Touhou Project games. To mitigate these circumstances, the research will acknowledge other fields and aspects of dōjin production, including anecdotal evidence and fans’ documentation efforts from within the Touhou Project fandom.
          Chair: Andra Ivănescu
13:30–14:00 Break
14:00–15:00 Keynote Session – Poornima Seetharaman
          Carnatic music enthusiast, Director of Design at Zynga and Ambassador of Women in Games (WIGJ) 
          “‘Sangamam’ – A union of Carnatic music, South Indian culture, Video games and Emotions”
15:00–15:30 Break
15:30–17:30 Session 6 – Depicting Worlds and Cultures
The now-classic PlayStation 2 game Kingdom Hearts (2002) was the result of a synergetic collaboration between two media powerhouses: Walt Disney Studios and SquareSoft. In the game, characters from both franchises cohabitate the many in-game “worlds” players must save from evil. These worlds are largely built upon the settings of Disney movies (e.g., the “Halloween Town” world based on Disney’s The Nightmare Before Christmas (1993)), with Kingdom Hearts composer Yoko Shimomura oftentimes arranging the original music from these films to be incorporated into the game. Here, then, preexisting music literally contributes to the process of worldbuilding. In this paper, I draw on the Kingdom Hearts series (2002–present) to show how arrangements of preexisting music can be used as worldbuilding devices across and between franchises. I accomplish this by expanding upon James Buhler’s (2017; cf. Godsall 2019) notion of musically “branding” the franchise, considering the politics of what happens when two media franchises are merged. Drawing on the writings of Robert Hatten (1994, 2014) and David Neumeyer (2015), I analyze this dialogic relationship between preexisting musics (as well as newly composed music) through the lens(es) of musical markedness and troping, expanding these theories to the level of the franchise. I conclude the paper by considering how “Dearly Beloved” – Kingdom Hearts’ main theme – has similarly been arranged for the concert hall, thus bridging our “real” world with the virtual world(s) of the game series through an asymmetrical and marked process of remediation.
Denunciations of racism and whitewashing in the live-action adaptation of Avatar: The Last Airbender erupted over its all-White casting of protagonists in a fictional world saturated with Asian and Native American influences. But why should racial representation matter in fantasy worlds? The reason may be termed racialized fantasy – designing a fantasy world’s culture with traits associated with a particular real-world culture. After the 2020 death of George Floyd and the resulting worldwide cries for racial justice, examining racial representation in video game music is crucial. Super Mario Odyssey sparked controversy over its Mexico-themed Tostarena, widely criticized by Latinx communities. Though the case’s specifically musical considerations remain underexplored, detailed music-theoretical analysis yields fruitful results.
Producer Yoshiaki Koizumi describes Super Mario Odyssey’s central theme as ‘world travel,’ affording a tantalizing case study of musical globalism in a fantasy gameworld. I analyze two tracks from the Sand Kingdom’s Tostarena and two from Bowser’s Kingdom, respectively influenced by Mexican and Japanese music. One possible approach evaluates authenticity – fidelity to the original culture’s musical traditions. However, all four tracks exhibit both congruence with and divergence from tradition; additionally, discourse over authenticity ultimately contributes to dynamics of commodification, appropriation, and power. An alternative lens employs stereotype to identify problematic cultural representation, drawing on scholarship in media studies, exoticism, orientalism, and ‘world music.’ The critical distinction now becomes clearer: whereas the music of Bowser’s Kingdom moves beyond simple exoticism to a productive blend of Japanese and European styles, Tostarena’s score trades on stereotypical mariachi music as a marker of difference rather than making its own rhetorical argument. Music-semiotic analysis justifies critique of Tostarena’s soundtrack, articulating a heuristic for discerning problematic racial representation.
Due to its exclusively white cast, The Witcher III: Wild Hunt joined other video games in the ongoing controversy surrounding racial representation in gaming. Arguments in favor of greater racial diversity in games (Moosa) were met with predictable fearmongering about “racial quotas in art” (Chmielarz), creating two distinct sides to the debate. As is typical of gaming discourse in a post-GamerGate world, standard stratagems of coded white nationalist politics (Hartzell) informed the arguments against diversity. I seek to add further nuance to this debate by exploring the complexity of whiteness in The Witcher III’s setting and, in particular, its music. The Witcher franchise positions itself against mainstream fantasy fiction like The Lord of the Rings by presenting a particularly Slavic form of medievalism, with its creatures and curses based loosely on Slavic fairy tales. Jakub Szamałek, one of the lead writers for the game, even described their approach as being to show “fifty shades of white” (Messner), indexing both a multiplicitous whiteness and, tellingly, a gendered power dynamic. CD Projekt’s collaboration with the Polish folk band Percival, active participants in Polish-pagan Rodzimowierstwo culture, lends the game’s soundtrack a sense of alterity in relation to “typical” fantasy scoring through their emphasis on folk and pagan musical practices. While this fracturing of monolithic whiteness counters white nationalist claims of purity and unity in their vision of the European medieval past, unexpectedly using whiteness to destabilize white nationalism, it also blithely reinscribes West-European whiteness as the dominant default.
This presentation will explore how Pokémon Sun and Moon uses Hawaiian and Polynesian musical tropes and diegetic signifiers throughout the game, helping to ‘situate the player’ and enable them to ‘identify his or her whereabouts in the narrative and in the game’ (Collins, 2008, p.130). This identification relies on a combination of player cultural literacy (Hirsch 1988) and musical literacy (Levinson 1990) to contextualise the Pokémon region of Alola (Nintendo, 2016). The soundscape of the game is made up of the underscore, incorporating traditional instruments from the steel guitar to Ka’eke’eke drums, alongside diegetic sounds to evoke and situate gameplay in a culture and geography most likely foreign to the player. The player’s ability to contextualise and situate themselves in this region relies on a combination of their cultural and musical literacy.
          This investigation will also address the consumption of Hawaiian culture within Japan, and the portrayal of its traditional music and performance within a hugely commercialised Japanese Role Playing Game. The use of these musical tropes and diegetic signifiers simultaneously ground the player in the region of Alola, whilst constructing a sense of ‘Otherness’ (Kurokawa 2004) in a Hawaiian soundscape designed by Japanese composers.
The outcome of this is that the soundscape developed for Alola takes inspiration from traditional Hawaiian and Polynesian culture and music, but is ultimately a divergence from them, producing a sonic environment unique to the region. Consequently, a new literacy is built through a return to the sounds traditionally associated with the Pokémon game franchise, whilst drawing upon the newly added Hawaiian and Polynesian culturally significant music and diegetic sounds.
          Chair: Peter Smucker
17:30–18:30 Break
18:30–19:30 Evening Session – Annual Ludo Pub Quiz Extraordinaire
              – End of conference –