11:45–12:00 | “Loading” Welcome |
12:00–13:30 | Session 1 – Links to the Past: Histories and Remediation
My proposal for Ludo2021 is about one of game sound's newest Others: neo-medieval game music covers. With the popularity of neo-medieval covering on YouTube – such as the bardcore genre – much music, including game music, gets a make-over in these styles. Often influenced by styles such as folk rock, British folk rock, and neofolk, many cover artists follow the popular strategy of combining medieval music with electronic music, influenced by bands such as Corvus Corax. Sonemic, Inc. (2021) notes that neo-medieval music originated in Germany from the neo-medieval movement. As Tanatarova (2020) explains, the music represents nostalgia for an older civilization: a mythology that reminds people of a more peaceful, natural time.
According to scholars like Kreutziger-Herr (1998), neo-medieval music reduces complex ideas about/from the Middle Ages into a decorative function that incorporates the period into contemporary vocabulary. But I think there is more to these neo-medieval game music covers than meets the ear. By creating their own versions of a piece of video game music, the musicians often also add a local flavour to the global musics, from the pentatonic accompaniments of Asian covers to the different tuning achieved by playing the games’ themes on the Indian bansuri. As these covers are often covered themselves, a whole new spectrum of “Droste”/“Spiegel im Spiegel” interactions across game-musical and cultural contexts arises. In my paper for Ludo2021, I would like to explore this phenomenon by combining musicological analyses with a fan studies approach, concluding with a neo-medieval game music cover of my own.

So-called “copyright strikes” over commercial music have long been a bane of many content creators on platforms such as YouTube, but have rarely been a problem for gaming content creators – until now. With more and more broadcasters on YouTube and the live-streaming platform Twitch creating game content that does not use in-game music as a background but instead draws on music streaming services such as Spotify, this has suddenly become an issue in gaming content creation. This paper examines how music copyright strikes against gaming videos – primarily on YouTube, but increasingly on Twitch – are reshaping how we think about the soundscape of digital play and changing how gaming content is created. In the first case, my talk will examine how the soundscape of contemporary gaming has been altered by streaming and video content creation, in which creators often disregard packaged soundtracks and produce soundtracks of their own. How are games and play being altered by this change in audio preferences?
How are content creators thus changing the kind of music we associate with games? In the second case, I will address the industry and corporate response to this (via copyright strikes) and the various responses game content creators have made to these challenges, such as deleting videos immediately after recording, talking so much that a copyright strike cannot be lodged, or returning to “intended” game music. These show a new impact that corporate actors are having on gamers and gaming, and point towards future research addressing the relationship between the music and gaming industries.

Warhorse Studios’ 2018 action role-playing game Kingdom Come: Deliverance has been described as a ‘peasant simulator’. Its stated focus is on historical accuracy and realistic gameplay, often at the expense of other aspects such as accessibility for new players. The level of realism extends not only to the need to eat, drink, and sleep, but also to the need to learn how to read (vernacular and Latin separately) if the player wishes to glean any information from written texts. The highly effective musical score of this game, written by Jan Valta and Adam Sporka, is interactive and adaptive, making use of a new engine developed specifically for the game called the ‘sequence music engine’. There is already a neat tension here between the stated aim of historical fidelity and the modern expectation of an adaptive score; no pre-existent ‘authentic’ music can be used in this manner. As with all games, and indeed all screen media, composers and studios must work hard to balance authenticity to the medium and the genre against authenticity to the period. We expect fairly ubiquitous non-diegetic music in screen media – and its absence carries very particular semantics – but 15th-century Bohemia was conspicuous in its absence of invisible, itinerant symphony orchestras roaming its landscape.
When writing about their approach to scoring, the composers have stated that the producer requested music in the style of ‘traditional’ film scoring, following composers such as John Williams, as well as noting an approach based on ‘Ravel-style voicing’. Clearly, then, authenticity to the medium was at the forefront of the composers’ and studio’s minds too. Interestingly, both composers and producers also spoke about the influence of Czech New Wave cinema composers such as Zdeněk Liška, William Stromberg, and Luboš Fišer, as well as the influence of other ‘Bohemian’ art music composers such as Bartók. This raises another kind of authenticity – national authenticity – drawing influence from composers writing in the countries that were once the Bohemian lands in which the game is set. This paper focuses on the interplay between fantasy and authenticity, especially considering the ways in which the composers work creatively to give an immersive and coherent soundworld within the limits of what I term an ‘aesthetic of authenticity’. It explores the creative interplay between the ‘real’ historical environment and that which is understood to be real in the popular consciousness, as well as the interconnected web of demands set by the medium, the interactive and narrative genres, and the historical setting. Chair: Karen Cook |
13:30–14:00 | Break |
14:00–15:00 | Keynote Session – Hillegonda Rietveld Professor of Sonic Culture at London South Bank University “Digital Muse: Game Culture Enters the Dancefloor” |
15:00–15:30 | Break |
15:30–17:00 | Session 2 – Resources, Environments and Production
I’m catching bugs and digging up fossils in Animal Crossing: New Horizons and donating them to Blathers at the Natural History Museum on my recently colonized island, flying as the thunderbird reviving animals harmed by the extractive industries and sabotaging pipelines across the Albertan tar sands in Thunderbird Strike, and clearing rocks and grass for my expansive farm outside Pelican Town in Stardew Valley. Each of these entertaining casual games uses animation, simulation, sound, and music to engage with the actual-world issues of settler-colonial capitalist resource extraction and labour. I ask: how is actual-world resource extraction animated, scored, and represented through sound effects and design in games? The “sounds of extraction” and “extractive music” refer to music whose compositional and listening practices ambiguously serve as an ecological remedy while also inflicting environmental harm. In these contexts, I’m evoking the removal of industrial contaminants embedded in the nonhuman environment by human industry, but I’m also referencing traumatic acts of natural resource removal by settler-colonial extraction industries. For example, this includes the sonic environments of animated “foreigners” discovering remote islands, settling them, and exploiting their natural resources in games with narratives focused on community settlement, agricultural development, or the energy and extractive industries. It also includes the extraction of sound from a site using field recording equipment and its relocation into the sound design of an animated environment. These are instances where animated representations of actual-world environmental issues and human-nonhuman-natural resource relations and power dynamics are played out in interactive audiovisual environments.

Real-time strategy (RTS) games have layers to their gameplay and mechanics that encourage the player to focus on achieving victory by any means.
Existing scholarship on the genre primarily explores topics such as AI programming, opponent behavior, and game theory (see Ontañón et al. 2007; Dereszynski et al. 2011; and Tavares et al. 2016). However, the socio-political ramifications of RTS, and particularly its colonialist mechanics and narratives, have been largely overlooked in game studies scholarship. We focus on Pikmin 3 (2013), an RTS in which the player must acquire resources from the Pikmin’s planet for their starving homeworld. Due to its charming audiovisual aesthetics and user-friendly gameplay, the game’s colonialist undercurrents are easily overlooked. We focus on the game’s audio and the ways it transforms the Pikmin into the subaltern. Drawing on post-colonial theory, including Gayatri Chakravorty Spivak’s seminal work “Can the Subaltern Speak?”, we consider how the Pikmin are oppressed through limited forms of vocality which, combined with sedative music, numb the player into ignoring their colonialist enactments. We explore the game’s soundtrack by rooting it in science fiction narratives and musical tropes that displace the colonial voice with a playful, exotic one. We also demonstrate how the player suppresses the Pikmin’s vocality through a weaponized colonial aurality, which hears the Pikmin as expendable resources. Ultimately, we trouble the charm and playful experiences of the Pikmin series to address the violence that is furthered through the use of audio to suppress the subaltern.

Considering the relevance of the interactive aspect of listening in the video game experience, we have researched the intersection between a) the music and sound production conditions, b) the characteristics of the musicians’ poetics, and c) the public’s reception of the music from a set of video games scored by Chilean producers and composers, which includes:
– Rock of Ages 2 (ACE Team, 2017), Patricio Meneses.
– Headsnatchers (Iguanabee, 2018), Ronny Antares.
– Jamestown: Legend of the Lost Colony (Final Form Games, 2011), Francisco Cerda.
– Defenders of Ekron (InVitro Games, 2017), René Romo.
– Omen of Sorrow (AOne, 2018), Francisco Cerda.
To this end, we have consolidated an analytical framework that includes methodologies and categories allowing us to approach the phenomenon from different perspectives. First, we carried out an analysis of the experience of listening to and playing these titles, in an exercise that draws on theoretical perspectives of Latin American aesthetics (Mandoki, 2006) through an autoethnographic (Ellis, 2019) process that included self-recording. A semiotic analysis was also carried out, applying the methodologies proposed by Philip Tagg (2013); finally, interviews with the composers were conducted and analyzed through an understanding of music as a socio-affective communication matrix (Martínez, 2017). In this presentation, we will give an account of the most relevant findings of this process, which include a) continuities and discontinuities in poetics, b) theoretical limitations of the models used and our strategies to overcome them, and c) a comment on the national scene and its dialogue with videogame production internationally. Chair: Jennifer Smith
|
17:00–18:00 | Break |
18:00–19:30 | Evening Session – Tuning into Chiptune
Composing a soundtrack for a game that fits retro specifications requires both technical knowledge and compositional techniques linked to counterpoint and popular music. I recently completed the score for Mystiqa: The Trials of Time, a roguelike dungeon crawler made using the specifications of the Game Boy, to be released for the Nintendo Switch in 2021. There are two different versions of the game, each presenting its own challenges and opportunities. I worked with the sole creator, Julian Creutz, through an 8-bit game jam. Creating a soundtrack within the limited specifications of the Game Boy was a challenge. For the game jam version, Tower of Time (2020), I relied heavily on three tracks: two square waves and a noise channel for percussive effects. This quickly became an exercise in two-voice counterpoint, and many of my compositions for this game are in either neo-classical or progressive rock styles. For the full game, Julian proposed that I alter the music slightly by adding effects or using more than the two square waves. While all but two cues utilize the three-track texture, I do add effects as the player progresses through the game, such as reverb and delay. The penultimate boss features a minor-mode arrangement of the “Queen of the Night” aria from Mozart’s The Magic Flute, but the other 22 cues are original. In this presentation, I describe the contrapuntal composition process and the retro technical aspects of creating the music for this game.

The migration of chiptune concerts to online spaces brought on by the COVID-19 pandemic has proven to be something of a blessing in disguise for underrepresented chiptune artists, as some organizers became more inclined to book diverse lineups, a consequence both of rising voices urging a change in lineup-curation practices and of the attenuation of risky expenses like travel and venue costs.
As a gateway to documenting chiptune with broader gender and cultural diversity, building on my work at Ludo 2020, I would like to argue that framing chiptune as historical music within a defined period of existence, with a clear turning point in 2020, allows us to create a rupture between subject and object, facilitating a reframing of the study of the chiptune medium, including its shifting definition and denomination. Chiptune is dead; long live chiptune. Reviewing past academic literature on chiptune, mostly centered on male protagonists, artefacts, and/or hardware, and comparing it with recent vernacular discourse highlights the importance of documenting this historical paradigm shift regarding the music made with – or sounding like it was made with – videogame platforms. In this paper, I address the history of the discourse around chiptune as exactly that – history – and build on certain points of discussion around the medium which have been brought to the forefront by the creation, or existence, of inclusive and diverse online spaces.

The soundscape of many early video game systems and computers was extremely limited. Some of these (Apple IIe, Sinclair ZX Spectrum, &c.) produced sound in a very primitive way: a single digital CPU pin wired directly to a speaker or audio jack. Traditionally, these “1-bit” audio systems (also sometimes called “beepers” or “PC beepers”) have used only a small palette of timbres: mainly square waves, pulse waves, and impulse trains. Accordingly, working with 1-bit sound poses technical, practical, and artistic challenges. Although video game audio systems long ago moved on to dedicated programmable sound generator chips and later to full PCM audio, the constraints and charm of 1-bit music are still explored today by composers including Tristan Perich, Shiru, utz, and Blake Troise (Protodome).
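To give a concrete sense of what “1-bit” means here, the following is a minimal illustrative sketch (not taken from any of the talks): on beeper hardware, a timed loop flips a single output pin, and a square wave is simply the stream of 0/1 states that pin takes over time. The sample rate and function name are my own assumptions for the simulation.

```python
SAMPLE_RATE = 44100  # assumed rate for this software simulation


def one_bit_square(freq_hz, duration_s, sample_rate=SAMPLE_RATE):
    """Return a list of 0/1 samples approximating a square wave.

    The single output bit flips every half period, just as a beeper
    routine would toggle the speaker line from a timed CPU loop.
    """
    half_period = sample_rate / (2 * freq_hz)  # samples between toggles
    samples = []
    state, next_toggle = 0, half_period
    for i in range(int(duration_s * sample_rate)):
        if i >= next_toggle:
            state ^= 1  # flip the one available bit
            next_toggle += half_period
        samples.append(state)
    return samples


wave = one_bit_square(440, 0.01)  # 10 ms of an A4 square wave
```

Because every sample is either 0 or 1, timbre can only be shaped by *when* the toggles happen, which is why the palette is limited to square waves, pulse waves, and impulse trains unless more elaborate toggle-timing tricks are used.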
My own contribution to this small corner of video game audio has been to design novel sound synthesis and audio effect algorithms, specially adapted to the 1-bit domain, to expand the range of timbres that can be created in this unique idiom. In this talk, I will present several advanced (by 1-bit standards!) synthesizers and audio effects that have been devised through my research and compositional practice, including mathematical analyses and sound demos. Special emphasis will be given to non-standard noise synthesis algorithms such as “generalized binary sequences” and “crushed velvet noise,” and resonance-based audio effects including “1-bit resonant filters,” a special form of hard-sync, and a variant that mimics vocal formant filtering. Chair: George Reid
|
| – End of Day 1 – |