Ludo2018

Ludo2018, the Seventh European Conference on Video Game Music and Sound, will take place April 13th–15th at HMT Leipzig, Germany.

This seventh annual conference focuses on the theme of ‘Soundscapes and Interfaces’. The conference will feature keynote addresses by:

  • Michael Austin (Howard University), editor of Music Video Games (Bloomsbury, 2016)
  • Adele Cutting, BAFTA-winning audio professional whose credits include the Harry Potter and The Room franchises
  • Kristine Jørgensen (University of Bergen), author of Gameworld Interfaces (MIT, 2013)

The draft programme and abstracts are available below.

The conference is hosted by HMT Leipzig and the University of Leipzig. We are grateful to be further sponsored by Stiftung Digitale Spielekultur and EA Blog Digitale Spielkultur.

Registration

Registration is now closed.

Location & Travel

The conference will take place at the recently founded Zentrum für Musikwissenschaft, which is a collaboration of the Hochschule für Musik und Theater „Felix Mendelssohn Bartholdy“ (or HMT for short) and the University of Leipzig.

The conference venue is the HMT, within 15 minutes' walk of Leipzig Hauptbahnhof (main station) and located on the edge of the city centre. The address is:

Hochschule für Musik und Theater „Felix Mendelssohn Bartholdy“
Dittrichring 21
04109 Leipzig
(HMT Homepage)

The HMT is barrier-free and provides parking spaces for disabled visitors. Please let us know in advance if you need any assistance, so we can make sure a parking spot is available.

By airplane
You can fly directly to Leipzig/Halle Airport if you find a good connection from your home airport. A direct S-Bahn line runs from the airport to Leipzig Hauptbahnhof (ca. 15 minutes travel time).

Other options are flying in via Berlin (Tegel or Schönefeld airport) or Frankfurt International Airport.

  • Berlin Tegel has a direct bus connection to Berlin Hauptbahnhof (main station, bus TXL).
  • Berlin Schönefeld is connected to Berlin Südkreuz by bus, S-Bahn and local trains.

Both railway stations, Berlin Hauptbahnhof and Berlin Südkreuz, have direct high-speed train connections (ICE/IC) to Leipzig Hauptbahnhof. For information on travelling by train, see ‘By train’ below. Estimated travel time from the airports to Leipzig Hauptbahnhof is ca. 2 hours.

Frankfurt International Airport has its own train station and is connected to the high-speed rail network (ICE). There are some direct connections to Leipzig. For information on travelling by train, see ‘By train’ below. Estimated travel time from Frankfurt Airport railway station to Leipzig Hauptbahnhof is 3½–4 hours.

By train
Leipzig Hauptbahnhof (main station) is located on the northern edge of the city centre. Information about schedules, train tickets etc. is available on the Deutsche Bahn website.

Regarding train travel in Germany: when buying your ticket in advance via the Deutsche Bahn website, two fare types are usually offered:

  • The cheaper saver fares are tied to the specific train connection you book – so if you miss that train, the ticket is invalidated. You can also reserve a seat on ICE/IC trains for an extra 9€, but the same applies: if you miss your train, the reservation lapses as well.
  • The standard fares (Flexpreis) are usually more expensive, but allow you to take any connection – so if you miss your train, just jump on the next one.

For more information, consult the Deutsche Bahn website or the information desk at your departure station.

By car
Leipzig is connected to the Autobahnen A9 (Berlin – Munich), A14 (Wismar – Nossen) and A38 (Leipzig – Göttingen). Directions can be found on the City of Leipzig website.

Please note: as the HMT is located right next to the city centre, parking can be very difficult. The HMT itself only provides parking spaces for disabled visitors. Please let us know in advance if you need any assistance, so we can make sure a parking spot is available.

Accommodation

There are many hostels, hotels, apartments and bed and breakfasts available in the city centre, suiting all budgets and – like the entire city centre – within walking distance of the conference venue. Here are a few suggestions (please note: prices vary with occupancy, and breakfast is not always included in the basic price!):

Seaside Park Hotel Leipzig (4*)
http://www.parkhotelleipzig.de/en/
Standard Single Room with Private Bathroom from 90 €/ night

Meininger Hotel Leipzig Hauptbahnhof (3*)
https://www.meininger-hotels.com/de/hotels/leipzig/hotel-leipzig-hauptbahnhof/
Standard Single Room with Private Bathroom from 60 €/ night
Bed in small mixed dormitory (4 person) from 21 €/ night

IBIS Budget Leipzig City (1*)
http://www.ibis.com/united-kingdom/index.en.shtml
Standard Single Room with Private Bathroom from 53 €/ night

B&B Hotel Leipzig City (1*)
https://www.hotelbb.de/en/leipzig-city
Standard Single Room with Private Bathroom from 56 €/ night

5 Elements Hostel
https://5elementshostel.de/leipzig/en/
Bed in mixed dormitory (10 person) from 12,50 €/ night;
many other options available (4/6/8-bed dorms, 4-bed women's dorms, 4-bed private rooms, single rooms, private apartments and more)

Preliminary Programme

This draft programme is subject to change.

Day 1: 13th April 2018

09:00 – 09:30 Registration, Coffee & Welcome
09:30 – 11:00 Session 1 – Interfaces and Performance
“GAPPP: Gamified Audiovisual Performance and Performance Practice” is an arts-based research project based at the University of Music and Performing Arts Graz. It was conceived by composer, audiovisual artist and project leader Dr. Marko Ciciliani, who, together with his team – artistic researcher and performer Dr. Barbara Lüneburg and musicologist Andreas Pirchner – investigates the combination of game elements and performer interactions for their artistic potential in contemporary audiovisual artworks. This paper offers a perspective on performers’ agencies in (musical) meaning-making and in the creative and strategic shaping of the gamified audiovisual works of the artistic research project GAPPP (gappp.net). Barbara Lüneburg will introduce her model of ‘performative involvement’, which describes how the agencies afforded to the performer (through the introduction of game elements, software design and control devices for musical or visual interaction) in GAPPP’s works may influence the player’s range of expression, artistic and emotional involvement, and meaning-making during a live concert performance. She investigates how agencies and affordances translate firstly into game-related involvement, and can secondly be transformed into ‘performative involvement’ that ideally transfers to the audience. Her model bears in mind that in a performance situation the gamified interactive musical system concerns not only performer, instrument and composer: the spectator, too, is part of the performance ecosystem. Following Gurevich’s discussion of a performer’s ‘skill’, Lüneburg states that ‘meaning’ – like ‘skill’ – “emerges from a performance ecosystem that includes a performer, instrument, and spectator, all as active participants that also exist within a society and draw upon cultural knowledge” (Gurevich 2017). This investigation is based on three case studies from the artistic research project GAPPP.

When reading about the work of Japanese game designer Tetsuya Mizuguchi, the Russian artist and theorist Wassily Kandinsky and his idea of synaesthesia are usually mentioned as his main inspiration. But this is just one piece of the puzzle. In order to explore and extend his own overarching concept of what he calls “music interactives”, which includes games as well as his music project Genki Rockets, Mizuguchi has always used the latest gaming technology as his experimental playground. While pursuing his main goal of finding new kinds of interactive musical experience and new forms of musical expression, he has processed diverse musical cultures and aesthetics in his projects. Whereas his “Space Channel 5” series, for example, drew strongly on music video aesthetics as promoted by MTV and others, “Rez” and its successor “Child of Eden” were mainly influenced by European techno culture as encountered by Mizuguchi himself.
In this presentation the particular case of “Child of Eden”, the latest culmination of Mizuguchi’s design approach, will be carefully examined. It will be shown that the game offers a form of active critical involvement with techno by being closely designed around the underlying “social practices” (Stefani 1987, Strötgen 2014) of techno culture, which are in turn interwoven with aspects of Japanese idol culture. In this way, a coherent music-based gameplay gestalt (Lindley 2002, Fritsch 2014) is created during play that allows the player to experience “techno” not just as a sounding phenomenon, but as an idea, via his own body, that “expands into technology” (Klein 2001) – a “human interface”.
With the release of Connected Gaming: What Making Video Games Can Teach Us about Learning and Literacy (Kafai, Burke & Steinkuehler, 2016) and the edited collection Exploding the Castle: Rethinking How Video Games & Game Mechanics Can Shape the Future of Education (Young & Slota, 2017), the topic of video games and game technology serving as tools for education has very much been propelled into the pedagogical zeitgeist.
One of the tools I experiment with in my thesis is a virtual instrument created in Minecraft that affords students a “constructivist” learning experience, in that learners may complete set tasks in a playful, constructive and exploratory manner. These tools emphasise embodied cognition and pattern recognition in an attempt to ‘quantify’ musical concepts, and allow learners to bridge their tacit musical knowledge with practical knowledge. In doing so, topics such as intervals and note relationships can be seen in vivid detail and interacted with. This method combines goal-based, ‘gamified’ teaching methods that allow autonomy over learning outcomes, as opposed to more conventional methods such as reading and traditional lessons. In my paper I discuss the ways in which the tools I developed can elucidate chosen aspects of music theory in an interactive setting in which learners can engage with the learning experience at their own pace.
11:00 – 11:30 Tea & Coffee Break
11:30 – 13:00 Session 2 – Virtual Worlds
The steady development of virtual reality has increased academics’ awareness of the issues associated with understanding both its derivation and its originality. One of the primary questions becomes: how do we contextualise VR in a way that pays attention to both its antecedence and its uniqueness? Answering this question is made more complex by a focus on the soundscapes of VR. Attention continues to be paid to the visual novelty, rather than to the sonic or the audio-visual relationship. While links between VR and gaming have dominated the former’s discourse, in this paper I want to place VR soundscapes within the wider filmic audio-visual experiences of the early 1900s, such as Hale’s Tours, which would develop into VR experiences at Universal Studios and other amusement parks. By returning to the early years of film we can draw parallels between the novelty of audio-visual soundscapes in film and VR, as well as understand how VR soundscapes are being sculpted around an audience sonic literacy developed over the last hundred or so years of film experience. Rick Altman and Ian Christie have written extensively about the early soundscapes of film and how audiences negotiated them for the first time. What needs to develop now, however, is a discussion of how these early filmic experiences created a way of understanding audio-visual experiences that today dictates VR soundscapes.
The recent boom in virtual reality is firmly rooted in video game culture and technology through its devices’ connection to gaming platforms (HTC Vive and Valve’s Steam) and consoles (PlayStation VR). Not all of the applications offered on these devices can be called ‘games,’ however, and both HTC/Valve and Oculus prefer the label ‘experiences’ on their websites. Many of these experiences are VR versions or ‘ports’ of products from older media, like Minecraft and Superhot VR, or genres in older media, such as space simulation or racing simulation games. As a consequence, there is a certain amount of remediation (Bolter and Grusin 1999) in terms of audiovisual aspects and protocols for interaction (Gitelman 2006), including soundtrack elements and their affordances (Clarke 2005; Grimshaw 2008; Kamp 2014).
This paper looks at a non-game, Google Earth VR, which in its porting from Windows desktop and Chrome browser application to VR experience gained a soundtrack that features, in addition to environmental and interface sounds, dynamic music (Collins 2008; Käser et al. 2017). GEVR’s score resembles aspects of film and video game music, in particular games with similar visual perspectives such as Cities: Skylines and SimCity (2013). I will interpret this not as GEVR becoming more game-like by virtue of its score but, taking a cue from Eric Clarke’s ecological approach to listening, as the experience’s soundtrack acquiring certain affordances remediated from those older audiovisual media.
Set in a post-apocalyptic, alien-ravaged Earth, NieR: Automata (2017, Platinum Games) is a video game that encompasses diverse environments and gameplay styles, from 3D open worlds, to 2D side-scrolling platforming, to shoot ’em up and bullet-hell styles. The player traverses changing visual environments whilst shifting between these different styles of combat, accompanied by a soundtrack that adapts to the in-game environments. The significance of this adaptive soundscape in NieR is its intense focus upon the location and status of the player-character, which determines the various, individually altering aspects of the soundscape. James Cook speaks of the medieval soundscape in his case study of The Witcher 3: Wild Hunt, identifying that the game addresses ‘not only the musical score but also wider aspects of soundscape such as vocal accent, foley, and manipulation of the aural field.’ This paper will discuss these wider aspects of NieR’s audio world, following the introduction of various languages within the music that incorporate an android/human culture into the soundscape of the game. It will identify the significance of the composer’s decision to incorporate quiet, medium and dynamic variations of each area theme, building the soundscape alongside the player’s progress within the game’s narrative. It is the identification of that progression which triggers the introduction, and intensity, of vocality and song within the soundscape.
13:00 – 14:00 Lunch
14:00 – 15:30 Session 3 – Music During and After Play
This paper explores music in and around the digital game Dota 2 (2013, Valve). I draw on a range of methods and sources, including the analysis of musical style and reception, an anonymous online survey, and three months of immersion in the game-world and its surrounding social spaces such as Reddit and the broadcast platform Twitch. Firstly, it is shown how the game’s modularly-constructed soundtrack is affective and functionally successful for players. The music facilitates strategic gameplay and group sociality despite its use of often-criticised stylistic conventions of contemporary film scoring, such as loud synthesised bass lines and repetitive ostinati as primary vehicles of thematic development. Secondly, when a large portion (70%) of players choose to instead listen to their own personally-curated playlists while playing Dota 2, I suggest they employ music as a tool for managing their affective state. This brings into question the relatively new cultural assumption that music and mood are naturally connected, and allows for broader reflection on music’s relationship with individual agency and modern technology. Thirdly, considering Dota 2’s broader social world, I show how music for gameplay melds into the soundscapes of spectacular, Olympic Games-style performance events in the professional esports arena, while professional players also broadcast their musical taste to a global audience through Twitch. The paper ultimately aims to highlight Dota 2’s place within existing musical traditions, bring its music into dialogue with contemporary musicology, and set the stage for further scholarly investigation of music in similarly popular competitive games like League of Legends (2009, Riot).
Game preservation has risen up the research agenda in recent years (e.g. Lowood 2009; Guttenbruner et al 2010; McDonough et al 2011), with much emphasis centring on emulation and the original experience (Serbicki 2016; Swalwell 2017) and the importance of documentation (Lowood 2013; Newman 2012). However, comparatively little work has been conducted on the theory and practice of game sound archiving (though note the commentary in McDonough et al 2010 on the limitations of videogame sound emulation). This paper reports on one recently founded project that seeks to tackle this issue. The National Videogame Foundation ‘Game Sound Archive’ (GSA) operates in partnership with the British Library and centres on the creation and curation of archival-quality recordings of the distinctive sounds of digital games and gameplay.
Importantly, the GSA is not a collection of abstracted music files or sound effects and does not focus only on capturing the raw output of hardware systems and sound chips. Rather, the scope of the project extends to actuality recordings of games being played. This decision has two immediate consequences. In the first instance, it means that the recordings account for the totality of game audio emanating from systems and games at play. As such, music and effects intermingle, sometimes complementing one another and sometimes competing for sonic space depending on the design of the audio engine. However, more than this, the interest in documenting the actuality of gameplay brings the sounds of player interactions and the operation of the physical interface within the scope of the project.
Giving examples of some of the recordings, the paper explores the rationale and development of the GSA in the context of extant formal and informal game preservation projects and (game) sound collections such as the HVSC (High Voltage SID Collection) and VGMRips. The paper continues by considering the implications of these various projects’ approaches, the (patchy) state of current game sound emulation and its role in preservation, and use-cases for archival recordings of game sound and actuality recordings such as those in the GSA. The paper concludes with an outline of the curatorial development plan for the GSA and an invitation to collaborate in the recording process.
Travellers in Japan, whether strolling amongst skyscrapers in the large urban centers of the country or passing through train stations in remote rural areas, are sure to encounter sonic environments colored by the sounds of gaming. Handheld devices, the solitary machines ubiquitous in public spaces, and the full-fledged gaming parlors that populate the country have changed the Japanese soundscape drastically.
My paper is an examination of a music genre whose primary points of sonic reference are these aspects of the Japanese sound environment. An onomatopoetic phrase used in Japanese to signify the sound of video games, “pico-pico” now also signifies this new musical genre. Pico-pico has reinvented its other major point of reference, the popular underground Shibuya-kei style, for a younger generation of listeners – the first generation that grew up with video games as a pastime and an important source of cultural reference.
A reading of the ways in which the aesthetics of video game sound have affected pico-pico and other recently emergent musical genres will lead into questions of how sound and music can serve as the means through which the virtual travel central to the experiential aspects of gaming is written into the bodies of listeners outside of game space. Making reference to fieldwork I completed in Tokyo, I will discuss how certain forms of game sound have become codified for certain circles of listeners in ways that allow the sounds to afford kinds of imagined travel, with profound effects on how time, space, distance and place are experienced by listeners.
15:30 – 16:00 Tea & Coffee Break
16:00 – 17:00 Keynote Address: Adele Cutting
Evening – Evening out in Leipzig at a local Kneipe

Day 2, 14th April 2018

9:30 – 10:30 Keynote Address: Kristine Jørgensen
10:30 – 11:00 Tea & Coffee Break
11:00 – 12:30 Session 4 – Information from Music
Payday 2 seems superficially similar to many other first-person shooters and stealth games. The Graphical User Interface (GUI) contains typical shooter indicators for health and ammunition alongside typical stealth-game indicators for suspicious and alerted enemies. However, Payday 2 also omits or limits a number of elements common to these genres’ GUIs, such as player radars, objective markers and ability timers. Instead, these commonplace GUI elements are replaced with auditory interfaces throughout the game.
This paper deconstructs two levels from the co-operative first-person stealth-shooter Payday 2 to demonstrate how auditory elements can be used within interactive media to replace elements of user interface that are conventionally visual. It examines music, dialogue and sound to build an understanding of how players must interact with the audio of the game.
To successfully navigate the game world and find ludic success, players must develop an understanding of the game audio, in something similar to the knowledge described by Bourgonjon as “video game literacy”. This may help to immerse players more completely within the game, following the principles of Grimshaw and Ward, and allow us to establish a basis for the examination of immersive audiovisual environments such as those found in virtual reality.
This paper focuses on how video game audio can be transformed into new auditory information systems (Fritsch 99; Jørgensen 168; Summers 130) in the act of speedrunning. Here, audio is taken to subsume music, sound (effects) and dubbing (Fritsch 96), and speedrunning is conceptualized as the goal of finishing a game as fast as possible under certain rules. Since perceivable visual information tends to be reduced in speedruns, such new auditory information systems can compensate for this loss of visual information (Jørgensen 164). This paper argues that linear or reactive music can become proactive music (Liebe 47) during speedrunning. Existing video game audio can be recontextualized as audio cues for tricks and glitches, which constitutes new auditory information systems that evoke actions from the players. Audio cues can be realized in two different ways: either they are intended and properly designed by the developer, or they are made up by runners and communities through the aforementioned recontextualization, which cuts through the intended sound design of the game. Insofar, it is plausible to say that speedruns are not just non-narrative interventions, but can also be non-auditory interventions. All these aspects are analyzed with examples from actual speedruns, in contrast to the normally played games. Blindfolded speedruns will also be analyzed as extreme cases in which the newly created auditory information systems have to compensate for the complete loss of visual information.
In musical aesthetics it was widely accepted, from German Romanticism until the early 20th century, that music carries extra-musical meaning by virtue of its expressiveness. This notion seemed particularly appealing in cases where several arts met in a synthesis of visual, lyric, dramatic and audible elements, thus forming a Gesamtkunstwerk. E.T.A. Hoffmann, Richard Wagner, Hans Pfitzner and Ferruccio Busoni are representatives of this common view, lasting more than a century, whereby opera and music drama were capable of expression beyond speech. Video games resemble the concept of the Gesamtkunstwerk: they too are constituted by a combination of multiple arts. However, what is the role of music in such a composition? Ever since the linguistic turn in philosophy, not only is the idea of music expressing “higher truth” beyond speech counted as obsolete; its ability to impart semantic meaning of any kind is generally doubted. Yet music and sound in video games can be crucial carriers of information: they support the atmosphere, anticipate events or provide feedback to the player. In this way they are, to a greater or lesser extent, an important part of the game interface. How exactly do music and sound perform this function? What is their relationship to the other arts involved? This paper approaches these questions from a music-aesthetic viewpoint, using methods of semantics, semiotics, musicology and music theory.
12:30 – 14:00 Lunch
14:00 – 15:30 Session 5 – Music and Personal Experience
‘Ludomusicality’ refers to the processes and gestures involved in musical play (Mosley 2016), but not strictly in relation to instrumentality. Through such musical play, chiptune – as a genre and (sub)culture – is in continual expansion beyond its origins in technological necessity (Collins 2008; Jenkins, Ford and Green 2013). At the ludomusical hands of its creative participants, chiptune has spread into the fannish ‘margins’ (Jenkins 2013) of existing media and musical texts. Whether as a nostalgic ‘miracle fuel’ (cf. Cheng 2014) or a display of multiple tastes, composers fuse chiptune with – among many other genres – jazz and reggae, and transform television and film soundtracks. If music is integral to ‘producing’ identity (Frith 2002), then how is intertextual chip-ludomusicality – which moves through cultural boundaries towards interconnection – contingent to the fannish identity of the chiptune composer? Through Rosi Braidotti’s framework of nomadic subjectivity (2011), my paper approaches this question by theorizing chiptune fan identity – as a fannish persona – as a nomadic play of subjectivities. I contend that chiptune fan personas are temporary and dynamic amalgams of heterogeneous and posthuman (Braidotti 2013) elements: not an unchanging, fixed or lone ‘unity’ (cf. Stanyek and Piekut 2010), but a temporary and fluid persona reliant on a nomadic interplay of fannish yearning for identification (Sandvoss 2005) and the agency of musical/non-musical, human/non-human ‘actors’ (Blake and Van Elferen 2015; cf. Latour 1996). In addition, I propose not only that chiptune and non-chiptune elements allow the participant to synthesize a stance of fan identity – as subjectivity – through musical ‘encounters’ (Massumi 2002), but also that chiptune composers can in turn influence and tailor these encounters – for specific desires and senses of “self” – through ludomusical gestures.
Websites like AutismGames offer online ‘serious games’ or ‘educational games’ for people (especially children) with Autism Spectrum Disorders (ASD). Yet all respondents in my survey indicated that they prefer to play casual games. Research on neurotypical persons has shown that casual video games also improve mood and decrease stress (Russoniello, Fish, O’Brien, Pougatchev, & Zirnov, 2011; Russoniello, O’Brien, & Parks, 2009a, 2009b; Ryan, Rigby, & Przybylski, 2006). Could this also be the case for people with ASD?
My own experiences as a high-functioning Aspergirl, in researching autism and in teaching the piano to autistic children, have given me the wish to understand what is so soothing about casual games, especially the variant that can be described as ‘sound toys’. Combining anecdotal evidence with survey research, embedded in the literature, this article will answer the question: ‘What is so appealing about sound toys that autistic people like to play (with) them?’ All respondents were asked to play Ariel’s Symphony, a Disney mini-game in which the player can combine various musical fragments. The little mermaid Ariel is happy with everything the player does, and the game never ends by itself. That could mean that Ariel’s Symphony is not a game, for a game should have an in-game goal (Suits, 2005). But if it is not a game, then what is it? And why would people with ASD be so happy to play (with) it? In order to answer these questions, this paper will explore the personal goals of “playing” this “game” for people with ASD.
Musical entrainment is widely recognised as occurring in a variety of situations (Merker, 1999), from music listening (Large & Kolen, 1994) and ensemble performance (Huron, 2001) to armies marching in step (McNeill, 1995) and mother-infant bonding (Feldman, 2007). As a result, it appears to be a universal and fundamental human trait (Philips-Silver, 2009).
Entrainment has been linked to feelings of enjoyment and pleasure (McNeill, 1995), the ability to perceive time (Clayton et al, 2005), and the loss of awareness of surroundings (Woody & McPherson, 2010). The implications of this are that entrainment can be linked to experiences of flow (Csíkszentmihályi, 1992), thought to be a key motivational factor for engagement in video games (Stevens & Raybould, 2014).
There is anecdotal evidence to suggest that musical entrainment occurs during video game play, but to date there has been very little work conducted in this area (Phillips-Silver, 2009). However, links can be drawn between the rhythmic interplay of game and player and the overall play experience (Costello, 2016). The majority of entrainment studies tend to focus on relatively simple movements, such as finger tapping (Repp, 2006), rather than more complex whole-body movements. As a result, their findings are directly applicable to video games, where the input movement tends to be a button press or joystick movement.
So, if entrainment does indeed occur within video games, then its successful facilitation should lead to a more enjoyable experience for the player.
This presentation will provide an overview of current research into musical entrainment and the parallels that can be drawn to the field of video game music. It will also discuss a study that aims to investigate the extent to which entrainment occurs within video games and what effect the phenomenon has on playing style, as well as provide some ideas as to what may affect the likelihood of entrainment occurring.
15:30 – 16:00 Tea & Coffee Break
16:00 – 18:00 Session 6 – Aesthetics and Ethics (or, Drinking and Thinking)
Games are an integral aspect of human culture and they are always an aesthetic experience (Mandoki, 2016), understanding aesthetics in relation to the subject's condition of openness to her context, whether natural or social. Thus, videogames are a more recent iteration of a cultural-aesthetic human activity that, like all human activity, has a normative moment and is expressive of a particular ethics (Dussel, 2014).
To study the relationship between music and ethics, the present work will explore how the musicalization of three common situations (introduction, first overworld presentation and regular combat) in the games Chrono Trigger (1995) and Final Fantasy VI (1994) is a relevant component in the characterization of videogame space (Roth, 2017). I will review how these characterizations are expressive of some qualities of a particular ethical-mythical nucleus (Ricoeur, 1990), which Enrique Dussel explains as “el complejo orgánico de posturas concretas de un grupo ante la existencia” (“the organic complex of a group's concrete stances towards existence”) (1975, ix).
To that effect, I will carry out an aesthetic analysis of these three situations, applying a model proposed by Katya Mandoki (2001), stressing the acoustic register and the tonical and proximity modalities, and framing it within the general perspective she establishes for the relations between aesthetics and games (Mandoki, 2006) – all of this in the general narrative context of both games.
Finally, I will also comment on potential conflicts that could stem from the musical immersion that Chilean players may experience in videogame worlds created by foreign designers, applying the ALI model (Isabella van Elferen, 2017) and stressing – from a local perspective – the musical affect and literacy dimensions.
The authoritarian metropolis with its rampant social inequalities plays an important part in video games. From the Midgar slums of Final Fantasy VII (1997) to the elvish ghettoes of the Dragon Age series (2009-2014), visions of poverty act as confirmation of the need for a player-hero to intervene. This paper seeks to contextualise the class structures in Revolution Software’s Beneath a Steel Sky (1994) within the wider depictions of class in videogames (and dystopias more specifically) and their relationship to Jameson’s idea of the ‘political unconscious’, and to analyse how the class-based environments in the game are characterised musically. The 1994 cyberpunk adventure game takes place largely in a Ballardian, literally stratified metropolis; the three levels of the city reflect the social classes of the inhabitants, with the working class occupying the industrial top layer, the middle class occupying the middle level, and the upper classes occupying the ground floor, away from the pollution of the upper levels. Each of these levels has its own individual sound, from the harsh rhythm of the working-class level, which blends seamlessly with the industrial sounds of its factories, to the much more diversified soundscape of the city’s upper-class level, which also boasts a bar with a live band, as well as a jukebox. This paper will analyse these soundscapes from a semiotic perspective (e.g. Tagg, 2012), while also drawing on Henry Jenkins’s ideas surrounding narrative architecture and environmental storytelling (2004), finally explaining how music here is not an expression of class identity, but a sonic characterisation of class structure.
This paper aims to define and bring a historical perspective to aleatoric music composition in relation to terms like chance music, musical dice games, open form, mobile form, procedural music and indeterminate music, among others, and to discuss some possibilities in the computer games field using audio middleware like FMOD Studio, Wwise and Pure Data in order to create a more immersive audio experience for the player, avoiding excessive musical repetition while using less musical material and computer memory. During the presentation I demonstrate the different compositional processes applied in the audio game Breu, a thriller computer game using only audio resources, implemented in the middleware FMOD Studio, which allows audio to adapt according to game parameters. Considering the particular way of creating music for games using non-linear materials in the form of loops (unlike linear media such as film and animation), the continuous growth of the game industry, and the new audio technologies brought by VR devices, it is important to investigate new ways of creating and providing music for games in order to bring a more immersive experience to the player.
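To make the middleware workflow concrete, here is a minimal sketch in plain Python (deliberately not FMOD Studio's actual API) of the kind of parameter-driven aleatoric cue selection described above. The cue names and the `tension` parameter are hypothetical stand-ins for whatever parameters a game would expose to its audio middleware:

```python
import random

# Hypothetical cue pools: in middleware such as FMOD Studio these would be
# playlists inside an event, switched by a game parameter ("tension" here).
CUE_POOLS = {
    "calm":   ["calm_loop_a", "calm_loop_b", "calm_loop_c"],
    "tense":  ["tense_loop_a", "tense_loop_b"],
    "combat": ["combat_loop_a", "combat_loop_b", "combat_loop_c"],
}

def pick_next_loop(tension, previous=None):
    """Aleatorically choose the next music loop for the current game state.

    Picks at random from the pool matching the tension parameter (0..1),
    avoiding an immediate repeat, so a small amount of musical material
    feels less repetitive.
    """
    if tension < 0.33:
        pool = CUE_POOLS["calm"]
    elif tension < 0.66:
        pool = CUE_POOLS["tense"]
    else:
        pool = CUE_POOLS["combat"]
    candidates = [c for c in pool if c != previous] or pool
    return random.choice(candidates)

# Example: simulate a short stretch of play with rising tension.
prev = None
for tension in (0.1, 0.2, 0.5, 0.9):
    prev = pick_next_loop(tension, prev)
    print(f"tension={tension:.1f} -> {prev}")
```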
This paper develops a framework for associations between sounds, video games, and alcohol. Some recent studies (Kowert and Quandt 2016; Cranwell et al. 2016) examine concerns regarding representations of drugs and alcohol in video games, while others (Montgomery 2006; Schultheis and Mourant, 2001) use virtual reality to simulate intoxication. These studies primarily focus on the presence and stereotypical use of alcohol, but pay little attention to related sounds and music, or to the increased integration between sound design and gameplay. This paper lays historical, cultural, and music-theoretical groundwork for creating an associative soundscape of alcohol in multimedia experiences, particularly video games.
I first define four primary areas of inquiry into sonic representations of alcohol in multimedia: 1) Sound Iconography, which highlights representative sounds of objects and personal behaviors; 2) Sound Environments, or unique sonic locations and settings; 3) Musical Depictions of Drunkenness, such as the use of specific orchestrations and cultural influences; and 4) Simulation of Intoxication, which looks specifically at altered sonic perceptions and experiences. I then demonstrate attributes of these four features through examples from the following video games (and other media): Bioshock; Red Dead Redemption; the Final Fantasy franchise; Warner Brothers' cartoon High Note; World of Warcraft; a 2017 advertisement series for Busch beer; and others. I conclude the paper by considering a larger context of sound and music studies related to alcohol, drugs, and addiction.
Evening – Ludo2018 Presents: Bonus Levels

Day 3, 15th April 2018

9:30 – 11:00 Session 7 – Sonic Engagement
In the age of participatory and convergence paradigms, videogame music has its own networked culture, with cybercommunities that discuss, share and create, allowing an open space of creativity and artistic activity in a constant digital flow. One of these practices is music composition and production for the medium, available on several platforms such as SoundCloud and YouTube, and specifically in the format of modification files (or mods). In my Masters dissertation, I demonstrated the existence of a new model of online artistic production and circulation of musical mods composed and shared on the Nexus Mods platform for the videogames The Elder Scrolls IV: Oblivion and The Elder Scrolls V: Skyrim. These mods consist of adding new musical material similar to the pre-existing soundtracks of both titles; however, the majority of the files in the platform's audio category relate to sound only. Using titles such as “better sounds” or “immersive sounds”, many modders aim to provide a more immersive experience to other gamers through the application of their mods in the game(s). In this case, immersive relates not only to the sound quality of the aural effects, but especially to a plausible construction of reality, in which gamers are living, playing and negotiating meaning in their own social context. Intersecting playbour, fandom, aural immersion and audiovisual literacy, these audio modders work on adding new layers to the soundscapes and ambiences of the virtual worlds presented in these two objects, placing immersion as a key aspect of design and playability, and using this material as a way of building their social capital and visibility on online platforms.
Sound and ambience have been widely considered as major factors of player engagement for casino games (Dixon et al., 2013; Marmurek et al., 2010). From 20th century slot machines to present-day video bingo games, both composers and sound designers have provided the medium with stimulating soundscapes as a means to keep players aroused. Arousal, according to Brown (1986), is the major reinforcer of gambling behavior.
This paper aims to outline, from a composer's perspective, some of the techniques used to increase arousal – such as anticipation sounds (Inouye, 2013) and rolling sounds (Collins et al., 2011) – in casino-themed video games, by analyzing selected cases and discussing the existing bibliography on the subject. Considering the environmental influence on players' listening and psychological responses (Collins, 2013; Griffiths & Parke, 2005; Marmurek et al., 2007), this research also intends to discuss differences between the soundtracks of gambling computer games and physical betting machines.
Our presentation focuses on sound and music as key – though often neglected – points of interface in VR experiences, and on how the technology of VR gaming might be used to reconstruct historic performances and spaces, situating both audiences and performers in a shared virtual auditorium to connect and share the ephemeral elements of music performance that might otherwise be lost. In the last few years, Early Music has grown in popularity. With audiences increasingly demanding ‘authenticity’, there has also been a concerted effort to create historically accurate performances, featuring musicians in period dress performing on period instruments, and on occasion performing in physical reconstructions of period venues. While this approach has clear benefits – it offers new experiential perspectives on Early Music and its performance – it also has its limitations: physical spaces are expensive to build and very difficult to modify and investigate systematically, and venues custom-built for these concerts impose geographical limits on potential audiences. This is where VR technologies have real potential. Our project explores how they might be used as a platform for investigating historical performance spaces and the music that was performed within them. Using a mixed-methods approach combining 3D modelling, acoustic modelling, ambisonics and immersive interfaces, we are recreating two virtual auditoria – St. Cecilia’s Hall in Edinburgh and the Chapel at Linlithgow Palace – and recreating performances from historical records. In our presentation, we will discuss our approach to modelling in detail, highlighting the key psychophysical cues that encourage and inhibit presence and immersion within the virtual space, the implementation of different aspects of the virtual auditorium, and some of our preliminary findings. We will conclude by discussing emerging lines of enquiry and how these have shaped the next phase of the project.
11:00 – 11:30 Tea & Coffee Break
11:30 – 12:30 Keynote Address: Michael Austin
12:30 – 14:00 Lunch
14:00 – 15:30 Session 8 – Composition and Design
Research has shown that peak emotional responses to music are often associated with expectation (Huron, 2008) or, to borrow Chion's terminology, with music that is highly vectorized: “…sound vectorizes or dramatizes shots, orienting them toward a future, a goal, and creation of a feeling of imminence and expectation” (Chion, 1994, pp. 13-14). Outside of linear cut-scenes or predestined sequences, the use of such vectorized music in video games is highly problematic given the aesthetic preference for smoothness (Medina-Gray, 2016), since the temporal indeterminacy that is a consequence of interactivity is likely to lead to musically jarring transitions (Munday, 2007). This paper attempts to examine the stylistic characteristics of active game music, revealing harmonic stasis, metrical ambiguity and ‘shortness’ (motivic rather than phrase-based structures) as products of indeterminacy, and noting parallels in the work of film composers facing similar indeterminacy through the rise of non-linear editing and compressed production schedules. Finally, the paper will discuss how the ‘infinite riser’ (the Shepard-Risset glissando) heard in the recent film work of Hans Zimmer, and in games such as Wolfenstein: The New Order (2014) and Doom (2016), might point the way towards the compositional holy grail: music that is both vectorized and temporally ambiguous.
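For readers unfamiliar with the ‘infinite riser’, the following Python/numpy sketch synthesises a basic Shepard-Risset glissando: octave-spaced partials glide upward under a fixed loudness envelope, each fading out at the top as it is reborn at the bottom, so the rise never resolves. Parameter values are illustrative, not drawn from any of the scores discussed:

```python
import numpy as np

def shepard_risset(duration=10.0, sr=44100, n_partials=8,
                   f_min=27.5, cycle_seconds=10.0):
    """Generate an endlessly rising Shepard-Risset glissando.

    Each partial's position in the log-frequency window wraps every
    cycle; a raised-cosine envelope is zero at the wrap point, so the
    frequency jump there is inaudible and the rise sounds continuous.
    """
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for k in range(n_partials):
        # position in octaves above f_min, wrapping once per cycle
        pos = (k + n_partials * t / cycle_seconds) % n_partials
        freq = f_min * 2.0 ** pos
        # raised-cosine loudness envelope over the log-frequency window
        amp = 0.5 * (1 - np.cos(2 * np.pi * pos / n_partials))
        # integrate instantaneous frequency to get a continuous phase
        phase = 2 * np.pi * np.cumsum(freq) / sr
        out += amp * np.sin(phase)
    return out / np.max(np.abs(out))

sig = shepard_risset()
print(f"rendered {sig.size} samples")  # write out with e.g. scipy.io.wavfile
```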
The manner in which soundscapes evolve and change during gameplay has many implications for player experience. Playdead's 2016 release INSIDE features a number of gameplay sections in which rhythmic audio cues loop continuously, both during gameplay and on player death. During these sections the game waits to respawn the player at an opportune moment in the loop. This paper uses one such section as a case study, building on the ideas put forth in Bash (2014) regarding spectromorphology in games to examine the effects of transitioning from diegetic sound effects to abstract musical cues on player immersion, mastery and narrative cohesion. The “musical suture” (Kamp 2016) created by continuously looping audio during death and respawn is also examined with regard to immersing the player within an evolving soundscape.
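The respawn-timing behaviour described above can be modelled very simply. The sketch below is not Playdead's implementation, merely a minimal illustration of deferring a respawn to the next beat boundary of a continuously running loop:

```python
import math

def next_respawn_time(death_time, loop_start, loop_length, beats_per_loop):
    """Return the next beat-aligned moment at which to respawn the player.

    A minimal model of the 'musical suture': instead of respawning
    immediately on death, the game waits until the continuously looping
    cue reaches its next beat boundary (all values in seconds).
    """
    beat = loop_length / beats_per_loop
    elapsed = death_time - loop_start
    return loop_start + math.ceil(elapsed / beat) * beat

# Example: a 4-second, 8-beat loop started at t=0; the player dies at
# t=5.3 s, so the respawn is deferred to the next beat boundary, t=5.5 s.
print(next_respawn_time(5.3, 0.0, 4.0, 8))
```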
Scoring interactive experiences such as video games and VR is remarkably different from creating a film soundtrack. In interactive content, users can choose their own path through the story, whereas in linear content there is only a single way of progressing. This linear/non-linear dichotomy has a major impact on music: for the music of interactive content to be effective, it has to adapt dynamically to the interactions and decisions of the users.
Video game composers have developed a number of techniques, such as vertical layering and horizontal resequencing, which allow the music to respond to the non-linear nature of a game. This is generally referred to as adaptive music. Although at times particularly effective, these techniques are limited both in (musical) scope and in the extent to which they can match the musical content to in-experience events on a granular basis.
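For readers unfamiliar with the first of these techniques, here is a minimal sketch of vertical layering, assuming linear fades and a single normalised intensity parameter; the layer count and fade shape are illustrative, while middleware such as FMOD Studio or Wwise expresses the same idea as parameter-driven track volumes:

```python
def layer_gains(intensity, n_layers=4, overlap=1.0):
    """Vertical layering: map a game intensity parameter in [0, 1] to
    per-stem gains, fading stems in one after another.

    Stems (e.g. pads, rhythm, melody, brass) fade in successively as
    intensity rises, so the mix thickens smoothly instead of switching
    tracks. At intensity 0 the mix is silent; a 'floor' stem could be
    pinned to gain 1.0 if music should always play.
    """
    gains = []
    for i in range(n_layers):
        threshold = i / n_layers          # where this stem starts fading in
        width = overlap / n_layers        # how long the fade lasts
        g = (intensity - threshold) / width
        gains.append(max(0.0, min(1.0, g)))
    return gains

# Example: the mix thickens as intensity rises from calm to full combat.
for x in (0.0, 0.3, 0.6, 1.0):
    print(x, [round(g, 2) for g in layer_gains(x)])
```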
In this paper, we argue that in order to create video game music that scores the events of interactive content as granularly as linear music does for films, a collaboration between composers and artificial intelligence (AI) is necessary. To support this thesis, we introduce the concept of Deep Adaptive Music (DAM), wherein music is generated in real time directly in the experience. The resulting collaboration between an AI and a composer augments the possibilities of traditional adaptive music by enabling infinite variation and complex musical adaptation. We present some examples of DAM and also discuss preliminary results of a psychological experiment, which indicate that DAM is able to significantly increase the engagement of VR players.
15:30 – 16:00 Tea & Coffee Break
16:00 – 17:30 Session 9 – Soundscapes
Spatial characteristics of music have been recognised and exploited in the West since early Christian antiphony, and became clearly defined as parametric elements during the 20th century. American composers began experimenting with the use of space in composition during the early 20th century; these experiments were eventually encouraged by a national drive to create participatory, democratic forms of art, in opposition to fascist, authoritarian modes of communication. Fred Turner coined the term ‘Democratic Surround’ to describe these new media models: multi-image, multi-sound-source environments created by artists associated with the 1960s counterculture, designed to model and produce a more democratic society. Contemporary mass media is directly connected to the media practices that emerged during the 1950s and ’60s, but is also an expression of something different: a ‘Commercial Surround’. We are surrounded by media and music, but not in the way that John Cage or La Monte Young envisioned. Video game music surrounds the listener too, but the impetus to design these enveloping audiovisual environments doesn't come from the confrontation with fascism; it comes from an overarching media and consumer culture. This paper explores the spatial deployment of music in video games, its connections with the ‘Democratic Surround’, and how it can be analysed in the context of contemporary algorithmic culture. Examples from contemporary games and my own spatial music experiments in Unreal Engine will be used as illustration.
The use of virtual reality (VR) technologies is rising in computer games, but also in research fields like psychology and cognitive science. One reason is the wide range of new possibilities that this technology opens up. Recently, VR has been tested for new ways of learning, for playing with spatial perception, and for new approaches in psychotherapy. The common ground for all these applications is their visual possibilities. Not yet explored is the wide range of options for audio-visual projects that explicitly focus on audio in VR applications. From daily life, we know that our spatial orientation is influenced by what we hear. From film studies, we know that audio can help to increase immersion in audio-visual works.
In our project, we ask whether VR technology and 360° audio can come close enough to reality that we can develop a serious game for ear/perception training in VR. The content was filmed and recorded at several indoor and outdoor locations in Berlin and Bayreuth with a 360° GoPro rig and a 360° microphone prototype developed by TU Berlin. Each level is bound to a new location, in which the player has to solve perception tasks such as matching the direction of the audio to the visuals, or dismissing filtered audio in order to find the ‘real’ sound. The game design is therefore based on an escape-room concept. It is meant to be motivating and entertaining, but at the same time to train participants in differentiated auditory perception.
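One way such a direction-matching task might be scored is sketched below; this is not the project's actual implementation, and the function name, tolerance and linear penalty are hypothetical choices:

```python
def direction_score(true_azimuth_deg, response_azimuth_deg, tolerance_deg=15.0):
    """Score a 'match the direction of the audio' perception task.

    Returns 1.0 for a response within the tolerance of the true source
    azimuth, decaying linearly to 0.0 at 180 degrees of error; angular
    wrap-around (e.g. 350 degrees vs 10 degrees) is handled.
    """
    error = abs((response_azimuth_deg - true_azimuth_deg + 180.0) % 360.0 - 180.0)
    if error <= tolerance_deg:
        return 1.0
    return max(0.0, 1.0 - (error - tolerance_deg) / (180.0 - tolerance_deg))

# Example: source at 90 degrees, player answers 110 degrees -> small penalty.
print(direction_score(90.0, 110.0))
```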
Based on this game, which is still in development, we also want to stress some questions that go beyond the scope of the game itself, such as: What can and/or should we achieve with spatial audio in VR? How do spatial audio and VR-visual interact? How can we influence the perception of space by audio and visuals in VR? How can we use VR for training and knowledge transfer?
The Dark Souls and Bloodborne series are notorious for the difficulty and challenge produced by the unique mechanics and concepts behind their peculiar narratives. Through the application of a dichotomy of sound and silence, the player is presented with a constant need to carefully inspect the environment through sound and, in key moments, through music. For that reason, the Soulsborne soundscape and music are very important pieces of the puzzle of the online fan community phenomenon that has built up around the series in recent years.
The crossover between Victorian, Gothic and medieval archetypes is conveyed through the music, which sets the mood for several locations and characters, mainly in boss fights, laying the foundation for the sound-versus-silence argument in the game. Furthermore, the eerie aesthetics carry through to the strategic sounds of steps, weapons and the general creature onomatopoeia heard by the player.
As such, my proposal is based on two related hypotheses. The first is that the employment of the sound and silence dichotomy works as a way of absorbing the player into the character's standpoint, allied to Daniel Vella's “ludic sublime” (2015), the application of Lovecraft's Cosmicism, and how these affect the gamer's agency within that digital world. The second concerns the way Soulsborne music draws on specific tropes from horror, gothic and epic musical imageries and yet deconstructs the usual functions of music in videogames, defining itself as a crucial element of the overall gameplay and as the establishing factor of key segments in the narrative arc.