Ludo2015 Programme

The revised programme for Ludo2015 is as follows.

Day 1: 9th April 2015

9:00 – 9:30 Registration
9:30 – 11:00 Session 1: Game Music Audiences and Reception
On November 29, 800 people gathered in the Cortot concert hall for sold-out performances by a chamber orchestra directed by Arnie Roth. Audiences, approximately 35 years old, came to hear an hour and a half of unpublished arrangements, symphonic rhapsodies and fantasy pieces realized from original compositions by Nobuo Uematsu, Kumi Tanioka and Hitoshi Sakimoto. While it is known that the audiences of classical music (Donnat, 2009; Babé, 2012) and jazz (Lizé, 2014) are increasingly aged and more and more elitist, it is less well known who goes to concerts devoted to symphonic interpretations of videogame soundtracks. More than 800 quantitative questionnaires enable us to draw out the characteristics of this population (age, cultural practices, gaming habits, etc.), and 20 interviews with some of these spectators provide qualitative information about the trajectories of these amateurs. Are there bridges between videogames and classical concerts? What can musicology and “cultural studies” learn from these publics? Through this sociomusical study of “game audio outside the game”, we suggest that we can better understand the connection of young people with classical music and, more generally, the evolution of participation in the arts and music in the digital age.
Throughout the history of video games, players have not just played the games as intended by the designers; they have also started to play with the games and their music. A wide range of participatory practices has thereby emerged that uses both as material, such as remixing or the creation of fan videos. These practices raise questions regarding the issue of musical meaning: during gameplay, the music is presented within the contextual frame of the game, as one part of a multimodal structure of which the player has to make sense, inter alia by interpreting the music. But what happens to this relationship when players rip a game and its music apart, using its sounds or songs in other contexts? This talk proposes an analytical approach to such practices that sees music as a culturally induced and learned system embedded in several cultural contexts, rather than as something contained in the notes alone. A broadened concept of game musical literacy, building on the approach proposed by Isabella van Elferen (2012), will be outlined and used as a theoretical framework. ‘Super Mario Bros.: The 8-Bit Opera’ by Jon and Al Kaplan will serve as a case study.
The Fallout series of games presents us with a post-apocalyptic world that looks and sounds a lot like the past. Drawing heavily on the popular culture of mid-century America, it transforms cultural currency into literal currency in the form of Nuka-Cola caps, but also in the curatorial side-missions that buy and sell the American Dream in many of its forms.
The music that haunts the airwaves of this dystopian world acts as more than temporal signifier, comic relief or poignant juxtaposition; it acts as a fright of semiotic ghosts that tell the player of a past that never was and a future that never would be. Close-harmony singing groups, leading ladies of the blues, multi-instrumentalist country-and-western singers and rat-pack crooner types populate the various radio stations of Fallout 3 and Fallout: New Vegas, and their music paints a picture of America that is at once distilled and critiqued. The optimism of Americana comes through these radio stations as strongly as its dark underbelly; along with the other remnants of popular culture strewn across this Wasteland, they describe a view of America that goes beyond the socio-political satire of a Grand Theft Auto to the way America sees itself, its values and the American Dream.
11:00 – 11:30 Tea & Coffee Break
11:30 – 13:00 Session 2: Technological Intersections
Perhaps it is no coincidence that music videos and video games became iconic audiovisual phenomena in popular culture at about the same time, and it was only a matter of time before the two blended together in various ways. Since the 1970s and early 1980s, music video games have become much more interactive, and video games have become the inspiration for music videos, and at times have become music videos themselves. This paper explores several ways in which the boundary between music videos and video games has been blurred, such as:
  • “Get Down (or Geddan)” and other glitches, tropes, 8-bit sound and imagery, or other aesthetic elements of videogames that influence visual or sonic elements within music videos.
  • Machinimatic music videos, a sub-genre of machinima wherein practitioners create new music videos for commercial recordings or re-create pre-existing live-action music videos within the virtual world of a videogame, such as those featured on MTV’s Video Mods.
  • 8-bit re-makes – or “demakes” – of pop music videos and video musicals, such as Levi “Doctor Octoroc” Buffum’s 8-bit game-inspired version of “Dr. Horrible’s Sing-Along Blog”.
  • App-based concept albums or music videos that are music videogames, such as Björk’s “Biophilia,” Polyphonic Spree’s “Bullseye,” and the music game “Inside a Dead Skyscraper,” created by Molle Industria in 2010 for Jesse Stiles’ song “The Building.”

I will also discuss the relationships of these media forms to participatory/DIY cultures and fan cultures and fan labor, and possible implications regarding the political economy of music and new media.

Throughout its history, MIDI technology has routinely been seen as a notation tool, a transcription of performance which can be routed to any sound module with the appropriate inputs. The emergence of a growing number of experimental practices around MIDI, however, most notably the infamous “black MIDI” subculture, has led to a resurgent interest in MIDI’s creative possibilities, particularly in the domain of gaming. While vast archives of MIDI transcriptions of classic videogames are easily available on the web today, these are not the focus of this paper, which explores instead the reciprocal relationship between gaming and MIDI technologies: on the one hand, games in which gameplay generates MIDI-based music; on the other, the use of MIDI patterns to sequence game graphics, architectures, and events in real time.

I begin by looking at the work of the Japanese game developer Toshio Iwai, whose early Famicom games such as Otocky (1987) can be seen as prototypes for more recent explorations of generative music and gaming. Iwai’s later and better-known invention, the handheld tone-matrix sequencer Tenori-on, can also be seen as a prototypical kind of game, anticipating later projects such as Electroplankton (2005). This provides the background for a discussion of more recent examples of MIDI-game convergence, including 8-bit games in which gameplay generates chipmusic; an adaptation of the early game Pong playable on the MIDI controller app Lemur; and UK game composer Will Bedford’s project The Adventures of General MIDI, which turns Apple’s DAW software Logic into a graphic game generator. I plan to interview Bedford about this project before the conference and will include excerpts from this as part of my presentation.

The paper will conclude by considering future directions for experimentation in the increasingly productive interface between MIDI and gaming technologies.
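
To make the first kind of convergence concrete (gameplay generating MIDI-based music), here is a minimal sketch of the general idea, not of any of the specific projects named in the abstract. The event names and note mapping are invented for illustration; it uses the mido library to emit MIDI messages.

```python
# Minimal sketch: mapping hypothetical gameplay events to MIDI notes.
# Requires the 'mido' library (with a backend such as python-rtmidi)
# and an available MIDI output port.
import time
import mido

# Hypothetical mapping from game events to pitches (C-major pentatonic).
EVENT_TO_NOTE = {
    "jump": 60,       # C4
    "coin": 64,       # E4
    "power_up": 67,   # G4
    "enemy_hit": 57,  # A3
}

def play_event(port, event, velocity=96, duration=0.1):
    """Send a short MIDI note for a single gameplay event."""
    note = EVENT_TO_NOTE.get(event)
    if note is None:
        return  # events with no musical mapping stay silent
    port.send(mido.Message("note_on", note=note, velocity=velocity))
    time.sleep(duration)
    port.send(mido.Message("note_off", note=note))

# Example: sonify a short, hard-coded stream of gameplay events.
if __name__ == "__main__":
    with mido.open_output() as port:  # default system MIDI output
        for event in ["jump", "coin", "coin", "power_up"]:
            play_event(port, event)
```

Reversing the data flow, so that incoming MIDI patterns drive graphics or game events, would amount to listening on an input port and dispatching game logic from note messages instead.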

Apps have become such a ubiquitous presence in most people’s lives that we no longer notice how much time we are spending on them every day. However, despite the pervasiveness of this time-consuming new media form, scholarship in this area is lacking.
This paper seeks to lay the foundations for the various elements that one must consider when studying apps with music as a primary feature. Instead of relying solely on ludomusicological texts, theories on games, interactivity, and musical instruments will help shed light on the nature of these apps. Two case studies are offered for analytical purposes: My Singing Monsters (Big Blue Bubble Inc.) and Bebot (Normalware), each of which highlights certain aspects of the aforementioned theoretical areas and leads to a classification of these apps as interactive interfaces, or “sound toys” (Robson, 2002; Dolphin, 2014), due to their goal-less nature and the way they put the user squarely in the role of creator. The paper will conclude by offering an interpretation of these music apps as part of a minimalist aesthetic, which, together with repetition and the concept of Zen, draws attention to how we use apps in everyday life.
13:00 – 14:00 Lunch
14:00 – 15:30 Session 3: Performing and Playing
Until very recently we were forced to use special controllers in video games like Guitar Hero. Those times are over. It is now possible to bring your own electric guitar, connect it to a video game console or computer, and thus turn it into an essential part of the game. The graphical user interfaces of these games primarily mirror the neck and strings of the guitar, while dynamically placed marks indicate what users are expected to play and what they actually do play. Users respond to the musical forms displayed on the screen, and the software continuously monitors their performances with regard to timing and pitch, awarding points for successful interpretations of songs. In this sense, these games both continue the tradition of classical notation and adapt it to include new elements from video games. This presentation focuses on Ubisoft’s Rocksmith, the most popular example of this new video game genre. The game not only allows users to choose from a broad variety of well-known rock songs, but also tries to match precisely the sounds of users’ guitars to the sounds of the original recordings. These new possibilities offered by sophisticated video game technology raise numerous questions: What are the obstacles to transmitting a user’s own instrument into virtual space and to virtualizing individual styles? What are the limits and possibilities of such methods of gamification? To what extent do guitar games represent a continuation of traditional notation and instruction? Is this kind of software able to recognize and assess virtuosity? To answer these questions, reference is made on the one hand to music didactics and media technology; on the other, songs or selected riffs by guitarists such as Jimi Hendrix are performed live on electric guitar in applications such as Rocksmith. This approach will reveal the limits of guitar games, especially with regard to their discrete grids and limited presets, but will also suggest their potential for gamers and musicians alike.
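
To illustrate the kind of pitch-and-timing monitoring described above, here is a minimal sketch of scoring detected notes against a note chart on a discrete grid. This is a guess at the general principle, not Rocksmith’s proprietary algorithm; the chart, tolerance window and point values are invented.

```python
# Illustrative sketch of pitch/timing scoring against a note chart.
# All values are hypothetical, not drawn from any shipping game.

def score_performance(detected, chart, tolerance=0.1, points=100):
    """Count a chart note as hit when some detected note matches its
    pitch and lands within the timing tolerance window."""
    score = 0
    for expected_time, expected_pitch in chart:
        hit = any(
            p == expected_pitch and abs(t - expected_time) <= tolerance
            for t, p in detected
        )
        if hit:
            score += points
    return score

# A note chart: (expected onset in seconds, expected MIDI pitch).
chart = [(0.0, 40), (0.5, 43), (1.0, 45), (1.5, 47)]
# The player is slightly late on the second note and misses the last.
detected = [(0.02, 40), (0.58, 43), (1.01, 45)]
print(score_performance(detected, chart))  # -> 300
```

The discrete tolerance grid in this toy version is exactly the kind of limit the presentation promises to probe: virtuosic timing nuances narrower than the hit window are invisible to such a scorer.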
This paper takes as its starting point my encounter with two “inappropriately epic-sounding tutti chords” in the game Diablo III (2012). What are the underlying mechanisms that can make chords stand out as “inappropriate” and “epic” at the same time? By combining and comparing several methods—reflections on an autoethnographic account, musical analysis of the score, and close readings of several video recordings of the event in question—I will show how this encounter exemplifies the myriad ways in which we hear video game music. More specifically, I will argue that a clash between two systems of expectation is at work here: musico-narratological understanding and ludic experience. This “ludo-musical dissonance” echoes Clint Hocking’s term “ludonarrative dissonance” (and more loosely Jesper Juul’s rules-fiction binary).
I will theorize this phenomenon, and attempt to encapsulate it in a broader understanding of the ways in which we hear video game music, by characterizing it as a “broken sign”, following Heidegger’s account in Being & Time. Diablo III’s chords, encountered as “unready-to-hand”, show how we experience background music in video games as a form of equipment that withdraws from our attention when it works correctly, but reveals its historical, film-musical context when it breaks down.
Mozart, who allegedly composed the parts of the “Musikalisches Würfelspiel” (musical dice game), also made a quite conscious game design decision. He recognised in chamber music, a participatory musical form, the need for an interactive diversion for non-musicians. He therefore introduced two dice, thrown to determine one of many possible combinations of musical segments of waltz music to be played afterwards. One of the core challenges in designing musical gameplay for entertainment, and even more so for learning, is to make music accessible to people who do not necessarily play an instrument or are not literate in musical notation. In Mozart’s case, he succeeded in making music more varied and introduced a participative mechanic. While this game mechanic is purely based on luck, it still involves the audience and makes the musical result feel more personal and unique. For this purpose Mozart abstracted waltz music from continuous pieces into smaller segments which can be rearranged freely. The proposed talk will also build on examples of commercial music-based games, serious games, sound art pieces and participative musical live performances created by the author. The common denominator of these examples is that they make aspects of playing music and composition accessible to players by abstracting from their original complexity. Designing for abstraction and finding parameters to make music interactive are the core challenges in creating musical gameplay. This is of particular relevance when games are used to trigger learning experiences, where the process of abstraction is even more delicate. On the one hand there is a need to reduce and abstract complexity to make a game playable; on the other, the complexities and intricacies of musical play must not be lost. This talk is situated in design and will use the author’s work to illustrate the balancing of these two aspects.
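
The dice-game mechanic is simple enough to sketch in a few lines: for each bar, the sum of two dice selects one pre-composed measure from a lookup table. The table contents below are placeholders, not the measure numbers of the historical game.

```python
# Minimal sketch of a Musikalisches Würfelspiel-style selection.
# The measure table is a placeholder; the historical game publishes
# a table mapping dice sums 2-12 to measure numbers for each bar.
import random

NUM_BARS = 16          # a 16-bar waltz section
CHOICES_PER_BAR = 11   # dice sums 2..12

# measure_table[bar][dice_sum - 2] -> index of a pre-composed measure.
measure_table = [
    [bar * CHOICES_PER_BAR + i for i in range(CHOICES_PER_BAR)]
    for bar in range(NUM_BARS)
]

def roll_waltz():
    """Roll two dice per bar and return the chosen measure indices."""
    piece = []
    for bar in range(NUM_BARS):
        dice_sum = random.randint(1, 6) + random.randint(1, 6)
        piece.append(measure_table[bar][dice_sum - 2])
    return piece

print(roll_waltz())  # one of 11**16 possible waltzes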
15:30 – 16:00 Tea & Coffee Break
16:00 – 17:00 Keynote 1 (David Roesner)
Conference Dinner at De Oude Muntkelder


Day 2: 10th April 2015

9:00 – 10:30 Session 4: Musical Mechanics, Game Mechanics
The origin of casual puzzle games (CPGs) can be traced back as far as Tetris in the earliest period of the video game industry; now, 30 years later, they make up a large majority of the mobile game market. Despite the innovation and development in the genre, the implementation of music has remained largely the same, with players offered a choice of linear music tracks that do not respond to changing game states or variables. Recent research suggests that there is a deficiency in the way audio is generally used within casual games to provide additional feedback and reinforce motivation and reward mechanisms.

In a study of the development and implementation of music in CPGs, I drew comparisons between the implementation of adaptive music in other genres and in CPGs, highlighting the disparity between the sophistication of the adaptive music techniques used in each.

The use of adaptive music techniques in cinematic games, or games with a large element of fiction, is largely justified by the attempt to marry the filmic traditions of music and image with the indeterminate nature of video games; the lack of such narrative in CPGs, however, demands a re-evaluation of the role of music in this genre.

I propose that the lack of development in casual game music and the absence of adaptive techniques show an under-utilisation of music as a mode of feedback, and I consider how music could, and perhaps should, function to further support reward mechanisms in the future.
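
As one hedged illustration of what music-as-feedback might look like in a CPG, the sketch below adjusts the volumes of stacked music layers from a hypothetical game-state variable, a combo counter; the layer names and thresholds are invented, not drawn from any game.

```python
# Illustrative sketch: vertical re-orchestration driven by game state.
# Layer names and thresholds are hypothetical.

LAYERS = ["base_loop", "percussion", "arpeggio", "lead"]

def layer_volumes(combo_count):
    """Fade in one extra layer per few chained matches.

    Returns a dict of layer -> volume in [0.0, 1.0]; the base loop
    always plays, extra layers reward sustained success.
    """
    active = 1 + min(combo_count // 3, len(LAYERS) - 1)
    return {
        layer: 1.0 if i < active else 0.0
        for i, layer in enumerate(LAYERS)
    }

# Example: a growing combo progressively thickens the music.
for combo in (0, 3, 6, 9):
    print(combo, layer_volumes(combo))
```

Even this toy version ties the soundtrack directly to the reward loop: the music itself tells the player how well they are doing.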

While artificial intelligence (AI) is often developed to procedurally generate video game content (PCG), such as map generation in Minecraft or dynamic item creation in Borderlands, video game soundtracks are rarely procedurally created. This paper discusses current methods for music generation and explores how they can assist in soundtrack creation for video games. Several arguments can be made for why generating music dynamically can benefit digital games specifically. As digital games become more procedural, the creation of more sophisticated soundscapes capable of dynamically adapting to the generated scenario could both provide consistent immersion for players and lower content creation burdens for developers. Procedural audio could also offer another layer of interactivity between players and the digital game, such as dynamically generating sounds or music based on actions performed by the user.

This paper will attempt to narrow the gap between the approaches currently available in the field of music generation and their potential application to generating dynamic sound or music in digital games. We will cover a wide variety of systems, from fully autonomous ones to systems involving various forms of human intervention. Example systems range from academic approaches such as MaestroGenesis, LoadBang, The Indifference Engine and other types of creative systems whose general concepts could potentially be applied to game music generation, to industry approaches such as Rocksmith’s session mode, Electroplankton and Microsoft’s Songsmith.
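
For readers unfamiliar with the field, the simplest end of this spectrum can be illustrated with a first-order Markov chain over pitches, a classic and deliberately naive generation technique; the transition table below is invented rather than learned from a corpus.

```python
# Naive first-order Markov melody generator (illustrative only).
import random

# pitch -> plausible next pitches (MIDI note numbers, C major)
TRANSITIONS = {
    60: [62, 64, 67],
    62: [60, 64],
    64: [62, 65, 67],
    65: [64, 67],
    67: [64, 65, 72],
    72: [67],
}

def generate_melody(start=60, length=16):
    """Random-walk through the transition table to produce a melody."""
    melody = [start]
    for _ in range(length - 1):
        melody.append(random.choice(TRANSITIONS[melody[-1]]))
    return melody

print(generate_melody())
```

In a game, such a generator would typically be conditioned on game state rather than run freely, which is precisely the coupling between generation methods and gameplay that the paper sets out to examine.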

10:30 – 11:00 Tea & Coffee Break
11:00 – 12:00 A Practical Demonstration of Game Music Implementation Methods with Richard Stevens, author of The Game Audio Tutorial
12:00 – 13:00 Keynote 2 (Karen Collins)
13:00 – 14:00 Lunch
14:00 – 15:30 Session 5: Analysing Game Music
There is much debate about the sexist representation of women in video games, for example by Feminist Frequency. Game characters like Lara Croft and Triss seem to have become our modern-day Sirens. But what about the true Sirens: mermaids? In European mythology, mermaids are often portrayed as beautiful, seductive maidens, greatly desired by lonely sailors. New forms of media, however, are creating new ways for people to record, express, and consume stories. Are mermaids still sexy after being re-mediated in games? Not per se. Disney’s The Little Mermaid was re-mediated in various computer games, which is interesting because games are interactive as well as embodied. Now the mermaid can function as an avatar (an object of identification), which makes the player a determiner of her story. In the magic circle, the player becomes Ariel: a simply pixelated playable avatar, the opposite of the “Women as Background Decoration” trope. What is the role of game audio in this re-construction of identity? The Little Mermaid games’ soundtracks echo the songs of the movie, and within this trans-media interaction they shift them from diegetic to non-diegetic. As the Little Mermaid gave up her voice in exchange for legs (and human sexuality), the computer games gave a new voice to her identity.

After an examination of the figure-ground relationship, various games featuring mermaids will be analyzed to answer the main question: “How is the identity of the mermaid musically depicted in video games?”

Research on music and personality has revealed that listening preferences, as well as emotional contagion from music, are situation-dependent. However, when determining the impact of music appraisal within an interactive setting, the notion of situation implies the product of subjective percepts, such as current motivational and attentional capacities or the sense of environment and task, which pose varying demands for adaptation on the organism. Recent work has shown interindividual differences in the appraisal of situational settings along personality dimensions such as Absorption, Sensation Seeking and the Big Five Factor Model. Similar results are obtained along cognitive styles of music listening for situations depicted in virtual scenarios containing music, such as video games. Put together, as much as music listening styles may change the impression of the space and place in which we are situated, there is reason to believe that similar interpersonal differences exist in processing styles for the evaluation of a situation, which in turn impact the experience of sound and music.

The envisaged presentation intends to apply the outlined thinking to previous findings as well as to new results obtained from two ongoing empirical studies in ludomusicology and sound perception. In addition, the implications of the above-mentioned personality constructs will be outlined with regard to the effect of structure/form and expression on human-computer interaction. The associated sonic characteristics of functional sound and music form the main part of this discussion, which aims to approach a first set of recommendations for creating personalised sonic content for video games.

The piece ‘One Winged Angel’ from the videogame Final Fantasy VII, composed by Nobuo Uematsu (Phillips 2014), has long been regarded by fans of the series as one of the greatest pieces of videogame music ever written. Composed in 1997 for the PlayStation and PC, it has the novelty of being the first work in the Final Fantasy series to use a recorded choir alongside the MIDI sounds used in the rest of the score. Given its continuing popularity, including frequent renditions at live concerts (for example “Video Games Live” or “Final Fantasy: Distant Worlds”), it is timely to perform a critical analysis of the original piece, exploring the context in which it was originally composed and its status as a milestone in videogame composition.

A number of analytical techniques are applied, including consideration of the impact of the lyrics, a focus on the harmonic structure underpinning the composition, and formal, thematic and timbral analysis. Schenkerian theory (Pankhurst 2008) is used as a tool to help dissect the work further; the analysis is led both by the analyst’s ear and by a MIDI file imported from the PC version of the game. The paper discusses how ‘One Winged Angel’ can be viewed as a videogame composition rather than just a simple linear piece in its own right; its basic looping structure is contrasted with the greater potential of adaptive music (Collins 2008) exhibited by other videogame works of the time.

15:30 – 16:00 Tea & Coffee Break
16:00 – 17:45 Session 6: Immersion and Interaction
Player immersion in video games is the most desired outcome for both players and developers, relying on a perfect audiovisual connection to create what is commonly referred to as ‘flow’. When applied specifically to music in games, ‘flow’ defines a continuous soundtrack which actively follows the player’s lead, creating a sense of familiarity with the environment that may lead to an increased sense of realism in the game-world. Appropriate musical material that engages with the visuals, often in directly interactive or player-triggered ways, may create both an emotional and a physical sense of involvement and thereby encourage immersion. I have created a model that synthesises a variety of scholarly approaches to flow, in order to present the most important set of parameters through which videogame music and sound design can promote immersion.
This model will be used as the basis for an analysis of Transistor (2014), an indie-styled action-RPG by Supergiant Games, showing how the different parameters are used within the game to promote immersion. Darren Korb’s score engages with the narrative’s sonically focused storyline, which follows a protagonist whose voice has been trapped within the ‘transistor’ of the game’s title. Mutism, vocality, and the musical rendering of characterisation are all a focus of Transistor’s soundtrack, applied in ways that demonstrate the significance of interaction, alongside familiarity, to game ‘flow’ and thereby to player experience.
This paper will combine a theoretical perspective on game audio with the experiences of a practicing game audio director. We begin by outlining some of the conceptual issues surrounding the production of virtual worlds in games, and, in particular, the role of audio in creating such worlds. How do we understand the gameworld with which we interact? What part might music play in creating these non-concrete universes? We will particularly draw on the work of Jesper Juul, Rob Shields and Brian Massumi to suggest some directions for answering these questions.
We will argue that, because audio plays a large part in forging virtual worlds, it is particularly significant for articulating realism. We use the frame of racing video games to discuss how audio (particularly music and sound effects) impacts upon the artistic realism of the created worlds, and, hence, helps to establish the distinct subgenres of racing games. As the virtual realities are projected partly through sound, it is an important part of how players conceive of the virtual space and their interaction with it. Using examples from Project CARS and World of Speed, we will explore the difficulties of combining music and car sounds in arcade racers, and how gameplay requirements can undermine realism in simulation racing games.
Non-linear and interactive music are well known in, for example, certain video games, where the music adapts to the gameplay. Live performances of the interactive music of these video games are extremely rare. This is partly due to the lack of interactive music solutions for live performance, and it results in concerts with linearly arranged versions of (originally) non-linear video game music, thus removing an important aspect of video games: live interaction. In this paper we discuss solutions for, and experiences with, live interactive orchestral (video game) music by means of two case studies.
  1. Karmaflow is a Rock Opera Videogame, released on January 19th, 2015. To celebrate this launch, an interactive concert was organised with the Metropole Orkest, an international cast of rock vocalists and a rock band on January 18th and 19th in theatre ‘aan de Parade’ in Den Bosch. As with interaction in the videogame, the audience has certain moments during the ‘play’ (concert) to decide whether the narrative should go in one direction or the other. Both music and narrative adapt to the choices the users make. In the case of Karmaflow in Concert, the interaction consisted of a voting mechanism whereby visitors who had the Karmaflow companion app installed on their smartphone or tablet could vote for one of two possible outcomes of the plot.
  2. NLN-live is an application for live non-linear and interactive instrumental music performances. The principle of NLN-live is simple: every musician repeatedly plays two staves of music that are presented on a screen, say X and Y, with the musical content of X and Y being variable and, for example, related to interaction (a sketch of this principle follows the abstract). NLN-live premiered at the 2013 ‘Museumnacht’ in Amsterdam, where the Gelders Orkest performed the live interactive music while a mod of the 1978 video game Space Invaders was played by a volunteer. NLN-live was also presented at the International Computer Music Conference 2014 in Athens and performed in the Athens Onassis Cultural Centre.

The interactive composition, based on the world-famous four-note melody that originally accompanied the game, was written by Stan Koch & Than van Nispen tot Pannerden.
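
The two-staff principle described in the second case study resembles double buffering: musicians play one staff while the other is rewritten in response to interaction, then the two swap. Below is a minimal sketch of that control loop under that assumption; the actual NLN-live implementation is not documented here, and all names are invented.

```python
# Sketch of a double-buffered score display in the spirit of NLN-live.
# Only the two-staff principle is from the talk; the rest is assumed.

def nln_loop(get_interaction, render, library, repeats=8):
    """Alternate between staff X and staff Y.

    While one staff is being played (rendered), the other is replaced
    with material chosen from the library according to live interaction.
    """
    x = library["default"]
    y = library["default"]
    for i in range(repeats):
        if i % 2 == 0:
            render("X", x)                  # musicians play staff X
            y = library[get_interaction()]  # meanwhile, Y is updated
        else:
            render("Y", y)                  # musicians play staff Y
            x = library[get_interaction()]  # meanwhile, X is updated

# Example with a stubbed interaction source and text "rendering".
import itertools
choices = itertools.cycle(["calm", "tense", "default"])
library = {"default": "phrase-0", "calm": "phrase-1", "tense": "phrase-2"}
nln_loop(lambda: next(choices), lambda staff, m: print(staff, m), library)
```

The design choice mirrors the problem the paper identifies: because the players always have a complete staff in front of them, the music can change in real time without ever interrupting the live performance.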

Evening Celebration – Ludomusicology Pub Quiz

