Category: Guest Contribution

We are proud to publish original articles from authors across the community here. If you wish to share news about your research, an event you are running, or if you have an article you would like us to feature, please contribute here or email us at ludomusicology@gmail.com.

What is 1-bit music?

By Nikita Braguinski and ‘utz’

On November 20, 2018, the musicologist and media theorist Nikita Braguinski and the musician, programmer and historian of computer sound working under the assumed name of ‘utz’ convened in an IRC chat channel to discuss a musically and technically intriguing question: What is 1-bit music?

NB: You are, as a musician and as a programmer, actively working in the genre of 1-bit music. Can you give a first, non-technical explanation of what it is?

utz: Sure. Let’s talk about 1-bit sound first. The subject sounds quite esoteric, but actually 1-bit sounds are more common than one might think. For example, if you’re a bit older you may remember that when you switch on a PC, it makes a “beep”. Another common example that many people may be familiar with is those birthday greeting cards that play a simple melody when you open them. Or, a more annoying example: the alarm in smoke detectors. All these sounds share a common principle in that they are produced by repeatedly switching the current that goes to the built-in speaker on and off, or in other words, they are produced by toggling a signal between two states. These two states can be expressed by a number containing a single binary digit (bit). Basically, 1-bit music is music made using this principle of sound generation.
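The toggling principle described above can be sketched in a few lines of Python (a hypothetical illustration, not code from the interview or any real driver): a 1-bit tone is nothing more than a stream of samples that only ever take the values 0 and 1.

```python
# Hypothetical sketch: a 1-bit tone as a stream of 0/1 samples.
# The signal simply toggles between its two states at the note's frequency.

def one_bit_tone(freq, seconds, rate=44100):
    """Return a list of 0/1 samples forming a square wave at `freq` Hz.

    `rate` is an assumed output sample rate; real 1-bit hardware has no
    DAC at all and just flips a speaker line on and off.
    """
    n = int(seconds * rate)
    # The integer part of (elapsed cycles * 2) alternates even/odd,
    # giving two state changes per cycle -- one full square wave period.
    return [int(i * freq * 2 / rate) % 2 for i in range(n)]

beep = one_bit_tone(440, 1.0)  # one second of the classic PC "beep"
```

Fed to a sound device at 44100 Hz, `beep` would come out as a 440 Hz square wave; every more sophisticated sound discussed in the interview is, at bottom, a cleverer way of ordering such 0s and 1s.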

NB: We’re only talking about electronically produced sounds, right?

utz: Pretty much.

NB: Great. We’ll return again to this definition to discuss it in more technical terms, but let’s now take a look around this musical practice. It exists within the specific culture of chiptune artists, the demoscene and the retrocomputing scene. Can you give a kind of a first working definition for each of them?

utz: I’d somewhat argue with that statement, actually, but yes, there’s definitely a strong connection with these scenes. “Chiptune” is probably the hardest one to define. Generally, it’s a culture revolving around music made with old home computers and gaming consoles, or music that sounds like it would come from one of these machines. In 1-bit, we often use machines from the same era. The sounds can also be quite similar, though they can also be radically different. Furthermore, there is quite a bit of overlap in terms of people active in the chiptune and 1-bit scene. As a matter of fact, there are hardly any “pure” 1-bit musicians. There are also differences regarding technology. The machines commonly used by chiptune musicians generally have a sound chip, a dedicated piece of hardware that produces sound. Machines used for 1-bit music generally don’t have that.

NB: That’s a huge difference! How do you make sound without sound hardware?

utz: There are different ways to go about it, though commonly we look for something like a data port. For example, many old computers can load and save data from/to tape. That’s something we can hook into. If you remember old modems, they make these gnarly sounds, right? That’s pretty much the same idea. Basically, these data ports can be active or inactive, on or off.

NB: If I’m decoding what you describe correctly, 1-bit culture is all about difficult, “gnarly” sounds.

utz: A lot of the fun, for me at least, comes from how to get it to make meaningful sounds in the first place. But yes, it often has this very gnarly aesthetic and that’s something that’s very appealing to 1-bit folks. At the same time we strive to make it sound less crude, and make it produce more complex sounds. There’s a kind of very immediate “you vs. the machine” aspect to it.

NB: Does this also apply to retrocomputing and the demoscene?

utz: For retrocomputing, yes. For the demoscene, it’s mostly true for the “oldschool” part of the scene. The demoscene is a culture that focuses on producing non-interactive, realtime-generated audiovisual computer art. Retrocomputing is a movement that uses supposedly obsolete computer hardware in various ways. As a good portion of the demoscene (the so-called “oldschool” scene) revolves around old computers, there’s significant overlap between the two, of course. Personally I don’t like the term “retrocomputing” very much. It carries this implicit connotation of “obsolescence”, which I find inappropriate. For me, these old machines aren’t obsolete at all; they’re valid contemporary musical instruments. The software being developed for these machines is also very much about exploring new possibilities and finding new algorithms, so “retro” isn’t a good fit in that respect either. Another thing about 1-bit music is that there are people in it who aren’t connected to either of these scenes. There are, for example, people who come from the calculator scene. There are these graphing calculators that are used in school, right? Well, there are people who dig into these machines, and coincidentally they can make 1-bit music. And 1-bit music existed before chiptune/demoscene/retrocomputing.

NB: And all these cultures seem to emphasize technical restrictions. Is that right?

utz: Yes, there’s definitely some truth to that. I think there are actually two different aspects related to that, though. One is challenge: It’s a challenge to make something on these old machines, even more so to push the boundaries of what can be done. The other side is that paradoxically, restrictions can be artistically liberating, in a creative sense.

NB: I totally agree. But I have a question: If the term “1-bit music” is really only about 1-bit sound then any kind of music can be “1-bit” if it is played using this specific technology?

utz: I don’t think anyone has ever given too much thought to a solid definition of 1-bit music so far. But yes, I’d agree with that. In fact, it’s possible to employ 1-bit techniques on sound chips, for example. This contrasts with the term “8-bit music”, where “8-bit” generally refers to the type of CPU used to make the music. 1-bit music can be 8-bit music, although it doesn’t necessarily have to be. There is a general-purpose music editor called SunVox, which has a 1-bit mode, so you can make 1-bit music without actual 1-bit hardware. There are 1-bit VSTis (sound plugins) as well.

NB: So, unlike algorithmic composition it is not primarily about generating music, but the generation of sound.

utz: In a sense we have the same problem as with chiptune: everybody has their own idea about what qualifies as a chip and what doesn’t.

NB: Well, now that we have taken a look at the cultural environment in which 1-bit music is being created, could you give a more technical explanation of how it works?

utz: Ok, so, as mentioned before, 1-bit music is made by switching signals on and off. If you switch a signal at regular intervals 440 times a second, you get a 440 Hz note. These are the basics. The gist is that by switching in more complex patterns, one can produce more complex sounds. This includes polyphonic sound (containing several independent melodies or effects played at the same time). Modern 1-bit music is generally about polyphonic sound. There are a number of techniques that can be used to achieve polyphonic sounds, and there are a lot of different implementations of these techniques. By the way, that’s also something that sets 1-bit apart from, say, chiptune. If you take Gameboy music for example, at least 90% of it is made in one of two editors, LSDj and Nanoloop. In 1-bit, on the other hand, we regularly use tons of different software. So, there are two main approaches to generating polyphonic sound with a 1-bit signal. One is called Pulse Frequency Modulation (which is actually an established thing, we didn’t make that up), and the other one goes by a bunch of different names; I usually call it Pulse Interleaving. Say we switch our signal at a rate of 440 Hz. Or any rate, as a matter of fact. One would assume that the signal is on for half of the time, and off the other half of the time (which is what’s known as a square wave). But it doesn’t need to be that way. We can also create a signal that’s on for 1% of the time, and off 99% of the time. As long as we switch at 440 Hz, we still get a 440 Hz note. However, we now have a lot of space between the “on” pulses. Within that space, we can render a second train of pulses at a different frequency. The result will be that both frequencies are audible. This gives a very characteristic sound. This is pulse frequency modulation. Funny side note: that’s actually how your brain cells communicate with each other. Anyway, Tim Follin’s early works on ZX Spectrum are typical examples of that.
See, for example, this video.
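The pulse frequency modulation idea described above can be sketched in Python (a hypothetical illustration of the principle, not any actual 1-bit driver): each voice is a train of very narrow pulses, and the trains are simply OR-ed together, so both frequencies remain audible in a single 1-bit signal.

```python
# Hypothetical PFM sketch: narrow pulse trains, OR-ed into one 1-bit signal.

def pulse_train(freq, n_samples, rate=44100, width=3):
    """Pulses at `freq` Hz that are 'on' for only `width` samples each,
    leaving most of each period silent -- the space that PFM exploits."""
    out = [0] * n_samples
    period = rate / freq
    k = 0
    while k * period < n_samples:
        start = int(k * period)
        for j in range(start, min(start + width, n_samples)):
            out[j] = 1
        k += 1
    return out

def pfm_mix(freqs, n_samples, rate=44100):
    """OR several pulse trains together: the result is still a 1-bit
    signal, but every pitch in `freqs` is audible."""
    trains = [pulse_train(f, n_samples, rate) for f in freqs]
    return [max(samples) for samples in zip(*trains)]

duet = pfm_mix([440, 554], 44100)  # two notes in a single 1-bit stream
```

The low duty cycle of each train is what makes this cheap on slow CPUs: most of the time the routine only has to decide when the next short pulse is due.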

NB: How do you know this specific technique was used in this tune?

utz: The sound is very characteristic, 1-bit musicians will instantly recognize it. Also, we know because some clever people analyzed the music driver behind it.

NB: Yes, that’s a fascinating story of how people reverse-engineer other musicians’ programs.

utz: The Follin driver is also legendary because it was the first to do 5-voice polyphony on a 3.5 MHz machine with no sound hardware. The pulse interleaving technique, on the other hand, tackles the problem at hand from a completely different angle. Generally, pulse frequency modulation makes it possible to render a lot of voices even with slow hardware. The record is at 16 voices for the ZX Spectrum. Pulse interleaving has more constraints in that respect, because it requires tighter timing. The idea behind it is that loudspeakers have inertia. The hardware components in a computer have inertia as well, but to a much lesser degree. Generally, a CPU is much faster than a loudspeaker cone. So, when we switch our 1-bit signal from off to on, the speaker cone will (slowly, from the CPU’s perspective) start to extend outwards from its current, contracted position. While this is still happening, the CPU can already switch the signal off again. Which means that before the speaker cone has even reached full extension, it will start contracting again. By precise timing, we can therefore control the speaker cone position, or in other words, we can produce different volume levels through our 1-bit signal. With different volume levels, we can of course produce polyphony. Generally, this technique will result in a more typical “chiptune” sound, though other applications are possible.
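The pulse interleaving principle can be sketched as follows (a hypothetical illustration, not real driver code): within each short frame, the fraction of "on" samples sets the average speaker position, so summed voice amplitudes translate directly into duty cycles.

```python
# Hypothetical pulse-interleaving sketch: the duty cycle within a short
# frame acts as a volume level, because the inert speaker cone averages
# out the fast toggling.

def interleave_mix(voices, frame_len=16):
    """Mix several voices (lists of per-frame amplitudes, each 0.0-1.0)
    into one 1-bit stream. Per frame, the averaged amplitude decides how
    many of the `frame_len` samples are 'on'."""
    out = []
    for f in range(len(voices[0])):
        level = sum(v[f] for v in voices) / len(voices)
        on = round(level * frame_len)
        out.extend([1] * on + [0] * (frame_len - on))
    return out

# Two toy voices: one alternating loud/silent, one constantly loud.
voice_a = [1.0, 0.0] * 4
voice_b = [1.0, 1.0] * 4
signal = interleave_mix([voice_a, voice_b])
```

In the mixed stream, frames where both voices are loud come out fully on, while frames where one voice is silent are only half on and thus sound quieter, which is how volume levels, and hence polyphony, emerge from a strictly two-state signal.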

NB: It certainly requires a more complicated calculation than the previous technique.

utz: Yes, and what fascinates me the most is that it can be implemented in so many different ways. The funny thing is that for a minimal example, the actual code is nearly identical. Finding this similarity was a big “Aha!” moment for me. But generally pulse interleaving code tends to be more complex than pulse frequency modulation code.

NB: If I understand correctly, by creating cone movements of different amplitude, you basically make your own digital-to-analog converter which you can use to play anything, including polyphonic music (but also speech, for example).

utz: Yes, exactly. Speech is very simple, actually. Another way to put it is to say that for 1-bit sound, within a fixed interval time equals volume.

NB: Now, there is also a prehistory of 1-bit. There were mechanical devices that, to a degree, worked in the same way. For example, in the 17th century there were experiments with toothed wheels: in 1681, Robert Hooke presented to the Royal Society in London his experiments in which a rotating wheel created individual “clicks” with its teeth. If rotated quickly enough, it produced not a pulsating rhythm, but a tone. This was later called a Savart wheel. There were also mechanical sirens. With an arrangement of holes, tones were produced by interrupting the stream of air going through them. Different machines existed that were based on this principle; one of them (already in the 20th century) was the Rhythmicon, built by Leon Theremin and working with light instead of air. Sometimes, people would also put holes representing different frequencies into the same circular pattern, which basically corresponds to the “pulse frequency modulation” technique in 1-bit. This specific experiment was carried out in the 19th century by Friedrich Wilhelm Opelt.

utz: That sounds very much like an application of PFM. So, they could do polyphony?

NB: Yes, by combining holes referring to two different tones (or teeth in the case of the wheel). Of course, you could not switch tones as easily as with the computer.

utz: Hmm, if they had used a tape instead of a wheel it might have been possible.

NB: Too bad they can’t hear us! Now, coming back to 1-bit: during our conversation, I was listening to your album of 1-bit music, published under your artistic name “Irrlicht Project”. What was your own, personal experience in the creation of such music?

utz: Now that’s a long story… I’ll try to be brief. I started experimenting with computer music back in 2003-2004. I used the Fruity Loops software (a so-called digital audio workstation). I didn’t have any clue of what was going on in the outside world back then, since I had no internet. So, I just made music with sounds that I liked. Which happened to be mostly sounds based on simple waveforms, such as pulse/saw/sine etc. Then, some years later I discovered that there’s a whole scene revolving around that kind of aesthetic, called chiptune. I also discovered that there are these old computers that you can use to make this type of music. At the same time I was running into a sort of dead end with my digital-audio-workstation-based workflow. As these programs got more complex, and more synthesizers and effects started to arrive, I found myself more and more busy with tweaking sounds, rather than doing actual compositions. At that point I tried making music on the Atari ST computer, and it was a revelation. Suddenly, I could focus on composition again, because there’s only so much you can do to tweak the sound of the Atari’s sound chip.

NB: So, it was again all about the self-imposed technical restriction.

utz: Oh yes, absolutely. Though, for me it always felt like a liberation. However, as time went on, I discovered more and more ways to tweak the sound, and that became an issue again. At that point I discovered the works of a certain Mister Beep, who was pretty much the only active 1-bit musician at that time. That must have been around 2008-2009. Back then, the available tools for making 1-bit music were decades old, and extremely limited. Basically, they didn’t facilitate tweaks to sound at all. Still, those old tools proved to be a lot of fun, even though they were crude and super clunky to use. A few people started getting into 1-bit around the same time. Also, some new tools started to pop up. First, they were mainly ports of old drivers (versions of the musical software that were made compatible with a different computing environment), equipped with new, better user interfaces. Then, we got the first “new” drivers (most of them by Alex “Shiru” Semenov), and also a PC-based editor called Beepola. It was a cross-platform program, which means that it could run on different systems. At some point, I became interested in making my own driver.

NB: For that, you already must have been a programmer, not just a musician.

utz: Well, that was the problem, actually. I wasn’t. I hardly knew anything about programming. I’d had only three years of computer studies in school, mostly dabbling in the Turbo Pascal programming language, which I had all but forgotten at this point. Also, there’s pretty much only one way to implement 1-bit drivers (at least on the usual 8-bit hardware), and that’s by using the Assembly programming language. I tried to wrap my head around it for ages, with no success. But I had this vision. You see, other kids, they’d have Gameboys and whatnot at school, and a Sega or NES video game console at home. Well, I didn’t. I was the dorky kid who had… a graphing calculator. Everybody had one because it was mandatory, but I was one of the few people who actually spent a lot of time exploring the thing. Now, going back to 2011 or so. I still had that calculator, a Texas Instruments TI-82. At some point I realized that it has the same CPU as the ZX Spectrum home computer (for which many 1-bit tools were available at that time). So, I had this vision of porting one of the ZX drivers to the TI. And after many attempts, I finally succeeded in making the thing beep.

NB: That must have been a special feeling, given that you returned to a tool from your teenage years.

utz: Yes, very much so. As time went on, I became better at this Assembly thing. I made my first drivers of my own. Of course, they were bad, but, hey, I was making progress. Then, some friends found out about this thing I had done with the calculator, and started asking me about a music editor, one that could run natively on the TI. And so I sat down and did that. It took me about 6 months, I think, and the result was… hardly usable. Then, a couple of years later I re-did the whole thing, and this time it was somewhat decent.

NB: Do you have an example of this calculator music?

utz: Yes, here’s the “introductory video”. This one uses the pulse interleaving technique, by the way.

NB: Did you modify the calculator to give it an audio output?

utz: No, not at all. It has a “link port”, which is normally used to communicate between two calculators, or between a calculator and a PC. I’m using it for sound.

NB: A beautiful hack! I am sure that, for many people, making music on a calculator symbolizes the ages-long relationship between music and mathematics (and time, I would add, since this is also all about the correct timing).

utz: Oh yeah! So, over time, this programming aspect of the 1-bit universe has become increasingly prevalent in my work. This goes to the point that I now regard it as kind of an art form in itself. It’s not even so much about the music anymore, though I do still enjoy making a good tune. I guess that somewhat sums up my experience in 1-bit.

NB: That’s a great story of going from making music to making tools for making music.

utz: Let me try to find one more example perhaps, just to show what’s possible. This is one of my later drivers, called zbmod. It was used by Tufty, one of the most talented 1-bit musicians (who, by the way, has a new album coming out soon). This works on the classic Sinclair ZX Spectrum home computer. I’ve written many different drivers (or “engines”, as we call them in the 1-bit world). They are all open-source, by the way – including the calculator editor. The code is currently available on GitHub.

NB: Great! Well, there are so many things that we could add to this discussion of 1-bit, for example the early mainframe computer sounds that you have collected and presented in your recent talk in Berlin. But the nature of this phenomenon, which is still unfolding, is that we would need an infinite amount of time to fully describe it. Thus, I’d like to thank you very much for this conversation, and I am looking forward to hearing about novel creative approaches coming from this technology, which is both new and old.

utz: It’s been a pleasure. Thanks for giving me the opportunity to talk about my favorite subject!

Further reading:

Victor Adan, Discrete Time 1-Bit Music: Foundations and Models. PhD thesis, Columbia University (2010).

Kenneth B. McAlpine, “The Sound of 1-bit: Technical Constraint and Musical Creativity on the 48k Sinclair ZX Spectrum”, GAME 6 (2017).


See also the PhD project of Blake Troise and the recently published book Bits and Pieces: A History of Chiptunes by Kenneth B. McAlpine.

Nikita Braguinski’s research focused on the notion of unpredictability in game sound during his PhD work. His dissertation was recently published in German. At the moment, he is working on the predigital history of algorithmic music and on the mathematization of music in the Soviet musical avant-garde. His recent publications include an article on the notion of 8-bit in music and a discussion of the Speak & Spell electronic toy.

‘utz’ is a developer of sound drivers and music editors, with a passion for low-level synthesis algorithms. He is also an avid composer and performer of 1-bit music. He maintains a personal website with all his works.

Ludo2017 Conference Review by Ivan Mouraviev

Ludo17 Conference Report: Highlights and Themes

Ivan Mouraviev [1] reviews Ludo2017 for us, offering his thoughts on the experience.

Ivan is a student at the University of Auckland, New Zealand, where he specializes in game music. Ludo2017 was his first Ludo conference, where he presented a very well-received paper ‘Textual Play: Music as Performance in the Ludomusicological Discourse’.

Independent scholar Mark Benis, writing in his report for the 2017 North American Conference on Video Game Music, recently remarked that “video games have a way of bringing people together.” Indeed they do. This is how I felt at the sixth annual Ludomusicology conference, held over 20-22 April at Bath Spa University. The event was hosted by Professor James Newman and organised by Ludomusicology Research Group members Michiel Kamp, Tim Summers, Mark Sweeney, and Melanie Fritsch. [2] As a student and newcomer to the world of academic conferences, I did not entirely know what to expect at Ludo17. However, delegates and organisers alike were superbly welcoming. Being at the conference was a fun and intellectually stimulating experience from start to finish. With more than 40 attendees over three days, the conference featured 30 diverse papers, ranging from musicological and music-theoretical investigations of music in video games to studies of game music history, composition, technology, and performance. Indeed, diversity of approach and subject matter was a hallmark of the event. In what follows I report on the conference via a series of personal highlights, summarising what I found to be among the most significant research presentations; I also tease out emerging trends, questions, and possible points of departure for future research. The report is organised loosely around four themes rather than chronologically by presentation. Please forgive my inevitably many omissions.

 

  1. Constraints and affordances: game music technology and composition

Blake Troise opened the conference by presenting his research on the technological and creative affordances of 1-bit music: a sub-category of chiptune based on a single square wave. [3] As the name suggests, the synthetic process of 1-bit music imposes binary limitations—a square wave can only be produced at either a high or low amplitude (that is, an on or off signal). However, through a live demonstration, Troise showed how sophisticated polyphony, timbral variation, and even supra-binary amplitudes can be achieved with 1-bit chiptune—for example by exploiting the limits of perception (since discrete transients less than 100 milliseconds apart are perceived by the human brain as a single sound), and by using techniques like pin-pulse modulation (which can help avoid the mutual cancellation of two overlapping signals).

Composer Ricardo Climent offered a different flavour of research on a similar theme, also on the first day. Climent presented his fascinating use of the freely available game-design software Unreal Engine to unfold musical narratives ludically. [4] Specifically, this took the form of an interactive work titled s.laag, which serves as a game-level replica of the World’s Fair held in Brussels in 1958; primarily the player-character takes on the role of a bass clarinet to navigate through various mini-games and around architectural icons. Kevin Burke’s presentation was also retrospective but took a different, more analytical approach, examining how composer Hitoshi Sakimoto—of Final Fantasy and Valkyria Chronicles fame—utilised a custom Terpsichorean sound driver in the 1990s to produce musical results that significantly surpassed late-twentieth-century expectations for 16-bit sound synthesis. Come day three, Richard Stevens and Nikos Stavropoulos dealt with video game music from a more explicitly design- and implementation-focussed perspective, presenting some valuable techniques for manipulating and performing pre-composed sound in games (also using, like Climent, Unreal Engine). [5]

Ultimately it was Kenneth ‘Kenny’ McAlpine, though, the first of Ludo17’s three keynote speakers, who most compellingly synthesised the many diverse strands making up this broad theme of game music technology, composition, and affordances/constraints. [6] McAlpine showcased some of the research behind his forthcoming Bits and Pieces: A History of Chiptunes (Oxford University Press). He discussed the various affordances of technologies like the ZX Spectrum, Commodore 64, and more, presenting a broad range of historical and conceptual themes in a captivatingly personal way. Especially memorable was McAlpine’s emphasis on the idea that the near-total freedom of musical production available to us today, not least through digital audio workstations such as Apple’s Logic Pro, can be “crippling”. The goal of contemporary artistic practice—both within and beyond the realm of video game music—may not so much be a matter of “freedom of choice” as “freedom from choice”.

 

  2. Rule-bound musical play

What defines the “game” in “video game”? This was a question addressed by James Saunders, who highlighted Jesper Juul’s work on the topic (2003) as well as Huizinga’s important theorisation of play (1955), to pinpoint some insightful correspondences between rules in games and indeterminate music. [7] Saunders also noted how the structuring, constraining, and sometimes not immediately perceptible effects of video game rules can (re)present models of social interaction, and facilitate players’ agency by offering both choices and goals for game and music play. Two-way engagement between Twitch streamers and their often expansive audiences was raised as an example of such interaction in the discussion following Saunders’ presentation. Indeed, web audiences can significantly influence—and at times even determine—the structure and content of a streamer’s gameplay. Many streamers also publicly perform their musical taste by playing popular music as a kind of trans-diegetic underscore that can be structured by audience interaction and be experienced as both external (non-diegetic) and integral (diegetic) to the streamer’s ludic performance. The 2015 article “From Mixtapes to Multiplayers” by Michael Austin (who also presented a fascinating paper on the participatory musical culture of “Automatic Mario Music Videos” on Day 2) certainly comes to mind, for Austin’s examination of how different kinds of social video gaming can serve as gamified “transmutation[s] of the mixtape” and displays of curatorial control. [8] As the professional players of massive online battle-arena (MOBA) games like Dota 2 continue to attract large streaming audiences, and video games become increasingly formidable icons in popular culture more generally, the realm of game-like musical interactions in virtual spaces seems ripe for further scholarly investigation.
How, for example, are streamer-audience musical interactions shaped by the (in)formal rules that moderators enforce on platforms such as Twitch, perhaps contributing in turn to a broader fostering of online community?

On the broader theme of music and rule-bounded play it is hard not to mention the work of Roger Moseley. [9] On Day 3 Moseley presented a superb keynote that resonated with the approach and several themes within his recently published and open-access monograph Keys to Play (University of California Press, 2016). [10] The keynote was titled “Recursive Representations of Musical Recreation”, placing “recursion”—signalling basic repetition and looping, the successive executions that occur in computation, and more specifically a kind of historical ludomusical praxis—in the critical spotlight. One particular argument was for “recreation” as a potentially more critically rewarding notion than “reproduction” when dealing with the recursive nature of ludomusicality, since “reproduction” has been historically more closely associated with a decidedly “serious” “phonograph ideology” rather than intrinsically creative and performative action (an association no doubt spurred by, or at least reflected in, Adorno’s and Walter Benjamin’s famous twentieth-century critiques of commercial culture). The first known use of the term “ludomusicology” can be traced to digital-game researcher and music theorist Guillaume Laroche in 2007; nevertheless, Moseley’s contributions to our understanding of the implications of the term “ludomusicology”—broadly construed as the study of music and play—have been seminal. [11] This is evident not only in Keys to Play, but also in the 2013 chapter “Playing Games with Music”, which elaborates play theory by Huizinga and Roger Caillois in the context of Guitar Hero after a much-needed historicization of work and play. [12] Indeed, central to Moseley’s work has been the goal of putting “play on display” in historical terms within a “media archaeology” framework, illuminating the possibility that “notions and terminology associated with digital games are capable of enlightening historical ludomusical praxis, just as the latter informs the former.” [13]

 

  3. Video game music as performance and/or culture

Several papers dealt with video game music and broader notions of culture, performance, or both. Presenting on the first day, Donal Fullam discussed how video game music can be understood as an expression of “algorithmic culture”. [14] For Fullam this cultural expression is a relatively recent incarnation of a more long-standing impulse, one that “treats music as an algorithmically determined system” and can be traced to the twentieth century avant garde and even further, to the foundations of functional harmony (which in turn represents a more basic tendency to systematise musical sound as a “cultural articulation”). A similar theoretical view of music as performing cultural and aesthetic functions was explored on Day 2 by Edward Spencer. His study investigated the bass-music signification and broader sociopolitical implications of Major League Gaming Montage Parodies, or MLGMPs. These represent a specific music video genre that employs audiovisual memes and “canonic” dubstep tracks by the likes of Skrillex to parody montages of skillful first-person shooter gameplay. [15] As Spencer convincingly showed through a critique of recent postmodern theory around notions of meaninglessness in contemporary culture, MLGMPs should not be automatically dismissed simply because they may, at first glance, seem to represent “ultimate” instances of “media convergence and ludic semiotic excess”.

Melanie Fritsch, also presenting on Day 2, introduced and applied a theoretical platform for the analysis of music in video games. She principally argued that music in video games may be, but so far largely has not been, studied through the lens of interdisciplinary performance studies—which generally favours an ontology of music that is necessarily behavioural and social. [16] Fritsch did note, however, that scholars such as Tim Summers, Kiri Miller, and Karen Collins (and, I would add, William Cheng) have started to investigate music in video games beyond the basic paradigm of musicological close reading; both Miller and Cheng have favoured ethnographic paradigms, while Summers is broadly interdisciplinary and Collins has tended towards embodied cognition and performance analysis. [17] Fritsch also introduced the German terms Aufführung and Leistung for understanding performance in a novel and more multi-dimensional way, with the former referring to presentation, aesthetics, and artistry and the latter encapsulating notions of skillful display, effort, and efficiency. Fritsch’s transnational perspective resonates with Moseley’s valuable historicisation of work and play in that both serve as a reminder that fundamental terms in music scholarship like “performance” and “play” are historically and socially contingent. Indeed, what one group of gamers or scholars regards as “play”, whether ludically or musically or both, may take on dramatically different meanings across different times, spaces, or sociocultural settings. Or, put differently, the somewhat taken-for-granted idea that both games and music are inherently playful may be more thoroughly examined in a more empirically grounded, historically and socially (and perhaps even politically) specific way.

This last question may apply equally to video game music—that which is produced, performed, and listened to beyond conventional gameplay, such as in the concert hall. Video game music in this sense was explored in a concentrated and lively manner across four back-to-back presentations in Session 7, titled “In Concert”. In the first half of the session, Joana Freitas and Elizabeth Hunt drew attention to how notable organisations like Video Games Live have sought to “gamify” the concert hall in order to achieve “collaborative immersion and experience”. [18] James S. Tate and Ben Hopgood then dealt more specifically with music associated with Japanese Role-Playing Games (JRPGs) and Final Fantasy respectively; Tate presented convincing evidence for, and hypotheses to explain, the widespread popularity of JRPG soundtracks in concert performance, while Hopgood’s study was more analytical in discussing the easy-to-forget but nevertheless prominent “classical music identifiers” that video game music often carries as part of its dense semiotic baggage. [19] Though it was only mentioned in passing, an exciting and potentially highly rewarding direction for future research in this area is the ongoing global concert tour of thatgamecompany’s broadly well-received PS3/4 title Journey; the tour features Chicago’s Fifth House ensemble performing the game’s soundtrack in real time in response to the actions of four-to-six players on stage. [20]

 

  1. Learning music through games and vice versa: video game pedagogy

Talks on the role of video games—and principles of play more generally—in education made up only a small portion of Ludo17; however, the quality of the research presented and the potential for growth on this theme certainly warrant its own sub-heading. On Day 3 Meghan Naxer brought to light how video game principles and practices can be fruitfully manifested in the classroom. [21] A personal anecdote in this regard was especially revealing: after Naxer responded to student email queries with indirect suggestions of literature and other resources, her students interpreted the interaction as a game-like “side quest” and subsequently became all the more excited to engage in independent study. Jan Torge Claussen next presented his ongoing research with 18 students learning to play guitar through Rocksmith, the decidedly more education-oriented competitor of Guitar Hero and Rock Band. [22] Claussen’s students have been video recorded and have completed journals detailing their experiences with the game; early findings tentatively suggest that Rocksmith may be a useful means of learning to play Rocksmith itself rather than of gaining guitar proficiency in general.

 

Concluding remarks

In closing, I would like to draw attention to three talks that were especially intellectually stimulating but do not fall neatly under any of the thematic categories used above. Firstly, Stephen Tatlow and George Marshall expertly examined complex questions of music and diegesis, through voice communication in the science-fiction MMORPG EVE Online and popular music in the racing title Forza respectively. [23] Implicit in Tatlow’s discussion was the possibility for in-game diegetic voices to function musically, or rather for music to function as a player’s in-game diegetic voice—as music arguably already does in Journey, where the only means of direct communication involves performing short musical pulses in the absence of conventional text- and voice-based chat. Secondly, James Tate discussed the problematic potential of developing a video game music studies canon, an especially important issue that we need not inherit from popular music studies and the Western ‘art’ music realm. [24] As Tate’s research showed, though, nostalgia is already a potent structuring force in steering which titles are most prominent in game studies discourse. How, going forward, will we negotiate our personal tastes with academic integrity and maintain a field driven by egalitarian values that emphasise the embracing of diversity? Thirdly, Michiel Kamp’s “Ludo-musical Kuleshov?” drew much-needed attention to the importance of understanding the psychology of video game music perception and affect, including how strongly our interpretations of music can be guided by on-screen images and vice versa. [25] Kamp also presented the exciting potential of his ludo-musical (practice-led) research paradigm, whereby a relatively simple game design allowed flexible and iterative reformulation of research questions as uncertainties were clarified or new questions arose. In turn this brought to light how empirically grounded musicological study tends to exist at the broader intersection of the ‘hard’ sciences and the humanities, drawing on the principles and techniques of both.

Finally, it is worth highlighting the concert curated by Professor James Saunders (with thanks to Alex Glyde-Bates) held at the end of Day 2. The performed works included the playful and aesthetically engaging, as was the case with Troise’s chiptune piece “FAMIFOOD” and Clement’s live play-through of s.laag, as well as more overtly unconventional and thought-provoking compositions by Louis d’Heudieres and Ben Jameson, which explored, by ludic means, the ontological boundaries of “authentic” live performance—through a rule-based approach and Guitar Hero respectively. [26] Jameson’s piece in particular stood out as a novel compositional and performative elaboration of the seminal Guitar Hero research carried out by Kiri Miller. I believe our broad and fast-growing field of video game music studies should continue to feature, and therefore encourage, more work in this vein of artistic practice as research, which includes the studies by Clement and Kamp mentioned above; it is an emerging paradigm that has long been accepted in the visual and dramatic arts as a valid means of producing knowledge but remains relatively under-theorised and under-developed in music. [27]

In summary, Ludo17 was diverse, fun, and intellectually stimulating; it featured student and early-career researchers alongside established scholars; and it did what arguably most ‘good’ scholarship should do: open up, rather than close off, new and exciting lines of inquiry. To the curious reader I highly recommend the #Ludo2017 Twitter feed as well as the booklet of abstracts for a more comprehensive look at the diversity of research presented beyond what I have been able to discuss here. I very much look forward to next year’s conference and wish to thank the organisers for a fantastic event.

 

Notes

  1. Ivan Mouraviev, BMus/BSc in musicology and biological sciences; currently undertaking a BMus (hons) in musicology at the University of Auckland, New Zealand.
  2. For the organisers’ biographies please see http://www.ludomusicology.org/about/.
  3. Blake Troise, University of Southampton.
  4. laag was composed especially for Dutch bass clarinettist Marij Van Gorkom, as part of the http://dutch-UK.network project started in 2015. For more information see www.game-audio.org.
  5. Richard Stevens, Leeds Beckett University; Nikos Stavropoulos, Leeds Beckett University. Stevens has co-authored with David Raybould the monograph Game Audio Implementation: A Practical Guide Using the Unreal Engine (Waltham, MA: Focal Press, 2015).
  6. Kenneth McAlpine, University of Abertay, Dundee.
  7. James Saunders, Professor of Music, Bath Spa University.
  8. Austin, “From mixtapes to multiplayers: sharing musical taste through video games,” The Soundtrack 8/1–2 (2015), 77–88.
  9. Roger Moseley, Assistant Professor in Musicology, Cornell University.
  10. Keys to Play is freely accessible at http://www.luminosoa.org/site/books/10.1525/luminos.16/.
  11. See Tasneem Karbani, “Summer research project was music to student’s ears,” folio, University of Alberta, published 7 September 2007, accessed 23 May 2017, https://sites.ualberta.ca/~publicas/folio/45/01/04.html.
  12. In Nicholas Cook and Richard Pettengill (eds), Taking it to the Bridge: Music as Performance (Ann Arbor: University of Michigan Press, 2013), 279–318.
  13. Keys to Play, 7.
  14. Donal Fullam, PhD Candidate, University College Dublin.
  15. Edward Spencer, DPhil Music student, University of Oxford.
  16. Melanie Fritsch M.A., PhD Candidate, University of Bayreuth.
  17. See, for example: Miller, Playing Along (Oxford: Oxford University Press, 2012); chapter four of Cheng, Sound Play (Oxford: Oxford University Press, 2014); and Tim Summers, “Communication for Play,” in Understanding Video Game Music (Cambridge: Cambridge University Press, 2016), 116–142.
  18. Joana Freitas, MMus, Universidade NOVA de Lisboa; Elizabeth Hunt, University of Liverpool.
  19. James S. Tate, PhD Candidate in Musicology, Durham University; Ben Hopgood, Musicology, Goldsmiths, University of London.
  20. A recent review with Fifth House performers by CBC news is particularly illustrative of the unique challenges and interactive components of Journey: Live. See http://www.cbc.ca/beta/news/canada/calgary/journey-game-soundtrack-live-1.4100542.
  21. Meghan Naxer, Assistant Professor of Music Theory at Kent State University.
  22. Jan Torge Claussen, PhD Candidate, University of Hildesheim.
  23. Stephen Tatlow, MMus, Royal Holloway; George Marshall, Musicology, University of Hull.
  24. James Tate, BMus University of Surrey.
  25. Michiel Kamp, Junior Assistant Professor in Musicology, University of Utrecht.
  26. Louis d’Heudieres, Bath Spa University; Ben Jameson, composition PhD Candidate, University of Southampton.
  27. For an up-to-date account of artistic practice as research in music in both theoretical and practical terms, see: Mine Dogantan-Dack (ed.), “Introduction,” in Artistic Practice as Research in Music: Theory, Criticism, Practice (Farnham and Burlington: Ashgate, 2015).

 

Editing ‘Music Video Games: Performance, Politics, and Play’ by Michael Austin

Michael Austin gives us a little insight into his new anthology of essays on video game music,

Music Video Games: Performance, Politics, and Play.

 


Thanks to the hard work of a handful of dedicated ludomusicologists (from a variety of academic fields), I’m very happy to announce that Music Video Games: Performance, Politics, and Play was released last month by Bloomsbury Academic Press!

The book is the first anthology dedicated solely to the genre of music video games, stretching well beyond Guitar Hero and Rock Band to include handheld games (such as SIMON from the late 1970s), mobile music games, and music making and the representation of musicians in games in which performing music or rhythm matching isn’t necessarily the main objective. Other chapters investigate themes of composing with video games, authenticity and “selling out,” and pedagogical uses for music games.

The book is part of Bloomsbury’s Approaches to Digital Games series (Gerald Voorhees and Josh Call, series editors). It was released on July 28, along with Gareth Schott’s Violent Games: Rules, Realism, and Effect – a monograph that investigates the mediation of violence in video games and gameplay.

In addition to excellent chapters by an international collection of scholars, Music Video Games also includes a “Glossary of Gaming and Musical Terms” – for the benefit of non-specialists in either field.

 

Many thanks to the scholars who contributed chapters to the project. Their chapters are listed below.

You can get your own copy of the book here. You can get 30% off of the price of your copy when you use the code “game studies” at checkout.

For more information about Bloomsbury’s Approaches to Digital Games Studies series (including current and pending volumes), or to propose a volume of your own, visit the series website here.

 

 

Introduction – Taking Note of Music Games (Michael Austin, Howard University, USA)

Part One: Preludes & Overtures
Chapter 1 – SIMON: The Prelude to Modern Music Video Games (William M. Knoblauch, Finlandia University, USA)

Chapter 2 – Mario Paint Composer and Musical (Re)Play on YouTube (Dana M. Plank, Case Western Reserve University, USA)

Chapter 3 – Active Interfaces and Thematic Events in The Legend of Zelda: The Ocarina of Time (1998) (Stephanie Lind, Queen’s University, Canada)

Chapter 4 – Sample, Cycle, Sync: The Music Sequencer and its Influence on Music Video Games (Michael Austin, Howard University, USA)

 

Part Two: Virtuosi, Virtues, & the Virtual
Chapter 5 – Consumerism Hero: The “Selling Out” of Guitar Hero and Rock Band  (Mario A. Dozal, University of New Mexico, USA)

Chapter 6 – Beat It! Playing the “King of Pop” in Video Games (Melanie Fritsch, University of Bayreuth, Germany)

Chapter 7 – Virtual Jam: A Critical Analysis of Virtual Music Game Environments (David Arditi, University of Texas at Arlington, USA)

 

Part Three: Concerts, Collaboration, & Creativity
Chapter 8 – Guitar Heroes in the Classroom: The Creative Potential of Music-Games (David Roesner, University of Kent, UK, Anna Paisley, Glasgow Caledonian University, UK, and Gianna Cassidy, Glasgow Caledonian University, UK)

Chapter 9 – Rocksmith and the Shaping of Player Experience (Daniel O’Meara, Princeton University, USA)

Chapter 10 – Rhythm Sense: Modality and Enactive Perception in Rhythm Heaven  (Peter Shultz, University of Chicago, USA)

Chapter 11 – Pitching the Rhythm: Music Games for iPad (Nathan Fleshner, Stephen F. Austin State University, USA)

 

Afterword – Toadofsky’s Music Lessons (William Cheng, Dartmouth College, USA)

 

Glossary of Gaming and Musical Terms
About the Contributors
Author Index

Game Index

General Index

 

 

How to Find Work Online as a New VG Composer

Contributor: Chris Lines (http://www.gamecomposeradvantage.com/) shares his advice on becoming a successful video game composer. This is a short version of a longer series of articles from Chris’s site to help game composers. You can check out the longer in-depth versions here.

Many composers have either studied music formally for a long time or are self-taught to a pretty good level, and yet they haven’t actually worked on any video games at all, let alone been paid for one.

I was in a similar position until a few years ago… I’d been writing music since I was fifteen, been in bands, and had my own studio set up for years. But apart from a small amount of production music and the odd student film, I had never really achieved that much. I decided something had to change…

I noticed that there were plenty of game composer websites talking about VSTs and DAWs, but none on the actual hard work of freelancing. So I invested thousands of pounds in the best freelancing courses and books I could find, and learned about positioning, pitching, selling, and running a freelance business in general. What I learned wasn’t specifically tailored to musicians – most of my fellow students were in fact designers, photographers, web developers, or other freelancers – but I found universal lessons that could be applied to music too.

What Most Composers Do Wrong

It’s all too common to see posts on game developer forums where composers are offering their services – often for free. I have never done this. If a composer does get an answer, they’ll generally be asked to write for free, or for ‘exposure’. More likely than not, they just won’t get a reply. When they aren’t inundated with offers to write music, they get disappointed. “Why on earth not?”, I hear them cry. “I’m offering to write for free! What could be better than that, right?”

Most composers don’t see things from a developer’s perspective, though. Try it for a moment – why would a developer trust this person who posted on a forum offering to work for free? Is this how a professional composer would act?

There Is Another Way

What I quickly learned from my studies is the power of the hustle: rather than posting adverts on forums and waiting for the phone to ring, I began spending time upfront researching the most suitable developers, picking the games I really wanted to work on, and only then contacting the developers directly. Things suddenly seemed a lot more hopeful.

Now, I rather glossed over the part where I mentioned research – but this is essential, and it is where most of the effort should go. There’s no point pitching to just anyone who is making a game. You need to choose carefully – take your time. The best places to look are game developer forums where devs post about what they are working on, but there are also sites like Kickstarter. Here’s a link to Quora with some suggestions of game developer sites.

And once you have found a game you like the look of, you need to find the developer’s email address. Sure, you could contact them via the forum, but I think email is best. You might have to do some digging and Googling to get an email address, but again, it’s worth it. Once you have it, you can quite honestly tell them who you are and what you do, and genuinely offer to help. It’s not magic – just maybe a bit braver than the average composer, and that’s the point. You don’t want to be the same as everyone else.

Get Used to Hustling

It has to be said, nine times out of ten a cold pitch doesn’t work. Game devs either already have a composer, have settled on an alternative approach to the music, or simply weren’t a good fit in the first place and don’t reply. Don’t worry! Keep trying, and occasionally… just occasionally… it does work.

Even with the right research, cold pitching is a numbers game. You’ll send out dozens and dozens of emails before you get any interest. And even when you do, you might only get a ‘maybe’. It’s then your job to keep in touch, keep pitching, and keep making contacts, and eventually something good will happen.

The point of this article is to show one method of finding work online. There are others, and I should make the point that real-life meet-ups, conferences, and networking are just as important – they just aren’t the focus of this article.

What If You Aren’t Ready?

I’ve found a lot of composers are put off getting themselves out into the market because they feel they aren’t ready. This could be for a variety of reasons:

  • they don’t have a good enough website or portfolio,
  • they don’t know enough about games in general or about interactive music,
  • plus many other reasons.

You should at the very least have some kind of portfolio showing off your music, even if this is just a SoundCloud page. Otherwise, how on earth will a developer hear what you can do? More than that is obviously nice – a smart, clean website with a dedicated portfolio section and maybe a blog – but it’s not needed in the beginning.

As for having expert knowledge of interactive music and middleware? In reality, for your first few gigs as a game composer you aren’t going to need to know much of this stuff, if anything. Don’t wait until you are ready… take action now and learn as you go.
