Ludo2016 Announcement

We are thrilled to announce that Ludo2016 will be taking place on 8th-10th April 2016 at the University of Southampton. This will be our fifth anniversary, after conferences at Oxford, Liverpool, Chichester, and Utrecht Universities, and therefore it is a particularly special, celebratory occasion for us. We are very proud of and grateful for the fantastic and diverse contributions to past events, and are looking forward to what we hope will be the best Ludo conference to date! We’ll be putting out a Call for Papers as soon as possible, but whether you’re able to submit or not, make sure you save the date for what promises to be an auspicious event.

Ludo2016 will be hosted by Kevin Donnelly, the Music Faculty and the Film Faculty at the University of Southampton. You can expect more details to follow in due course; keep an eye on the website for further announcements in the coming weeks!

Ludomusicology: Approaches to Video Game Music

Our forthcoming book, Ludomusicology: Approaches to Video Game Music, will be published by Equinox Press next year as part of their Genre, Music and Sound series! The chapters largely originated from our inaugural conference in Oxford. We hope to share more news on the book at Ludo2016 (details to follow in due course!).

A permanent page has been created here with the latest information on the book, including the Table of Contents. Thank you to all our wonderful contributors – now Ludo regulars – we are very proud to be presenting your work in what promises to be an exciting contribution to the field!

Michiel, Tim & Mark

CFP: North American Conference on Video Game Music

Deadline: October 1, 2015

Conference Dates: January 16–17, 2016

Conference Venue: Davidson College (Davidson, NC)


Scholars are invited to submit proposals for the third North American Conference on Video Game Music, which will take place January 16–17, 2016 at Davidson College in Davidson, North Carolina. The conference organizing and program committee is composed of Neil Lerner (organizer, Davidson College), William Gibbons (TCU), Steven Reale (Youngstown State University), James Buhler (UT Austin), Karen Cook (University of Hartford), and Elizabeth Medina-Gray (Humboldt State University).

The keynote speaker for this year’s conference will be ethnomusicologist Kiri Miller (Brown University), author of Playing Along: Digital Games, YouTube, and Virtual Performance (Oxford University Press, 2012).

We are soliciting proposals for presentations on any aspect of music in games, including, but not limited to:

  • The history of music in video games
  • Approaches to analyzing game music
  • Intersections of game music and other media (film, TV, etc.)
  • Critical and/or hermeneutic approaches to game music
  • Case studies of particular games
  • Game music and pedagogy
  • Ethnographic approaches to game music

Additional information regarding the conference:

  • Papers will be twenty minutes in length, with an additional ten minutes for discussion.
  • Proposals are limited to 250 words, and should include the title of the paper, but should otherwise include no identifying information, including metadata.
  • In the body of an e-mail, include your name, institutional affiliation (if applicable), contact information, and the title of your paper.
  • Email proposals to vgmconference AT mail.com by October 1, 2015. Successful candidates will be notified on or around November 1, 2015.

For further information, please feel free to contact Neil Lerner (nelerner AT davidson.edu) or direct questions to the conference email address (vgmconference AT mail.com).

Compositional Strategies For Programmable Sound Generators With Limited Polyphony

We are proud to publish the following article by Blake Troise (ThinkSpace, ProtoDome) as part of our contributor articles series. Feel free to leave comments, and do let us know if you would like to send us articles to share with the wider community!

Contributor: Blake Troise

A programmable sound generator (PSG) is an integrated circuit (IC) with the ability to generate sound by synthesizing basic waveforms.1 PSGs are often called sound chips; however, not all sound chips are PSGs. PSGs were designed to be instructed by software commands and would usually be housed alongside a microprocessor as part of a computer system. The general benefit of the sound chip was that audio processing could be delegated to a dedicated system, freeing processing cycles for other functions, or simply, as Radio-Electronics magazine explained in 1981, “[controlling] music or sound effects from software, without overtaxing the computer”. One of the main reasons for the popularity of PSGs was that microcontrollers capable of generating pitches had become cheap enough to manufacture at the beginning of the 1980s.23 This made computational sound generation a commercially viable option as part of an affordable home system. As such, PSGs were commonly utilised in the video game systems of the 1980s to mid-1990s, for example the Nintendo Entertainment System (Ricoh 2A03/2A07)4, Atari 2600 (Atari TIA)5, Sega Master System (Texas Instruments SN76489A, Yamaha YM-2413)6 and numerous others. The chips were also included in early home computers7, especially those with gaming capabilities, the most popular example being the Commodore 64’s PSG, the SID chip8. Perhaps one of the main reasons for the ubiquity of PSGs in video game applications is the medium’s requirement for a multimedia experience9, with the desire for sound effects and music in electronic games predating software games10.

Due to the prevalence of inexpensive computers, the eighties saw programmer and composer practices converge, driven by the accessibility of software tools and the resulting growth in entertainment software development.111213 Many of these musicians, such as Hirokazu ‘Hip’ Tanaka, treated the computer (in Tanaka’s case, the Nintendo Entertainment System (NES)) as a means of expressing ‘serious’ music and approached their composition as such.14 Each PSG, however, had numerous musical limitations to which the composer had to adhere.15 One of the most compositionally influential of these restrictions (and a defining characteristic of computer music of the period) is the PSG’s limited polyphony.

The number of voices a PSG could provide varied between chips. The NES’ Ricoh 2A03/2A07 had five separate ‘channels’ (individual functions for generating a single waveform16): two pulse wave channels, one triangle channel, one pseudo-random noise generator and a rarely used Delta Modulation Channel (DMC) for playing Differential Pulse-Code Modulation (DPCM) samples1718 (Figure 1). The Commodore 64 had three channels with a very similar set of waveforms to the Nintendo Entertainment System, though, unlike those of the NES, these were not restricted to a single wave function.19 Other chips such as the General Instruments AY 3-8910 (found in the MSX computer2021 and Sinclair ZX Spectrum 12822, to name two popular examples) offered a similar set of waveforms and a similar number of channels; in the AY 3-8910’s case, three square wave channels and a pseudo-random noise generator.23 The limited number of channels each PSG provided posed a significant compositional challenge: how best to maximise the musical content with only a few voices.

Figure 1. Oscilloscope examples of the common basic waveforms various PSGs provided. Top Left: Saw Wave. Top Right: Pulse/Square Wave. Bottom Left: Triangle Wave. Bottom Right: Noise.
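For comparison, the channel layouts described above can be written out as plain data; the following is an illustrative sketch only, not any particular chip’s driver interface:

# Illustrative only: the channel layouts described above, as plain data.
PSG_CHANNELS = {
    "Ricoh 2A03/2A07 (NES)": ["pulse 1", "pulse 2", "triangle", "noise", "DPCM (DMC)"],
    "MOS SID (Commodore 64)": ["voice 1", "voice 2", "voice 3"],  # each voice can select its own waveform
    "GI AY 3-8910 (MSX, ZX Spectrum 128)": ["square 1", "square 2", "square 3"],  # plus a noise generator mixed into the channels
}

for chip, channels in PSG_CHANNELS.items():
    print(f"{chip}: {len(channels)} channels ({', '.join(channels)})")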

Writing music with limited polyphony is not unique to the sound chip; for hundreds of years composers have written for three, two and even single voices.24 Pieces such as Adagio in Bb (for two clarinets and three basset horns) by Mozart25, Jägerlied by Schubert (for two horns or voices)26, most of Bach’s famous chorales27 and even solo piano works for a single performer make use of a small collection of monophonic voices (or fingers!) to create music. Even commercial synthesisers during the seventies and eighties would occasionally be dedicated to producing a single, monophonic voice2829 and often, when polyphony was available, it was still limited to a maximum of only eight voices.303132 What separates these composers’ works (and other electronic hardware) from PSG composition is the idea of necessity. Whilst traditionally the basic form of the woodwind ensemble is the quintet33, this is usually a creative choice and can be “expanded and contracted to meet the needs of the composer”. The piano composer in desperate need of further polyphony can simply add another performer, gaining access to a further set of ten (or more) notes. No such luxury was (in most cases34) available to the sound chip composer.

Perhaps the simplest (and most common) approach seen in PSG composition was to consider each channel as an individual instrument, a similar method to the aforementioned ensemble writing.35 The iconic Super Mario Bros theme36 by K. Kondo37 is a good example of this process (Figure 2). All three pitched channels of the Nintendo Entertainment System’s PSG move together as a three-part harmony for the first thematic section. In the second section of the piece, the triangle channel diverges from rhythmic unison and plays a very simple arpeggiating bassline. This approach can be seen in video game soundtracks such as Mega Man 2 (1988) by T. Tateishi and M. Matsumae (programmed by Yoshihiro Sakaguchi)38, Legacy of the Wizard (1989) by Y. Koshiro39 and Castlevania (1986) by S. Terashima and K. Yamashita (programmed by H. Maezawa)40 (Figure 3). In fact, these soundtracks work so much like a traditional three-part ensemble (with a fourth percussion instrument) that they have inspired a plethora of YouTube a cappella covers (of varying success) utilizing the original writing for each channel.4142434445

Figure 2. Manuscript representation of the Super Mario Bros theme (first and second thematic sections).

Figure 3. Manuscript representation of the Legacy of the Wizard ‘intro’ theme.

The ubiquity of this practice has resulted in the emergence of a common technique found in Nintendo Entertainment System music, dubbed the Famichord by chip musician Linus ‘LFT’ Akesson46 (Figure 4). The Famichord is essentially the removal of the fifth from a four-note major or minor seventh chord (Maj7omit5 or m7omit5) to fit the NES’ three-channel limit. Whilst not unusual in wider musical practice, the fact that many composers independently utilised this technique makes it distinctive in PSG compositional procedure. Examples can be found in the NES soundtracks to Mega Man 2 (1988)47, DuckTales (1989)48, Super Mario Bros (1985)49 and various others (Figure 5).

Figure 4. Manuscript example of the C Major 7th Famichord.

Figure 5. D minor seventh (b. 1) and C major seventh (b. 3) Famichord in the Super Mario Bros ‘Invincibility’ theme.
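As a rough sketch of this reduction, assuming MIDI note numbers and a hypothetical helper (the function below is illustrative and not drawn from the article or Akesson’s writing):

# Hypothetical helper (illustrative, not from the article): reduce a
# four-note seventh chord to three pitches by omitting the fifth, so it
# fits the NES' three pitched channels.
def famichord(root, quality="maj7"):
    """Return a (root, third, seventh) voicing as MIDI note numbers."""
    intervals = {"maj7": (0, 4, 11), "m7": (0, 3, 10)}  # the fifth (7 semitones) is dropped
    return [root + i for i in intervals[quality]]

print(famichord(60, "maj7"))  # C major seventh Famichord: C, E, B -> [60, 64, 71]
print(famichord(62, "m7"))    # D minor seventh Famichord: D, F, C -> [62, 65, 72]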

The reason the ‘ensemble’ method of PSG writing was so popular was possibly because it was a very simple way of interfacing with the chip. Examining the documentation for the various sound chips, the microcontrollers expected to be instructed with a simple ‘channel, pitch, duration’ format50515253, not unlike the way the MIDI (Musical Instrument Digital Interface) standard operates today.5455 A melody is easily built from instructions in this way and can be (and often was) very simply converted from score to code.565758 This could be done either by the composer or by a programmer given the score to transcribe, which was common practice.596061 The downside to this technique is that the music is limited to as many instruments as there are PSG channels, resulting in a texturally ‘thin’ soundscape. To fill the space, the prevalent mentality of trying to recreate externally composed music on the PSG had to shift to writing music for the PSG: treating the device as a unique, independent sonic medium.
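A minimal sketch of that ‘channel, pitch, duration’ idea, using illustrative names and values rather than any specific chip’s register interface or music driver format:

from collections import namedtuple

# One event per channel, in the 'channel, pitch, duration' spirit described above.
Note = namedtuple("Note", ["channel", "pitch", "duration"])  # duration in frames

score = [
    Note(channel=0, pitch="E5", duration=8),  # pulse 1: melody
    Note(channel=1, pitch="G4", duration=8),  # pulse 2: harmony
    Note(channel=2, pitch="C3", duration=8),  # triangle: bass
]

for note in score:
    # A real music driver would translate each event into register writes;
    # printing stands in for that step here.
    print(f"channel {note.channel}: play {note.pitch} for {note.duration} frames")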

The first technique to artificially expand the sonic environment is channel sharing, or splitting parts over a single voice.62 This technique is especially effective when two instruments can be distinguished from one another utilising the unique functions of the PSG channel. Often, each channel had an alterable amplitude for ADSR (attack, decay, sustain and release) envelope shaping63646566, and pulse waves frequently had the ability to alter their duty cycle (the proportion of each wave cycle spent high versus low), both techniques for textural changes and the emulation of instrumental transients.67 Even more basically, a change in pitch will dictate a sequence’s sonic role (Figure 6). Figure 6 is a piece written by the author for a single beeper, demonstrating the channel sharing procedure at its extreme. The bass line is separated from the melody mainly by pitch, yet gives the impression of being an individual instrument. Percussion is distinguished from the other instrumentation by rapidly altering the channel’s pitch semi-randomly with a downward trend. The final element in the expansion of the soundscape is structural: by frequently altering the instrumental character in a rhythmic fashion, the listener has the impression that multiple voices are present, all achieved on a single voice. This false polyphony significantly alters the compositional process; the chip musician looks for ‘gaps’ in the melody in which to fill out the composition.

Figure 6. Scored extract from an original composition for an ATMega328 microcontroller, using only a single voice. Percussion is created by rapidly changing pitch.
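A toy sketch of this gap-filling process, using illustrative note names and a single shared event list (this is not the notation or data format of the piece in Figure 6):

# One monophonic channel: bass notes are slotted into the rests ('gaps')
# of the melody, so the ear hears two instruments from a single voice.
melody = ["E5", None, "G5", None, "C6", None, "G5", None]  # None = rest
bass   = ["C2", "C2", "G2", "G2", "A2", "A2", "G2", "G2"]

shared_channel = [m if m is not None else b for m, b in zip(melody, bass)]
print(shared_channel)
# ['E5', 'C2', 'G5', 'G2', 'C6', 'A2', 'G5', 'G2'] -- one note at a time,
# yet melody and bass remain distinguishable by register.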

PSGs often had multiple channels, however, and more waveforms than a single pulse/square wave. The Commodore 64 was particularly flexible, as each channel could alter its waveform, allowing for more freedom when composing for the chip.68 One notable example of this technique is T. Follin’s work on the NES game Silver Surfer (1990), regarded as one of the best PSG soundtracks of the era.697071 Follin utilises the triangle channel as both bass and drums whilst frequently switching melodic ‘licks’ between the available channels.7273 Lead instrumentation is given to the first two pulse channels, handled by varying ADSR envelopes and altering pulse width; when a pulse channel is unused, it doubles up with the bass to provide a thicker texture. Each channel is always doing something, helping to imply a greater polyphony than simply four voices.

The most common use of the channel sharing technique in other soundtrack writing was in the pseudo-noise channel included in various PSGs7475767778, typically dedicated to accompanying the melodic or pitched elements with percussion mimicking the characteristics of an acoustic drum kit (Figure 7).798081 This technique works because a repeated hi-hat figure is still heard over a snare, kick or other percussive element, even though, when the parts share a single channel, it is not actually present at that moment. Again, this fills the soundscape with a ‘false’ polyphony, using listener expectation to ‘fill in the gaps’.

Figure 7. Typical PSG noise channel writing for a drum kit pattern. Kick, snare and hi-hat are all represented on a single channel. Vertical placement is representative of speed of noise randomization, which is perceived as a change in pitch.
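A toy sketch of the pattern in Figure 7, with illustrative drum names standing in for noise-channel settings (not ppMCK syntax or any particular driver’s data format):

# A 16-step pattern on a single noise channel: kick and snare hits
# displace the hi-hat on their steps, yet the hi-hat still seems continuous.
steps = 16
accents = {0: "kick", 4: "snare", 8: "kick", 12: "snare"}

pattern = [accents.get(step, "hat") for step in range(steps)]
print(pattern)
# ['kick', 'hat', 'hat', 'hat', 'snare', 'hat', ...] -- one event per step.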

Perhaps the most idiosyncratic and recognized feature of PSG writing is the super-fast arpeggio.82838485 Essentially, this is the same technique as instrumental channel sharing, but it has the unique purpose of providing a ‘harmonic compression’: reducing all explicitly stated harmonic content to the fewest voices possible, liberating channels for other purposes. Both the Famichord and the super-fast arpeggio are solutions to the same problem; however, the Famichord can only accurately represent a chord of four notes or fewer using all available channels, whereas an arpeggio can cover any number of extensions by rapidly iterating through the chord on a single channel (Figure 8). This technique’s conception is sometimes credited to M. Galway, with the score to Kong Strikes Back (1985) for the Commodore 64868788, and it often appears in soundtracks throughout the computer’s lifespan.89

Figure 8. A variety of super-fast arpeggio forms, all based on a C Major 9th chord.
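A minimal sketch of the idea behind Figure 8, assuming MIDI note numbers and one chord tone per video frame (roughly 60 Hz on NTSC hardware); the function name is illustrative rather than taken from any driver:

CMAJ9 = [60, 64, 67, 71, 74]  # C, E, G, B, D as MIDI note numbers

def arpeggio_frames(chord, num_frames):
    """Yield the pitch a single channel plays on each successive frame."""
    for frame in range(num_frames):
        yield chord[frame % len(chord)]

# The channel cycles C-E-G-B-D so quickly that the ear fuses it into a chord.
print(list(arpeggio_frames(CMAJ9, 10)))
# [60, 64, 67, 71, 74, 60, 64, 67, 71, 74]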

The main drawback to the super-fast arpeggio, and to many of the channel sharing techniques, is that they are more difficult to program, demand more processing power and use more memory than simple three- or four-part writing.90 The Famichord may represent less extended harmony, with a less elegant approach to wave function economy, but it requires only three channel instructions to the sound chip to create a chord. The arpeggiation technique requires multiple instructions to a single channel in a very small space of time (Figure 9). Often, the read-only memory (ROM) on software distribution media was very small91 and developers would limit their music data due to memory constraints.92 Gratuitous use of musical content would quickly fill the available memory, encroaching on space needed for the rest of the software.

A l64 o5 @00 c e d g e b g >c
e g d e c d c e d g e b g >c <g b e
g d e c d c e d g e b g >c <g b e g
d e c d c e d g e b g >c <g b e g d
e c d

Super-Fast Arpeggio, C Major Ninth, 1 bar

A l1 o5 @00 b
B l1 o5 @00 e
C l1 o5 c

Famichord C Major Seventh, 1 bar

Figure 9. A single-bar Famichord command compared with a single-bar arpeggio command in ppMCK MML.
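To put the comparison in Figure 9 in rough numerical terms (the one-byte-per-event figure is an assumption for illustration, not a measurement of ppMCK’s output):

# If each note event costs roughly one byte of pattern data, the 64th-note
# arpeggio in Figure 9 is far more expensive than the three-note Famichord.
famichord_events = 3   # one whole note on each of three channels
arpeggio_events = 64   # one event per 64th note on a single channel
print(arpeggio_events / famichord_events)  # ~21x the pattern data per bar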

It seems that, whilst PSG music was dramatically shaped by polyphonic restrictions, perhaps the most pervasive limiting factor is ultimately the memory available to the composer. As with the Famichord versus super-fast arpeggio scenario, the decision seems to be founded less in compositional choice than in a constant creative balance between musicality and pragmatics. Perhaps the main difference between the Super Mario Bros and Silver Surfer soundtracks was not how well the latter utilised channel sharing, or the simplicity of the former’s harmonic writing, but how much memory was delegated to each respective composition.
