Category: Guest Contribution

We are proud to publish original articles from authors across the community here. If you wish to share news about your research, an event you are running or if you have an article you would like us to feature, please contribute here or email us at ludomusicology@gmail.com.

New Game Music and Audio Postgraduate Degrees

ThinkSpace Education, a partner of the Ludomusicology research group, have finally revealed their new programmes dedicated to Game Music and Audio! Our colleagues and friends at ThinkSpace were a major sponsor of our recent five-year anniversary conference, held at Southampton University in April, and their participation was a significant part of its success. We are very excited to see their hard work in putting together these new courses come to fruition.

To show how the ThinkSpace approach differs from other current options in the academic world, Matt Lightbound, Course Producer of the Game Music and Audio courses, has very kindly taken the time to lay out for our Ludo audience what ThinkSpace is striving to do.

When I joined ThinkSpace it became abundantly clear that everybody at the institution cared about game music. Our staff are all active practitioners: I myself am a Sound Designer working in video games right now, and everybody else is either working on games or has been very recently. It’s a great environment to be in and a great opportunity to pass that experience on to our students. Unlike at traditional institutions, everyone our students speak to has current experience in the field they want to work in. Whether contacting support or calling our office, students get to speak to their own kind the whole way through their course.

This is because the main objective of all three courses is to give students the most up-to-date information possible, so they can go and work in the industry to the best of their ability. The courses are focussed on creating the same content you will be expected to make when working at the biggest or the smallest game studios. Again, all our tutors work on games right now, and some of them are successful Audio Directors on some of the biggest and most exciting games being made today.

It’s also a key factor in why we teamed up with the Ludomusicology Research Group. We are all genuinely interested in, and passionate about, both the professional and academic sides of the practice. Dr Tim Summers will be heading up our research modules on the courses, and all our students will receive access to selected recordings of the Ludo 2016 conference.

Attending the event this year was a great experience, meeting the many different minds and workflows that make up the academic community in Game Music and Audio. Other presenters, such as Blake Troise (PROTODOME), are staff members here at ThinkSpace; Blake will be providing students with lessons on Chiptune composition for those looking to master that particular sonic aesthetic.

I have been asked what makes ThinkSpace’s courses different from the small number of GMA qualifications currently available. Apart from the fact that they are taught entirely by working composers and sound designers, not former ones, they are also delivered online. Because the courses were created in partnership with the University of Chichester, students from anywhere in the world are able to take part and still receive a fully accredited postgraduate qualification.

To add to this, unlike other courses, our degrees are focussed on practical projects. Students will work on games, using the same technology they will need to know in the industry. By the end of the course they will have built up a substantial portfolio of work showing a variety of styles and approaches, as well as vital information on how to find work, written by the employers and practitioners themselves. The entire purpose is to teach in a non-isolated environment, keeping students looking at the trends and developments happening now and in the near future.

If you want to see more about the course, check out the webpages here:

MFA Game Music and Audio

MA Composing for Video Games

MA Sound Design for Video Games

Feel free to get in touch and chat about our courses or about your current situation; we’d love to hear from you!

#Ludo2016 Conference Review

We are proud to publish the following review as part of our contributor articles series. Feel free to leave comments, and do let us know if you would like to send us articles to share with the wider community!

Contributor: Sebastian Urrea

I came into Ludo 2016 as a newcomer, not knowing quite what to expect. I was coming down from an extraordinary experience visiting London and the surrounding area during the week leading up to the conference, and I was excited to see what it would be like. I didn’t know anyone, I wasn’t in academia, hadn’t done research, and I didn’t have any papers to present. I just loved video game music. I had studied music, and enjoyed theory and musicology, and had applied it to video game music on my own. I was thrilled when I learned that there were others who were doing similar things in an academic setting. I had been planning a trip that happened to align perfectly to allow me to be in England at the time of the conference. So on a whim I had registered, hoping to see what I could learn and who I could meet.

What I found exceeded my expectations in many ways. First, the papers. The presentations examined a very diverse body of music, and everyone had a different way of approaching their chosen interest. Topics ranged from classic JRPGs and Nintendo games through old arcade games, indie games, hip hop and horror games to new virtual reality games. Some papers looked backward, at history and culture, and some looked forward, to innovations in the field and new possibilities for integrating music and games. I learned about music that I had never really listened to (for instance, arcade music of the 70s and 80s), and about music I didn’t even know existed (Elise Plans and David Plans’ discussion of new developments in music and biofeedback in games makes me excited to see what the future of video game music holds).

At first I was disappointed that the presentations didn’t include more subjects with which I was familiar. But really, that would have been less interesting. I learned a lot more from the really diverse set of presentations than I would have otherwise. The topics discussed had a great balance across different aspects of video game music, and I am certain that anyone in attendance would have found things both familiar and new.

Amongst such diverse music, everyone focused on something different. Discussions ranged from the analytical (James Tate’s examination of the musical style of Jeremy Soule, or Morgan Hale’s analysis of the music of Undertale), to cultural/ethnomusicological (Hyeonjin Park’s discussion of musical representations of deserts across games, or Keith Hennigan’s critique of Irish music in video games), to technical (Blake Troise’s discussion of compositional techniques with NES hardware), and more. It made me really appreciate how diverse and expansive video game music really is, and how much opportunity there is to delve into different topics and explore and discover new things.

The choices of keynotes were excellent. Having someone like Andrew Barnabas in attendance, with such a history of work in the industry, was thrilling for everyone. It created a bridge between the theoretical and academic and the practical, and was a good learning opportunity for everyone involved. It also gave rise to some great discussions (did you know he was responsible for adding the snippet of singing in “A Whole New World” to the video game version of Aladdin?). Neil Lerner’s talk on Pac-Man and its sounds was a great reminder of the technical aspects of video game music, and of how important it can be to consider how they factor into composition and production.

Spending time with everyone outside of presentations was equally fun. Many of the attendees were already friends from previous conferences or from shared work. But most importantly, Ludo 2016 provided a friendly, open atmosphere to everyone involved. After all, we were all there because we were critically interested in a pretty geeky and new area of music, and this conference created a unique opportunity for everyone to explore that interest freely and openly. The fact that any of us could immediately go up to someone and express our interests by saying something like, “Hey, have you played this game?” or “Did you ever listen to the soundtrack from this other game?” made for a really unique and refreshing experience. During the presentations, the whole group was engaged in every talk, giving positive feedback and sharing knowledge from their own areas of specialty. And I think everyone who attended the pub trivia quiz night enjoyed being stumped by questions that were just as diverse as the presentations themselves.

Looking back at the conference, my biggest takeaway is my impression that the field of video game music is really a lot broader than I had realized. I had my own interests that I had homed in on, but seeing so many people studying such a range of topics was inspiring. I left feeling that there is a lot of potential in studying music from a range of games larger than I had realized, and in ways that I had never even considered. I have a lot of faith in the people who attended the conference and who are dedicating themselves to studying this music, each in their own way and with their own perspectives, and it makes me excited to see what the future of Ludomusicology will be as it continues to grow. I look forward to what future Ludo conferences will bring!

GameLark Records Volume 1 Released

Contributor: Allen Brasch, GameLark Records

GameLark Records is a new record label specifically for video game remixes and covers. The first album, GameLark Records Volume 1, features 19 tracks from 19 different artists in the video game remix community. I fell in love with the video game remix community while working on my YouTube channel, GameLark Remixes. As I scoured YouTube looking for new artists and remixes, I was astounded by the sheer diversity of the community.

Eventually, I was inspired by collaborative charity albums such as ‘Multiplayer: A Tribute to Video Games’ and ‘Operation 1-Up’ to create my own label. The goals were simple: find the most diverse group of artists possible, produce top-quality music, and build a platform for the selected artists. Believe it or not, most artists are busy making music and don’t always have the time to promote their work. The album helps to bring attention to all the artists on the label, both big and small, and new fans are created in the process.

This is just the first album from GameLark Records, but I believe the label has a bright future. Every song on this first album stands on its own, but I believe the myriad genres complement each other rather than detract from the album’s cohesion. After all, this album is as diverse as the community that it represents. GameLark Records Volume 1 releases today on Loudr, iTunes, Spotify, Google Play, and Amazon Music, and I sincerely hope that you will enjoy it.

Compositional Strategies For Programmable Sound Generators With Limited Polyphony

We are proud to publish the following article by Blake Troise (ThinkSpace, ProtoDome) as part of our contributor articles series. Feel free to leave comments, and do let us know if you would like to send us articles to share with the wider community!

 

Contributor: Blake Troise

A programmable sound generator (PSG) is an integrated circuit (IC) with the ability to generate sound by synthesizing basic waveforms.[1] PSGs are often called sound chips; however, not all sound chips are PSGs. PSGs were designed to be instructed by software commands and would usually be housed alongside a microprocessor as part of a computer system. The general benefit of the sound chip was that audio processing could be delegated to a dedicated system, freeing processing cycles for other functions, or simply, as Radio-Electronics magazine explained in 1981, “[controlling] music or sound effects from software, without overtaxing the computer”. One of the main reasons for the popularity of PSGs was that microcontrollers capable of generating pitches had become cheap enough to manufacture at the beginning of the 1980s,[2][3] making computational sound generation commercially viable as part of an affordable home system. As such, PSGs were commonly utilised in the video game systems of the 1980s to mid 1990s, for example the Nintendo Entertainment System (Ricoh 2A03/2A07),[4] the Atari 2600 (Atari TIA)[5] and the Sega Master System (Texas Instruments SN76489A, Yamaha YM-2413),[6] among numerous others. The chips were also included in early home computers,[7] especially those with gaming capabilities; the most popular example is the Commodore 64’s PSG, the SID chip.[8] Perhaps one of the main reasons for the ubiquity of PSGs in video game applications is the medium’s requirement for a multimedia experience,[9] with the desire for sound effects and music in electronic games predating software games.[10]

 

Thanks to the prevalence of inexpensive computers and the accessibility of their software, the eighties saw programmer and composer practices converge, alongside rapid development in entertainment software.[11][12][13] Many of these musicians, such as Hirokazu ‘Hip’ Tanaka, treated the computer (in Tanaka’s case, the Nintendo Entertainment System (NES)) as a means of expressing ‘serious’ music and approached their composition as such.[14] Each PSG, however, imposed numerous musical limitations the composer had to adhere to.[15] One of the most compositionally influential of these restrictions (and a defining characteristic of computer music of the period) is the PSG’s limited polyphony.

 

The number of voices a PSG could provide varied between chips. The NES’ Ricoh 2A03/2A07 had five separate ‘channels’ (individual functions for generating a single waveform[16]): two pulse wave channels, one triangle channel, one pseudo-random noise generator and a rarely used Delta Modulation Channel (DMC) for playing Differential Pulse-Code Modulation (DPCM) samples[17][18] (Figure 1). The Commodore 64’s SID had three channels with a very similar set of waveforms to the Nintendo Entertainment System’s; unlike the NES’ channels, however, these were not each restricted to a single wave function.[19] Other chips, such as the General Instruments AY 3-8910 (found in the MSX computer[20][21] and the Sinclair ZX Spectrum 128,[22] to name two popular examples), offered very similar waveforms across a similar number of channels; in the AY 3-8910’s case, three square wave channels and a pseudo-random noise generator.[23] The limited number of channels each PSG provided posed a significant compositional challenge: how best to maximise the musical content with only a few voices.

 

Figure 1. Oscilloscope examples of the basic common waveforms various PSGs provided. Top Left: Saw Wave. Top Right: Pulse/Square Wave. Bottom Left: Triangle Wave. Bottom Right: Noise.
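For readers who would like to experiment with these shapes, the following is a minimal sketch in Python (purely illustrative and not tied to any particular chip; the function names and the 0 to 15 level range are assumptions made for the example) that generates one cycle of each basic waveform as sample values.

import random

def pulse(phase, duty=0.5):
    # High for the first `duty` fraction of the cycle, low for the rest.
    return 15 if (phase % 1.0) < duty else 0

def triangle(phase):
    # Rises towards 15 over the first half cycle, falls back over the second.
    p = phase % 1.0
    return int(30 * p) if p < 0.5 else int(30 * (1.0 - p))

def saw(phase):
    # Ramps upward once per cycle, then wraps back to zero.
    return int(15 * (phase % 1.0))

def noise(_phase):
    # Pseudo-random level, standing in for an LFSR-based noise generator.
    return random.randint(0, 15)

# Print one cycle of each waveform at sixteen samples per cycle.
for name, wave in [("pulse", pulse), ("triangle", triangle), ("saw", saw), ("noise", noise)]:
    samples = [wave(i / 16) for i in range(16)]
    print(name, samples)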

 

Writing music with limited polyphony is not unique to the sound chip; for hundreds of years composers have written for three, two and even single voices.[24] Pieces such as Mozart’s Adagio in B flat (for two clarinets and three basset horns),[25] Schubert’s Jägerlied (for two horns or voices),[26] most of Bach’s famous chorales[27] and even solo piano works for a single performer make use of a small collection of monophonic voices (or fingers!) to create music. Even commercial synthesisers of the seventies and eighties would occasionally be dedicated to producing a single, monophonic voice,[28][29] and often, when polyphony was available, it was still limited to a maximum of only eight voices.[30][31][32] What separates these composers’ works (and other electronic hardware) from PSG composition is the idea of necessity. Whilst the basic form of the woodwind ensemble is traditionally the quintet,[33] this is usually a creative choice and the ensemble can be “expanded and contracted to meet the needs of the composer”. The piano composer in desperate need of further polyphony can simply add another performer to gain access to a further set of ten (or more) notes. No such luxury was (in most cases[34]) available to the sound chip composer.

 

Perhaps the simplest (and most common) approach seen in PSG composition was to consider each channel as an individual instrument, a similar method to the aforementioned ensemble writing.[35] The iconic Super Mario Bros theme[36] by K. Kondo[37] is a good example of this process (Figure 2). All three pitched channels of the Nintendo Entertainment System’s PSG move together as a three-part harmony for the first thematic section. In the second section of the piece, the triangle channel diverges from rhythmic unison and plays a very simple arpeggiating bassline. This approach can be seen in video game soundtracks such as Mega Man 2 (1988) by T. Tateishi and M. Matsumae (programmed by Yoshihiro Sakaguchi),[38] Legacy of the Wizard (1989) by Y. Koshiro[39] and Castlevania (1986) by S. Terashima and K. Yamashita (programmed by H. Maezawa)[40] (Figure 3). In fact, these soundtracks work so much like a traditional three-part ensemble (with a fourth percussion instrument) that they have inspired a plethora of YouTube a cappella covers (of varying success) that retain the original writing for each channel.[41][42][43][44][45]


First thematic section.


Second thematic section.

Figure 2. Manuscript representation of the Super Mario Bros theme.


Figure 3. Manuscript representation of the Legacy of the Wizard ‘intro’ theme.

 

The ubiquity of this practice has resulted in the emergence of a common technique in Nintendo Entertainment System music, dubbed the Famichord by chip musician Linus ‘LFT’ Akesson[46] (Figure 4). The Famichord is essentially the removal of the fifth from a four-note major or minor seventh chord (Maj7omit5 or m7omit5) so that it fits the NES’ three-channel limit. Whilst not unusual in wider musical practice, the fact that many composers independently utilised this technique makes it distinctive in PSG compositional procedure. Examples can be found in the NES soundtracks to Mega Man 2 (1988),[47] Duck Tales (1989),[48] Super Mario Bros (1985)[49] and various others (Figure 5).


 

Figure 4. Manuscript example of the C Major 7th Famichord.

 


Figure 5. D minor seventh (b. 1) and C major seventh (b. 3) Famichords in the Super Mario Bros. ‘Invincibility’ theme.
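To illustrate the voicing itself, here is a small sketch in Python (the helper name and the MIDI note numbers are assumptions made for the example, not taken from any game’s data) that reduces a full major or minor seventh chord to the three-note Famichord by dropping the fifth, leaving root, third and seventh for the NES’ three pitched channels.

# Semitone offsets from the root for full seventh chords.
SEVENTH_CHORDS = {
    "maj7": [0, 4, 7, 11],   # root, major third, fifth, major seventh
    "m7":   [0, 3, 7, 10],   # root, minor third, fifth, minor seventh
}

def famichord(root_midi, quality):
    # Drop the fifth (7 semitones above the root) to fit three channels.
    offsets = [o for o in SEVENTH_CHORDS[quality] if o != 7]
    return [root_midi + o for o in offsets]

# C major seventh Famichord, as in Figure 4 (C4 = MIDI note 60): C, E, B.
print(famichord(60, "maj7"))   # [60, 64, 71]
# D minor seventh Famichord, as in Figure 5: D, F, C.
print(famichord(62, "m7"))     # [62, 65, 72]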

 

The ‘ensemble’ method of PSG writing was possibly so popular because it was a very simple way of interfacing with the chip. Examining the documentation for the various sound chips, the way in which the microcontrollers expected to be instructed was with a simple ‘channel, pitch, duration’ format,[50][51][52][53] much as the MIDI (musical instrument digital interface) standard operates today.[54][55] A melody is easily built from instructions in this way and could be (and often was) very simply converted from score to code.[56][57][58] This could be done either by the composer or by a programmer given the score to transcribe, which was common practice.[59][60][61] The downside to this technique is that the music is limited to as many instruments as there are PSG channels, resulting in a texturally ‘thin’ soundscape. To fill the space, the prevalent mentality of trying to recreate externally composed music on the PSG had to shift towards writing music for the PSG: treating the device as a unique, independent sonic medium.
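A sketch of that instruction format might look like the following (Python; the event layout and the note numbers are assumptions made for illustration, not any particular chip’s or driver’s actual interface). A melody is simply an ordered list of such instructions, one channel per part, much like a basic MIDI sequence.

from collections import namedtuple

# One instruction to the sound driver: which channel, what pitch, how long.
Note = namedtuple("Note", ["channel", "pitch", "duration"])  # pitch as a MIDI note, duration in ticks

# A tiny three-part 'ensemble' passage: each channel acts as its own instrument.
sequence = [
    Note(channel=0, pitch=76, duration=24),  # pulse 1: melody (E5)
    Note(channel=1, pitch=67, duration=24),  # pulse 2: harmony (G4)
    Note(channel=2, pitch=48, duration=24),  # triangle: bass (C3)
    Note(channel=0, pitch=72, duration=24),  # melody moves to C5
    Note(channel=1, pitch=64, duration=24),  # harmony moves to E4
    Note(channel=2, pitch=43, duration=24),  # bass moves to G2
]

for note in sequence:
    print(f"ch{note.channel}: pitch {note.pitch} for {note.duration} ticks")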

 

The first technique to artificially expand the sonic environment is channel sharing, or splitting parts over a single voice.[62] This technique is especially effective when two instruments can be distinguished from one another using the unique functions of the PSG channel. Often, each channel had an alterable amplitude for ADSR (attack, decay, sustain and release) envelope shaping,[63][64][65][66] and pulse waves frequently had the ability to alter their duty cycle (the proportion of each wave cycle spent high versus low); both are techniques for textural change and for emulating instrumental transients.[67] Even more basically, a change in register signals which part a sequence is responsible for (Figure 6). Figure 6 is a piece written by the author for a single beeper, demonstrating the channel sharing procedure at its extreme. The bass line is separated from the melody mainly by pitch, yet gives the impression of being an individual instrument. Percussion is distinguished from the other instrumentation by rapidly altering the channel’s pitch semi-randomly with a downward trend. The final element in the expansion of the soundscape is structural: by frequently altering the instrumental character in a rhythmic fashion, the listener gains the impression that multiple voices are present, all achieved on a single voice. This false polyphony significantly alters the compositional process; the chip musician looks for ‘gaps’ in the melody in which to fill out the composition.

 


Figure 6. Scored extract from an original composition for an ATMega328 microcontroller, using only a single voice. Percussion is created by rapidly changing pitch.
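A minimal sketch of this single-voice channel sharing (Python; the step layout and pitches are illustrative assumptions, not a transcription of Figure 6): bass notes and percussion hits are slotted into the gaps between melody notes, so one channel carries three apparent parts.

# One voice, sixteen steps. Each step says what the single channel plays.
# Register separation (high melody, low bass) and rapid downward pitch
# movement (percussion) let the listener hear three 'instruments'.
melody = {0: 76, 2: 79, 4: 83, 8: 76, 10: 79, 12: 84}   # step -> MIDI pitch
bass   = {6: 40, 14: 40}                                 # fills gaps in the melody
drums  = {7: "noise-sweep", 15: "noise-sweep"}           # fast falling pitch = 'snare'

single_channel = []
for step in range(16):
    if step in melody:
        single_channel.append(("melody", melody[step]))
    elif step in bass:
        single_channel.append(("bass", bass[step]))
    elif step in drums:
        single_channel.append(("drum", drums[step]))
    else:
        single_channel.append(("rest", None))

for step, event in enumerate(single_channel):
    print(step, event)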

 

PSGs often had multiple channels, however, and more waveforms than a single pulse/square wave. The Commodore 64 was particularly flexible, as each channel could alter its waveform, allowing for more freedom when composing for the chip.[68] One notable example of this technique is T. Follin’s work on the NES game Silver Surfer (1990), regarded as one of the best PSG soundtracks of the era.[69][70][71] Follin utilises the triangle channel as both bass and drums whilst frequently switching melodic ‘licks’ between the available channels.[72][73] Lead instrumentation is given to the first two pulse channels, handled by varying ADSR envelopes and altering pulse width; when a pulse channel is unused, however, it doubles up with the bass to provide a thicker texture. Each channel is always doing something, helping to imply a greater polyphony than just four voices.

 

The most common use of the channel sharing technique in other soundtrack writing was on the pseudo-random noise channel included in various PSGs,[74][75][76][77][78] typically dedicated to accompanying the melodic or pitched elements with percussion mimicking the characteristics of an acoustic drum kit (Figure 7).[79][80][81] The technique works because a repeated hi-hat figure is heard continuing beneath a snare, kick or other percussive element, even though, with everything shared on a single channel, it is not actually sounding. Again, this fills the soundscape with a ‘false’ polyphony, using listener expectation to ‘fill in the gaps’.

 


Figure 7. Typical PSG noise channel writing for a drum kit pattern. Kick, snare and hi-hat are all represented on a single channel. Vertical placement is representative of speed of noise randomization, which is perceived as a change in pitch.
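The same principle in sketch form (Python; the pattern and the ‘period’ values standing in for noise pitch are illustrative assumptions): kick and snare hits displace the hi-hat on their steps, yet the steady hi-hat figure is still perceived as continuous.

# One noise channel, sixteen sixteenth-note steps.
# A lower 'period' means faster randomisation, heard as a higher, shorter hit.
HIHAT = {"period": 1, "length": 1}    # short tick on every free step
SNARE = {"period": 6, "length": 4}    # longer, mid 'pitch' burst
KICK  = {"period": 12, "length": 3}   # longest, lowest burst

kick_steps  = {0, 8}
snare_steps = {4, 12}

pattern = []
for step in range(16):
    if step in kick_steps:
        pattern.append(("kick", KICK))      # hi-hat dropped on this step
    elif step in snare_steps:
        pattern.append(("snare", SNARE))    # hi-hat dropped here too
    else:
        pattern.append(("hihat", HIHAT))    # the listener 'fills in' the missing ones

for step, (name, params) in enumerate(pattern):
    print(step, name, params)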

 

Perhaps the most idiosyncratic and recognisable feature of PSG writing is the super-fast arpeggio.[82][83][84][85] Essentially, this is the same technique as instrumental channel sharing, but it serves the unique purpose of providing a ‘harmonic compression’: reducing all explicitly stated harmonic content to the fewest voices possible, liberating channels for other purposes. Both the Famichord and the super-fast arpeggio are solutions to the same problem; however, the Famichord can only accurately represent a chord of four notes or fewer using all available channels, whereas an arpeggio can cover any number of extensions by rapidly iterating through the chord on a single channel (Figure 8). The technique’s conception is sometimes credited to M. Galway with the score to Kong Strikes Back (1985) for the Commodore 64,[86][87][88] and it appears in soundtracks throughout the computer’s lifespan.[89]

 


 

Figure 8. A variety of super-fast arpeggio forms, all based on a C Major 9th chord.
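A sketch of the technique (Python; the 60 Hz rate is typical of drivers updated once per video frame, but the function and chord layout here are assumptions made for illustration): the channel steps through the chord tones once per frame, so a full extended chord is implied by a single voice.

# C major ninth: C, E, G, B, D (MIDI note numbers).
CMAJ9 = [60, 64, 67, 71, 74]

def super_fast_arpeggio(chord, frames):
    # One pitch per frame, cycling through the chord (e.g. at ~60 frames per second).
    return [chord[f % len(chord)] for f in range(frames)]

# Half a second of arpeggio at 60 frames per second: 30 pitch writes,
# all on one channel, implying a five-note chord.
print(super_fast_arpeggio(CMAJ9, 30))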

The main drawback of the super-fast arpeggio, and of many of the channel sharing techniques, is that they are more difficult to program, demand more processing power and use more memory than simple three- or four-part writing.[90] The Famichord may represent less extended harmony, with a less elegant approach to wave function economy, but it requires only three channel instructions to the sound chip to create a chord. The arpeggiation technique requires multiple instructions to a single channel in a very small space of time (Figure 9). Often, the read-only memory (ROM) on software distribution media was very small,[91] and developers would limit their music data due to memory constraints.[92] Gratuitous use of musical content would quickly fill the memory limits, encroaching on space needed for the rest of the software.

A l64 o5 @00 c e d g e b g >c

e g d e c d c e d g e b g >c <g b e

g d e c d c e d g e b g >c <g b e g

d e c d c e d g e b g >c <g b e g d

e c d

Super-Fast Arpeggio, C Major Ninth, 1 bar

A l1 o5 @00 b

B l1 o5 @00 e

C l1 o5 c

 

Famichord C Major Seventh, 1 bar

Figure 9. Comparison of a single-bar Famichord command with a single-bar arpeggio command in ppMCK MML.
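To make the trade-off concrete, a rough sketch (Python; the figures are simplified illustrations that ignore driver overhead and any compression a real engine might apply): the Famichord costs one pitch instruction per channel per bar, while the super-fast arpeggio costs one pitch instruction per frame on its single channel.

FRAMES_PER_SECOND = 60
BAR_SECONDS = 2.0            # one bar of 4/4 at 120 bpm

# Famichord: three channels each receive one note instruction for the whole bar.
famichord_writes = 3

# Super-fast arpeggio: one channel, but a new pitch every frame for the whole bar.
arpeggio_writes = int(FRAMES_PER_SECOND * BAR_SECONDS)

print("Famichord pitch writes per bar:", famichord_writes)   # 3
print("Arpeggio pitch writes per bar:", arpeggio_writes)     # 120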

 

It seems that, whilst PSG music was dramatically shaped by polyphonic restrictions, the main pervasive limiting factor was ultimately the memory the composer had to work with. As in the Famichord versus super-fast arpeggio scenario, the decision seems to be founded less in compositional choice than in a constant creative balance between musicality and pragmatics. Perhaps the main difference between the Super Mario Bros and Silver Surfer soundtracks was not how well the latter utilised channel sharing, or the simplicity of the former’s harmonic writing, but how much memory was delegated to each respective composition.
