To register, please review the full conference details and purchase a ticket here. Don’t hesitate to get in touch with us at firstname.lastname@example.org if you have any queries.
Looking forward to seeing you in Leeds!
On November 20, 2018, the musicologist and media theorist Nikita Braguinski and the musician, programmer and historian of computer sound working under the assumed name of ‘utz’ convened in an IRC chat channel to discuss a musically and technically intriguing question: what is 1-bit music?
NB: You are, as a musician and as a programmer, actively working in the genre of 1-bit music. Can you give a first, non-technical explanation of what it is?
utz: Sure. Let’s talk about 1-bit sound first. The subject sounds quite esoteric, but actually 1-bit sounds are more common than one might think. For example, if you’re a bit older you may remember that a PC makes a “beep” when switched on. Another common example that many people may be familiar with is those birthday greeting cards that play a simple melody when you open them. Or, a more annoying example: the alarm in smoke detectors. All these sounds share a common principle in that they are produced by repeatedly switching the current that goes to the built-in speaker on and off, or in other words, they are produced by toggling a signal between two states. These two states can be expressed by a number containing a single binary digit (bit). Basically, 1-bit music is music made using this principle of sound generation.
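To make the toggling principle concrete, here is a tiny Python sketch (purely illustrative, not tied to any real machine or driver) that renders such a two-state signal as a list of 0s and 1s:

```python
def square_wave_1bit(freq_hz, sample_rate, n_samples):
    """Generate a 1-bit square wave: every sample is 0 or 1.

    The signal flips between its two states once per half period,
    which is all a beeper-style speaker needs to produce a tone.
    """
    half_period = sample_rate / (2 * freq_hz)  # samples per half-cycle
    return [int((i // half_period) % 2) for i in range(n_samples)]

# One second of a 440 Hz tone at a 44100 Hz sample rate:
# the output contains nothing but 0s and 1s.
samples = square_wave_1bit(440, 44100, 44100)
```

Fed to a speaker, this stream of on/off states is heard as a steady 440 Hz beep.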
NB: We’re only talking about electronically produced sounds, right?
utz: Pretty much.
NB: Great. We’ll return again to this definition to discuss it in more technical terms, but let’s now take a look around this musical practice. It exists within the specific culture of chiptune artists, the demoscene and the retrocomputing scene. Can you give a kind of a first working definition for each of them?
utz: I’d somewhat argue with that statement, actually, but yes, there’s definitely a strong connection with these scenes. “Chiptune” is probably the hardest one to define. Generally, it’s a culture revolving around music made with old home computers and gaming consoles, or music that sounds like it would come from one of these machines. In 1-bit, we often use machines from the same era. The sounds can also be quite similar, though they can also be radically different. Furthermore, there is quite a bit of overlap in terms of people active in the chiptune and 1-bit scene. As a matter of fact, there are hardly any “pure” 1-bit musicians. There are also differences regarding technology. The machines commonly used by chiptune musicians generally have a sound chip, a dedicated piece of hardware that produces sound. Machines used for 1-bit music generally don’t have that.
NB: That’s a huge difference! How do you make sound without sound hardware?
utz: There are different ways to go about it, though commonly we look for something like a data port. For example, many old computers can load and save data from/to tape. That’s something we can hook into. If you remember old modems, they make these gnarly sounds, right? That’s pretty much the same idea. Basically, these data ports can be active or inactive, on or off.
NB: If I decode correctly what you describe, 1-bit culture is all about difficult, “gnarly” sounds.
utz: A lot of the fun, for me at least, comes from how to get it to make meaningful sounds in the first place. But yes, it often has this very gnarly aesthetic and that’s something that’s very appealing to 1-bit folks. At the same time we strive to make it sound less crude, and make it produce more complex sounds. There’s a kind of very immediate “you vs. the machine” aspect to it.
NB: Does this also apply to retrocomputing and the demoscene?
utz: For retrocomputing, yes. For the demoscene, it’s mostly true for the “oldschool” part of the scene. The demoscene is a culture that focuses on producing non-interactive, realtime-generated audiovisual computer art. Retrocomputing is a movement that uses supposedly obsolete computer hardware in various ways. As a good portion of the demoscene (the so-called “oldschool” scene) revolves around old computers, there’s significant overlap between the two, of course. Personally I don’t like the term “retrocomputing” very much. It carries this implicit connotation of “obsolescence”, which I find inappropriate. For me, these old machines aren’t obsolete at all; they’re valid contemporary musical instruments. Also, the software being developed for these machines is very much about exploring new possibilities and finding new algorithms, so “retro” isn’t a good fit in that respect either. Another thing about 1-bit music is that there are people in it who aren’t connected to either of these scenes. For example, there are people who come from the calculator scene. There are these graphing calculators that are used in school, right? Well, there are people who dig into these machines, and coincidentally they can make 1-bit music. Also, 1-bit music existed before chiptune, the demoscene, and retrocomputing.
NB: And all these cultures seem to emphasize technical restrictions. Is that right?
utz: Yes, there’s definitely some truth to that. I think there are actually two different aspects related to that, though. One is challenge: It’s a challenge to make something on these old machines, even more so to push the boundaries of what can be done. The other side is that paradoxically, restrictions can be artistically liberating, in a creative sense.
NB: I totally agree. But I have a question: If the term “1-bit music” is really only about 1-bit sound then any kind of music can be “1-bit” if it is played using this specific technology?
utz: I don’t think anyone ever gave too much thought to a solid definition of 1-bit music so far. But yes, I’d agree with that. In fact, it’s possible to employ 1-bit techniques on sound chips, for example. This contrasts it with the term “8-bit music”, where “8-bit” generally refers to the type of CPU used to make the music. 1-bit music can be 8-bit music, although it doesn’t necessarily have to be. There is a general-purpose music editor called SunVox, which has a 1-bit mode, so you can make 1-bit music without actual 1-bit hardware. There are 1-bit VSTis (sound plugins) as well.
NB: So, unlike algorithmic composition, it is not primarily about generating music, but about generating sound.
utz: In a sense we have the same problem as with chiptune: everybody has their own idea about what qualifies as a chip and what doesn’t.
NB: Well, now that we have taken a look at the cultural environment in which 1-bit music is being created, could you give a more technical explanation of how it works?
utz: Ok, so, as mentioned before, 1-bit music is made by switching signals on and off. If you switch a signal at regular intervals 440 times a second, you get a 440 Hz note. These are the basics. The gist is that by switching in more complex patterns, one can produce more complex sounds. This includes polyphonic sound (containing several independent melodies or effects played at the same time). Modern 1-bit music is generally about polyphonic sound. There are a number of techniques that can be used to achieve polyphony, and a lot of different implementations of these techniques. By the way, that’s also something that sets 1-bit apart from, say, chiptune. If you take Gameboy music, for example, at least 90% of it is made in one of two editors, LSDj and Nanoloop. In 1-bit, on the other hand, we regularly use tons of different software.

So, there are two main approaches to generating polyphonic sound with a 1-bit signal. One is called pulse frequency modulation (which is actually an established term, we didn’t make it up), and the other goes by a bunch of different names; I usually call it pulse interleaving. Say we switch our signal at a rate of 440 Hz, or any rate, as a matter of fact. One would assume that the signal is on for half of the time and off the other half (which is what’s known as a square wave). But it doesn’t need to be that way. We can also create a signal that’s on for 1% of the time and off 99% of the time. As long as we switch at 440 Hz, we still get a 440 Hz note. However, we now have a lot of space between the “on” pulses. Within that space, we can render a second train of pulses at a different frequency. The result is that both frequencies are audible, which gives a very characteristic sound. This is pulse frequency modulation. Funny side note: that’s actually how your brain cells communicate with each other. Anyway, Tim Follin’s early works on the ZX Spectrum are typical examples of that.
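As a rough illustration of the idea (an illustrative Python model with hypothetical helper names, not actual driver code, which would be cycle-counted Assembly), two narrow pulse trains at different frequencies can be merged into one 1-bit stream like this:

```python
def pulse_train(freq_hz, sample_rate, n_samples, pulse_len=2):
    """A train of short 'on' pulses at the given frequency; 0 elsewhere.

    Because each pulse is only a few samples wide, most of the period
    is silence that another train can occupy.
    """
    period = sample_rate / freq_hz
    return [1 if (i % period) < pulse_len else 0 for i in range(n_samples)]

def pfm_mix(freqs, sample_rate, n_samples):
    """Pulse frequency modulation, minimal sketch: narrow pulse trains
    at different frequencies are merged with a logical OR, so several
    pitches share a single 1-bit output."""
    trains = [pulse_train(f, sample_rate, n_samples) for f in freqs]
    return [max(col) for col in zip(*trains)]

# Two simultaneous notes (440 Hz and 554 Hz) in one 1-bit stream.
mixed = pfm_mix([440, 554], 44100, 44100)
```

Since both trains are “on” only a small fraction of the time, collisions between them are rare, which is why the technique scales to many voices even on slow hardware.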
See, for example, this video.
NB: How do you know this specific technique was used in this tune?
utz: The sound is very characteristic, 1-bit musicians will instantly recognize it. Also, we know because some clever people analyzed the music driver behind it.
NB: Yes, it’s a fascinating story, how people reverse-engineer other musicians’ programs.
utz: The Follin driver is also legendary because it was the first to do 5-voice polyphony on a 3.5 MHz machine with no sound hardware. The pulse interleaving technique, on the other hand, tackles the problem from a completely different angle. Generally, pulse frequency modulation makes it possible to render a lot of voices even with slow hardware; the record stands at 16 voices on the ZX Spectrum. Pulse interleaving has more constraints in that respect, because it requires tighter timing. The idea behind it is that loudspeakers have inertia. The hardware components in a computer have inertia as well, but to a much lesser degree. Generally, a CPU is much faster than a loudspeaker cone. So, when we switch our 1-bit signal from off to on, the speaker cone will (slowly, from the CPU’s perspective) start to extend outwards from its current, contracted position. While this is still happening, the CPU can already switch the signal off again, which means that before the speaker cone has even reached full extension, it will start contracting again. Through precise timing, we can therefore control the speaker cone position, or in other words, we can produce different volume levels through our 1-bit signal. With different volume levels, we can of course produce polyphony. Generally, this technique results in a more typical “chiptune” sound, though other applications are possible.
NB: It certainly requires a more complicated calculation than the previous technique.
utz: Yes, and what fascinates me the most is that it can be implemented in so many different ways. The funny thing is that for a minimal example, the actual code is nearly identical. Finding this similarity was a big “Aha!” moment for me. But generally pulse interleaving code tends to be more complex than pulse frequency modulation code.
NB: If I understand correctly, by creating cone movements of different amplitude, you basically make your own digital-to-analog converter which you can use to play anything, including polyphonic music (but also speech, for example).
utz: Yes, exactly. Speech is very simple, actually. Another way to put it is to say that for 1-bit sound, within a fixed interval, time equals volume.
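The “time equals volume” idea can be sketched in a few lines of Python (an illustrative model only, with made-up function names; real 1-bit engines do this with precisely timed Assembly loops):

```python
def volume_frame(level, frame_len=8):
    """One fixed-length interval: the fraction of time the signal is
    'on' encodes the momentary volume (0.0 = silent, 1.0 = loudest).
    The sluggish speaker cone averages the pulse into a level."""
    width = round(level * frame_len)
    return [1] * width + [0] * (frame_len - width)

def mix_two_voices(levels_a, levels_b, frame_len=8):
    """Minimal pulse-interleaving sketch: per interval, sum the two
    voices' momentary levels (halved so the mix stays in range) and
    emit a single pulse of the corresponding width."""
    out = []
    for a, b in zip(levels_a, levels_b):
        out.extend(volume_frame((a + b) / 2, frame_len))
    return out

# Two voices taking turns at full volume: each interval carries
# the combined level as a pulse width.
mix = mix_two_voices([1.0, 0.0], [0.0, 1.0])
```

Because each interval can encode an arbitrary level, this improvised digital-to-analog converter can indeed reproduce anything, including speech, exactly as described above.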
NB: Now, there is also a prehistory of 1-bit. There were mechanical devices that, to a degree, worked in the same way. For example, in the 17th century there were experiments with toothed wheels: in 1681, Robert Hooke presented to the Royal Society in London his experiments in which a rotating wheel created individual “clicks” with its teeth. If rotated quickly enough, it produced not a pulsating rhythm but a tone. Such a device was later called a Savart wheel. There were also mechanical sirens: with an arrangement of holes, tones were produced by interrupting a stream of air passing through them. Different machines were based on this principle; one of them (already in the 20th century) was the Rhythmicon, built by Leon Theremin and working with light instead of air. Sometimes, people would also put holes representing different frequencies into the same circular pattern, which basically corresponds to the “pulse frequency modulation” technique in 1-bit. This specific experiment was carried out in the 19th century by Friedrich Wilhelm Opelt.
utz: That sounds very much like an application of PFM. So, they could do polyphony?
NB: Yes, by combining holes referring to two different tones (or teeth in the case of the wheel). Of course, you could not switch tones as easily as with the computer.
utz: Hmm, if they’d used a tape instead of a wheel, it might have been possible.
NB: Too bad they can’t hear us! Now, coming back to 1-bit: during our conversation, I was listening to your album of 1-bit music, published under your artistic name “Irrlicht Project”. What was your own, personal experience in the creation of such music?
utz: Now that’s a long story… I’ll try to be brief. I started experimenting with computer music back in 2003–2004. Back then I used the Fruity Loops software (a so-called digital audio workstation). I didn’t have any clue what was going on in the outside world, since I had no internet. So, I just made music with sounds that I liked, which happened to be mostly sounds based on simple waveforms, such as pulse/saw/sine etc. Then, some years later, I discovered that there’s a whole scene revolving around that kind of aesthetic, called chiptune. I also discovered that there are these old computers that you can use to make this type of music. At the same time I was running into a sort of dead end with my digital-audio-workstation-based workflow. As my projects got more complex, and the synthesizers and effects started to pile up, I found myself more and more busy tweaking sounds rather than doing actual composition. At that point I tried making music on the Atari ST computer, and it was a revelation. Suddenly, I could focus on composition again, because there’s only so much you can do to tweak the sound of the Atari’s sound chip.
NB: So, it was again all about the self-imposed technical restriction.
utz: Oh yes, absolutely. Though, for me it always felt like a liberation. However, as time went on, I discovered more and more ways to tweak the sound, and that became an issue again. At that point I discovered the works of a certain Mister Beep, who was pretty much the only active 1-bit musician at the time. That must have been around 2008–2009. Back then, the available tools for making 1-bit music were decades old and extremely limited; basically, they didn’t facilitate tweaking the sound at all. Even so, those old tools turned out to be a lot of fun, crude and super clunky to use as they were. A few people started getting into 1-bit around the same time, and some new tools started to pop up. At first, they were mainly ports of old drivers (versions of the music software made compatible with a different computing environment), equipped with new, better user interfaces. Then we got the first “new” drivers (most of them by Alex “Shiru” Semenov), and also a PC-based editor called Beepola. It was a cross-platform program, which means it could run on different systems. At some point, I became interested in making my own driver.
NB: For that, you already must have been a programmer, not just a musician.
utz: Well, that was the problem, actually. I wasn’t. I hardly knew anything about programming. I’d had only three years of computer studies in school, mostly dabbling in the Turbo Pascal programming language, which I had all but forgotten at this point. Also, there’s pretty much only one way to implement 1-bit drivers (at least on the usual 8-bit hardware), and that’s by using the Assembly programming language. I tried to wrap my head around it for ages, with no success. But I had this vision. You see, other kids, they’d have Gameboys and whatnot at school, and a Sega or NES video game console at home. Well, I didn’t. I was the dorky kid who had… a graphing calculator. Everybody had one because it was mandatory, but I was one of the few people who actually spent a lot of time exploring the thing. Now, going back to 2011 or so: I still had that calculator, a Texas Instruments TI-82. At some point I realized that it has the same CPU as the ZX Spectrum home computer (for which many 1-bit tools were available at that time). So, I had this vision of porting one of the ZX drivers to the TI. And after many attempts, I finally succeeded in making the thing beep.
NB: That must have been a special feeling, given that you returned to a tool from your teenage years.
utz: Yes, very much so. As time went on, I became better at this Assembly thing. I made my first drivers of my own. Of course, they were bad, but, hey, I was making progress. Then, some friends found out about this thing I had done with the calculator, and started asking me about a music editor, one that could run natively on the TI. And so I sat down and did that. It took me about six months, I think, and the result was… hardly usable. Then, a couple of years later, I re-did the whole thing, and this time it was somewhat decent.
NB: Do you have an example of this calculator music?
utz: Yes, here’s the “introductory video”. This one uses the pulse interleaving technique, by the way.
NB: Did you modify the calculator to give it an audio output?
utz: No, not at all. It has a “link port”, which is normally used to communicate between two calculators, or between a calculator and a PC. I’m using it for sound.
NB: A beautiful hack! I am sure that, for many people, making music on a calculator symbolizes the ages-long relationship between music and mathematics (and time, I would add, since this is also all about the correct timing).
utz: Oh yeah! So, over time, this programming aspect of the 1-bit universe has become increasingly prevalent in my work. This goes to the point that I now regard it as kind of an art form in itself. It’s not even so much about the music anymore, though I do still enjoy making a good tune. I guess that somewhat sums up my experience in 1-bit.
NB: That’s a great story of going from making music to making tools for making music.
utz: Let me try to find one more example, just to show what’s possible. This is one of my later drivers, called zbmod. It was used by Tufty, one of the most talented 1-bit musicians (who, by the way, has a new album coming out soon). This works on the classic Sinclair ZX Spectrum home computer. I’ve written many different drivers (or “engines”, as we call them in the 1-bit world). They are all open source, by the way – including the calculator editor. The code is currently available on GitHub.
NB: Great! Well, there are so many things that we could add to this discussion of 1-bit, for example the early mainframe computer sounds that you have collected and presented in your recent talk in Berlin. But the nature of this phenomenon, which is still unfolding, is that we would need an infinite amount of time to fully describe it. Thus, I’d like to thank you very much for this conversation, and I am looking forward to hearing about novel creative approaches coming from this technology, which is both new and old.
utz: It’s been a pleasure. Thanks for giving me the opportunity to talk about my favorite subject!
Nikita Braguinski’s PhD research focused on the notion of unpredictability in game sound. His dissertation was recently published in German. At the moment, he works on the predigital history of algorithmic music and on the mathematization of music in the Soviet musical avant-garde. His recent publications include an article on the notion of 8-bit in music and a discussion of the Speak & Spell electronic toy.
‘utz’ is a developer of sound drivers and music editors, with a passion for low-level synthesis algorithms. He is also an avid composer and performer of 1-bit music. He maintains a personal website with all his works.
We are excited to announce that Ludo2019, the Eighth European Conference on Video Game Music and Sound, will take place April 26th – 28th at Leeds Beckett University.
Please share our Call for Papers poster online and around your institutions.
The organizers of Ludo2019 are accepting proposals for research presentations. This year, we are particularly interested in papers that support the conference theme of ‘Implementation and Preservation’. We also welcome all proposals on sound and music in games.
Proposed papers might be presented as part of planned sessions on:
Presentations should last twenty minutes, to be followed by questions. The conference language is English. Please submit your paper proposal (c.250 words) plus provisional bibliography by email to firstname.lastname@example.org by February 15th 2019.
Practitioners and composers may submit proposals to present work. We also welcome session proposals from organizers representing two to four individuals; the organizer should submit an introduction to the theme and c.200 word proposals for each paper.
The conference will feature:
James Newman (Bath Spa University) as keynote speaker, who is co-founder and curator of the National Video Games Archive, author of Videogames (2004/2013), Playing with Videogames (2008), 100 Videogames (2007), Teaching Videogames (2006) and A History of Videogames (2018).
Paul Weir (EARCOM), as keynote speaker, who is a composer, sound designer and audio director, known for his work in games, generative audio, radio and audio books. He has soundtracked over forty games, including the widely acclaimed No Man’s Sky.
Joe Thom (TTGames), as keynote speaker, who is a sound designer best known for his work on the Lego game series.
Lydia Andrew (Ubisoft), Audio Director for the Assassin’s Creed series at the Montreal studio, with Joe Henson and Alexis Smith of “The Flight”, composers of Assassin’s Creed Origins.
Hosted by Richard Stevens (Course Director, MSc. in Sound and Music for Interactive Games; School of Film, Music & Performing Arts)
Organized by Melanie Fritsch, Michiel Kamp, Tim Summers & Mark Sweeney.
With apologies for the late posting of this roundup, thank you firstly to all our Ludo 2018 delegates! Aside from some frustrating but unavoidable travel issues due to poor weather conditions, the conference was a great success! It was the first Ludo conference ever held in Germany, and with around 80 participants it was also the biggest!
A special thanks goes out to our wonderful hosts Christoph Hust (Zentrum für Musikwissenschaft Leipzig) and Martin Roth (JGames Initiative Leipzig) and their teams, our fantastic Keynote speakers Kristine Jørgensen, Adele Cutting and Michael Austin, as well as our generous sponsors EA Blog für digitale Spielkultur and Stiftung Digitale Spielekultur.
We’d like to thank all you fabulous people who came to Ludo2018 and helped to make it such a memorable event for us! Thank you for all the excellent talks, discussions, and overall for being a lovely community. We hope to see all of you next year in… well, we will reveal that very soon. Stay tuned!
We’re delighted to share links to the following articles related to Ludo2018.
Other links to recent media features, not directly related to the conference: