Music trackers are among the earliest means of using a computer to compose and sequence electronic instruments. They offer a rather matter-of-fact approach to electronic music and, in that regard, resemble the spirit of the particular hardware they evolved alongside: few sounds evoke "digital" the way the sounds of the 8-bit era of gaming and computation do; likewise, few means of composing music say "computer" the way a tracker's downward-progressing sequence of commands and parameters does; and both just scream "electronic" very much in the way we've come to understand it in the information age. Yet it could be said that what truly manifests the quality of "electronic" is that which best conveys the fundamental physical aspect of the electronic signal in itself. And what is "digital" but the mere processing of information? And what is this but an aspect of computation?
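That downward-progressing sequence of commands and parameters can be sketched concretely. The following is a minimal illustration in Python; the column layout and effect codes are simplifications of my own, not the format of any particular tracker:

```python
# A minimal sketch of a tracker pattern: time flows downward, one row per
# tick, each row holding a note, an instrument number, and an effect command.
# The layout is illustrative, not that of any particular tracker.

NOTE_OFF = "=="   # hypothetical marker for "stop the current note"

pattern = [
    # note   instr  effect
    ("C-4",  1,     "A0F"),  # play C, octave 4, instrument 1, with an effect
    ("---",  0,     "---"),  # empty row: the previous note keeps sounding
    ("E-4",  1,     "---"),
    ("G-4",  1,     "C20"),
    (NOTE_OFF, 0,   "---"),
]

def note_events(pattern):
    """Step through the rows top to bottom, as a tracker's playback
    cursor does, collecting the rows that trigger a new note."""
    events = []
    for row, (note, instrument, effect) in enumerate(pattern):
        if note not in ("---", NOTE_OFF):
            events.append((row, note, instrument, effect))
    return events

events = note_events(pattern)
```

The point of the paradigm is visible even in this toy: the score is a literal table of machine commands, read top to bottom.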
Seen in this light, mine own favored approach of PSGs and trackers might well seem rather ancillary or superfluous to the nitty-gritty of electronic sound in music. And indeed there are, as I see it, at least two higher or more fundamental approaches: the generative approach of modular synthesis, and the coded, algorithmic approach to computerized sound control and generation.
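The coded, algorithmic approach can be remarkably terse: a single expression, evaluated once per sample index, is enough to generate sound. The "bytebeat" idiom is one well-known instance of this; here is a minimal Python sketch (the particular expression and the 8 kHz rate are illustrative choices of mine):

```python
# A minimal "bytebeat"-style sketch: one expression, evaluated per sample
# index t, yields an 8-bit amplitude. One instance of purely algorithmic
# sound generation, not the only such approach.

def sample(t: int) -> int:
    # Bytebeat-style expression; masking to 8 bits gives the raw
    # amplitude an 8-bit DAC would receive.
    return (t * (t >> 5 | t >> 8)) & 0xFF

# Render one second's worth of samples at a nominal 8 kHz rate.
samples = bytes(sample(t) for t in range(8000))
```

Written to a file and played back as raw unsigned 8-bit audio, a buffer like this already produces structured, machine-flavored music, with no score and no sequencer anywhere in sight.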
Generative Sound Synthesis
This is a paradigm of electronic music that has only ever received anything like a formal definition within computer music, thanks to the popularization of the "generative music" concept by electronic-music pioneer Brian Eno. Yet within the predominantly analog world of modular synthesis, it seems to have taken on a second life. Readers of this journal may recall an article of mine in the previous issue, which identifies an emerging trend among synth-oriented video-streaming producers: they merely set up a patch and, for the most part, just let it run while they record the device itself as it produces a typically ambient collage of sound, animated only by the satisfying flickering of LEDs.
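In software terms, the set-it-and-let-it-run idea reduces to fixed rules left to evolve on their own. A minimal sketch, assuming a random walk over a pentatonic scale (both the scale and the walk are illustrative assumptions of mine, not a description of any particular patch):

```python
import random

# A minimal generative sketch: fixed rules (a scale and a random walk)
# left to run unattended, in the spirit of a self-playing modular patch.
# The pentatonic scale and step choices are illustrative assumptions.

SCALE = [0, 2, 4, 7, 9]  # pentatonic degrees, as semitone offsets

def generate(steps: int, seed: int = 0) -> list[int]:
    rng = random.Random(seed)
    index = 0
    notes = []
    for _ in range(steps):
        # Drift up, down, or hold, clamped to the scale's range.
        index = max(0, min(len(SCALE) - 1, index + rng.choice((-1, 0, 1))))
        notes.append(60 + SCALE[index])  # MIDI note numbers near middle C
    return notes

melody = generate(16)
```

The composer's role here is exactly the modularist's: choose the rules, press start, and listen.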