This is a set of videos using my Pandemic Synthesiser, coded in Csound. This project is not in any way intended to make light of the current Covid-19 crisis, nor is it intended to be an accurate model of how a virus propagates.
In the current incarnation of the synthesiser, there are 63 “clusters” arranged in 9 columns (left to right in the stereo field) and 7 rows (high to low in terms of pitch). Each cluster contains 50 “individuals”. On the screen, these are represented by dark blue dots.
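As a rough illustration of that layout (Python here, though the synthesiser itself is written in Csound; the row-major indexing and the 0–1 pan scale are my assumptions, not taken from the actual code):

```python
# Map a cluster index (0-62) onto the 9x7 grid described above:
# columns set the stereo position, rows set the pitch register.
COLS, ROWS = 9, 7  # 9 * 7 = 63 clusters

def cluster_position(index):
    """Return (pan, pitch_row) for a cluster, assuming row-major indexing."""
    col, row = index % COLS, index // COLS
    pan = col / (COLS - 1)  # 0.0 = hard left, 1.0 = hard right
    return pan, row         # row 0 = highest pitch, row 6 = lowest
```

For example, `cluster_position(62)` lands hard right in the lowest-pitched row.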
A piece starts with one randomly chosen individual becoming infected: a simple tone sounds and its dark blue dot turns light blue, showing that the disease has entered its incubation period.
Depending on the initial parameters (displayed on the bottom left of the screen, with cyan labels) and on pseudo-randomness, the “individual” may infect others in the same cluster, in adjacent clusters, or anywhere in the matrix. Once the synthesiser is set in motion, the piece is entirely self-generating.
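The three spread targets (same cluster, adjacent cluster, anywhere in the matrix) could be sketched like this; the probability split and the Python form are purely illustrative, not the synthesiser's actual parameters:

```python
import random

COLS, ROWS = 9, 7

def pick_target_cluster(col, row, rng, p_same=0.80, p_adjacent=0.15):
    """Choose the cluster a new infection lands in: usually the same
    cluster, sometimes a neighbour, occasionally anywhere in the matrix.
    The 80/15/5 split is a made-up example, not the synth's settings."""
    r = rng.random()
    if r < p_same:
        return col, row  # spread within the same cluster
    if r < p_same + p_adjacent:
        # one of up to 8 neighbouring clusters, clipped at the grid edges
        neighbours = [(c, w)
                      for c in (col - 1, col, col + 1)
                      for w in (row - 1, row, row + 1)
                      if (c, w) != (col, row)
                      and 0 <= c < COLS and 0 <= w < ROWS]
        return rng.choice(neighbours)
    return rng.randrange(COLS), rng.randrange(ROWS)  # anywhere in the matrix
```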
The dots show the following colours:
Dark Blue: Unaffected
Light Blue: Incubating
Pink Outline: Infectious
White / Swelling: Symptomatic
Green: Recovered and immune
Orange: Recovered but still vulnerable to reinfection
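For illustration, the legend above maps naturally onto an enumeration of individual states (Python here; the actual synthesiser is written in Csound):

```python
from enum import Enum, auto

class State(Enum):
    """Disease states of an individual, matching the colour legend."""
    UNAFFECTED = auto()   # dark blue
    INCUBATING = auto()   # light blue
    INFECTIOUS = auto()   # pink outline
    SYMPTOMATIC = auto()  # white / swelling
    IMMUNE = auto()       # green: recovered and immune
    VULNERABLE = auto()   # orange: recovered, open to reinfection
```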
Further information on the progress of the “disease” can be found on the bottom right of the screen, with yellow labels.
This is best experienced with a decent pair of headphones and on a big screen – the synthesiser generates a lot of frequencies.
Technical stuff: The Csound code reads its parameters from a simple text file. It generates 48 kHz audio and outputs an .svg file for each video frame (currently 10 fps, as a compromise between quality and rendering time). An ffmpeg script then stitches everything together into a 1080p video. Early on, with a 5×5 cluster grid and no video output, this ran in real time on my computer, but it has since become far too complex for that.
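The stitching step might look something like this (a sketch only: the frame pattern and file names are placeholders, and since stock ffmpeg builds cannot read SVG directly, it assumes the frames have first been rasterised to PNG):

```python
# Build an ffmpeg command that stitches 10 fps frames plus the 48 kHz
# audio render into a 1080p H.264 video. All file names are placeholders.
def stitch_command(fps=10, frames="frame_%05d.png",
                   audio="render.wav", out="pandemic.mp4"):
    return ["ffmpeg", "-y",
            "-framerate", str(fps), "-i", frames,  # image sequence in
            "-i", audio,                           # Csound's audio render
            "-s", "1920x1080",                     # scale to 1080p
            "-c:v", "libx264", "-pix_fmt", "yuv420p",
            "-c:a", "aac",
            out]
```

Once the PNG frames exist, something like `subprocess.run(stitch_command(), check=True)` would run the encode.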