Course offered by: Dr. Julia von Thienen, Dr. Marisol Jimenez, Dr. Henrik von Coler, Dr. Nico Steckhan & Dr. Knut Kaulke
This document serves as the final documentation for the seminar "Data Sonification and Opportunities of Sound" at HPI.
The content is structured as follows:
1.) Project title
2.) Team members, affiliations, contact details
3.) The project aim and why this is important
4.) Theoretical embedding, related works
5.) Methods
6.) Work results
7.) Conclusion, discussion, limitations and avenues for future work
8.) Acknowledgements
Reference List
Musical Environmental Bike
Malte Barth
[email protected]
Data Engineering
Carla Terboven
[email protected]
IT-Systems Engineering
Air pollution is generated by emissions from road traffic, power plants, furnaces and heaters in residential buildings, and many other sources [2]. Most air pollution is produced by human activity, despite the critical health and climate problems it causes.
When we breathe in polluted air, particles can get into our lungs and blood, leading to inflammation of the trachea, an increased tendency to thrombosis, or even changes in the nervous system such as altered heart rate variability [2].
Particularly dangerous are the smallest particles in the air. They are grouped under the term particulate matter (PM) (German: "Feinstaub"). PM is a mixture of solid and liquid particles. Distinctions are made by particle size, regardless of the chemical elements involved. More detailed information on PM is given in the data preparation section.
Because it leads to health and climate problems, PM also gets more and more attention from artists. Anirudh Sharma motivates people all around the world to think differently about air pollution. He produces Air Ink out of PM2.5 particles [1]. Artists can use this rich, dark black ink for paintings or textile printing. With Air Ink air pollution is turned into something useful. Most artists use Air Ink to communicate health and climate problems caused by air pollution in their paintings. We believe that more and more people are aware of these problems. But when we asked ourselves how much particulate matter we breathe in when we go outside in our own neighborhood, we did not have a clue.
Even though many city councils monitor air quality, the measuring stations are installed at fixed positions [3]. But what is the air pollution like, right here and right now, in our neighborhood? How can we draw attention to our individual interaction with air and air pollution?
The Sonic Bike Project addresses precisely this topic. The goal is to hear and understand sonified air pollution data while riding a bike. The black bikes look quite ordinary, but an air pollution sensor as well as processing hardware and speakers are attached so that the cyclist can ride around independently. While riding, the sensor measures the particulate matter, and this live data is sonified directly and presented to the cyclist via the speakers.
We joined the project after meeting Kaffe Matthews. Together with two colleagues, she introduced the Sonic Bikes to the public in Lisboa and plans the next Sonic Bike event in Great Britain in summer 2021. The Sonic Bike Project aims at raising awareness of air pollution, gathering environmental data, and attracting the attention of passers-by. The team wants to communicate the collected data in an engaging, artistic way. At the same time, the cyclist should understand the data intuitively, so the sound should be meaningful and informative as well.
As computer scientists, we are quite new to the artistic, musical approach the Sonic Bike Project is currently taking. On the other hand, we believe that our background in processing and working with large amounts of data can add interesting insights to the project.
Regarding the outcome of our seminar project, we are interested in translating data into sound that the cyclist can understand intuitively. We aim to raise awareness of air pollution while keeping the sound interesting enough to ride the bike for an extended period of time. Towards the end of the semester, we asked ourselves what air pollution could sound like, and we are still exploring this question with different samples we recorded ourselves.
As explained in the last section, we decided to sonify air pollution data. Since there are several possible approaches to sonification, we first introduce some sonification theory. After that, we present different sonification projects around the topic of air pollution; some of these projects serve as a source of inspiration for our own. Directly related to our vision are the Sonic Kayak and Sonic Bike projects. In both projects, a green form of transportation (kayak and bike) serves as an artistic medium to raise awareness of environmental pollution while at the same time being used to explore the lake or the city. The connection between these projects and our own idea is described at the end of this section.
Sonification is defined as "the transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation" [7].
But why is it interesting for us to inform people about air pollution via acoustic signals instead of visual plots, given that most people are used to consuming scientific data via plots and tables? The users of our Musical Environmental Bike project are cyclists exploring parts of the city. Since they are using a means of transportation and concentrating on traffic, we have to communicate the data in an intuitive, not too distracting way to foster the user's understanding and awareness of the existing air pollution.
Paul Vickers describes in chapter 18 of "The Sonification Handbook" three different modes of monitoring: direct, peripheral, and serendipitous-peripheral [11]. Direct monitoring claims the main focus of attention. For peripheral monitoring, the "attention is focused on a primary task whilst required information relating to another task or goal is presented on a peripheral display and is monitored indirectly" [11]. In serendipitous-peripheral monitoring, the "attention is focused on a primary task whilst information that is useful but not required is presented on a peripheral display and is monitored indirectly" [11]. We aim to monitor air pollution data while the users explore the city by bike. The users should concentrate on the traffic, so we do not want to present air pollution data in an attention-grabbing way. This means that we aim to deliver the data in a peripheral or serendipitous-peripheral way. Since the "human auditory system does not need a directional fix on a sound source in order to perceive its presence" [11], monitoring with audio seems to be a perfect fit.
But how can the cyclist intuitively understand the monitored data? Rauterberg and Styger [9] advise to "look for everyday sounds that 'stand for themselves'". And Tractinsky et al. [10] state that user perception is driven by aesthetics and that there is growing evidence that systems designed with an aesthetic focus are perceived as more usable.
According to Vickers [11], "the embedding of signal sounds inside some carrier sound" also leads to user satisfaction because of less annoyance and distraction. Vickers introduces approaches where user-selected music serves as the carrier sound for the sonification signals. He states that "such a steganographic approach allows monitoring to be carried out by those who need to do it without causing distraction to other people in the environment." This might be worth exploring for us, since the speakers on the bikes provide sound to the whole environment and not just to the cyclist.
Apart from aesthetic, user-centered everyday sounds, we also found more theoretical design concepts for sonification.
Kramer [6] makes a distinction between analogic and symbolic representations of the data. An example of an analogic representation is the Geiger counter because it directly maps data points to sound. The listener can understand the immediate one-to-one connection between data and sound signals. This is different for a symbolic representation: here, the data is only represented categorically, and no direct relationship between data and sound is necessary. Examples are most control notifications in cars. To us, the analogic representation sounds interesting because it can directly transport a lot of the data's meaning. Moreover, the sound of a Geiger counter could communicate the association of poisoned air to the cyclist. A notification-like sound might be interesting when passing certain air pollution thresholds of the EU or WHO; we can imagine a symbolic representation at that point.
Another concept is the semiotic distinction. Here we can differentiate syntactic methods using, e.g., earcons, semantic methods like auditory icons, and lexical methods such as parameter mapping [4].
Earcons are a very abstract representation of the data, which makes them hard to understand. Because we want the cyclist to understand the data quite intuitively, we now take a closer look at semantic and lexical methods.
Semantic methods like auditory icons map data to everyday sounds. This leads to familiarity and quick understanding as well as direct classification options. But mapping data to auditory icons is complicated, especially because we have to find a good representation for air pollution data, which does not have a natural physical sound.
When using parameter mapping, different data dimensions are mapped to acoustic properties like pitch, duration, or loudness. This way, one can listen to multiple data dimensions simultaneously and create a complex sound. But it is quite challenging to balance everything so that the listener can still pick up the meaning of the data [4]. Moreover, it becomes unpleasant quite fast, and we have to balance the alarming content of air pollution against the confidence and well-being of the person riding the bike.
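To make the idea of parameter mapping concrete, here is a minimal Python sketch; the value ranges, the linear loudness mapping, and the exponential pitch mapping are our own illustrative assumptions, not taken from any of the cited projects.

```python
def map_to_pitch(value, vmin=0.0, vmax=50.0, fmin=220.0, fmax=880.0):
    """Normalize a data value to [0, 1], then map it exponentially to a
    frequency in Hz, so equal data steps sound like equal pitch steps."""
    norm = min(max((value - vmin) / (vmax - vmin), 0.0), 1.0)
    return fmin * (fmax / fmin) ** norm

def map_to_amplitude(value, vmin=0.0, vmax=50.0):
    """Map a second data dimension linearly to an amplitude in [0, 1]."""
    return min(max((value - vmin) / (vmax - vmin), 0.0), 1.0)

# Two dimensions of one (hypothetical) data point, heard simultaneously:
pm1, pm2_5 = 12.0, 18.0              # sensor readings in µg/m³
frequency = map_to_pitch(pm1)        # pitch encodes PM1
amplitude = map_to_amplitude(pm2_5)  # loudness encodes PM2.5
```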
Thinking more closely about the continuous stream of air pollution data we want to sonify, we expect to find different air pollution levels in Potsdam. Then we do not need attention-grabbing sounds all the time but can reserve them for situations where the pollution gets alarmingly high. Looking at specific design recommendations in the literature, McGee advises keeping sound events as short as possible and the spacing between sounds reasonable, to avoid interference and sound masking and to prevent the user from being overwhelmed by too many sound events [8].
Apart from general sonification approaches, we also took a look at past projects that communicated air pollution data with sound.
We find a project by Marc St Pierre and Milena Droumeva particularly interesting [18]. They scale and map pollutants (CO, O3, SO2, and NO2) to individual frequencies using SuperCollider. The sound produced by these pollutants is already very telling, and once the listener knows the mapping, it is possible to understand which pollutant is changing at any time. The sound is quite powerful and vibrant but becomes even more telling and interesting because of a Geiger counter/clicking sound that is somehow completing and competing with the rich sound of the other pollutants. The Geiger counter is based on PM2.5 data, which is "measured differently than the other chemicals and therefore receives a different mapping".
Listening to their work on SoundCloud (https://soundcloud.com/marcstpierre retrieved 2021-03-16), we are fascinated by how the Geiger counter stays interesting for a long time even though the clicking sound itself does not change in pitch but only in rhythm. We imagine this is caused by the mysterious sound patterns produced by the other pollutants. The combination of vibrant, rich sound and the clicking Geiger counter is fascinating. We believe that we can learn a lot from this project when developing our own sonification.
Julián Jaramillo Arango presents AirQ Sonification in a paper [12]. It combines three different projects, all concerned with air pollution, from 2016 and 2017: AirQ Jacket, Esmog Data, and Breathe!.
"AirQ Jacket is a piece of clothing with an attached electronic circuit, which measures contamination levels and temperature and transforms this information to visual and acoustic stimuli" [12]. It uses multiple sensors to collect the data and small lightweight speakers and LEDs to communicate the data to the user. The designers imagined "to create healthy courses through the city" [12] with these jackets. The white jacket perhaps attracted a lot of attention but looks a bit too alien-like to us. Air pollution is happening all around us, so we are excited about communicating our project with a 'normal'-looking bike.
Esmog Data is an art installation using audio and motion graphics to achieve a "meaningful listening experience" [12]. Each collected sensor data point (CO, CO2, SO2, and PM10) is therefore connected to multiple parameters of the synthesizer to generate "more complex musical values" [12].
"BREATHE! is a multi-channel installation with no visuals, where the visitor should be able to identify each one of the measured toxic gases as a different sound source in the space. [...] The installation displays six human breathing sound loops, which shrink and stretch from different points in the space according to toxic levels." [12] Interestingly, the artists considered the breathing as some kind of communication that people can understand all over the world. We all breathe the same. We would love to achieve this intuitive communication of the data for our project, not with breathing sound but with our own samples.
Next to these scientific papers, we also found exciting sound examples online. For instance, Space F!ght, in collaboration with the Stockholm Environment Institute and NASA's Goddard Institute for Space Studies, wants to communicate ozone levels through art [14]. They perform a combination of parameter mapping, speech, and improvisation by a trumpet player. We find the sound quite mystic as well as concerning and alarming. The improvising trumpet is guided by the ozone data and gives a sensitive touch to the performance. The group states that they chose to work with ozone data because ozone has been shown to directly affect climate and human health.
The auditory display by Kasper Fangel Skov [17] is concerned with climate and human health as well but focuses not on ozone but on dimensions like temperature, light, humidity, and noise. Interestingly, this auditory display of urban environmental data from different cities also uses voice to classify the data into categories like "high" or "medium".
Also interesting is a project by Jon Bellona and John Park [13] [16]. They do not sonify air pollution data directly but the carbon emissions of Twitter feeds. This indirect concern about air pollution is communicated with a physical visualization. The auditory as well as visual experience aims to connect virtuality and reality. Based on the estimate that one tweet produces 0.02 grams of CO2, gas bubbles are released inside a water tank according to personal Twitter feed data. The physical visualization is supported by sound, making the feeling conveyed by the installation even more powerful.
Apart from the sonification projects described above, we drew inspiration from the Sonic Kayak [15] and the Sonic Bike [19] [20] [21] projects. Both projects were introduced to us by Kaffe Matthews, who has worked on these topics for several years.
The Sonic Kayak project generates live sound on the kayak, using sensors in the water as well as sensors for air particulate pollution, GPS, and time.
Also concerned with air particulate pollution is the Sonic Bike project. Here, an air pollution sensor with 12 channels gathers live data on a bike. The data is then processed at the back of the bike, using a Raspberry Pi and PD vanilla. Finally, the rider can experience the sonified air pollution via two speakers attached to the handlebar and a subwoofer behind the seat.
Initially, we wanted to become part of the Sonic Bike project. But due to the COVID-19 pandemic, we decided to use recorded CSV data instead of the bikes with live data. Moreover, we play to our strengths in programming and use a Python script to step through the data and preprocess it. The original Sonic Bike project deals with the data directly in PD vanilla. We decided to change this approach because we are new to PD (Pure Data), and data preprocessing is better supported in Python, since PD is mainly intended as interactive multimedia software. A more detailed overview of our current approach can be found in the next section.
On the way from the raw air particulate pollution data to a meaningful sonification, we have to deal with multiple challenges.
We prepare the data and map its different dimensions to pitch, volume, etc., to generate a telling sound. Finally, we use OSC (Open Sound Control) to send the processed information to PD, where we read the messages as notes and play them with a synthesizer.
In the following sections, we take a more detailed look at each step needed to finally hear the sonified data.
Air pollution differs from location to location. This is why Kaffe Matthews and her team cannot simply take the bikes from Lisboa to another city but have to adapt thresholds to different levels to generate a meaningful sonification in each area.
Right now Kaffe Matthews is working in Berlin. She gathered PM data in seven rides she did in her neighborhood. The following figure shows raincloud violin plots with the distribution of the PM values for all seven rides.
Looking at the plot, we find that the distributions of PM2.5 and PM10 seem to be nearly the same. This can be explained by the definition of PM ("particulate matter"). PM1 denotes the mass concentration in µg/m³ of particles smaller than 1 µm. PM2.5 includes the mass concentration of particles smaller than 2.5 µm, and PM10 that of particles smaller than 10 µm. All particles that are smaller than 1 µm or 2.5 µm are smaller than 10 µm as well. If there are only a few particles with a size between 2.5 and 10 µm, the PM10 values equal the PM2.5 values most of the time.
For sonification, we want to use the PM data to manipulate different aspects of the auditory representation. If PM2.5 and PM10 values behave the same most of the time, the sound representation hardly becomes exciting and meaningful to the listener. This is why we decided to subtract the smaller PM values from the bigger ones for each data point. This way, we generate something we call "disjoint PM". As expected, the following plot shows that the distributions of the PM values become more distinct for the "disjoint PM".
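A minimal pandas sketch of this preprocessing step; the file and column names are our hypothetical choices for illustration.

```python
import pandas as pd

# Hypothetical file and column names; the recorded rides are CSV files.
df = pd.read_csv("ride.csv")  # columns: pm1, pm2_5, pm10 in µg/m³

# "Disjoint PM": subtract the smaller PM fraction from the bigger one,
# so each series only counts particles in its own size band.
df["pm2_5_disjoint"] = df["pm2_5"] - df["pm1"]   # particles 1–2.5 µm
df["pm10_disjoint"] = df["pm10"] - df["pm2_5"]   # particles 2.5–10 µm
```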
We have to keep in mind that we must use the original PM data when comparing it to legal thresholds. There are statutory thresholds from the EU and stricter recommendations from the WHO to protect human health. The annual average limits are presented in the following table (thresholds according to [2]):
| | PM 1 | PM 2.5 | PM 10 |
|---|---|---|---|
| EU | - | 25 µg/m³ | 40 µg/m³ |
| WHO | - | 10 µg/m³ | 20 µg/m³ |
As pitch is generally the property of music with the highest variance, we chose the PM1 values, which have the biggest spread of the three, to determine the pitch of our generated sounds. We also thought PM1 could help in understanding the other two air pollution values, because PM1 itself does not have well-defined limits. To achieve this, we do not just play random notes according to these values but create chords from them, where the current PM1 value corresponds to the chord's root note. We then decide, based on thresholds on both the PM2.5 and PM10 values, in which mode the chords should be. We used a major chord for low air pollution levels, as major chords are most common in pop music and generally sound happier, at least in our Western culture [22]. Our hypothesis is that the listener links those positive chords to low pollution.
If there is modest pollution, we play minor or minor 7th chords. They are mostly regarded as sad [22], which we thought fitting because, while not dramatically higher, these values should make the rider listen twice and notice the change in tone.
Lastly, we wanted to grab the listener's attention at the highest air pollution levels while still using solid musical foundations. That is why we went for polychords, which are multiple chords stacked on top of each other and played simultaneously. In our case, we chose a minor chord stacked on top of a major chord, with both having the same root note. The sound of polychords is very dissonant compared to the two modes above, which should be apparent even to the untrained ear, suggesting that something is not right. It also makes the listening experience less pleasant for the rider, which could lead them to avoid highly polluted areas in the future.
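A condensed Python sketch of this chord logic; the mapping of PM1 to a MIDI root note and the concrete thresholds are simplified illustrative assumptions (loosely oriented towards the WHO and EU limits above), not our exact production values.

```python
def build_chord(pm1, pm2_5, pm10, low=10.0, high=25.0):
    """Return a list of MIDI notes: the root follows PM1, the chord
    mode follows the PM2.5/PM10 pollution level."""
    root = 48 + int(pm1) % 24              # fold PM1 into two octaves above C3
    if pm2_5 < low and pm10 < 2 * low:     # low pollution: major chord
        return [root, root + 4, root + 7]
    if pm2_5 < high and pm10 < 2 * high:   # modest pollution: minor 7th chord
        return [root, root + 3, root + 7, root + 10]
    # High pollution: polychord, a minor chord stacked on a major chord
    # with the same root, so major and minor third clash dissonantly.
    return [root, root + 3, root + 4, root + 7]
```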
To sonify PM2.5 levels, we chose a rather simple linear mapping to the volume of the produced sounds. Rather than giving each value its own volume, we opted for an approach where small value changes result in the same volume. Our musical inspiration here were the dynamic levels ranging from pp (pianissimo) for very quiet to ff (fortissimo) for very loud. This also helped to denoise the data for the listener. At the same time, it is very straightforward to understand, as higher volumes draw more attention and seem more dangerous.
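A sketch of this stepped volume mapping; the bin edges and velocity values are hypothetical, chosen only to illustrate the quantization into dynamic levels.

```python
# Six dynamic levels from pianissimo to fortissimo as MIDI-style velocities.
DYNAMICS = {"pp": 30, "p": 50, "mp": 65, "mf": 80, "f": 100, "ff": 120}

def pm25_to_velocity(pm2_5, edges=(5.0, 10.0, 15.0, 25.0, 40.0)):
    """Quantize a PM2.5 reading into one of six dynamic levels, so small
    value changes keep the same volume and the data is denoised."""
    level = sum(pm2_5 >= e for e in edges)  # number of edges passed: 0..5
    return list(DYNAMICS.values())[level]
```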
For higher air pollution, we want the sonification to be more alarming, with a faster rhythm. To avoid using the same rhythm monotonously after a threshold is passed, we decided to implement a partly probability-based rhythm.
The rhythm can consist of eighth notes, quarter notes, and half notes. If the PM values are high, we want the music to be faster.
Imagine we have 120 bpm, that is, two beats per second. Our sequencer steps once every second, so for every data point we have two beats (= one half note) to compose.
For each data point, we generate two thresholds determining the probabilities for a slow, medium, or fast rhythm. In the figure below, these thresholds are 0.2 and 0.7. This means a 20% probability for a half note, 50% probability for a medium-fast rhythm with mostly quarter notes, and 30% probability for a fast rhythm with eighth notes.
As shown in the figure below, we generate a random number for each data point. By comparing that to the data-based thresholds, we get the rhythm.
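A minimal Python sketch of this probability-based choice; the linear mapping from PM value to the two thresholds is a hypothetical stand-in for our data-based one.

```python
import random

def choose_rhythm(pm, pm_max=50.0):
    """Pick the note lengths for one data point (two beats at 120 bpm).
    Higher PM moves probability mass towards faster rhythms."""
    level = min(pm / pm_max, 1.0)      # normalize the PM value to [0, 1]
    t_slow = 0.5 * (1.0 - level)       # e.g. 0.2 for level 0.6
    t_medium = 1.0 - 0.5 * level       # e.g. 0.7 for level 0.6
    r = random.random()
    if r < t_slow:
        return ["half"]                # slow: one half note
    if r < t_medium:
        return ["quarter", "quarter"]  # medium: quarter notes
    return ["eighth"] * 4              # fast: eighth notes
```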
To finally hear the sonified data, we use PD (Pure Data). We send the preprocessed information to PD via OSC (Open Sound Control), a network protocol mainly used for real-time processing of sound data. Having a background in computer science, we were able to set up the OSC client on the Python side quite fast but needed support for the PD side. The tutorials by von Coler [24] and Davison [25] helped, so that messages with the preprocessed data can now be used inside our PD patch.
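On the Python side, sending the preprocessed values can look like the following sketch using the python-osc package; the address patterns and the port are our own choices here, and the PD patch has to listen on the same port.

```python
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

client = SimpleUDPClient("127.0.0.1", 9000)  # our PD patch listens on 9000

# One preprocessed data point: chord notes, dynamic level, and rhythm.
client.send_message("/chord", [48, 52, 55])  # MIDI notes of the current chord
client.send_message("/velocity", 80)         # stepped volume from PM2.5
client.send_message("/rhythm", "quarter")    # note length chosen for this step
```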
As a last step before hearing the final sonification, we need a synthesizer. After starting with simple sine waves and pitches manipulated by the PM values, we realized that we need different approaches to communicate the meaning of the data to the listeners on the bike more intuitively.
We want to create this intuitive understanding with samples that sound like air pollution. Air pollution has no natural sound, but we instantly had some associations: traffic or engine exhaust, breathing, coughing, wind, bubbles, a doctor's stethoscope, and brass instruments. Thinking of measuring devices for hazardous substances in the air, we thought of Geiger counters and smoke detectors.
In our final demo for this semester, we decided to combine the rich sound of self-recorded saxophone samples and a Geiger counter. Via OSC, we trigger PD to play the samples at a certain speed, based on the calculated pitch. In managing this last step, we received great help from Kaffe Matthews, Henrik von Coler, and the PD tutorials by Kreidler [26] and Brown [23].
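The underlying relation between pitch and playback speed is simple: transposing a sample by n semitones means playing it at 2^(n/12) times its original speed. A tiny helper, assuming the calculated pitch is a MIDI note number and the sample was recorded at a reference note of our choosing:

```python
def playback_speed(pitch, reference=60):
    """Speed ratio that transposes a sample recorded at `reference`
    (a MIDI note) to `pitch`: one octave up doubles the speed."""
    return 2.0 ** ((pitch - reference) / 12.0)

print(playback_speed(72))  # 2.0 -> play twice as fast, one octave higher
```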
Code: https://github.com/malte-b/musical_env_bike
In-Class Demo: https://1drv.ms/b/s!AnD1AVr_uHBJkHPlirrGs40Kx7ko?e=8xV6Z3
Demo using chords for the first time: https://github.com/malte-b/musical_env_bike/blob/main/demos/demo_synth.wav
Final Demo: https://github.com/malte-b/musical_env_bike/blob/main/demos/Final_Demo_Env_Bike_WS20_21.m4a
Our goal in this project was to make people more aware of the air pollution in their daily lives by sonifying real-time air pollution data on a bike. At the same time, we used musical foundations for generating sounds to make differences in pollution more noticeable for a layperson, with the sound becoming less and less pleasant the higher the pollution gets. We think we have come quite far with our work, considering that we did not re-use any existing code of the Sonic Bike project but wrote all the logic behind our sound generation from scratch in Python, and also wrote the necessary PD patches to get the sounds we envisioned without any prior experience with PD.
Sadly, owing to the ongoing pandemic and lockdown measures, we were unable to make our solution work on an actual Sonic Bike with real-time data and had to rely on recorded rides instead. Adding this functionality would be the first step in making our vision of cyclists hearing the air pollution around them a reality. The user feedback arising from those rides would certainly help us make the experience even better.
Right now, the chords do not take into account which notes were played before them. Adding a memory of what was played before might make our sounds even more musical and help smooth out transitions. This could be done using machine learning techniques, for example.
Furthermore, within one time interval, the same chord is played repeatedly. This could be changed either by playing fewer notes, as the repetition may become annoying over time, or by playing a melody over a static chord. Also, the meter is always 4/4, which could be varied to create a different experience.
Another interesting consideration for the future could be interactive sonification. Hermann and Hunt [5] state that most sonification "fails to engage users in the same way as musical instruments" because it lacks physical interaction and naturalness. The naturalness of sound in the real world already caught our attention when thinking about an intuitive way of monitoring air pollution, but sonification with physical interaction possibilities would be a new level. For our sonification, we map multiple dimensions of data to acoustic properties. Hermann and Hunt recommend including interactive controls and input devices in this mapping. For example, the cyclist could choose whether the sound for the most alarming data points is a Geiger counter, a smoke detector, or heavy breathing. Maybe the preferences even change in different locations of the city. We imagine that interactive customization of the sonification would increase user satisfaction and usage and might consolidate personal associations with air pollution.
We would like to thank Kaffe Matthews and Henrik von Coler for their help and the expertise they shared with us.
Kaffe Matthews inspired and led us to the sonic bike topic. She provided data and new ideas in several meetings.
Henrik von Coler was our technical mentor and helped us in multiple meetings to set up the technical framework.
[1] Anirudh Sharma, TED@BCG Toronto (2018). Ink made of air pollution. Retrieved from https://www.ted.com/talks/anirudh_sharma_ink_made_of_air_pollution on 2021-03-27
[2] Umweltbundesamt (2021). Feinstaub. Retrieved from https://www.umweltbundesamt.de/themen/luft/luftschadstoffe-im-ueberblick/feinstaub on 2021-03-27
[3] World Air Quality Index Team (started in 2008). The World Air Quality Project: Echtzeit-Luftqualitätsindex (LQI). Retrieved from https://aqicn.org/here/de/ on 2021-03-27
[4] Barrass, S., & Kramer, G. (1999). Using sonification. Multimedia systems, 7(1), 23-31.
[5] Hermann, T., & Hunt, A. (2005). Guest editors' introduction: An introduction to interactive sonification. IEEE multimedia, 12(2), 20-24.
[6] Kramer, G. (1994). Auditory Display: Sonification, Audification and Auditory Interfaces. SFI Studies in the Sciences of Complexity, Proceedings Volume XVIII. Addison Wesley, Reading, Mass.
[7] Kramer, G., Walker, B. N., Bonebright, T., Cook, P., Flowers, J., Miner, N., et al. (1999). The Sonification Report: Status of the Field and Research Agenda. Report prepared for the National Science Foundation by members of the International Community for Auditory Display. Santa Fe, NM: International Community for Auditory Display (ICAD)
[8] McGee, R. (2009). Auditory displays and sonification: Introduction and overview. University of California, Santa Barbara.
[9] Rauterberg, M., & Styger, E. (1994). Positive effects of sound feedback during the operation of a plant simulator. In International Conference on Human-Computer Interaction (pp. 35-44). Springer, Berlin, Heidelberg.
[10] Tractinsky, N., Katz, A. S., & Ikar, D. (2000). What is beautiful is usable. Interacting with computers, 13(2), 127-145.
[11] Vickers, P. (2011). Sonification for process monitoring. In The sonification handbook (pp. 455-492). Logos Verlag.
[12] Arango, J. J. (2018). AirQ Sonification as a context for mutual contribution between Science and Music. Revista Música Hodie, 18(1).
[13] Bellona, J., & Park, J. (2014). #Carbonfeed, About. Retrieved from https://carbonfeed.org/ on 2021-03-16
[14] cdm (2013). A Sci-Fi Band and Music Made from Ozone Data: Elektron Drum Machine, Sax Sonification. Retrieved from https://cdm.link/2013/11/sci-fi-electronic-band-music-made-ozone-data-elektron-drum-machine-sonification/ on 2021-03-16
[15] FoAM (2020). Sonic Kayaks. Retrieved from https://fo.am/activities/kayaks/ on 2021-03-16
[16] Harmonic Laboratory (2014). #CarbonFeed - The Weight of Digital Behavior. Retrieved from https://vimeo.com/109211210 on 2021-03-16
[17] Kasper Fangel Skov (2015). Sonification excerpt #4: Rio de Janeiro. Retrieved from https://soundcloud.com/kasper-skov/sonification-excerpt-4-rio-de on 2021-03-16
[18] St Pierre, M., & Droumeva, M. (2016). Sonifying for public engagement: A context-based model for sonifying air pollution data. International Community on Auditory Display. (sound files: https://soundcloud.com/marcstpierre retrieved 2021-03-16)
[19] Bicrophonic Research Institute (2020). Environmental Bike (2020). Retrieved from https://sonicbikes.net/environmental-bike-2020/ on 2021-03-16
[20] Kaffe Matthews (2020). Environmental Bike (2020). Retrieved from https://www.kaffematthews.net/project/environmental-bike-2020 on 2021-03-16
[21] Kaffe Matthews (2020). Sukandar connects the air pollution sensor / Environmental Bike gets real. Retrieved from https://www.kaffematthews.net/category/Lisbon/ on 2021-03-16
[22] Parncutt, R. (2014). The emotional connotations of major versus minor tonality: One or more origins? Musicae Scientiae, 18(3), 324-353.
[23] QCGInteractiveMusic/Andrew R. Brown (2020). 39. Modifying Audio File Playback with Pure Data. Real-time Music and Sound with Pure Data vanilla. Retrieved from https://www.youtube.com/watch?v=br7Hcx_FLoc on 2021-03-28
[24] Henrik von Coler (2020). Puredata. Retrieved from https://hvc.berlin/puredata/ on 2021-03-16
[25] Patrick Davison (2009). Open Sound Control (OSC). Retrieved from https://archive.flossmanuals.net/pure-data/network-data/osc.html on 2021-03-16
[26] Johannes Kreidler (2009). Programmierung Elektronischer Musik in Pd. Kapitel 3. Audio. Retrieved from http://www.pd-tutorial.com/german/ch03.html on 2021-03-16