A SOUNDSCAPE OF KNOWLEDGE: LISTENING TO WIKIPEDIA
DOI: 10.23951/2312-7899-2020-2-12-24
The paper’s primary objective is to analyze a particular example of sonification at work, namely the project called Listen to Wikipedia, where certain events, such as creating, editing or deleting Wikipedia entries or the registration of new users, trigger musical events arranged into a somewhat probabilistic, ambient-like sonic output. For that purpose, in the opening paragraphs of the paper we give a brief outline of the current state of research in sonification, drawing on recent handbooks and companions. Next, we discuss which qualities of hearing as a primary medium are important for understanding the limitations and strengths of sonification as a technique of information transfer. Drawing on Michel Chion’s seminal work on sound in cinema, we also discuss how the three listening modes (causal, semantic and reduced listening) can affect such transfer and lead to different approaches to sonification. We also discuss the generative nature of sonification and how the generative principles at work within a particular sound interface may be of greater importance than the input data per se. The analysis of Listen to Wikipedia proceeds along the lines laid out in the introductory paragraphs. Close attention is paid to the musical side of the project, beginning with its rhythmic and instrumental qualities and finally zooming in on its harmonic content. Of equal importance is the fact that the generative principle used in the project shifts the listener’s attention from the content of Wikipedia to its metaphorical and virtual existence as a sounding body, a sonically active medium whose transformations are translated into a chain of musical events. This body may be assigned a density (corresponding to the number of links among the entries) and a size (growth in larger increments produces lower sounds, and vice versa).
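The mapping just described (edit events trigger notes; larger changes produce lower sounds) can be sketched in a few lines of Python. This is an illustrative reconstruction only, not the project’s actual code: the instrument names, pitch range and size cap are assumptions introduced here for the example.

```python
import math

# Assumed, illustrative constants -- not Listen to Wikipedia's real parameters.
MIN_MIDI, MAX_MIDI = 36, 84    # playable pitch range as MIDI note numbers
MAX_CHANGE = 10_000            # cap on edit size (bytes) for scaling purposes

def sonify_event(change_bytes: int, is_addition: bool) -> dict:
    """Map one recent-changes event to an instrument and a pitch.

    Additions and removals select different (hypothetical) instruments;
    the magnitude of the change sets the pitch, with larger edits
    mapped to lower notes, as the abstract describes.
    """
    instrument = "bell" if is_addition else "string"
    # Logarithmic scaling, since both edit sizes and pitch perception
    # span several orders of magnitude.
    size = min(abs(change_bytes), MAX_CHANGE)
    scaled = math.log1p(size) / math.log1p(MAX_CHANGE)   # 0.0 .. 1.0
    # Invert: bigger edits -> lower pitch.
    pitch = round(MAX_MIDI - scaled * (MAX_MIDI - MIN_MIDI))
    return {"instrument": instrument, "pitch": pitch}

print(sonify_event(15, True))      # a small addition -> a high bell tone
print(sonify_event(8000, False))   # a large removal -> a low string tone
```

In a running system, a function like this would be fed from Wikipedia’s live recent-changes feed and the resulting notes handed to a synthesizer; the sketch isolates only the data-to-sound mapping itself.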
Thus, the causal and reduced listening modes appear to be encouraged in the listener, whereas the semantic mode is somewhat downplayed. The euphony and sonority of the resulting sound feed open up the possibility of perceiving it as a natural process not much different from the purl of water or the rustle of leaves, which makes one wonder whether distributed subjectivity is what is really being represented in the sonification. We also point out, based on our previous research, that sonic practices and phenomena very often serve as dynamic interfaces between the private and the public, and Listen to Wikipedia may also be construed as such an interface. The nature of the audio feed creates an immersive environment that facilitates the process of joining Wikipedia and presents the latter as a conflict-free, harmonious medium.
Keywords: sonification, auditory display, connotation, sonic humanities, sound studies, soundscape, Wikipedia
References:
Chion, M. (1994) Audio-vision: sound on screen. New York.
Diaz-Merced, W., Candey, R., Brickhouse, N., Schneps, M., Mannone, J., Brewster, S., & Kolenberg, K. (2011) Sonification of Astronomical Data. Proceedings of the International Astronomical Union. 7 (S285). Pp. 133–136.
Hermann, T., Hunt, A., & Neuhoff, J. G. (2011) The sonification handbook. Berlin.
Listen to Wikipedia. URL: http://listen.hatnote.com/
Logutov, A.V. (2017) Sonic practices and the materiality of urban space. Urban research and practices. Vol. 2. No. 4. Pp. 39–50. (In Russian).
McLuhan, M. (1986) Quotes. [Online] Available from: https://www.marshallmcluhan.com/mcluhanisms/ (Accessed: 25.02.2020).
Schafer, R.M. (1994) The soundscape: our sonic environment and the tuning of the world. Rochester, Vermont.
Seifert, D. (2013) Fall asleep to the sound of Wikipedia. The Verge. 9 Aug. 2013. [Online] Available from: https://www.theverge.com/2013/8/9/4607240/fall-asleep-to-the-sound-of-wikipedia (Accessed: 25.02.2020).
Sterne, J. (2003) The audible past. Duke University Press.
Sterne, J. (2012) Sonic imaginations. In: Sterne, J. (ed.) The sound studies reader. New York. Pp. 17–41.
Worrall, D. (2019) Sonification design: from data to intelligible soundfields. Cham.
Issue: 2, 2020
Rubric: ARTICLES
Pages: 12–24