Research
Research into how a particular soundscape works reveals a complex adaptive system, in which different people each experience their own soundscape at the same time. How it sounds to each of them is determined by factors such as where they are and what else is happening on the square. The project Klinkende Pleinen investigates how these complex systems work and which principles might play a role in designing soundscapes.

In collaboration with the urban development team STIPO and the University of Groningen, the project explores the sound of the city step by step. What does it consist of? What plays a role in it? What is needed to design the acoustics? Which disciplines are involved? What are good and bad practices? Over the course of various sub-projects, the focus of the research becomes increasingly clear, and the results that are gathered build up knowledge of the design parameters and their possible applications.
Weesperplein
At the beginning of 2014, an initial sub-project was completed in which students translated the available knowledge into an interactive digital model of Weesperplein square in Amsterdam. In this model, designers from different disciplines can place objects in the square and directly experience the acoustic consequences by "walking around" in it. The practical applications of the model turned out to be so promising that the students are considering starting a business.

In addition, an inventory was made of the soundscapes of a number of squares across the Netherlands, looking at good and bad practices. In the process, several research methods were evaluated for their usefulness in soundscape design. In the latest version, binaural audio recordings were combined with GoPro video recordings and with spectral and sound level measurements.
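The text does not detail how the spectral and sound level measurements were processed. As a minimal sketch, assuming a mono WAV recording (the file name below is hypothetical) and an uncalibrated dBFS reference rather than calibrated dB SPL, an analysis along these lines could look as follows:

```python
# Sketch only, not the project's actual tooling: derive an equivalent sound level
# and a spectrogram from a soundscape recording. Assumes a mono WAV file named
# "weesperplein.wav" (hypothetical) and reports the level in dBFS, i.e. relative
# to full scale, not calibrated dB SPL.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("weesperplein.wav")   # hypothetical file name
samples = samples.astype(np.float64)

# Normalise to [-1, 1] so the level is expressed relative to full scale.
peak = np.max(np.abs(samples))
if peak > 0:
    samples /= peak

# Equivalent continuous level (Leq) over the whole recording, in dBFS.
leq_dbfs = 10 * np.log10(np.mean(samples ** 2))

# Spectrogram: how the energy per frequency band evolves over time.
freqs, times, power = spectrogram(samples, fs=rate, nperseg=4096)

print(f"Leq: {leq_dbfs:.1f} dBFS")
print(f"Spectrogram: {power.shape[0]} frequency bins x {power.shape[1]} time frames")
```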
App
In the latest sub-project (2014-2015), a prototype of an analysis tool for soundscapes was developed. The input comes from a large number of users, who record the soundscape of a location with an iPhone app and fill in a questionnaire. The results are uploaded through the app to a server, where the audio recording is analysed by a neural network. That analysis is linked to the questionnaire results, and the combined output is interpreted using Russell's (1980) circumplex model of affect. On this basis, the neural network makes a statement about the suitability of the soundscape with regard to the users' purpose for being at the location. The software is well programmed (modular) and can certainly be developed further in subsequent sub-projects.

Read more about this sub-project (in Dutch)
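The pipeline is only described at a high level here. As a rough sketch of how the server side might combine a network's audio-based estimate with questionnaire answers into valence/arousal coordinates on Russell's (1980) circumplex and score suitability for the visitor's purpose, one could imagine something like the following (the names, target values, and weighting are illustrative assumptions, not the project's actual implementation):

```python
# Illustrative sketch: the project's real server software, features and network
# are not described in detail. The purpose targets and blending weight below
# are assumptions made for this example.
from dataclasses import dataclass
import math

@dataclass
class Affect:
    valence: float   # -1 (unpleasant) .. +1 (pleasant), Russell (1980) circumplex axis
    arousal: float   # -1 (calm)       .. +1 (activated), Russell (1980) circumplex axis

# Hypothetical target points on the circumplex per purpose of visit.
PURPOSE_TARGETS = {
    "relaxing": Affect(valence=0.6, arousal=-0.5),
    "meeting": Affect(valence=0.5, arousal=0.3),
    "passing through": Affect(valence=0.0, arousal=0.0),
}

def combine(audio_estimate: Affect, questionnaire: Affect, weight: float = 0.5) -> Affect:
    """Blend the network's audio-based estimate with the self-reported one."""
    return Affect(
        valence=weight * audio_estimate.valence + (1 - weight) * questionnaire.valence,
        arousal=weight * audio_estimate.arousal + (1 - weight) * questionnaire.arousal,
    )

def suitability(observed: Affect, purpose: str) -> float:
    """Score 0..1: how close the combined observation lies to the purpose's target."""
    target = PURPOSE_TARGETS[purpose]
    distance = math.hypot(observed.valence - target.valence, observed.arousal - target.arousal)
    return max(0.0, 1.0 - distance / (2 * math.sqrt(2)))  # normalise by the max distance on [-1, 1]^2

# Example: the network judges the recording calm and pleasant, and the visitor agrees.
combined = combine(Affect(0.5, -0.4), Affect(0.7, -0.3))
print(f"Suitability for relaxing: {suitability(combined, 'relaxing'):.2f}")
```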