
Making Things Audible

When:

  • 05.03.2022
  • 14:00 - 24:00
Where:
  • ACUD MACHT NEU
  • Veteranenstraße 21
  • 10119 Berlin Mitte
  • U8 Rosenthaler Platz
  • Free Admission
With Making Things Audible, the TU Studio presents works by students and researchers from TU Berlin, UdK Berlin and HPI Potsdam. The day of installations, presentations and concerts is the closing event for this semester's Edgard-Varèse Guest Professors Leslie García and Paloma López (interspecifics, Mexico). In very different approaches, the presented projects deal with sonification or ways of controlling acoustic processes. Find the detailed program below.    

Program

14:00 - 22:00 Installations

HPI Potsdam:

  • Listen To Air Pollution (Carla Terboven)
  • Bird's Ear (Simon Wietheger & Vincent Xeno Rahn)

UdK Berlin:

  • Sound Folds (Erika Körner)
  • Piega (Giorgia Petri)

TU Berlin:

  • Cloud hands (Tim-Tarek Grund)
  • Brainwave Data (Christian Kastner)
  • 3 cranes (Lennart Seiler)
  • Archipielago (Pablo Torres)
  • Resoplant (Valentin Lux)
  • Network Feedback (Benedikt Wieder)

16:00 - Talks & Presentations

  • Paloma López & Leslie García (Interspecifics)
  • Julia von Thienen (HPI Potsdam)
  • Christoph Thiede: Auditory Displays in Programming (HPI Potsdam)
  • Henrik von Coler (TU Berlin)

20:00 - Concert

  • EOC plays Bodysounds (by Jean P’ark and Astra Pentaxia)
  • Open Modular Session

Details

Listen To Air Pollution

Air pollution is generated by emissions from road traffic, power plants, heating, and many other sources. Although it is invisible, it causes serious health and climate problems.
The project "Listen To Air Pollution" makes people more aware of the air quality in their daily lives by sonifying air pollution data in real time with a portable device. Live data, collected by a sensor, is directly sonified and presented to the user via headphones.

But what does air pollution sound like? The audio is based on recorded samples; each degree of pollution has its own particular sound. This gives the user the possibility to perceive an invisible phenomenon.
"Listen To Air Pollution" enables each visitor to individually experience the air quality at the ACUD Berlin.

  • Carla Terboven is a 24-year-old master's student at the Hasso Plattner Institute in Potsdam. She has a background in computer science and recently became interested in data sonification. She is particularly fascinated by working with live data, which allows you to listen to your environment and experience it in new ways.

Piega

Piega focuses on the interaction between body, fabric and machine, keeping the human body as the central point. Through the use of folds, we create volumes and shapes that go beyond the human body. The purpose of this project is to design a structure of wearable folded textile sensors in order to create a musical instrument made up of the combination of the performer's body and their garment. The challenge is to use a folded sensor structure that detects human body movements and provides data about them without requiring a tight fit.
Two machine learning systems support the project: an interactive one, used as a creative tool to transform body movements into sound, and an offline one to analyse the data and demonstrate the reliability of the system that was built.
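
The interactive part could look something like the following Python sketch, which maps readings from folded textile sensors to synthesis parameters sent out via OSC; the number of sensor channels, the OSC addresses and the choice of regressor are assumptions, not Piega's actual implementation.

# Illustrative only: regression from sensor readings to sound parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical training data: sensor readings -> [pitch, brightness]
X_train = np.random.rand(200, 4)   # 4 folded-sensor channels (assumed)
y_train = np.random.rand(200, 2)   # 2 sound parameters (assumed)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000)
model.fit(X_train, y_train)

client = SimpleUDPClient("127.0.0.1", 9000)  # e.g. a sound-synthesis patch

def on_new_reading(sensors):
    """Map one frame of sensor data to sound parameters and send them out."""
    pitch, brightness = model.predict([sensors])[0]
    client.send_message("/piega/pitch", float(pitch))
    client.send_message("/piega/brightness", float(brightness))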
 

  • Giorgia Petri is a creative technologist and researcher. Her work bridges art and science with a particular interest in the interaction between humans and technology. She is the co-founder of Calembour, a project focused on interactive media art and sound design. She is currently a research assistant, studying towards a PhD in Wearable Computing at Berlin University of the Arts and the Einstein Center Digital Future. Her research focuses on the relationship between body and fabric, investigating the design of three-dimensional folded textile sensors.
  • TEAM: Giorgia Petri (Interaction Designer, UDK Berlin), Sophie Skach (Fashion Designer, QMU London), Melissa Wedekind (Performer)

Sound Folds

In paper and textile materials, a fold is evidence that action has taken place, a recorded memory of interaction. The goal of the Sound Folds research project was to develop interactive garments for three musicians playing the trumpet, cello and vibraphone, based on their individual gesture patterns. The rhythmic and trained movements of the three musicians were materialized as folds in paper garments, which were used as a basis to design textile garments with embedded folded sensors. The result is a combination of acoustic instrument playing with interactive and responsive electronic music elements, intended for blended performance settings. Sound Folds benefitted from the work of an interdisciplinary team of artists and engineers, spanning expertise in e-textiles, fashion design, motion capture, and interactive music systems involving machine learning for gesture recognition.
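
A gesture-recognition step of the kind mentioned above could be sketched in Python as follows; the window size, features, labels and classifier are illustrative assumptions rather than the project's actual pipeline.

# Hedged sketch: classify short windows of folded-sensor data into gestures.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

WINDOW = 50  # samples per analysis window (assumed)

def features(window):
    """Simple per-channel statistics as features for one window."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

# Hypothetical training material: (n_windows, WINDOW, n_channels) plus labels
train_windows = np.random.rand(120, WINDOW, 3)
train_labels = np.random.choice(["bowing", "mallet_roll", "rest"], size=120)

clf = KNeighborsClassifier(n_neighbors=5)
clf.fit([features(w) for w in train_windows], train_labels)

def classify(window):
    """Return the recognised gesture for one incoming sensor window."""
    return clf.predict([features(window)])[0]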

  • Erika Körner is a multi-disciplinary designer and researcher searching for the bridge between digital and physical materialities. Born in the United States at the foothills of the Appalachians, they moved to Berlin in 2012 and are currently finalizing a BA in experimental Fashion Design at the University of the Arts (UdK). Their work probes new ways of defining embodiment and clothing by constructing mixed realities for human/nonhuman bonding.
  • Team: Pauline Vierne (Electronic Textiles, UdK), Erika Körner (Garment Design, UdK), Giorgia Petri (Interaction, UdK), Paul Bießmann (Interaction & Music), Kasper Schleiser (IoT, FU), Emmanuel Bacelli (IoT, FU), Felix Bießmann (Data Science, Beuth Hochschule), Berit Greinke (Wearable Computing, UdK)
   

Cloud Hands

Cloud Hands is an interactive installation in which you can control granular sample playback with hand position tracking and gesture detection. A Python script estimates hand landmark positions and calculates distances between the landmarks. The data is sent via OSC to a Pure Data patch to control synthesis parameters, and to a Wekinator project to estimate the hand gesture. The Wekinator project uses a k-Nearest Neighbor classification algorithm to detect the trained gesture of the left hand, the American Sign Language gesture for "I love you". The gesture is made by clenching the fist, extending the pinky finger and forming the letter L with the index finger and thumb. By hovering the left hand over a position while maintaining the gesture, one is able to replace audio samples and change the playback speed. (Tim-Tarek Grund)
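
A minimal Python sketch in the spirit of that description could look like the following: it estimates hand landmarks (here with MediaPipe, one possible choice), computes a few wrist-to-fingertip distances, and sends them via OSC to a Pure Data patch and to Wekinator. The ports, OSC addresses and chosen distances are assumptions, not the installation's actual script.

import cv2                      # pip install opencv-python
import mediapipe as mp          # pip install mediapipe
import numpy as np
from pythonosc.udp_client import SimpleUDPClient

pd_client  = SimpleUDPClient("127.0.0.1", 8000)  # Pure Data patch (assumed port)
wek_client = SimpleUDPClient("127.0.0.1", 6448)  # Wekinator's default input port

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        pts = np.array([(p.x, p.y) for p in lm])
        # Distances from the wrist (landmark 0) to each fingertip (4, 8, 12, 16, 20)
        dists = [float(np.linalg.norm(pts[i] - pts[0])) for i in (4, 8, 12, 16, 20)]
        pd_client.send_message("/cloudhands/position", [float(pts[0][0]), float(pts[0][1])])
        wek_client.send_message("/wek/inputs", dists)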

Resoplant

Our influence on the environment is easy to see nowadays, but there are ways we may not be aware of, and we can often only barely grasp how the environment shapes us. Resoplant addresses this by giving the visitor the chance to explore a sonic landscape and interact with it through the physical environment, in the form of plants and the room itself.

By touching the plant, you connect the electrical potentials of room and plant, adding your own potential. The measured plant potentials control parts of the synthesized sound. The room acts as a resonance body for the loudspeakers, and its eigenfrequencies form an individual scale. The resonance creates an impression that depends on where you stand in the room. This interactive installation acts more like an instrument and especially invites playing together with others. Have you ever been inside an instrument?

While playing, you take on the role of a medium between plant and room. The border between instrument and performer blurs. As in nature, everything is part of the same system. You will experience both influencing and being influenced. Play the room through the touch of a plant. (Valentin Lux)
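
As a sketch of the two ideas involved (under assumptions, not the installation's code), the room's low-order eigenfrequencies can be computed from its dimensions and used as a scale, and a measured plant potential can then select a note from it. The room dimensions, the potential range and the mapping are placeholders.

import itertools
import math

C = 343.0                    # speed of sound in air, m/s
LX, LY, LZ = 6.0, 4.5, 3.0   # assumed room dimensions in metres

def room_modes(max_order=3):
    """Mode frequencies f = (C/2) * sqrt((nx/LX)^2 + (ny/LY)^2 + (nz/LZ)^2)."""
    freqs = set()
    for nx, ny, nz in itertools.product(range(max_order + 1), repeat=3):
        if nx == ny == nz == 0:
            continue
        f = (C / 2.0) * math.sqrt((nx / LX) ** 2 + (ny / LY) ** 2 + (nz / LZ) ** 2)
        freqs.add(round(f, 1))
    return sorted(freqs)

SCALE = room_modes()

def frequency_for(potential, p_min=-0.2, p_max=0.2):
    """Map a plant potential (volts, assumed range) onto the room-mode scale."""
    index = int((potential - p_min) / (p_max - p_min) * (len(SCALE) - 1))
    return SCALE[max(0, min(index, len(SCALE) - 1))]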

Brainwave Data

When we listen to music, we react differently to different types of music. It is well known that our emotional reaction to music originates in the brain. However, identifying the regions of the brain in which these emotions arise, and how they work, is still an active field of research.
In this project, I want to investigate different states and recurring patterns in brain activity while listening to music. For brainwave recording I will use a "Muse 2" meditation headband, which is connected to a Raspberry Pi 4 that processes the data. After processing, the resulting signals are sent to a sound installation, which includes oscillators, subtractive elements (such as filters) and other digital sound processing tools. The brain's reaction to the music triggers different parameters of the sound set. A change in the sound set reflects a change in brain activity, which might point to an emotional reaction of the listener. For deeper pattern analysis, the streamed brain data of different listeners is recorded. To find emotional reaction patterns, an ML approach aimed at artists and musicians will be used. (Christian Kastner)
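
One plausible shape for the Raspberry Pi side of such a setup is sketched below in Python: it receives EEG band-power values over OSC, derives a simple relaxation measure, and forwards it to the sound installation as a control parameter. The OSC addresses and ports are assumptions, not the Muse protocol or the project's code.

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

synth = SimpleUDPClient("127.0.0.1", 9000)   # assumed port of the sound patch
state = {"alpha": 0.0, "beta": 1e-6}

def on_band(address, value):
    band = address.rsplit("/", 1)[-1]        # ".../alpha" or ".../beta"
    state[band] = float(value)
    # A crude relaxation index: more alpha relative to beta -> higher value.
    synth.send_message("/brainwave/relaxation",
                       state["alpha"] / (state["alpha"] + state["beta"]))

dispatcher = Dispatcher()
dispatcher.map("/eeg/alpha", on_band)        # hypothetical incoming addresses
dispatcher.map("/eeg/beta", on_band)

BlockingOSCUDPServer(("0.0.0.0", 5000), dispatcher).serve_forever()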

Bodysounds

Bodysounds is an ongoing intimate audio/visual collaboration between Jean P’ark and Astra Pentaxia, exploring each interviewee's favourite sound, which they make using only their body. These sound samples will be performed in collaboration with the EOC, accompanied by visuals made from the recorded video material.

  • The Electronic Orchestra Charlottenburg (EOC) explores the improvisation and interpretation of electroacoustic music. This includes the interaction of diverse electronic instruments and their spatialization in real time. The EOC was founded at the Electronic Music Studio of the Technical University of Berlin within a seminar of the Audio Communication Group. It offers a platform for developing and applying new instruments and concepts in the realm of electroacoustic music.

Auditory Displays for Exploratory Programming

In the past two semesters, I have investigated the use of auditory displays and sonifications to enhance feedback for programmers. In this talk, I will describe the approach and the methods that led to the sonyx prototype, and give a summary of the approach's evaluation.

  • Christoph Thiede is studying IT-Systems Engineering in the master's program of the Hasso Plattner Institute in Potsdam. Christoph works at the chair of Software Architectures with a strong interest in improving the programming experience for developers. As a core dev of the interactive programming environment Squeak/Smalltalk, Christoph regularly works on practical development issues while always striving to build more powerful tools for programmers.

3 cranes

3 cranes is a generative soundscape composition based on the motion of cranes operating at a construction site. Centered around the idea of a site-specific sonification, an imperfect computer vision algorithm tries to track the movements of the cranes and transduces them into spatial and vector-based data. The sonic material comes from field recordings gathered at the construction site and is processed by the algorithm's output. Together they unfold a binaural sound space morphing through different states of sonic motion. (Lennart Sailer)
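
The "imperfect" tracking described above could, for example, be approximated with sparse optical flow, as in this hedged Python sketch: feature points are followed between video frames and the mean motion vector is sent out via OSC. The video source, the parameters and the OSC address are assumptions, not the piece's actual algorithm.

import cv2
import numpy as np
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)          # assumed OSC target
cap = cv2.VideoCapture("construction_site.mp4")      # placeholder video source

ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.3, minDistance=7)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_pts is None or len(prev_pts) == 0:       # re-detect if tracking was lost
        prev_pts = cv2.goodFeaturesToTrack(gray, maxCorners=50, qualityLevel=0.3, minDistance=7)
        prev_gray = gray
        continue
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
    good = status.flatten() == 1
    if next_pts is not None and good.any():
        dx, dy = (next_pts[good] - prev_pts[good]).mean(axis=0).ravel()
        client.send_message("/cranes/motion", [float(dx), float(dy)])
        prev_pts = next_pts[good].reshape(-1, 1, 2)
    else:
        prev_pts = None
    prev_gray = gray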

Archipielago

An abstraction of a topographic model, Archipielago explores the understanding of the landscape as both a product and a further medium of interaction among diverse agencies, whose interplay takes place at different scales and layers. The installation invites visitors to interact with the moss map, causing changes in its bioelectrical signals, which serve as both triggers and modulation sources for a soundscape composed with analogue synthesizers. On a further level of interaction, the model is extended through light, which works as an abstraction and visualisation of seismic events in Europe over the last year with a magnitude greater than 2.5 Mw.
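
The trigger/modulation idea could be sketched in Python along these lines (purely illustrative; thresholds, signal ranges and OSC addresses are assumptions): a bioelectric reading fires a trigger when it crosses a threshold and otherwise acts as a slow modulation source.

from pythonosc.udp_client import SimpleUDPClient

synth = SimpleUDPClient("127.0.0.1", 9000)   # assumed bridge to the analogue synths
THRESHOLD = 0.05                             # volts, assumed
_last = 0.0

def on_bioelectric_sample(volts):
    """Call this for every new reading from the moss electrodes."""
    global _last
    if _last < THRESHOLD <= volts:           # rising edge -> trigger an event
        synth.send_message("/archipielago/trigger", 1)
    # The continuous value doubles as a modulation source (normalised, assumed range)
    synth.send_message("/archipielago/mod", max(0.0, min(1.0, volts / 0.2)))
    _last = volts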

  • Pablo Torres is an anthropologist and current student of the M.A. Design & Computation at the UdK and the TU in Berlin. Most of his inquiries on sociocultural phenomena are developed and referred to through sonic means, with which he aims to develop diverse ways of understanding such phenomena. He finds great interest in sensorial experience as a vehicle with which knowledge on the relation between humans and their environment is reframed.

Bird's Ear

Birds singing in the park, river waves, the squeal of departing trains, cars on a highway: the soundscape of our cities is composed of an enormous number of sounds. When walking along a street, we only notice the small portion from our immediate surroundings. But what would we hear over a wider range? Bird's Ear lets you experience the sound of a whole quarter at once, just like a flying bird would. The web application allows the user to navigate a map and choose an interesting area. A spatial sound composition is then created that aggregates various audio samples, taking into account the landscape, live public transport and road traffic data, as well as other data sources. The project aims to make it possible to compare different soundscapes in an artistic manner. Future developments of this work are also intended to offer simulations of different scenarios (e.g. car-free cities).
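
One way such an aggregation could look is sketched below in Python; the landscape categories, sample files and weighting are assumptions about the general approach, not Bird's Ear's actual data model.

# Hypothetical landscape-category -> sample mapping
SAMPLES = {
    "park":    "birds.wav",
    "river":   "waves.wav",
    "railway": "train_start.wav",
    "road":    "traffic.wav",
}

def compose_area(area):
    """area: dict with 'landscape' shares (0..1 per category) and optional
    live counts such as {'vehicles_nearby': 8}."""
    layers = []
    for category, share in area.get("landscape", {}).items():
        if category in SAMPLES and share > 0:
            layers.append({"sample": SAMPLES[category], "gain": min(1.0, share)})
    # Live data can scale a layer, e.g. more vehicles -> louder traffic
    if "vehicles_nearby" in area:
        layers.append({"sample": SAMPLES["road"],
                       "gain": min(1.0, area["vehicles_nearby"] / 20.0)})
    return layers

print(compose_area({"landscape": {"park": 0.6, "river": 0.2}, "vehicles_nearby": 8}))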

  • Simon (23) and Vincent (22) study computer science at the Hasso Plattner Institute in Potsdam. During their master's studies, they worked together on a project sonifying public transport delays. Inspired by this work, they started developing Bird's Ear. Simon is particularly interested in theoretical computer science and algorithm engineering, and is very happy that he can put his algorithmic skills to use for Bird's Ear's data processing. Besides his studies, Vincent produces and works on fictional film projects. He is fascinated by the effect of real-time sound composition from pre-recorded samples in Bird's Ear.
