Panel Discussion at ICAD 2023 – Bringing Senses to Mind

Our 5th scientific meeting on Audio-Visual Analytics will be a panel discussion hosted at ICAD 2023, the 28th International Conference on Auditory Display.

Time: June 29, 2023 13:00–15:00 (CEST)

Place: Linköping University, Norrköping, Sweden

Registration: Please use this link to register for the conference: Registration to ICAD 2023

Bringing Senses to Mind

“In our brains, there are topological maps that represent the space around us. They are fed information by both the visual and the auditory system. The two maps must coincide precisely so that the location of a sound source can be detected and matched both visually and auditorily. If these maps were not aligned, we might see a dog on the right whose barking seemed to come from the left, which would confuse us greatly. The matching of different sensory maps, however, is not a trivial problem of developmental biology: the maps initially develop independently of each other and then have to be brought into register. This matching occurs with the help of experience; babies learn over time to align the different maps of their sensorium. One can now ask: Who adapts to whom, the acoustic map to the visual one or vice versa? It is by now considered certain that the visual map is fixed first and the acoustic one is then adapted to it. From this it can be deduced that the greater reliability is attributed to the visual system, and the acoustic system has to adapt.” (W. Singer, “Das Bild in uns. Vom Bild zur Wahrnehmung,” in Bildtheorien. Anthropologische Grundlagen des Visualistic Turn, K. Sachs, Ed. Suhrkamp, pp. 104–126. Translation adapted from DeepL.com.)

How do we combine our auditory and visual perception of presented information to arrive at a result that is convincing for our overall sense-making? Based on Singer’s observations, what can we infer about designing methods for analyzing audio-visual data? Is sonification restricted to the level of skill-based behavior, despite the cognitive potential of our auditory system? Does sonification merely indicate potential abnormalities that then need to be confirmed visually? Or can sonification be developed to function at a level equivalent to visualization in terms of our perception and sense-making abilities? As an illustration of how cross-sensory sense-making can be achieved in audio-visual data analysis, we will share our observations on identifying clusters in parallel coordinates at both the auditory and the visual level. These observations are based on reflections on our own sense-making process and are not intended as a scientific evaluation. Rather, they are meant to serve as a starting point for our panel discussion and, potentially, for future research endeavors.

The Audio-Visual Analytics Community (AVAC) aims to bring together experts from the visualization and sonification communities to inspire cross-disciplinary collaboration and unlock new insights and possibilities for audio-visual data analysis, challenging established ways of thinking and exploring uncharted territories. With ICAD 2023 hosted at Linköping University, whose research focus on visualization is internationally recognized, the conference is a perfect opportunity to bring together researchers from the fields of sonification and visualization.

Panelists

To facilitate an insightful and productive discussion, we invited scholars from both communities to join a panel discussion on the topic:

Camilla Forsell (Linköping University) is an Associate Professor in evaluation methodology and visualization, with a research interest in the development and validation of new evaluation methodologies and their application in visualization.
Katharina Groß-Vogt (University of Music and Performing Arts Graz) researches and teaches sonification and sonic interaction design at the Institute of Electronic Music and Acoustics (IEM), University of Music and Performing Arts Graz, Austria. In past research projects she has sonified data from physics, physiotherapy, and climate science; more recently she has created prototypes using virtual reverberation and augmented walking and handwriting sounds.
Ingrid Hotz (Linköping University) develops concepts and environments for the explorative visual analysis of large and complex data from scientific applications such as engineering, physics, chemistry, and medicine, to support data understanding and knowledge gain. To this end, she and her team build on methods from computer science and mathematics, such as computer graphics, computer vision, dynamical systems, computational geometry, and topological data analysis.
Daniel Västfjäll (Linköping University) is a psychoacoustician and holds dual PhDs in Psychology and Acoustics. He studies how perception causes emotion and how the two jointly affect actions and decisions. Daniel has previously worked in the field of multimodal perception.
Paul Vickers (Northumbria University) works in the general field of sonification and auditory display. His current interests lie in interdisciplinary work with a focus on issues of aesthetics and embodied perception in sonification listening.

Moderation

  • Michael Iber (St. Pölten University of Applied Sciences)
  • Kajetan Enge (St. Pölten University of Applied Sciences and University of Music and Performing Arts Graz)
