
Plenary Lecture

  • Design, simulation, and virtual prototyping of room acoustics: challenges and opportunities Cheol-Ho Jeong Technical University of Denmark, Denmark Plenary Lecture 1 / 11:10-12:00, Oct. 24 (Mon) CV
    Abstract Room acoustics has long been an active research field, reflecting humanity's enthusiasm for healthy and sustainable indoor acoustic environments. Particularly in acoustically important buildings, such as concert halls, room acoustics has been explored at the design stage with simulations and scale models. Room acoustic simulations have become more accurate and versatile over time, and consequently acousticians now have quite a few simulation/reproduction tools suitable for various purposes. However, there are still challenges and bottlenecks that make the collaborative workflow with other parties, such as architects, lighting designers, energy performance researchers, and hearing scientists, difficult and time-consuming. In recent years we have witnessed more and more advances in virtual prototyping and digitalization technologies, and room acoustics is no exception. In this talk, I will present current technological trends, challenges, and opportunities in the simulation and virtual prototyping of room acoustics, with recent examples from building design and indoor climate evaluation.
  • Vibro-acoustic modelling and integrated design with control elements Li Cheng The Hong Kong Polytechnic University, Hong Kong Plenary Lecture 2 / 11:20-12:10, Oct. 25 (Tue) CV
    Abstract Vibro-acoustics is a branch of science focusing on the interaction between structures and sound, particularly in the context of acoustic noise radiation. A good understanding of the complex interplay between the elastic waves in a vibrating structure and the acoustic waves in the surrounding media is essential before appropriate noise and vibration mitigation measures can be envisaged. Past experience shows that the issue must be considered at the very beginning of the design stage, which in turn relies heavily on the available modelling, analysis, and optimization tools. The inclusion of noise and vibration control elements, which can themselves be strongly coupled with the vibro-acoustic system, makes the task even more demanding. These issues will be discussed in this talk. In particular, different modelling strategies are reviewed to accommodate features of vibro-acoustic systems that challenge conventional modelling techniques, such as structural inhomogeneity, broad frequency range, wavelength incompatibility between coupled media, and high computational cost. Discussions also show how different system components can influence each other in terms of working mechanism and control performance, and why and how they must be integrated under a holistic design philosophy. Examples involving micro-perforated panel absorbers, Helmholtz resonators, cavity shape optimization, and acoustic black holes are used to substantiate the discussion.
  • Bringing the real life into the lab: Hearing research in interactive virtual environments Janina Fels RWTH Aachen University, Germany Plenary Lecture 3 / 11:20-12:10, Oct. 26 (Wed) CV
    Abstract In recent years, considerable progress has been made in understanding auditory cognitive processes and abilities - from perception, attention, and memory to complex performances such as scene analysis and communication. To this end, well-controlled but often unrealistic stimulus presentations, including simple instances of virtual environments, have been used. With recent developments in hardware and software technologies, audiovisual virtual reality (VR) has reached a high level of perceptual plausibility that overcomes some of the limitations of simple laboratory settings. Interactive auditory VR is now available even to non-specialized laboratories, where humans can interact with the auditory scene, allowing real-time adaptation of complex auditory input to the listener's ears. Increased application of such interactive VR technology in laboratory settings is expected to help understand auditory perception in complex audiovisual scenes that are closer to real life, including acoustically adverse situations such as classrooms, open-plan offices, noisy multi-talker communication, and outdoor scenarios. However, a major consideration in bringing real life into the lab is understanding the extent to which classical theories of auditory cognition and related empirical findings remain applicable in representative interactive audiovisual VR. This talk will introduce recent examples of investigations in which established paradigms from psychology have been studied using audiovisual VR methodologies. These advances will be discussed in relation to the future of interdisciplinary approaches combining psychology and audiovisual VR in hearing research.
  • Bilayer sonic and elastodynamic graphene: a new playground for twistronics Yun Jing Penn State University, USA Plenary Lecture 4 / 15:30-16:20, Oct. 26 (Wed) CV
    Abstract Twisted bilayer graphene (TBG), which consists of two graphene sheets placed on top of each other with a small angular misalignment, has become an emerging theoretical and experimental platform for studying Van der Waals heterostructures owing to its intriguing electronic and optical properties. This field of research, concerning how the twist between layers of 2D materials can alter and tailor their electronic behavior, was coined “twistronics”. Meanwhile, artificial materials such as phononic crystals and photonic crystals have become a fertile playground for mimicking quantum-mechanical features of condensed matter systems and have revealed new routes to controlling classical waves. This talk will summarize our recent work on analogues of bilayer graphene in classical wave systems, covering both acoustic and elastic waves. We will start with analogues of AA- and AB-stacked bilayer graphene. We will then move to more complicated cases, including magic-angle bilayer graphene and SE (sublattice exchange)-even and SE-odd bilayer graphene. We will discuss their unique band structures and the associated wave behaviors, such as energy confinement and topological corner modes. Applications of these bilayer phononic graphene systems will also be discussed.
  • Creating ears for AI: speech enhancement techniques for listening to natural human conversations with distant microphones Shoko Araki NTT Communication Science Laboratories, Japan Plenary Lecture 5 / 11:20-12:10, Oct. 27 (Thu) CV
    Abstract AI technology continues to develop rapidly. The use of voice interfaces in real environments has been expanding year by year, leading to a growing demand for speech recognition and communication analysis of everyday multi-speaker conversations in various types of noisy environments, such as offices, living rooms, and public areas. This makes speech signal processing, or more specifically speech enhancement technology, increasingly important.
    When we capture speech signals with distant microphones in everyday sound environments, such as conversations at meetings, interfering sounds such as ambient noise, reverberation, and extraneous speakers' voices are included in the captured signals and degrade the quality of the target speaker's speech. Speech enhancement technologies such as noise reduction, dereverberation, and source separation remove these interfering sounds from the recording and extract the target speaker's voice; these key technologies make voice interfaces usable in everyday environments. Although speech enhancement for a single speaker in noisy reverberant environments has achieved high performance, multi-speaker scenarios, such as conversational situations, remain challenging.
    In this talk, I will first revisit typical core technologies, including multi-channel speech separation and dereverberation, and then show that their joint optimization achieves high performance. I will also introduce some new concepts in speech enhancement, such as selective hearing, which extracts only the speech of a single target speaker from a complex mixture of sounds.
  • Non-linear signal processing for underwater acoustics: theory and oceanographic applications Julien Bonnel (ICA Early Career Award Recipient) Woods Hole Oceanographic Institution, USA Plenary Lecture 6 / 10:30-11:20, Oct. 28 (Fri) CV
    Abstract Lobsters, whales, and submarines have little in common. Except that they produce low-frequency sounds, like many other marine occupants that use sound for communication, foraging, navigation, and other purposes. However, unraveling and using this underwater cacophony is far from simple. This is particularly true for low-frequency (f < 500 Hz) propagation in coastal water (water depth D < 200 m), because the environment acts as a dispersive waveguide: the acoustic field is described by a set of modes that propagate with frequency-dependent speeds. In this context, extracting relevant information from acoustic recordings requires understanding the propagation and using physics-based processing. In this presentation, we will show how to analyze low-frequency data recorded on a single hydrophone. We will notably review modal propagation and time-frequency analysis. We will then show how these can be combined into a non-linear signal processing method dedicated to extracting modal information from a single receiver, and how such information can be used to localize sound sources and/or characterize the oceanic environment. The whole method will be illustrated with several experimental examples, including geoacoustic inversion on the New England Mud Patch and baleen whale localization in the Arctic.
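The dispersive-waveguide picture in the last abstract - modes whose speeds depend on frequency - can be sketched for intuition with the textbook ideal waveguide (pressure-release surface, rigid bottom). This is an illustrative simplification only: the sound speed c, depth D, and the idealized boundary conditions below are assumptions, not the environment or the non-linear processing method of the talk.

```python
import numpy as np

# Toy model: ideal shallow-water waveguide (pressure-release surface,
# rigid bottom), sound speed c (m/s), water depth D (m). Assumed values.
def mode_cutoff(m, c=1500.0, D=100.0):
    """Cutoff frequency (Hz) of mode m = 1, 2, ..."""
    return (2 * m - 1) * c / (4 * D)

def group_speed(f, m, c=1500.0, D=100.0):
    """Group speed (m/s) of mode m at frequency f (Hz); NaN below cutoff."""
    fm = mode_cutoff(m, c, D)
    f = np.asarray(f, dtype=float)
    vg = c * np.sqrt(np.maximum(1.0 - (fm / f) ** 2, 0.0))
    return np.where(f > fm, vg, np.nan)

# At a given frequency, mode 1 (lowest cutoff) travels fastest:
speeds_100hz = [float(group_speed(100.0, m)) for m in (1, 2, 3)]
```

Because lower-order modes have lower cutoff frequencies, they travel faster at a given frequency, so the modes separate in time at long range - the dispersion that single-hydrophone modal methods exploit.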
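As a toy illustration of the multi-channel enhancement idea in the Plenary Lecture 5 abstract above - combining distant-microphone channels so a target speaker's speech adds up while interference does not - here is a minimal delay-and-sum beamformer. The two-channel setup and the known integer sample delays are assumptions for illustration; the methods in the talk (joint multi-channel separation and dereverberation) are far more sophisticated.

```python
import numpy as np

def delay_and_sum(channels, delays):
    """Toy delay-and-sum beamformer: shift each microphone channel by its
    (known, integer) delay in samples so the target aligns, then average.
    Aligned target speech adds coherently; uncorrelated noise averages down."""
    n = min(len(ch) - d for ch, d in zip(channels, delays))
    aligned = [np.asarray(ch, dtype=float)[d:d + n] for ch, d in zip(channels, delays)]
    return np.mean(aligned, axis=0)

# Example: the same source reaches mic 2 three samples late; after
# alignment the average reconstructs the source exactly (noise-free case).
src = np.sin(2 * np.pi * 0.01 * np.arange(200))
mic1 = src
mic2 = np.concatenate([np.zeros(3), src])
out = delay_and_sum([mic1, mic2], [0, 3])
```

In practice the delays are unknown and frequency-dependent, which is one reason the mask-based and jointly optimized approaches mentioned in the abstract are needed.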