The impact of sound control room acoustics on the perceived acoustics of a diffuse field recording
Article in WSEAS Transactions on Signal Processing · October 2010
The Impact of Sound Control Room Acoustics on the Perceived Acoustics of a Diffuse Field Recording

C.C.J.M. HAK 1,2, R.H.C. WENMAEKERS 3

1 Department Architecture, Building and Planning, unit BPS, Laboratorium voor Akoestiek, Eindhoven University of Technology, Den Dolech 2, 5612 AZ Eindhoven
2 The Royal Conservatoire, department Art of Sound, Juliana van Stolberglaan 1, 2595 CA The Hague
3 Level Acoustics BV, De Rondom 10, 5612 AP Eindhoven
THE NETHERLANDS
c.c.j.m.hak@tue.nl, r.h.c.wenmaekers@tue.nl
Abstract: - Live recordings of music and speech in concert halls have acoustical properties, such as reverberation,
definition, clarity and spaciousness. Sound engineers play back these recordings through loudspeakers in sound control
rooms for audio CD or film. The acoustical properties of these rooms influence the perceived acoustics of the live
recording. To find the practical impact of ‘room in room’ acoustics in general, combinations of random room acoustic
impulse responses using convolution techniques have been investigated. To find the practical impact of a sound control
room on the acoustical parameter values of a concert hall when played back in that control room, combinations of
concert hall impulse responses and sound control room impulse responses have been investigated. It is found that to
accurately reproduce a steady sound energy decay rate (related to the reverberation time), the playback room should
have at least twice this decay rate, under diffuse sound field conditions. For energy modulations (related to speech
intelligibility) this decay rate should be more than four times higher. Finally, initial energy ratios (related to definition
and clarity) require auditive judgement in the direct sound field. ITU-recommendations used for sound control room
design are sufficient for reverberation and speech intelligibility judgement of concert hall recordings. Clarity
judgement needs a very high decay rate, while judgement of spaciousness can only be done by headphone.
Key-Words: Sound control, Sound studio, Control room, Room acoustics, Concert hall, Recording, Playback,
Convolution, Head and torso simulator, HATS
1 Introduction
From experience it is clear that a recorded reverberation
time can only be heard in a room having a reverberation
time shorter than the one in which the recording was
made. The smallest details and the finest nuances with
regard to colouring, definition and stereo image can only
be judged and criticized when there is little acoustical
influence from the playback acoustics on the recorded
acoustics [1]. However, usually the playback room in
combination with the used sound system affects the
recorded acoustics. This happens in class rooms [2],
congress halls [3], cinemas and even in sound control
rooms.
Using previously measured impulse responses, a first step was made in investigating the impact of reproduction room acoustics on recorded acoustics [4]. This first step is to find the practical impact of ‘room in room’ acoustics for 66 combinations of one room's acoustics with another's. In this case the impact on reverberation,
speech intelligibility and clarity has been investigated,
using convolution techniques. From the results presented
in chapter 5, criteria for the listening room’s
reverberation time have been derived for the ‘proper
playback’ of each of these parameters, starting from a
more or less diffuse sound field and the JND (Just
Noticeable Difference) as allowable error. Measurement
conditions are given in chapter 4.
The next step in this research is to find the practical
impact of direct field room acoustics on the perceived
acoustics of a reproduced sound. In this case the impact
of the control room acoustics on live recorded acoustics
has been investigated, using the same techniques. To this
end the convolution has been applied to binaural impulse
responses of six control rooms, a symphonic concert
hall, a chamber music hall and a professional headphone.
For 28 combinations of concert hall acoustics and sound
control room acoustics the impact on reverberation,
speech intelligibility, clarity and inter-aural cross-
WSEAS TRANSACTIONS on SIGNAL PROCESSING C. C. J. M. Hak, R. H. C. Wenmaekers
ISSN: 1790-5052 175 Issue 4, Volume 6, October 2010
correlation has been investigated using convolution
techniques. From the results, a first step is made to judge
the quality of a sound control room using this new
approach, starting from the JND (Just Noticeable
Difference) as allowable error. Measurement conditions
are given in chapter 4 and measurement results in
chapter 6.
2 Convolution
The convolution y of a signal s with a system impulse response h is written and defined as:

y(t) = s(t) ∗ h(t)    (1)
or
y(t) = (s ∗ h)(t) = ∫_{−∞}^{+∞} s(τ) · h(t − τ) dτ    (2)
In words: the convolution is defined as the integral of the
product of two functions s and h after one is reversed
and shifted. From a room acoustical point of view s(t) is
a sound that is recorded in an anechoic room (dry
recording) and played back in a standard room, h(t) the
impulse response of the standard, more or less
reverberant room and y(t) the convolved sound as it is
heard in that standard room. Therefore, an impulse, for
instance a hand clap, recorded in an anechoic room,
played back in a reverberant room, is heard as an
impulse response of that reverberant room. A recorded
impulse in the reverberant room that is played back in
the anechoic room is again heard as the impulse response
of the reverberant room. In both cases the derived room
acoustic parameter values will be the same.
When both the recording room and the playback
(listening) room are reverberant, smoothing of the sound
occurs. Therefore, in some cases it is impossible to judge
the original recordings in detail. The room acoustics in
the sound recording that we want to demonstrate or
judge will be affected by the acoustics of the listening
room. With a double convolution by which an impulse
response from one room is convolved with a dry
recording and afterwards the result is convolved with the
impulse response of another room, it is possible to hear
how a recording, made in a reverberant room, sounds
when played in another reverberant room. The result is
usually an unwanted smoothed sound signal. By using a
pure impulse (Dirac delta function) instead of a normal
sound signal to be convolved with both room impulse
responses (eq 3 and 4) we can examine what one room
does with the other concerning the values for the room
acoustic parameters (eq 5). So it is possible to derive a
‘room in room’ acoustic parameter value from the
smoothed impulse response (Figure 1).
Fig 1. Impulse response smoothing by convolution.
Mathematically:
h(t) = δ(t) ∗ h(t) = h(t) ∗ δ(t)    (3)
Where:
h(t) = room impulse response
δ(t) = Dirac delta function (ideal impulse)
h12(t) = δ(t) ∗ h1(t) ∗ h2(t) = h1(t) ∗ h2(t)    (4)
Where:
h12(t) = ‘total’ impulse response room 1∗ room 2
h1(t) = impulse response room 1
h2(t) = impulse response room 2
Substituting equation (4) into equation (1) results in:
y12(t) = s(t) ∗ h12(t)    (5)
Where:
y12(t) = convolution of a random sound signal with the
‘total’ impulse response
s(t) = random sound signal
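As an illustrative sketch of equations (1) to (5), the ‘room in room’ response can be computed numerically. The impulse responses below are synthetic stand-ins (exponentially decaying white noise with an assumed sample rate), not the measured responses used in the paper:

```python
import numpy as np

fs = 8000                                   # assumed sample rate, kept small for the sketch
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)

def synthetic_rir(t60, t, rng):
    """Crude stand-in for a measured room impulse response:
    exponentially decaying white noise with a 60 dB decay over t60 s."""
    return rng.standard_normal(len(t)) * np.exp(-6.91 * t / t60)

h1 = synthetic_rir(2.0, t, rng)             # 'recording room', T ~ 2 s
h2 = synthetic_rir(0.5, t, rng)             # 'playback room', T ~ 0.5 s

# Eq. (4): the 'total' room-in-room impulse response h12 = h1 * h2
h12 = np.convolve(h1, h2)

# Eq. (5): a dry signal s heard through both rooms is s * h12
s = rng.standard_normal(fs // 10)           # stand-in for a dry recording
y12 = np.convolve(s, h12)
```

Deriving room acoustic parameters from h12 instead of h1 then quantifies what the playback room does to the recorded acoustics.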
3 Room acoustic parameters
Many objective room acoustic parameters are derived
from the room’s impulse responses according to ISO
3382-1 [5] and IEC 60268-16 [6]. Examples of such
parameters are the reverberation time, which is related to
the energy decay rate, the clarity, the definition and the
centre time, which are related to early to late energy
ratios, the speech intelligibility, which is related to the
energy modulation transfer characteristics of the impulse
response, and the lateral energy fraction, the late lateral
sound energy and the inter-aural cross-correlation, which
are related to the lateral impulse response measurements.
Five of them have been investigated, being the
reverberation time T20 and T30, the clarity C80, the
modulation transfer index MTI and the inter-aural cross-
correlation.
3.1 Reverberation time T
The reverberation time T is calculated from the squared
impulse response by backwards integration [7] through
the following relation:
L(t) = 10 lg [ ∫_t^∞ p²(τ) dτ / ∫_0^∞ p²(τ) dτ ]    [dB]    (6)
where L(t) is the equivalent of the logarithmic decay of the squared pressure. For this investigation the T20, with its evaluation decay range from -5 dB to -25 dB, and the T30, with its evaluation decay range from -5 dB to -35 dB, are both used to determine T.
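A minimal sketch of the backward integration of equation (6) and the straight-line fit used to obtain a reverberation time. The ISO 3382-1 evaluation ranges are assumed (-5 to -25 dB for T20, -5 to -35 dB for T30); the paper itself uses the DIRAC measurement software:

```python
import numpy as np

def schroeder_decay_db(h):
    """Eq. (6): backward-integrated (Schroeder) decay curve in dB,
    normalised to the total energy of the impulse response."""
    energy = np.cumsum((h ** 2)[::-1])[::-1]      # integral from t to the end
    return 10 * np.log10(energy / energy[0])

def reverberation_time(h, fs, start_db=-5.0, end_db=-25.0):
    """T from a straight-line fit over the chosen decay range,
    extrapolated to 60 dB: (-5, -25) gives T20, (-5, -35) gives T30."""
    decay = schroeder_decay_db(h)
    t = np.arange(len(h)) / fs
    idx = np.where((decay <= start_db) & (decay >= end_db))[0]
    slope, _ = np.polyfit(t[idx], decay[idx], 1)  # dB per second, negative
    return -60.0 / slope

# Noise-free exponential decay with a known reverberation time of 1 s
fs = 8000
t = np.arange(0, 2.0, 1 / fs)
h = np.exp(-6.91 * t / 1.0)
print(round(reverberation_time(h, fs), 2))        # → 1.0
```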
3.2 Clarity C80
The parameter C80 [8] is an early to late arriving sound
energy ratio intended to relate to music intelligibility and
is calculated from the impulse response using the
following relation:
C80 = 10 lg [ ∫_0^{80 ms} p²(t) dt / ∫_{80 ms}^∞ p²(t) dt ]    [dB]    (7)
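Equation (7) can be sketched directly on a sampled impulse response; the example below uses a synthetic two-pulse response, and assumes h starts at the arrival of the direct sound:

```python
import numpy as np

def clarity_c80(h, fs):
    """Eq. (7): early-to-late energy ratio with an 80 ms split, in dB.
    Assumes h starts at the arrival of the direct sound."""
    n80 = int(round(0.080 * fs))      # sample index of the 80 ms boundary
    early = np.sum(h[:n80] ** 2)
    late = np.sum(h[n80:] ** 2)
    return 10 * np.log10(early / late)

# Two equal-energy pulses, one before and one after 80 ms: C80 = 0 dB
fs = 8000
h = np.zeros(fs)
h[0] = 1.0           # direct sound (early)
h[fs // 10] = 1.0    # reflection at 100 ms (late)
print(clarity_c80(h, fs))            # → 0.0
```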
3.3 Modulation Transmission Index
The Modulation Transfer Function m(F) [9] describes to
what extent the modulation m is transferred from source
to receiver, as a function of the modulation frequency F,
which ranges from 0.63 to 12.5 Hz. The m(F) is
calculated from the squared impulse response using the
following relation:
m(F) = | ∫_{−∞}^{+∞} p²(t) · e^{−j2πFt} dt | / ∫_{−∞}^{+∞} p²(t) dt    [−]    (8)
The m(F) values for 14 modulation frequencies are
averaged, resulting in the so called Modulation
Transmission Index MTI [10], given by:
MTI = (1/14) · Σ_{n=1}^{14} m(F_n)    [−]    (9)
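Equations (8) and (9) can be sketched for the noise-free case, where m(F) follows from the squared impulse response alone. The exact modulation-frequency grid is an assumption here: nominal one-third-octave values 0.63·2^{n/3} Hz, spanning roughly 0.63 to 12.5 Hz:

```python
import numpy as np

def modulation_transfer_index(h, fs):
    """Eqs. (8)-(9): m(F) from the squared impulse response (noise-free
    case), averaged over 14 modulation frequencies into the MTI."""
    p2 = h.astype(float) ** 2
    t = np.arange(len(h)) / fs
    freqs = 0.63 * 2.0 ** (np.arange(14) / 3.0)   # assumed grid, ~0.63..12.5 Hz
    m = np.array([abs(np.sum(p2 * np.exp(-2j * np.pi * F * t))) for F in freqs])
    return float(np.mean(m / np.sum(p2)))

# An ideal impulse transfers all modulations perfectly: MTI = 1
h = np.zeros(1024)
h[0] = 1.0
print(modulation_transfer_index(h, 8000))         # → 1.0
```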
3.4 Inter-aural cross-correlation coefficient IACC
Although the IACC is still subject to discussion and
research, the parameter IACC [11] is used to measure the
“spatial impression” and is calculated from the impulse
response using the following relation (inter-aural cross-
correlation function):
IACF_{t1,t2}(τ) = ∫_{t1}^{t2} p_l(t) · p_r(t + τ) dt / √[ ∫_{t1}^{t2} p_l²(t) dt · ∫_{t1}^{t2} p_r²(t) dt ]    [−]    (10)
where pl(t) is the impulse response measured at the left
ear and pr(t) is the impulse response measured at the
right ear of the HATS. The inter-aural cross-correlation
coefficient IACC is given by:
IACC_{t1,t2} = | IACF_{t1,t2}(τ) |_max    for −1 ms < τ < +1 ms    (11)
For this investigation only the interval between t1= 0 and
t2 = 80 ms (early reflections) is used.
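Equations (10) and (11) amount to a normalised cross-correlation maximised over a ±1 ms lag window; a direct (non-FFT) sketch for sampled binaural responses:

```python
import numpy as np

def iacc(pl, pr, fs, t1=0.0, t2=0.080, max_lag_ms=1.0):
    """Eqs. (10)-(11): maximum of the normalised inter-aural
    cross-correlation over lags of -1 ms .. +1 ms, evaluated on the
    t1..t2 portion of the responses (0-80 ms: early reflections)."""
    n1, n2 = int(t1 * fs), int(t2 * fs)
    l, r = pl[n1:n2], pr[n1:n2]
    norm = np.sqrt(np.sum(l ** 2) * np.sum(r ** 2))
    max_lag = int(max_lag_ms * 1e-3 * fs)
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.sum(l[:len(l) - lag] * r[lag:])    # p_l(t) * p_r(t + lag)
        else:
            c = np.sum(l[-lag:] * r[:len(r) + lag])
        best = max(best, abs(c) / norm)
    return best

# Identical left/right signals are perfectly correlated: IACC = 1
rng = np.random.default_rng(1)
p = rng.standard_normal(800)
print(round(iacc(p, p, fs=8000), 6))                  # → 1.0
```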
3.5 Just noticeable differences
The just noticeable differences (JND) for all used
objective room acoustic parameters are shown in table 1.
Table 1. JND (Just Noticeable Differences).
T20, T30: 10 %   |   C80: 1 dB   |   MTI: 0.1   |   IACC: 0.075
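The JND values of Table 1 are used throughout the paper as the allowable playback error; a small helper sketches that comparison (the function name is illustrative, not from the paper). Note that the reverberation-time JND is relative while the others are absolute:

```python
# JND thresholds from Table 1. T20/T30 are judged as a relative
# difference, the other parameters as absolute differences.
JND = {"T": 0.10, "C80": 1.0, "MTI": 0.1, "IACC": 0.075}

def exceeds_jnd(param, playback_value, recording_value):
    """True if the playback error for this parameter is audible,
    i.e. larger than one JND."""
    if param == "T":
        return abs(playback_value - recording_value) / recording_value > JND["T"]
    return abs(playback_value - recording_value) > JND[param]

print(exceeds_jnd("T", 2.3, 2.0))     # 15 % reverberation error → True
print(exceeds_jnd("C80", 3.5, 3.0))   # 0.5 dB clarity error → False
```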
4 Impulse responses and measurements
4.1 Diffuse field in diffuse field acoustics
4.1.1 Measurement conditions
The subset selection from the original set of impulse
responses is based on the measurement quality, the
measurement equipment, the rooms in which the
measurements are performed and the positions of the
sound source and the measurement microphone. Finally
11 impulse responses have been selected from which the
500, 1000 and 2000 Hz octave bands have been used.
Table 2. Properties of used room impulse responses.
All impulse responses are obtained from diffuse sound
field measurements using deconvolution techniques [12]
with MLS and e-sweeps [13], resulting in INR values >
50 dB [14]. Some properties of the selected impulse
responses are shown in Table 2.
4.1.2 Measurement equipment
The measurement equipment consisted of the following
components:
• Microphone: omnidirectional, sound level
meter (RION -NL 21);
• power amplifier: (Acoustics Engineering -
Amphion);
• sound source: omnidirectional (B&K -
Type 4292);
• sound device: USB audio device
(Acoustics Engineering - Triton);
• measurement software: DIRAC
(B&K - Type 7841);
• signal: synchronous or asynchronous
[15][16].
4.2 Diffuse field in direct field acoustics
The impact of the control room acoustics on live
recorded acoustics has been investigated. To this end the
convolution has been applied to binaural impulse
responses of six control rooms, a symphonic concert
hall, a chamber music hall and a professional headphone.
4.2.1 Measurement conditions
All measurements, both single channel and dual channel,
were performed using a HATS or an artificial head [17].
The decay range (INR) [14] for all measured impulse
responses is larger than 52 dB for all octave bands used.
4.2.1.1 Large and small concert hall
Impulse response measurements were performed in the
large and small concert hall of “The Frits Philips Muziekcentrum Eindhoven” [18], with a volume of approx. 14400 m³, an unoccupied stage floor and T_empty ≈ 2 s for the large (symphonic) concert hall, and a volume of approx. 4000 m³, an unoccupied stage floor and T_empty ≈ 1.5 s for the small (chamber music) hall. Figures 2 and
3 give an impression of the halls and the schematic
floorplans with the source position S as indicated, placed
on the major axis of the hall, and the microphone
positions R1 and R2, where R1 is placed at approx. 5 m
from the source S, equal to the critical distance, and R2
is placed at approx. 18 m from S (diffuse field). More
specifications of both concert halls are presented in table
3, using the total average over both microphones (ears)
of the HATS, the 500 and 1000 Hz octave bands and the
receiver positions R1 and R2. The INR for all measured
symphonic and chamber music hall impulse responses
had an average of 60 dB for all used octave bands, with a
minimum exceeding 54 dB.
Fig 2. Symphonic (left) and chamber music hall (right).
Table 3. Concert halls specifications.
Fig 3. Sound source S and microphone R positions.
Fig 4. Concert hall measurement using a Head and
Torso Simulator (HATS).
4.2.1.2 Control rooms
The control rooms under test are all Dutch control rooms
and qualified as very good by the sound engineers as
well as the designers. For the sake of privacy the
measured control rooms are marked from CR1 to CR6.
The control rooms were investigated extensively with
microphone positions placed on a grid consisting of 15
measurement positions [19]. Based on these
measurements several important room acoustical
parameters were computed. The results of the
reverberation time measurements revealed that in the
lower frequencies all control rooms under test met the
general criteria of ITU-R BS.1116-1 [20]. In the higher
frequencies only control room CR2 met this criterion.
Specifications of all control rooms under test are
presented in table 4. For all control rooms the monitor
configuration is a two channel stereo arrangement,
according to the ITU-R BS.775-1 [21]. The loudspeakers
are placed with respect to the sound engineer in an arc of
60° as shown in figure 5. The control room impulse
response measurements were performed at the sound
engineer position, known as the ‘sweet spot’, the focal
point between the main (wall mounted) loudspeakers.
Control room CR5 was only suitable for near field
monitoring. Manufacturer, type and frequency range of
all used loudspeakers (monitors) are shown in table 5.
The INR for all measured control room impulse
responses had an average of 57 dB for all used octave
bands, with a minimum exceeding 52 dB.
Fig 5. Monitor placement according to the ITU [21]
with respect to the listener in an arc of 60°.
Fig 6. Sweet spot measurement using a Head and Torso
Simulator (HATS) or an artificial head.
Table 4. Control room specifications.
Table 5. Monitor specifications.
The ITU recommendation gives the tolerance limits for
the reverberation times in a critical listening
environment. In figure 7 the recommended tolerance
limits are presented.
Fig 7. Tolerance limits for the reverberation time,
relative to Tm according to ITU 1116.1 [20] .
The average reverberation time Tm [s] is given by:
Tm = 0.25 · (V / V0)^{1/3}    [s]    (12)

where V is the volume of the room in m³ and V0 is a reference volume of 100 m³. Figure 8 shows the
reverberation time T30 as a function of the frequency
relative to the ITU tolerance limits. Because of the
different volumes of the control rooms, their absolute
tolerance limit values also differ slightly.
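Equation (12) is a one-line calculation; a sketch (the function name is illustrative):

```python
def itu_mean_reverberation_time(volume_m3):
    """Eq. (12): nominal average reverberation time Tm for a critical
    listening room, relative to the reference volume V0 = 100 m^3."""
    return 0.25 * (volume_m3 / 100.0) ** (1.0 / 3.0)

print(itu_mean_reverberation_time(100.0))             # → 0.25
print(round(itu_mean_reverberation_time(800.0), 2))   # 8x the volume → 0.5
```

The cube-root dependence means Tm grows slowly with volume, which is why the absolute tolerance limits of the control rooms differ only slightly.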
Fig 8. ITU Tolerance limits for the reverberation time
versus the measured control room reverberation time.
4.2.1.3 Headphone
To complete the set of impulse responses a pure free
field measurement was performed using the HATS and a
high quality headphone [22] as shown in figure 9. The
minimum INR for the measured impulse response
reached a value of 88 dB for both the 500 Hz and the 1
kHz octave band.
Fig 9. Headphone transfer measurement using a HATS.
4.2.2 Measurement equipment
The measurement equipment consisted of the following
components:
• Head And Torso Simulator: used
in concert halls and control rooms
(B&K - Type 4128C) [23];
• artificial head: used in control rooms
(Sennheiser - MZK 2002);
• microphones: used with artificial head
(Sennheiser - MKE 2002);
• power amplifier: used in concert halls
(Acoustics Engineering - Amphion);
• sound source: omnidirectional, used in
concert halls. (B&K - Type 4292);
• sound device: USB audio device
(Acoustics Engineering - Triton);
• headphone: used as a reference source
(Philips - SBC HP890);
• measurement software: DIRAC
(B&K - Type 7841).
5 Results and discussion
5.1 Diffuse field in diffuse field
5.1.1 Procedure
From an extensive set of measured impulse responses, a
selection was made. From each pair (h1, h2) out of this
selection the first one is considered as a recorded
impulse response and the other one as a listening room
impulse response. Using the convolution function in the
acoustic measurement program DIRAC, the components
of each pair are convolved to obtain the
impulse response h12, heard when playing back the
recorded impulse response in the listening room. h12,
thus representing the h1 affected by h2, is then compared
with h1, with respect to the reverberation time T [7], the
Modulation Transfer Index MTI [9][10] and the Clarity
C80 [8].
5.1.2 Measurement results (diffuse field conditions)
In figure 10 through 12 the results of the convolutions
are depicted as scatter diagrams. Each graph shows the
difference between 2 values of a parameter, one
calculated from h12 and one from h1. Using for example
the reverberation time as the base parameter, on the x-
axis the ratio T20(h1)/T20(h2) of the reverberation time
calculated from h1 (= Trecorded room), and the reverberation
time calculated from h2 (= Tlistening room) is given. For
symmetry reasons (h12 = h1*h2 = h2*h1), only impulse
response pairs with T20(h1) > T20(h2) have been depicted.
The differences were calculated for three acoustical
parameters. One is a decay related parameter T20. The
second one is a modulation related parameter MTI, a
value between 0 and 1, used to calculate the speech
intelligibility. The third one is an energy distribution
related parameter C80.
The scatter plot of the reverberation time T20 in figure 10
shows that for the selected impulse responses (table 2)
the variation of the percentage difference between T(h12)
and T(h1) lies within a band of +/- 10% around the trend
line. Starting from a JND (Just Noticeable Difference) of
10%, a diffuse field recording, and the reverberation
time as the only judgement criterion, we can conclude
that for an accurate demonstration a listening room
should have a reverberation time less than half that of
the recording room.
Fig 10. Difference between Tplayback and Trecording.
The scatter plot of the modulation transmission index
MTI in figure 11 shows that for the used impulse
responses (table 2) the variation of the differences
between MTI(h12) and MTI(h1) lies within a band of +/-
0.05 around the trend line. Speech intelligibility
experiments and demonstrations require a playback or
listening room with a reverberation time at least 4 times
shorter than that of the recorded impulse response, using
a JND for the MTI of 0.1.
Fig 11. Difference between MTIplayback and MTIrecording.
The scatter plot of the Clarity C80 in figure 12 shows that
for the used impulse responses (table 2) the variation of
the difference between C80(h12) and C80(h1) lies within a
band of +/- 3 dB around the trend line. When it is
important to demonstrate or judge the details of sound
definition or brightness, using a JND of 1 dB for the
Clarity, you have to use a playback or listening room
with a reverberation time more than a factor of 10 lower
than the reverberation time of the ‘recorded’ impulse
response.
Fig 12. Difference between C80playback and C80recording.
5.2 Diffuse field in direct field
5.2.1 Procedure
Starting from a set of 6 binaural control room impulse
responses, 1 binaural headphone impulse response and 4
binaural concert hall impulse responses, 28 pairs of
impulse responses are defined. From each pair (h1, h2)
the first is considered as a concert hall impulse response
and the other as a control room impulse response. Using
DIRAC, each pair (h1, h2) is convolved to obtain the
impulse response h12, heard when playing back the
recorded concert hall impulse response in the sound
control room. h12, thus representing h1 affected by h2, is
then compared to h1, with respect to the reverberation
time T30, the clarity C80, the modulation transfer index
MTI and the inter-aural cross-correlation coefficient
IACC [2,3].
5.2.2 Measurement results (direct field conditions)
In Figure 13 through 20 the results of the convolutions
are depicted as an average over the 500 and 1000 Hz
octave band. Each graph shows the difference between 2
values of a parameter, one calculated from h12, the
convolution of the concert hall with the control room and
one from h1, the impulse response of the concert hall. On
the x-axis the control rooms CR1 to CR6 are given in
order of the decay rate. The differences are calculated for
four acoustical parameters: T30, C80, MTI and IACC.
Fig 13. Percentage difference between T(h12) and T(h1) (T30 error) measured at (hall) position R1.
Fig 14. Percentage difference between T(h12) and T(h1) (T30 error) measured at (hall) position R2.
Figure 13 shows the T30 error in % at receiver position
R1 (equal to the critical distance) for the symphonic
concert hall and chamber music hall when played back
in control rooms CR1 to CR6 and using a headphone.
Figure 14 shows the results at receiver position R2
(diffuse field). It is shown that all situations result in T30
errors much smaller than the JND of 10%, which
supports the earlier conclusion that for an accurate
reproduction of T30 a listening room should have a
reverberation time below half that of the recorded hall.
However, there is no clear relation between the T30 of the
sound control room and the T30 error. No explanation
was found for the fairly large and negative T30 error of -6
% in control room CR2 when listening to the concert
hall recording at position R2.
Fig 15. Difference between C80(h12) and C80(h1) (C80 error) measured at (hall) position R1.
Fig 16. Difference between C80(h12) and C80(h1) (C80 error) measured at (hall) position R2.
Figure 15 shows the C80 error in dB at receiver position
R1 (equal to the critical distance) for the symphonic
concert hall and chamber music hall when played back
in control rooms CR1 to CR6 and using a headphone.
Figure 16 shows the results at receiver position R2
(diffuse field). It is shown that some situations result in
C80 errors larger than the JND of 1 dB, dependent on the
reverberation time of the control room and the receiver
position being at the critical distance or in the diffuse
field in the symphonic concert hall and chamber music
hall. In general, the C80 error decreases when the
reverberation time of the control room decreases. Also,
for the control rooms CR1 to CR4 the C80 error increases
at the listener's position R2 in the diffuse field.
Only in control rooms CR5 and CR6, with a T30 ≤ 0.15 s, or when using the headphone, is the C80 reproduced within the JND. This supports the earlier conclusion that for an accurate reproduction of the clarity C80 you have to use a playback or listening room with a reverberation time less than 1/10th of the reverberation time of the recorded hall.
Fig 17. Difference between MTI(h12) and MTI(h1) (MTI error) measured at (hall) position R1.
Fig 18. Difference between MTI(h12) and MTI(h1) (MTI error) measured at (hall) position R2.
Figure 17 shows the MTI error at receiver position R1
(equal to the critical distance) for the symphonic concert
hall and chamber music hall when played back in control
rooms CR1 to CR6 and using a headphone. Figure 18
shows the results at receiver position R2 (diffuse field).
It is shown that all situations result in MTI errors much
smaller than the JND of 0.1, which supports the earlier
conclusion that for an accurate reproduction of speech
intelligibility MTI a playback or listening room is
required with a reverberation time of at least 4 times
shorter than that of the recorded hall. Again a clear
relation is found between the reverberation time of the
control room and the MTI error. No clear difference is
found between receiver position R1 and R2.
Fig 19. Difference between IACC(h12) and IACC(h1) (IACC error) measured at (hall) position R1.
Fig 20. Difference between IACC(h12) and IACC(h1) (IACC error) measured at (hall) position R2.
Figure 19 shows the IACC error at receiver position R1
(equal to the critical distance) for the symphonic concert
hall and chamber music hall when played back in control
rooms CR1 to CR6 and using a headphone. Figure 20
shows the results at receiver position R2 (diffuse field).
It is shown that all situations result in IACC errors larger
than the JND of 0.075 except when using the headphone.
In most cases the IACC errors for the chamber music
hall recording are smaller than the errors for the
symphonic concert hall recording. This may imply that
the IACC error when listening in a typical control room
with a good reputation is dependent on the reverberation
time of the room where the recording has been made.
However, an accurate judgement of spaciousness of a
binaural concert hall recording apparently requires the
use of a headphone.
6 Conclusions
6.1 Diffuse field investigation
Starting with 11 more or less randomly selected high
quality room impulse responses and the Just Noticeable
Difference of three calculated room acoustic parameters
(T20, C80 and MTI) it can be concluded:
• To accurately reproduce a steady sound energy decay
rate (related to the reverberation time), the playback
room should have at least twice this decay rate, under
diffuse sound field conditions.
• For energy modulations (related to speech
intelligibility) the decay rate of the playback room,
under diffuse sound field conditions, should be at
least 4 times higher than that of the recording room.
• Initial energy ratios (related to definition and clarity)
require auditive judgement in a predominantly direct
sound field. It seems that the decay rate of the
playback room, under diffuse sound field conditions,
should be at least 10 times higher than that of the
recording room.
6.2 Sound control room investigation
Starting with 6 more or less standardised sound control rooms qualified as good, 2 concert halls, a
headphone and the Just Noticeable Difference of four
calculated room acoustic (ISO/IEC) parameters (T30, C80,
MTI and IACC), the following can be concluded:
• The ITU-recommendations for sound control room
design are adequate for evaluation of reverberation in
concert hall recordings.
• When it is important to assess the details of sound
definition of a concert hall recording, you need a
control room with a very high decay rate. Only the
control rooms with a reverberation time below 0.15 s
can be used. This confirms the conclusion from the
diffuse field investigation that you need a room with
a reverberation time less than 1/10th
of the
reverberation time of the recorded concert hall. This
is lower than the recommended value of the ITU.
• The ITU-recommendations for sound control room
design are adequate for evaluation of speech
intelligibility in concert hall recordings.
• An accurate judgement of spaciousness of a binaural
concert hall recording apparently requires the use of
a headphone. This requires further investigation.
References:
[1] D. Iuşcǎ, “Neural Correlates of Music Perception and
Cognition”, Proceedings of 11th
WSEAS
International Conference on Acoustics & Music
2010, Iasi, Romania (2010).
[2] M.M. Lupu, “Conceptions of Learning and Teaching
in Music Education as Voiced by Pre-service Student
Teachers”, Proceedings of 11th
WSEAS International
Conference on Acoustics & Music 2010, Iasi,
Romania (2010).
[3] F. Kosona, “Diathlassis for Flute Solo: a Compo-
sition based on an Application of the Mathematical
Model of Cusp Catastrophe”, Proceedings of 11th
WSEAS International Conference on Acoustics &
Music 2010, Iasi, Romania (2010).
[4] C.C.J.M. Hak, R.H.C. Wenmaekers, “The effect of
Room Acoustics on the Perceived Acoustics of
Reproduced Sound”, Internoise 2008, Shanghai,
China (2008).
[5] Acoustics-Measurement of room acoustic parameters
- Part 1: Performance rooms, International Standard
ISO/DIS 3382-1 Draft: 2006 (International
Organization for Standardization, Geneva,
Switzerland, 2006).
[6] Sound System Equipment – Part 16: Objective rating
of speech intelligibility by speech transmission
index, International Standard IEC 60268-16: 2003 (International Electrotechnical Commission, Geneva, Switzerland, 2003).
[7] M.R. Schroeder, “Integrated-impulse method for
measuring sound decay without using impulses”,
Journal of the Acoustical Society of America, 66,
497-500 (1979)
[8] W.Reichardt , O. Abdel Alim, W. Schmidt,
“Definition und Meβgrundlage eines objektiven
Maβes zur Ermittlung der Grenze zwischen
brauchbarer und unbrauchbarer Durchsichtigkeit bei
Musikdarbietung,” Acustica 32, p.126-137 (1975).
[9] M.R. Schroeder, “Modulation transfer functions:
definition and measurements”, Acustica, 49, 179-
182 (1981).
[10] T. Houtgast, H.J.M. Steeneken, “The modulation
transfer function in room acoustics as a predictor of
speech intelligibility”, Acustica, 28, 66-73 (1973).
[11] W.V. Keet, “The influence of early reflections on
spatial impressions”, 6th
ICA Tokyo Japan (1968).
[12] Acoustics -Application of new measurement
methods in building and room acoustics,
International Standard ISO 18233: 2006
(International Organization for Standardization,
Geneva, Switzerland, 2006).
[13] S. Müller, P. Massarani, “Transfer-function
measurements with sweeps”, Journal of the Audio
Engineering Society, 49, 443-471, (2001).
[14] C.C.J.M. Hak, J.P.M. Hak, R.H.C. Wenmaekers,
“INR as an Estimator for the Decay Range of Room
Acoustic Impulse Responses”, AES Convention
Amsterdam (2008).
[15] C.C.J.M. Hak, J.P.M. Hak, “Effect of Stimulus
Speed Error on Measured Room Acoustic
Parameters”, International Congress on Acoustics
Madrid (2007).
[16] C.C.J.M. Hak, J.S. Vertegaal, “MP3 Stimuli in
Room Acoustics”, International Congress on
Acoustics Madrid (2007).
[17] C.C.J.M. Hak, M.A.J. v. Haaren, R.H.C.
Wenmaekers, L.C.J. van Luxemburg, “Measuring
room acoustic parameters using a head and torso
simulator instead of an omnidirectional micro-
phone”, Internoise 2009, Ottawa, Canada (2009).
[18] P.E. Braat-Eggen, L.C.J. van Luxemburg, L.G.
Booy, P.A. de Lange, “A New Concert Hall for
the City of Eindhoven – Design and Model Tests”,
Applied Acoustics, 40, 295-309 (1993).
[19] B.J.P.M. v. Munster, “Beyond Control – Acoustics
of Sound Recording Control Rooms – Past, Present
and Future”, FAGO Report nr: 03-23-G, Unit:
Building Physics and Systems, Department of
Architecture, Building and Planning, University of
Technology Eindhoven (2003).
[20] ITU-recommendation, Methods for the Subjective
Assessment of Small Impairments in Audio Systems
including Multichannel Sound Systems, ITU-R
BS.1116-1 (1997).
[21] ITU-recommendation, Multichannel Stereophonic
Sound System with and without Accompanying
Picture, ITU-R BS.775-1 (1994).
[22] M. Vorlander, “Impulse Measurements of Head-
phones on Ear Simulators and on Head and Torso
Simulators”, Acustica, 76, 66-72 (1992)
[23] L. Tronchin, V. Tarabusi, “New Acoustical
Techniques for Measuring Spatial Properties in
Concert Halls”, Proceedings of 5th
WSEAS
International Conference on Instrumentation,
Measurement, Circuits and Systems 2006,
Hangzhou, China (2006).