The document discusses spatial audio and sound localization techniques. It describes how head-related transfer functions (HRTFs) capture the spatial cues the human auditory system uses to localize sound sources. HRTFs are measured impulse responses between sound sources and the ears. Convolving a sound source with HRTFs for that source's location produces binaural audio. Ambisonics uses spherical harmonics to represent soundfields with a small number of channels and can render spatial audio in virtual reality.
Build Your Own VR Display Course - SIGGRAPH 2017: Part 4
1. Build Your Own VR Display
Spatial Sound
Nitish Padmanaban
Stanford University
stanford.edu/class/ee267/
2. Overview
• What is sound?
• The human auditory system
• Stereophonic sound
• Spatial audio of point sound sources
• Recorded spatial audio
Zhong and Xie, “Head-Related Transfer Functions and Virtual Auditory Display”
3. What is Sound?
• “Sound” is a pressure wave propagating in a medium
• Speed of sound is c = √(K/ρ), where c is the velocity, ρ is the density of the medium, and K is the elastic bulk modulus
• In air, speed of sound is 340 m/s
• In water, speed of sound is 1,483 m/s
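As a quick check of the formula, a minimal Python sketch; the bulk moduli and densities below are approximate textbook values, not taken from the slides:

```python
import math

def speed_of_sound(K, rho):
    """c = sqrt(K / rho), with K the elastic bulk modulus (Pa)
    and rho the density of the medium (kg/m^3)."""
    return math.sqrt(K / rho)

c_air = speed_of_sound(1.42e5, 1.225)    # air: ~340 m/s
c_water = speed_of_sound(2.2e9, 1000.0)  # water: ~1,483 m/s
```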
4. Producing Sound
• Sound is longitudinal vibration
of air particles
• Speakers create wavefronts by physically compressing the air, much like one could compress a slinky
6. The Human Auditory System
(Figure: anatomy of the ear, showing the pinna and cochlea; image: Wikipedia)
• Hair receptor cells pick up vibrations
7. The Human Auditory System
• Human hearing range: ~20–20,000 Hz
• Variation between individuals
• Degrades with age
(Figure: hearing threshold in quiet; D. W. Robinson and R. S. Dadson, 1957)
8. Stereophonic Sound
• Mainly captures differences between the ears:
• Inter-aural time difference
• Amplitude differences from path length and scatter
(Figure: a source saying “Hello, SIGGRAPH!” arrives at the two ears at times t and t + Δt; image: Wikipedia)
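To put numbers on the inter-aural time difference: for a source far off to one side, Δt is roughly the ear spacing divided by the speed of sound. A rough Python sketch (the 0.21 m ear spacing is an illustrative assumption):

```python
ear_spacing = 0.21         # m, assumed distance between the ears
c = 340.0                  # m/s, speed of sound in air
max_itd = ear_spacing / c  # maximum inter-aural time difference, ~0.6 ms
```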
9. Stereo Panning
• Only uses the amplitude differences
• Relatively common in stereo audio tracks
• Works with any source of audio
(Figure: left/right gains between 0 and 1 for a source panned along the line of sound)
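Amplitude-only panning can be sketched in a few lines. The constant-power cosine/sine law below is one common choice, an assumption rather than something specified on the slide:

```python
import math

def pan_gains(p):
    """Constant-power stereo pan: p in [0, 1], 0 = hard left, 1 = hard right.
    Gains satisfy gL^2 + gR^2 = 1, so perceived loudness stays constant."""
    angle = p * math.pi / 2
    return math.cos(angle), math.sin(angle)

gL, gR = pan_gains(0.5)  # centered: both gains ~0.707
```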
10. Stereophonic Sound Recording
• Use two microphones
• The A-B technique captures differences in time-of-arrival
• Other configurations, such as the X-Y technique, capture differences in amplitude instead
(Images: Rode, Olympus, Wikipedia)
11. Stereophonic Sound Synthesis
• Ideal case: scaled & shifted Dirac peaks
• Shortcoming: many positions are identical
(Figure: an input signal and the resulting left/right amplitudes over time)
12. Stereophonic Sound Synthesis
• In practice: the path lengths and scattering are more complicated, including scattering in the ear, shoulders, etc.
(Figure: more complex left/right amplitude responses over time)
13. Head-Related Impulse Response (HRIR)
• Captures temporal responses at all possible sound directions, parameterized by azimuth θ and elevation φ
• Could also have a distance parameter
• Can be measured with two microphones in the ears of a mannequin & speakers all around
Zhong and Xie, “Head-Related Transfer Functions and Virtual Auditory Display”
14. Head-Related Impulse Response (HRIR)
• CIPIC HRTF database: http://interface.cipic.ucdavis.edu/sound/hrtf.html
• Elevation: –45° to 230.625°, azimuth: –80° to 80°
• Need to interpolate between discretely sampled directions
V. R. Algazi, R. O. Duda, D. M. Thompson and C. Avendano, "The CIPIC HRTF Database,” 2001
15. Head-Related Impulse Response (HRIR)
• Storing the HRIR: hrirL(t; θ, φ) and hrirR(t; θ, φ)
• Need one time series for each direction
• Total of 2 × Nθ × Nφ × Nt samples, where Nθ, Nφ, and Nt are the number of samples for azimuth, elevation, and time, respectively
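For concreteness, the storage arithmetic with the sampling grid of the CIPIC database from the previous slide (25 azimuths, 50 elevations, 200-sample impulse responses):

```python
# 2 ears x N_theta x N_phi x N_t samples per subject
N_theta, N_phi, N_t = 25, 50, 200          # CIPIC-style sampling grid
total_samples = 2 * N_theta * N_phi * N_t  # 500,000 samples
size_mb = total_samples * 4 / 1e6          # ~2 MB as 32-bit floats
```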
16. Head-Related Impulse Response (HRIR)
Applying the HRIR:
• Given a mono sound source s(t) and its 3D position
17. Head-Related Impulse Response (HRIR)
Applying the HRIR:
• Given a mono sound source s(t) and its 3D position
1. Compute (θ, φ) relative to the center of the listener’s head
18. Head-Related Impulse Response (HRIR)
Applying the HRIR:
• Given a mono sound source s(t) and its 3D position
1. Compute (θ, φ) relative to the center of the listener’s head
2. Look up the interpolated HRIRs hrirL(t; θ, φ) and hrirR(t; θ, φ) for the left and right ear at these angles
19. Head-Related Impulse Response (HRIR)
Applying the HRIR:
• Given a mono sound source s(t) and its 3D position
1. Compute (θ, φ) relative to the center of the listener’s head
2. Look up the interpolated HRIRs hrirL(t; θ, φ) and hrirR(t; θ, φ) for the left and right ear at these angles
3. Convolve the signal with the HRIRs to get the sound at each ear:
sL(t) = hrirL(t; θ, φ) ∗ s(t)
sR(t) = hrirR(t; θ, φ) ∗ s(t)
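The three steps above boil down to two convolutions per source. A minimal NumPy sketch, using made-up HRIR arrays (a real renderer would look these up, with interpolation, from a measured database such as CIPIC):

```python
import numpy as np

def render_binaural(s, hrir_left, hrir_right):
    """Convolve a mono signal with the left/right HRIRs for its direction."""
    sL = np.convolve(s, hrir_left)
    sR = np.convolve(s, hrir_right)
    return sL, sR

# Toy example: an impulse as the "source" simply reproduces each HRIR,
# so the right ear hears a delayed, differently shaped copy.
s = np.zeros(8)
s[0] = 1.0
hL = np.array([0.0, 1.0, 0.5])       # made-up left-ear HRIR
hR = np.array([0.0, 0.0, 1.0, 0.5])  # made-up right-ear HRIR (extra delay)
sL, sR = render_binaural(s, hL, hR)
```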
20. Head-Related Transfer Function (HRTF)
• HRTF is the Fourier transform of the HRIR! (you’ll find the term HRTF more often than HRIR)
sL(t) = hrirL(t; θ, φ) ∗ s(t) = F⁻¹{hrtfL(ω; θ, φ) · F{s(t)}}
sR(t) = hrirR(t; θ, φ) ∗ s(t) = F⁻¹{hrtfR(ω; θ, φ) · F{s(t)}}
(Figure: the HRIRs in the time domain and the corresponding HRTF magnitudes in the frequency domain)
21. Head-Related Transfer Function (HRTF)
• HRTF is the Fourier transform of the HRIR! (you’ll find the term HRTF more often than HRIR)
• HRTF is complex-conjugate symmetric (since the HRIR must be real-valued)
sL(t) = hrirL(t; θ, φ) ∗ s(t) = F⁻¹{hrtfL(ω; θ, φ) · F{s(t)}}
sR(t) = hrirR(t; θ, φ) ∗ s(t) = F⁻¹{hrtfR(ω; θ, φ) · F{s(t)}}
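This equivalence is easy to verify numerically. A short NumPy sketch with a made-up HRIR; the zero-padding to the full linear-convolution length is what makes the FFT product match time-domain convolution exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.standard_normal(64)     # mono input signal
hrir = rng.standard_normal(32)  # made-up HRIR, illustrative only

n = len(s) + len(hrir) - 1      # full linear-convolution length
hrtf = np.fft.fft(hrir, n)      # HRTF = Fourier transform of the HRIR
out_fd = np.fft.ifft(np.fft.fft(s, n) * hrtf).real
out_td = np.convolve(s, hrir)   # time-domain reference
# out_fd matches out_td to numerical precision, and hrtf is
# complex-conjugate symmetric because the HRIR is real-valued.
```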
22. Spatial Sound of N Point Sound Sources
• Superposition principle holds, so just sum the contributions of each source si(t) at direction (θi, φi):
sL(t) = Σ_{i=1..N} F⁻¹{hrtfL(ω; θi, φi) · F{si(t)}}
sR(t) = Σ_{i=1..N} F⁻¹{hrtfR(ω; θi, φi) · F{si(t)}}
(Figure: two sources s1(t), s2(t) at directions (θ1, φ1) and (θ2, φ2) around the listener)
23. Spatial Audio for VR
• VR/AR requires us to re-think audio, especially spatial audio!
• User’s head rotates freely → traditional surround sound systems like 5.1 or even 9.2 surround aren’t sufficient
24. Spatial Audio for VR
Two primary approaches:
1. Real-time sound engine
• Render 3D sound sources via HRTF in real time, just
as discussed in the previous slides
• Used for games and synthetic virtual environments
• Many libraries are available: FMOD, OpenAL, etc.
25. Spatial Audio for VR
Two primary approaches:
2. Spatial sound recorded from real environments
• Most widely used format now: Ambisonics
• Simple microphones exist
• Relatively easy mathematical model
• Only need 4 channels for starters
• Used in YouTube VR and many other platforms
26. Ambisonics
• Idea: represent sound incident at a point (i.e. the listener) with some directional information (θ, φ)
• Using all angles is impractical – we would need too many sound channels (one for each direction)
• Some lower-order (in direction) components may be sufficient → directional basis representation to the rescue!
27. Ambisonics – Spherical Harmonics
• Use spherical harmonics!
• Orthogonal basis functions on the surface of a sphere, i.e.
full-sphere surround sound
• Think Fourier transform equivalent on a sphere
28. Ambisonics – Spherical Harmonics
(Figure: spherical harmonics of 0th, 1st, 2nd, and 3rd order; image: Wikipedia)
• Remember, these represent functions on a sphere’s surface
29. Ambisonics – Spherical Harmonics
• 1st order approximation → 4 channels: W, X, Y, Z
(Figure: the W, X, Y, Z spherical-harmonic components; image: Wikipedia)
30. Ambisonics – Recording
• Can record 4-channel Ambisonics via special microphone
• Same format supported by YouTube VR and other
platforms
http://www.oktava-shop.com/
31. Ambisonics – Rendered Sources
• Can easily convert a point sound source, S, to the 4-channel Ambisonics representation
• Given azimuth θ and elevation φ, compute W, X, Y, Z as
W = S · 1/√2 (omnidirectional component, angle-independent)
X = S · cos θ cos φ (“stereo in x”)
Y = S · sin θ cos φ (“stereo in y”)
Z = S · sin φ (“stereo in z”)
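The encoding formulas translate directly to code. A minimal Python sketch (angles in radians):

```python
import math

def ambisonic_encode(S, theta, phi):
    """First-order (B-format) encode of a mono sample S at
    azimuth theta and elevation phi, per the formulas above."""
    W = S / math.sqrt(2)                     # omnidirectional component
    X = S * math.cos(theta) * math.cos(phi)  # "stereo in x"
    Y = S * math.sin(theta) * math.cos(phi)  # "stereo in y"
    Z = S * math.sin(phi)                    # "stereo in z"
    return W, X, Y, Z

# A source straight ahead (theta = phi = 0) puts energy only in W and X.
W, X, Y, Z = ambisonic_encode(1.0, 0.0, 0.0)
```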
32. Ambisonics – Playing it Back
• Easiest way to render Ambisonics: convert the W, X, Y, Z channels into 4 virtual speaker positions
• For a regularly-spaced square setup (left-front, left-back, right-front, right-back), this results in
LF = (2W + X + Y) / √8
LB = (2W − X + Y) / √8
RF = (2W + X − Y) / √8
RB = (2W − X − Y) / √8
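And the playback step, as a minimal Python sketch of the square decode above (Z is unused by a horizontal speaker square):

```python
import math

def decode_square(W, X, Y, Z):
    """Decode first-order B-format to four virtual speakers on a square:
    left-front, left-back, right-front, right-back."""
    s8 = math.sqrt(8)
    LF = (2 * W + X + Y) / s8
    LB = (2 * W - X + Y) / s8
    RF = (2 * W + X - Y) / s8
    RB = (2 * W - X - Y) / s8
    return LF, LB, RF, RB

# A source encoded straight ahead (W = 1/sqrt(2), X = 1, Y = 0) decodes
# loudest, and equally, in the two front speakers.
LF, LB, RF, RB = decode_square(1 / math.sqrt(2), 1.0, 0.0, 0.0)
```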
34. References and Further Reading
• Google’s take on spatial audio: https://developers.google.com/vr/concepts/spatial-audio
HRTF:
• Algazi, Duda, Thompson, Avendano, “The CIPIC HRTF Database”, Proc. 2001 IEEE Workshop on Applications of Signal Processing to Audio and Electroacoustics
• download CIPIC HRTF database here: http://interface.cipic.ucdavis.edu/sound/hrtf.html
Resources by Google:
• https://github.com/GoogleChrome/omnitone
• https://developers.google.com/vr/concepts/spatial-audio
• https://opensource.googleblog.com/2016/07/omnitone-spatial-audio-on-web.html
• http://googlechrome.github.io/omnitone/#home
• https://github.com/google/spatial-media/
Demo