1. MATERIALS
The following tools were used:
1. C6713 DSK — Development board with the TI C6713 DSP
2. Code Composer Studio (CCS) — IDE used for programming in C and
Assembly languages
3. MATLAB — Simulation environment used to design and test the
various digital filters
4. Visual Analyzer — Serves as an oscilloscope for real-time analysis
5. Adafruit DRV2605L Haptic Motor Controller, Arduino Uno R3, and
Vibrating Mini Motor Disc — Hardware components for the beat
detector device
6. MP3 player and headphones
AUDIO EFFECTS
Echo - An echo is one of the simplest acoustic modeling
problems. Echoes occur when a sound arrives via more
than one acoustic propagation path (Figure 1). This effect
was implemented using a simple digital filter.
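A minimal C sketch of such a filter, using a circular delay line (the delay length and gain here are illustrative choices, not the values used on the DSK):

```c
#define ECHO_DELAY 2000   /* delay in samples (~0.25 s at 8 kHz); illustrative */
#define ECHO_GAIN  0.5f   /* attenuation of the delayed path; illustrative */

static float delay_line[ECHO_DELAY];
static int   idx = 0;

/* y[n] = x[n] + a * x[n - D]: one output sample per input sample */
float echo_sample(float x)
{
    float y = x + ECHO_GAIN * delay_line[idx];
    delay_line[idx] = x;             /* store the current input for later */
    idx = (idx + 1) % ECHO_DELAY;    /* circular buffer index */
    return y;
}
```

An impulse fed into this filter reappears once, D samples later, at half its original amplitude.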
Reverberation - This is similar to the echo effect except
the delayed path (Figure 1, path “r”) is “faded out” over
time. The effect acoustically simulates a concert hall or a
large listening space.
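One common way to fade the delayed path out over time is a feedback comb filter, in which the output, rather than the input, is fed back through the delay line. A sketch under assumed delay and gain values (the source does not specify the exact structure used):

```c
#define REV_DELAY 2000   /* delay in samples; illustrative */
#define REV_GAIN  0.6f   /* feedback gain; must be < 1 so echoes fade out */

static float rev_line[REV_DELAY];
static int   rev_idx = 0;

/* y[n] = x[n] + g * y[n - D]: each pass through the loop re-attenuates
 * the delayed path, so repeated echoes decay geometrically */
float reverb_sample(float x)
{
    float y = x + REV_GAIN * rev_line[rev_idx];
    rev_line[rev_idx] = y;           /* feed back the OUTPUT, unlike the echo */
    rev_idx = (rev_idx + 1) % REV_DELAY;
    return y;
}
```

An impulse now produces an infinite train of echoes with amplitudes g, g², g³, …, which is what gives the "faded out" concert-hall character.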
Chorus - The chorus effect is any DSP effect that makes
one sound source (such as a voice) sound like many such
sources singing (or playing) in unison. It was achieved
with a time-varying delay line whose delay length is slowly
swept by a Low-Frequency Oscillator (LFO).
Head-Related Transfer Functions (HRTFs) - HRTFs are
special digital filters that characterize how the ear
receives sound; filtering with them is a common method
for 3D binaural spatial audio (i.e., over headphones). The
HRTFs used here were measured on a KEMAR mannequin
with average anatomical features (i.e., MIT's HRTF
database [2]). The HRTF coefficients were imported into
the CCS program, which used them to spatialize audio in
real-time (no offline processing was required). A Human
Computer Interface (HCI) was also produced that allowed
a user to move the sound source azimuthally (see Figure
2).
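Spatialization with HRTFs amounts to filtering the mono input through a pair of per-ear FIR filters (the head-related impulse responses for the chosen azimuth). A hypothetical sketch; the tap count and function name are illustrative assumptions, not the project's actual code:

```c
#define HRTF_TAPS 128   /* assumed HRIR length, for illustration */

/* Per-ear FIR filtering: convolve the mono input with the left and
 * right head-related impulse responses for the current azimuth.
 * hist[] is a caller-provided circular history of HRTF_TAPS samples. */
void hrtf_filter(const float *hrir_l, const float *hrir_r,
                 float x, float hist[], int *pos,
                 float *out_l, float *out_r)
{
    hist[*pos] = x;                   /* newest sample into the history */
    float l = 0.0f, r = 0.0f;
    int i, j = *pos;
    for (i = 0; i < HRTF_TAPS; i++) {
        l += hrir_l[i] * hist[j];     /* left-ear convolution sum */
        r += hrir_r[i] * hist[j];     /* right-ear convolution sum */
        j = (j - 1 + HRTF_TAPS) % HRTF_TAPS;
    }
    *pos = (*pos + 1) % HRTF_TAPS;
    *out_l = l;
    *out_r = r;
}
```

Moving the source azimuthally then reduces to swapping in the HRIR coefficient pair measured at the new angle.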
BEAT DETECTOR
A haptic motor controller acts as a driver for the mini disc.
The controller was connected to the DSK via an Arduino.
CCS was used to interface with the DSK (see Figure 5).
The incoming music signal is continuously sampled at 8
kHz (with a 4 kHz anti-aliasing filter on the codec) and
stored in a buffer. The buffer holds 4000 points and is
decomposed into 20 chunks of 200 points each. The
signal energy of the chunk containing the most recently
collected samples is compared to the signal energy of the
entire buffer. When this portion of the signal has
significantly higher energy than the rest of the signal, it is
considered a beat (see Figure 7).
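The energy comparison described above can be sketched as follows; the threshold ratio is an assumed tuning value, not one given in the source:

```c
#define BUF_LEN    4000   /* 0.5 s of audio at 8 kHz */
#define CHUNK_LEN  200    /* 20 chunks of 200 samples each */
#define THRESHOLD  1.4f   /* energy ratio treated as a beat; tuning assumption */

/* Returns 1 if the newest chunk's average energy is significantly
 * above the average energy of the whole buffer. */
int is_beat(const float buf[BUF_LEN], int newest_start)
{
    float chunk_e = 0.0f, total_e = 0.0f;
    int i;
    for (i = 0; i < BUF_LEN; i++)
        total_e += buf[i] * buf[i];           /* energy of the whole buffer */
    for (i = 0; i < CHUNK_LEN; i++) {
        float s = buf[(newest_start + i) % BUF_LEN];
        chunk_e += s * s;                     /* energy of the newest chunk */
    }
    /* compare per-sample averages so the 200-point chunk and the
       4000-point buffer are on the same scale */
    float chunk_avg = chunk_e / CHUNK_LEN;
    float total_avg = total_e / BUF_LEN;
    return chunk_avg > THRESHOLD * total_avg;
}
```

Averaging per sample before comparing keeps the decision independent of the chunk and buffer lengths; the threshold would be tuned against real music.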
Embedded Real-Time Sound Effects Processing Systems
Richard Jung, Olivia Meza, Kenneth John Faller II, Ph.D.
DISCUSSION
The implications of this research reach far beyond the education-
al setting. A major goal was to explore various ways in which em-
bedded real-time sound effects can be used in assistive devices,
particularly in the field of haptics technology. We successfully
created a simple haptic device that detected low frequencies
from the analog audio signal output of an MP3 player. This
served as a proof of concept that a more complex device, such
as an armband, could be created in a similar fashion, allowing
the user to “feel” music (or even environmental surroundings)
through tactile sensations.
ACKNOWLEDGEMENTS
Thanks to the (STEM)² program and its partnership with Citrus
College, allowing undergraduate community college students the
opportunity to conduct research during the Summer Research
Experience of 2015. Also, much thanks and appreciation to Dr.
Kenneth John Faller II, for his guidance and supervision through-
out this experience. This work was funded by the U.S. Depart-
ment of Education - Title III Part F, Grant # P031C110116.
RESULTS
All digital filter designs created with MATLAB’s fdatool and
incorporated into CCS projects were evaluated and compared
with Visual Analyzer and tested with output devices such as
headphones. All project implementations worked according to
their designs.
INTRODUCTION
The intention for this summer experience was to provide a com-
plete view of the computer engineering curriculum through sev-
eral small projects. Digital Signal Processors (DSPs) have been an
essential component for a wide array of applications including
communications, control technology, image processing, and
speech processing. Common uses include audio signal pro-
cessing, radar/sonar, radio transmissions, digital cameras, medi-
cal imaging, and speech transmission in mobile phones [1]. There
were two main objectives for this project, both using DSP
techniques on a Texas Instruments (TI) C6713 DSP Starter Kit
(DSK): 1) processing sound waves and generating sound effects
(e.g., echo, reverberation, chorus, 3D audio, and equalization) in
real-time, and 2) creating a device that transforms music into a
multi-sensory experience. To achieve this, various digital audio
filters and Human Computer Interfaces (HCIs) were designed
and implemented that allow users to interact with the
embedded system in real-time.
Keywords—DSP, HCI, digital filters, real-time processing, audio
effects
CONCEPTS
As a naturally occurring phenomenon, sound waves constantly
interact with other objects in the environment, which in turn
alters these sounds and produces sound effects. In essence, a
sound effect is any modified or enhanced sound. These
sound wave interactions can be modeled and simulated digitally
in a similar fashion using a computer. An echo effect is an exam-
ple of a reflected sound wave (see Figure 1).
Digital Signal Processing can be defined as the mathematical ma-
nipulation and discrete digital representation of an information
signal (such as audio analog signals). A digital filter system usual-
ly consists of an ADC to sample the input signal, followed by a
DSP and some peripheral components such as memory to store
data and filter coefficients. Finally, a DAC completes the output
stage (see Figure 1).
Figure 1 - Model of the echo effect
Figure 5 - Complete assembly of haptic device, Arduino, and DSK
Figure 6 - Vibrating Mini Motor Disc
Figure 7 - Spectrogram plot of a music sample for a beat detector
Figure 8 - Design of a bandstop filter on MATLAB and matching
capture in real-time with Visual Analyzer
Figure 2 - Spatialization using HRTFs
Figure 3 - The TI C6713 DSK
METHODS
REFERENCES
[1] Chassaing, R., Digital Signal Processing and Applications with the C6713 and C6416 DSK,
vol. 16, John Wiley & Sons, 2004.
[2] B. Gardner and K. Martin, “HRTF Measurements of a KEMAR Dummy-Head Microphone,”
Massachusetts Institute of Technology (MIT) Media Laboratory Vision and Modeling Group.
[Online]. Available: http://sound.media.mit.edu/resources/KEMAR.html. [Accessed: 08-Mar
-2014].
[3] K. Faller II, C. Nguyen, and A. Barreto, “A Hands-on Approach to Binaural Spatial Audio
Education,” ASEE Comput. Educ. J., vol. 6, no. 2, pp. 90–99, 2015.
Figure 4 - Mixed signal path through the codec