The document provides an overview of recording services at TU Graz, including a history of developments from 2006 to 2012. It introduces general manual recording, manual streaming, and prototypes of automated recording and streaming, and presents key facts on the growing number and duration of recordings over the years. The Recordings for LifeLongLearning project aims to improve the recording workflow and to investigate uses of recordings for lifelong learning through evaluations, searchable recordings, and automated systems.
Evaluation of format identification tools – SCAPE Project
Johan van der Knijff, The National Library of the Netherlands, presented his evaluation of format identification tools. He concluded by discussing the potential next steps for tools like DROID, following the results of his evaluation.
This talk was given as part of The Future of File Format Identification: PRONOM and DROID User Consultation, in collaboration with the Digital Preservation Coalition at The National Archives, UK, on 28 November 2011.
Listen to the full presentation at http://media.nationalarchives.gov.uk/index.php/johan-van-der-knijff-evaluation-of-format-identification-tools/
Authors/Presenters: Vasileios Mezaris and Benoit Huet.
Video hyperlinking is the introduction of links that originate from pieces of video material and point to other relevant content, be it video or any other form of digital content. The tutorial presents the state of the art in video hyperlinking approaches and in relevant enabling technologies, such as video analysis and multimedia indexing and retrieval. Several alternative strategies, based on text, visual and/or audio information are introduced, evaluated and discussed, providing the audience with details on what works and what doesn’t on real broadcast material.
Curriculum Development of an Audio Processing Laboratory Course – sipij
This paper describes the development of an audio processing laboratory curriculum at the graduate level. A real-time speech and audio signal-processing laboratory is set up to enhance speech and multimedia signal-processing courses and to support design projects. The recent fixed-point TMS320C5510 DSP Starter Kit (DSK) from Texas Instruments (TI) is used, and a set of courseware is developed. In addition, the paper discusses the instructor's and students' assessments of, and recommendations for, this real-time signal-processing laboratory course.
The following resources come from the 2009/10 B.Sc in Media Technology and Digital Broadcast (course number 2ELE0076) from the University of Hertfordshire. All the mini projects are designed as level two modules of the undergraduate programmes.
Digital Preservation Best Practices: Lessons Learned From Across the Pond – ULB Bibliothèques
Digital Preservation Best Practices: Lessons Learned From Across the Pond. Slavko Manojlovich (Associate University Librarian (IT) / Manager, Digital Archives Initiative, Memorial University, St John's, Canada) and Benoit Pauwels (Head, Library Automation Team, Université libre de Bruxelles, Belgium)
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2022/09/comparing-ml-based-audio-with-ml-based-vision-an-introduction-to-ml-audio-for-ml-vision-engineers-a-presentation-from-dsp-concepts/
Josh Morris, Engineering Manager at DSP Concepts, presents the “Comparing ML-Based Audio with ML-Based Vision: An Introduction to ML Audio for ML Vision Engineers” tutorial at the May 2022 Embedded Vision Summit.
As embedded processors become more powerful, our ability to implement complex machine learning solutions at the edge is growing. Vision has led the way, solving problems as far-reaching as facial recognition and autonomous navigation. Now, ML audio is starting to appear in more and more edge applications, for example in the form of voice assistants, voice user interfaces and voice communication systems.
Although audio data is quite different from video and image data, ML audio solutions often use many of the same techniques initially developed for video and images. In this talk, Morris introduces the ML techniques commonly used for audio at the edge, and compares and contrasts them with those commonly used for vision. You’ll get inspired to integrate ML-based audio into your next solution.
The following resources come from the 2009/10 B.Sc in Media Technology and Digital Broadcast (course number 2ELE0073) from the University of Hertfordshire. All the mini projects are designed as level two modules of the undergraduate programmes.
The objectives of this module are to demonstrate within a digital broadcast environment:
• an understanding of audio requirements for digital editing.
• an awareness of technical constraints for content management and creation.
• the creation of multitrack sound sequences.
Students will be provided with a library of audio samples captured from a range of sources from which they must select and edit into several short duration, professional quality, multitrack sound sequences. The project provides students with an awareness of current audio standards and also the need to appraise the technical content of source material. The project also introduces the use of contemporary digital authoring tools and processes.
Presentation of the research activities of the GPAC team of Telecom ParisTech during the plenary session of the "Réseau Thématique 4" of the Mines-Telecom Institute
Re-using Media on the Web tutorial: Media Fragment Creation and Annotation – MediaMixerCommunity
The tutorial explains approaches to visual, audio and textual media analysis that automatically generate meaningful media fragments from a media resource, and demonstrates the latest results in video fragmentation, visual concept and event detection, face detection, object re-detection, and the use of speech recognition and keyword extraction from text to support multimedia analysis.
What's new in MPEG? A brief update on the results of the 131st MPEG meeting, featuring:
- Welcome and Introduction: Jörn Ostermann, Acting Convenor of WG11 (MPEG)
- Versatile Video Coding (VVC): Jens-Rainer Ohm and Gary Sullivan, JVET Chairs
- MPEG 3D Audio: Schuyler Quackenbush, MPEG Audio Chair
- Video-based Point Cloud Compression (V-PCC): Marius Preda, MPEG 3DG Chair
- MPEG Immersive Video (MIV): Bart Kroon, MPEG Video BoG Chair
- Carriage of Versatile Video Coding (VVC) and Enhanced Video Coding (EVC): Young-Kwon Lim, MPEG Systems Chair
- MPEG Roadmap: Jörn Ostermann, Acting Convenor of WG11 (MPEG)
MPEG Web site: https://mpeg-standards.com/meetings/mpeg-131/
Automated Podcasting System for Universities
1. TU Graz Recording Services: Overview
History and development
Introducing recording services
General manual recording
Manual streaming
Automated recording and streaming (prototype)
Facts and didactics
Project – Recordings for LifeLongLearning
Searchable recordings by indexing screencasts
Automated audio post-processing
Automated recordings
2. History and Development I
Past
• 2006 – Start of Podcasting Service: simple screening and audio recording with Camtasia – 50% failed. First efforts for automated postprocessing.
• 2007 – Lifetime Podcasting: 1st Austrian Podcast Conference, in cooperation with iUNIg
• 2008 – Start of (live) Streaming Service: live screening, audio and video recording on the ePresence server (Desire2Learn), http://curry.tugraz.at
• 2009 – Start of the iTunes U platform for TU Graz, http://itunes.tugraz.at/series
• 2010 – Start of project: Recordings for LifeLongLearning
• 2010 – Start of project: Automated Recording
• 2011 – Start of subproject: Searchable Recordings (stationary workflow version)
3. History and Development II
Ongoing Developments and Future
• Since 2010 – Project: Recordings for LifeLongLearning – overall project in the field of recordings
• Since 2010 – Automated Lecture Recordings. Focus: workflow and usability improvement for recordings; fully automated recording and postprocessing of lectures
• Since 2011 – Searchable Recordings. Focus: independent workflow version; documentation
• Since 2011 – Automated Audio Postprocessing: cooperation with Georg Holzmann from "auphonic". Focus: speech recognition
9. Strategy: Open Educational Resources
http://opencontent.tugraz.at
Model by Schaffert (Schaffert, 2010) adapted to TU Graz initiatives
10. Facts of Podcasting Service I
[Chart: Number of Recordings and Recording Time (h) per semester, WS06 to SS12]
11. Facts of Podcasting Service II
[Chart: Total Number of Recordings and Total Recording Time (h), cumulative, WS06 to SS12]
12. Didactics and Workflow I
Didactics and Purposes
• General recording (screening / audio / video): full recording of a lesson; pre- or post-recording at the office; tutorial and instructional sequences; process-centered content; short clips for the help center
• Live streaming (screening / audio / video): blended learning; mass courses; special events
• iTunes U: "selected" media files for public relations
13. Didactics and Workflow II
Workflow of General Recording
• Framework: agreement with the teacher, recording details, copyright aspects
• Preprocess: check of hardware, software and lecture room conditions; wireless microphone, Tablet PC; Camtasia, iShowU
• Recording: minimal or full assistance
• Postprocess: audio optimization; text-to-search processing (indexing screencasts); production of end formats (Flash with search, MP4); HTML5 environment (to be programmed)
• Publishing on the TU Graz TeachCenter (LMS)
14. Project – Recordings for LifeLongLearning I
• Project framework: period 2010/01 to 2012/12; in the course of the "Leistungsvereinbarungen" (performance agreements); budget: approx. 100,000 €
• Project partners:
TU Graz, Office for LifeLongLearning: http://lifelonglearning.tugraz.at
TU Graz, Dept. Social Learning: http://elearning.tugraz.at
TU Graz, Dept. Information Design & Media
Associated partner: Auphonic: https://auphonic.com
• Project focus: general topic: investigations on recordings for lifelong learning at universities. Subjects: didactic scenarios for recordings; evaluation of recording activities; potential of recording services for general university practice
15. Project – Recordings for LifeLongLearning II
• Project investments: personnel: approx. 40 h/w, 4 people (20 h/w, 10 h/w, on demand); equipment: various hardware for recording purposes; set-up hardware for automated recording
• Project efforts:
Evaluations: hardcopy polls of 4 very different lectures; automated evaluations of streaming server data
Developments: indexing screencasts for text-searching videos; fully automated recording systems for lecture rooms
University practice: LLL course "Reinraumtechnik" (planned)
• Publications:
Grigoriadis, Y.; Stickel, C.; Schön, M.; Nagler, W.; Ebner, M.: Automated Podcasting System for Universities. In: Conference Proceedings ICL 2012 (in print)
Ebner, M.; Nagler, W.; Schön, M.: Have They Changed? Five Years of Survey on Academic Net-Generation. In: Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2012, pp. 343–353
Grigoriadis, Y.; Fickert, L.; Ebner, M.; Schön, M.; Nagler, W.: Podcasting for Electrical Power Systems. In: Conference Proceedings MIPRO 2012, pp. 1412–1417
Schön, M.; Ebner, M.; Kothmeier, G.: It's Just About Learning the Multiplication Table. In: LAK12 – 2nd International Conference on Learning Analytics & Knowledge (2012), pp. 1–8
Nagler, W.; Grigoriadis, Y.; Stickel, C.; Ebner, M.: Capture Your University. In: IADIS International Conference e-Learning 2010, pp. 139–144
16. Searchable Recordings by Indexing Screencasts I
• Part of the project Recordings for LifeLongLearning
• Aim: make recordings searchable. A full-length lecture recording (45, 90 min or more) typically contains the slides of a presentation.
• Method: generate an index from the extracted text
Key technology: OCR (optical character recognition)
Input: screencast
Output: encoded video embedded in a Flash player with a ToC (table of contents) and a word search field
Problem: OCR software is not compatible with video files
Solution: frame extraction
17. Searchable Recordings by Indexing Screencasts II
• What software to use?
• Which frame to extract?
• Are all extracted frames useful?
The frames can be thought of as a sequence ..., f[n–1], f[n], f[n+1], ... Consecutive frames tend to be very similar in content. This allows repetitive data to be discarded; lost data can later be reconstructed from neighbouring frames. Detection of frames with significant content changes:
IF |fs[n–1] – fs[n]| < S OR j[n] – j[n–1] < T THEN discard the current frame f[n]
with
n: number of the frame
fs: size in bytes
j: time in ms
S: deviation parameter for the size
T: deviation parameter for the time
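The frame-selection rule above can be sketched in Python as follows; the function name, the representation of frames as (time, size) tuples, and the concrete threshold values for S and T are illustrative assumptions, not taken from the slides.

```python
# Sketch of the slide's rule: discard frame f[n] when its file size barely
# differs from f[n-1]'s (|fs[n-1] - fs[n]| < S) or when it follows f[n-1]
# too quickly (j[n] - j[n-1] < T). Threshold defaults are illustrative.

def select_key_frames(frames, size_threshold=2048, time_threshold=1000):
    """frames: list of (time_ms, size_bytes) tuples in playback order.
    Returns indices of frames with significant content changes."""
    kept = []
    for n, (time_ms, size_bytes) in enumerate(frames):
        if n > 0:
            prev_time, prev_size = frames[n - 1]
            if (abs(prev_size - size_bytes) < size_threshold
                    or time_ms - prev_time < time_threshold):
                continue  # repetitive frame: discard f[n]
        kept.append(n)
    return kept
```

Frames dropped this way can be tolerated because, as the slide notes, their content can later be reconstructed from neighbouring frames.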
18. Searchable Recordings by Indexing Screencasts IV
Frame extraction software
Encoding a video file: FFmpeg, http://ffmpeg.org
$ ffmpeg -i <inputfile> -ac 1 -ab 40k -vcodec libx264 -fpre <codec_preset> -crf 23 -vstats_file <outputfile>
-i: name of the input video file
-ac: number of audio channels
-ab: audio bitrate
-vcodec: video codec library
-crf: constant rate factor
-vstats_file: generation of the -vstats file
Frame selection: FFmpeg (-vstats option) – select local "I" frames and extract their timestamps; further frame sorting by size and position: Perl, http://perl.org
Extracting a specific frame from a video file:
$ ffmpeg -ss <offset> -i <inputfile> -an -vframes 1 -qscale 1 <outputfile>
-ss: offset (time of the frame to be extracted) in seconds
-an: no audio
-vframes: number of consecutive frames to extract
-qscale: quality factor (1 [best] to 31 [worst])
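The two ffmpeg invocations above can be wrapped in a small Python helper; the function names are mine, the option values mirror the slides, and actually running the commands assumes ffmpeg is available on the PATH.

```python
# Sketch wrapping the slides' two ffmpeg commands with subprocess.
# encode_cmd/extract_frame_cmd are hypothetical helper names.
import subprocess

def encode_cmd(input_file, preset_file, vstats_file):
    """Re-encode to H.264 (mono audio, 40 kb/s) and write frame statistics."""
    return ["ffmpeg", "-i", input_file, "-ac", "1", "-ab", "40k",
            "-vcodec", "libx264", "-fpre", preset_file,
            "-crf", "23", "-vstats_file", vstats_file]

def extract_frame_cmd(input_file, offset_seconds, output_image):
    """Grab one frame at the given offset, best quality, no audio."""
    return ["ffmpeg", "-ss", str(offset_seconds), "-i", input_file,
            "-an", "-vframes", "1", "-qscale", "1", output_image]

def run(cmd):
    # Raises CalledProcessError if ffmpeg exits with a non-zero status.
    subprocess.run(cmd, check=True)
```

Building the argument list explicitly (rather than a shell string) avoids quoting problems with file names that contain spaces.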
20. Searchable Recordings by Indexing Screencasts VI
• OCR procedure: the extracted frames are sent to the OCR software; the OCR returns one text file for each frame; the name of the text file contains the timing info; the information from the text files is collected and used for the ToC
• The OCR software runs on an iMac using Windows 7 through VirtualBox
• The OCR software has a "hot folder" feature: it starts operating automatically on folder input
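The collection step can be sketched as follows. The filename scheme frame_<ms>.txt, encoding the timestamp in milliseconds, is an assumption for illustration; the slides only say that the text file name contains the timing info.

```python
# Sketch: collect per-frame OCR text files into a ToC / search index.
# Assumed (hypothetical) naming scheme: frame_<milliseconds>.txt
import os
import re

def build_toc(ocr_dir):
    """Return a list of (time_ms, text) entries sorted by time."""
    toc = []
    for name in os.listdir(ocr_dir):
        m = re.fullmatch(r"frame_(\d+)\.txt", name)
        if not m:
            continue  # skip files that don't follow the naming scheme
        with open(os.path.join(ocr_dir, name), encoding="utf-8") as f:
            toc.append((int(m.group(1)), f.read().strip()))
    return sorted(toc)

def search(toc, word):
    """Timestamps (ms) of all frames whose OCR text contains the word."""
    return [t for t, text in toc if word.lower() in text.lower()]
```

The (time, text) pairs are exactly what the Flash player's ToC and word search field need: each hit maps a search term back to a seekable position in the recording.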
21. Searchable Recordings by Indexing Screencasts VII
Method implemented in summer 2011; still under further development
22. Automated Audio Postprocessing
• Cooperation with "auphonic"
• auphonic provides a well-functioning audio-processing service free of charge: "We develop new algorithms in the area of music information retrieval and audio signal processing to create an automatic audio post production web service for podcasts, audio books, lecture recordings, screencasts, etc."
• auphonic offers an API for automated upload and download of the audio files to be processed
• https://auphonic.com/api-docs/index.html
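A minimal sketch of driving such an upload, here by composing a curl command for Auphonic's simple production endpoint. The endpoint path and form-field names are assumptions recalled from the linked API docs and should be verified there; the credentials and preset UUID are placeholders.

```python
# Hedged sketch: compose a curl command that uploads a recording to Auphonic
# and starts processing. Endpoint path and field names ("input_file",
# "action", "preset") are assumptions based on the linked API docs --
# check https://auphonic.com/api-docs/index.html before relying on them.

def auphonic_curl_cmd(audio_path, username, password, preset=None):
    cmd = [
        "curl", "-X", "POST",
        "https://auphonic.com/api/simple/productions.json",
        "-u", f"{username}:{password}",      # HTTP basic auth
        "-F", f"input_file=@{audio_path}",   # multipart file upload
        "-F", "action=start",                # start processing after upload
    ]
    if preset:
        cmd += ["-F", f"preset={preset}"]    # UUID of a saved preset
    return cmd
```

The resulting list can be handed to subprocess.run; the finished production would then be downloaded through the API as described in the docs.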
25. Automated Recording III
• Crestron media control panel in the lecture hall
• Epiphan Lecture Recorder X2, controlled by the Crestron via its RS-232 API
Audio signal: single-channel mix from the audio mixer of the lecture hall
Video SD channel: SANYO IP cam
Video HD channel: laptop video signal; projector resolution 1280x960, automated scaling up to HD 1920x1080 (under construction)
HD, SD and audio are saved separately in a multi-track AVI file
• Transfer from the X2 to the streaming server via intranet FTP
• Streaming server hardware: Lynx CALLEO Application Server 4250; 16-core CPUs; 64 GB RAM; 20 TB HDD space
• Streaming server software: Wowza 3.0.3 on Windows 2008 Server (controlled via RDP), for manual streaming with ePresence and automated recording with the Epiphan X2; multicasting
27. Automated Recording V
• Finalising the automated post-processing
• Focus on speech recognition
• Introducing a calendar-based booking system connected to or implemented in the university administration platform (TUGRAZonline); all lecture hall control panels are connected to TUGRAZonline
• Under discussion: automated start and stop of recordings driven by the booking system; legal aspects: works councils, copyright, ...
• Prototype at HS 13 working since 2012
• 7 more systems are planned to start in autumn 2012
• Streaming to lecture halls
28. Contact
TU Graz – Dept. Social Learning: Team Podcasting
Walther Nagler, Ypatios Grigoriadis, Wolfgang Hauer, Christian Stickel
walther.nagler@tugraz.at
ypatios@gmail.com
Social Learning (TU Graz) – sociallearning
http://elearning.tugraz.at