IASA Presentation

Presentation for the 40th anniversary IASA Congress in Athens

  • The content descriptions, which generally cover an hour of audio each, mean that when results are found for a given query, users are left with large chunks of audio to explore.
  • The undisclosed part of the collection cannot be accessed, and its content is largely unknown.
  • For ‘disclosure’ the speech technology researchers want “to automatically generate a time-stamped content description”. The automation will reduce the human annotation effort, and the fact that annotations are time-stamped means that words are linked to locations in the audio recording, allowing fragments to be retrieved in addition to entire audiovisual documents. The technology used for disclosure depends on (1) the available metadata, and (2) the availability of context documents, i.e. documents that are either directly related to the recording or to the topic of the recording. When a transcript of the recording is available, the words in the transcript can be aligned to the audio. During this process the locations of the known words are determined in the audio signal. The result is a fairly accurate index of which word was said where in the audio. When it is unknown exactly what was said in the audio recording, ASR can be used to generate hypotheses of what was said where in the recording. Context documents can be valuable here to improve the models used for speech recognition. Speech recognizers generate output that is generally not error-free, but up to word error rates of 30 to 40% -- that is, 3 or 4 out of every ten words recognized incorrectly -- the automatically generated content descriptions may successfully be used as search indexes. This is explained by the fact that speech is redundant, i.e. when something is on-topic it will be referred to more than once, and by the fact that many of the words with a high risk of being misrecognized make a relatively small contribution to the information content, e.g. prepositions (in, at), determiners (a, the), etc.
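
    As an illustration of the word error rate (WER) figure used above, the following minimal Python sketch computes WER as the word-level edit distance between an ASR hypothesis and a reference transcript, normalized by the reference length; the function name and the example sentences are invented for demonstration.

        def word_error_rate(reference: str, hypothesis: str) -> float:
            ref, hyp = reference.split(), hypothesis.split()
            # d[i][j] = minimum number of substitutions, insertions and deletions
            # needed to turn the first i reference words into the first j hypothesis words
            d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
            for i in range(len(ref) + 1):
                d[i][0] = i
            for j in range(len(hyp) + 1):
                d[0][j] = j
            for i in range(1, len(ref) + 1):
                for j in range(1, len(hyp) + 1):
                    cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                    d[i][j] = min(d[i - 1][j] + 1,         # deletion
                                  d[i][j - 1] + 1,         # insertion
                                  d[i - 1][j - 1] + cost)  # substitution or match
            return d[len(ref)][len(hyp)] / len(ref)

        # 4 errors in 10 reference words -> WER 0.4, the upper bound mentioned above
        print(word_error_rate("the quick brown fox jumps over the lazy dog today",
                              "a quick brown fox jumped over a lazy dog yesterday"))
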
  • How does CHoral technology fit into the archiving workflow? This, of course, is a simplified representation, but it gives a general idea. After content has been produced, it is transferred to the archives for preservation. The data are stored, archivists index the collection, and users may search the index for recordings of possible interest. <start animation> CHoral uses the recordings and the existing metadata to give the user a new kind of access. In addition to searching the catalogue for recordings that can be listened to in the archive’s listening room, search results come with audio fragments that can be listened to online, e.g., from the searcher’s home or work location. <animation 2> The technology consists of automatic speech recognition for index generation, information retrieval technology for finding relevant audio fragments in the collection, and new user interface components that support interaction with the audio fragments.
  • During alignment the locations of known words are determined in the speech signal. By matching the acoustics in the speech signal to the expected acoustics of individual words, each word in the transcript is matched to the location in the audio where it is most likely to occur. This results in an index that gives exact word positions for each word in the transcript. The accuracy of the resulting index is very high.
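
    To make the alignment result concrete, here is a small Python sketch of the kind of time-stamped word index it produces (cf. the frame table on slide 21 below). The word and frame data come from that table; the frame rate used to convert frame numbers to seconds is an assumption, as are all names.

        from dataclasses import dataclass

        FRAMES_PER_SECOND = 16000  # assumed: frame numbers at audio-sample resolution

        @dataclass
        class AlignedWord:
            word: str
            begin_frame: int
            end_frame: int

            @property
            def begin_seconds(self) -> float:
                return self.begin_frame / FRAMES_PER_SECOND

        index = [
            AlignedWord("-silence-",    0,     54400),
            AlignedWord("Landgenooten", 54400, 65280),
            AlignedWord("Waar",         65280, 69120),
        ]

        def locate(query: str) -> list[float]:
            """Start times (in seconds) of all occurrences of a word in the recording."""
            return [w.begin_seconds for w in index if w.word.lower() == query.lower()]

        print(locate("landgenooten"))  # -> [3.4]
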
  • The following type of speech recognition system is used. Before the actual recognition process is started, some pre-processing is done. This consists of (1) classifying the audio document into speech and non-speech segments, so that the parts of the recordings that do not contain speech (e.g., music, street noise) are not fed to the recognition system, and (2), optionally, segmenting the speech into coherent chunks per speaker, so that models may be adapted to individual speakers. The speech recognition system itself consists of three components: (1) an acoustic model that models the different speech sounds of a language, (2) a language model that models which sequences of words are likely, and (3) a dictionary that specifies which speech sounds make up each word. To develop an acoustic model, over 50 hours of annotated speech materials are needed. To develop a language model, texts of hundreds of millions of words are used. The output of the ASR system is a word-level index, or a hypothesis of which words were spoken where in the audio document. Instead of running the recognition process just once, the output of the first round may be used to better choose the models used during recognition. Therefore, a so-called second pass is often run with adapted models to arrive at a more accurate index.
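
    The control flow of such a two-pass system can be sketched as follows. This is a toy, runnable Python illustration in which the “models” and the recognizer are stand-ins (all names invented); it is meant only to show the order of operations: pre-process, recognize with generic models, adapt per speaker, re-recognize.

        from dataclasses import dataclass

        @dataclass
        class Segment:
            audio: str        # stand-in for an audio buffer
            is_speech: bool   # result of speech/non-speech classification
            speaker: str      # result of speaker segmentation

        def preprocess(recording):
            """Keep only the segments that contain speech."""
            return [s for s in recording if s.is_speech]

        def recognize(segment, models):
            """Stand-in recognizer combining acoustic model, language model and dictionary."""
            return f"hypothesis({segment.audio}; {models})"

        def adapt(models, segment, first_pass_hypothesis):
            """Stand-in adaptation of the models to one speaker's data."""
            return f"{models} adapted to {segment.speaker}"

        recording = [
            Segment("news item", True, "anchor"),
            Segment("jingle", False, "-"),       # filtered out: no speech
            Segment("interview", True, "guest"),
        ]

        generic = "generic models"
        first_pass = [(seg, recognize(seg, generic)) for seg in preprocess(recording)]
        # the second pass with adapted models yields the final word-level index
        second_pass = [recognize(seg, adapt(generic, seg, hyp)) for seg, hyp in first_pass]
        print(second_pass)
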
  • The output of an ASR system can take several forms. The most well-known form of output is in sentences, reflecting the most likely word sequence that was recognized by the system. For indexing purposes, however, other output types should be considered. One candidate is the lattice structure, which stores not only the most likely word sequence for a certain fragment of audio, but also alternative words that are likely at certain positions. In this way, alternatives are kept available.
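
    A word lattice of this kind can be represented very simply. The Python sketch below (invented data and names) stores alternative word hypotheses with scores per time span, so that a search index can also match words that were not in the single best sequence.

        # Each entry: (start_time_s, end_time_s, word, probability)
        lattice = [
            (0.0, 0.4, "er", 0.6),
            (0.0, 0.4, "'t", 0.4),   # alternative kept alongside the best word
            (0.4, 0.6, "is", 0.9),
            (0.6, 0.9, "een", 0.8),
            (0.6, 0.9, "en", 0.2),
        ]

        def words_at(t: float):
            """All word hypotheses overlapping time t, best-scoring first."""
            hits = [(w, p) for (b, e, w, p) in lattice if b <= t < e]
            return sorted(hits, key=lambda hit: -hit[1])

        print(words_at(0.2))  # -> [('er', 0.6), ("'t", 0.4)]
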
  • For successful take-up of the technology, some investments are needed. Thanks to the ongoing digitization process as well as the standardization of formats, audio documents should increasingly be fit for automatic processing without further adaptations. The quality of automatic annotations depends on the quality of the ASR models, and those can be tuned to different domains using accurate transcriptions of representative samples and/or (large amounts of) text data on the same or a strongly related topic. But when an ASR system is used to automatically generate time-stamped content descriptions, should those descriptions be validated by archivists? And if so, how?
  • A surrogate is a textual or visual representation of the content of a spoken-word document that searchers can use to assess the document’s contents before deciding to listen to the audio.
  • Transcript

    • 1. Hidden treasures lost forever? Speech technology for the disclosure of Dutch audiovisual archives. Mies Langelaar and Willemijn Heeren
    • 2. Contents
      • Introduction & Problem statement
      • Digitization/standardization in the E-repository
      • Speech technology for AV archives
      • System demonstration
    • 3. Introduction
      • Hidden treasures of audiovisual archives lost forever?
      • Backlog
        • Data stored on deteriorating analogue carriers
        • Digitized and digital-born data in non-standardized formats
      • → Digitization and international standardization needed
      • Often only a global level of description
        • A few keywords per data unit (hour, tape, interview)
        • Often no content description at all, because annotation is very (time-)costly
      • → Reduce human effort through use of speech technology?
    • 4. The approach
      • NWO CATCH project CHoral (2006-2010)
      • Goal:
      • investigate and develop automatic annotation and search technology for spoken word archives
      • Cooperation between
      • speech technology researchers, University of Twente
      • archivists, Rotterdam Municipal Archives
    • 5. The test case
      • ‘Radio Rijnmond’ (RR) archives
        • city of Rotterdam's regional radio channel
        • initial broadcast in 1983
        • broadcast recordings, amounting to over 60,000 hours
        • partially digitized, mostly analog
        • partially disclosed, mostly waiting for annotation
        • typical of A/V archives in cultural heritage (CH)
    • 6. Searching the RR archives I: minimal content descriptions per hour of data
    • 7. Searching the RR archives II
    • 8. Main problems
      • The main problems with this example collection are:
      • 1. a large backlog of undisclosed material → data are inaccessible to third parties
      • 2. fairly unspecific annotations, if available
      • → restricted use for answering information needs
      • 3. audio is being kept on analog data carriers or on CDs
      • → interactive or online search cannot be supported
    • 9. Towards solutions …
      • Digitization/standardization in the E-repository
      • Speech technology for AV archives
    • 10. Digitization/standardization in the E-repository
    • 11. AV Collection of Rotterdam Municipal Archives
      • About 15,000 AV objects in collection
      • Most of this collection is on analog data carriers
      • Part of the collection is on CDs, dating from the 1980s onwards
      • No standardisation in storage formats
      • No or minimal metadata and description of content available
    • 12. Work in progress
      • Digitisation of the analogue audio material is done in-house
      • Standard formats that are used are:
        • .WAV for uncompressed PCM audio
        • 44.1 kHz 16-bit stereo for audio CDs that are already digitised but need preservation
        • 48 kHz 24-bit stereo for old recordings
        • Digitally produced audio is accepted in its own format
      • Access to the objects is granted by audio CD or MP3
    • 13. Work in progress (2)
      • Digitisation of video and film is done partly in-house, partly by external partners
      • The standards that are used are:
        • Minimal data rate of 50 Mb/s for conservation purposes
        • Digital Betacam for VHS and Umatic tapes
        • Digital video is accepted in its original recording format (miniDV, DVCAM, XDCAM, etc.)
        • Digibeta for 8mm, 16mm and 35mm film (processed by external partners)
        • DV25 for 16mm film (processed in-house)
      • Digital Betacam is stored as 10-bit uncompressed
    • 14. How to ensure long-term sustainability
      • Set up a trusted digital repository, consisting of hardware, software, procedures, methods, knowledge and experience
    • 15. Trusted Digital Repository
      [Diagram: trusted digital repository architecture, showing a feeder system, ingest toolkit, workflow controller, job queue, file storage with storage adaptor, characterisation, preservation planning, preservation controller, migration, technical registry, active and passive preservation, data management, metadata store, access and reporting; actors: user, administrator, archivist]
    • 16. How to ensure long-term access to data?
      • Adding a minimal set of metadata, necessary for management, preservation and access
      • Using standard archival formats
      • Making agreements with producers of AV material about acceptable formats
      • Disclosure of content through Automatic Speech Recognition (ASR)
    • 17. Speech technology for AV archives
    • 18. Disclosure through speech technology
      • Disclosure: automatically generate a time-stamped content description
      • → Allows online retrieval of fragments of AV records
      • Method depends on:
        • Available metadata
        • Availability of context documents
      • When a transcript is available:
        • Speech and transcript can be aligned, i.e. automatically couple what was said (in the transcript) to where it was said (in the audio)
      • When there is no transcript:
        • Use automatic speech recognition to generate hypotheses of what was said where in the audio
        • Word Error Rates under 40% allow automatically generated content descriptions to be used as a search index
    • 19. AV archiving workflow
      [Diagram: AV archiving workflow, connecting content production, indexing, the CHoral components (ASR, IR, UI) and the end user]
      • Research topics:
        • ASR: Automatic Indexing
        • IR: Information Retrieval
        • UI: User Interface Development
    • 20. Research
      • Automatic indexing through speech technology:
        • Development of robust automatic speech recognition and audio classification tools
      • Information Retrieval:
        • Retrieval of spoken documents based on ASR output
        • Bridging the semantic gap between user queries and spoken content
      • User Interface development:
        • Support search and browsing in audio documents
        • (Re)presentation of audio content
    • 21. Alignment: speech signal + typed transcript (Dutch: “Landgenooten waar ik enkele …”)
      Begin frame #   End frame #   Word
      00000           54400         -silence-
      54400           65280         Landgenooten
      65280           69120         Waar
      69120           73600         Ik
      73600           79520         Enkele
      …               …             …
    • 22. Automatic speech recognition
      [Diagram: ASR pipeline. Pre-processing: classification into speech/non-speech and segmentation of speakers. Recognition uses an acoustic model (trained on 50+ hours of audio), a language model (250-500 M words) and a pronunciation dictionary; a 2nd recognition pass with adapted models yields a word-level index.]
    • 23. Types of word level indexes
      • Most probable words, e.g. an ASR hypothesis vs. the reference transcript (Dutch):
        ASR: Er is een bekend beeld voor veel ouders de grote show in onveilige situatie voor de school
        TXT: ‘t is een bekend beeld voor veel ouders. De chaotische en onveilige situatie voor de school
        (roughly: “it is a familiar sight for many parents: the chaotic and unsafe situation in front of the school”)
      • Lattice structures, e.g. for the phrase “D’66 is z’n ene zetel kwijt” (“D66 has lost its one seat”)
    • 24. Discussion ASR
      • For successful automatic annotation:
        • Audio should be digitally available, preferably on a server
        • To optimize ASR models for high-quality output,
          • part of the speech should be transcribed,
          • or related documents should be available
      • How to validate automatic indexes?
    • 25. User interface development
      • Challenges
      • Understand users’ requirements and information needs
      • Support selection and browsing of spoken content
        • Representation of spoken content via a ‘surrogate’
      • Cross-linking to related content within the same collection or from another collection
      • IPR issues
    • 26. CHoral speech technology for GAR (Rotterdam Municipal Archives)
      • Alignment: Brandgrens interviews Rotterdam
      • Speech recognition: RR archives
    • 27. Discussion
      • Development is ongoing both in the workflow and daily practice at audiovisual archives, and in speech technology
      • Careful tuning of processes is needed for mutual benefit
      • Examples demonstrate envisioned benefits:
        • Potential reduction of human effort for annotation of undisclosed materials
        • Online access to fragments of spoken heritage
    • 28.
      • For more information, see http://hmi.ewi.utwente.nl/project/CHoral
      • Questions?
