SNML-TNG: Linked Media Interfaces. Graphical User Interfaces for Search and Annotation

© Salzburg NewMediaLab – The Next Generation, September 2011

ISBN 978-3-902448-29-3

by Marius Schebella, Thomas Kurz and Georg Güntner:

Linked Media Interfaces. Graphical User Interfaces for Search and Annotation

Issue 2 of the series “Linked Media Lab Reports”, edited by Christoph Bauer, Georg Güntner and Sebastian Schaffert



LINKED MEDIA INTERFACES
Graphical User Interfaces for Search and Annotation

Marius Schebella, Thomas Kurz and Georg Güntner
The Austrian competence centre "Salzburg NewMediaLab – The Next Generation" (SNML-TNG) conducts research and development in the field of intelligent content management: it aims at personalising content, making it searchable and findable, interlinking enterprise content with internal and external information resources, and building a platform for sustainable information integration. For this purpose, information about content (Linked Content), structured data (Linked Data) and social interaction (Linked People) has to be connected in a lightweight and standardised way. Our approach for interlinking across these levels is what we call "Linked Media".

SNML-TNG is a K-project within the COMET programme (Competence Centers for Excellent Technologies, www.ffg.at/comet), is co-ordinated by Salzburg Research and co-financed by the Austrian Federal Ministry of Economy, Family and Youth (BMWFJ), the Austrian Federal Ministry for Transport, Innovation and Technology (BMVIT) and the Province of Salzburg. Homepage: www.newmedialab.at

Publisher: Salzburg Research, Salzburg
Cover: Daniela Gnad, Salzburg Research

Bibliographic information of the German National Library: the German National Library lists this publication in the German National Bibliography; detailed bibliographic data are available online at http://dnb.d-nb.de.
Preface

Salzburg NewMediaLab – The Next Generation (SNML-TNG) is a competence centre in the Austrian COMET Programme. Today's enterprises (and particularly the media enterprises) rely heavily upon accurate, consistent and timely access to various types of structured and unstructured data. However, today's knowledge worker is increasingly dependent on information that resides outside the company's firewall. To meet this challenge, a new approach to data integration is needed, one that harvests the value of both internal and external data sources. There are various approaches to integration, e.g. integration on the presentation layer (as is done with portals), on the business layer (for instance via service-oriented architectures), or on the data and/or persistence layer (for example via data warehousing, federated databases, etc.).

SNML-TNG's approach to integration is to focus on the data layer using Semantic Web technologies, with an emphasis on the content and media enterprises. Information about people, data and content is semantically linked: our approach is based on the Linked Data concepts developed by the World Wide Web Consortium (W3C) and extended to include media assets (e.g. video objects). Hence we use the term "Linked Media" to denote our data integration approach for the enterprise information space.
The principles behind our Linked Media approach are a result of socio-economic analysis, technological-conceptual work, and technological development, with the release of the Open Source framework (the "Linked Media Framework") that provides a lightweight approach to interlinking information available in content and media assets, structured (meta-)data sets and people's social networks.

The ideas and technology behind the Linked Media Principles will be validated by the company partners of SNML-TNG: the content partners (ORF, Red Bull Media House, Salzburg AG and Salzburger Nachrichten) and technology partners (mediamid, Semantic Web Company, TECHNODAT), in the form of specific applications built on top of the Linked Media Framework.

This is where the demand for appropriate graphical user interfaces supporting the interlinking approach arises: the team at SNML-TNG has looked at design patterns for applications in the Linked Media sector, driven also by the fact that the underlying W3C Linked Data principles are well established in the Semantic Web community but still lack concrete applications apart from research prototypes. With the present second issue of the "Linked Media Lab Reports" we provide a glossary of design patterns for graphical user interfaces for the interested audience, the developer community and user interaction designers.
We hope that you will enjoy this second issue of our "Linked Media Lab Reports", which – after the report on the value of Linked Media in the enterprise outlined in the first issue – addresses questions on how the Linked Media Principles can be implemented on the user-facing side to realise accurate, consistent and timely access to various types of structured and unstructured data in a Linked Media Enterprise. We hope that our selective analysis provides a practical glossary of design patterns for graphical user interfaces for the interested audience, the developer community and user interaction designers.

Georg Güntner
Managing Director
www.newmedialab.at
September 2011
Content

Introduction and Background ... 7
  Introduction ... 7
  The vision of "Linked Media" ... 7
  Scope and purpose ... 10
Media Life Cycle and Design Patterns ... 11
  Media Life Cycle ... 11
  Mapping the Life Cycle to Design Patterns ... 12
Types of Linked Entities ... 17
General Aspects of Linked Media ... 21
  Typed Links ... 21
  Personalisation of Information and DRM ... 21
  Quality of Linked Data and Trusted Sources ... 22
Linked Media Interfaces – Design Patterns ... 23
Patterns for Search (including Visualisation) ... 25
  Formulating the Query ... 25
  Fine-Tuning the Query ... 29
  Search Modifiers ... 32
  Sorting and Grouping Results ... 34
  Display of Entities ... 35
  Display of Results ... 39
  Advanced Search ... 41
  Trust Indicators ... 45
  Content Summary ... 47
  Reports ... 51
  Enhancement ... 52
Patterns for Annotation ... 55
  General Annotation Based on Text Entry ... 55
  Location Annotation ... 57
  Annotation of Time ... 58
  People, Event and Theme Annotation ... 59
  Selection and Picking of Vocabulary ... 60
  Patterns for Ontology Management ... 64
  Crowd Sourced Annotation ... 66
  Other Annotation Tools ... 67
Bundled Packages ... 71
  Pool Party ... 71
  M@RS ... 71
  More Video Annotation Tools ... 71
  Video Content Annotation: Vizard Annotator ... 72
  Video Semantic Search: Jinni ... 72
Summary ... 75
References ... 77
INTRODUCTION AND BACKGROUND

Introduction

The Austrian competence centre "Salzburg NewMediaLab – The Next Generation" (SNML-TNG) conducts research and development in the field of intelligent content management: it aims at personalising content, making it searchable and findable, interlinking enterprise content with internal and external information resources, and building a platform for sustainable information integration. For this purpose, information about content (Linked Content), structured data (Linked Data) and social interaction (Linked People) has to be connected in a lightweight and standardised way. Our approach for interlinking across these levels is what we call "Linked Media".

When the concepts of Linked Media are brought to end-users, a demand for appropriate graphical user interfaces supporting the interlinking approach arises. Therefore, the team at SNML-TNG has looked at design patterns for applications in the Linked Media area. This is driven also by the fact that although the underlying W3C Linked Data principles are well established in the Semantic Web community, there is still a lack of concrete applications apart from research prototypes. With this second issue of the "Linked Media Lab Reports" we provide a glossary of design patterns for graphical user interfaces for the interested audience, the developer community and user interaction designers.

This report addresses the questions about how "Linked Media Principles" can be implemented on the front-end to realise accurate, consistent and timely access to various types of structured and unstructured data in a Linked Media Enterprise. As this report builds on the vision of "Linked Media", we will start with a short introduction of the concept.

The vision of "Linked Media"

"The Semantic Web is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation."
(Tim Berners-Lee et al., 2001)

Unfortunately, we are still at a stage where the interaction between machine and user is not free of "issues". Over the last decades we have gained a better understanding of human thinking on the one hand and technological possibilities on the other. We have learned to live with the shortcomings of how machines store and process information, and at the same time technology has evolved and taken on more human-like actions. Machines were built to collect, produce and process information without permanent human involvement. The user, in turn, has adapted to keyboard input, forms, controlled vocabulary, and so on.
The Semantic Web operates at this intersection between human and machine-like understanding of the world. It weaves human concepts into a web of machine-understandable, exchangeable and processable data. To this aim, the web community and public information archives have contributed significant structured and trustworthy information for applications to automatically harvest and apply in daily production settings.

In the multimedia space such new possibilities have redefined today's media asset management systems. There is an increase in the amount, quality and inter-relatedness of media archives. Information can be included from public repositories, such as Wikipedia or its "semantic sibling", DBpedia. Search results can be improved by incorporating Semantic Web knowledge. Information that is generated in a company can be made public.

The basic concept of SNML-TNG that describes this relatedness of information, repositories and eventually people is called "Linked Media". Linked Media combines the following three principles:

– Linked Content means hyperlinks between textual documents and other unstructured content on the Web. Such content is designed for human readers, and the hyperlinks are primarily meant for navigation purposes, i.e. when a user clicks on the link, the browser displays the site linked to. However, some services, e.g. Google, also use the hyperlink structure to rate and rank the value of information; at the core of the added value from the business perspective there are recommendations and annotations.

– Linked People means that people can connect and communicate over the Internet in ways they could not before, using social software systems. Most prominent among these systems are nowadays social networking platforms like LinkedIn, Facebook, and Xing, but some of the more content-oriented collaborative platforms (e.g. blogs, wikis) can be considered to (implicitly) link people as well, as can the users of enterprise information management systems (e.g. media asset management systems, document management systems, customer relation management systems).

– Linked Data is a recent development emerging from the "Semantic Web" community and aims at providing a common standard for linking structured data that is primarily meant for machine processing and not for human consumption [1]. With Linked Data, it is possible to collect and further process data from many different sources (the so-called "Linked Data Cloud"). As of March 2011, the Linked Open Data (LOD) Cloud comprised more than 28.5 billion statements ("RDF triples") in over 200 datasets [2] and keeps growing rapidly. Among the best-known datasets are DBpedia [3], the representation of Wikipedia in the Linked Data world, and a representation of the GeoNames geographical database in the LOD cloud [4]. Datasets compliant to the Linked Data principles can be combined in completely new ways that had not been thought of when the data was collected, e.g. in mash-ups or scientific applications.

The technology behind Linked Media is the Linked Media Framework. Its core idea is to make (extensive) use of typed interlinking technologies. Metadata properties either link to an internal knowledge store or are interlinked with entities of the Linked Open Data Cloud. All links are typed, meaning they describe the quality (predicate) of a link.

For example, instead of tagging an image with the literal "Salzburg", a user may want to identify the content of this image as "the" particular city of Salzburg that is already specified in the Linked Open Data Cloud, for example in DBpedia.

Fig. 1. Instead of a simple tag ("Salzburg"), the content of this image can be uniquely identified as http://dbpedia.org/resource/Salzburg. Source: http://en.wikipedia.org/wiki/File:SalzburgerAltstadt02.JPG [2011-09-20]

By allowing metadata property fields to point to linked entities like people, locations, historic events, etc., it is possible to support features such as the inclusion and combination of external resources, machine-driven reasoning, inferencing and many more. For the user this means an improvement in the search for resources, but also enhancements during the playback of media (presenting and accessing additional information) or automation of processes such as creating electronic programme guides. On the other hand, it also means that annotations become partly dynamic, since entities that are referred to can be edited in a separate process within a separate information source.

To respond to the new challenges from a user interaction viewpoint, we describe a set of design patterns for Graphical User Interfaces (GUIs) that allow users to interact with media resources in the realm of semantic annotation and retrieval of digital image and video files. These are the Linked Media Interfaces (LMI).

Scope and purpose

The Linked Media Interfaces (LMI) deal with situations and processes where people in the area of media asset management annotate, query, search, browse, research or interact with digital media objects and their metadata.

To generate a diverse field of applications, this report explores the following situations and approaches: it takes a look at the Media Life Cycle – processes and workflows where users interact with media information. It also provides a list of common entity types (such as people, places, events) as well as other aspects involved in the descriptive part of content annotation. The main body of the work is a description of design patterns for Graphical User Interfaces, divided into two chapters, "Search" and "Annotation".

[1] Christian Bizer, Tom Heath and Tim Berners-Lee (in press). Linked Data – The Story So Far. International Journal on Semantic Web and Information Systems, Special Issue on Linked Data.
[2] Christian Bizer, Anja Jentzsch, Richard Cyganiak. The State of the LOD Cloud. Version 0.2, 03/28/2011 (2011), http://www4.wiwiss.fu-berlin.de/lodcloud/state [2011-09-20]
[3] http://www.dbpedia.org [2011-09-20]
[4] http://www.geonames.org [2011-09-20]
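The typed-link idea introduced above – replacing a literal tag such as "Salzburg" with a typed reference to a Linked Data entity – can be sketched in a few lines of Python. This is a minimal illustrative sketch, not code from the Linked Media Framework; the predicate names and the resource identifier `image42` are hypothetical.

```python
# Sketch: annotations as (subject, predicate, object) triples, where the
# object may be a plain string literal or a Linked Data URI.
# Predicate and resource names are illustrative, not from the actual framework.

# A simple literal tag: the string "Salzburg" is ambiguous
# (the city? the province? a person's name?).
literal_tag = ("image42", "hasTag", "Salzburg")

# A typed link: the object is a DBpedia URI, so the annotation uniquely
# identifies the city and can be combined with external Linked Open Data.
typed_link = ("image42", "depictsLocation", "http://dbpedia.org/resource/Salzburg")

def is_linked(triple):
    """An annotation is 'linked' if its object is a dereferenceable URI."""
    obj = triple[2]
    return obj.startswith("http://") or obj.startswith("https://")

print(is_linked(literal_tag))  # False: just a string literal
print(is_linked(typed_link))   # True: points into the LOD cloud
```

Note that the predicate ("depictsLocation" rather than a generic "hasTag") is what makes the link *typed*: it states the quality of the relation, which is what enables the reasoning and inferencing features described above.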
MEDIA LIFE CYCLE AND DESIGN PATTERNS

This chapter explores Linked Media Interfaces from a user perspective. It examines the processes and workflows typical to media assets and tries to bridge life cycle management to the new possibilities provided by semantic technologies. The media assets handled in this context are documents, images and videos.

Media Life Cycle

"Multimedia resources typically have an extended life cycle covering an array of distinct processes and workflows" (Smith & Schirling, 2006, p. 84). Kosch et al. (2005) propose three phases of a life cycle: Creation, Management and Transaction. To complement the workflow, especially with regard to visualisation and data enhancement, we add another phase that comes into play after media resources are delivered: application and utilisation of resources.

In each phase (and sub-phase) of the multimedia life cycle users interact with metadata. But this interaction is not always the main aspect of a process. For example, a user may be reading the news and then decide to add just a small part of that information to a media asset. Nor are semantic aspects always a main aspect, but they can always influence the interaction.

Some processes, however, are solely dedicated to metadata. These parts of the life cycle are described in the Insemtives Annotation Life Cycle Deliverable (Insemtives D2.2.1) as a metadata life cycle (see Pipek et al., 2009). For our purposes this means that, in addition to the organisation, production and maintenance of media assets, these three phases are also most relevant for ontology creation, concept creation and ontology maturing. Insemtives distinguishes between controlled and uncontrolled annotation. In that context, ontology maturing means "enriching the controlled vocabulary with new terms and concepts that have been used as uncontrolled annotations".
Plan: Search, browse, create background and project information
Create: Annotate resources in real time
Acquire: Extract metadata from resources
Organize: Structure and collate metadata, organize ontologies, disclose vocabulary and interfaces, select datasets, (pre-)select RDF predicates
Produce: Annotate assets; create concepts from existing uncontrolled annotation or during new annotation of content
Maintain: Maintain, add, edit media assets; manage, author, validate, and assure quality of metadata and ontology
Store: Store and index metadata
Sell/Place at Disposal: Provide access and search mechanisms as well as an information feedback channel for metadata
Distribute: Package and organize metadata (collection baskets)
Deliver: Deliver metadata
Consume: Display metadata
Use: Draw new information/conclusions based on metadata

Table 1. Processes of the Media Life Cycle and forms of interaction

The different phases of the media life cycle are of relevance when we try to cover a representative set of tasks in which users deal with media assets. The phases are discussed in detail in the next section and mapped to particular patterns.

The table also shows that some interactions appear more than once under the same label, but within a different context, indicating different goals of interaction. In these cases different interface patterns are provided under the same label. For example, the term "search" can mean that a user may want to ask a question; they might seek a particular resource that they know in advance, or they may browse for unknown resources related to a certain topic. They may want an overview of search results and extract information by looking at the total number and quality of results, or they may just want to browse through the resources.
Knowing a user's purpose and intention will influence and shape the interfaces proposed to them.

Mapping the Life Cycle to Design Patterns

In the following section we explain the phases of the life cycle in a little more detail and provide a mapping of the phases to different design patterns.

Planning

The planning phase takes place before the actual media content is created or available. For example, an editor is researching for a show, browsing old documentaries, scheduling interviews, etc. Or a product manager is putting together information for a photo shoot (time and location, people and products involved), adding notes for the photographer, awarding the contract.
During this phase the user needs many types of interfaces. Browsing is an important technique to scope the topic and then drill down to particular media assets. A lot of metadata is created in the process, which can be linked to media objects afterwards or provide a context for future search and annotation.

Featured patterns:
– All search patterns
– Create Context
– Rating Systems
– "I know more" Button

Creation

During the creation of a media asset (e.g. a video shot), machines and software automatically add metadata (location, creation date, technical data). But human annotators may also be involved. They need annotation tools that support the domain of the editor or a certain context. A tool similar to a live chat includes time annotation and supports context-aware controlled vocabulary of a sports event or similar.

Featured patterns:
– Real Time Video Annotation

Acquisition

Often the acquisition of large amounts of media assets involves machine-supported metadata extraction or conversion. At the same time, editors are involved in the quality assurance of the process, having to oversee the mapping of new concepts to old ones, a process that involves the possibility to compare original and converted data, list all newly created concepts, look for duplicates, etc. Optionally, provenance information can be added at this stage.

Featured patterns:
– Conceptual Mapping
– Concept Adder
– Entity Index
– Completeness Feedback
Organisation

Organisation of assets and metadata takes place at several levels. On an overview level, all assets are given general categorisation tags. Issues may be grouped into series; not all types of media assets may be treated in the same way. On a more granular level, assets are given pre-defined classification tags. Archivists tend to be consistent in labelling the same types with the exact same terms. Finally, to support a "per-item level", a controlled vocabulary needs to be created, including the creation of thesauri and ontologies, putting metadata concepts in relation to each other, and indexing metadata.

At this stage three phases of the metadata life cycle are relevant: ontology creation, concept creation and ontology maturing.

Featured patterns:
– Concept Mapping
– Category Adder
– Entity Index

Production

During the production phase, archivists and document editors annotate media assets, often manually. They create and assign metadata, author it and assure its quality. But not only professionals and specialists produce metadata: we have seen masses of users labelling, tagging and annotating pictures, videos and documents. Patterns are provided to link entities to the Linked Open Data cloud.

During the production phase, information also has to be authorised. The authorisation level of metadata may differ from that of the media asset itself.

Featured patterns:
– All annotation patterns
– Quick View

Maintenance

Maintenance is closely related to production in that it pursues the same goal – creating additional metadata and editing it. It is an ongoing process.

Featured patterns:
– All annotation patterns
– Authoring List
– "I know more" Button
Storage

The storage of digital assets affects the features that are available for Linked Media Interfaces. Is it possible to instantly preview assets? Is it possible to access only parts of an asset? Which data formats are available?

With Linked Open Data, portions of the metadata are not stored within a system but link to public sources. This can be seen as a problem in production-critical situations (e.g. a newsroom) and may call for offline storage of relevant information.

Since the user of a Linked Media Interface cannot affect the storage process directly, there are no patterns related to this phase of the life cycle.

Sale/Placing at Disposal

This is the phase where users want to (re-)access media assets. It includes searching, browsing, previewing, choosing and ordering or buying items. Users may want to store their personal history or preferences, receive suggestions based on recommendations, have their own collection of favourite assets, etc. Linked Media Interfaces provide access and search mechanisms as well as information feedback channels for metadata.

Featured patterns:
– All search and display patterns
– Storing Searches and Results
– Rating Systems

Package/Distribution/Delivery

Some metadata is distributed along with the media asset; other metadata is primarily used for retrieving assets. In the cases where metadata contains useful information for the user, it has to be packaged and shaped into a format that can be viewed by the user. There might be a separate process involved that selects and designs metadata for consumption, for example when metadata includes a short description of an asset that is used for an electronic programming guide.

Featured pattern:
– Automated Content Extraction

Consumption

In many cases metadata is meant not only to support search and retrieval of media assets, but also to add additional information for a user. This information might pop up during video playback or be visible when a user asks for more information on an item. In that case, metadata is displayed within or next to the asset.

Featured patterns:
– Enhancements
– Trust Indicators

Usage

The last part of the life cycle deals with the use of metadata. It includes the utilisation of metadata for research and statistical processes. Users want to apply metadata to their contexts and draw new conclusions. Metadata is provided in an open way, and tools are provided for viewing information.

Featured pattern:
– Display of Results
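The phase-to-pattern mapping described in this chapter can be captured as a simple lookup structure. The sketch below is illustrative only – phase and pattern names are taken from the text, but the data structure itself is an assumption about how an application might expose only the patterns relevant to the user's current phase.

```python
# Sketch: map each life-cycle phase to the design patterns featured in this
# chapter. Names follow the report; the structure itself is illustrative.
PHASE_PATTERNS = {
    "Planning":     ["All search patterns", "Create Context",
                     "Rating Systems", '"I know more" Button'],
    "Creation":     ["Real Time Video Annotation"],
    "Acquisition":  ["Conceptual Mapping", "Concept Adder",
                     "Entity Index", "Completeness Feedback"],
    "Organisation": ["Concept Mapping", "Category Adder", "Entity Index"],
    "Production":   ["All annotation patterns", "Quick View"],
    "Maintenance":  ["All annotation patterns", "Authoring List",
                     '"I know more" Button'],
    "Storage":      [],  # users cannot affect storage directly: no patterns
    "Sale":         ["All search and display patterns",
                     "Storing Searches and Results", "Rating Systems"],
    "Distribution": ["Automated Content Extraction"],
    "Consumption":  ["Enhancements", "Trust Indicators"],
    "Usage":        ["Display of Results"],
}

def patterns_for(phase):
    """Return the interface patterns to expose in a given phase."""
    return PHASE_PATTERNS.get(phase, [])

print(patterns_for("Creation"))  # ['Real Time Video Annotation']
```

Such a table also makes the Storage observation explicit: an empty pattern list signals that the interface layer has nothing to offer for that phase.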
TYPES OF LINKED ENTITIES

Linked entities (locations, people, events, etc.) play a key role in the annotation and retrieval of content. Many Linked Media Interfaces (LMI) focus on a particular entity (for example when annotating locations). But unfortunately there is no general definition of what an entity is, or which entities are considered to be basic or standard. Neither is there a rule stating how many types of entities there are or should be. The actual number of linked entities and their definitions are always related to the applied knowledge model (ontology) and the type of assets underlying an application.

Schema.org

A recent approach to agree on a set of entities is schema.org [5]. The three search engine providers Google, Bing and Yahoo! share this method and collection of concepts to mark up websites in ways recognised by automated web crawlers. Their main types are:

– Creative work (including MediaObject as the object that encodes this creative work)
– Event
– Organisation
– Person
– Place
– Product

There are two other schemes that are already available in structured form in the Linked Open Data Cloud: Facebook's Opengraph and OpenCalais.

Facebook Opengraph

Facebook's Opengraph defines the following list of types [6]:

– Activities: activity, sport
– Businesses: bar, company, cafe, hotel, restaurant
– Groups: cause, sports_league, sports_team
– Organisations: band, government, non_profit, school, university
– People: actor, athlete, author, director, musician, politician, public_figure
– Places: city, country, landmark, state_province
– Products and Entertainment: album, book, drink, food, game, product, song, movie, tv_show
– Websites: blog, website, article

Please note that Opengraph has no notion of Time and Events. From a user's perspective, some businesses could also be referenced as places (bars, etc.).

OpenCalais

OpenCalais lists the following entities [7]:

– Entities: Anniversary, City, Company, Continent, Country, Currency, EmailAddress, EntertainmentAwardEvent, Facility, FaxNumber, Holiday, IndustryTerm, MarketIndex, MedicalCondition, MedicalTreatment, Movie, MusicAlbum, MusicGroup, NaturalFeature, OperatingSystem, Organisation, Person, PhoneNumber, PoliticalEvent, Position, Product, ProgrammingLanguage, ProvinceOrState, PublishedMedium, RadioProgram, RadioStation, Region, SportsEvent, SportsGame, SportsLeague, Technology, TVShow, TVStation, URL

– Events and Facts: Acquisition, Alliance, AnalystEarningsEstimate, AnalystRecommendation, Arrest, Bankruptcy, BonusSharesIssuance, BusinessRelation, Buybacks, CompanyAccountingChange, CompanyAffiliates, CompanyCustomer, CompanyEarningsAnnouncement, CompanyEarningsGuidance, CompanyEmployeesNumber, CompanyExpansion, CompanyForceMajeure, CompanyFounded, CompanyInvestment, CompanyLaborIssues, CompanyLayoffs, CompanyLegalIssues, CompanyListingChange, CompanyLocation, CompanyMeeting, CompanyNameChange, CompanyProduct, CompanyReorganisation, CompanyRestatement, CompanyTechnology, CompanyTicker, CompanyUsingProduct, ConferenceCall, ContactDetails, Conviction, CreditRating, DebtFinancing, DelayedFiling, DiplomaticRelations, Dividend, EmploymentChange, EmploymentRelation, EnvironmentalIssue, EquityFinancing, Extinction, FamilyRelation, FDAPhase, IndicesChanges, Indictment, IPO, JointVenture, ManMadeDisaster, Merger, MovieRelease, MusicAlbumRelease, NaturalDisaster, PatentFiling, PatentIssuance, PersonAttributes, PersonCareer, PersonCommunication, PersonEducation, PersonEmailAddress, PersonRelation, PersonTravel, PoliticalEndorsement, PoliticalRelationship, PollsResult, ProductIssues, ProductRecall, ProductRelease, Quotation, SecondaryIssuance, StockSplit, Trial, VotingResult

[5] http://www.schema.org [2011-09-20]
[6] http://developers.facebook.com/docs/opengraph/#types [2011-09-20]
[7] A list with examples can be found at http://www.opencalais.com/documentation/linked-data-entities [2011-09-02]
OpenCalais distinguishes between "Entities" and "Facts and Events" at the top level. This differs from our approach of simply calling everything an entity, including facts and events. On a second level OpenCalais introduces a large number of specific types.

LMI Entities

Derived from these three examples, we arrived at a pragmatic approach based on the capability scenarios in SNML-TNG:

– Digital Asset/Media Object: Document, Video, Picture
– Location/Place: Geo-Coordinates, Region, City, Country, ZIP-Code, Street Address, Landmarks, Geographic Entities
– Time/Temporal Information: Date, Time, Cyclic Events, Relative Dates, Range
– People/Person: Person, Actor, Athlete, Author, Director, Musician, Artist, Public Figure
– Event: Sport Event, Public Event, Party, Meeting, Business Conference
– Other Types/Themes: Organisation, Business, Activity, Product, Entertainment, Project, etc.

This chapter examines the nature and specific qualities of these concepts and assigns intuitive and meaningful interaction metaphors to each.

Digital Asset (Media Object)

A media object is a digital item for information exchange that is used to transport and store stories, messages and the like. It is a container of information; it tells a story or depicts an event. Its content is not explicit to machines and has to be described by metadata (entities, actions, etc.) to be understood. It is the entity to which all the metadata is attached by the Linked Media Interfaces.

The media object is the central entity type in media asset management systems. In the case of LMI this can be a video, an image or a document. Other forms of media objects, and temporal or spatial regions like single frames or audio layers (video fragments)8, are outside the scope of this report.9

8 "W3C Media Fragments Working Group", n.d., http://www.w3.org/2008/WebVideo/Fragments/ [2011-09-20]
9 We are aware that there are other forms of multimedia objects that can be stored and annotated. The complexity and additional interaction patterns of these media assets cannot be addressed within the scope of the Linked Media Interfaces Report.
Location

A location is described by geo-coordinates (longitude/latitude) or by a semantic denomination (the name of a bar). A location can also be a region (Salzkammergut), a city or a country, and as such it can also be annotated, for example, by its ZIP code. Other examples of location annotations are street addresses, landmarks and geographic entities (e.g. the Alps).

Time

Usually a time is a date (year, month and day), but it can also be a time of day (hours, minutes and seconds) or a (cyclic) event (spring break, Easter, Christmas). It can also be a relative date (in two weeks, the second year of King George IV) or a range (15th century). Time can also be inherent in a complex event like World War II.

People

The People entity describes human beings (living and dead, real and fictional). They can be figures of public interest, private persons, or historic figures. People have properties, birthdays and jobs, they attend meetings, and there are relationships between different people that can be shown, for example using Open Graph. They can be grouped and categorised (for example by DBpedia categories). But people can also be a product of fiction, like a character in a theatre play or a movie. Finally, people can have roles as well as synonyms, for example player number 10 in a soccer game.

Event

Events are framed by four aspects: what, where, when and who. They comprise a name or description, a place where they are located, a time period in which they take place, and a number of participants that are involved in the event.

Themes/Other Objects

Theme is the collective term for the remaining concepts that can be linked to media resources. This is basically any object that has a well-defined class structure ("feature set") which can be disclosed to the user (for example to narrow down search results).
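As a sketch, the what/where/when/who structure of an Event entity described above could be modelled as follows. The class and field names are our own illustration (not part of any LMI specification); the example values are stand-ins for DBpedia URIs.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class Entity:
    """A linked entity: a human-readable label plus a Linked Data URI."""
    label: str
    uri: str

@dataclass
class Event(Entity):
    """An event framed by what (label), where, when and who."""
    place: Optional[Entity] = None                             # where
    start: Optional[date] = None                               # when (begin)
    end: Optional[date] = None                                 # when (end)
    participants: List[Entity] = field(default_factory=list)   # who

festival = Event(
    label="Salzburg Festival 2011",
    uri="http://dbpedia.org/resource/Salzburg_Festival",
    place=Entity("Salzburg", "http://dbpedia.org/resource/Salzburg"),
    start=date(2011, 7, 27),
    end=date(2011, 8, 30),
    participants=[Entity("Anna Netrebko", "http://dbpedia.org/resource/Anna_Netrebko")],
)
```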
GENERAL ASPECTS OF LINKED MEDIA

Typed Links

The basic building block of Linked Media is the typed link. It attaches information to media assets, including information about the quality of the link, and thereby associates descriptions of content (people, events, etc.) or other arbitrary information with a media asset. Unlike a hypertext reference, which merely holds the address of a target location (the type being inherent), a typed link also carries information about the type of connection made between the two objects. For example, it makes a difference whether a scene was "filmed at" a location (e.g. Salzburg) or whether the same location is the "topic" of a story.

Typed links can also be seen as properties of an asset, where the target of a link corresponds to the attribute value and the link type to the property.

This additional information can be used for search, but it also has to be accounted for during annotation. Linked Media Interfaces usually handle descriptions of the content, which means a type like "IsContent" may be predefined, but link types can also cover authors, origins, technical information, etc.

This leads to the topic of metadata standards that define and specify the properties of media assets. Unfortunately there are not many tools that allow users to work with typed links, and LMI does not fully support these standards at the moment. For further reading on metadata standards, please refer to Smith (2006)10 for a general overview and to the W3C Media Annotations Working Group for the "Ontology for Media Resource 1.0"11.

Personalisation of Information and DRM

"You affect the world by what you browse" (Tim Berners-Lee)

Not only do media resources disclose information about content and properties; so do the people who access these resources. Every user has their own context (time, country, interests) and (browsing) history. This personal digital footprint can be tracked and used to filter and influence the search results.
People also want to manage how results are displayed, store favourites, or have their current location and time taken into account in the search.

Based on their role, users own rights to view certain data. In other situations the system will recommend views. In some cases it is necessary to inform users about internal processes and to make personalisation decisions and recommendations transparent (this movie is recommended to you because you watched it before, because you liked it, because your friends liked it, etc.).

10 Smith and Schirling, "Metadata Standards Roundup."
11 Werner Bailer et al., "Ontology for Media Resource 1.0", March 8, 2011, http://www.w3.org/TR/mediaont-10/ [2011-09-20]

Quality of Linked Data and Trusted Sources

Jan Hannemann and Jürgen Kett (2010)12 point out the problems of trustworthiness in an article about the German National Library: "The main problem for the linked data web is dealing with reliability: Is the data correct and do processes exist that guarantee data quality? Who is responsible for it?"

The Linked Open Data Cloud is a mix of community-driven efforts and contributions by cultural heritage institutions such as national archives and media organisations. The increase in available information also leads to a loss of control over it. Information may not always prove reliable; in the worst case it may be incorrect, incomplete or unavailable, especially when dealing with community-driven repositories like DBpedia. Indeed, original authorship may be ambiguous or simply untraceable.

Halb and Hausenblas (2008)13 name two indicators of quality: provenance and trust. Knowing the provenance of a source can serve as a quality seal, as can knowing the person or organisation that provides the information. With linked media, the media asset and the metadata information can even stem from different sources and be of different quality.

Finally, with typed links, not only the reference that is linked to but also the quality of the link itself has to be considered when assessing overall quality. An image taken by a certain photographer is incorrectly annotated if the photographer is associated as the content rather than as the creator. And the information remains incorrect even if the URI of the photographer originated from a trustworthy repository.

For an enterprise media asset management system, the trustworthiness of assets can fall into one of three categories.
Content and information generated internally by a skilled expert within the organisation can be regarded as of the highest quality. Respected cultural heritage repositories, such as the German National Library, that carry some certificate of reliability but are still external sources may fall into a second category, and information derived from user-generated content might be classified as a third category of trustworthiness.

With the Linked Media Interfaces, users can set the level of trust and filter content based on these settings. The interfaces can also indicate the trust level where necessary, by colour or by some kind of flag, and, finally, allow users to obtain information about the provenance of an asset or its metadata.

12 Jan Hannemann and Jürgen Kett, "Linked Data for Libraries", 2010.
13 Wolfgang Halb and Michael Hausenblas, "select * where { :I :trust :you } How to Trust Interlinked Multimedia Data," Proceedings of the International Workshop on Interacting with Multimedia Content (2008).
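The three-category model can be sketched as a simple filter. The category names and the asset records are our own illustration; a real system would attach the trust category during ingest or annotation.

```python
# Trust categories for an enterprise media asset management system,
# ordered from most to least trustworthy (1 = best).
INTERNAL_EXPERT = 1      # produced by a skilled expert inside the organisation
CERTIFIED_EXTERNAL = 2   # respected external repositories (e.g. national libraries)
USER_GENERATED = 3       # community- or user-generated content

def filter_by_trust(assets, max_category):
    """Keep only assets whose trust category is at least as good as max_category."""
    return [a for a in assets if a["trust"] <= max_category]

assets = [
    {"title": "Archive master", "trust": INTERNAL_EXPERT},
    {"title": "DNB record", "trust": CERTIFIED_EXTERNAL},
    {"title": "DBpedia abstract", "trust": USER_GENERATED},
]
trusted = filter_by_trust(assets, CERTIFIED_EXTERNAL)
```

A user interface would expose `max_category` as the trust-level setting described above and could colour or flag each result according to its category.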
LINKED MEDIA INTERFACES – DESIGN PATTERNS

Design patterns describe solutions and interaction components for common problems in defined contexts. They are building blocks that function as small, distinct mini-tools which, in combination with other blocks, form Linked Media Interfaces. For each pattern there is a short introduction of the problem and the context, followed by either existing or generic solutions. Where applicable, a description of best practices is provided, and future directions and ideas are added.

The design patterns are grouped into patterns for search and patterns for annotation. This grouping corresponds to the two main aspects of information interaction: entering metadata and content retrieval.
PATTERNS FOR SEARCH (INCLUDING VISUALISATION)

In this chapter we introduce graphical user interface design patterns that deal with the search and visualisation of media assets. For each pattern we provide a brief description with examples.

Formulating the Query

Text Entry Field

Text entry is elementary to a lot of query interfaces. The characteristics of different implementations result not so much from the interaction process as from the features included in the search. We want to point out three essential technologies:

Auto-Completion

Auto-completion in Linked Media Interfaces has to meet an additional requirement: it should not only literally complete a term, but also arrive at the correct entity and provide the corresponding URI. In a semantic context it is not enough to complete P-A-R to "Paris", because there are many meanings/concepts behind the term "Paris". The site rdf.freebase.com is an example of a service that delivers RDF identifiers for Linked Open Data concepts.

Depending on the implemented auto-completion settings, there are differences in the list of suggestions provided, and in whether and how the suggestions are sorted and grouped. Sorting principles can be derived from the LATCH principle of Richard S. Wurman (2001)14.

Common sorting principles are:

– alphabetic
– most relevant (e.g. most viewed, highest rated, nearest)
– based on a category
– most recent
– order of appearance (in a text)

If the text field is bound to a certain type or theme, additional sorting and grouping principles may of course apply. Even entities of the same type can be grouped differently: locations and places, for example, can be grouped by their type (city, mountain, point of interest) or by country.

14 Richard S. Wurman, Information Anxiety 2, 1st ed. (Que, 2001).
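URI-aware auto-completion as described above can be sketched as follows. The tiny in-memory index stands in for a lookup service such as rdf.freebase.com; labels, categories and URIs are illustrative.

```python
# Minimal sketch: prefix completion that returns entities (label + URI)
# grouped by category, rather than bare strings.
ENTITIES = [
    ("Paris", "city", "http://dbpedia.org/resource/Paris"),
    ("Paris, Texas", "city", "http://dbpedia.org/resource/Paris,_Texas"),
    ("Paris Hilton", "person", "http://dbpedia.org/resource/Paris_Hilton"),
    ("Salzburg", "city", "http://dbpedia.org/resource/Salzburg"),
]

def autocomplete(prefix):
    """Return {category: [(label, uri), ...]} for labels starting with prefix."""
    groups = {}
    for label, category, uri in ENTITIES:
        if label.lower().startswith(prefix.lower()):
            groups.setdefault(category, []).append((label, uri))
    for suggestions in groups.values():
        suggestions.sort()  # alphabetic sorting within each group
    return groups

suggestions = autocomplete("par")
```

Because each suggestion carries its URI, choosing one resolves the ambiguity between the different "Paris" concepts immediately, instead of deferring it to the result page.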
Fig. 2. Grouping auto-completion suggestions based on category or country. Source: Amin et al. (2009), p. 523 (cropped).

Word Stemming

Word stemming is a technique that allows searching for concepts built from the same stem or root of a word. For example, a search for "fisher" could also deliver results for "fishing" or "fished", or even "fish". Google search adopted word stemming in 2003.

This technique, like error-tolerant search, is related to Linked Data only insofar as it relies on linked data sources such as WordNet15. But the technique is fundamental to good search results.

15 http://wordnet.princeton.edu [2011-09-20]

Error-Tolerant Search

Similar to word stemming, error-tolerant search is only partially related to Linked Open Data. It provides a mechanism that recognises typing errors and treats them as such, hence delivering results related only to existing concepts of a thesaurus or ontology.

Semantic Search

Semantic search is based on the ability of a system to "understand" the meaning of data items, allowing a search that does not merely match these textual data items literally, but also covers related meanings and concepts that are not explicitly mentioned in the metadata of an asset. It can also involve techniques of inferencing and reasoning.

The food website Yummly16 offers a combination of semantic search patterns. It includes typed fields (with, without), weighted search, and several categories. The system "understands" that certain ingredients taste salty or sweet and hence provides a Taste slider to set the flavour of a dish.

Fig. 3. Semantic search based on categories and inferred knowledge (taste, nutrition, etc.). Source: http://www.yummly.com [2011-09-20]

Specify which...

This is a special form of auto-completion, a kind of query precision pattern that asks the user for the precise search term. It can be used, for example, if several entities of the same type (e.g. city) are available.

16 http://www.yummly.com [2011-09-20]
Fig. 4. Specify which "Paris". Example again from Amin et al. (2009), p. 523 (cropped).

Facet-Based Querying

One way to formulate queries is to use a special form of facets. Every facet provides an entry field, and in combination these formulate a complex query. For example17:

Fig. 5. DBpedia search for 19th-century Austrian scientists. Source: http://dbpedia.neofonie.de/browse/rdf-type:Scientist/birthDate-year~:1800~1900/nationality:Austria/birthPlace:Vienna/ [2011-09-20]

17 Find more use cases at http://wiki.dbpedia.org/UseCases.
Fine-Tuning the Query

Did you mean...

A "Did you mean..." field with one or a few disambiguation terms is used if there is more than one prominent entity. For example, the Yahoo! image search lists pictures for Vienna (Austria) but also allows switching to Vienna, VA.

Fig. 6. Disambiguation suggestion in Yahoo! image search. Source: http://images.yahoo.com [2011-09-20]

Narrow Properties Sidebar

When search results are delivered, it is possible to narrow them down by applying certain filters. Microsoft Bing, for example, allows narrowing down by language and region.

Fig. 7. Sidebar with filters. Source: http://www.bing.com/search?q=salzburg&go=&form=QBLH&filt=all [2011-09-20]

Similarly, a widget could allow the restriction of facet values by checking multiple facets.
Fig. 8. Category filter from "Tagit". Source: http://tagit.salzburgresearch.at [2011-09-20]

Faceted Result Filtering

This tool allows users to filter results based on facets. An example is Yahoo's image search18: the facets in the left sidebar are created automatically and can be used to narrow down search results based on facets (categories).

Fig. 9. The results of a search for "Salzburg" at Yahoo. Source: http://images.search.yahoo.com [2011-09-20]

Suggested Search Terms ("See also...")

This pattern adds suggestions for additional search terms to the list of results. These are semantically related vocabularies, for example higher category terms ("Classical music festivals" when you search for Salzburg Festival). The example below is from duckduckgo.com.

18 http://images.search.yahoo.com [2011-09-20]
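Automatically derived facets, as in the sidebar above, can be sketched with two small functions: one counts the values of a facet over the result list (to render the sidebar), the other narrows the list to a chosen value. The result records are illustrative.

```python
# Sketch: deriving facets from a result list and filtering by a facet value.
results = [
    {"title": "Fortress at dawn", "type": "image", "region": "Salzburg"},
    {"title": "Festival opening", "type": "video", "region": "Salzburg"},
    {"title": "City guide", "type": "document", "region": "Vienna"},
]

def facet_counts(results, facet):
    """Count how often each value of a facet occurs (for the sidebar)."""
    counts = {}
    for r in results:
        counts[r[facet]] = counts.get(r[facet], 0) + 1
    return counts

def narrow(results, facet, value):
    """Keep only results whose facet matches the chosen value."""
    return [r for r in results if r[facet] == value]

sidebar = facet_counts(results, "type")
narrowed = narrow(results, "region", "Salzburg")
```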
Fig. 10. Below the quick explanation there are two suggested search terms ("Salzburg" and "Classical music festivals"). Source: http://duckduckgo.com/?q=salzburg+festival [2011-09-20]

Another pattern suggests similar search terms for a new, related search: in the Iconclass Browser the search for "church" returns "see also: cathedral, chapel, sanctuary, temple".

Fig. 11. Iconclass suggests search terms similar to the original one ("See also:..."). Source: http://www.iconclass.org/rkd/1/?q=church&q_s=1 [2011-09-20]

Specifying the Content Type

Specifying the content type is a special case of filtering by category. It allows the user to switch the search between (virtual) repositories of different types (videos, images or documents). It is also possible to use this pattern to search for projects, people or any other themes. The M@RS system by Mediamid19 adds the types "Vehicle model" (Pkw) and "Race Car" to the usual candidates (such as images, documents, videos, events, people).

Usually this feature is provided as menu items in a horizontal menu above the search results. In M@RS this is done dynamically.

19 http://www.mediamid.com/hp/mars_6.html [2011-09-20]
Fig. 12. The mediamid M@RS user interface (v6.4.2) shows a dynamically generated type selection menu. http://www.mediamid.com/hp/mars_6.html [2011-09-20]

Search Modifiers

These are modules that modify search entries involving common entities such as location and time, or personal preferences and the current context. They are similar to facet-based search but can be set at a more global level or work in the background. They can also be applied as filters or influence the sorting of results.

Time

Time can be included in queries in different ways. It can denote the creation of an asset or the last time it was accessed, or it can be associated with a concept (e.g. an event). Selecting the time for a search query is similar to the process of annotating time (see Calendar Picker). A time modifier can also account for the current time and date.
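A background time modifier has to turn expressions like those listed in the Time entity section (relative dates, ranges) into concrete intervals. A minimal sketch, with a hand-picked set of expressions purely for illustration:

```python
# Sketch: resolving a few time expressions against a reference date so a
# time modifier can run in the background of a query.
from datetime import date, timedelta

def resolve_time(expression, today):
    """Map a time expression to a (start, end) date range."""
    if expression == "last week":
        return (today - timedelta(days=7), today)
    if expression == "in two weeks":
        return (today, today + timedelta(days=14))
    if expression == "15th century":
        return (date(1401, 1, 1), date(1500, 12, 31))
    raise ValueError(f"unknown expression: {expression}")

start, end = resolve_time("last week", date(2011, 9, 20))
```

Passing the current date as `today` is what lets the modifier "account for the current time and date" without hard-coding it into the query.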
Location

Locations offer several properties for query modification (e.g. position, size, type of location); the most prominent one, the geographic location, is given as longitude and latitude. Queries that include a geographic position may also specify a radius within which results must be located.

The tools used to specify locations are similar to the tools used for annotation (e.g. Location Picker). But it is also possible to include the current location in the query context.

Fig. 13. Setting a spatial query modifier via zip code and range. Source: http://www.mobile.de [2011-09-20]

People, Events and Themes

To make sure a query widget understands what a user wants, it has to interpret essentially all entities (people, events, themes) as concepts. Sometimes the concept can be inferred from plain text entry or from the context of the query, but sometimes this involves the same technologies that are utilised in the annotation process (see People, Event and Theme Annotation, p. 59ff).

As with time and location, it is also possible to modify queries by including particular entities that have to occur in the search results. The next example, from the Red Bull Content Pool, restricts search results to a specific theme.
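The radius restriction mentioned under Location can be sketched with the haversine great-circle distance; the coordinates below are approximate and the records illustrative.

```python
# Sketch: restricting results to a radius around a geographic position,
# using the haversine formula for great-circle distance.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

def within_radius(results, lat, lon, km):
    """Keep results whose position lies within km of (lat, lon)."""
    return [r for r in results
            if haversine_km(lat, lon, r["lat"], r["lon"]) <= km]

bars = [
    {"name": "Bar in Salzburg", "lat": 47.8095, "lon": 13.0550},
    {"name": "Bar in Vienna", "lat": 48.2082, "lon": 16.3738},
]
nearby = within_radius(bars, 47.8095, 13.0550, 50)
```

To include the current location in the query context, the centre point would simply be taken from the device or user profile instead of from an entry field.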
Fig. 14. Picking a theme (sport) to modify search results. Source: http://www.redbullcontentpool.com [2011-09-20]

Sorting and Grouping Results

Sorting

Search results can be sorted by various properties. YouTube, for example, provides a drop-down menu to sort results by relevance, upload date, view count or rating.

Fig. 15. Sorting choices on YouTube. Source: http://youtube.com [2011-09-20]

Weighted Sorting

Instead of sorting and filtering results based on only one property, sorting can be based on two or more properties in one weighting setting. A search for websites similar to a particular one, but at the same time popular, is implemented by "Moreofit"20.

20 http://www.moreofit.com [2011-09-20]
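Weighted sorting over two properties can be sketched as a linear combination of normalised scores, as in Moreofit's popularity/similarity slider (the scores below are invented for illustration):

```python
# Sketch: sorting by a weighted combination of two normalised scores.
sites = [
    {"url": "a.example", "popularity": 0.9, "similarity": 0.2},
    {"url": "b.example", "popularity": 0.4, "similarity": 0.8},
    {"url": "c.example", "popularity": 0.6, "similarity": 0.6},
]

def weighted_sort(results, weight):
    """weight = 1.0 -> pure popularity, 0.0 -> pure similarity."""
    def score(r):
        return weight * r["popularity"] + (1 - weight) * r["similarity"]
    return sorted(results, key=score, reverse=True)

by_popularity = weighted_sort(sites, 1.0)
by_similarity = weighted_sort(sites, 0.0)
```

Moving the slider corresponds to re-sorting the same result list with a different `weight`, so no new query has to be issued.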
Fig. 16. Moreofit lists websites. Sorting can be weighted continuously either by popularity of the sites or by similarity to the given one. Source: http://www.moreofit.com [2011-09-20]

Grouping Widget

Grouping of search results works similarly to the grouping methods shown in the auto-complete pattern, but can be applied to any predefined category (e.g. seasons). It allows the user to spot a resource in a list of search results more quickly.

Fig. 17. Grouping results (mockup) based on seasonal differences (summer vs. winter).

Display of Entities

Ideally, entities include a self-explanation method that can be queried via an API. Based on the description, tools can offer widgets for their visualisation. For Microformats21, for example, there is a "Cheat Sheet" that lists the most common entities and their properties22. For Linked Open Data entities there is no comparable approach.

21 http://microformats.org [2011-09-20]
22 http://microformats.org/wiki/cheat-sheet [2011-09-20]
In principle, every entity type can have generic visualisation forms. People can always be displayed using an avatar image, the name, perhaps their profession, or their current status/location. Having predefined properties allows designers to create "styles" for the display of the same entity type. These display forms can be implemented in views, in overviews, or displayed on mouse-over, etc.

Quick View

Quick view displays the most basic information about an entity in a frame or pop-up window. It contains links for navigating to more detailed information. The example from duckduckgo provides information about Sebastian Vettel.

Fig. 18. Displaying information about Sebastian Vettel. Source: https://duckduckgo.com/?q=sebastian+vettel [2011-09-20]

Extended View (Mash-Up)

The extended entity mash-up tries to display all the available information about an entity in a structured way.

Fig. 19. A "sig.ma" is a mash-up of all available structured information. Source: http://sig.ma/search?q=sebastian+vettel [2011-09-20]
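The idea of per-type display "styles" can be sketched as a registry that maps an entity type to a renderer knowing that type's predefined properties. Renderer names and the text layout are our own illustration.

```python
# Sketch: type-specific display styles -- each entity type gets a renderer
# that knows which predefined properties to show in a quick view.
def render_person(e):
    return f"{e['name']} ({e['profession']})"

def render_event(e):
    return f"{e['title']} – {e['date']} @ {e['place']}"

STYLES = {"person": render_person, "event": render_event}

def quick_view(entity):
    """Pick the style registered for the entity's type."""
    return STYLES[entity["type"]](entity)

card = quick_view({"type": "person", "name": "Sebastian Vettel",
                   "profession": "Racing driver"})
```

Because the registry is keyed by type, the same widget behaves consistently wherever a person or event appears in the interface, which is exactly the point of predefined styles.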
Specifying Display Details

The configuration settings determining which details to display can be manually adjusted by a type manager, as seen in the mediamid M@RS user interface.

Fig. 20. The mediamid M@RS user interface (v6.4.2) provides a display configuration tool. Source: http://www.mediamid.com/hp/mars_6.html [2011-09-20]

People Entity Visualisation

This tool integrates information about people into an interface or application. People are displayed with their key features and information: for example, profession, origin, personal information and current location can be shown. An interactive widget will behave in a familiar way regardless of the interface context and enable access to relevant information.
Event and Theme Entity Visualisation

These widgets display the properties of common entities in standardised ways. For example, an event visualisation will always include title, date, location and participants. With themes there are domain-specific ways to visualise these entities; for example, a "Project" theme will look similar to the event visualisation (title, schedule, people, current tasks, etc.).

Fig. 21. Standard visualisation of a Project theme displaying information about a research project. It includes predefined properties (Title, Acronym, Duration, etc.)

A real-world example comes from the "Microformats for Google Chrome" plugin, which detects microformat entities and displays them in a popup window.

Fig. 22. Visualising an organisation (Collective Idea), including contact information and location. Source: http://michromeformats.harmonyapp.com [2011-09-20]
Display of Results

Timeline Widget

A timeline widget places information on a 2D map, illustrating events chronologically.

Fig. 23. The Simile Timeline widget is an interactive display. In this example it displays the events related to the assassination of John F. Kennedy. Source: http://www.simile-widgets.org/timeline [2011-09-20]

Timeplot Widget

Similar to the timeline widget, a timeplot visualisation places dates and time ranges on a timeline. Timeplot is more suited to the presentation of numerical data.

Fig. 24. A Simile Timeplot illustrating the number of new permanent residents in the U.S. per year. Source: http://www.simile-widgets.org/timeplot [2011-09-20]
Location Widget

A location widget takes the geographic information of a location and places it on a map. Google Maps is the most prominent example of a location widget.

Fig. 25. Displaying bars in Salzburg on Google Maps. Source: http://maps.google.com [2011-09-20]

Display of Large Amounts of Results

Tools like mosaics and video walls display a large number of images.

Fig. 26. Cooliris is a Firefox plugin that displays, for example, Google image results on an interactive 3D wall that can be scrolled. Source: http://www.cooliris.com [2011-09-20]
Fig. 27. Medienfluss is an installation that arranges videos in a constantly moving data stream. Source: http://medienfluss.netzspannung.org [2011-09-20]

Advanced Search

The "Advanced Search" section lists search patterns that involve search concepts based on knowledge models, including reasoning and inferencing, as well as results that are derived by combining information from heterogeneous data sources.

Feature Search

This describes a search query based on properties ("features") of entities. For example, a book can be written by someone or about someone. It is not enough to simply link a certain person to the book; instead, the person has to be linked to a feature of the book ("written by").

If a user searches for books written by Bill Clinton, the results should bring up the memoir by Bill (as opposed to results that were written about him).
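The difference between the two link types can be sketched over a toy set of typed links, modelled as (subject, link type, target) triples. The predicate names are illustrative, not taken from any particular ontology.

```python
# Sketch: typed links as (subject, link_type, target) triples. The same
# person appears as "written_by" on one book and "about" on another.
triples = [
    ("My Life", "written_by", "Bill Clinton"),
    ("First in His Class", "about", "Bill Clinton"),
    ("My Life", "type", "Book"),
    ("First in His Class", "type", "Book"),
]

def feature_search(triples, link_type, target):
    """Find subjects connected to the target via a specific link type."""
    return [s for s, p, o in triples if p == link_type and o == target]

written = feature_search(triples, "written_by", "Bill Clinton")
about = feature_search(triples, "about", "Bill Clinton")
```

A plain (untyped) link search would return both books for "Bill Clinton"; constraining the link type is what separates the memoir from the biography.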
Fig. 28. Distinguishing search for documents about and written by Bill Clinton.

A working example of this pattern can be found at dbpedia.neofonie.de. Depending on the entity type ("item type"), the sidebar discloses features of the type in facets that can be used for additional query refinement. The item type "plant" produces a different set of properties than the type "organisation". With this approach it is possible to search for organisations based on their date of incorporation (formation date).
Fig. 29. Features generated by the entity type: "Plant" on the left side vs. "Organisation" on the right. Source: http://dbpedia.neofonie.de [2011-09-20]
Query Builder

Similar to the feature search, but more detailed, are tools that help create complex queries. Tools like the DBpedia Query Builder make use of the RDF "predicate" option, allowing queries for different link types. For example, the search term "winner" allows combining an event and an athlete and provides a logical switch between IS_winner and ISNOT_winner.

The Wikipedia example (DBpedia Query Builder) shows the combination of different aspects of cities.

Fig. 30. DBpedia Query Builder offering a complex search interface. Source: http://querybuilder.dbpedia.org [2011-09-20]

Another example is the KIWI Query Builder (visKWQL, see Hartl et al., 2010).

Fig. 31. A visKWQL query. Source: Hartl et al. (2010), p. 1255
Associative Search

Metadata can describe the content of an asset (e.g. mother with two children in a refugee camp) or its statement (the plight of refugees). Statements are more difficult to annotate and search, and may call for a special associative search field. This provides the possibility to enter high-level search terms or even emotions (for music); the result is a list of related and associated matches. It incorporates slightly different concepts of closeness than the most common "narrower", "broader" and "related", and is therefore not included in standard search fields.

This search interface allows the entry of keywords and finds directly and indirectly related results. It could be useful for research activities.

Fig. 32. Using a tag cloud as a search input.

Trust Indicators

The goal of trust indicators is to tell users how reliable the given information is. This can be achieved either by displaying source information (as a literal or as an icon) or by indicating the level of trustworthiness.

Provenance

Provenance can be indicated by adding icons of origin to information or by adding a reference link. Sig.ma, for example, adds provenance information as footnotes and lists all sources in a separate table.
Fig. 33. Sig.ma indicates provenance information as footnotes and in a separate sources tab. Source: http://sig.ma/search?q=sebastian+vettel [2011-09-20]

Source Rating

If the source of the information is not known, or if it need not be shown in detail, the trustworthiness can still be indicated by flagging the content or by applying a rating system.

User-driven trust ratings appear, for example, on shopping sites like amazon.com, where users write product evaluations and rate them, which in turn can indicate the reliability level: "This number of users found the information useful".

Within an enterprise context, a user probably just wants to know whether information is derived from internal sources or from outside the company.

Fig. 34. Mockup of trust level indication with a flag.
Transparent Recommendation

Similar to the indication of the trust level, it may be of interest to the user to know why assets are recommended. YouTube started adding the line "because you watched" to its recommended video clips. At the same time, recommendations could be chosen for other reasons (because you watched, because you liked, because your friends liked, because you subscribed to the channel, etc.).

Fig. 35. YouTube indicates the reason for recommendations ("Because you watched"). Source: www.youtube.com [2011-09-20]

Content Summary

These tools summarise either a single entity or a group of entities.

Video Summary

Video Overview

Yovisto23 features a nice interface that provides keyframes related to shot lengths and, at the same time, a frame-synchronised tag overview. This example shows a video result for the term "Paris". The upper footer bar shows tags; those containing "Paris" are marked red. The lower footer bar shows the keyframes (stripes indicate the length of shots). Tags and keyframes can be viewed by pointing the mouse cursor at the particular spots.

23 http://www.yovisto.com [2011-09-20]
Fig. 36. Tag and keyframe overview from yovisto.com. Source: http://yovisto.com [2011-09-20]

Stripe Image

"A stripe image is an even more compressed representation of the original video than the keyframes are. It is created by adjoining the middle vertical column of pixels of every video frame." (Rehatschek & Kienast, 2001)

Fig. 37. Stripe image. The x-axis represents the time dimension. Source: Rehatschek & Kienast, 2001

Keyframe Panel

Keyframes are still images of a video that represent a cut or scene. "The keyframe panel displays the storyboard of the video. Keyframes are extracted by the basic video analyzer plug-in. These keyframes give a compressed overview about the content of the video." (Rehatschek & Kienast, 2001)
An example of a keyframe panel is the VideoSurf Firefox add-on, which enhances the search results of common video search engines by adding a bar of keyframes.

Fig. 38. The Firefox plugin of VideoSurf adds a keyframe panel to the search results for Sebastian Vettel on YouTube. Source: https://addons.mozilla.org/en-US/firefox/addon/videosurf-videos-at-a-glance/ [2011-06-27]

Entity Index

An entity index lists all entities of a certain type that appear in a webpage or media asset, or that are related to a group of assets. The index can be sorted alphabetically; it helps the user get a better overview and find entities, and offers the possibility to follow links to a more detailed description.

Fig. 39. An index listing the event entities of a website. Source: http://michromeformats.harmonyapp.com [2011-09-20]

Information Overview on Large Data Sets

These are tools that allow users to draw information from the totality of available results. Users may find patterns in the results through the process of data visualisation and data interaction.

There are standard sets of visualisation methods for data visualisation. Flare24, for example, is an ActionScript library created by the UC Berkeley Visualization Lab. The algorithms are generic and can be used to display different types of results. Display methods include tree, force, indent, radial, circle, dendrogram, bubbles, circle pack, icicle, sunburst, treemap, timeline, scatter, bars and pie.

24 http://flare.prefuse.org [2011-09-20]
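The entity index described above can be sketched as a de-duplicating, alphabetically sorted grouping over the annotations of a group of assets (the annotation records are invented for illustration):

```python
# Sketch: building an alphabetically sorted entity index of one type
# from typed annotations attached to a group of assets.
annotations = [
    {"label": "Salzburg Festival", "type": "event"},
    {"label": "Hellbrunn Concert", "type": "event"},
    {"label": "Salzburg", "type": "place"},
    {"label": "Salzburg Festival", "type": "event"},  # duplicate mention
]

def entity_index(annotations, entity_type):
    """Unique entities of one type, sorted alphabetically."""
    labels = {a["label"] for a in annotations if a["type"] == entity_type}
    return sorted(labels)

events = entity_index(annotations, "event")
```

In a full implementation each index entry would additionally carry the entity's URI, so the user can follow the link to a more detailed description.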
Fig. 40. Displaying the appearance of Formula 1 drivers in video results in a treemap

Storing Searches and Results

Sometimes it is useful to store complex searches. Working through a long list of results can take several days, including the necessity to re-access the same list at a later stage of research. It may also be useful to store the list of results in order to share it with collaborators. This pattern is found, for example, in selection baskets and shopping carts.

Fig. 41. A “Collection Basket” is used to store and share search results. Source: Screenshot from mARCo, the research and production tool of the Austrian Broadcasting Organisation/ORF [2011-09-20]
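A stored-search basket of this kind needs little more than the query, the result list and a timestamp. The following is a minimal sketch (class and field names are illustrative, not taken from mARCo):

```python
import json
import time

class CollectionBasket:
    """Minimal sketch of a stored-search basket: it keeps the query and
    the result list so a research session can be resumed or shared later."""

    def __init__(self):
        self.saved = {}

    def store(self, name, query, results):
        """Store a search under a user-chosen name."""
        self.saved[name] = {
            "query": query,
            "results": list(results),
            "stored_at": time.time(),
        }

    def export(self, name):
        """Serialise a stored search, e.g. to share it with collaborators."""
        return json.dumps(self.saved[name])
```

Persisting the serialised form server-side would additionally allow the basket to survive across sessions and users.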
Reports

Summary Report

Reports are a way to provide a structured overview of a list of items. They allow the user to display results in a particular representation that the user can choose. A typical report would be a collection of all media assets that were requested by certain users or divisions, or of assets that were sold to a certain region. Queries could also be used to compile a generic annual overview of all internal documents, deliverables and publications that are related to a certain theme. The results can be listed in chronological order and grouped by document type.

Fig. 42. Reporting the publications of a particular year related to a particular theme.

Automated Content Extraction

For services such as Electronic Program Guides (EPG), information is automatically extracted in an exportable format. This information could be pulled from the Linked Open Data cloud as well.
Fig. 43. Electronic Program Guide mockup that pulls short info and a 3-star rating from a (hypothetical) online repository.

Enhancement

Entity Enhancement

This tool pulls data from the Linked Open Data Cloud to add additional information to existing entities. It is possible to enhance text with icons or images, but the pattern also works, for example, with personal details of politicians, athletes, actors, etc. that are added as text boxes or similar.
Fig. 44. Example of enhancing information about Jonathan Stephens, the Permanent Secretary of the Department for Culture, Media and Sport. Data provided by data.gov.uk.

Media Enhancement

Metadata can be used to generate additional information during the display of a media resource. Names of people can be inserted, hyperlinks can be created in regions of a video, etc. The triggers for enhancements can also be included in the media content itself. Mozilla Popcorn is an example of pulling Wikipedia information in real time during video playback.
Fig. 45. Mozilla Popcorn displays additional information in separate windows during video playback. Source: http://webmademovies.etherworks.ca/popcorndemo [2011-09-20]

Another example is Soundcloud. It shows user comments during playback.

Fig. 46. Showing user comments in Soundcloud during music playback. Source: http://soundcloud.com [2011-09-20]
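Popcorn- and Soundcloud-style media enhancement boils down to a list of time-stamped cues that the player consults at every playback tick. A minimal sketch, with invented payloads:

```python
def active_enhancements(cues, position):
    """Return the enhancement payloads that should be visible at the
    current playback position. Each cue is a (start, end, payload)
    triple in seconds, mirroring the timed triggers described above."""
    return [payload for start, end, payload in cues
            if start <= position < end]
```

A player would call this on every time update and render the returned payloads (a Wikipedia box, a user comment, a hyperlink region) next to the media.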
PATTERNS FOR ANNOTATION

Editor tools for metadata annotation allow a specific description of media content. With Linked Media Interfaces, descriptions point to entities of the Linked Data Cloud through typed links. Hausenblas et al. (2008) call the process of linking data items to entities of the Linked Open Data Cloud “interlinking”. It describes the act of semantically enriching content based on a uniquely identifiable description method (e.g. RDF). This assumes that resources can be retrieved in an easy way from the Linked Open Data Cloud. The interlinking of particular words and phrases to entities of the Linked Data Cloud happens mostly in the background.

General Annotation Based on Text Entry

The patterns introduced in this section show graphical examples in which text is entered via keyboard. The techniques applied in the background (such as semantic lifting, auto-correction or the possibility to use abbreviations and shortcuts) are part of the graphical user interfaces, but are not elaborated in particular.

Auto Complete

This widget provides auto-completion during typing. The data is populated from an indexed controlled vocabulary. Different methods of auto-completion exist, including drop-down boxes, etc.

Fig. 47. Auto-completion example from reegle.info. Suggestions come from a controlled vocabulary related to the domain (clean energy) of the site. Source: http://www.reegle.info [2011-09-20]
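At its core, the auto-complete pattern is a prefix lookup over a sorted controlled vocabulary. A sketch (the vocabulary terms below are invented, not reegle.info's data):

```python
import bisect

class AutoComplete:
    """Prefix auto-completion over an indexed controlled vocabulary,
    in the spirit of the widget shown above (a sketch, not its code)."""

    def __init__(self, vocabulary):
        # keep a sorted, lower-cased index for binary search
        self.terms = sorted(term.lower() for term in vocabulary)

    def suggest(self, prefix, limit=5):
        prefix = prefix.lower()
        start = bisect.bisect_left(self.terms, prefix)  # first term >= prefix
        matches = []
        for term in self.terms[start:start + limit]:
            if not term.startswith(prefix):
                break
            matches.append(term)
        return matches
```

Because the vocabulary is sorted once, each keystroke costs only a binary search plus a short scan, which keeps the widget responsive even for large thesauri.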
Rich Text Editor

The rich text editor recognizes entities in text. It allows users to link words in a text to Linked Open Data entities.

Fig. 48. Rich text editor mockup.

It scans the text during or after entry and provides suggestions for links to resources and entities. It also enables the user to select text manually. The tool will link to existing concepts or create new ones. As soon as a link is confirmed, the user can benefit from features such as quick info. Colours or icons may indicate the entity type (e.g. person, event) or the source of the concept (see Fig. 48). A tool like this is provided by DBpedia Spotlight.

Fig. 49. Clicking the “Annotate” button returns a text with linked DBpedia concepts. Source: http://www5.wiwiss.fu-berlin.de/SpotlightWebApp/index.xhtml [2011-09-20]

Fig. 50. The result links words to DBpedia entities. Source: http://www5.wiwiss.fu-berlin.de/SpotlightWebApp/index.xhtml [2011-09-20]
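A heavily simplified stand-in for what such an editor does in the background is dictionary-based linking: scan the text for known surface forms and emit typed links. The gazetteer and URIs below are illustrative only; DBpedia Spotlight uses statistical spotting and disambiguation, not a fixed list:

```python
import re

# A tiny gazetteer mapping surface forms to illustrative DBpedia URIs.
GAZETTEER = {
    "Sebastian Vettel": "http://dbpedia.org/resource/Sebastian_Vettel",
    "Salzburg": "http://dbpedia.org/resource/Salzburg",
}

def link_entities(text):
    """Scan text for known surface forms and return typed links with
    character offsets, so an editor can underline and annotate them."""
    links = []
    for surface, uri in GAZETTEER.items():
        for match in re.finditer(re.escape(surface), text):
            links.append({"surface": surface, "uri": uri,
                          "offset": match.start()})
    return sorted(links, key=lambda link: link["offset"])
```

The offsets let the editor highlight the exact spans; a real implementation would add confidence scores so users can confirm or reject each suggested link.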
Location Annotation

Location Picker from Map

This tool lets the user select a place, city, etc. based on a 2D map. Examples for other locations: places (such as pubs, theatres, sites), geographical names (such as cities, mountains, bodies of water), etc.

The ÖBB online service uses a button that lets the user open up a map with all stations. Additional features like train stations, bus stops or parking areas can be displayed optionally.

Fig. 51. The ÖBB station picker optionally shows additional information. Source: http://fahrplan.oebb.at [2011-09-20]

Creating a New Place Mark

To create a new place mark, location or point of interest, a user can use a map to locate the spot and name it. Alternatively, the user can enter geo-location data.
Fig. 52. Adding a new location by dragging the mouse cursor or entering a geo-location

Location Differentiation Based on Category

It is possible to label different types of locations, for example with icons for bus stations, subway stations and railway stations, as seen in the ÖBB example.

Fig. 53. Based on the type (subway station or a general landmark), two different entities are presented. Source: http://fahrplan.oebb.at [2011-09-20]

Annotation of Time

Annotation of time can be sophisticated. Besides fixed dates, time stamps may include ranges (World War 2, Medieval) or cyclic events (Christmas, Valentine's Day,
Monday, etc.) as well as relative and hypothetical times (e.g. two months after the release, a year after his death).

Calendar Picker

The calendar picker is used to choose a fully defined date.

Fig. 54. Calendar picker tool from Yahoo. Source: http://travel.yahoo.com [2011-09-20]

People, Event and Theme Annotation

People Tagging

This process is supported by automated face detection or, in an advanced version, by face recognition. Face shapes are pre-selected and then annotated by the user. On a more powerful system, a person's ID is already suggested by the computer. A feature to tag people is implemented in various applications and services such as Facebook, iLife/iPhoto by Apple or face.com.
Fig. 55. Face tagging demo from Mozilla DemoStudio. Source: https://developer.mozilla.org/en-US/demos/detail/facial-recognition-and-analytics-with-html5s-video [2011-09-20]

Sorting of people can be done according to how “close” they are to the user based on a social graph. Other possibilities for sorting are geographic closeness or grouping by categories (such as school, team, family).

@-Sign

This pattern uses a notation known from Twitter and other social networking services. It is used to prefix a user name.

Hashtag (#-Sign)

Events and themes in general can always be picked from a set of choices or denoted by shortcuts or key combinations. But similar to the @-sign for people, it is also possible to use the hash sign (#) to prefix an event. The resulting tag is called a hashtag and can be regarded as a shortcut for the URI. In some environments a colon (:) is used instead of the hash sign. Placed in front of a term, this method is again used to indicate a concept or stand for a URI (“:Sebastian_Vettel”).

Selection and Picking of Vocabulary

Cascading List

The values in the different columns may affect each other. The pattern shows how the font properties are split up into separate lists (TextEdit application).
Fig. 56. Font dialog box from the TextEdit application of Mac OS X. Source: Screenshot of the TextEdit application of Mac OS

Vocabulary Picker with Images

As additional help, search terms can be combined with images or stock icons. last.fm implemented a widget like this.

Fig. 57. Last.fm search widget. Source: http://www.last.fm [2011-09-20]
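The data such a widget needs is simply a vocabulary whose entries carry a thumbnail, so each suggestion can be rendered with an image. A sketch (labels and image URLs are invented, not last.fm data):

```python
# Illustrative vocabulary entries with placeholder thumbnail URLs.
VOCABULARY = [
    {"label": "Radiohead", "image": "http://example.org/thumbs/radiohead.png"},
    {"label": "Rammstein", "image": "http://example.org/thumbs/rammstein.png"},
    {"label": "The Beatles", "image": "http://example.org/thumbs/beatles.png"},
]

def suggest_with_images(prefix):
    """Return matching vocabulary entries together with their images,
    so the picker can show a thumbnail next to each suggestion."""
    prefix = prefix.lower()
    return [entry for entry in VOCABULARY
            if entry["label"].lower().startswith(prefix)]
```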
Vocabulary Picker with Differentiation/Disambiguation

Especially for terms with different meanings, it may be useful to combine text entry with a method of differentiation, for example a vocabulary picker that lists the different possible meanings and gives a short semantic explanation that allows the user to differentiate between them.

Fig. 58. Disambiguation by short explanation.

DuckDuckGo implemented both these ideas, including icons and a grouping of results based on types (people, geography, botany, film and television, music, etc.).
Fig. 59. DuckDuckGo results. Source: http://www.duckduckgo.com [2011-09-20]
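Such a disambiguating picker can be sketched as a lookup that returns each candidate meaning together with a short gloss. The sense inventory below is invented for illustration; a real system would draw it from a thesaurus or from DBpedia disambiguation pages:

```python
# Illustrative sense inventory: term -> list of (sense type, short gloss).
SENSES = {
    "jaguar": [
        ("animal", "large cat native to the Americas"),
        ("car maker", "British automobile manufacturer"),
    ],
}

def disambiguation_choices(term):
    """Return candidate meanings of a term, each with a short semantic
    explanation, as a disambiguating vocabulary picker would present them."""
    return ["%s (%s): %s" % (term, sense_type, gloss)
            for sense_type, gloss in SENSES.get(term.lower(), [])]
```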
Grid Selection

Representations of controlled vocabulary can be placed on a 2-dimensional plane similar to a map, or on any other spatial object. This works especially well if concepts are represented by icons or images. They are placed as items on a grid, as limbs of a body or as herbs in a garden. Similar to geographic maps, such selections are based on the location of a concept in 2- or 3-dimensional space. An example of a grid selection is the symbol palette in a text editor: the symbols are always at the same position, like on a map.

Fig. 60. Searching an icon on a 2D grid is an example of a location-based search method. Source: Screenshot from the TextEdit application of Mac OS X.

Patterns for Ontology Management

Conceptual Mapping

Converting existing metadata into a set of linked data can be a tedious task. Nevertheless, in many cases multimedia archives need to map large amounts of individually annotated resources to uniquely identifiable concepts. The W3C and others suggest mappings between the different metadata fields (e.g. creator, artist). PoolParty includes a mapper that provides an immediate overview of a Linked Data source and lets the user map concepts of one particular set of controlled vocabulary to another, existing one in different ways (e.g. Exact Match).

Fig. 61. PoolParty's Concept Mapper. Source: http://poolparty.punkt.at [2011-09-20]

Category Adder

This tool is useful if a new term is coined for a category, for example “organic food”. None of the existing entities are labelled that way, but to some extent the category can be inferred, or at least narrowed down, by filtering by other categories. Another possibility is to add new categories when a user adds new assets. In TagIT the user can add subcategories when adding a new point of interest.
Fig. 62. By entering text into field (b), the user can add a new sub-category. This new category can also be applied to other parent categories. Source: http://tagit.salzburgresearch.at [2011-09-20]

Crowd-Sourced Annotation

These annotation patterns are used to receive asset annotations from a large number of users (the crowd). These users are not skilled in the way that authorised information workers such as archivists are, but they still deliver valuable contributions. Aside from classical tagging, which often does not involve controlled vocabulary, there are also tools that do not require particular domain knowledge.

Create Context

Users can intentionally create a context for their work environment. This will influence keywords, suggested entities, search, etc. A simple way to create a context would be to drag collections of digital information material (e.g. Word documents, PDF files, meeting minutes) into an “extract” container and create a context by parsing the documents. A list of keywords would be lifted, interlinked to Linked Data concepts and then displayed in a sidebar.

Rating Systems

A rating system as implemented by Amazon collects user feedback on books, articles, etc.

Fig. 63. Amazon rating of products. Source: http://www.amazon.com [2011-09-20]
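A rating widget of this kind typically displays two aggregates: the average score and a per-star histogram. A minimal aggregation sketch:

```python
def rating_summary(ratings):
    """Aggregate 1-5 star ratings into the rounded average and a
    per-star histogram, the two figures a rating widget displays."""
    histogram = {star: ratings.count(star) for star in range(1, 6)}
    average = sum(ratings) / len(ratings) if ratings else 0.0
    return round(average, 1), histogram
```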
“I Know More” Button

This idea was introduced by the riese project (Hausenblas et al., 2008). It allows users to add “User Contributed Interlinking” to metadata. Additionally, the context of the data item from which the button is launched can be taken into account to preselect states of the interlinking process. Since archivists and information editors are usually concerned with quality assurance, including the persistent usage of concepts, this may involve monitoring and approval by authorised personnel.

Other Annotation Tools

Real-Time Video Annotation

This tool works like a live chat. It creates time-stamped annotation snippets. The user types a short annotation and stores it by pressing “Enter”. Depending on the context, concepts are recognized automatically.

Fig. 64. Entering annotations in real time during video playback

The real-time annotation can be supported by additional tools. For a sport event, the current athletes may be inserted in a sidebar, or the different players of a soccer game may be associated with keyboard shortcuts. Events can be triggered manually (e.g. lap numbers in a Formula 1 race, athletes of a skiing race) or automatically (weather conditions, telemetric data). Single concepts or groups of concepts may even be provided as preset buttons inside or next to the annotation window. This helps to speed up live annotation. Presets may include certain people that appear in an event, certain actions, etc.

Tag Recommender

This tool suggests new tags based on text extraction or user entry, for example as soon as a first tag is entered. The recommended tags are related to the first one and allow a refinement of the topic.
PoolParty offers a semi-automated tag recommender to classify texts and to allocate them with concepts.

Fig. 65. PoolParty's Tag Recommender. Source: http://poolparty.punkt.at [2011-09-20]

Completeness Feedback

Similar to the trustworthiness of a document, a user may also be interested in the completeness and amount of available information related to a certain entity. Feedback can be given by percentage numbers or progress bars. In the example below, weather icons indicate the state of an article: whether it is new, improved, tagged, reviewed, etc.
Fig. 66. Weather icons indicate the state of an article (from cloudy: missing information, to sunny: reviewed). Source: http://www.newmedialab.at/projekte/interedu [2011-09-20]

Embedded Annotation Entry

This is a way to annotate content within the workflow of some other process, for example allowing a user to enter the participants of an event during the annotation of a video clip. Imagine the live coverage of a skiing event: during the event, a reporter types in the names of the athletes in real time. This information is not only stored as meta-information of the video, but also creates and edits the participants of the skiing event. That means the editor tool for the event is embedded in a real-time annotation tool, but feeds into the event entity.
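The real-time annotation patterns above share one mechanism: each submitted snippet is stored together with the current media timestamp so it can later be aligned with the video. A minimal sketch (names are illustrative):

```python
import datetime

class LiveAnnotator:
    """Chat-like live annotation: each entry is stored with the media
    timestamp at which it was made, like the snippets described above."""

    def __init__(self):
        self.snippets = []

    def annotate(self, media_seconds, text):
        """Called when the reporter presses Enter during playback."""
        self.snippets.append({
            "at": str(datetime.timedelta(seconds=media_seconds)),
            "text": text,
        })
```

Preset buttons or keyboard shortcuts would simply call `annotate` with pre-filled text, which is how live annotation can be sped up during an event.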
BUNDLED PACKAGES

Several bundled packages are available. First of all, we want to introduce two technologies from the SNML-TNG's partners: PoolParty and M@RS.

PoolParty

PoolParty [25] is a professional thesaurus management system and a SKOS editor for the Semantic Web, including text mining and Linked Data capabilities. The system helps enterprises to build and maintain multilingual thesauri, providing an easy-to-use interface. The PoolParty server provides semantic services to integrate semantic search or recommender systems into enterprise systems like CMS, web shops or wikis.

M@RS

M@RS [26] is a media asset management solution for the intelligent organisation and distribution of media assets. It offers extensive search functions and a sophisticated user and access management that ensure clear structures and lucidity. It supports features such as multi-mandator capability, multilingualism, mass imports, versioning and workflow support. An integrated thesaurus supports different notations (e.g. Photo/Picture), synonyms and abbreviations, and offers a comfortable and efficient research tool.

More Video Annotation Tools

The LIVE “Report on Live Human Annotation” [27] provides an overview of existing video annotation tools, including Anvil, ELAN (EUDICO Linguistic Annotator), M-OntoMat-Annotizer, Vannotea, ViPER-GT, VIDETO Video Description Tool, Frameline 47 Video Notation, VideoLogger and the Efficient Video Annotation (EVA) System. All these tools provide graphical user interfaces to annotate and display video metadata. The following two sections introduce two more examples: the first is an annotation tool, the second an example of a semantic video search engine.

[25] http://poolparty.punkt.at [2011-09-20]
[26] http://www.mediamid.com/hp/mars_6.html [2011-09-20]
[27] http://www.ist-live.org/intranet/iais029 [2011-09-20]
Video Content Annotation: Vizard Annotator

Costa et al. (2002) introduce the “Vizard Annotator”, a video publishing tool that includes a video annotation module. The module allows users to add information in one of three “annotation tracks”: “Transcript” (transcription of spoken language), “Script” (description of the storyline) and “In Shot” (content that appears in the shot). The interface includes a video player with common VCR (video cassette recorder) controls.

Fig. 67. VAnnotator showing the movie player and three annotation tracks. Source: Costa et al. (2002), p. 285

Video Semantic Search: Jinni

Jinni [28] is an online project “to describe video in more richer forms”. It is based on semantic technology and features searches for plot, mood, style and more.

[28] http://www.jinni.com/discovery.html [2011-09-20]
Fig. 68. A Jinni search for “car chase” makes use of semantic technology to find TV clips where the plot includes a car chase. Source: http://www.jinni.com/discovery.html [2011-09-20]
SUMMARY

The current list is a compilation of design patterns showing a variety of user interfaces in the Linked Media domain. As the field of user interaction evolves and new design patterns emerge, this list can only serve as a starting point. We hope, however, that the large set of use cases both guides and stimulates developers and interface designers to provide more engaging and meaningful user interaction with Linked Media. SNML-TNG is planning to implement many of the design patterns, to provide code examples and to share implementations as generic widgets. We appreciate your feedback and contributions.
REFERENCES

• Amin, A., M. Hildebrand, J. van Ossenbruggen, V. Evers, and L. Hardman. “Organizing suggestions in autocompletion interfaces.” Advances in Information Retrieval (2009): 521–529.
• Bailer, Werner, Tobias Bürger, Véronique Malaisé, Thierry Michel, Felix Sasaki, Joakim Söderberg, Florian Stegmaier, and John Strassner. “Ontology for Media Resources 1.0”, March 8, 2011. http://www.w3.org/TR/mediaont-10/.
• Berners-Lee, Tim, James Hendler, and Ora Lassila. “The Semantic Web.” Scientific American (May 2001).
• Bizer, Christian, Tom Heath, and Tim Berners-Lee (in press). “Linked Data – The Story So Far.” International Journal on Semantic Web and Information Systems, Special Issue on Linked Data.
• Bizer, Christian, Anja Jentzsch, and Richard Cyganiak. “The State of the LOD Cloud.” Version 0.2, March 28, 2011. http://www4.wiwiss.fu-berlin.de/lodcloud/state [2011-09-20].
• Costa, M., N. Correia, and N. Guimaraes. “Annotations as multiple perspectives of video content.” In Proceedings of the Tenth ACM International Conference on Multimedia, 283–286, 2002.
• Halb, Wolfgang, and Michael Hausenblas. “select * where { :I :trust :you }: How to Trust Interlinked Multimedia Data.” Proceedings of the International Workshop on Interacting with Multimedia Content (2008).
• Hannemann, Jan, and Jürgen Kett. “Linked Data for Libraries”, 2010.
• Hartl, A., K. Weiand, and F. Bry. “visKQWL, a visual renderer for a semantic web query language.” In Proceedings of the 19th International Conference on World Wide Web, 1253–1256, 2010.
• Hausenblas, M., W. Halb, and Y. Raimond. “Scripting User Contributed Interlinking.” In 4th Workshop on Scripting for the Semantic Web (SFSW08), Tenerife, Spain, 2008.
• Kosch, H., L. Boszormenyi, M. Doller, M. Libsie, P. Schojer, and A. Kofler. “The life cycle of multimedia metadata.” IEEE Multimedia 12, no. 1 (March 2005): 80–86.
• Pipek, V., M. Rohde, R. Cuel, M. Herbrechter, M. Stein, O. Tokarchuk, T. Wiedenhöfer, F. Yetim, and M. Zamarian. “Requirements Report of the INSEMTIVES Seekda! Use Case (D2.2.1)” (2009).
• Rehatschek, H., and G. Kienast. “VIZARD – An Innovative Tool for Video Navigation, Retrieval and Editing.” In Proceedings of the 23rd Workshop of PVA “Multimedia and Middleware”. Vienna, 2001.
• Smith, John R., and Peter Schirling. “Metadata Standards Roundup.” IEEE Multimedia, 2006.
• “W3C Media Fragments Working Group”, n.d. http://www.w3.org/2008/WebVideo/Fragments/.
• Wurman, Richard S. Information Anxiety 2. 1st ed. Que, 2001.
• Zielke, Felix, Christian Eckes, Carsten Rosche, Matthias Aust, Sven Hoffmann, Stefan Grünvogel, and Richard Wages. “D5.2 Report On Live Human Annotation” (2007). http://www.ist-live.org/intranet/iais029/live-05-d5-2-report_on_live_human_annotation.pdf [2011-09-20]
LINKED MEDIA LAB REPORTS – THE NEW SERIES OF THE SNML-TNG

This is the second issue of the series “Linked Media Lab Reports”, published by the Salzburg NewMediaLab – The Next Generation and edited by Christoph Bauer, Georg Güntner and Sebastian Schaffert. Within this series, lab reports in English or German will be published. They are characterised as conceptual papers and/or how-tos. Additional issues are in preparation.

Band 1 (in German): Linked Media. Ein White-Paper zu den Potentialen von Linked People, Linked Content und Linked Data in Unternehmen. (Salzburg NewMediaLab – The Next Generation) ISBN 978-3-902448-27-9

Issue 2: Linked Media Interfaces. Graphical User Interfaces for Search and Annotation. (Marius Schebella, Thomas Kurz and Georg Güntner) ISBN 978-3-902448-29-3

Issue 3: Media Objects in the Web of Linked Data. Publishing Multimedia as Linked Data. (Thomas Kurz) ISBN 978-3-902448-30-9

Band 4 (in German): Smarte Annotationen. Ein Beitrag zur Evaluation von Empfehlungen für Annotationen. (Sandra Schön und andere) ISBN 978-3-902448-31-6

Band 5 (in German, scheduled for November 2011): Qualitätssicherung bei Annotationen. Soziale und technologische Verfahren der Medienbranche
SOCIAL MEDIA – PUBLICATION SERIES OF THE SNML-TNG

Within the series “Social Media”, edited by Salzburg NewMediaLab (editors Georg Güntner and Sebastian Schaffert), the following issues have been published (in German):

Band 1: Erfolgreicher Aufbau von Online-Communitys. Konzepte, Szenarien und Handlungsempfehlungen. (Sandra Schaffert und Diana Wieden-Bischof) ISBN 978-3-902448-13-2

Band 2: (Meta-)Informationen von Communitys und Netzwerken. Entstehung und Nutzungsmöglichkeiten. (Sandra Schaffert, Julia Eder, Wolf Hilzensauer, Thomas Kurz, Mark Markus, Sebastian Schaffert, Rupert Westenthaler und Diana Wieden-Bischof) ISBN 978-3-902448-15-6

Band 3: Empfehlungen im Web. Konzepte und Realisierungen. (Sandra Schaffert, Tobias Bürger, Cornelia Schneider und Diana Wieden-Bischof) ISBN 978-3-902448-16-3

Band 4: Reputation und Feedback im Web. Einsatzgebiete und Beispiele. (Sandra Schaffert, Georg Güntner, Markus Lassnig und Diana Wieden-Bischof) ISBN 978-3-902448-17-0

Band 5 (in Kooperation mit evolaris und Salzburg Research): Mobile Gemeinschaften. Erfolgreiche Beispiele aus den Bereichen Spielen, Lernen und Gesundheit. (Sandra Schön, Diana Wieden-Bischof, Cornelia Schneider und Martin Schumann) ISBN 978-3-902448-25-5