Group 6 steve museum


  • Source: steve.museum is designed so that individuals without art expertise and casual museum visitors can search for specific pieces without knowing precise details about a collection or work of art.
  • Example: a family comes home from vacation and can't remember the artist's name or the piece of work; steve.museum is geared towards this type of user. The project wants to engage such visitors to use its software and to provide the kinds of tags that will help users like them in the future.
  • Designed to provide new ways to describe and access art collections and to encourage visitors to engage with them. It has been implemented in 21 museum institutions and galleries. Users help museums describe their collections by applying keywords, or tags, to objects; users can also publish their collections of tags and collect tags. Steve.Museum is designed to be simple to use: users simply click on an object and apply tags. A good summary of what the site aims to do.
  • What theoretical frameworks have been developed to explain how non-expert, casual users interact with art museum databases?
  • With the onset of the Internet, the predominant users of museum information retrieval systems are shifting from experts to non-experts. Generate: produce keywords for image and object records in a cost-effective way; you don't need to pay an expert in subject analysis to determine these subjects. Engage: museums are no longer a box in a certain location, but accessible to all from every location. Elicit: the sorts of terms non-expert users supply, "pictorial and emotional subject description," will help similarly non-expert users retrieve information in the future. On to the challenges: how effective can these users, and the process of "tagging" images with subject terms, be?
  • The practice of assigning subject terms to art objects is not widespread in museums today. Previously, only a few museums offered "subject" searching, with few guidelines for describing and analyzing art objects. "Curatorial opinions vary on what descriptive terms to use to capture subject themes." Usefulness: what specific types of subjects are requested by users? Lack of user studies: Elizabeth will elaborate on a study by the creators later.
  • Smith provides an analysis of the three levels of iconographic interpretation and relates them to the process of tagging works of art for the benefit of non-expert users. Level I: stag, gold, gilded. Level II: post-impressionism, the date the painting was painted, Dutch school, etc. Level III: using information to say something of value about a work of art, what it means, etc. As Smith notes, non-expert users engage works of art at the first level, and typically provide tags at that level as well. They rely on museums to supply Level II information, and are often not interested in Level III.
  • In one of the user-generated tagging studies conducted at MoMA, researchers found that users supplied consistent tags, but for the most part those tags reflected "commonly perceived aspects." In general, users prefer realistic works to more abstract ones, so the entirety of a museum's collection is not adequately covered. When confronted with abstract works, users provided fewer terms.
  • A good synopsis of a fairly complex article.
  • The study is about 100 pages without appendices, so I've limited my discussion essentially to the questions surrounding the utility of tagging for art museums; there were also questions about how the user interface influenced user tagging behaviour, but for the sake of brevity I've left them out.
  • What was going on before steve? Art historians require incredible volumes of material to produce a persuasive argument, and procuring reproductions of sufficient numbers of examples has long been a problem. After a decade of debating whether art museums should even make their collections available online, the collections were finally available, but they failed to address the fact that the online medium cannot be treated the same way as a physical space: it is the difference between offering interpretation and offering information retrieval. This may be fine if only trained art historians access the collections, but if museums want to use their online presence to increase their audience and their appeal to the general public, it creates a few fundamental problems for the average person who wants to explore the collection: (1) the language used by museum curators is so specialized that it is frequently of little value to a casual user; (2) when items are organized the same way they would be presented in an exhibition, a user who does not share that particular point of view of the context will not be able to find them; (3) at the opposite end of the spectrum, if the items are not contextualized at all, it becomes difficult to find related items.
  • Where Smith seemed very critical of social tagging, Trant decided that social tagging should be explored as a solution. She argues that the potential benefits of social tagging include: the perspective of the general public, which mitigates some of the closed and incomprehensible nature of the specialized vocabulary (as a Metropolitan Museum of Art curator put it, "everything I know is not in the picture"); its subjective nature, which increases the number of access points; and a broadened scope (distinguishing between replacing and augmenting professional description). In Trant's words, "User tags might help bridge the gap between professional and public discourse by providing a source of terms not in museum documentation" (p. 3). Trant stressed, though, that "The tagging activity needs to be positioned within a context of on-line information retrieval and use, and distinguished from possible studies of in-gallery applications or discursive art educational texts and programs" and that "Studies of tagging must take care to distinguish it from more discursive user commenting."
  • However, Trant did not simply suggest diving right into social tagging. She felt that this possibility should be carefully studied, and thus was born a research project and social tagging experiment, the results of which would determine its future implementation. This is what sets steve.museum apart from the other sites we have looked at. Where the other social tagging sites seemed to develop organically as an extension of the organization of their specific type of information, steve.museum was developed in a very controlled fashion. Instead of social tagging being the intuitive solution developed and put out there for people to use as they saw fit, there were very specific criteria for steve's implementation. Prior to the project outlined in this report, steve.museum was created by a consortium of museums who wanted to study the utility of social tagging. They created the software as an unaffiliated third-party environment that would allow them to study tagging without confusing the experiment with institutional services. The collaborative nature of the project also allowed a broader perspective than a single institution might have produced, and the software had already been used to conduct prototype studies when this research project began. So what exactly did they want to know in this study? First, when all the user tags are considered together, do they provide additional information, or just restate what the professional language represents? (Compare to the Union List of Artists' Names and the Art and Architecture Thesaurus; if yes, recall can be improved by adding access points.) Second, are the tags useful? This was established by a review by museum staff; a staff review of actual tags may help address criticism that questions the ability of a naive user to provide useful information. Third, compare search terms: if the tagging terms match search terms above and beyond those already represented by the museum documentation, searching can be said to be improved by tagging.
  • Because the prototype tagging showed that some types of materials elicited more tagging than others, they ensured that a full range of materials was presented, distributed in roughly the same proportions as an actual collection. This is the same information that might be presented on the museum website or on labels in the museum; no additional research was conducted prior to tagging. The data collected included "the tags assigned to each work, details about the context in which they were assigned and whether users chose not to tag a work, to skip it without adding any tags." "Required data collected at registration included Language, Education, Art Experience and Year of Birth. Other optional information included Gender, Community Affiliation, Income, Relationship to a Museum (work in one, visit often, felt involvement), Internet Usage and Connection, Tagging experience and sites used. Finally, users were asked if they were willing to be contacted for follow-up during the research project." "Users that did not register were assigned a sequential user identifier to group their tagging activity."
  • 86% of user tags were not found in the museum documentation (a simple match of tag terms to documentation terms). Many of the terms that did not match were based on what could be seen in the picture; museums do not supply these kinds of terms in their online records. It was hard to determine whether the terms that did match were used in the same way, because of the lack of context (ULAN = Union List of Artists' Names). The usefulness of the tags associated with any given work ranged from 65% to 100%, and of course things such as image resolution could affect legibility and thus accuracy. The more frequently a tag was applied to a single item, the more likely it was to be considered useful; all tags applied four or more times were unanimously considered useful. In practical implementation, this may mean that where there is a high frequency of tagging by the same term, the tag need not undergo a review process to be included. "Useful misperceptions" are things tagged incorrectly, but more than once, indicating terms that might usefully redirect to the correct terms as a common misperception. "This qualitative analysis was designed to address museum-based concerns about the appropriateness of tags assigned by the general public, and contribute to our understanding the contribution of tagging. It has helped to establish that the vast majority of tags assigned by users of the steve tagger were appropriate, and that misbehaviour in the steve tagging environment was very infrequent." The low correlation with search terms could be related to a disproportionate number of searches by artist name (a learned behaviour, as this is a more reliable way to search) and to the fact that the collection search is not separated from the rest of the site (the search logs include things like opening hours). Still, one item was only successfully retrieved by searching the tags associated with it, with 4 tags out of 30, which indicates that tags may still improve recall even if the data does not support the hypothesis to the extent anticipated.
  • Many registered users returned to the site after their initial visit; those directed to the project by a museum saw themselves as helping the museum and provided four times as many tags on average. Added functionality could increase this interest further. Usefulness was probably the easiest hypothesis to support, while correlation to search terms was probably the most problematic: many of the "useful" tags could not retrieve the specific item, and one test showed that only 4 of 39 tags successfully retrieved the item. A good synopsis of the report, but perhaps a little too detailed for a presentation of this length.
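The analyses summarized in these notes, simple matching of tags against museum documentation, the frequency-based usefulness finding, and the comparison with search logs, can be sketched in a few lines of Python. The data, function names, and the auto-approval threshold below are illustrative assumptions, not part of the steve study itself.

```python
from collections import Counter

def match_rate(tags, documentation_terms):
    """Fraction of distinct user tags found verbatim among the words
    of the museum's own documentation (a simple term-level match)."""
    doc_words = {w.lower() for term in documentation_terms for w in term.split()}
    distinct = {t.lower() for t in tags}
    return len(distinct & doc_words) / len(distinct)

def auto_approve(tag_counts, threshold=4):
    """Skip manual review for tags applied at least `threshold` times,
    mirroring the finding that all tags applied four or more times
    were unanimously judged useful."""
    return {tag for tag, n in tag_counts.items() if n >= threshold}

def search_overlap(search_terms, tags):
    """Fraction of distinct search-log terms that also occur as user tags."""
    tag_set = {t.lower() for t in tags}
    searches = {s.lower() for s in search_terms}
    return len(searches & tag_set) / len(searches)

# Invented example data
tags = ["stag", "gold", "deer", "stag", "stag", "stag"]
doc_terms = ["Diana and the Stag", "silver, partly gilt"]
log = ["rubens", "monet", "stag", "opening hours"]

print(match_rate(tags, doc_terms))   # 1 of 3 distinct tags matches: ~0.33
print(auto_approve(Counter(tags)))   # {'stag'}
print(search_overlap(log, tags))     # 0.25
```

Under this sketch, a museum could track `match_rate` over time to see whether tags keep adding vocabulary beyond its own records, and route only low-frequency tags to staff review.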
  • For the overall evaluation of the website, I approached the analysis differently here, since the site is really a project. I followed a few organizations affiliated with steve to see how it works in practice. Here is a bit more detail. First, the SOFTWARE: on the steve site they have published an open-source tool that organizations can download and incorporate into their own websites. It lets them tag museum collections, analyze the tags, and manage them. Second, STEVE.TAGGER: a tool for adding tags to images from more than 21 institutions. You must sign in via Google or Yahoo before you can add keywords. You can choose the language in which you want to tag, but I saw nothing other than English. And if you try to use a space to create a compound keyword, it is collapsed into a single word.
  • steve attempts to exert some control: their wiki offers documents, for example "Steve in Action: Object Metadata Specification," which gives guidelines for the metadata (a Dublin Core description). But this can be a problem when the information comes from several sources. For example, when you run a search, since the objects come from several museums, the data may not be entered the same way from one page to the next. Searching can therefore be a bit disorienting, since the search covers all fields, including the tags added by users. Finally, given that the tags follow no rules at all, it would nevertheless be interesting to see some regulation of them reflected in the metadata.
  • Museums that participate: I browsed through them, and here are a few conclusions. First example: a museum of Tibetan art uses the steve software to see what users think of its works. Here it is used more as an analysis tool than as a search tool. This function is interesting because it allows the tags to be analyzed against several criteria, since users must fill out a short questionnaire before they can tag the images.
  • Second example: the museum that uses steve.tagger the most (probably because they participated in developing the tool and provided a grant). It uses the steve tagger tool, and you must sign in to add tags.
  • The search engine lets us run several kinds of searches, but it has shortcomings, which I will explain by walking you through a search I ran. In the toolbar, you can choose to search while excluding the tags, or within a chosen time period.
  • You can refine a search by tags by clicking on the left. I chose "dress" and "woman" and clicked on an image.
  • Even though I chose to limit the search by tags, when I arrive here I see that no tag has been added by users. Be that as it may, I do get an image of a woman, but the search can be disorienting.
  • Another problem: the difference between plural and singular, which produces many duplicate words. (Yet another problem: the blog does not seem to have been updated since December 2010.)
  • Conclusion: some parts of the project seem to have been set aside; for example, the steve blog has not been updated in more than a year, and several tools are no longer available (the Facebook tagging tool, for instance). As the articles we described showed, there is a real need, since most user tags do not appear in the descriptions written by professionals. But the tagging would then require a bit more rigour. Without limiting users' choices, perhaps adding tags only when several users supply them could limit errors. The McCord Museum does this, for example, and it is not the first: in Google's ESP Game, when two people enter the same keyword it can be added, which provides verification and avoids certain errors. A good overall summary of strengths and weaknesses, but some of the screen captures were difficult to follow in the presentation.
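The two quality-control ideas raised at the end, folding plural/singular variants together and accepting a tag only once several users agree on it (as in the ESP Game), can be sketched together. The class, the method names, and the deliberately naive singularization rule are illustrative; a real system would use a proper stemmer or a library such as `inflect`.

```python
from collections import defaultdict

def naive_singular(tag):
    """Deliberately naive English singularization, for illustration only."""
    t = tag.lower().strip()
    if t.endswith("ies") and len(t) > 3:
        return t[:-3] + "y"
    if t.endswith(("ses", "xes", "shes", "ches")):
        return t[:-2]
    if t.endswith("s") and not t.endswith("ss"):
        return t[:-1]
    return t

class AgreementTagger:
    """Accept a tag for an object only after two distinct users supply it,
    in the spirit of the ESP Game's independent-agreement check."""

    def __init__(self, required_users=2):
        self.required_users = required_users
        self._supporters = defaultdict(set)  # (object_id, tag) -> user ids
        self.accepted = defaultdict(set)     # object_id -> accepted tags

    def submit(self, object_id, user, tag):
        tag = naive_singular(tag)            # fold plural/singular duplicates
        self._supporters[(object_id, tag)].add(user)
        if len(self._supporters[(object_id, tag)]) >= self.required_users:
            self.accepted[object_id].add(tag)

tagger = AgreementTagger()
tagger.submit("obj1", "alice", "Dresses")
tagger.submit("obj1", "alice", "dress")   # same user twice: still one supporter
tagger.submit("obj1", "bob", "dress")     # second distinct user: accepted
print(tagger.accepted["obj1"])            # {'dress'}
```

The normalization step also means "dress" and "dresses" count as agreement on the same tag, addressing both problems with one pass.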

    1. A- (see comments below and on bibliography). ISI 5121 Subject Analysis of Information. Social Tagging. Group 6: Meghan Dunlap, Elizabeth Ross, Peter Forestell, Mariane Léonard. Steve Museum
    2. As a non-specialist in the field of art and sculpture, what terms would you use to locate this item in a museum's database? Artist/Maker: Joachim Friess (ca. 1579-1620, m. 1610); Title: Diana and the Stag; Object Name: Automaton; Date: first quarter of the 17th century (about 1620); Made in: Germany (Augsburg); Medium: silver, partly gilt, jewels, enamel. Overlaid viewer tags: animal, antler, stag, gold, statue, gilded
    3. • Based on open-source software that aids developers in social tagging research on museum collections, while testing the effectiveness of tagging • A socially focused data tagging tool aimed at making museum collections and acquisitions more accessible • steve.museum's goal is to build interest around museum and gallery holdings.
    4. 21 Institutions, 95,468 Objects, 508,250 Terms, 4,962 Users
    5. Peter Paul Rubens, "The Straw Hat" (1625)
    6. In 2008, Steve.Museum received a National Research Grant for Advancing Digital Resources from the US Institute of Museum and Library Services. "The goals of the grant are to enhance the existing tagging software tools to make steve easy to use for museums of all sizes and types; to develop next-generation tagging tools that motivate and engage users, including mobile interfaces that allow tagging in museum spaces; to investigate ways to aggregate tags in order to facilitate cross-collection searching and browsing; and to demonstrate integrations of the steve tagger with commonly-used museum systems"
    7. Viewer Tagging in Art Museums: Comparisons to Concepts and Vocabularies of Art Museum Visitors. Martha Kellogg Smith, University of Washington, USA. Advances in Classification Research, Vol. 17: Proceedings of the 17th ASIS&T SIG/CR Classification Research Workshop (Austin, TX, November 4, 2006), ed. Jonathan Furner and Joseph T. Tennis. Why are art museums engaging online users to solicit subject keywords for various works of art? And how successful have they been?
    8. Motivations • Generate keywords in a cost-effective way • Engage online visitors • Elicit terms for "subjects" in artworks • Closing the "semantic gap" between specialists and casual museum visitors
    9. Challenges • Difficulty of convincing art museums that this is a useful practice • Specialized art vocabularies • Curatorial experts differ in opinion on what subject terms to use • How useful is this to the end user? • As of 2006, a lack of user studies
    10. Levels of artwork interpretation and information use. Level I: objects and their parts; concretely observed characteristics. Level II: styles, dates, and original and historical settings and functions. Level III: evaluating, explaining, and synthesizing interpretations.
    11. (more) Challenges. Does asking users to supply Level I type tags help them develop Level II or Level III type knowledge? • Art historical and foreign-language terms • Depth and coverage • Imprecision, error • Bias
    12. Smith's conclusion: "The generation of keywords for populating systems should not inadvertently encourage non-specialist volunteer taggers to interpret keywording activity as somehow what art viewing and meaning making is all about: simply enumerating and listing what they see." Exactly how art museums can use online resources to help their users move beyond Level I type information is yet to be seen.
    13. Tagging, Folksonomy and Art Museums: Results of steve.museum's research. Jennifer Trant, University of Toronto / Archives & Museum Informatics. What has steve.museum demonstrated?
    14. The problem: accessibility of art museum collections. Collections made available online, but organized according to the principles of the physical space: 1) Highly specialized and technical language 2) Items could be organized as an exhibition 3) Items could be in a non-contextualized database
    15. The solution: Social Tagging • Can give the perspective of the general public • Accounts for the sometimes subjective nature of art • Broaden the scope of professional indexing and cataloguing
    16. steve.museum: a research project on social tagging in art museums. Research Question: Can Social Tagging and Folksonomy Improve On-line Access to Art Museum Collections? • Do user tags differ from terms in professional museum documentation? • Do museum staff find user tags useful for searching art collections? • Do user tags differ from terms used to search on-line art museum collections?
    17. Methodology • A set of images of works of art to be tagged was selected from existing digital materials • Each item was accompanied by the following documentation from the museum: Artist (nationality, birthdate-deathdate); Title, date; medium, support; dimensions; acquisition details (accession number) • The works of art were presented to the user through the software, which recorded user data and connected it with the tags the user assigned • Users could register with the site, or access the site and begin tagging without registering
    18. Results • Do user tags differ from terms in professional museum documentation? • 86% of user tags not found in museum documentation • 62.8% of distinct tags not found in the AAT • 85% of distinct tags not found in ULAN • Do museum staff find user tags useful for searching art collections? • 88% of tags considered useful overall • Correlation between usefulness and frequency • Some tags considered useful misperceptions • Do user tags differ from terms used to search on-line art museum collections? • Search log data was analyzed from the Minneapolis Institute of Arts and the San Francisco Museum of Modern Art • Only 38.5% and 22.6% matched distinct tags
    19. Conclusions. Research Question: Can Social Tagging and Folksonomy Improve On-line Access to Art Museum Collections? • Interest in tagging is high and could also lead to increased engagement with museums • While some correlations were harder to prove than others, including tagging would certainly at least improve recall
    20. Evaluation
    21. Control?
    22. Museums that have implemented it
    23. Indianapolis Museum of Art
    24.
    25.
    26. Another problem
    27. Conclusion