The document discusses the origins and development of JSON (JavaScript Object Notation). It describes how Douglas Crockford discovered JSON in 2001 and published the first JSON specification in 2002. It outlines some of the key decisions made in JSON's design, such as requiring quoted keys and allowing different programming languages to parse it.
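The quoted-key rule the summary mentions can be seen with any strict JSON parser; a minimal Python sketch (the sample object is illustrative, not from the original deck):

```python
import json

# Valid JSON: keys must be double-quoted strings.
doc = '{"name": "JSON", "year": 2002}'
parsed = json.loads(doc)
print(parsed["year"])  # 2002

# JavaScript-style unquoted keys are rejected by a strict JSON parser.
try:
    json.loads("{name: 'JSON'}")
except json.JSONDecodeError:
    print("unquoted keys are not valid JSON")
```

Requiring quoted keys keeps the grammar small and lets every host language parse the same byte stream the same way.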
The document discusses various topics related to Web 2.0 including web feeds, markup tagging, collaborative filtering, social networking, and text mining. It provides examples of using microformats to add semantics to web pages through tags like XFN for social networks, hCard for contact information, and hCalendar for events. It also discusses using RDFa and embedded RDF for metadata and tools for tagging like Delicious, Flickr, CiteULike, and Connotea.
The document provides an introduction to XML, explaining that it aims to capture the structure and meaning of content rather than presentation. It discusses some key XML concepts like semantic tags, elements, attributes, and lowerCamelCase naming conventions. The document also outlines reasons why Wiley needs XML, including for single source publishing to multiple formats and platforms, enriched content features, and powerful searching across content.
The document discusses GRDDL (Gleaning Resource Descriptions from Dialects of Languages), an easy way to extract metadata from web pages in various formats like HTML and XML. It allows one to declare a web page or document as a source of data, link it to transformations that can glean semantic data from it, and then have GRDDL agents extract RDF descriptions from the document. This is done by adding GRDDL profile and transformation links that declare the source and the transformation to be applied.
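The profile-plus-transformation pattern can be sketched concretely. Below, a minimal XHTML page carries the GRDDL `data-view` profile on its head and a `<link rel="transformation">` pointing a GRDDL agent at a stylesheet; the stylesheet URL is a hypothetical example. Python's standard ElementTree stands in for a GRDDL agent's discovery step:

```python
import xml.etree.ElementTree as ET

# A minimal XHTML document declaring a GRDDL transformation: the head
# carries the GRDDL profile, and <link rel="transformation"> names the
# XSLT that gleans RDF from the page (the .xsl URL is made up here).
page = """\
<html xmlns="http://www.w3.org/1999/xhtml">
  <head profile="http://www.w3.org/2003/g/data-view">
    <link rel="transformation" href="http://example.org/glean-hcard.xsl"/>
    <title>Example</title>
  </head>
  <body/>
</html>"""

XHTML = "{http://www.w3.org/1999/xhtml}"
root = ET.fromstring(page)
head = root.find(XHTML + "head")

# Discovery step: find the declared transformation, as an agent would.
transform_href = None
for link in head.findall(XHTML + "link"):
    if link.get("rel") == "transformation":
        transform_href = link.get("href")

print(head.get("profile"))
print(transform_href)
```

A real agent would then fetch the stylesheet and apply it to the page to produce RDF; only the discovery half is shown.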
With some of the newer HTML5 APIs it is now possible to redesign how your web pages interact with the desktop. Web pages are too often little islands that fail to interact well with the wider user interface of our devices. This talk will explain the new Drag and Drop and File APIs, demonstrating how to make web pages more equal partners in the world of applications.
This talk covers the practical reuse of social media data and how it can create a better user experience, combining Google's Social Graph API with open data sources like RSS and microformats to provide a wealth of information about your users.
REST has become a standard for web APIs. But despite its popularity, it is full of flaws. Its successor already exists, and it comes from Facebook. Come discover in detail the principles, implementation, and ecosystem of GraphQL.
This workshop explained how metadata, both technical and descriptive, is and can be embedded in digital files, and how its addition makes those files more accessible and usable. If you have a large collection of digital files and are looking for a way to add information about them without a database, using embedded metadata may be for you.
In a hands-on session, we worked with several existing embedded metadata reading and editing tools and introduced the Visual Resources Association's custom XMP File Info panel for the Adobe Photoshop and Bridge applications.
more info:
http://serc.carleton.edu/viz/metadata11/embedded_metada.html
CrossRef Secondary Query: Practice and Problems (2011 CrossRef Workshops), by Crossref
The document discusses issues related to linking references to journal articles and non-journal items in CrossRef, as well as the introduction of CrossRef's secondary query feature. It provides examples of references that failed to link using structured queries but were successfully linked using secondary author-title queries due to metadata problems like incorrect titles, proceedings mismatches, and proceedings title mismatches. It also discusses how different reference types like book chapters can be represented in CrossRef metadata.
This document discusses XML and the Document Object Model (DOM). It introduces XML documents and namespaces. It describes how the DOM represents an XML document as a tree structure with parent and child nodes. It provides code examples for reading an XML file using an XmlReader, and manipulating an XML document's DOM tree by building a copy of the tree as TreeNodes in a TreeView control.
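The parent/child tree structure the summary describes can be shown in a few lines. The original deck uses .NET's XmlReader and a TreeView control; the sketch below is a Python analogue using the standard-library minidom, with a made-up sample document:

```python
from xml.dom.minidom import parseString

# Parse a small XML document into a DOM tree: the document element is
# the root, and nested elements become its child nodes.
doc = parseString(
    "<book><title>XML Basics</title><author>A. N. Author</author></book>"
)

def walk(node, depth=0, out=None):
    """Recursively collect one line per element, indented by tree depth."""
    if out is None:
        out = []
    if node.nodeType == node.ELEMENT_NODE:
        out.append("  " * depth + node.tagName)
        for child in node.childNodes:
            walk(child, depth + 1, out)
    return out

lines = walk(doc.documentElement)
for line in lines:
    print(line)
```

The indented output mirrors what the deck builds as TreeNodes in a TreeView: each element one level deeper than its parent.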
creating portable social networks with microformats, by elliando dias
Microformats like hCard and XFN allow people to publish social network information like profiles and connections in HTML. This makes the data portable across sites and interoperable. Key points are:
1) hCard encodes things like names and profiles in HTML class attributes. XFN uses HTML link relations like "contact" and "me" to represent connections between people.
2) Many existing social networks already publish this kind of data, so microformats extract and structure existing information rather than requiring new behaviors.
3) Google's Social Graph API can parse microformats to allow importing profiles and connections from different sites and services into a single social network.
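The two markup conventions in point 1 can be extracted with nothing beyond the standard library. The sample markup and URLs below are hypothetical; real microformat parsers handle far more of the hCard and XFN vocabularies than this sketch, which only pulls the formatted name (`class="fn"`) and `rel="me"` identity links:

```python
from html.parser import HTMLParser

# Sample hCard + XFN markup (made-up profile URLs).
html_doc = """
<div class="vcard">
  <a class="fn url" href="http://example.org/alice">Alice</a>
</div>
<a rel="me" href="http://photos.example.org/alice">my photos</a>
"""

class MicroformatParser(HTMLParser):
    """Collect hCard formatted names and XFN rel="me" links."""

    def __init__(self):
        super().__init__()
        self.in_fn = False
        self.names = []
        self.me_links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "fn" in attrs.get("class", "").split():
            self.in_fn = True          # next text node is the name
        if "me" in attrs.get("rel", "").split():
            self.me_links.append(attrs.get("href"))

    def handle_data(self, data):
        if self.in_fn:
            self.names.append(data.strip())
            self.in_fn = False

p = MicroformatParser()
p.feed(html_doc)
print(p.names, p.me_links)
```

This is the essence of point 2: the data is already sitting in class attributes and link relations, so a crawler only structures what sites publish anyway.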
Leon Poole from Creative World gives you "Practical HTML5". A short talk that starts with a brief look at where HTML5 is currently, followed by some practical HTML5 markup examples.
This presentation was originally created for a discussion on HTML5 at the Brisbane Web Design meetup group.
The document provides guidance on evaluating the reliability and credibility of websites. It recommends checking who authored and published the site, whether sources are cited, if there are signs of bias, and searching the domain on tools like Alexa to understand ownership and backlinks. It also suggests searching topics on Google to compare sites and analyzing the quality and reliability of top results.
The document provides an overview of HTML and CSS basics. It defines HTML as a markup language used to structure and present content on the web. It lists and describes many common HTML tags such as <head>, <body>, <p>, <img>, <a>, etc. It also defines CSS as cascading style sheets used to describe the presentation of HTML content, and describes various CSS properties, selectors, and ways to apply CSS styles.
The document provides an overview of XML including:
- The basic syntax and structure of XML documents
- Key differences between XML and HTML
- Common XML data types like elements and attributes
- How whitespace is handled in XML
- What makes an XML document valid versus well-formed
- How XML Schema can be used to validate XML documents
- How namespaces help avoid conflicts between element names
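The valid-versus-well-formed distinction in the list above can be demonstrated for the well-formed half with the standard library (the sample documents are illustrative). Note that validity, i.e. conformance to a DTD or XML Schema, needs a separate validator such as lxml's `XMLSchema`; stdlib ElementTree checks only well-formedness:

```python
import xml.etree.ElementTree as ET

def is_well_formed(text):
    """Well-formedness is purely syntactic: every tag closed, properly nested."""
    try:
        ET.fromstring(text)
        return True
    except ET.ParseError:
        return False

print(is_well_formed("<note><to>Ada</to></note>"))  # properly nested
print(is_well_formed("<note><to>Ada</note></to>"))  # overlapping tags
```

A document can be well-formed yet invalid (wrong elements for its schema); it can never be valid without first being well-formed.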
The document provides an overview of creating pages in Symfony, including generating a module skeleton, adding actions and templates, passing information between actions and templates, linking between actions, and retrieving information from requests. Key aspects covered are that pages have separate logic in actions and presentation in templates, helpers can generate HTML, and the request parameters should be accessed through the parameter holder rather than directly.
This document summarizes research into discovering lost web pages using techniques from digital preservation and information retrieval. Key points include:
- Web pages are frequently lost due to broken links or content being moved/removed, but copies may still exist in search engine caches or archives.
- Techniques like lexical signatures (representing a page's content in a few keywords) and analyzing page titles, tags and link neighborhoods can help characterize lost pages and find similar replacement content.
- Experiments showed that lexical signatures degrade over time but page titles are more stable, and combining techniques improves performance in locating replacement content. The goal is to develop a browser extension to help users find lost web pages.
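A lexical signature as described above can be sketched very simply. Real systems weight terms by TF-IDF against a large corpus; the toy version below uses plain term frequency with a tiny stopword list purely to keep the sketch self-contained:

```python
from collections import Counter
import re

# Minimal stopword list for the sketch; real systems use larger lists.
STOPWORDS = {"the", "a", "of", "and", "to", "in", "is", "are", "but", "may"}

def lexical_signature(text, k=5):
    """Return the k most frequent non-stopword terms of a page's text."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(k)]

page = ("Web pages are frequently lost, but copies of lost pages may "
        "survive in search engine caches and web archives.")
sig = lexical_signature(page)
print(sig)
```

Feeding such a signature back into a search engine is how replacement copies of a lost page are located; the experiments cited above combine it with the more stable page title.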
The document discusses XML and web services with PHP5 and PEAR. It provides an introduction and overview of XML including related technologies like DTD, XML Schema, Relax NG, XPath, and XSLT. It also covers using XML in PHP5 with SAX, DOM, SimpleXML, XPath, and XSLT. Finally, it discusses web services in PHP5 and PEAR.
Creative Commons @ Seybold San Francisco 2004 - DRM Roundtable, by Mike Linksvayer
This document summarizes a discussion on digital rights management (DRM) and rights description. It provides an example of using metadata to describe the license and permitted uses of a creative work according to a Creative Commons license. It also distinguishes between rights description, which promotes certain uses, and rights management, which focuses on restricting uses and protecting content.
The document discusses XML namespaces and XML schemas. It provides examples of using namespaces to differentiate between similarly named elements, such as <highschool:subject> and <medicine:subject>. It also compares defining an XML document using a DTD versus using an XML schema, and provides a sample schema for defining book information. Key differences between "ref" and "type" attributes in schemas are explained using an employee example.
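The `<highschool:subject>` versus `<medicine:subject>` disambiguation works because each prefix is bound to a distinct namespace URI. A short sketch (the URIs are illustrative, not from the original deck):

```python
import xml.etree.ElementTree as ET

# Two vocabularies both define a <subject> element; the prefixes bind
# each one to its own namespace URI, so they never collide.
doc = """\
<record xmlns:highschool="http://example.org/highschool"
        xmlns:medicine="http://example.org/medicine">
  <highschool:subject>Algebra</highschool:subject>
  <medicine:subject>Cardiology</medicine:subject>
</record>"""

ns = {"hs": "http://example.org/highschool",
      "med": "http://example.org/medicine"}
root = ET.fromstring(doc)
print(root.find("hs:subject", ns).text)
print(root.find("med:subject", ns).text)
```

The prefixes used in queries (`hs`, `med`) need not match the document's (`highschool`, `medicine`); only the URIs they map to matter.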
GTAC: AtomPub, testing your server implementation, by David Calavera
The document discusses Atom Publishing Protocol (AtomPub), an application-level protocol for publishing and editing web resources. It provides an introduction to AtomPub and the Atom format, describes how AtomPub works using an introspective protocol, and outlines steps for testing an AtomPub server including creating, retrieving, modifying and deleting resources.
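The create step of the AtomPub cycle starts with an Atom entry document, which an AtomPub client POSTs to a collection URI. The sketch below only builds the entry itself in the standard Atom namespace; the id is a placeholder and no network call is made:

```python
import xml.etree.ElementTree as ET

# Build a minimal Atom entry (title, id, updated are required by the
# Atom format). The urn:uuid id here is a placeholder value.
ATOM = "http://www.w3.org/2005/Atom"
ET.register_namespace("", ATOM)

entry = ET.Element("{%s}entry" % ATOM)
for tag, text in [("title", "First post"),
                  ("id", "urn:uuid:00000000-0000-0000-0000-000000000000"),
                  ("updated", "2008-05-27T12:00:00Z")]:
    child = ET.SubElement(entry, "{%s}%s" % (ATOM, tag))
    child.text = text

xml_bytes = ET.tostring(entry)
print(xml_bytes.decode())
```

Testing a server, as the talk describes, then amounts to POSTing such an entry to the collection, GETting the member URI the server returns, PUTting a modified entry, and finally DELETEing it.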
The document discusses the history and development of JSON (JavaScript Object Notation). It describes how Douglas Crockford discovered JSON in 2001 and published the first JSON specification in 2002. It outlines some of the key decisions made in the design of JSON, such as requiring quoted keys and not including a version number to ensure stability. The document presents JSON as a minimal data interchange format influenced by languages like JavaScript, Python, and Lisp.
The document discusses the origins and development of JSON (JavaScript Object Notation). It describes how Douglas Crockford discovered JSON in 2001 and published the first JSON specification in 2002. It outlines some of the key events in the early adoption of JSON, including its use for browser/server communication and as an alternative to XML.
The document discusses using the XML::XSH2 tool in Perl to process XML documents, describing how it allows working with XML using XPath queries and verbs to navigate, select, and transform XML content. Examples are provided of using XML::XSH2 to count elements, copy content between XML files, and convert between XML formats like transforming XML into HTML.
This talk was given by Mark Birbeck at 'Kings of Code', in Amsterdam on May 27th, 2008.
The W3C has a number of exciting new languages in development, from XForms for user interface definition, to RDFa and @role for defining semantics. These languages are usually regarded as something to be implemented natively by browsers, but this presentation shows examples of how they can also provide a rich source of 'hooks' onto which to attach 'unobtrusive JavaScript'.
This presentation was given at SemTech 2009.
RDFa combines the power of RDF with the ease of publishing HTML, making it the missing link of the semantic web. Now anyone who can publish to the web -- from bloggers to corporations to governments -- can easily publish to the semantic web.
This session will introduce the key ideas of RDFa and its syntax, by way of a case-study of a project from a UK government department. We'll also survey the state of support in tools and software.
The document discusses RESTful IDEAS (Integrated Documentation Environment for Aircraft Support) which aims to improve the distribution of aircraft technical documentation using RESTful web services and standards like AtomPub, OpenSearch, and S1000D. It describes how technical information is currently distributed via paper/CDs and proprietary formats, and outlines a new architecture that leverages web feeds, syndication, and federated search to provide up-to-date electronic documentation to airlines, manufacturers, and other stakeholders.
This document summarizes various XML formats used in libraries for managing electronic resources metadata, including MARCXML, MARCXML holdings, ISO 20775 holdings schema, OpenURL XML formats, and digital library standards like Dublin Core, MODS, and METS. It provides examples of XML code for each format.
The document discusses the Semantic Web, including its languages (RDF, RDFS, OWL), storage and querying using SPARQL, and methods for browsing and viewing semantic data through techniques like faceted browsing and Fresnel lenses. While the core technologies exist, broader adoption of the Semantic Web on the mainstream web still has challenges to overcome.
The document discusses the architecture and features of Struts 2 framework. Some key points:
1) Struts 2 architecture simplified the original Struts 1 architecture by removing unnecessary classes and using plain Java objects instead of forms and forwards.
2) It introduced interceptors to handle tasks like validation and security in a centralized way.
3) An example demonstrates how the same form can be defined more simply in Struts 2 using tags instead of custom JSP elements.
4) Additional features discussed include built-in testing support, debugging tools, ability to run existing Struts 1 actions, and tutorials/training resources.
The document discusses how JavaScript, AJAX, jQuery, and other technologies enable dynamic and interactive functionality on web pages without reloading. It covers how the Document Object Model (DOM) allows JavaScript to access and modify elements, and how events and asynchronous requests via XMLHttpRequest (XHR) enable complicated behaviors and communication between the browser and server. jQuery is presented as a library that abstracts away cross-browser differences and simplifies tasks like DOM manipulation and event handling.
The document discusses various standards for data portability including open standards, RSS, RDF, XMPP, and microformats. It also covers topics like OAuth, OpenID, and data portability tools that can make the internet more like real life by allowing users to easily transfer data between services. The presentation encourages the use of these standards and tools to improve user control and mobility of personal data online.
1. The document discusses significant characteristics that need to be extracted from digital files to enable their long-term preservation and processing.
2. It describes solutions like the PRONOM and DROID formats registries, and the XCL extractor that can identify file formats and extract characteristics in a standardized way.
3. The document argues that automating characteristic extraction is necessary when dealing with large numbers of files, as it can save significant time compared to manual processing.
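The automated identification in point 2 can be illustrated at its simplest: matching a file's leading "magic" bytes against a signature registry. Real tools like DROID draw on the far richer PRONOM registry; the two signatures below are well-known published ones, and the lookup is deliberately minimal:

```python
# Map leading byte signatures to format names. The PNG and PDF magic
# numbers are the published ones; a real registry holds thousands more.
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"%PDF-": "PDF document",
}

def identify(data):
    """Identify a file format from its leading bytes, or 'unknown'."""
    for magic, name in SIGNATURES.items():
        if data.startswith(magic):
            return name
    return "unknown"

print(identify(b"%PDF-1.4 ..."))
print(identify(b"\x89PNG\r\n\x1a\n..."))
print(identify(b"GIF89a..."))
```

Running such a check over a whole collection is exactly the time-saving automation point 3 argues for, compared with opening each file by hand.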
The document discusses classic web applications versus Ajax web applications and provides an overview of how Struts 2 can be used to build both. It explains that classic applications involve full page reloads when submitting forms, while Ajax applications use JavaScript to submit requests and update parts of the page without reloading. It also summarizes that Struts 2 handles much of the work behind the scenes, supports separation of concerns, and allows defining actions and results through XML configuration or annotations, making development faster.
VRA 2023 Collections Management in Fashion and Media session. Presenter: Wen Nie Ng
The goal of the paper is to enhance the metadata standard of fashion collections by expanding the controlled vocabulary and metadata elements for Costume Core, a metadata schema designed specifically for fashion artifacts. Various techniques are employed to achieve this goal, including identifying new descriptors using word embedding similarity measurements and adding new descriptive terms for precise artifact descriptions to use when re-cataloging a university fashion collection in Costume Core. The paper also provides a sneak peek of the Model Output Confirmative Helper Application, which simplifies the vocabulary review process. Additionally, a survey was conducted to collect insights into how other fashion professionals use metadata when describing dress artifacts. The survey results reveal 1) commonly used metadata standards in the historic fashion domain; 2) sample metadata respondents use; and 3) partial potential metadata that can be appended to Costume Core, which is relevant to Virginia Tech's Oris Glisson Historic Costume and Textile Collection. The expanded Costume Core resulting from the project offers a more comprehensive way of describing fashion collection holdings/artifacts. It has the potential to be adopted by the fashion collections to produce metadata that is findable, accessible, interoperable, and reusable.
VRA 2023 Adventures in Critical Cataloging session. Presenters: Sara Schumacher and Millicent Fullmer
This paper will cover the results of a research study looking at visual resources professionals' perceptions of the visual canon at their institutions and their actions confronting biases in their visual collections. This research is innovative because the "visual canon" as a concept is often evoked but rarely defined, and there has not been research into perceptions and practices that span different types of cultural heritage institutions. The researchers seek to focus on the role of the visual resources professional as a potential change-maker in confronting bias and transforming the “visual canon.” In our presentation, we will discuss the analysis of our survey and interviews around three key research questions: What barriers do visual resources professionals perceive in remedying the biases in the visual canon? What authorities, past and present, do they identify in shaping the visual canon? How do they approach teaching users to identify and critically confront these issues? We will highlight trends as well as unique concerns and solutions from our research participants and engage our audience with how these issues impact their own collections, policies, and instruction.
VRA 2023 Beyond the Classroom: Developing Image Databases for Research session. Presenter: John J. Taormina
The Medieval Kingdom of Sicily Image Database project collects historic images of the medieval monuments of South Italy, from the so-called Kingdom of Sicily dating from c. 950 to c. 1430, during the Norman, Hohenstaufen, Angevin, and early Aragonese periods. The project was begun in 2011, as part of a 3-year Collaborative Research Grant from the National Endowment for the Humanities, under project investigators Caroline Bruzelius, Duke University, and William Tronzo, University of California, San Diego.
The site features over 8,000 historical images in a range of media, including drawings, paintings, engravings, photographs, and plans and elevations culled from museums, archives, and libraries in Europe and America, often from the Grand Tour, as well as from available publications. The value of the database lies in making accessible to scholars the visual documentation of changes to historical sites because the medieval monuments of South Italy have been damaged, changed, and restored on many occasions, with tombs and liturgical furnishings often destroyed, dismantled, or removed. In fact, many of the 600 monuments no longer exist, often bombed during World War II or destroyed in earthquakes, or obscured by modern buildings and urban sprawl.
VRA 2023 Archives Tools and Techniques session. Presenters: Maureen Burns and Lavinia Ciuffa
The Ernest Nash collection documents ancient Roman architecture in pre- and post-World War II Italy. What made Nash's work significant, beyond capturing the present state of the ancient Roman monuments at a volatile historical moment, was the primacy of the topographical photography and the systematic order he brought to this subject. The American Academy's Photographic Archive contributed Nash's images to an open access, interactive website called the "Urban Legacy of Ancient Rome." It reveals the city in stunning detail and uses geo-referencing to provide the viewer with a better understanding of the overall contextual and spatial logic. These Nash images and metadata are also IIIF compatible. As the Academy continues to digitize and describe the full collection of about 30,000 images, thanks to the generous support of the Kress Foundation, a new partnership has developed with Archivision and vrcHost. Current high-quality digital photographs of the same ancient Roman monuments are being added to compare with the historical images, documenting architectural changes, whether conserved, restored, altered, reconstructed, re-sited, or destroyed. This presentation will provide a progress report on what it takes to move new digital photography into IIIF and the various tools available for close examination and presentation. Finding ways to provide ready access and to juxtapose historic and contemporary photography online builds upon the legacy of Nash's quality curation and scholarship to create accessible, 21st-century online educational resources of great interest and utility to scholars, students, and a wide audience of ancient Roman enthusiasts.
VRA 2023 Exploring 3D Technologies in the Classroom session. Presenter: Amy McKenna
Amy McKenna (Williams College) discusses her project that uses Photoshop and cardboard 3D glasses to recreate the 19th-century spectacle of a historic glass stereo collection.
VRA 2023 Keynote. Presenter: Melissa Gohlke
A historical record that focuses on white, heteronormative society and events obscures many facets of San Antonio history. Peel back the veneer of normalcy and one can find rich, diverse, and unexpected strands of the city’s past. From female impersonators of the early 1900s to queer life in derelict spaces during the 1960s and finally, gay and lesbian bar culture of the 1970s and beyond, the hidden threads of San Antonio’s history reveal themselves. In this presentation, LGBTQ Historian Melissa Gohlke explores these hidden histories and stitches together an alternative interpretation of the city’s historical narrative by examining a wealth of primary sources found in archives and personal collections.
About the speaker:
Melissa Gohlke is an urban historian who specializes in San Antonio LGBTQ+ history. For over a decade, Gohlke has been researching queer history in San Antonio and South Texas and sharing her passion for this history through extensive outreach activities such as presentations, media interactions, exhibits, and written work. Gohlke is the Assistant Archivist for UTSA Libraries Special Collections.
About the VRA:
The Visual Resources Association is a multidisciplinary organization dedicated to furthering research and education in the field of image management within the educational, cultural heritage, and commercial environments.
VRA 2023 Beyond the Classroom: Developing Image Databases for Research session. Presenter: Mark Pompelia
Material Order is an academic consortium of material sample collections (including wood, metal, glass, ceramic, polymers, plastics, textiles, bio-materials, etc.—any material that might be used in or considered for art, architecture, and design disciplines) founded by the Graduate School of Design at Harvard University and Fleet Library at Rhode Island School of Design and now comprising several more institutions in the US. It provides a community-based approach to management and access to material collections utilizing and developing standards and best practices. Material Order created the Materials Profile that serves as a shared cataloging tool on the LYRASIS CollectionSpace platform and can be further developed as the different needs of consortium members are identified. Open Web searching across all collections occurs via a front-end discovery portal built with Wordpress at materialorder.org.
The Material Order project was born from the acknowledgment that resource sharing and collaborative catalogs are the most promising approach to exploration and implementation. It was always the intent, now actualized, for partner institutions with different mission and scope to compel the project to consider and accommodate criteria such as material health ecologies, fabrication possibilities, and overlap into adjacent fields such as engineering and archeology. Thus, Material Order represents not just items on a shelf but a knowledge-base of compositions, uses, forms, and properties. No longer in its infancy, Material Order provides a shared and adaptable framework for managing collections across the consortium and optimal facilitation of materials-based research and exploration for art, architecture, and design applications.
VRA 2023 New Frontiers in Visual Resources session. Presenters: Meghan Rubenstein and Kate Leonard
The Art Department at Colorado College is piloting a Personal Archiving program in select undergraduate studio courses that combines visual and digital literacy instruction with personal reflection and professional development. Meghan Rubenstein, Curator of Visual Resources, and Kate Leonard, Professor of Art, will discuss the drive behind this initiative to develop student competencies within a liberal arts setting. We will share our ongoing iterative process as well as select student activities and learning outcomes that may be adapted by various institutions.
VRA 2022 Teaching Visual Literacy session. Presenter: Molly Schoen
Our everyday lives are more saturated in images and videos than at any other time in human history. This fact alone underscores the need to implement visual literacy skills at all stages of education, from pre-K to post-grad. Learning how to read images with critical, analytical eyes is crucial to understanding the world around us as we see it represented in the news, social media, advertisements, etc. New technologies have exacerbated this already urgent need for visual literacy education. Synthetic media, deepfakes, APIs, bot farms, and other forms of artificial intelligence have many innovative uses, but bad actors also use them to fan the flames of disinformation. We have seen the grave consequences of this age of disinformation, from undermining elections to attempts to delegitimize science and doctors, undoubtedly raising the death toll from the COVID-19 pandemic. What do we need to know about these new forms of altered images made by artificial intelligence? How do we distinguish real, human-made content from computer-generated fakes, which are becoming more and more difficult to detect? This paper aims to raise awareness of how new forms of visual media can manipulate and deceive the viewer. Audience participants will learn how to empower themselves and their peers to be more savvy consumers of visual materials by understanding the basics of AI and recognizing the characteristics of faked media.
VRA 2022 Individual Papers Session. Presenter: Malia Van Heukelem
This case study of a large artist archive at a medium-sized academic research library will connect the success of the artist serving as his own archivist with the collection's broad research appeal locally, nationally, and internationally. As with many artist archives, the collection represents much more than the artist's own work: correspondence, fine art prints, and ephemera of other artists and writers are hidden within it. The foundation of organization is in place; now the focus is on creating online access points through finding aids and image collections. The presentation will explore the use of ArchivesSpace, Omeka, and other software to increase access. It will also demonstrate how a solo archivist can leverage interns, student assistants, and volunteers for collections management projects that benefit both institutional priorities and desired learning outcomes. This talk will delve into the challenges of 20th-century visual resource collections, such as copyright and engagement with donors. Featuring a local artist has brought other art and architecture collections to the library without clear collecting boundaries, which has led to questions of sustainability and of who and what is collected. There is a need to balance the historical record, and yet there are already more archival collections accessioned than can be responsibly managed by one person. The primary collection does include works by women and artists of color, yet much descriptive work remains to foreground the diversity contained within. For an archivist and librarian at a public university, there are many competing demands for collections management, support of researchers, and instruction, plus the added interest in exhibition loans and the desire for other artists and architects to be represented. This artist archive is both interesting and complex.
This document summarizes an art history course titled "Pattern & Representation: Critical Cataloging for a New Perspective on Campus History" taught at Oklahoma State University. The course examines major developments in American art across different media from European contact through the mid-20th century. As part of the course, students are divided into groups to create digital exhibitions cataloging artworks from university newspaper archives between certain years. Students must include contextual information and link their entries to related articles. Their entries and a reflective essay are graded individually based on their work plan. The course introduces the concept of "critical cataloging" to bring social justice perspectives to archival and metadata work.
VRA 2022 session. Organizer/Moderator: Allan T. Kohl. Speakers: Virginia (Macie) Hall, Christina Updike, Marcia Focht, Rebecca Moss, Steven Kowalik, Jenni Rodda
During the past year, the “Great Resignation” (a.k.a. the “Big Quit”) has roiled the world of employment nationwide in the wake of the COVID-19 pandemic, which had already caused job losses among our membership. While many institutions and individuals now hope for a “return to normal,” others anticipate that the past two years mark a watershed necessitating further transformational changes in the years ahead. These larger employment trends have come on top of quantum shifts in the visual resources field itself, as traditional tasks give way to new responsibilities and siloed image collections are replaced by interdisciplinary projects.
For several years, our annual conferences have featured the perspectives of newer professionals in “Stories from the Start.” Looking at the opposite ends of their career arcs, this session brings together the perspectives and experiences of two pre-pandemic retirees, two of our members who made their decisions to retire during the past year, and two currently active professionals whose retirements are pending in the near future. When and why did they make their decisions to retire? What was/is the actual process? Concerns? What comes next after we leave our offices for the last time?
VRA 2022 Digital Art History session. Presenters: Melissa Becher and Samuel Sadow
In 2019, the art history program at American University gave its masters students a new option for the capstone project that is the culmination of the degree: create a digital project on an art historical topic using Omeka S or Wordpress. Initially, only a single student chose to complete a digital capstone over a traditional thesis, but within two years there was near parity between the two options, meaning seven digital capstones for the 2021 cohort. To support these projects, a close partnership quickly developed between the University’s library, the visual resources center, and the archives. This session covers how three campus units coordinate that support for these innovative digital humanities projects, including administration of the platforms, instruction, technical support, preservation, and access to the final projects. The session will also showcase examples of student work to demonstrate the variety and creativity of projects that can be accomplished using these platforms, as well as their contributions to the field of art history. The outcome of this initiative is clear: the best of digital humanities, weaving design and technology with rigorous art historical research, and finished projects that have already resulted in successful job applications in the field.
VRA 2022 Material Objects and Special Collections session. Presenters: Allan T. Kohl and Jackie Spafford
Materials-based collections represent a challenging new mode of information management in terms of subject specialization, physical description and accommodation, and institutional mission. Building upon the successful introductory meeting of this Group in Los Angeles at the 2019 Conference, the goal of this SIG is to provide a forum for open discussion of Material and Object Collections and their relationship to various library/visual resources tasks. The Material and Object Collections SIG provides an opportunity for individuals working with a variety of materials and objects collections – including those that support art and art history courses, those that support architecture and design courses, and those in cultural heritage organizations – to share ideas, issues, and potential solutions in regard to tasks similar to common library/visual resources activities (including cataloging, documentation, staffing, outreach), as well as more specialized concerns relating to the management of physical objects (security, storage and retrieval, the design of user spaces, etc.).
By continuing to offer an opportunity for participants to share brief introductions and profiles of their collections, we hope to encourage networking and exchange information about sources for specialized items; to display sample items and share surplus samples with other collections; and to provide examples of successful solutions to typical problems. Our long-range goal is to maintain an ongoing support group that can be of particular benefit to those professionals who are in the beginning stages of building or organizing physical collections.
VRA 2022 Digital Art History session. Moderator: Otto Luna
Exploration of visualization tools in the Digital Humanities/Digital Art History realm. Presenter: Catherine Adams
Assessing the use of Qualitative Data Analysis Software (QDAS) by Art Historians and Archaeologists. Presenter: Kayla Olson
Supporting Art History Students’ Digital Projects at American University. Presenters: Samuel Sadow and Melissa Becher
VRA 2022 Digital Art History session. Presenter: Kayla Olson
This paper discusses a study (completed in the spring of 2021) exploring how common the use of Qualitative Data Analysis Software (QDAS) is among two kinds of object-based researchers: art historians and archaeologists. Surveys were disseminated in a snowball fashion and contained open and closed questions. The questions sought to give participants a platform to describe if, why, and how they use programs like Atlas.ti, NVivo, Dedoose, and MAXQDA throughout their research process. While not QDAS, the image management application Tropy was also included. The author hopes that the anonymized responses will prompt discussion among professionals in academic librarianship and visual resources management about the possible impact of these digital tools on researchers in these disciplines. The question remains whether researchers in art and material culture disciplines would benefit more from QDAS if they were aware of (1) its existence and (2) its ability to help organize artifact data and to assist in performing image-based analysis.
VRA 2022 Critical Cataloging Conversations in Teaching, Research, and Practice session. Presenter: Ann M. Graf, Assistant Professor of Library and Information Science, Simmons University
In the field of information science, we strive to provide access to information through the most efficient means possible. This is often done through the use of controlled vocabularies for description of subjects, and, in the case of art objects, for the identification of styles, processes, materials, and types. My research has examined the sufficiency of controlled vocabularies such as the Art and Architecture Thesaurus (AAT) for description of graffiti art processes and products. This research is evolving as the AAT is responding to warrant for a broader set of terms to represent outsider art communities such as the graffiti art community. The methods used to study terminological warrant by examining the language of the graffiti art community are helpful to give voice to artists who work outside the traditional art institution, allowing the way that they talk about their work and how they describe it to become part of the common discourse. It is hoped that this research will inspire others who design and supplement controlled vocabularies for use in the arts to give priority in descriptive practice to those who have been historically underrepresented or made invisible by default use of terminology that does not speak to their experiences.
VRA 2022 Session. Presenter: Douglas Peterson
In 2021, the National Archives of Estonia engaged Digital Transitions’ Service division, Pixel Acuity, to build an Artificial Intelligence (AI) tool to analyze part of its historic record. The objective was to use this tool to enhance their collection with descriptive metadata that identified persons of interest in a collection of over 8,000 photographic glass plate negatives, a task that would ordinarily take years of human labor. In this presentation, we discuss our approach to accurately detecting and identifying human subjects in transmissive media, our initial findings using commercially available AI models, and the subsequent refinements made to our workflow to generate the most accurate metadata. In addition to working with commercially available AI models, we developed strategies for validation of AI-generated results without additional human supervision, and explored the benefits of building bespoke, heritage-specific AI models. By combining all of these tools, we developed a highly customized solution that greatly expedited accurate metadata generation with minimal human oversight, operated efficiently on large collections, and supported discovery of novel content within the archive.
VRA 2022 Community Building Session. Presenter: Dacia Metes
Queens Memory is an ongoing community archiving program that engages with our local communities in our two-fold mission to (1) push local history collections out to the public through programming and online resources, and (2) pull new materials into our collections from the diverse communities of Queens, NYC. The COVID-19 pandemic forced us to close our buildings, cease all in-person work and programming, and shift our work to the virtual world. Our team quickly modified our processing workflow and asset tracking to handle the high volume of crowd-sourced donations coming through new online submission forms, set up in rapid response to capture the stories coming from the pandemic’s first epicenter in the U.S. In my proposed conference session, I will discuss how we planned and managed the shift to fully online collection development. I will talk about our virtual outreach efforts to engage with the community and get them to contribute their materials, and how we developed the online tools and processes that allowed us to collect photographs, oral history interviews, and other audio/visual materials, while also capturing the necessary metadata and consent forms. New internal communications channels, roles for volunteers, and triage processing for publication resulted from these efforts and are now essential parts of the team’s practices.
The document summarizes a workshop on accessibility guidance for digital cultural heritage collections. The two-hour workshop includes presentations on accessibility requirements and workflow strategies, a breakout activity where participants practice creating accessible descriptions for images, and a wrap-up discussion. The presentations cover topics such as common barriers to accessibility; guidelines for making images, video, audio, and documents accessible; and best practices for incorporating accessibility into workflows. The breakout activity has participants work in groups to write alt-text and accessibility descriptions for sample images from online collections.
Embedded Metadata working group
1. Embedded Metadata working group. Johanna Bauman, Pratt Institute; Sheryl Frisch, Cal Poly, San Luis Obispo; Jesse Henderson, Colgate University; Greg Reser, UCSD; Kari Smith, University of Michigan; Steve Tatum, Virginia Tech. http://metadatadeluxe.pbworks.com
41. Custom XMP info panel. Useful to: curators. Easily shared with: database.
42. Extension fields: Creator; Title; Date Created; Source; Source Inventory Number; Copyright Notice; Artwork or Object in the Image.
43. Extension: Date Created is a single calendar date only; no free text; no BCE or CE. An Artwork or Object in the Image Date Created value such as “built 1298 – 1310, destroyed 1943” produces an error (what are you doing?).
First, a quick explanation of embedded metadata. Digital music players illustrate how we all use embedded data on a daily basis without giving it a second thought. How do MP3s do it? That’s right, embedded metadata! Once a music file has its Artist, Album, Title, Genre, and other information embedded, it can be moved to another computer, an iPod, or the cloud and identify itself. It can be integrated into a new user’s database based on these fields and be sorted and searched.
Wouldn’t it be great if digital image files were as easy to use? It would be great if you could move image files from software to software, from device to device and still be able to search, categorize, and sort them.
More often than not, all you know about a digital image is its file name and when it was created.
By embedding metadata in the files, images can have the same usability as MP3s, like displaying the Title and tags.
And searching.
How does this work? Digital image formats, such as TIFF, use a set of metadata tags in the file header to tell your system how to interpret and display the image. Some of these tags contain data recorded by cameras and scanners such as date, time, and device settings.
There is no reason that data about what is shown in the image can’t be encoded as well; it’s all just bits, after all. The TIFF format has several tags for descriptive metadata.
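To show just how literally "all just bits" this is, here is a sketch in Python (standard library only; the description text and byte offsets are invented for illustration, following the published TIFF 6.0 layout) that builds a minimal TIFF byte stream carrying the descriptive ImageDescription tag (270) and then reads it back:

```python
import struct

# Build a minimal little-endian TIFF whose IFD carries one descriptive
# tag: 270 (ImageDescription), per the TIFF 6.0 layout. The description
# text is a made-up example.
description = b"Duomo, Florence; photo 1998\x00"

# Header: byte order "II" (little-endian), magic number 42, offset of first IFD
header = struct.pack("<2sHI", b"II", 42, 8)

# IFD: entry count (1), one 12-byte entry, next-IFD offset (0).
# Entry: tag=270, field type=2 (ASCII), value count, offset of the value bytes
value_offset = 8 + 2 + 12 + 4          # value stored right after the IFD
entry = struct.pack("<HHII", 270, 2, len(description), value_offset)
ifd = struct.pack("<H", 1) + entry + struct.pack("<I", 0)

tiff = header + ifd + description

# Read it back: follow the header to the IFD, then to the tag's value bytes
order, magic, ifd_off = struct.unpack_from("<2sHI", tiff, 0)
tag, ftype, n, off = struct.unpack_from("<HHII", tiff, ifd_off + 2)
text = tiff[off:off + n].rstrip(b"\x00").decode("ascii")
print(tag, text)   # 270 Duomo, Florence; photo 1998
```

A real file would of course also carry the image-data tags, but the descriptive text rides along in exactly the same tag structure.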
The process of embedding the data is handled by your photo editing or organization tool. It converts your text into code and adds it to the file.
There have been a few different embedded metadata formats, but one of the newest, and one which has been widely adopted, is Adobe’s XMP. XMP encodes the caption using RDF/XML. Here is a snippet for Dublin Core Subjects.
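A Dublin Core subject list in XMP is stored as an rdf:Bag of rdf:li values inside an rdf:Description. This Python sketch (the subject terms are invented for illustration) parses such a snippet with the standard library to pull the subjects back out:

```python
import xml.etree.ElementTree as ET

# A small XMP-style RDF/XML fragment: Dublin Core subjects live in an
# rdf:Bag of rdf:li elements. The subject values here are examples only.
snippet = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="">
    <dc:subject>
      <rdf:Bag>
        <rdf:li>architecture</rdf:li>
        <rdf:li>Gothic</rdf:li>
        <rdf:li>cathedral</rdf:li>
      </rdf:Bag>
    </dc:subject>
  </rdf:Description>
</rdf:RDF>"""

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
root = ET.fromstring(snippet)
subjects = [li.text for li in root.iter(f"{{{RDF}}}li")]
print(subjects)   # ['architecture', 'Gothic', 'cathedral']
```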
Many photo applications provide viewing and editing of embedded metadata
There are already several info panels built into Photoshop and Bridge for embedding metadata. They have been used by photographers for a long time. These standard panels utilize several different schemas, but none of them are suited to art and architecture. Fortunately, Adobe has open source tools that make it easy to build your own custom panel. This is the reason we have started with a tool for Photoshop and Bridge.
To fill the need for an info panel for the cultural heritage community, we built the VRA panel; this is it, or part of it anyway. We tried to select the fields most necessary for describing cultural objects.
In Adobe Creative Suite applications, metadata info panels are accessed by selecting “File Info…” in the File menu
You then see all the panels you have loaded.
Purpose: facilitate sharing descriptive metadata between a user and a chosen recipient, such as a database curator, an image sharing service, or a colleague. Useful to users and curators alike, from basic users with consumer-level tools to pros with sophisticated tools. Lessen the barriers to general understanding; provide complexity when desired.
First of all, the VRA panel is meant to be easily integrated into users’ workflows by inserting itself into production tools they already use: Photoshop and Bridge. We have plans to build a stand-alone version for when Creative Suite isn't an option.
Allow database assistants to enter pre-cataloging information, i.e., source captions, original resource documentation, or backlog tracking information. Use Adobe Bridge's bulk metadata input and editing capabilities to increase efficiency.
Field photography Allow curators and collection managers to more efficiently collect metadata from faculty and student contributors and ingest it into a central database.
Because it’s a form of RDF, an XMP record can be a mix of various schemas, as long as you follow the specifications of each.
Mix and match: Dublin Core, Photoshop, XMP, and IPTC. Many major software and hardware makers have worked together to use the same XMP properties for metadata to ensure interoperability. If a property already exists that matches your definition, use it. Why create a new one?
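As a rough sketch of what such a mixed-schema record looks like, the following Python (standard library only) builds a single rdf:Description whose properties come from three different namespaces; the namespace URIs are the standard published ones, but the field values are invented:

```python
import xml.etree.ElementTree as ET

# Standard namespace URIs for Dublin Core, the XMP basic schema, and
# the Photoshop schema; the element values below are made-up examples.
NS = {
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "dc": "http://purl.org/dc/elements/1.1/",
    "xmp": "http://ns.adobe.com/xap/1.0/",
    "photoshop": "http://ns.adobe.com/photoshop/1.0/",
}
for prefix, uri in NS.items():
    ET.register_namespace(prefix, uri)

rdf = ET.Element(f"{{{NS['rdf']}}}RDF")
desc = ET.SubElement(rdf, f"{{{NS['rdf']}}}Description")
# One record, three schemas, side by side:
ET.SubElement(desc, f"{{{NS['dc']}}}title").text = "Florence Cathedral"
ET.SubElement(desc, f"{{{NS['xmp']}}}CreateDate").text = "1998-06-15"
ET.SubElement(desc, f"{{{NS['photoshop']}}}Credit").text = "J. Smith"

xml_out = ET.tostring(rdf, encoding="unicode")
print(xml_out)
```

Each property stays in the namespace that defined it, which is what lets other tools pick out the parts they understand.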
We could have gone with the classy Getty CDWA lite
Or good time lovin’ VRA.
But we went boring, choosing the most widely used schema for embedded metadata, the one that most tools recognize. IPTC has been around for a long time, and it is used in just about every photo application and social media site out there.
When choosing fields, like Work Title or Image Copyright, the idea is to start with schemas that are most widely used by the majority of photo applications and web services and then move down the list, using specialized schemas last. This places as much of the metadata as possible in properties that will be read by common tools. The approach to building the VRA panel was to use as many well-known namespaces as possible to provide interoperability with a wide range of photo software. The first schema used was IPTC core and Extension, then PLUS, then any other namespace built into XMP (as specified in the XMP specs, part 2). Remaining properties were assigned to the VRA Core 4.0 namespace. This ensures that the most essential data about an artwork can be read when the user does not have access to the VRA or IPTC Adobe CS panels. Further, key fields are combined to create Tags and a photo Caption, the most widely supported fields in photo applications, web sites and operating systems.
There’s VRA, all the way at the bottom, looking like the least favored of the children. It turns out, however, that it has an important role to fill, stepping in when the other schemas can’t fulfill our needs.
It’s not all about database curators; we also want to give users the benefits of embedded info that can be used in many places: the operating system, photo apps, and social media. We want to enable contributors to keep embedded metadata in their image files for the purpose of managing them with common desktop photo applications and sharing with colleagues and students.
First on our list were the well known and well used Dublin Core fields. These are displayed by most photo tools.
So what about all that detailed artwork information we are embedding? It might not be seen without specialized applications like Photoshop, right? To make sure that the most important information is carried to all likely destinations, the VRA panel concatenates most of the artwork fields into the Title, Caption, and Tags.
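A minimal sketch of that concatenation step, assuming a flat record of display fields (the field names and artwork values here are hypothetical, not the panel's exact property names):

```python
# Hypothetical artwork record using VRA-style display fields.
artwork = {
    "Title": "The Starry Night",
    "Agent": "Vincent van Gogh",
    "Date": "1889",
    "Material": "oil on canvas",
    "Repository": "Museum of Modern Art",
}

def build_caption(rec: dict) -> str:
    """Join the artwork display fields into a single caption string,
    skipping any fields the contributor left empty."""
    order = ["Title", "Agent", "Date", "Material", "Repository"]
    return "; ".join(rec[f] for f in order if rec.get(f))

def build_tags(rec: dict) -> list:
    """Pull a few fields out as flat keywords for the Tags property."""
    return [v for v in (rec.get("Agent"), rec.get("Material")) if v]

caption = build_caption(artwork)
tags = build_tags(artwork)
print(caption)
print(tags)
```

Because Caption and Tags are read by virtually everything, the artwork description survives even in tools that have never heard of VRA.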
Here, you can see the concatenated data for this image in the Windows 7 file explorer metadata area.
Here is the same file in Picasa.
Here it is in Flickr. There is our caption. Flickr also displays keywords and Title, but not all of our artwork fields are shown. We want the artwork information to be seen in consumer-level tools – especially free ones like Flickr.
But we also want to provide complexity when desired. This means introducing structured data.
Our first choice for this was going to be IPTC, specifically Extension because it is well structured and it includes fields that are useful for the VRA panel including fields exclusively for artworks. Our original intent was to use all of these fields and then use VRA for the remaining fields such as Measurements, Materials, Technique, Culture, Style/Period. This turned out to be harder than we thought.
For instance, “Date Created” is a single calendar date only and doesn’t allow for a range of dates or a complex free-text date such as “built 1298 – 1310, destroyed 1943”.
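The mismatch is easy to demonstrate: a single calendar date parses cleanly, while a free-text range can only go into a free-text field. A small sketch of that check (the helper name is ours, not an IPTC API):

```python
from datetime import date

def fits_iptc_date_created(value: str) -> bool:
    """A single ISO calendar date fits IPTC's Date Created;
    anything else has to fall back to a free-text date field."""
    try:
        date.fromisoformat(value)
        return True
    except ValueError:
        return False

print(fits_iptc_date_created("1889-06-01"))                         # True
print(fits_iptc_date_created("built 1298 - 1310, destroyed 1943"))  # False
```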
So with several fields remaining to be mapped we turned to VRA Core 4.0. Wanting to join the RDF revolution we decided to implement the Core 4 XML structure in XMP RDF/XML. This made sense because it matched IPTC’s deeply nested structure.
Here is a snippet of Core 4 XMP RDF/XML. Sure, it looks great. It’s got lots of arrows, colons, and slashes, and everything is nicely indented, so it should work well. Unfortunately, complications got in the way.
One of these was the difficulty of connecting the IPTC and VRA data sections. RDF data is written in arrays, or sections, and these have to be merged in an exported Excel document. This is not impossible; we are just not in a position to build a tool to do it.
You need an extraction tool that can match repeating VRA sub-elements like “Location” and “Location Type” to IPTC Location Shown versus Location Created and Repository.
Another thing we tried, and the method that would be the most reliable and computer friendly, would be to nest VRA within IPTC. This keeps all the artwork data together in one array and makes it possible to describe multiple artworks using multiple arrays, each one being a completely discrete packet. This method is supported by XMP.
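Structurally, the nested approach looks something like the sketch below: an IPTC-style array of artwork packets, with the VRA-only properties tucked inside each packet. Property names and artwork values are illustrative, not the exact XMP paths.

```python
# Sketch of the nested layout: each artwork is one self-contained packet
# in an IPTC-style array, with VRA-only properties nested inside it.
# Names and values are illustrative.
artworks = [
    {   # packet 1: IPTC Artwork-or-Object-style fields
        "AOTitle": "Pieta",
        "AOCreator": "Michelangelo",
        # VRA properties nested inside the same packet
        "vra": {"Material": "marble", "Measurements": "174 cm (height)"},
    },
    {   # packet 2: a second artwork shown in the same image
        "AOTitle": "Baldachin",
        "AOCreator": "Bernini",
        "vra": {"Material": "bronze", "Measurements": "28.74 m (height)"},
    },
]

# Because each packet is discrete, per-artwork extraction is a simple loop.
for art in artworks:
    print(art["AOTitle"], "-", art["vra"]["Material"])
```

The appeal is that nothing has to be matched up after the fact: everything about one artwork travels in one packet.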
Unfortunately, Adobe hasn’t updated Photoshop and Bridge to support it, so what happens in most applications is that the nested schema data is deleted. This is much too unreliable at this point.
We have now decided to keep it simple and use a flattened version of Core 4 display fields. This means that we will eliminate the nested arrays of parsed terms and use single free text display values instead. These are very easy to extract to Excel using tools such as ARTstor’s EMET. A spreadsheet is the easiest and most common way people will use it. This does mean that curators will have to do some work before they can ingest the data into their database.
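With the flattened approach, each Core 4 display field is a single free-text string, so spreadsheet export becomes trivial. A sketch using the standard csv module (the records are hypothetical; a real workflow would pull them from the extracted XMP):

```python
import csv
import io

# Flattened Core 4 display values: one free-text string per field,
# ready for spreadsheet export. Records are hypothetical.
records = [
    {"Title": "Cathedral, west facade",
     "Date": "built 1298 - 1310, destroyed 1943",
     "Material": "limestone"},
    {"Title": "Cloister",
     "Date": "ca. 1310",
     "Material": "limestone; marble"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["Title", "Date", "Material"])
writer.writeheader()
writer.writerows(records)
print(buf.getvalue())
```

This is the shape a curator would clean up and parse before ingest, which is the trade-off mentioned above: easy extraction now, a bit of parsing work later.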
When the tools allow it, mix IPTC Extension and VRA Core 4.0 in a true RDF structure, either nested or separate. This would make the data more database-friendly, reducing the amount of work a curator has to do.
Interactive geocoding that links to Google, Yahoo and other online maps.
To make it easy for users to input parsed data, the panel could query a semantic web resource, such as VIAF, and retrieve authoritative database-ready data.
Encode values as linked data URIs that query a central database for the most accurate information possible. You could also embed any level of data that you think will provide a basic description of the content and then take the user to a central source for complete information.
Retrieve data from semantic databases (DBpedia, GeoNames, LoC, Europeana, VIAF, NY Times terms)
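One way the linked-data idea could look in practice: store a human-readable label alongside a URI that points at the authoritative record, so downstream tools can fetch complete information from the central source. The helper below is our own sketch, and the VIAF ID shown is illustrative; real IDs would come from a lookup against the service.

```python
# Sketch: pair a display label with a linked-data URI so a downstream
# tool can resolve the authoritative record. The ID is illustrative.
def linked_value(label: str, viaf_id: str) -> dict:
    return {"label": label, "uri": f"http://viaf.org/viaf/{viaf_id}"}

creator = linked_value("Gogh, Vincent van", "9854560")
print(creator["label"], "->", creator["uri"])
```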