JCDL 2012 Doctoral Consortium presentation by Justin F. Brunelle. Covers the problem Web 2.0 creates for preservation, and proposes a solution for client-side capture of content.
This document discusses the evolution of digital libraries from Digital Library 1.0 to Digital Library 3.0. Digital Library 1.0 focused on providing access to digital documents but had issues with interaction and scalability. Digital Library 2.0 aimed to solve these issues by applying Web 2.0 technologies like user tagging and commenting. Digital Library 3.0, or Semantic Digital Libraries, will apply Semantic Web technologies like RDF and ontologies to provide interconnected metadata and enable new search paradigms through ontology-based search. This will help transition digital libraries from static information sources to dynamic knowledge spaces.
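The ontology-based search the summary describes can be sketched with a toy RDF-style triple store: a query for a class also matches documents typed with any of its subclasses. The vocabulary and document identifiers below are hypothetical, purely for illustration.

```python
# Minimal sketch of ontology-based search over RDF-style triples.
# The classes and documents are made up for illustration only.
triples = [
    ("Sonata", "subClassOf", "MusicalWork"),
    ("Symphony", "subClassOf", "MusicalWork"),
    ("doc1", "type", "Sonata"),
    ("doc2", "type", "Symphony"),
    ("doc3", "type", "Letter"),
]

def subclasses(cls):
    """Return cls plus everything declared as its (transitive) subclass."""
    found = {cls}
    changed = True
    while changed:
        changed = False
        for s, p, o in triples:
            if p == "subClassOf" and o in found and s not in found:
                found.add(s)
                changed = True
    return found

def ontology_search(cls):
    """Find documents typed as cls or as any of its subclasses."""
    classes = subclasses(cls)
    return sorted(s for s, p, o in triples if p == "type" and o in classes)

print(ontology_search("MusicalWork"))
```

A literal keyword search for "MusicalWork" would match neither doc1 nor doc2; the subclass closure is what turns the static metadata into an interconnected knowledge space.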
The document provides an overview of Web 2.0 and Health 2.0. It defines Web 2.0 as the transition from static web pages to a more dynamic and user-generated web where users can interact and collaborate. Examples of Web 2.0 applications include blogs, wikis, social networks, and user reviews. Health 2.0 applies similar principles of user participation and collaboration to the healthcare field by enabling patients to access and share health information online. The document discusses how platforms like PatientsLikeMe allow patients to connect with others and participate more actively in their own healthcare.
Aeliapedia: Knowledge Building with XWiki at AELIA (XWiki)
A subsidiary of Lagardere Services, AELIA employs 2,000 people across more than 20 airports in France, the UK, and Poland. AELIA launched a corporate initiative to reinforce knowledge sharing and promote communication between services. On March 10th, AELIA will launch AELIAPEDIA, its renewed intranet. It will address all AELIA employees, specifically targeting shop advisers, who need access to a great deal of information to do their work. The group chose XWiki Enterprise Manager to facilitate knowledge building and streamline communication through an easy-to-use interface fully integrated into the existing information system, at a reduced cost. The call will begin with a presentation on the project's stakes and first results, delivered by Jean Leroux (AELIA's CIO). Ludovic Dubost (XWiki's CEO) will then present the proposed solution and its implementation.
JIST tutorial: semantic wikis and applications (Jesse Wang)
This document provides an overview of a tutorial on semantic wikis and applications. It introduces the instructors Jesse Wang and Mark Greaves from Vulcan Inc., and Justin Zhang and Ning Hu from TeamMersion LLC. The tutorial covers topics like Semantic MediaWiki (SMW), SMW+, hands-on sessions, and connecting SMW to other systems. It aims to address challenges in building large knowledge bases by acquiring knowledge at scale and lower costs.
The document defines and discusses key concepts related to Web 2.0, including its characteristics, applications, and risks. It provides definitions of Web 2.0 from various sources and describes its key features as being user-centered, interactive, and facilitating information sharing. The document also outlines several categories and popular tools associated with Web 2.0 applications, and discusses some potential security and social risks.
The document discusses the impact of Web 2.0 on archives and the archival profession. It explores how Web 2.0 requires archives to let go of some control and embrace user participation through tagging, crowdsourcing, and user-generated content. While this poses preservation challenges, it also opens opportunities to grow audiences and make archives more relevant. Web 2.0 encourages a more personal approach that blends professional and private identities. Overall, the document argues that Web 2.0 represents a change in mindset that archives must adapt to in order to remain engaged with modern users.
Bridging the Web and Digital Publishing: EPUBWEB (Ivan Herman)
This document presents a vision called EPUBWEB that aims to bridge the gap between digital publishing and the open web. It envisions a future where portable documents are fully integrated citizens of the open web platform, allowing content to seamlessly move between online and offline consumption. Technical challenges are discussed, such as the need for standardized packaging, identification, styling and pagination to achieve this vision. The document proposes next steps like publishing a white paper and establishing groups to further specify an updated EPUB format called EPUBWEB.
This document discusses the concepts of Library 2.0 and user-generated content. It provides two examples of how libraries can incorporate this type of content: 1) Wikipedia and library name authority files, where metadata from Wikipedia entries is linked to library catalogs, and 2) Wikisource, a Wikipedia sister project containing freely accessible source materials. The document also discusses social tagging as a way for users to classify and organize information using freely chosen keywords. It concludes by encouraging libraries to embrace these new forms of user contribution and to build platforms that support collaborative work between librarians and users.
Art discovery group catalogue: Usage, content and new horizons (Janifer Gatenby)
The Art Discovery Group Catalogue was presented at the Art Libraries.net meeting in Copenhagen in October 2014. The presentation outlines the catalogue's content, interface developments, and new horizons, including data mining and language tagging for improved clustering and presentation, clustering of journal articles, data analysis, and improving data quality.
This talk was given at the Chrismash Mashed Library event in London on December 3, 2011. I spoke about the outcomes of an investigation into user experience and understanding of next-generation library catalogues, and the next steps we're taking at Senate House Library, University of London.
Creating better user interfaces for libraries catalogues: how to present and ... (Tanja Merčun)
ELAG 2013 slides and report for the workshop "Creating better user interfaces for libraries catalogues: how to present and interact with (FRBR-based) bibliographic data?" by Tanja Merčun and Maja Žumer
This document discusses how web-based catalogues and search functions are impacting libraries. It provides an overview of trends like Web 2.0 that have led to libraries making their resources available remotely through web-based online public access catalogs (OPACs). The document also gives specific details about the catalogue and resources available at the University of the Western Cape (UWC) library, which uses the ALEPH system and is a member of OCLC and CALICO agreements to provide access to holdings. It concludes that libraries must find ways to better meet the changing information needs of users.
From Catalogue 2.0 to the digital humanities: exploring the future of librari... (Sally Chambers)
This document discusses the evolving role of libraries and librarians in supporting digital scholarship and the digital humanities. It describes how traditional cataloguing tools like MARC are changing to incorporate new metadata standards and linked data. Research libraries' engagement with research infrastructures has been low but is increasing as opportunities arise in areas like research data management, digital repositories, and scholarly communication. The document argues libraries have important roles to play in discovery, data management, and as embedded partners supporting digital humanities researchers and their evolving needs. Collaboration between libraries and digital humanities centers is highlighted as a way to advance both fields.
User-Generated Content and Social Discovery in the Academic Library Catalogu... (Steve Toub)
1) The document discusses findings from user research on incorporating user-generated content and social discovery features into academic library catalogs.
2) Participants expressed a desire to see what trusted colleagues think of resources and find "gems" they don't know exist. However, few used existing social tools for academic purposes.
3) The strongest motivation for contributing user reviews was helping others find useful resources faster. Ensuring quality would involve authenticating users and exposing more than binary reviews.
Introducing Social Catalogues and Social Software into Public Libraries (Laurel Tarulli)
This is a presentation that was given at Dalhousie University's School of Information Management. It was presented to the first year students enrolled in the Knowledge Organization class.
Social Catalogues: Enriching Content that Enhances RA Services (Laurel Tarulli)
This presentation was given at the RA in a Day pre-conference session at the 2009 Atlantic Provinces Library Association Conference in Halifax, Nova Scotia.
OLA 2014: A Future of Freedom and Innovation in Library Catalogues (jocelyneandrews)
For too long the catalogue has been an extension of proprietary systems, offering us little opportunity to influence the functionality and usability of this mission-critical tool. While user expectations and our competition have changed radically, catalogues have not.
We will look at some current best-in-class catalogue examples, and consider the future of the catalogue, looking at how we can embrace next-generation trends like Linked Data and the Semantic Web. By advocating for systems that provide openness and flexibility, libraries will be empowered to face an uncertain technological future.
This document discusses next generation discovery tools for library catalogs and resources. It provides a brief history of library catalogs and how people traditionally search them, noting that users want full text searches but often use literal searches. The document then mentions an undergraduate research project and says the future will improve on current discovery methods. It includes several citations and the author's contact information.
Rethinking the library catalogue: making search work for the library user (Sally Chambers)
The document discusses the challenges libraries face in transforming traditional library catalogs into search experiences that are as user-friendly as popular search engines. It outlines approaches libraries can take to improve search capabilities, such as harvesting metadata to create centralized indexes, enabling full-text search, and using faceted search and relevance ranking models. The goal is to provide users with integrated, easily navigable search results from a library's diverse range of resources.
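The faceted search and centralized-index approach described above can be sketched in a few lines: harvest metadata into records, filter on exact facet values, and count remaining values per facet to drive the sidebar. The records and field names below are hypothetical, for illustration only.

```python
from collections import Counter

# Illustrative sketch of faceted search over harvested metadata.
# The records and field names are invented for this example.
records = [
    {"title": "Web Archiving", "format": "Book", "year": 2012, "language": "en"},
    {"title": "Digital Curation", "format": "Book", "year": 2010, "language": "en"},
    {"title": "Memento Protocol", "format": "Article", "year": 2012, "language": "en"},
    {"title": "Bibliotheksportale", "format": "Article", "year": 2010, "language": "de"},
]

def search(results, **filters):
    """Narrow a result set by exact-match facet filters."""
    for field, value in filters.items():
        results = [r for r in results if r.get(field) == value]
    return results

def facets(results, field):
    """Count how many results fall under each value of a facet field."""
    return Counter(r[field] for r in results)

hits = search(records, format="Article")
print(facets(hits, "language"))  # counts shown next to each facet link
```

In a production discovery layer the same filter-then-count step is typically delegated to a search engine's aggregation feature rather than computed in application code.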
Web 2.0, web searching and web based catalogue (Gerald Louw)
This document discusses web-based catalogues and how they have evolved with web searching trends. It describes how library resources can now be accessed remotely through web-based online public access catalogs (OPACs). The document also provides details on the functions and search methods of the University of the Western Cape library's web-based ALEPH catalogue, which is part of the CALICO agreement and indexes holdings in OCLC's WorldCat Local and Sabinet. It concludes that libraries must meet users' information needs by bringing resources to users online.
The document discusses the evolution of library catalogs from traditional to next-generation systems. Traditional catalogs were limited in scope and functionality, focusing only on printed materials. Next-generation catalogs incorporate features like federated search across multiple resources, enriched content like images and summaries, faceted navigation, user contributions and reviews, and recommendations. They integrate these new features and services into a unified discovery interface to provide a more modern library experience.
Library cataloging involves creating a list of all library materials arranged according to a systematic plan to help users locate items. The main purposes of a library catalog are to provide access to the library's collection and to direct users from natural language to an artificial classification system. There are different types of catalogs, with card catalogs being the most widely used as they allow for infinite expansion and easy updating. The essential information included in each catalog entry depends on the type of catalog but usually includes the author, title, and subject among other details.
What are the options for sellers and buyers collaborating on catalog content? Join a panel of leading suppliers as they discuss their catalog strategies and preference for hosted CIF versus PunchOut catalogs. Learn how leading sellers use the Ariba Network to drive exposure of their product content to procurement organizations and individuals at their key accounts, while leading procurement organizations use online catalogues to drive up compliance to contract terms and to improve the user experience.
This document provides an overview of the history and development of library cataloguing codes. It discusses early cataloguing practices and some of the seminal cataloguing codes developed over time, including Panizzi's 91 Rules, Jewett's Rules, Cutter's Rules, the 1908 ALA Code, Prussian Instructions, Vatican Rules, Classified Catalogue Code, ALA Rules 1949, Library of Congress Descriptive Rules, AACR1, AACR2 and its revisions. The document traces how cataloguing evolved from individual library practices to a more standardized and principle-based approach through the development of these various codes and standards.
This document discusses stateful microservices in cloud native environments. It begins by introducing the authors Grace Jansen and Mary Grygleski. It then discusses the differences between stateless and stateful computing, and how data can exist in different states. The document explores how microservices operate on data and how statefulness was handled in older client-server systems versus modern cloud native environments. Finally, it discusses techniques like caching, databases, and tokens that can preserve state across boundaries, and provides examples using Kubernetes, Apache Pulsar, and reactive systems.
The document discusses the key concepts of Web 2.0, including how it utilizes collective intelligence through social bookmarking, tagging, wikis and collaborative filtering. It also examines how Web 2.0 applications harness the network effect to aggregate user data and benefit from increased participation. Finally, it outlines some of the design principles of Web 2.0 such as treating the web as a platform, harnessing collective intelligence, and providing rich user experiences through technologies like AJAX.
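The collaborative-filtering idea mentioned above can be sketched with co-occurrence counting over social-bookmarking data: items saved by many of the same users are treated as related, so each new bookmark improves the recommendations (the network effect). The users and items below are made up for illustration.

```python
# Sketch of collaborative filtering over social-bookmarking data:
# items bookmarked by many of the same users are treated as related.
# Users and items are hypothetical.
bookmarks = {
    "alice": {"ajax-tips", "rest-intro", "css-grid"},
    "bob":   {"ajax-tips", "rest-intro"},
    "carol": {"rest-intro", "css-grid"},
    "dave":  {"ajax-tips", "wiki-howto"},
}

def related(item, top=2):
    """Rank other items by how many users bookmarked both."""
    scores = {}
    for saved in bookmarks.values():
        if item in saved:
            for other in saved - {item}:
                scores[other] = scores.get(other, 0) + 1
    # Highest co-occurrence first; ties broken alphabetically.
    return sorted(scores, key=lambda o: (-scores[o], o))[:top]

print(related("ajax-tips"))
```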
Making the Black Hole Gray: Web Archiving Art Resources at New York Art Resou... (The Frick Collection)
This document summarizes the New York Art Resources Consortium's (NYARC) efforts to implement a web archiving program to preserve born-digital art resources. It discusses NYARC's pilot projects from 2010-2013 and the objectives of its 2014/2015 web archiving program funded by The Andrew W. Mellon Foundation. The program aims to archive approximately 2 TB of content from websites related to art using the Archive-It platform. It also outlines the staffing, collaboration, collection scope, tools, and sustainability efforts of the new web archiving initiative.
Scripts in a Frame: A Two-Tiered Crawling Approach to Archiving Deferred Repr... (Justin Brunelle)
This document summarizes Justin Brunelle's dissertation defense on archiving deferred web representations using a two-tiered crawling approach. It discusses how current archival tools are unable to fully capture dynamic and interactive web pages that use JavaScript to modify page content after load. The dissertation measures the impact of missing JavaScript resources on memento quality and proposes crawling pages using PhantomJS to execute scripts and archive the complete representation as seen by users. Future work is needed to scale this approach for archiving large portions of the deferred web at risk of being lost to history.
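The two-tiered idea can be sketched as a cheap dispatch step: inspect the crawled HTML for signs that content is deferred to JavaScript, and route only those pages to an expensive headless-browser crawl (PhantomJS in the dissertation). The marker heuristic and tier names below are illustrative, not the dissertation's actual classifier.

```python
# Simplified sketch of two-tiered crawl dispatch. A real classifier
# would be far more robust; these markers are illustrative only.
def is_deferred(html):
    """Guess whether the page builds its content client-side."""
    markers = ("XMLHttpRequest", "fetch(", "onload=", "ajax")
    return "<script" in html.lower() and any(m in html for m in markers)

def choose_tier(html):
    """Route deferred pages to the heavy tier, the rest to the cheap one."""
    return "headless-browser" if is_deferred(html) else "conventional-crawler"

static_page = "<html><body><p>All content is in the HTML.</p></body></html>"
dynamic_page = "<html><script>fetch('/api/items')</script><body></body></html>"

print(choose_tier(static_page))   # conventional-crawler
print(choose_tier(dynamic_page))  # headless-browser
```

The point of the split is cost: executing scripts in a headless browser is orders of magnitude slower than a conventional fetch, so it is reserved for pages that need it.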
Scripts in a Frame: A Two-Tiered Approach for Archiving Deferred Representations (Justin Brunelle)
This document summarizes a dissertation defense presentation on archiving deferred web representations using a two-tiered crawling approach. The presentation discusses how JavaScript affects what is archived, how archive quality can be measured, and a new approach to crawling and replaying deferred representations so that archives better reflect the user experience. The goal is to mitigate the impact of JavaScript on archives by making crawlers behave more like users when capturing deferred representations.
SharePoint Saturday presentation 9-29-2012-2 (Derek Gusoff)
This document summarizes a presentation about using JavaScript in SharePoint 2010. It discusses why JavaScript is useful for customizing SharePoint, introduces jQuery for DOM manipulation and AJAX calls, and covers various APIs for interacting with SharePoint data from JavaScript like ASMX web services, REST, and the Client Object Model. It also provides examples of debugging techniques and deploying JavaScript to SharePoint. A case study demonstrates building a filtered lookup field using only JavaScript.
Combining Heritrix and PhantomJS for Better Crawling of Pages with Javascript (Michael Nelson)
Justin F. Brunelle
Michele C. Weigle
Michael L. Nelson
Web Science and Digital Libraries Research Group
Old Dominion University
@WebSciDL
IIPC 2016
Reykjavik, Iceland, April 11, 2016
Slides accompanying the paper:
Buckingham Shum, Simon (2008). Cohere: Towards Web 2.0 Argumentation. In: Proc. COMMA'08: 2nd International Conference on Computational Models of Argument, 28-30 May 2008, Toulouse, France. Preprint: http://oro.open.ac.uk/10421
Open Web Platform talk by Daniel Hladky at RIF 2012, 19 April 2012, Moscow (AI4BD GmbH)
The document discusses the Open Semantic Web Platform and the role of the W3C. It summarizes that the W3C is working to develop standards like HTML5 to transform the web across devices. HTML5 in particular is becoming the cornerstone for building applications that can work across desktops, mobile devices, and televisions. The document gives examples of how major industries are using or planning to use the Open Web Platform.
Technologie Proche: Imagining the Archival Systems of Tomorrow With the Tools... (Artefactual Systems - AtoM)
These slides accompanied a June 4th, 2016 presentation made by Dan Gillean of Artefactual Systems at the Association of Canadian Archivists' 2016 Conference in Montreal, QC, Canada.
This presentation aims to examine several existing or emerging computing paradigms, with specific examples, to imagine how they might inform next-generation archival systems to support digital preservation, description, and access. Topics covered include:
- Distributed Version Control and git
- P2P architectures and the BitTorrent protocol
- Linked Open Data and RDF
- Blockchain technology
The session is part of an attempt by the ACA to create interactive "working sessions" at its conferences. Accompanying notes can be found at: http://bit.ly/tech-Proche
Participants were also asked to use the Twitter hashtag of #techProche for online interaction during the session.
The document provides an overview of browser-based digital preservation including:
- The current state of digital preservation which relies on web crawlers and archives like the Internet Archive. However, this approach is insufficient for preserving pages that are not popular, behind authentication, or use complex JavaScript.
- The requirements for new software to directly capture and preserve web pages from within the browser in order to address the limitations of current archival approaches.
- A proposed system called "WARCreate" that would leverage the Chrome extension API to capture web pages and resources and generate WARC files for preservation while maintaining the original browsing context.
1) The document discusses HTML5 and the W3C standards process. It notes that HTML5 is still a work in progress at the W3C, going through the recommendation process.
2) It also discusses new APIs being developed for HTML5, including vibration and device access APIs, as well as the Semantic Web and Linked Data initiatives.
3) Finally, it mentions the W3C's work on the Web of Things and assigning URIs to real-world objects to integrate them into the Web.
This presentation gives a brief overview on achievements and challenges of the Data Web and describes different aspects of using the Semantic Data Wiki OntoWiki for Linked Data management.
The document provides an introduction to the W3C organization and standards. It discusses several W3C standards including HTML5, EmotionML, and WCAG 2.0. It also explains how standards are developed through the W3C process and provides information on getting involved.
Front End page speed performance improvements for Drupal (Andy Kucharski, Promet Source)
If you are a developer or business manager with responsibilities over your website, then check out this deck.
What will you learn?
The webinar, created by our Founder and CEO, Andy Kucharski, is a highly accessible, information-rich review on the ways Drupal site performance can be radically improved. Some of the main topics we will cover include:
What is slow site speed?
What tools to use to diagnose it.
Plus six key improvements to make Drupal “run fast!”
And if that’s not already enough, we will also share some best practices monitoring tips for making sure you know how the Drupal server is performing 24/7.
Web 3.0 will bring more structure and connectivity to the web through semantic technologies. It will create a web where software agents can perform sophisticated tasks and content is interconnected. Key aspects of Web 3.0 include personalized and context-aware experiences, integration of data from various online and offline sources, and new ways of combining multimedia content and data for novel insights. Initiatives toward building Web 3.0 include projects that publish government and academic data as linked open data, technologies for identifying and linking multimedia fragments, and location-aware mobile applications that provide customized offers and information to users.
Slides of my presentation at TransferSummit 2010, "Open innovation in software means Open Source", http://transfersummit.com/programme/60 . See accompanying article on the H online, http://x42.ch/03.10.01
Human Scale Web Collecting for Individuals and Institutions (Webrecorder Work...) (Anna Perricci)
This is the main slide deck for a workshop at iPRES 2018 on human scale web collecting. A primary focus of the presentation was the use of Webrecorder.io, a free, open source web archiving tool available to all.
The document summarizes tools developed by the Web Sciences and Digital Libraries Group for managing archived web content. It describes WARCreate, a Chrome extension that archives the current state of web pages; WAIL, which loads archived web pages (WARC files) into a local Wayback instance for viewing; and Mink, a Chrome extension that displays archived versions of visited pages. It also discusses techniques for assessing damage in archived pages, generating thumbnail summaries of archive collections, and detecting off-topic pages within archives. The tools are intended to make web archiving more accessible and help curate archived web collections.
Similar to Filling in the Blanks: Capturing Dynamically Generated Content
iPRES2015: Archiving Deferred Representations Using a Two-Tiered Crawling App... (Justin Brunelle)
This document proposes a two-tiered crawling approach using PhantomJS and Heritrix to better archive deferred representations, which are web pages that require JavaScript execution or user interaction to fully render. The approach uses PhantomJS to execute JavaScript and interact with deferred representations, while Heritrix crawls non-deferred representations for better performance. Test results found the PhantomJS frontier was 1.5 times larger but crawling was 10.5 times slower. The approach provides a better method for archiving deferred representations compared to the current workflow.
Not All Mementos Are Created Equal: Measuring The Impact Of Missing Mementos (Justin Brunelle)
This document discusses measuring the quality of archived webpages or "mementos" when some embedded resources like images or CSS are missing. It found that a "damage rating" approach, which considers factors like the size, position and importance of missing resources, was a better indicator of memento quality than just the percentage of missing resources. A study with Amazon Mechanical Turk users found they agreed more with damage ratings than percentage missing when comparing mementos. The quality of mementos in the Internet Archive was also found to generally improve over time despite more resources being missing.
- The document discusses how the author spends their summer vacations studying the archivability of web resources by capturing 1,000 URIs each from Twitter and Archive-It.
- The author analyzes the captured data to measure how perfectly the resources were archived, finding only 4.2% of Twitter and 34.2% of Archive-It were perfectly archived, with missing embedded resources, stylesheets, and functionality impacting the archivability.
- The author plans future work evaluating the importance of missing elements, measuring damage to archives, and developing methods to predict and potentially improve archivability.
An Evaluation of Caching Policies for Memento TimeMaps (Justin Brunelle)
This document evaluates caching policies for Memento TimeMaps. The authors observed changes to over 4,000 TimeMaps for 92 days and analyzed caching policies based on the changes. They found that TimeMaps either remained unchanged (77.4%) or increased in size through new archives or mementos (22.6%). An optimal TimeMap cache Time-To-Live of 15 days balances freshness with reduced load on archives by caching incrementally improving TimeMaps.
Justin F. Brunelle is a computer scientist who works at The MITRE Corporation and received his BS and MS in computer science from Old Dominion University. He is currently pursuing his PhD in digital preservation from ODU under Dr. Nelson, focusing on ensuring web pages are archived over time. Previously he conducted research in serious games and intelligent tutoring systems.
This document provides an introduction to agile methodologies and Scrum. It explains that agile promotes iterative development, user feedback, and rapid prototyping. Scrum is then defined as a common agile framework using self-organizing teams, sprints, product backlogs to prioritize work, and retrospectives to improve. Key Scrum roles of Product Owner, Scrum Master, and Development Team are outlined along with processes like storyboarding, backlogs, sprints, and reviews.
The document discusses how to find archived web pages from the past using two different services - http://web.archive.org and Memento. It explains that web.archive.org allows searching for and accessing cached pages of a website like CNN from specific past dates, while Memento aims to make navigating archived past web pages easier through its framework and development community.
This is the slide deck of the presentation given to the RRAC national group meeting on 10-20-2010. It is a summary of the research efforts in Digital Preservation at ODU.
Filling in the Blanks: Capturing Dynamically Generated Content
1. Filling in the Blanks: Capturing Dynamically Generated Content
Justin F. Brunelle
Old Dominion University
Advisor: Dr. Michael L. Nelson
JCDL ‘12 Doctoral Consortium
06/10/2012
4. Problem!
• Which exists in the archive?
– Probably the unauthenticated version, right?
• What factors created “my” representation?
– Can I archive “my” representation?
• Am I seeing undead resources?
– Mix of live and archived content?
• How can we capture, share, and archive user experiences?
12. Web 2.0
• Crawlers aren’t enough
• Relies on interaction/personalization
• Users may want to archive personal content
• How do we capture user experiences?
– Justin’s vs. Dr. Nelson’s experience? Both?
• What about sharing browsing sessions?
13. Things are better (but really worse)
• Better UI, worse archiving
• HTML5
• JavaScript
– document.write
• Cookies
• User Interaction
• GeoIP
14. Traditional Representation Generation
[Diagram: a URI identifies a Resource; dereferencing the URI yields a Representation, which represents the Resource. From W3C Web Architecture]
15. Representation through Content Negotiation
[Diagram: as above, but the Representation returned on dereference is also subject to content negotiation. From W3C Web Architecture]
16. Web 2.0 Representation Generation
[Diagram: a URI identifies a Resource, but the Representation is now also shaped by client-side script and user interaction applied after dereference]
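The Web 2.0 diagram above can be sketched in code: the representation a user sees is a function of the dereferenced resource plus client-side script plus user interaction, so a crawler that stores only the initial HTML captures an incomplete representation. The sketch below is purely illustrative; the URIs, function names, and page content are hypothetical, not taken from any real crawler.

```javascript
// Illustrative model of Web 2.0 representation generation (all names hypothetical).

function dereference(uri) {
  // What a traditional archival crawler captures: the initial HTML only.
  return '<div id="feed">Loading...</div>';
}

function runClientSideScript(html, ajaxResponse) {
  // Simulates script rewriting the DOM after load (e.g., an Ajax callback).
  return html.replace('Loading...', ajaxResponse);
}

function applyUserInteraction(html, event) {
  // Simulates content that only appears after a user event.
  return event === 'click' ? html + '<div id="more">More items</div>' : html;
}

const crawlerView = dereference('http://example.com/feed'); // hypothetical URI
const userView = applyUserInteraction(
  runClientSideScript(crawlerView, 'Fresh items'), 'click');

console.log(crawlerView !== userView); // true: the archive and the user disagree
```

The point of the toy model is only that `crawlerView` and `userView` diverge once script and interaction enter the pipeline, which is exactly the gap the rest of the deck addresses.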
18. Two Current Solutions
• Browser-based crawling
– Problematic at scale, misses post-render content, no session spanning, misses “personal” browsing
– IA
– To be released – Heritrix 3.X
• Transactional Web Archiving
– Impact/depth is unknown, client-side changes are missed, must have server/content author buy-in
– LANL
– http://theresourcedepot.org/
19. What can Justin do about it?
• How can we capture THE user experience?
– How much user-shared content is archivable?
– What defines a dynamic representation?
• Infinitely changing?
– How much dynamic content are archives missing?
– What tools are required to capture the representation?
• Browser add-on?
– How much will users contribute to the archives?
• Is this even possible?
20. Characteristics of a Potential Solution
• Browser add-on
• Crowd-sourced
– User contributions to archives
• Opt-in representation archiving/sharing
• Capture client-side DOM
– JS, HTML, representation, etc.
• Capture client-side events and resulting DOM
– Includes Ajax and post-render data
• Package and submit to archives
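A minimal sketch of what the add-on characterized above might record: each client-side event paired with the DOM it produced, then packaged for submission to an archive. Everything here is hypothetical (the function names, the JSON packaging); a real tool might instead emit WARC-like records.

```javascript
// Hypothetical capture log for the proposed browser add-on: pair every
// client-side event with the resulting DOM snapshot, then package the
// whole session for submission to an archive (here as JSON).

function createCaptureLog(uri) {
  const entries = [];
  return {
    record(eventType, domSnapshot) {
      entries.push({ seq: entries.length, eventType, domSnapshot });
    },
    package() {
      return JSON.stringify({ uri, entries });
    }
  };
}

const log = createCaptureLog('http://example.com/'); // illustrative URI
log.record('load', '<p>initial</p>');
log.record('click', '<p>initial</p><p>ajax result</p>'); // post-render Ajax data
const packaged = log.package();
console.log(JSON.parse(packaged).entries.length); // 2
```

Recording the DOM after each event, rather than the HTTP responses alone, is what lets the log include Ajax and post-render data that a crawler never sees.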
22. Dissertation Plan
BEGIN
Background Research
Coursework
Quals
Current state: prevalence of unarchivable resources
Define test datasets (set of dynamic and static test pages)
Define factors/equations of dynamic representations – what dynamic content can (and cannot) be captured for archiving?
Construction of software solution – VCR for the Web: Record, Rewind, Replay
Analysis of improved capture – client-side archiving: client-side (human-assisted) capture vs. traditional crawlers vs. headless clients
Explore how personalized archives can be combined with public web archives
PhD Defense
23. Current Work: How much can we archive?
• Sample from Bit.ly URIs from Twitter
• Load page in each environment:
– Live
– 3rd-party archived
• Submit to and load from WebCitation
– Locally stored
• wget -k -p and load from local drive
– Local only
• Load from local drive – no Internet access
27. Local Only (No Internet)
http://localhost/dctheatrescene.com/2009/06/11/toxic-avengers-john-rando/
• Missing: 12/78 without Internet
• dctheatrescene.com/…/uperfish.args.js?e83a2c
• dctheatrescene.com/…/css/datatables.css?ver=1.9.3
• Small files, big impact
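The 12-of-78 count above can be read as a simple missing-resource ratio. A weighted "damage" measure that accounts for each missing resource's importance is left to the future-work questions; the trivial unweighted version, as an illustrative sketch, is:

```javascript
// Unweighted archivability measure: fraction of embedded resources that
// failed to load. The local-only example above was missing 12 of 78.

function missingRatio(missing, total) {
  if (total === 0) return 0; // an empty page is trivially complete
  return missing / total;
}

console.log((missingRatio(12, 78) * 100).toFixed(1) + '%'); // "15.4%"
```

Even ~15% missing can be misleading as a quality measure, which is why the deck stresses that small files (a stylesheet, a script) can have a large impact on the rendered representation.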
32. Future Research Questions
• What dynamism can (and cannot) be captured for archiving?
• Client-side Archiving: Client-side Capture vs. Traditional Crawlers
• Client-side contributions to Web Archives: Archiving User Experiences
33. Conclusion
• Is dynamic content archivable?
• How much are we missing?
• Can you archive your experience?
• For the betterment of archives
• For personal capture
35. References
• J. Mickens, J. Elson, and J. Howell. Mugshot: Deterministic capture and replay for JavaScript applications. In Proceedings of the 7th USENIX Conference on Networked Systems Design and Implementation (NSDI ’10), pages 11-11, Berkeley, CA, USA, 2010. USENIX Association.
• K. Vikram, A. Prateek, and B. Livshits. Ripley: Automatically securing Web 2.0 applications through replicated execution. In Proceedings of the Conference on Computer and Communications Security, November 2009.
• E. Kiciman and B. Livshits. AjaxScope: A platform for remotely monitoring the client-side behavior of Web 2.0 applications. In Proceedings of the 21st ACM Symposium on Operating Systems Principles (SOSP ’07), 2007.
• B. Livshits and S. Guarnieri. Gulfstream: Incremental static analysis for streaming JavaScript applications. Technical Report MSR-TR-2010-4, Microsoft, January 2010.
• M. Dhawan and V. Ganapathy. Analyzing information flow in JavaScript-based browser extensions. In Annual Computer Security Applications Conference, pages 382-391, 2009.
• A. Mesbah, E. Bozdag, and A. van Deursen. Crawling Ajax by inferring user interface state changes. In Eighth International Conference on Web Engineering (ICWE ’08), pages 122-134, July 2008.
• C. Duda, G. Frey, D. Kossmann, and C. Zhou. AjaxSearch: Crawling, indexing and searching Web 2.0 applications. Proc. VLDB Endow., 1:1440-1443, August 2008.
• D. Lowet and D. Goergen. Co-browsing dynamic web pages. In WWW, pages 941-950,
36. References
• S. Chakrabarti, S. Srivastava, M. Subramanyam, and M. Tiwari. Memex: A browsing assistant for collaborative archiving and mining of surf trails. In Proceedings of the 26th VLDB Conference, 2000.
• R. Karri. Client-side page element web-caching, 2009.
• E. Benson, A. Marcus, D. R. Karger, and S. Madden. Sync Kit: A persistent client-side database caching toolkit for data intensive websites. In WWW, pages 121-130, 2010.
• M. N. K. Boulos, J. Gong, P. Yue, and J. Y. Warren. Web GIS in practice VIII: HTML5 and the canvas element for interactive online mapping. International Journal of Health Geographics, March 2010.
• S. Periyapatna. Total Recall for Ajax applications, Firefox extension, 2009.
• S. Sivasubramanian, G. Pierre, M. van Steen, and G. Alonso. Analysis of caching and replication strategies for web applications. IEEE Internet Computing, 11:60-66, 2007.
37. Web Archives
• “Web archiving is the process of collecting portions of the World Wide Web and ensuring the collection is preserved … for future researchers, historians, and the public.”
-- http://en.wikipedia.org/wiki/Web_archiving
38. What does this have to do with DLs?
• Improved coverage
• NARA regulation
• Improved “memory”
• Gathers missing User Experiences
– Or at least an adequate sub-sample
39. Envisioned Solution
[Mockup: “SELECT PREVIOUS REPRESENTATION TO ARCHIVE” – a timeline of captured states, each produced by a user event (text entered, double click, button push, text entered) followed by an Ajax event (XMLResponse), from which a past representation can be selected for archiving]
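The envisioned solution above, a timeline of user and Ajax events from which a past representation can be selected, might be modeled like this. The code is purely illustrative and assumes no real tool's API; it only shows the record/rewind/replay shape of the "VCR for the Web" idea.

```javascript
// Hedged sketch of the "VCR for the Web": record every state change
// (user events and Ajax responses) on a timeline, then rewind to any
// recorded point to recover the representation to archive.

function createTimeline(initialDom) {
  const states = [{ cause: 'load', dom: initialDom }];
  return {
    apply(cause, transform) {
      // transform maps the previous DOM string to the next one
      const prev = states[states.length - 1].dom;
      states.push({ cause, dom: transform(prev) });
    },
    representationAt(i) { return states[i].dom; }, // "rewind" to state i
    length() { return states.length; }
  };
}

const t = createTimeline('<p>start</p>');
t.apply('user: text entered', d => d + '<p>typed</p>');
t.apply('ajax: XMLResponse', d => d + '<p>update</p>');
console.log(t.representationAt(1)); // the state after the first user event
```

Storing whole DOM snapshots per event is the simplest possible choice; a real implementation would more likely store event deltas and replay them, which is what makes selecting "previous" representations cheap.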