Relational Navigation Brings Social Computing and Semantic Technology To The Enterprise (Infonortics Search Engine Meeting 2008)
1. Relational navigation brings together social computing, semantics, and faceted navigation to improve search and information organization in enterprises.
2. Traditional search approaches like teleportation are being replaced by orienteering models using feeds, usage metadata, subject tagging, and organizational metadata.
3. This new approach applies Enterprise 2.0 principles like wikis, blogs, and social networks to information organization, using metadata and relationships between information resources.
This is an older presentation given in 2009. The goal was to advocate for the adoption of microformats to improve markup, SEO positioning, and modularize web development. The talk was first given at local user groups: Refresh Hampton Roads and the Web Usability and Standards User Group. Later, I gave the workshop to an internal audience: the UI Engineering team and, later, to a UI/UX Future Group.
Gateway to Oklahoma History Case Study: Structured Data and Metadata Evaluati... - Emily Kolvitz
Image Resource Findability on the World Wide Web is still very much a land grab. For the Semantic Web to become a reality, online businesses and individuals have to get their hands dirty and also come face-to-face with the realization that search engine giants are increasingly becoming the go-to tool for information resource retrieval. “Increasingly, students use Web search engines such as Google to locate information resources rather than seek out library online catalogs or databases of scholarly journal articles” (Lippincott 2013). This puts the search engine giant in a unique position to dictate how the future of search will work on the Web and therefore, your organization’s future presence (or lack thereof) on the Web. Search Engine Optimization (SEO) techniques change frequently and remain much a mystery to many companies. The one variable in the equation of Web findability that remains a staple is good quality metadata under the hood of the Website. In this case study, a methodology is applied to the Gateway to Oklahoma History’s Website. This study can be generalized to organizations looking to benchmark their own findability maturity on the Web from an image-centric viewpoint.
The World Wide Web is booming and vibrant thanks to well-established standards and a widely accepted framework that guarantees interoperability at various levels, for applications and for society as a whole. So far, the Web has functioned largely through human intervention and manual processing, but the next-generation Web, which researchers call the Semantic Web, aims at automatic processing and machine-level understanding. The Semantic Web will become possible only if further levels of interoperability prevail among applications and networks. To achieve this interoperability and greater functionality among applications, the W3C has already released well-defined standards such as RDF/RDF Schema and OWL. Using XML as a tool for semantic interoperability achieved little and failed to bring interconnection at a larger scale; this led to the inclusion of an inference layer at the top of the Web architecture and paved the way for a common design for encoding ontology representation languages in data models such as RDF/RDFS. In this research article, we give a clear account of the roots of Semantic Web research and its ontological background, which may help augment the understanding of named entities on the Web.
Talk at the 2nd Summer Workshop of the Center for Semantic Web Research (January 16, 2016, Santiago, Chile) about the construction of Yahoo's Knowledge Graph and associated research challenges.
Presentation about a central knowledge and resource-sharing system at the Local Discovery Day on 13 June 2014 hosted by the Department for Communities and Local Government in partnership with GDS. These presentation notes are from both the @LocalGovDigital #Hack group and the @LocalGovCamp group working on better sharing knowledge and assets relating to public service transformation
For the first time since the emergence of the Web, structured data is playing a key role in search engines and is therefore being collected via a concerted effort. Much of this data is being extracted from the Web, which contains vast quantities of structured data on a variety of domains, such as hobbies, products and reference data. Moreover, the Web provides a platform that encourages publishing more data sets from governments and other public organizations. The Web also supports new data management opportunities, such as effective crisis response, data journalism and crowd-sourcing data sets.
I will describe some of the efforts we are conducting at Google to collect structured data, filter the high-quality content, and serve it to our users. These efforts include providing Google Fusion Tables, a service for easily ingesting, visualizing and integrating data, mining the Web for high-quality HTML tables, and contributing these data assets to Google's other services.
Alon Halevy heads the Structured Data Management Research group at Google. Prior to that, he was a professor of Computer Science at the University of Washington in Seattle, where he founded the database group. In 1999, Dr. Halevy co-founded Nimble Technology, one of the first companies in the Enterprise Information Integration space, and in 2004, Dr. Halevy founded Transformic, a company that created search engines for the deep web, and was acquired by Google. Dr. Halevy is a Fellow of the Association for Computing Machinery, received the Presidential Early Career Award for Scientists and Engineers (PECASE) in 2000, and was a Sloan Fellow (1999-2000). He received his Ph.D. in Computer Science from Stanford University in 1993 and his Bachelor's from the Hebrew University in Jerusalem. Halevy is also a coffee culturalist; he wrote the book "The Infinite Emotions of Coffee", published in 2011, and is a co-author of the book "Principles of Data Integration", published in 2012.
Information Organisation for the Future Web: with Emphasis to Local CIRs - inventionjournals
The Semantic Web is evolving as a meaningful extension of the present Web using ontologies. Ontologies can play an important role in structuring the content of the current Web to lead it toward the new generation of the Web. Domain information can be organized using an ontology to help machines interact with the data and retrieve exact information quickly. The present paper organizes community information resources covering local information needs and evaluates the system using SPARQL queries against the developed ontology.
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more detail or to submit your article, please visit www.ijera.com
This presentation provides a top-down introduction to semantics and Web 3.0.
It is intended for the busy executive or developer who wants to understand quickly why this new technological wave is relevant.
For a “one slide presentation”, see the first slide only.
For a general introduction, see only the slides of the first section.
The following slides are about semantic technologies, architectures, and applications.
This is an update to the Christian Social Graph and Nonprofit Social Graph slideshares, but with a specific focus on the role Google could play with their Knowledge Graph.
Similar to Relational Navigation Brings Social Computing and Semantic Technology To The Enterprise (Infonortics Search Engine Meeting 2008)
This presentation hopes to illuminate how Search, Content Strategy, Information Architecture, User Experience, and Interaction Design can break down silos to take back relevance. Because, in the end, we, the people, should be the arbiters of experience, not machines and certainly not math.
A talk on the past, present, and future evolution of the Web: where it's headed and, in particular, where the Semantic Web fits.
If it doesn't load here on SlideShare, try viewing it at http://novaspivack.com
An analysis of competing social computing platforms against SharePoint 2010. A lot of context is lost without the narrative, but for those who have seen Mike Watson and/or myself present, this will be a reminder.
The more networked the world becomes, the harder it is to find relevant information using traditional search engine technology. This is where social media and search possibilities come in.
March 2008 presentation from a BEA Systems webinar about expertise location. Pathways lets users tag content and people, as well as bookmark internal content and external websites. It applies an algorithm to give ratings to users and information in the system.
Social Network Analysis (SNA) and its implications for knowledge discovery in... - ACMBangalore
Social Network Analysis (SNA) and its implications for knowledge discovery in Informal Networks - Talk by Dr Jai Ganesh, SETLabs, Infosys at Search and Social Platforms tutorial, as part of Compute 2009, ACM Bangalore
This is a presentation that I did for the Enterprise Search Summit West 2008 that has been amended for a Web Project Management class at the University of Washington
Mining and analyzing social media part 1 - HICSS 47 tutorial - Dave King
Part 1 of a Tutorial on Mining and Analyzing Social Media at HICSS 47
Similar to Relational Navigation Brings Social Computing and Semantic Technology To The Enterprise (Infonortics Search Engine Meeting 2008) (20)
A presentation on smart content (what it is, how it is produced, why it is useful, and its relevance to the future of scholarly publishing) for the Association of American Publishers Professional and Scholarly Publishing Pre-Conference in Washington, D.C. on 2012-02-01.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 - Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... - Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
GridMate - End to end testing is a critical piece to ensure quality and avoid... - ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
20 Comprehensive Checklist of Designing and Developing a Website - Pixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features provide convenience and capability while sacrificing security. This best-practices guide outlines steps users can take to better protect personal devices and information.
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Essentials of Automations: The Art of Triggers and Actions in FME - Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
UiPath Test Automation using UiPath Test Suite series, part 6 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, as a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
DevOps and Testing slides at DASA Connect - Kari Kakkonen
My slides with Rik Marselis from the 30.5.2024 DASA Connect conference. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We finished with a lovely workshop in which the participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf - Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on.
1. Relational Navigation Brings Social Computing and Semantic Technology To The Enterprise Bradley P. Allen Founder and CTO Siderean Software, Inc. Search Engine Meeting Boston, MA, USA April 29th, 2008
10. Orienteering Subsumes Teleportation Bates, M. J. (1989a). Design of browsing and berrypicking techniques for the online search interface. Online Review, 13, 407-424.
11. Faceted Navigation Was The First Realization Of Orienteering [diagram: documents linked to subject facet values s0 … sn and genre facet values g0 … gn]
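The facet structure this slide depicts (documents narrowed by independent subject and genre facets) can be illustrated with a minimal sketch in Python; the document records below are hypothetical, purely for illustration:

```python
from collections import Counter

# Hypothetical document records, each tagged with facet values
# (names and data are illustrative, not from the original slides).
documents = [
    {"title": "Doc A", "subject": "history", "genre": "article"},
    {"title": "Doc B", "subject": "history", "genre": "thesis"},
    {"title": "Doc C", "subject": "biology", "genre": "article"},
]

def facet_counts(docs, facet):
    """Count how many documents carry each value of a facet."""
    return Counter(d[facet] for d in docs)

def select(docs, facet, value):
    """Narrow the result set by choosing one facet value."""
    return [d for d in docs if d[facet] == value]

print(facet_counts(documents, "subject"))  # Counter({'history': 2, 'biology': 1})
narrowed = select(documents, "genre", "article")
print([d["title"] for d in narrowed])      # ['Doc A', 'Doc C']
```

Each selection narrows the result set while the remaining counts guide the next step, which is the "orienteering" behavior the slide contrasts with one-shot teleportation search.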
12. Relational Navigation Adds Semantics and Pivoting [diagram: documents connected to persons via dc:creator / frbr:creatorOf and persons to organizations via foaf:member / foaf:member^-1, with facets such as subject, genre, title, location, size, and industry]
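Pivoting, as this slide describes, moves from one result set to a related set by following a relationship. Here is a minimal sketch over a toy in-memory triple list; the resource names are hypothetical, and the predicates echo the slide's dc:creator and foaf:member examples, including the inverse foaf:member^-1 direction:

```python
# Toy triple store; predicates mirror the slide (dc:creator, foaf:member).
# All resource names are made up for illustration.
triples = [
    ("doc1", "dc:creator", "alice"),
    ("doc2", "dc:creator", "bob"),
    ("alice", "foaf:member", "acme"),
    ("bob", "foaf:member", "globex"),
]

def pivot(resources, predicate, triples, inverse=False):
    """Follow a predicate from a set of resources to the related set.

    With inverse=True the predicate is traversed backwards,
    i.e. the slide's foaf:member^-1 direction.
    """
    if inverse:
        return {s for s, p, o in triples if p == predicate and o in resources}
    return {o for s, p, o in triples if p == predicate and s in resources}

docs = {"doc1", "doc2"}
persons = pivot(docs, "dc:creator", triples)                      # documents -> persons
orgs = pivot(persons, "foaf:member", triples)                     # persons -> organizations
members = pivot({"acme"}, "foaf:member", triples, inverse=True)   # organization -> persons
```

Chaining pivots like this is what turns a flat faceted view into relational navigation: the user lands on a set of documents, pivots to their creators, then pivots again to the creators' organizations.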
13. Enterprise 2.0: McAfee's Model
Ties to co-workers | Tool | Product
Strong | Wikis | Content
Weak | Social networks | Information
Potential | Blogs | Teams
None | Prediction markets | Answers
14. Enterprise 2.0 Applied To Information Organization
Ties to co-workers | Tool | Product
Strong | Syndicated feeds | Content
Weak | Social networks | Organizational metadata
Potential | Social bookmarking | Subject metadata
None | Search logs | Usage metadata
Thanks for having me I want to talk about three things today How an epochal shift in the Web and computing in general is creating a need for enterprises to fight for user attention and regain control over how information relevant to their business is made accessible on the Web Then I want to describe how social computing can provide semantically-expressive metadata that enables a rich search and navigation user experience, supporting the creation of authoritative resources that can recapture user attention Finally I’ll illustrate this approach with some work we been doing over the last year in the development of information hubs for provisioning information to external audiences
The walls are coming down between: Producers and consumers of information Catalogers and researchers IT and users Content and data inside and outside the firewall This is an epochal change, like the end of the Cold War… or the invention of the printing press Call it Web 2.0, 3.0, Enterprise 2.0, just the Web, or computing in general… whatever We’re just at the beginning of this transition
Clay Shirky’s book “Here Comes Everybody”: epochal technology transitions trigger the mass amateurization of established professions Shirky’s example of the impact of the printing press on the profession of scribe Today: photographers, journalists, publishers… and now librarians Nancy Pearl (the Librarian Action Figure) is really shushing us not to keep us quiet , but to not let the cat out of the bag Tremendous tumult, a profession in crisis In spite of the fact that all things information retrieval are deeply rooted in library science e.g., Google’s origins in NSF-funded digital library research FRBR, RDA/DCMI (which we are helping fund) etc. are attempts to grapple with this transition Other professions under siege: taxonomists and information architects
Hat tip to David Weinberger: “everything is metadata and everything is connected” This is a conceptual map of the various information objects in Flickr Not just documents (images), but many types of objects Rich not just with attributes of documents but with relationships between documents, people, concepts/tags, places, etc. Lower quality but greater quantity than metadata created using traditional approaches
The mass amateurization of metadata creation means that everyone is not only a publisher but also a cataloger Some librarians would dispute this Martha Yee’s definition of cataloging attempts to carve the masses out, but it’s a stretch Counterexamples: LibraryThing, del.icio.us, Flickr
The notion of a static collection of documents is increasingly dated Recency trumps archival longevity Constant, near-real-time updates coming from everywhere 12-14 million blogs in the USA 900,000 blogs in the USA making money Top 200,000 blogs exceed $250 monthly Top 20,000 blogs exceed $2,000 monthly Standard formats for syndicated content and data (RSS, Atom) dramatically lower the costs of aggregation and integration (mashups) For enterprises this is hugely important Most of the relevant content and data about your products and services comes from somewhere other than you, from outside your firewall Sources: individual and collaborative blogs, news organizations, feed aggregators, Twitter lifestreams Herb Simon: abundance of information yields scarcity of attention Enterprises are losing control of how information about them gets to their customers, partners, press They’re losing their ability to retain attention
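The low aggregation cost the notes mention follows directly from the uniformity of RSS/Atom: every source exposes the same item structure, so merging sources is trivial. A minimal sketch in Python; the feed fragment, titles, and URLs here are invented for illustration, and a real aggregator would fetch feeds over HTTP rather than parse an inline string.

```python
import xml.etree.ElementTree as ET

# A tiny RSS 2.0 fragment standing in for one of many syndicated sources.
RSS = """<rss version="2.0"><channel>
  <title>Example Product Blog</title>
  <item><title>Release notes</title><link>http://example.com/notes</link>
        <pubDate>Mon, 14 Apr 2008 09:00:00 GMT</pubDate></item>
  <item><title>Roadmap update</title><link>http://example.com/roadmap</link>
        <pubDate>Tue, 15 Apr 2008 09:00:00 GMT</pubDate></item>
</channel></rss>"""

def aggregate(feed_xml_docs):
    """Merge items from several RSS documents into one (title, link, date) list."""
    items = []
    for doc in feed_xml_docs:
        root = ET.fromstring(doc)
        for item in root.iter("item"):
            items.append((item.findtext("title"),
                          item.findtext("link"),
                          item.findtext("pubDate")))
    return items

merged = aggregate([RSS])  # in practice: one parsed document per source
```

Because every source yields the same tuples, downstream steps (deduplication, facet extraction, date sorting) never need per-source logic, which is exactly why syndication formats make mashups cheap.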
Sue Feldman’s work on the current state and future evolution of query traffic on the Web Traffic shifting from gateways (Google, Yahoo, MSN) to hubs A hub is an aggregation of content and data focused on a specific subject area, organized according to an editorial point of view Authoritative hubs attract attention, provide value to the audience, deliver thought leadership supporting market leadership Assertion: Enterprise search and navigation will increasingly be about building hubs that provision information to external audiences with the enterprise’s editorial point of view A big problem: customer searches Google, finds a page on your site, then goes away A bigger problem: customer searches Google, finds a blog posting with misinformation about your product, and never gets to your site Essentially this is a marketing communications problem, which is a communications problem, which is what the Web is all about Who will build these hubs and how? Enterprises themselves or third-party publishers? Examples: Zillow vs. MLS, Macenstein vs. Apple, TechCrunch vs. InfoWorld
The question of how to build hubs hinges on giving a user the best possible search and navigation experience So how do you do that? Marti Hearst’s dichotomy: teleportation vs. orienteering The dream is teleportation: something that will instantly take you where you want to go Assumes you know where you want to go Great when it works, messy when it doesn’t (The Fly) Teleportation is a holy grail Google has convinced users that this is what they want But this is perhaps a great example of an AI-complete problem, in that in the limit it assumes deep understanding of user intent Perhaps we can chip away at this in specific domains and use cases But we’re a long way from teleportation that works flawlessly
The reality is orienteering: using hints and cues embedded in an information space to work your way towards where you want to go Multiple, mutually reinforcing ways to navigate Search as a filter and an arrow in the quiver, as opposed to a be-all-end-all Kevin Lynch’s “Image of the City” reference from yesterday’s Monitor Group presentation by Steve and Amelia; contexts and paths as key Not magic, but fundamentally social Issue: built on metadata, lots and lots of it
In practice the two are being blended Marcia Bates’ berrypicking model anticipated this by twenty years Iterative query refinement across multiple collections Jump, re-orient, jump again It’s taken twenty years to slough off the older, simplistic, teleportation-centric paradigm of a single query returning a relevance-ranked list of results The transition discussed earlier is providing a point of departure for the development of information hubs using this approach
Faceted navigation was the first computational realization of an orienteering approach to information access Hearst, Endeca, Siderean, now in many products and services Quick explanation of diagram: navigation over a collection of documents by hierarchical subject and genre taxonomies Requires attribute metadata on information objects Plays well with search as a filter
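The mechanics behind the diagram can be sketched in a few lines: apply the user's current facet selections as a filter, then recount the remaining facet values to drive the navigation display. The documents and facet names below are hypothetical, and a real system would index rather than scan.

```python
from collections import Counter

# Hypothetical collection with attribute metadata (the facets).
DOCS = [
    {"title": "Q1 earnings",   "subject": "finance",  "genre": "press release"},
    {"title": "New database",  "subject": "products", "genre": "press release"},
    {"title": "DB tuning tips","subject": "products", "genre": "article"},
]

def facet_navigate(docs, selections):
    """Apply the user's facet selections, then recount remaining facet values."""
    matches = [d for d in docs
               if all(d.get(facet) == value
                      for facet, value in selections.items())]
    counts = {facet: Counter(d[facet] for d in matches)
              for facet in ("subject", "genre")}
    return matches, counts

# Selecting subject=products leaves two documents, and the genre facet
# now offers "press release" (1) and "article" (1) as further refinements.
matches, counts = facet_navigate(DOCS, {"subject": "products"})
```

The recounting step is what makes faceted navigation an orienteering tool: the user always sees which refinements are possible and how many documents each would leave, so dead ends are impossible.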
Relational navigation extends faceted navigation beyond retrieval of a single type of information object by adding relational metadata and multiple object types Siderean uses semantic technology (RDF/OWL) to represent this information A tremendously effective representation Pivoting to get different views across these types Quick explanation of diagram: faceted navigation over documents, people and organizations, with the ability to find and follow relations Any type can be the initial entry point Supporting focused and serendipitous discovery of information Zoom in and out, then tumble across related items Closer to Bates’ berrypicking model than search or faceted navigation Even more dependent on an abundance of metadata As always the question is: where does the metadata come from?
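Siderean's implementation represents this metadata in RDF/OWL; purely to illustrate the pivot idea, here is a toy triple set in Python (all identifiers hypothetical). The key move relational navigation adds over faceted navigation is following a relation from one set of typed objects to another.

```python
# Hypothetical (subject, predicate, object) triples, the shape RDF stores use.
TRIPLES = [
    ("doc:1",      "type",     "Document"),
    ("person:ann", "type",     "Person"),
    ("org:acme",   "type",     "Organization"),
    ("doc:1",      "author",   "person:ann"),
    ("person:ann", "worksFor", "org:acme"),
]

def follow(entities, predicate):
    """Pivot: from a set of entities, follow one relation to a new set."""
    return {o for s, p, o in TRIPLES if p == predicate and s in entities}

# Start from a document, tumble to its author, then to her organization;
# any of the three types could equally have been the entry point.
authors = follow({"doc:1"}, "author")
orgs = follow(authors, "worksFor")
```

Each pivot yields a new result set that can itself be faceted, which is why the experience resembles Bates' berrypicking: jump, re-orient, jump again.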
This is where social computing comes in: as a rich source of metadata for navigation Last November at the first Defrag conference in Denver, Andrew McAfee of Harvard Business School described a model for understanding the impact of social computing in the enterprise It organizes thinking about tools for communication and collaboration by knowledge worker tie strength Quick description of diagram: tie strength goes from strong (people you work with every day) to weak (people you interact with occasionally) to potential (people you know but haven’t met) to none (people you’ll never deal with) McAfee associates with each level of tie strength a particular type of social computing tool, together with the resulting type of intellectual product that the tool enables A model to think about where and how to apply social computing to support productivity within the enterprise
Let’s apply this model to the task of metadata creation, i.e., information organization by an editorial team of one or more The people you work with every day select syndicated feeds from blogs, wikis, and feed generation wrappers around traditional CMSs and DBMSs Feeds come with simple asset metadata (date, publisher, author, etc.) Organizational metadata comes from social network systems used by contributors, either internal to the application, leveraging existing SNSs, or built as wrappers around personnel registries Subject metadata comes from user tagging of navigation results, as a replacement for or a complement to automated entity extraction and autocategorization Usage metadata comes from the search and navigation logs associated with the system, from people you don’t necessarily know: the end users This is one take on applying McAfee’s model to this problem; there could be others, but this is the one we have used in practice
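How the four metadata sources combine can be sketched briefly: automated subject terms and user tags merge into a single subject facet, while navigation logs yield usage metadata such as document popularity. All inputs below (terms, tags, document IDs) are invented for illustration.

```python
from collections import Counter

# Hypothetical inputs for one document in the hub.
auto_subjects = {"databases", "middleware"}   # from entity extraction
user_tags = {"11g", "databases"}              # from social tagging
click_log = ["doc:42", "doc:7", "doc:42"]     # from search/navigation logs

# Subject facet: automated categorization complemented by user tags,
# so idiomatic or newly emerging terms appear alongside controlled ones.
subject_facet = sorted(auto_subjects | user_tags)

# Usage metadata: popularity derived from the end users' logs.
popularity = Counter(click_log)

print(subject_facet)            # → ['11g', 'databases', 'middleware']
print(popularity["doc:42"])     # → 2
```

The blending of the subject facet is deliberate: tags supply recall for emerging vocabulary while extraction supplies consistency, matching the "replacement for or complement to" framing above.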
Here are some examples of this approach of leveraging semantic technology and social computing to produce information hubs These were done in the context of ongoing work over the last year with Oracle Corporation in building information hubs as microsites on top of content from both inside and outside Oracle Target audience: press and analysts, customers, software developers Goals: improved discoverability, richer user experience, thought leadership Done using our Seamark relational navigation product, delivered as a hosted service with DNS configured to be reachable through the Oracle domain Sites use syndicated feeds from Oracle Marketing as well as external sources such as blogs by Oracle experts and del.icio.us This is a screenshot of the Oracle Events microsite (events.oracle.com) Several feeds of Oracle in-person and web-based marketing events Facets include relative date, geographical location, subject, intended audience Attributes automatically extracted Usage metadata from logs shows the level of user interest in events in the near future
This is a screenshot of the Oracle Pressroom microsite (pressroom.oracle.com) This is the second version of the initial microsite Initially a response to senior management frustration with the accessibility and discoverability of Oracle press releases Again, some subject metadata comes from automated processes, and usage metadata provides information about popular documents But now user tagging support is blended in as a separate facet to support navigation on the basis of idiomatic or newly emerging subject terms
Finally, this is the Oracle Technology Network Semantic Web microsite Aggregates feeds about product releases and announcements, as well as developer forums and blogs Here contributor usernames are exposed, with a distinction made for Oracle Aces, i.e., acknowledged experts in a given area Provides a way to identify experts as well as find relevant product and issue information Supports pivoting between these views Across each of these sites, Oracle is seeing increases in time on site: an indicator of an increase in user attention
Summary: social computing provides semantically-expressive metadata that enables a rich search and navigation user experience The benefit addresses the issue raised earlier with respect to winning and keeping the audience’s attention By providing better, more productive ways for users to allocate their attention, providing opportunities for learning and discovery as well as finding, enterprises can achieve greater user satisfaction and traffic in the long run A bigger piece of the hub query traffic pie Reclaiming user attention Reasserting control over the conversation with the customer