The document discusses architecting a new content management system (CMS) for a university website to improve search engine optimization, content sharing, and navigation. Key goals include structuring content in a "hub and spoke" model to create content clusters and shorter URLs, developing a taxonomy to classify content, and defining content types and properties through an ontology to facilitate reusable, consistent content across sites. The new CMS aims to make university content easier for search engines and users to find through its optimized architecture and navigation.
February 18, 2015 NISO Virtual Conference
Scientific Data Management: Caring for Your Institution and its Intellectual Wealth
Improving Integrity, Transparency, and Reproducibility Through Connection of the Scholarly Workflow
Andrew Sallans, Partnerships, Collaborations, and Funding, Center for Open Science
This document discusses web page classification using naive Bayes classifiers. It outlines the goals of web page classification, including improving web directories and search results. The document reviews literature on different representations for classification, including bags of words, n-grams, using HTML structure, and visual analysis. It then describes experiments using a university web page dataset to classify pages into categories like course, department, etc. using bag of words, HTML tag weighting, and n-grams. The document concludes with an overview of evaluation techniques like k-fold cross validation and confusion matrices.
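The bag-of-words naive Bayes approach described above can be sketched in a few lines. The snippet below is a minimal illustration, not the experiment from the document: the training examples, category names, and tokenization are invented for demonstration, and Laplace smoothing is assumed.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label). Returns class counts, per-class word counts, vocabulary."""
    class_docs = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        class_docs[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_docs, word_counts, vocab

def classify_nb(tokens, class_docs, word_counts, vocab):
    """Pick the class maximizing log prior + log likelihood (Laplace-smoothed)."""
    total_docs = sum(class_docs.values())
    best_label, best_score = None, float("-inf")
    for label in class_docs:
        score = math.log(class_docs[label] / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for t in tokens:
            score += math.log((word_counts[label][t] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

For example, with toy training documents for "course" and "department" pages, a query like `"homework exam"` would be assigned to "course". HTML tag weighting, as mentioned above, would amount to counting words from emphasized tags (title, headings) more than once when building the bags.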
A diagram illustrates a self-training loop for the SVM classifier:
1. Positive examples from the labeled data are used to train an initial SVM classifier.
2. The classifier labels the unlabeled data.
3. Unlabeled examples are labeled based on the classifier's prediction, and labeled as negative if not predicted as positive.
4. The newly labeled data augments the positive examples.
5. A new classifier is retrained with the augmented data, and the process repeats.
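The loop in the diagram can be sketched as follows. This is a toy illustration only: a nearest-centroid scorer over one-dimensional points stands in for the SVM (which the document assumes), and the `margin` threshold and example data are invented.

```python
def centroid_score_factory(pos, neg):
    # Stand-in for retraining an SVM: score an example by its negative
    # distance to the centroid of the current positive set (neg is unused
    # by this toy model, but a real SVM would train on both sets).
    c = sum(pos) / len(pos)
    return lambda x: -abs(x - c)

def self_train(positives, unlabeled, rounds=3, margin=1.5):
    """Mirror the diagram's loop: train on positives, label the unlabeled
    pool, treat non-positive predictions as negative, augment, retrain."""
    pos, neg = list(positives), []
    for _ in range(rounds):
        score = centroid_score_factory(pos, neg)   # (re)train the classifier
        pos, neg = list(positives), []
        for x in unlabeled:
            if score(x) >= -margin:                # predicted positive
                pos.append(x)                      # augments the positives
            else:
                neg.append(x)                      # negative if not predicted positive
    return pos, neg
```

With seed positives near 1-2 and an unlabeled pool containing both nearby and distant points, the nearby points migrate into the positive set over successive rounds while the distant ones are labeled negative.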
This document discusses several academic social network sites that researchers can use to build their professional reputation and find collaborators. It provides an overview of the main features and functions of Academia.edu, ResearchGate, Mendeley, and SSRN. These sites allow users to create profiles, share and discuss papers, and find others in their fields of research. The document then compares the sites on metrics like user base size, document sharing and analytics available. It concludes with suggestions for how researchers and librarians can utilize these tools.
How discovery impacts users' experiences (Katherine Rose)
In the 21st century the academic library supports both research activities and teaching outcomes of faculty members and students through web-scale discovery services. These discovery services embrace new technologies to provide deep discovery of vast scholarly collections from a one-stop access interface, relying on a central index of pre-harvested data. With unified indexing of full-text library content, users’ experience of search and retrieval is greatly improved.
Discovery is changing the way that library users find and access library materials, especially electronic resources. In the opening part of this presentation, I will share my experiences of using different discovery systems – Summon, Primo and Enterprise – in my current and previous roles, in terms of the differences, strengths and common areas among these tools. Relevant findings from the literature and the latest research reports will be outlined. I will also discuss how technical services teams can support the next generation of discovery systems and help advance the digital library field. The presentation will conclude with the approach of technical services towards future discovery.
Web page classification features and algorithms (unyil96)
This document summarizes research on classifying web pages. It discusses how web page classification is important for tasks like maintaining web directories, improving search results, and building focused crawlers. The document reviews different types of web page classification problems and features that are useful for classification, like content-based features and link-based features. It also discusses algorithms that have been used for web page classification.
This document summarizes two research profiling and preservation tools: Focus on Research and T-Space. Focus on Research allows faculty to create online research profiles highlighting publications and activities. It integrates with other websites and includes tools to import publications. T-Space is an institutional repository that allows scholars to preserve and distribute research in various digital formats with persistent access. The two tools work together, with Focus populating research profiles and T-Space archiving full-text works. They provide benefits like increased access, citation rates, and preservation of scholarly output for faculty and the university community.
This document summarizes a presentation on web page classification techniques. It discusses the significance of web page classification and various applications such as constructing web directories, improving search results, and question answering systems. It then reviews common features used for classification, including on-page features like text, tags, and visual analysis, as well as neighbors features. Finally, it outlines different algorithms and approaches for classification, such as dimension reduction, relational learning methods, modifications to kNN and SVM algorithms, hierarchical classification, and combining multiple information sources.
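Among the algorithms named above, a kNN variant over bag-of-words vectors is one of the simplest to make concrete. The sketch below is a generic illustration under assumed inputs (tokenized pages with known labels), not the specific modification the presentation discusses; cosine similarity is used as the distance measure.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two Counter bags of words."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_classify(query_tokens, labeled_docs, k=3):
    """labeled_docs: list of (tokens, label). Majority vote among the
    k most similar training pages."""
    q = Counter(query_tokens)
    sims = sorted(((cosine(q, Counter(toks)), lab) for toks, lab in labeled_docs),
                  reverse=True)
    top = [lab for _, lab in sims[:k]]
    return Counter(top).most_common(1)[0][0]
```

Neighbor features, also mentioned above, could be folded in by mixing tokens from linked pages into each page's bag before computing similarity.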
Relationship Building and Advocacy Across the Campus (UCD Library)
Presentation given by Julia Barrett, Research Services Manager at University College Dublin Library, to the ANLTC Seminar: Supporting the Activities of Your Research Community - Issues and Initiatives, held on December 3, 2014 at the Royal Irish Academy, Dublin, Ireland.
Shaping Expectations: Defining and Refining the Role of Technical Services in... (NASIG)
From trial to implementation, technical services staff play an important role in shaping awareness of, and expectations for, new resources. Internally, technical services staff provide information and instruction to public services staff. Externally, they influence how new resources are integrated into the library website and other platforms. With appropriate “message control,” technical services staff can positively influence awareness of new resources while keeping everyone’s expectations in check.
During fall 2015, technical services staff at Georgia Southern University adopted a protocol for new resource rollouts that explicitly times and structures internal and external communications to ensure that all library staff are ready to support new resources as they go live. This protocol focuses on providing appropriate lead-time notifications to public services staff and “training the trainers” first, prior to releasing any external communications. Furthermore, this protocol integrates with activities of the library’s promotion committee, supporting smooth transition to public services promotion of new resources.
During this session, presenters will discuss this protocol in detail, with special emphasis on timing of internal and external communications, the importance of providing sufficient staff training and support materials early on, and the importance of maintaining objectivity and accuracy in all rollout communications and assets. Presenters will share protocol planning tools and worksheets, describe how these are integrated into implementation workflows, and engage participants in discussion about the role of technical services in new resource rollouts.
Presenters:
Jeff Mortimore & Debra Skinner
Zach S. Henderson Library
Georgia Southern University
Strategies To Make Library Resources Discoverable (Suhui Ho)
This document discusses strategies to make library resources more discoverable on the web. It suggests focusing navigation on resources, separating resources from services, using subject portals to group related resources and expertise, and embedding widgets to alert users of new resources. User surveys found that users understand library resources are better than Google but have difficulty finding resources on library websites. The strategies aim to guide users to resources through task-oriented design and pulling relevant content to library homepages.
Web-scale discovery tools have advantages like ease of use and speed but also limitations such as incomplete coverage and confusing interfaces. Instruction can help address limitations and move beyond just teaching tools to higher levels of information literacy. Discovery tools may index content inconsistently due to lack of metadata sharing between vendors. The interface can make it hard to distinguish resource types or access full text. Teaching how to develop search strategies and evaluate results can help students despite these limitations.
Libraries are running two spaces - physical and virtual. The e-Library or library's online presence is not the traditional library website. What new roles and skills are required to run a virtual library?
This document discusses challenges related to curating and providing access to open access collections. It outlines the author's institution's response which involves curating and vetting open access resources using a rubric. Some things that are working well include continued ingestion and discoverability. Areas for improvement include increasing automation for metadata and tracking usage. Going forward, the author proposes fully integrating open access into digital library collections and exploring additional access points, while continuing to focus on metadata and tracking for open access resources.
Create and maintain an up-to-date ResearcherID profile (Nader Ale Ebrahim)
A curriculum vitae (CV) allows you to showcase your academic and professional achievements in a concise and effective way. An online CV presents who you are to your academic and professional peers, and creating and maintaining one is an essential tool for disseminating your research and publications. A scholarly identifier such as your ResearcherID acts as an online CV and provides a solution to the author-ambiguity problem within the scholarly research community.
Use Google Analytics Stats to Improve Website (Suhui Ho)
This document discusses how to use Google Analytics to understand website visitors and improve a website. It recommends using Google Analytics reports to analyze popular content, traffic sources, and visitor navigation behavior. This can help with decisions about content priorities, information architecture, search engine optimization, and evaluating website services. The presenter provides an overview of key Google Analytics reports and how understanding visitor data can help improve a website.
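As one concrete example of the kind of analysis described, popular-content data exported from an analytics tool can be aggregated with a few lines of code. The CSV layout below (`page`, `pageviews`, `source` columns) is a hypothetical export format, not Google Analytics' actual schema.

```python
import csv
import io
from collections import Counter

def top_pages(csv_text, n=3):
    """Rank pages by total pageviews from a hypothetical analytics CSV
    export with columns: page, pageviews, source."""
    views = Counter()
    for row in csv.DictReader(io.StringIO(csv_text)):
        views[row["page"]] += int(row["pageviews"])
    return views.most_common(n)
```

The same pattern, summing a different column or grouping by `source`, would surface top traffic sources or entry pages, which is the sort of evidence the presenter suggests using to set content priorities.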
The modern library web environment consists of multiple content sources and applications that perform essential functions that often overlap and could potentially create a fractured user experience. For example, content in a library’s Drupal website may be replicated in LibGuides or WordPress blogs. Search functionality in a discovery platform may be replicated in a federated search tool or the ILS OPAC. This presentation provides tips, tackles technical and political challenges to building a single web experience for users, discusses solutions and use of APIs (application programming interfaces), provides concrete examples, and more.
Library Support for Journal Publishing: Emphasis on multi-modal open peer rev... (Karen Estlund)
Brief review of University of Oregon Libraries Journal Publishing program followed by in-depth look at Ada. Content also provided by Sarah Hamid and Bryce Peake
The document discusses how the UC San Diego Library embedded library resources directly into the university's learning management system, WebCT. By collaborating with campus partners, the library was able to link directly from course pages to curated subject guides of the library's top 5 resources for over 90% of subjects. This increased the visibility and accessibility of library resources for faculty and students. Key lessons included the importance of cross-campus collaboration, maintenance planning, and designating staff roles for ongoing web updates and content ownership. Embedding library resources directly into the systems users interact with on a daily basis helps place the library in the information space of the campus community.
A tale of two systems - Library Plus and Discover (Katherine Rose)
The University of Derby manages two separate discovery systems - Library Plus for higher education (HE) students and Discover for further education (FE) students at Buxton & Leek College. Library Plus was implemented in 2013 while Discover was launched in 2015 after a testing and implementation period. The implementation of Discover highlighted some unexpected issues and helped inform future improvements to Library Plus, including integrating the new Full Text Finder tool. Usability sessions were held to get user input on preferences for search defaults and interface elements in Library Plus. Ongoing work continues to make both discovery systems more accessible and showcase library resources.
This document summarizes Bethany Greene's investigation of perpetual access provisions for e-resources at UNC-Chapel Hill as a graduate student. She compiled data from license agreements and ran title lists against the Keepers Registry to determine what percentage of e-resources have perpetual access and what form it takes. Her results found that 15% of titles had no third-party archiving, 9% had access on local hard drives or media, and 26% were thought to be archived but actually were not.
Web scale discovery services have stronger functionality than Google Scholar for searching library resources. Discovery services allow limiting searches to a library's subscribed resources only, to peer-reviewed articles only, and integrate a library's catalogue and institutional repositories. They support more filtering facets and integration of subject indexes, Scopus, and Web of Science. However, Google Scholar may update search results more quickly, cover more open access and free sources, and have better relevancy and consistent features due to its focus on article searching.
We describe current work in federating data from institutional research profiling systems – providing single-point access to substantial numbers of investigators through concept-driven search, visualization of the relationships among those investigators, and the ability to interlink systems into a single information ecosystem.
This document discusses web-scale discovery services (WDS), including what they are, their key features and benefits, examples of major WDS providers, and considerations for implementation. Specifically:
- WDS allows users to search a library's entire collection through a single search box, ranking results based on relevancy across sources. This is presented as an improvement over federated search.
- Major WDS providers discussed include EBSCO Discovery Service, Ex Libris Primo, Serials Solutions Summon, and OCLC's WorldCat Local.
- A comparison of these providers shows they index a variety of content like the library catalog, e-books, journals, and more.
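The "single search box over a central index" idea above can be illustrated with a toy ranked search across pre-harvested records. This is a generic TF-IDF sketch under invented data (source names, titles, and tokens are made up); real discovery services use far richer relevancy models.

```python
import math
from collections import Counter

def build_index(records):
    """records: list of (source, title, tokens) harvested into one central index.
    Precomputes inverse document frequency for ranking."""
    df = Counter()
    for _, _, toks in records:
        df.update(set(toks))
    n = len(records)
    idf = {t: math.log(n / df[t]) + 1 for t in df}
    return records, idf

def search(query_tokens, index, top=3):
    """Score every record, regardless of source, and return the best matches."""
    records, idf = index
    scored = []
    for source, title, toks in records:
        tf = Counter(toks)
        score = sum(tf[t] * idf.get(t, 0.0) for t in query_tokens)
        if score > 0:
            scored.append((score, title, source))
    scored.sort(reverse=True)
    return [(title, source) for _, title, source in scored[:top]]
```

The key contrast with federated search is visible in the structure: ranking happens once, over a single merged index, rather than by stitching together separately ranked result lists from live remote targets.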
Linda Treffinger presentation for Univ Press as Digital Pub (avonderharr)
The document discusses strategies for improving the discoverability and visibility of eBooks. It summarizes findings from various surveys that show eBooks can be difficult for users to find due to a lack of targeted discovery tools and their exclusion from existing library discovery frameworks. The document proposes four principles to address this: socialization, openness, integration, and repurposing. Socialization involves cultivating expert recommendations and delivering ancillary content. Openness means understanding search engines and embracing standards. Integration is joining forces with ejournals. And repurposing encourages reuse of content segments.
This document summarizes a faculty development institute presentation about e-books. Rebecca Miller and Carolyn Meier discussed the history and current state of e-books, how they are used in higher education, and how to access e-books through the university libraries. They covered topics like e-book packages available, searching the catalog, accessing content, using different reading devices, and issues around digital rights management. Resources for free e-books online or through the public library were also mentioned.
Please follow these steps 1. Choose a topic from the subject lis.docx (mattjtoni51554)
Please follow these steps:
1. Choose a topic from the subject list
2. Research four (4) sites dealing with the same topic you have selected. (Do not use Wikipedia, About.com, or Google as final source sites)
3. Write a brief critical evaluation report on each of the four sites you have visited. (Please include hyperlink connections to the four sites.)
4. Your evaluation must have a title page listing the topic, your name and the name of the instructor, the course title and section number, and the date.
5. Your paper must be in Word format (.doc or .docx) or Rich Text Format (.rtf) if using a different word processing program. The total report should be typewritten, double-spaced, and 600-800 words in length. Papers are expected to demonstrate quality collegiate writing.
Submit your paper in the Web Evaluation Drop Box. Check the When Assignments are Due page for the due date.
Web Site Evaluation Criteria
· Authority
· Does the resource have some reputable organization or expert behind it?
· Does the author have standing in the field? How do you know?
· Content
· What aspects of the subject are covered (breadth)?
· What is the level of detail provided about the subject (depth)?
· Is the information fact or opinion?
· Does the site contain original information or simply links?
· Accuracy
· Is the information in the resource accurate?
· How do you know?
· Currency
· Is the resource updated or static?
· Objectivity
· How biased is the site?
· Does it carry balanced information based on objective research or does it convey propaganda and subjective opinions?
Suggested Topics for WEB Project
1. Climate Change
2. Space Exploration
3. Biotechnology / Medical Innovations
4. Nanotechnology
5. Communication Technologies / Social Media
6. Alternative Energy Sources
7. Artificial Intelligence / Robotics
8. Green Jobs of the (not so distant) Future
9. Have you seen it?? (Latest Innovations)
10. Women Inventors & Scientists
11. Reuse, Repurpose, Recycle
12. Security, Surveillance, and Drones
Is the Web a good research tool? This question is dependent on the researcher's objective. As in traditional print resources, one must use a method of critical analysis to determine its value. Here is a checklist for evaluating web resources to help in that determination.
Authority:
Is the information reliable?
Check the author's credentials and affiliation. Is the author an expert in the field?
Does the resource have a reputable organization or expert behind it?
Are the sources of information stated? Can you verify the information?
Can the author be contacted for clarification?
Check for organizational or author biases.
Scope:
Is the material at this site useful, unique, accurate or is it derivative, repetitious, or doubtful?
Is the information available in other formats?
Is the purpose of the resource clearly stated? Does it fulfill its purpose?
What items are included in the resource? What subject areas, time periods, formats, or types of material are covered?
This document discusses how academics can leverage their existing academic publications and research to establish an online presence through search engine optimization. It notes that academics already produce large volumes of well-written, keyword-rich text through their research and publishing activities. This body of work represents a valuable resource that can be used to create web content and populate various online platforms. The document outlines techniques for hosting academic content online, submitting sites to search engines, and monitoring website visibility over time to improve search engine rankings. It argues that with some SEO efforts, academics can promote their research topics and expertise online without incurring significant costs.
Relationship Building and Advocacy Across the CampusUCD Library
Presentation given by Julia Barrett, Research Services Manager at University College Dublin Library, to the ANLTC Seminar: Supporting the Activities of Your Research Community - Issues and Initiatives, held on December 3, 2014 at the Royal Irish Academy, Dublin, Ireland.
Shaping Expectations: Defining and Refining the Role of Technical Services in...NASIG
From trial to implementation, technical services staff play an important role in shaping awareness of, and expectations for, new resources. Internally, technical services staff provide information and instruction to public services staff. Externally, they influence how new resources are integrated into the library website and other platforms. With appropriate “message control,” technical services staff can positively influence awareness of new resources while keeping everyone’s expectations in check.
During fall 2015, technical services staff at Georgia Southern University adopted a protocol for new resource rollouts that explicitly times and structures internal and external communications to ensure that all library staff are ready to support new resources as they go live. This protocol focuses on providing appropriate lead-time notifications to public services staff and “training the trainers” first, prior to releasing any external communications. Furthermore, this protocol integrates with activities of the library’s promotion committee, supporting smooth transition to public services promotion of new resources.
During this session, presenters will discuss this protocol in detail, with special emphasis on timing of internal and external communications, the importance of providing sufficient staff training and support materials early on, and the importance of maintaining objectivity and accuracy in all rollout communications and assets. Presenters will share protocol planning tools and worksheets, describe how these are integrated into implementation workflows, and engage participants in discussion about the role of technical services in new resource rollouts.
Presenters:
Jeff Mortimore & Debra Skinner
Zach S. Henderson Library
Georgia Southern University
Strategies To Make Library Resources DiscovableSuhui Ho
This document discusses strategies to make library resources more discoverable on the web. It suggests focusing navigation on resources, separating resources from services, using subject portals to group related resources and expertise, and embedding widgets to alert users of new resources. User surveys found that users understand library resources are better than Google but have difficulty finding resources on library websites. The strategies aim to guide users to resources through task-oriented design and pulling relevant content to library homepages.
Web-scale discovery tools have advantages like ease of use and speed but also limitations such as incomplete coverage and confusing interfaces. Instruction can help address limitations and move beyond just teaching tools to higher levels of information literacy. Discovery tools may index content inconsistently due to lack of metadata sharing between vendors. The interface can make it hard to distinguish resource types or access full text. Teaching how to develop search strategies and evaluate results can help students despite these limitations.
Libraries are running two spaces - physical and virtual. The e-Library or library's online presence is not the traditional library website. What new roles and skills are required to run a virtual library?
This document discusses challenges related to curating and providing access to open access collections. It outlines the author's institution's response which involves curating and vetting open access resources using a rubric. Some things that are working well include continued ingestion and discoverability. Areas for improvement include increasing automation for metadata and tracking usage. Going forward, the author proposes fully integrating open access into digital library collections and exploring additional access points, while continuing to focus on metadata and tracking for open access resources.
Create and maintain an up-to-date ResearcherID profileNader Ale Ebrahim
A curriculum vitae (CV) allows you to showcase yourself and your academic and professional achievements in a concise and effective way. Creating an online CV presenting who you are to your academic and professional peers. Creating and maintaining your online CV is an essential tool in disseminating your research and publications. A scholarly identifiers like your ResearcherID, is one of the online CV and provides a solution to the author ambiguity problem within the scholarly research community.
Use Google Analytics Stats to Improve WebsiteSuhui Ho
This document discusses how to use Google Analytics to understand website visitors and improve a website. It recommends using Google Analytics reports to analyze popular content, traffic sources, and visitor navigation behavior. This can help with decisions about content priorities, information architecture, search engine optimization, and evaluating website services. The presenter provides an overview of key Google Analytics reports and how understanding visitor data can help improve a website.
The modern library web environment consists of multiple content sources and applications that perform essential functions that often overlap and could potentially create a fractured user experience. For example, content in a library’s Drupal website may be replicated in LibGuides or WordPress blogs. Search functionality in a discovery platform may be replicated in a federated search tool or the ILS OPAC. This presentation provides tips, tackles technical and political challenges to building a single web experience for users, discusses solutions and use of APIs (application programming interfaces), provides concrete examples, and more.
Library Support for Journal Publishing: Emphasis on multi-modal open peer rev...Karen Estlund
Brief review of University of Oregon Libraries Journal Publishing program followed by in-depth look at Ada. Content also provided by Sarah Hamid and Bryce Peake
The document discusses how the UC San Diego Library embedded library resources directly into the university's learning management system, WebCT. By collaborating with campus partners, the library was able to link directly from course pages to curated subject guides of the library's top 5 resources for over 90% of subjects. This increased the visibility and accessibility of library resources for faculty and students. Key lessons included the importance of cross-campus collaboration, maintenance planning, and designating staff roles for ongoing web updates and content ownership. Embedding library resources directly into the systems users interact with on a daily basis helps place the library in the information space of the campus community.
A tale of two systems - Library Plus and Discover — Katherine Rose
The University of Derby manages two separate discovery systems - Library Plus for higher education (HE) students and Discover for further education (FE) students at Buxton & Leek College. Library Plus was implemented in 2013 while Discover was launched in 2015 after a testing and implementation period. The implementation of Discover highlighted some unexpected issues and helped inform future improvements to Library Plus, including integrating the new Full Text Finder tool. Usability sessions were held to get user input on preferences for search defaults and interface elements in Library Plus. Ongoing work continues to make both discovery systems more accessible and showcase library resources.
This document summarizes Bethany Greene's investigation of perpetual access provisions for e-resources at UNC-Chapel Hill as a graduate student. She compiled data from license agreements and ran title lists against the Keepers Registry to determine what percentage of e-resources have perpetual access and what form it takes. Her results found that 15% of titles had no third-party archiving, 9% had access on local hard drives or media, and 26% were thought to be archived but actually were not.
Web scale discovery services have stronger functionality than Google Scholar for searching library resources. Discovery services allow limiting searches to a library's subscribed resources only, to peer-reviewed articles only, and integrate a library's catalogue and institutional repositories. They support more filtering facets and integration of subject indexes, Scopus, and Web of Science. However, Google Scholar may update search results more quickly, cover more open access and free sources, and have better relevancy and consistent features due to its focus on article searching.
We describe current work in federating data from institutional research profiling systems – providing single-point access to substantial numbers of investigators through concept-driven search, visualization of the relationships among those investigators, and the ability to interlink systems into a single information ecosystem.
This document discusses web-scale discovery services (WDS), including what they are, their key features and benefits, examples of major WDS providers, and considerations for implementation. Specifically:
- WDS allows users to search a library's entire collection through a single search box, ranking results based on relevancy across sources. This is presented as an improvement over federated search.
- Major WDS providers discussed include EBSCO Discovery Service, Ex Libris Primo, Serials Solutions Summon, and OCLC's WorldCat Local.
- A comparison of these providers shows they index a variety of content like the library catalog, e-books, journals, and more.
- The
Linda Treffinger presentation for Univ Press as Digital Pub — avonderharr
The document discusses strategies for improving the discoverability and visibility of eBooks. It summarizes findings from various surveys that show eBooks can be difficult for users to find due to a lack of targeted discovery tools and their exclusion from existing library discovery frameworks. The document proposes four principles to address this: socialization, openness, integration, and repurposing. Socialization involves cultivating expert recommendations and delivering ancillary content. Openness means understanding search engines and embracing standards. Integration is joining forces with ejournals. And repurposing encourages reuse of content segments.
This document summarizes a faculty development institute presentation about e-books. Rebecca Miller and Carolyn Meier discussed the history and current state of e-books, how they are used in higher education, and how to access e-books through the university libraries. They covered topics like e-book packages available, searching the catalog, accessing content, using different reading devices, and issues around digital rights management. Resources for free e-books online or through the public library were also mentioned.
Please follow these steps 1. Choose a topic from the subject lis.docx — mattjtoni51554
Please follow these steps:
1. Choose a topic from the subject list
2. Research four (4) sites dealing with the same topic you have selected. (Do not use Wikipedia, About.com, or Google as final source sites)
3. Write a brief critical evaluation report on each of the four sites you have visited. (Please include hyperlink connections to the four sites.)
4. Your evaluation must have a title page listing the topic, your name and the name of the instructor, the course title and section number, and the date.
5. Your paper must be in Word format ( doc. or docx.) or a Rich Text Format (.rtf) format (if using a different word processing program). The total report should be type-written, double-spaced, and 600 - 800 words in length. Papers are expected to demonstrate quality collegiate writing.
Submit your paper in the Web Evaluation Drop Box. Check the When Assignments are Due page for the due date.
Web Site Evaluation Criteria
· Authority
· Does the resource have some reputable organization or expert behind it?
· Does the author have standing in the field? How do you know?
· Content
· What aspects of the subject are covered (breadth)?
· What is the level of detail provided about the subject (depth)?
· Is the information fact or opinion?
· Does the site contain original information or simply links?
· Accuracy
· Is the information in the resource accurate?
· How do you know?
· Currency
· Is the resource updated or static?
· Objectivity
· How biased is the site?
· Does it carry balanced information based on objective research or does it convey propaganda and subjective opinions?
Suggested Topics for WEB Project
1. Climate Change
2. Space Exploration
3. Biotechnology / Medical Innovations
4. Nanotechnology
5. Communication Technologies / Social Media
6. Alternative Energy Sources
7. Artificial Intelligence / Robotics
8. Green Jobs of the (not so distant) Future
9. Have you seen it?? (Latest Innovations)
10. Women Inventors & Scientists
11. Reuse, Repurpose, Recycle
12. Security, Surveillance, and Drones
Is the Web a good research tool? This question is dependent on the researcher's objective. As in traditional print resources, one must use a method of critical analysis to determine its value. Here is a checklist for evaluating web resources to help in that determination.
Authority:
Is the information reliable?
Check the author's credentials and affiliation. Is the author an expert in the field?
Does the resource have a reputable organization or expert behind it?
Are the sources of information stated? Can you verify the information?
Can the author be contacted for clarification?
Check for organizational or author biases.
Scope:
Is the material at this site useful, unique, accurate or is it derivative, repetitious, or doubtful?
Is the information available in other formats?
Is the purpose of the resource clearly stated? Does it fulfill its purpose?
What items are included in the resource? What subject areas, time periods, formats, or types of material does it cover?
This document discusses how academics can leverage their existing academic publications and research to establish an online presence through search engine optimization. It notes that academics already produce large volumes of well-written, keyword-rich text through their research and publishing activities. This body of work represents a valuable resource that can be used to create web content and populate various online platforms. The document outlines techniques for hosting academic content online, submitting sites to search engines, and monitoring website visibility over time to improve search engine rankings. It argues that with some SEO efforts, academics can promote their research topics and expertise online without incurring significant costs.
The document provides an overview of Valerie Forrestal's presentation for a web services librarian position. It discusses designing an intuitive library website with clear navigation, simple design, and engaging content. It emphasizes user testing during the design process and defining user groups to meet their needs. The presentation also covers migrating content to a new content management system and training staff on maintenance responsibilities.
This is a sneak peek into the 2014 Spring EAIE Academy course 'SEO and online content: strategies for international student recruitment'.
Are you an international higher education professional? Check out all the training events of the European Association for International Education (EAIE) here: www.eaie.org/training
VIVO is an open-source semantic web application and information model that enables discovery of research across disciplines at institutions. It harvests data from verified sources to create detailed profiles of faculty and researchers. The structured linked data in VIVO allows for relationships and connections between researchers, publications, grants, and more to be visualized. Libraries can play important roles in implementing and supporting VIVO through activities like outreach, training, ontology development, and technical support.
The document outlines the key aspects of web content strategy, including defining content strategy and common workflows. It discusses typical problems with web content like it being a low priority and lacking a consistent voice. The summary then provides an overview of the typical phases in a content strategy workflow - analyzing existing content, planning new content needs, gathering/writing content, and maintaining it over time. It also briefly mentions some organizations that are doing content strategy well, like REI.
This document discusses strategies for implementing social media and metadata management in SharePoint. It begins with definitions of social media and metadata. It then discusses why metadata is important for enabling search, discovery, and reuse of content. Common problems with inconsistent or lacking metadata are explained. The document outlines best practices for planning a social media strategy including defining requirements, centralizing taxonomy, and recruiting key stakeholders. Emerging technologies that integrate with social media are also highlighted.
Congratulation, you published a paper. Has anyone read it? or Cited it? Citation tracking is used to discover how many times a particular article has been cited by other articles. Citation counts are not perfect. They are influenced by a number of factors. Review articles are sometimes more often cited than their quality would warrant. Poor quality papers can be cited while being criticized or refuted. In this workshop, I will explain about the advantages of "Citation Tracking" and introduced some “Research Tools” for improving the research impact and citations by “Tracking Citations”.
Linked Data Love: research representation, discovery, and assessment
#ALAAC15
The explosion of linked data platforms and data stores over the last five years has been profound – both in terms of quantity of data as well as its potential impact. Research information systems such as VIVO (www.vivoweb.org) play a significant role in enabling this work. VIVO is an open source, Semantic Web-based application that provides an integrated, searchable view of the scholarly activities of an organization. The uniform semantic structure of VIVO-ISF data enables a new class of tools to advance science. This presentation will provide a brief introduction and update to VIVO and present ways that this semantically-rich data can enable visualizations, reporting and assessment, next-generation collaboration and team building, and enhanced multi-site search. Libraries are uniquely positioned to facilitate the open representation of research information and its subsequent use to spur collaboration, discovery, and assessment. The talk will conclude with a description of ways librarians are engaged in this work – including visioning, metadata and ontology creation, policy creation, data curation and management, technical, and engagement activities.
Kristi Holmes, PhD
Director, Galter Health Sciences Library
Director of Evaluation, NUCATS
Associate Professor, Preventive Medicine-Health and Biomedical Informatics
Northwestern University Feinberg School of Medicine
Higher Education University Websites: Improving Information Architecture & Sc... — Jorge Serrano-Cobos
The document provides guidance on improving the information architecture and scientific visibility of university websites by outlining key questions to consider regarding goals, audiences, and analysis, recommending benchmarking other top university sites, and emphasizing the importance of interaction design, open access initiatives, and social media to increase a university's scientific reputation and visibility.
This presentation was provided by Carolyn Hansen of the University of Cincinnati during the NISO Training Thursday event, Metadata and the IR, held on Thursday, February 23, 2017.
A machine learning approach to web page filtering using ... — butest
This document describes a machine learning approach to web page filtering that combines content and structural analysis. The proposed approach represents web pages with features extracted from content and links. These features are used as input for machine learning algorithms like neural networks and support vector machines to classify pages. An experiment compares this approach to keyword-based and lexicon-based filtering, finding the proposed approach generally performs better, especially with few training documents.
The Stanford Workshop focused on creating plans to expedite a shift in how knowledge and information resources are managed and discovered through linked data. The goal was to identify capabilities and design new tools, processes, and systems that move beyond current metadata practices to link related resources and provide improved navigation and discovery through open feedback. A number of organizations from around the world participated in the workshop to discuss these issues.
Multichannel Self-Organized Learning and Research in Web 2.0 Environment — Malinka Ivanova
The document discusses building a multichannel learning environment to support self-organized learners using Web 2.0 technologies. It analyzes various start pages that could provide components for such an environment, including features for authoring, accessing information, research, collaboration, and personalization. A methodology is used involving investigating start pages, creating evaluation criteria, exploring and practicing with start pages, and forming results.
Metadata Management In A Social Media World, SPSBOS, 2/2010 — Christian Buckley
Presentation given at the Feb 27, 2010 SharePoint Saturday event in Boston (Waltham, MA) by Christian Buckley, Senior Product Manager with echoTechnology. The premise of the presentation is that metadata and taxonomy drive the integration and business utility of social media.
Webometrics is a quantitative analysis of universities' web presence and impact. It ranks universities based on (1) Activity, which measures presence, excellence, and openness, and (2) Visibility, which measures links and impact. The ranking considers millions of web pages and links to provide a multidimensional view of university performance and influence online. However, it is limited by not distinguishing institution types and having a bias towards larger, more research-focused universities. Proper web naming practices are also important for visibility.
The document discusses various online tools for effective literature management and reference searching. It introduces popular tools like Mendeley, EndNote and Zotero for building local reference databases and sharing references online. Social bookmarking and networking sites like Diigo, SlideShare and Wikipedia are also covered that allow searching references through tags and connecting with other users.
Architecting a CMS for a content centered website
1. Architecting a CMS for a content centered website
Or ‘you really do learn a lot at these conferences’
Kristin Rowley IA/UX
University of Colorado Denver | Anschutz Medical Campus
5. 1. Switch from http to https
2. Update the site and URL structure
3. Update the architecture and content of sites as they move into the new CMS
4. Add a new search engine
5. Implement a taxonomy for tagging content
6. Break the university website into two campus-specific top-level domains
7. Move to new templates (responsive) and themes (standardized)
8. Integrate with authoritative sources for content
9. Create new customer journey sites for prospective and current students on both campuses
10. Improve accessibility
11. Improve site security and performance
Project Goals
6. 1. The structure of the site should make content easier to find by search engines
2. The structure of the CMS should make content easily sharable and reusable (content types and a taxonomy)
3. Improve the site navigation and the findability of content (site maps and wireframes!)
My Goals
8. The structure of the site should make content easier to find by search engines
10. When asked, almost 87% of prospective students say they use search engines to find university websites, program information and applications.
Ruffalo Noel Levitz 2017 E-expectations report
11. Once on the university website, high school students and their parents use search engines more than site navigation to find specific information on the site.
Ruffalo Noel Levitz 2017 E-expectations report
12. People generally scan search engine results in the order in which the results appear and then fixate on the results that rank highest, even when lower-ranked results are more relevant to their search.
From Robert Epstein and Ronald E. Robertson’s paper “The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections” (2015).
18. The benefits of hub and spoke
•Better SEO
•Topics are included in the URL
•We’re creating content clusters
19. The benefits of hub and spoke
•Shorter, learnable URLs
•/offices/hr
•/offices/it
•/offices/facilities
20. The benefits of hub and spoke
•Navigation and content that serve a specific audience
•We no longer have to have pages that are everything to everyone
•Students can go to /students to find information and postdocs to /postdocs
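The hub-and-spoke structure described above can be sketched as a simple map from hubs to their spokes. The hub and spoke names below are illustrative examples, not the university's actual site map:

```python
# Hub-and-spoke sketch: each hub (an audience or function) owns a flat
# set of spokes, which yields short, learnable URLs and topic-based
# content clusters. All names here are assumptions for illustration.
HUBS = {
    "offices": ["hr", "it", "facilities"],
    "students": ["admissions", "tuition", "advising"],
    "postdocs": ["funding", "training"],
}

def spoke_urls(hubs):
    """Return the URL path for every spoke, grouped under its hub."""
    return [f"/{hub}/{spoke}" for hub, spokes in hubs.items() for spoke in spokes]

print(spoke_urls(HUBS))
```

Because every spoke hangs directly off its hub, each page gets a short, guessable URL like /offices/hr, and pages on the same topic naturally cluster under one path for search engines.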
21. The structure of the CMS should make content easily sharable and reusable (content types and a taxonomy)
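One way to picture a tagging taxonomy is as a small hierarchical controlled vocabulary. The terms below are assumptions for illustration, not the university's real taxonomy:

```python
# Minimal taxonomy sketch: hierarchical terms used to tag content so it
# can be filtered, shared and reused across sites. All term names here
# are hypothetical examples.
TAXONOMY = {
    "Audience": {
        "Students": ["Prospective", "Current"],
        "Faculty": [],
        "Staff": [],
    },
    "Campus": {
        "Denver": [],
        "Anschutz": [],
    },
}

def flatten(tree, path=()):
    """Yield every term as its full path, e.g. ('Audience', 'Students', 'Prospective')."""
    for term, children in tree.items():
        yield path + (term,)
        if isinstance(children, dict):
            yield from flatten(children, path + (term,))
        else:
            for leaf in children:
                yield path + (term, leaf)

terms = list(flatten(TAXONOMY))
```

Tagging every piece of content with terms from one shared vocabulary like this is what lets the CMS surface the same content on multiple sites consistently.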
23. Ren Pope
IA Summit 2016, Atlanta, Georgia, May 4-8, 2016
Ontology Dojo: Learn How to Use Ontology to Define Your Information And Supercharge Your Deliverables
https://blueprintdigital.com/ia-summit-2016/ren-pope/
24. • A specification of a conceptualization
(Stanford: http://www-ksl.stanford.edu/kst/what-is-an-ontology.html)
• A branch of metaphysics concerned with the nature and relations of being; ontology deals with abstract entities
(Merriam-Webster: https://www.merriam-webster.com/dictionary/ontology)
• [A]n ontology is a formal naming and definition of the types, properties, and interrelationships of the entities that really exist in a particular domain of discourse.
(Wikipedia: https://en.wikipedia.org/wiki/Ontology_(information_science))
26. Step 1: I evaluated the website for content that could be structured, is used by multiple sites (no one-offs), and needs to be consistent.
• Bios/profiles
• Academic programs
• Events
• FAQs
• Newsrooms/articles/bylines
• Testimonials
• Alerts
• Tuition & fees
• Lists
27. Academic program content type
Degree program title
Program level
Degree earned
Associated school or college
Campus where program is delivered
Type of classroom
Area of interest
Number of credits
Description
Event content type
Title
Description
Date
Time
Location (building address)
Campus
Image
Target audience
Step 2: Once I had that list, I looked at each type of structured content and broke it out into properties (the individual pieces of content that make up the content type).
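A content type and its properties can be expressed as a structured record. Here is a minimal sketch using the Event properties listed above; the field types and the sample values are assumptions, not the actual CMS configuration:

```python
from dataclasses import dataclass
from datetime import date, time

@dataclass
class Event:
    """Event content type: each field is one property from the list above."""
    title: str
    description: str
    event_date: date
    event_time: time
    location: str        # building address
    campus: str
    image: str           # image URL or asset id
    target_audience: str

# Hypothetical example entry
open_house = Event(
    title="Open House",
    description="Tour the campus and meet program advisors.",
    event_date=date(2018, 4, 21),
    event_time=time(10, 0),
    location="123 Example St",
    campus="Anschutz",
    image="/images/open-house.jpg",
    target_audience="Prospective students",
)
print(open_house.title)
```

Because every event is entered through the same set of fields, the CMS can render, filter, and reuse events consistently across sites instead of treating each one as free-form page text.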
28. Building
Building name
Building code
Address
Campus
Person
Email address
Name
Academic degree
Title
Phone number
Address (building)
Campus
Step 3: Properties that were shared by more than one content type were broken out into related content types so there would be a single source for this information.
(Slide callouts: most of these properties are populated from authoritative data sources; a few are maintained manually.)
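The single-source idea in Step 3 can be sketched with two related content types, where a Person references a Building instead of duplicating its address. The names and values below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Building:
    """Shared content type: the single source of truth for building data."""
    name: str
    code: str
    address: str
    campus: str

@dataclass
class Person:
    """Person references a Building rather than copying its address."""
    name: str
    email: str
    title: str
    building: Building

# Hypothetical records
research_1 = Building("Research 1", "R1", "123 Example Ave", "Anschutz")
staffer = Person("Jane Doe", "jane.doe@example.edu", "Program Manager", research_1)

# Correcting the building record updates every person who references it.
research_1.address = "125 Example Ave"
print(staffer.building.address)
```

With building and person data fed from authoritative sources, a corrected address propagates everywhere it is referenced instead of requiring edits on dozens of pages.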
29. Building
Building name
Building code
Address
Campus
Person
Email address
Name
Academic degree
Title
Address (building)
Step 4: I started diagramming all this out as an ontology
Event
Title
Description
Date
Campus
Bio
Name
Title
Short bio
Campus
Testimonial
Name
Quote
School/college
Campus
Academic program
Name
Program level
Degree earned
Campus
(Diagram links: “Has a” and “May refer to…” relationships connect these content types.)
30. Step 5: I created specification documents for each content type that would help the developers and designer bring these content types to life.
34. Credits
• A big thanks to Jorge Arango and Ren Pope for some really great ideas and inspiring sessions.
• Images of the university are courtesy of the University of Colorado Denver | Anschutz Medical Campus.
• Other images (unless otherwise noted) are used under the Creative Commons license from https://pixabay.com/
Editor's Notes
Let me start off by introducing myself. My name is Kristin Rowley and I work in the IT department at the University of Colorado Denver, which has two campuses (one medical, one traditional). I consider myself an IA who also does UX.
While I work primarily on the university’s public-facing website, I also work on other IT projects around the university, helping to improve the user experience of web apps and other user-focused applications – this can mean anything from doing UX reviews to creating taxonomies or writing understandable error messages.
The current university website was set up more than 10 years ago, and while there have been some localized redesigns, the plumbing of the site hasn’t changed in that time.
Every time a new web browser, update or security patch is released, everyone in the department holds their breath waiting to see if something will break. And, for discussion at another session (or over drinks), the CMS currently running most of the university websites is SharePoint 2010 – that’s the reason we’re having so many issues.
And while updates to the CMS are dangerous, one of the biggest issues is that, due to the customizations needed to make SharePoint workable as a public website, the system has become overly complex and fragile. For example, back in 2015, the web team released responsive templates – but some sites still have not been able to migrate to them because too many of their pages break when they switch, so we have large sections of our website that are not mobile friendly.
And while the website may have started out fairly straightforward when it was first built, over the last almost 10 years it has ballooned to over 35K pages, and finding sites or content is difficult or impossible.
Then, after much begging, we convinced senior staff that a new CMS was required, sooner rather than later, and after a new one was selected (by committee, of course) we were given about 6 months to set it up before any sites needed to start moving in.
A set of project goals was determined, ranging from switching to https and adding a new search engine to creating new templates, plus all the standard things (taxonomies, improved customer journeys), with the over-arching goals of improved usability, accessibility and security.
As the only IA/UX person on the team, I helped out with quite a few of the overall goals, but I chose to focus on these three things…
I picked these three because:
At the university, we have a dispersed website management model where, of the 300+ websites that make up the university website, only a few of the very top-level sites are centrally managed. Most are individually managed by departments or schools.
Centrally, we can control the top-level site structure, the features and functionality available within the CMS, and the templates and themes, but the content and how pages are laid out are up to individual units. And most of the people managing these unit websites are not web professionals (they may know very little about HTML, or web best practices).
So with the new CMS, I really want to build a lot of best practices into the CMS itself.
And while the third goal is always interesting (who doesn't love sitemaps and wireframes), I'm going to focus on the first two and the assist I got from this conference when planning the solutions.
The first problem I had to solve was how to structure the site so that content was easier to find.
So first, a little background on why I thought this was so important to the university – this chart is from research done last year on which tactics are most effective when trying to improve SEO. And while content, keywords, and links are still at the top [CLICK], website structure is #4. So that is something I could do that would make an appreciable difference to search engine rankings.
Also, being findable is very important to the university because we're a public university up against private and for-profit universities that can afford large ad campaigns (and we usually can't), and stats like this one emphasize the importance of ranking in search to our overall university recruitment goals.
And then there is research like this showing that students and parents are using search engines not just to find sites, but to navigate them.
So, no more deeply nested subsites, or sites that are org-chart driven. The new university website needs to be structured in a way that makes finding content easier – our bottom line depends on it.
And while being found is important,
Being found first is even more important
From Robert Epstein and Ronald E. Robertson's paper "The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections" (2015).
They used eye tracking studies and determined people are most likely to click on one of the top three results, regardless of the relevancy to their search or the credibility of the website.
Their conclusion was that this happens because people trust search engine companies to assign higher ranks to the results best suited to their needs, even though these same people generally have no idea how search engines work. They assume that Google and Bing are looking out for their best interests.
So when viewing search engine results, if a for-profit university is listed above us, prospective students may consider it more credible or relevant.
So how was I going to set up a structure that would help CMS users make better, more searchable websites?
So this will be my first of two paraphrases of previous IA Summit sessions – and hopefully I'm close to the bullseye, but just in case I'm off, please go online and view them for yourselves.
Back in 2016 at the IA Summit in Atlanta, Jorge Arango did a talk about place-making, and creating environments that users can understand/learn and can easily move around in. He used the architecture of Disneyland as an example, speaking about how you want the environments that you create to be able to grow and expand with your users over time.
This talk really stuck with me. A year later, when thinking about how to architect the new website, I revisited the concepts he presented along with analyzing some of my favorite ‘well organized’ websites, (like the BBC site), [CLICK] and I made the recommendation that we move to a hub and spoke architecture for the university.
A hub and spoke architecture is basically what you see here – there is a central hub, surrounded by spokes – I’ll get into the benefits of this structure in a minute.
So, how did I work out the details…
To come up with a model, I:
Looked at all our current websites and broke them into groups. I found there were five general types of sites:
School/college sites
Campus specific sites
Audience specific sites (for groups that don’t identify with a specific campus or college)
Verticals (research, healthcare..)
‘Collection of’ sites (admin offices, centers, clinics, we’re even planning a policy spoke)
I took Jorge's presentation quite literally in the initial version of the diagram (see here): I overlaid my recommendations for the university website on top of a map of Disneyland to help make the concept more easily understood by stakeholders, like deans and chancellors, who are not web professionals and don't know anything about sitemaps.
The benefits of moving to the hub and spoke model include:
Better SEO (of course). One of the reasons for this is that each spoke uses keywords to identify the 'type' of site in the URL.
For example – On the current website, we have information about research at the university in (at least) three different subsites, with three different site goals, diluting our ability to rank high for research terms (we’re competing against ourselves in a way), and we’re creating confusion about where in-bound links should point.
With hub and spoke, all research-related sites will be located in a single spoke, and their URLs will contain the keyword 'research' that helps describe the content. Because we're a university, we have sites named things like CCTSI or COMIRB – designating them as research-focused sites (using the word 'research' in the URL) will help make them more findable.
Also, grouping them together allows for them to combine their SEO juice. This is the concept of ‘content clustering’ to get more SEO weight.
Content clustering works because, instead of searching for keywords, people have started searching for questions or concepts, thanks in part to voice search, Siri, and Alexa. Creating content clusters (also referred to as 'pillars') has been shown to improve search rankings because it gives search engines a better semantic understanding of the 'aboutness' of the content taken as a whole. (HubSpot and Moz.com have been doing a lot of research around this concept.)
Shorter, learnable URLs
For example, all administrative offices will be under /offices – need to find HR- easy, it’s at /offices/hr. Need to find the Office of the Chancellor - /offices/chancellor.
And because sites are no longer deeply nested – only 1 or 2 levels down at the most – shorter URLs are also an SEO win.
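The spoke-based URL scheme can be pictured as a small routing table. This is a hypothetical Python sketch: the spoke and site lists are invented for illustration (apart from /offices/hr and /offices/chancellor, which come from the slide), not the university's actual configuration.

```python
# Hypothetical sketch of the spoke-based URL scheme: every site sits one
# level under a keyword spoke, so URLs stay short and learnable.
SPOKES = {
    "offices": {"hr", "chancellor"},
    "research": {"cctsi", "comirb"},
}

def spoke_url(spoke: str, site: str) -> str:
    """Build the short URL for a site within its spoke."""
    if site not in SPOKES.get(spoke, set()):
        raise KeyError(f"unknown site {site!r} in spoke {spoke!r}")
    return f"/{spoke}/{site}"

print(spoke_url("offices", "hr"))  # /offices/hr
```

Because every URL is just /spoke/site, users can guess an address once they've learned one spoke, which is the 'learnable URLs' benefit above.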
The third benefit of moving to hub and spoke is that:
Navigation and content serve a specific audience – they no longer have to serve all audiences. We can now have website sections that focus on specific types of users and their tasks.
When talking about this point, I always call it the 'multiple front doors' benefit. In our current website, there is one front door – the top-level homepage (it's our primary landing page with 17% of all entrances). Most users come through this single entry point at some time, and then have to find their way to the content they really want. Historically, this has meant that the homepage needs to address all audiences; throw in the politics of homepage content, and this page has had to be everything to everyone (visit a university website and notice how much navigation they have on their homepage).
Primary navigation, audience-based navigation, quick links, in-page navigation, fat footers. While not all universities do this, quite a few still do.
Moving to this new architecture allows us to create multiple, topic or audience focused front doors that will take the pressure off the homepage. And hopefully over time, search engines will start to recognize these new doors as well, sending users to those landing pages instead.
For these front-door spoke sites:
Their navigation is specific to their primary audience, their content answers those needs and helps that audience complete their tasks – it’s focused and simplified (hopefully).
Already, hub and spoke has been paying off in other ways: it has made building out sites in the new CMS a lot easier. Sites can move from the old CMS to a spoke in the new CMS with fewer dependencies.
For example, because centers and clinics now have their own spoke, they can move over any time without having to wait on the school/college site to move first.
And when the university has a new initiative that it wants to promote, something like 'healthcare', it can become its own spoke instead of having to figure out where to 'fit it in'.
The second goal I wanted to work on was around sharable and reusable content.
Currently, most of our webpages are being built as one-offs, with little or no shared content (mostly because it's very difficult to do in the current CMS). So the CMS itself is making it difficult for our websites to rank, because of three things:
Duplicate content (people copy/paste if they can’t find shared content)
Confusing or inaccurate content – because content gets out of date or isn’t being pulled from authoritative sources
Hard to reuse content (pulling in a tuition & fees table can overwhelm a page, so content managers aren’t doing it – they link to a tuition table on a different site, dropping the user into unknown territory)
We knew all along that we were going to need to set up content types and a taxonomy in the new CMS, but where to start…
Again at the IA Summit in Atlanta, Ren Pope did a session on ontologies and basically told us not to be scared of using them to diagram relationships. He was very convincing.
At this time, the university was in the process of selecting the new CMS, so it was in the back of my mind – and I had a ‘light bulb’ moment in Ren’s session. Creating an ontology was the perfect way to work out what we needed to do.
For those that may have missed his talk, you can watch it online, but in the meantime, just to catch everybody up
While there are multiple meanings – for IAs, an ontology is basically a way to model a system or environment using objects, their properties, and their relationships.
Of these definitions, the Wikipedia one probably makes the most sense, but I do love the top one from Stanford – I think I’m going to start using that phrase as my elevator pitch when someone asks what an IA does – we create specifications of conceptualizations.
So how do you create a set of content types that allow CMS users to easily share, reuse and repurpose content when you’re starting from scratch?
First I looked at our current websites for content that:
- could be structured
- was used by multiple sites (no one-offs)
- but I also considered content that needed to be consistent everywhere it appeared, even if it was only in a few places
The list of content types that I came up with include:
Bios/profiles
Academic programs
Events
FAQs
Newsrooms/articles/bylines
Testimonials
Alerts
Out-of-the-box content types that we also used:
Lists
Blogs
And since this initial list, there have been additional types added – but this is the list that got us going.
This was initially a big Excel spreadsheet with a tab for each content type and rows for each property (field). I added additional information around the fields where necessary – like example text – to help everyone on the team understand what was included.
Once I had a list of fields, I ran them by a stakeholder group for feedback to get the final list of content types and properties.
Next I did analysis. First, I looked for blocks of content (meaning multiple fields) that were used by multiple content types and could be pulled out, refactored, and made into their own related or child content types.
[CLICK]
For example, I noticed that:
Address (which included building, street address, city, state, and zip), used by department, event, and person, was pulled out on its own as a related content type so that there was a single source for that information that could be easily updated. By doing this, it could also be used to create a directory of university buildings using the new address content type.
Person information is used by bio and byline, and on its own could be used to create a directory of people.
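The Address refactor can be sketched with Python dataclasses. This is a hypothetical illustration: the class and field names follow the talk, but the sample values are invented placeholders.

```python
from dataclasses import dataclass

# A minimal sketch of pulling the shared Address block out into its own
# related content type, so several content types reference one record.
@dataclass
class Address:
    building: str
    street: str
    city: str
    state: str
    zip_code: str

@dataclass
class Person:
    name: str
    address: Address   # refers to the shared record instead of copying fields

@dataclass
class Event:
    title: str
    location: Address  # the same Address record can back an event

# One authoritative Address record, reused by two content types
# (all values are placeholders):
hq = Address("Building A", "123 Main St", "Anytown", "ST", "00000")
dean = Person("Dr. Example", hq)
open_house = Event("Open House", hq)
# Updating hq updates both, because they share a single source.
```

This single-source structure is what makes the 'easily updated' benefit work: an address change is made once and flows to every page that references it.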
Second, [CLICK] I looked for single fields (not groups of fields) that were used across multiple content types to see if they could become part of the CMS’s tagging system.
This would allow these values to have a single source and use a controlled vocabulary
Common fields that needed a controlled vocabulary included:
Campus
School/college
Role/audience
Department
Academic program
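The controlled vocabularies behind these fields can be pictured as a dictionary of allowed values. This is a minimal sketch; the vocabulary values shown are placeholders, not the university's real lists.

```python
# Hypothetical sketch: each common field draws its values from a single
# controlled vocabulary, so tags have one source and one spelling.
VOCABULARIES = {
    "campus": {"north", "south"},            # placeholder values
    "role": {"student", "faculty", "staff"},
    "school": {"medicine", "engineering"},
}

def make_tag(field: str, value: str) -> tuple:
    """Return a validated (field, value) tag, rejecting off-vocabulary values."""
    if value not in VOCABULARIES.get(field, set()):
        raise ValueError(f"{value!r} is not in the {field!r} vocabulary")
    return (field, value)
```

Validating at tag-creation time is what keeps free-text variants ('Faculty', 'faculty member', 'profs') from fragmenting the taxonomy.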
And finally, I looked at all fields and added an attribute indicating whether the value for that field should come from an authoritative source or be manually entered. This allowed us to start working with the data team on getting that information imported into the CMS.
Now that I had objects and properties, I just needed to map out their relationships. [CLICK]
While this can get a bit complicated, it was very helpful in determining whether I needed to do more refactoring for related content types, and it allowed me to evaluate relationships that weren't definite (may refer to, may teach…) as places where I needed to make sure that the tagging functionality could tie those content types together.
For example, a testimonial may refer to a program. These are not related or parent/child content types (because they have no fixed relationship), but they need to be able to be associated together, so that was done by making sure that testimonial used the 'academic program' name tag. This tag wasn't part of the initial content inventory, but was added to create this association.
After diagramming the content types as an ontology, I realized that some tags should go on most, if not all, content types (tags like 'campus' and 'role'). This way, these tags could be used to create focused pages based on their value.
For example, if I created a Faculty page, I could display faculty-focused content from the newsrooms, the events feed, or bios.
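The Faculty-page example boils down to a tag filter across content types. This is a hypothetical sketch with invented content items, not actual CMS data.

```python
# Hypothetical sketch: global tags let a focused page pull matching
# content from several content types at once.
items = [
    {"type": "article", "title": "Grant awarded", "tags": {("role", "faculty")}},
    {"type": "event", "title": "Orientation", "tags": {("role", "student")}},
    {"type": "bio", "title": "A. Professor", "tags": {("role", "faculty")}},
]

def focused_page(items, field, value):
    """Collect every item, of any content type, carrying the given tag."""
    return [i for i in items if (field, value) in i["tags"]]

faculty_page = focused_page(items, "role", "faculty")
# pulls in the article and the bio, but not the student event
```

Because the filter ignores content type and looks only at tags, a new content type that carries the 'role' tag shows up on the focused page with no extra wiring.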
Having global tags has paid off big time: when we recently got the directive that we now have two top-level domains, one for each campus, and were going to be breaking our site into two, tagging ensured that content could still live centrally or be managed by a single office, but on the campus sites only the relevant content would show.
The ontology not only helped define content types, but it also helped to create the taxonomy that tied it all together.
Had there not been the ontology diagram, I'm not sure I would have been as thorough in the evaluation of our content.
As the final step, I needed to create a document that the developers could use to build these content types, and the ontology was overwhelming. [CLICK] For this, I created a specification document. It took all the information about a single content type and recorded it in a single place. Additional information was added around presentation of the content using wireframes. This document also made it easy to vet the content types with stakeholders without confusing them with an ontology.
In the end, this has helped us to solve the three issues:
[CLICK] Frequently used content was standardized into content types that could be displayed on multiple pages, but with a single source for the content (no more duplicate content)
[CLICK] Important, high value content like tuition could be pulled in from an authoritative source, keeping it up to date (no inaccurate content) and now there were multiple views of the content so only relevant information needed to be shown (not everything)
[CLICK] Using tagging, our CMS users could easily relate the content types together, creating more focused pages (no more hard-to-reuse content).
And as an added bonus, we were also able to pull the content types and tagging into our site search as facets, to help users filter their search results.
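The facet idea is essentially a count of search results per tag or content-type value. A hypothetical sketch with invented results:

```python
from collections import Counter

# Hypothetical sketch: counting search results per content type gives
# the numbers shown next to each facet in the filter sidebar.
results = [
    {"title": "Tuition FAQ", "content_type": "faq"},
    {"title": "Dr. A", "content_type": "bio"},
    {"title": "Dr. B", "content_type": "bio"},
]

def facet_counts(results, facet):
    """Count search results per value of a facet field."""
    return Counter(r[facet] for r in results)

print(facet_counts(results, "content_type"))
```

The same function works for any tag field ('campus', 'role'), which is why structured content types translate so directly into search facets.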
-------
Thank you all for taking the time to be here, especially on a Sunday afternoon; it is much appreciated.
The information and ideas shared at these summits are really great stuff that we can all use to solve real-world problems. I would like to thank Jorge and Ren, and everyone else who has presented over the years, for sharing their ideas; this event really does help to move our industry forward.