The document summarizes the results of an SEO analysis of 52 research networking sites. It finds that sites using their own institutional domain and custom or Profiles software scored highest in Google rankings, and that sites with many diverse incoming links from other domains performed better. The analysis provides recommendations for improving SEO, including establishing benchmarks, optimizing for Google, and encouraging other sites to link back. Overall, SEO performance across the sites was mixed.
This document discusses secrets to having a successful UCSF Profiles page based on analytics of over 1.2 million visits in the last year. The top factors that predict a popular profile are: 1) being popular, newsworthy, and well-funded; 2) being an active online communicator through social media and blogging; 3) including links to other professional websites; 4) including a bio and photo; and 5) adding slides and videos. Simply including awards, keywords, or education does not strongly correlate with more profile visits.
VIVO2015 - Leveraging Personalized Google Analytics for Greater RNS Engagement (Brian Turner)
Description of a new feature in the ORNG suite that shows researchers their profile page views, along with information about those visits, in a user-friendly dashboard.
10 simple ways UCSF Profiles has been used to win funding, find collaborators... (lesliey)
UCSF Profiles has been used in 10 ways to help researchers, clinicians, and the university, including:
1) It connects students and trainees with potential faculty mentors based on shared interests.
2) It saves staff and faculty time by allowing campus websites to automatically update information from researcher profiles.
3) Administrators use profile data to generate reports that recognize researchers' achievements and new publications in top journals.
Outline of the UCSF approach to Research Networking, which focuses on rapid iterations of adding new data sources and features to see what works, and abandon what doesn't work.
Growth Hacking 101 for Research Networking (for VIVO Implementation & Dev call) (anirvanchatterjee)
This document outlines steps for increasing traffic to a research networking site called UCSF Profiles using search engine optimization techniques. It begins by establishing baselines using web analytics to understand traffic patterns. It recommends focusing efforts on researcher profile pages rather than the home page. Next it provides actions to ensure search engines can access pages through the robots.txt file and sitemaps. Further steps include optimizing page titles, descriptions and URLs to attract searchers. The document concludes by recommending ways to build inbound links from other university websites and through APIs to establish reputation.
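The crawlability step described above, exposing profile pages to search engines through a sitemap, can be sketched in a few lines. This is a minimal illustration, not the actual UCSF Profiles implementation: the domain and researcher identifiers below are invented placeholders.

```python
# Minimal sketch: generate a sitemap.xml listing researcher profile
# pages so search engines can discover them. The base URL and profile
# IDs are hypothetical examples, not real UCSF Profiles data.
from xml.sax.saxutils import escape

BASE = "https://profiles.example.edu"  # placeholder domain

def build_sitemap(profile_ids):
    """Return sitemap XML with one <url> entry per profile page."""
    entries = "\n".join(
        f"  <url><loc>{escape(f'{BASE}/{pid}')}</loc></url>"
        for pid in profile_ids
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n"
        "</urlset>"
    )
```

The generated file would be referenced from robots.txt (e.g., a `Sitemap:` line) so crawlers can find every profile page without depending on internal links.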
UCSF Profiles is a campus resource that enables collaboration by identifying expertise. It provides public data on researchers that is syndicated across many UCSF websites and used for targeted emails. As an open source platform, it allows many to contribute additional applications. The site sees high traffic, with over 2,000 daily visits mostly from search engines like Google.
Deconstructing a Dashboard: Inside the UCSF Profiles Team’s Monthly Key Metrics (anirvanchatterjee)
The document summarizes the key metrics that the UCSF Profiles team tracks in a monthly dashboard to monitor usage and performance of the UCSF Profiles research networking platform. It describes the types of metrics tracked, such as website traffic sources, user engagement, customization rates, and site performance. The dashboard is shared with stakeholders each month along with commentary on trends, issues, and goals. The metrics provide insights into how users interact with the site and how to better enable research networking through the platform.
This document discusses Smart Subjects, a subject recommendation engine that takes user search queries as input and outputs a list of related library subjects. It is intended to provide application-independent subject recommendations that can be integrated into library search tools. The system works by harvesting institutional data like course descriptions and faculty publications to create text extracts of academic departments, which are then indexed and queried to crosswalk departments to library subjects. Potential applications include integrating recommendations into quick search and an OpenSearch interface. Future plans include expanding the subject indices and gauging interest in a community platform.
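The pipeline described above, scoring a query against per-department text extracts and crosswalking the best match to library subjects, can be sketched as a simple term-overlap ranker. The department texts and crosswalk table below are invented examples; the real Smart Subjects system draws on richer institutional data and indexing.

```python
# Hedged sketch of the Smart Subjects idea: rank departments by term
# overlap with a user query, then map top departments to library
# subjects. All data here is illustrative, not from the real system.
from collections import Counter

DEPT_TEXTS = {
    "Bioengineering": "medical devices imaging signal processing tissue",
    "Sociology": "social behavior communities survey inequality",
}
DEPT_TO_SUBJECTS = {
    "Bioengineering": ["Biomedical Engineering", "Medical Technology"],
    "Sociology": ["Social Sciences", "Demography"],
}

def recommend_subjects(query, top_n=1):
    """Return library subjects for the departments best matching the query."""
    q_terms = set(query.lower().split())
    scores = Counter()
    for dept, text in DEPT_TEXTS.items():
        scores[dept] = len(q_terms & set(text.split()))
    best = [d for d, s in scores.most_common(top_n) if s > 0]
    subjects = []
    for dept in best:
        subjects.extend(DEPT_TO_SUBJECTS[dept])
    return subjects
```

For example, a query like "medical imaging research" overlaps most with the Bioengineering extract, so its mapped subjects are returned; a production system would replace raw term overlap with a proper search index.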
This document provides guidance on finding and critically analyzing data about schools and communities. It discusses key sources of data like government agencies, non-profits, academic institutions, and the private sector. When finding data, it's important to consider topics that may be controversial, sampling techniques, and publication timeframes. The document outlines how to evaluate data sources and methodology, identify potential biases, and distinguish between correlations and causation. Specific data sources mentioned include the California Department of Education, California Healthy Kids Survey, School Accountability Report Cards, U.S. Census Bureau's American FactFinder, and the American Community Survey. Exercises are provided to have users find and analyze data for a particular school and neighborhood.
Access to Freely Available Journal Articles: Gold, Green, and Rogue Open Ac... (Jason Price, PhD)
A recent bibliometrics study found that 54% of 4.6 million scientific papers from peer-reviewed journals indexed in Scopus during the years 2011-2013 could be downloaded for free on the internet in April 2014 (Archambault et al., 2014). As time rolls on, authors and researchers are increasingly using more-or-less legal scholarly article sharing services to "take back the literature," or even just to access it more conveniently (Bohannon, 2016). The objective of this study was to evaluate a manageable sample of journal articles across the sciences, social sciences and humanities for their availability in gold, green and rogue open access forms, including ResearchGate and Sci-Hub. Attendees will gain a greater appreciation of the extent of open access availability through Google Scholar, Google and commercial discovery systems, and will be challenged to roll with the times by expanding the role of libraries in broadening access to the freely available literature.
Date: September 6th, 2017
Speaker: Jesse Chandler, PhD, is a survey researcher at Mathematica Policy Research and an Adjunct Faculty Associate at the Institute for Social Research at the University of Michigan.
Overview: Crowdsourcing has had a dramatic impact on the speed and scale at which scientific research can be conducted. Clinical scientists have particularly benefited from readily available research study participants and streamlined recruiting and payment systems afforded by Amazon Mechanical Turk (MTurk), a popular labor market for crowdsourcing workers. MTurk has been used in this capacity for more than five years. The popularity and novelty of the platform have spurred numerous methodological investigations, making it the most studied nonprobability sample available to researchers. This article summarizes what is known about MTurk sample composition and data quality with an emphasis on findings relevant to clinical psychological research. It then addresses methodological issues with using MTurk--many of which are common to other nonprobability samples but unfamiliar to clinical science researchers--and suggests concrete steps to avoid these issues or minimize their impact.
Software in the scientific literature: Problems with seeing, finding, and usi... (James Howison)
Software is increasingly crucial to scholarship, yet the visibility and usefulness of software in the scientific record is in question. Just as with data, the visibility of software in publications is related to incentives to share software in re-usable ways, and so promote efficient science. In this paper we examine software in publications through content analysis of a random sample of 90 biology articles. We develop a coding scheme to identify software “mentions,” and classify them according to their characteristics and ability to realize the functions of citations. Overall we find diverse and problematic practices: only between 31–43% of mentions involve formal citations; informal mentions are very common, even in high impact factor journals and across different kinds of software. Software is frequently inaccessible (15–29% of packages in any form; between 90–98% of specific versions; only between 24–40% provide source code). Cites to publications are particularly poor at providing version information, while informal mentions are particularly poor at providing crediting information. We provide recommendations to improve the practice of software citation, highlighting recent nascent efforts. Software plays an increasingly great role in scientific practice; it deserves a clear and useful place in scholarly communication.
Altmetrics: painting a broader picture of impact (Paul Groth)
Altmetrics aims to provide a broader view of scholarly impact by measuring online activity beyond traditional citations. This includes tracking the usage, mentions, views and sharing of different research artifacts like papers, preprints, slides and data across various online tools and environments. While still developing, altmetrics can help tell a more comprehensive story of a researcher's impact by considering things like workshop discussions, code usage, and public outreach in addition to bibliographic citations, which alone took 20 years to become an accepted measure. The goal is to map the network of how research spreads and is discussed online.
Identifying Twitter audiences: Who is tweeting about scientific papers? (Stefanie Haustein)
Haustein, S. & Costas, R. (2015). Identifying Twitter audiences: Who is tweeting about scientific papers?
Presentation at METRICS2015 ASIS&T SIG/MET Workshop
https://www.asist.org/SIG/SIGMET/
The Pattern of social media use and the impact on medical students' learning (SHU Learning & Teaching)
This document outlines a presentation on a study about the pattern of social media use among medical students in Saudi Arabia and its impact on learning. Some key findings of the study include:
- Nearly all medical students used some form of social media, with WhatsApp and YouTube being most popular. Social media was used mostly for entertainment and socializing.
- Male students tended to use YouTube and Facebook more, while female students used Instagram, Path, and Twitter more.
- Most students checked social media daily, including during lectures. Students with lower GPAs used social media more frequently during lectures.
- Reasons for social media use included staying up to date on news and socializing. Some perceived social media as distracting.
Mene Zua is a biomedical engineering student at The George Washington University. He has relevant research and clinical experience through projects at GWU and internships at medical centers. Currently, he is creating an eye-tracking system for patients with eye dystonia. Mene maintains a 3.0 GPA and holds leadership roles as president of the Black Men's Initiative and a residential advisor. His technical skills include MATLAB, C, Java, RStudio, and Microsoft Office.
Telling your research story with (alt)metrics (Paul Groth)
Presentation on the use of altmetrics to inform stories about research. Presented for Open Access week 2013 in Amsterdam. See http://uba.uva.nl/home/componenten/agenda-2/agenda-2/content/folder/lezingen/13/10/altmetrics.html
Progress ALERT - Leaderboards to improve CPR Skills (INSPIRE_Network)
1) The study aims to evaluate whether access to leaderboards improves CPR skills for high school students and in-hospital healthcare professionals compared to no access.
2) Participants will use CPR mannequins and be able to view their scores and others' on online leaderboards (intervention) or not view scores (control). Outcomes include practice frequency and quality of CPR techniques.
3) The study has begun recruiting sites and participants. Goals are to recruit more participants and locations, obtain additional funding, and conduct sub-studies at local sites.
Alternative Avenues of Discovery: Competition or Potential (Jason Price, PhD)
The document discusses alternative avenues of discovery for libraries, focusing on three emerging examples that correspond to the themes of reaching out, providing intuitive services, and gaining insights from analytics. The first example is Libhub via Zepheira, which uses linked data to extend library catalogs onto the web. The second is 1science Open Access Solutions, which expands discoverable content by making institutional repository publications freely available. The third is Yewno's inference engine, which supports discovery by revealing connections between concepts based on underlying scholarly content.
Progress ALERT - Sim-based Assessment Tools for General Pediatric Milestones (INSPIRE_Network)
Leah Mallory presented on the development of simulation-based assessment tools for general pediatrics milestones. Four milestones were identified as suitable for simulation-based assessment through a survey of program directors and experts, and separate groups were formed to identify or develop tools for each. The groups have conducted literature reviews and are in different stages of tool development or identification, with some requiring validation studies. The goals are to finalize study designs, obtain necessary approvals, and establish feasibility and validity of the tools across multiple sites in the next year.
How to Harness the Power of Google Analytics, Email Marketing & Vanity to Inc... (CTSI at UCSF)
40 minute presentation by Nooshin Latour (@nooshin) & Anirvan Chatterjee (@anirvan) at the UC Computing Services Conference (UCCSC 2014). Evolution of UCSF Profiles research networking system, early promotion at launch, growth/SEO, and engagement with targeted personalized data emails. Full description here: https://uccsc.ucsf.edu/node/101
This document provides an overview and introduction for new faculty attending a research orientation at Michigan State University. It discusses the importance and expectations of research at MSU, as well as resources and support available to faculty researchers. Key points include:
- MSU aims to have world-class research programs that address today's problems and advance knowledge. Research is important for education, outreach, and attracting top students and faculty.
- Faculty are expected to actively pursue research excellence and funding, publish in top journals, involve students, and use resources like the libraries and Office of Research and Graduate Studies for support.
- The document reviews compliance requirements and resources for topics like animal and human subjects research, as well as how to report
The Kitchen Is Closed: Main Menus, User Experience, & Competing Orders (Amanda Billy)
Regardless of your organization or industry, the main menu of your website is arguably one of the most important elements you will develop. As the roadmap to your site and one of your users’ primary tools for getting around, its importance cannot be overstated. However, balancing web strategy, usability and information architecture best practices, and the perpetual influx of requests and demands from your various campus partners can be challenging, if not harrowing, for a well-intentioned web manager or administrator. What links should you include? What should they be called? Which should you omit? How do you get started? This poster presentation will explore the navigation development process, including the incorporation of goals and analytics in your decision-making process and the most common headings and links your users will encounter as they explore other higher education websites. The goal: to stop taking orders and solidify a main menu that works for your site.
"Undergrad ecologists aren't learning data management" - ESA 2013Carly Strasser
Presentation for Ecological Society of America 2013 Meeting in Minneapolis, MN on 6 August 2013. Results published in Ecosphere doi: 10.1890/ES12-00139.1
VIVO is an open-source semantic web application and information model that enables discovery of research across disciplines at institutions. It harvests data from verified sources to create detailed profiles of faculty and researchers. The structured linked data in VIVO allows for relationships and connections between researchers, publications, grants, and more to be visualized. Libraries can play important roles in implementing and supporting VIVO through activities like outreach, training, ontology development, and technical support.
UCSF Profiles: Research Networking Usage at a Large Biomedical InstitutionCTSI at UCSF
UCSF invested in a research networking system called UCSF Profiles to enable collaboration and accelerate translational research. Twenty-six months after launch, UCSF Profiles receives over 2,000 visits per day driven by search engine optimization and links from campus websites. Analysis of usage data from Google Analytics shows that the majority of users are researchers finding individual profiles through search engines to find potential collaborators or information for projects. Power users who spend more time on the site are more likely to visit other areas beyond profiles.
Identification of Early Career Researchers: How Universities and Funding Orga...ORCID, Inc
Funding agencies, universities, and research institutes all face challenges of reliably identifying their researchers and monitoring outcomes over time. All researchers—and especially early career researchers seeking to establish their careers—need to be reliably connected to their research outputs, without the confusion common, changeable names creates. Graduate students and postdoctoral researchers supported by grants also have specific challenges: if they are not the PI, they are not included in grant information; they may not even know which grant(s) they are supported by; and as a result, the existing challenges of reliably tying publications to grant funding are even more problematic. The use of the unique, persistent ORCID identifier can help support outcomes tracking and evaluation.
In 2012, the U.S. National Institutes of Health Biomedical Research Workforce Working Group made recommendations that the NIH should take to support a sustainable biomedical research workforce in the U.S. In the course of its study, working group members were “frustrated and sometimes stymied” by the lack of quality, comprehensive data about biomedical researchers. In response, NIH has recommended the development of a simple, comprehensive tracking system for trainees, implemented a shared, voluntary researcher profile system called the Science Experts Network Curriculum Vitae (SciENcv), and encouraged the adoption of unique, persistent ORCID identifiers for researchers. Additionally, NIH has begun collecting data about individuals in graduate and undergraduate student project roles who are supported by NIH grants.
Research universities like Texas A&M are also responding by incorporating the ORCID identifier into their systems, enabling the improved identification, data collection, and career outcome tracking of students and postdoctoral researchers--and educating these early career researchers about the benefits they will receive from a unique, persistent research identifier. They are also beginning to link Electronic Theses and Dissertations (ETDs) to early career researchers' ORCID records.
ORCID is an independent, non-profit organization that provides an open registry of unique and persistent identifiers for researchers and scholars. ORCID collaborates with the community to integrate ORCID identifiers into research systems and workflows, improving data management and accuracy across systems. ORCID enables interoperability between research systems worldwide, ensuring that researchers are correctly and automatically linked to their contributions. Since its launch in October 2012, ORCID has seen rapid adoption by more than 670,000 researchers and 130+ member organizations.
From Webinar 4/23/14, https://orcid.org/content/identification-early-career-researchers-how-universities-and-funding-organizations-are-using
With the flourishing environment of platforms for sharing data, establishing an online profile and engaging in scientific discourse through alternative modes of publishing and participation, there are numerous potential benefits. However, while many scientists invest significant amounts of time in sharing their activities and opinions with friends and family the majority do not make use of the new opportunities to participate in the developing social web of science, despite the potential impact and influence on future careers. We now have many new ways to contribute to science outside of the classical publishing model. These include the ability to annotate and curate data, to “publish” in new ways on blogs and micropublishing sites, and many of these activities can be as part of a growing crowdsourcing network. Our efforts in this area are already being indexed and exposed on the internet via our publications, presentations and data and increasingly we are being quantified. This presentation will provide an overview of the various types of networking and collaborative sites available to scientists and ways to expose their scientific activities online. Many of these can ultimately contribute to the developing metrics of a scientist as identified in the new world of alternative metrics. Participation offers a great opportunity to develop a scientific profile within the community and may ultimately be very beneficial, especially to scientists early in their career.
The Emergence of Research Information Management (RIM) within US LibrariesOCLC
Presented by Rebecca Bryant, Maliaca Oxnam, and Paolo Mangiafico, at the CNI Spring 2017 Membership Meeting, 3 April 2017, Albuquerque, New Mexico (USA).
4.16.15 Slides, “Enhancing Early Career Researcher Profiles: VIVO & ORCID Int...DuraSpace
Hot Topics: The DuraSpace Community Webinar Series
Series 11: Integrating ORCID Persistent Identifiers with DSpace, Fedora and VIVO
Webinar 3: “Enhancing Early Career Researcher Profiles: VIVO & ORCID Integration”
April 16, 2015
Curated by Josh Brown, ORCID
Presented by: Simeon Warner, Library Information Systems, Cornell University, Jon Corson-Rikert, Head of Information Technology Services, Cornell University and Kristi Holmes, Director, Galter Health Sciences Library, Northwestern University
Practical applications for altmetrics in a changing metrics landscapeDigital Science
"Practical applications for altmetrics in a changing metrics landscape" - Sara Rouhi, Altmetric product specialist, and Anirvan Chatterjee, Director Data Strategy for CTSI at UCSF
This presentation summarizes several important educational websites. It introduces websites that contain information about institutions, organizations, NGOs and companies. Key websites discussed include www.ssrn.com/en/, www.acer.edu.au/, www.icssr.org, and www.nhrc.nic.in. For each website, brief descriptions of 1-3 sentences outline their purpose and content, such as providing research papers, supporting educational research, promoting social science research, and providing human rights education and guidelines. In closing, the presentation invites audience questions and thanks attendees for their time.
An open access resource portal for arthropod vectors and agricultural pathosy...Surya Saha
AgriVectors.org is a systems biology resource for vector biologists that aims to provide omics resources and databases to identify targets for interdiction molecules. It utilizes a distributed data schema to rapidly release genome assemblies and transcriptomes. Undergraduate students manually curate genes and pathways of interest from NCBI gene models. The site also provides web-based tools to visualize and analyze high-dimensional experimental data like proteomics and gene expression networks. The goal is to build an ecosystem of integrated resources and tools to study vector-pathogen-host systems important for agriculture.
Westnet CIO Meeting - Tucson, AZ 1-4-16David Ernst
The document discusses open textbooks and the Open Textbook Network's efforts to increase adoption of open textbooks by faculty. Some key points:
1) Open textbooks are free to students and can help address the rising costs of textbooks that negatively impact students' academic performance and financial stress.
2) Barriers to faculty adoption include lack of awareness of open textbooks and their quality.
3) The Open Textbook Network works with partner institutions to build expertise on open textbooks through workshops and training to increase adoption among faculty.
4) To date their efforts have engaged over 500 faculty, reviewed 380 open textbooks, and achieved a 40% adoption rate among participating faculty.
This talk "Web authoring as a pedagogic tool: an example from the biosciences" by Chris Willmott and Jane Wellens was given at the Pushing the Boundaries event in January 2006. The slides describe an activity in which second year undergraduates were asked to produce websites about various bioethical issues. This activity was also described in a paper Willmott CJR and Wellens J (2004) Teaching about bioethics through authoring of websites Journal of biological Education 39:27-31.
More recently we have actually replaced this task with an activity in which students produce videos on bioethical topics (see other slideshare presentations or a chronological list at http://lefthandedbiochemist.wordpress.com/talks/). These slides have recently been added here for completion - the site where they were previously available having gone off-line.
1. The SEO State of the Union
Anirvan Chatterjee • Brian Turner • Eric Meeks • Leslie Yuan
Clinical & Translational Science Institute
University of California, San Francisco
2.
3.
4.
5.
6.
7. Agenda
1. Why SEO works
2. How we tested SEO for 52 websites
3. The SEO leaderboard
4. Three things we learned from the results
5. How to boost your rankings
24. How search-driven users navigate
People search for names, e.g.:
• kristine yaffe
• lawrence fong ucsf
• eric vittinghoff
• aaron fields ucsf
• jack taunton
• aimee kao ucsf
• joe derisi
• abul abbas
27. Homepage vs. search engine
Visits that start off on
our home page…
Visits that go from search
engine to a profile…
28. Homepage vs. search engine
Visits that start off on
our home page…
• 68% look at 2+ pages
per visit
• n = 3,222 in July 2015
• 2,190 visits of 2+ pages
in July 2015
Visits that go from search
engine to a profile…
• 18% look at 2+ pages
in a visit
• n = 92,743 in July 2015
• 16,245 visits of 2+ pages
in July 2015
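The arithmetic behind these numbers is worth making explicit: even at a much lower engagement rate, search traffic supplies most of the engaged visits. A minimal sketch using the July 2015 figures from this slide:

```python
# July 2015 figures from the slide above.
homepage_visits = 3_222    # visits starting on the homepage
homepage_engaged = 2_190   # of those, visits of 2+ pages (~68%)

search_visits = 92_743     # visits landing on a profile from a search engine
search_engaged = 16_245    # of those, visits of 2+ pages (~18%)

total_engaged = homepage_engaged + search_engaged
search_share = search_engaged / total_engaged

print(f"Homepage engagement rate: {homepage_engaged / homepage_visits:.0%}")  # 68%
print(f"Search engagement rate:   {search_engaged / search_visits:.0%}")      # 18%
print(f"Share of engaged visits arriving via search: {search_share:.0%}")     # 88%
```

So roughly 88% of all multi-page visits start from a search engine, despite the far lower per-visit engagement rate.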
34. Inclusion criteria
We picked 52 research networking sites that are…
• Associated with a single institution
– exclude trade groups, collaborations, etc.
• Based in a majority English-language locale
– exclude France, Germany, etc.
• Accessible to the public and search engines
– exclude systems behind a firewall
• Running on a hostname on the default port 80
– exclude raw IPs like 54.213.177.247 or non-standard ports like hostname.edu:8080
35. Institutions
Albert Einstein College of Medicine
Arizona State University
Boston University
Case Western Reserve University
Clinical Translational Science Institute at Children's National
Cornell
Duke University
Georgia Regents University
Harvard University
Indiana University
Johns Hopkins University
Michigan State University
Montana State University
Northern Arizona University
Northwestern University
Ohio State University
Oregon Health & Science University
Penn State
Scripps Research Institute
Stanford University
Temple University
Texas A&M
Thomas Jefferson University
University of Arizona
University of California, Davis
University of California, San Diego
University of California, San Francisco
University of Colorado Boulder
University of Colorado Denver
University of Florida
University of Hawai‘i
University of Illinois - Chicago
University of Iowa
University of Maryland-Baltimore
University of Massachusetts
University of Melbourne
University of Miami
University of Minnesota
University of Montana
University of Nebraska
University of Nevada, Las Vegas
University of Nevada, Reno
University of Pennsylvania
University of Rochester
University of South Africa
University of Southern California
University of Utah
Wake Forest
Washington State University
Wayne State University
Western Michigan University
36. Platforms
• 21 SciVal Experts
• 13 VIVO
• 11 Profiles RNS
• 4 Elsevier Pure
• 3 Home Grown
40. Methodology, step 3 of 4 cont’d
How we picked institution names
1. First, look at the domain name of the homepage for the
institution
– profiles.ucsf.edu → www.ucsf.edu → ucsf
– vivo.experts.scival.com/indiana → www.iu.edu → iu
2. If the site’s not on the main domain, and the RNS URL
includes a common variation of the name, use that
– vivo.experts.scival.com/indiana → indiana
3. If #1 and #2 give different results, use both (with OR)
– indiana OR iu
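The three steps above can be sketched as a small function. This is an illustrative sketch, not the actual survey script; the variant lookup table (`KNOWN_VARIANTS`) is a hypothetical stand-in for the manual step 2.

```python
from urllib.parse import urlparse

# Hypothetical lookup for step 2: URL path fragment -> common name variant.
KNOWN_VARIANTS = {"indiana": "indiana"}

def institution_terms(rns_url, homepage_url):
    """Return the institution search term(s) for a "First Last Institution" query."""
    rns_host = urlparse(rns_url).hostname
    home_host = urlparse(homepage_url).hostname
    home_root = home_host[4:] if home_host.startswith("www.") else home_host
    # Step 1: take the registrable label from the institution homepage,
    # e.g. www.ucsf.edu -> ucsf, www.iu.edu -> iu
    names = {home_host.split(".")[-2]}
    # Step 2: if the RNS is NOT on the institution's own domain, look for a
    # known name variant in the RNS URL itself
    if not rns_host.endswith(home_root):
        for fragment, variant in KNOWN_VARIANTS.items():
            if fragment in rns_url:
                names.add(variant)
    # Step 3: if steps 1 and 2 disagree, combine the names with OR
    return " OR ".join(sorted(names))

print(institution_terms("https://profiles.ucsf.edu", "https://www.ucsf.edu"))
# -> ucsf
print(institution_terms("https://vivo.experts.scival.com/indiana", "https://www.iu.edu"))
# -> indiana OR iu
```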
42. Methodology, step 4 of 4 cont’d:
Why only consider the top 3 results?
The first 3 organic search results
made up 71% of all non-mobile
Google organic clicks.
Source:
“Google Organic Click-Through Rates in 2014”
Philip Petrescu, Advanced Web Ranking
https://moz.com/blog/google-organic-click-through-rates-in-2014
43. Methodology
• Looked at search rankings for 24,583 profile
pages across 52 sites
• Used the RankTank keyword rank checker tool
to automate the process
– http://www.ranktank.org/
44. In summary:
What % of a site’s profiles appear
in the top 3 Google results for
“First Last Institution”?
(e.g. “brian turner ucsf”)
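The summary score can be sketched as a small function. The ranks below are made-up illustration data, not results from the survey:

```python
def seo_score(ranks, top_n=3):
    """Share of a site's profiles whose page appears in the top `top_n`
    Google results for its "first last institution" query.
    `ranks` maps each query to its organic rank (None if never found)."""
    in_top = sum(1 for r in ranks.values() if r is not None and r <= top_n)
    return in_top / len(ranks)

# Hypothetical per-profile rank data for one site:
ranks = {
    "brian turner ucsf": 1,
    "leslie yuan ucsf": 2,
    "eric meeks ucsf": 5,   # found, but below the top 3
    "jane doe ucsf": None,  # profile page not found at all
}
print(f"{seo_score(ranks):.0%}")  # -> 50%
```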
51. #1. Use your own domain
Institutional domain? (e.g. vivo.cornell.edu)
• average score = 49%
Unrelated domain? (e.g. experts.scival.com/asu)
• average score = 21%
58. #3. Get incoming links:
search engine ranking factors
Links to the page
Links to the
subdomain
or domain
59. #3. Get incoming links:
linking root domains
A linking root domain is a domain under a
public suffix that includes links to your site.
• *.cnn.com CNN
• *.ox.ac.uk Oxford University
• *.anoka.k12.mn.us Anoka School District, Minn.
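The grouping of incoming links into linking root domains can be sketched as follows. This is a deliberately simplified illustration: a real implementation would consult the full Public Suffix List (publicsuffix.org) rather than the tiny hardcoded set here, and the link URLs are invented.

```python
from urllib.parse import urlparse

# Tiny stand-in for the Public Suffix List, for illustration only.
PUBLIC_SUFFIXES = ("ac.uk", "com", "edu", "org", "gov")

def root_domain(url):
    """Reduce a linking URL to its root domain: one label past the public suffix."""
    host = urlparse(url).hostname
    for suffix in sorted(PUBLIC_SUFFIXES, key=len, reverse=True):
        if host.endswith("." + suffix):
            prefix = host[: -len(suffix) - 1]
            return prefix.split(".")[-1] + "." + suffix
    return host

links = [
    "https://www.cnn.com/story",
    "https://money.cnn.com/other-story",
    "https://www.ox.ac.uk/page",
]
print({root_domain(u) for u in links})
# -> {'cnn.com', 'ox.ac.uk'}: 3 links, but only 2 linking root domains
```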
60. #3. Get incoming links:
the top 3 sites
1. findanexpert.unimelb.edu.au (95%)
2. profiles.umassmed.edu (91%)
3. profiles.ucsf.edu (88%)
61. #3. Get incoming links
findanexpert.unimelb.edu.au
has 488 linking root domains:
• newscientist.com
• f1000.com
• anl.gov
• duraspace.org
• electionwatch.edu.au
• and 483 more root domains…
62. #3. Get incoming links
profiles.umassmed.edu
has 249 linking root domains:
• en.wikipedia.org
• grants.nih.gov
• theguardian.com
• bloomberg.com
• nih.gov
• and 244 more root domains…
63. #3. Get incoming links
profiles.ucsf.edu
has 858 linking root domains:
• sourceforge.net
• harvard.edu
• ucsf.edu
• universityofcalifornia.edu
• ucsfhealth.org
• and 853 more root domains…
65. 5. HOW TO BOOST YOUR RANKINGS
Photo: Ryan McFarland
66. 1. Be worthy of love
• Most people care about people, not a generic
information-finding site
• Make profile pages beautiful and chock-full of
information, so people will want to link to them
67. 2. Establish benchmarks
• Install Google Analytics on every page
• Learn how to use it
– read Web Analytics 2.0 by Avinash Kaushik
68. 3. Get good with Google
• Add a sitemap.xml (sitemaps.org)
• Register on Google Webmaster Tools to:
• register your sitemap
• catch indexing errors early
• link to your Google Analytics account
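A minimal sketch of generating a sitemaps.org-style sitemap.xml with Python's standard library; the profile URLs are made up for illustration:

```python
from xml.etree.ElementTree import Element, SubElement, tostring

def build_sitemap(urls):
    """Build a minimal sitemap.xml (sitemaps.org protocol) as a string."""
    urlset = Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for url in urls:
        loc = SubElement(SubElement(urlset, "url"), "loc")
        loc.text = url
    return tostring(urlset, encoding="unicode")

sitemap = build_sitemap([
    "https://profiles.example.edu/jane.doe",
    "https://profiles.example.edu/john.roe",
])
print(sitemap)
```

The resulting file is what you would register in Google's webmaster tooling so the crawler discovers every profile page.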
69. 4. Look good with Google
<title> tag
<meta name="description">
Schema.org people metadata
URL
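As an illustration of the Schema.org people metadata item, here is the kind of Person record a profile page might embed as JSON-LD; every name and URL below is invented:

```python
import json

# Hypothetical schema.org Person metadata for a profile page.
person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "Professor of Medicine",
    "affiliation": {"@type": "CollegeOrUniversity", "name": "Example University"},
    "url": "https://profiles.example.edu/jane.doe",
    "image": "https://profiles.example.edu/jane.doe/photo.jpg",
}

# This JSON would go inside a <script type="application/ld+json"> tag
# in the page's <head>, alongside the <title> and meta description.
print(json.dumps(person, indent=2))
```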
70. Virtuous cycle
1. Some people link to you
2. You show up on Google
3. More people see you, and link to you
4. You do even better on Google
71. 5. Show the world you’re worthy of love
• Get campus sites to link to your homepage
as a trusted campus resource
• Get campus sites to link to individual profiles
from departmental profiles, news stories,
directory, etc.
• Encourage reuse of your data via APIs, and ask
for a link back as attribution
72. If that works…
• Some researchers will link to their profile pages
on their own sites
• Some blogs and social media will link to your
profile pages as authoritative sources
• Some departments may link to your profiles
because your data is more current than theirs
My fellow RNS implementors — Some of us are part of the VIVO community, others from the Profiles community, yet others are Elsevier customers. Some of us even built our own homegrown systems. But over half a decade into our work, we’ve come together, putting aside our differences, to help advance the state of research networking for all.
[Public domain image : https://commons.wikimedia.org/wiki/File:2011_State_of_the_Union_fisheye.jpg]
When we began our journey together, many of our leaders believed that “if you build it, they will come.” But time has shown that our users are fickle, that they will ignore our repeated pleas to visit our website, leaving our sites empty, and sadly bereft of users.
[Public domain image: http://pixabay.com/en/grandstand-audience-sit-chairs-334476/]
And what’s up with users, anyway? We built them such powerful tools, which they keep choosing to ignore…
[image: NIH Project Reporter screenshot]
Instead, they turn to Google — leaving us a choice: will we go where our users are, or will we wait for them to someday abandon Google and return to the fold?
Because we know they’re going to come back to interfaces that look like this, right?
[image: NIH Project Reporter screenshot]
Over the next half hour, we’re going to talk about five things.
First, we’re going to dive into search engine optimization, or SEO, how it works, and how it helped us grow usage of our system by an order of magnitude.
Second, we’re going to describe how we put together the first systematic survey of search engine optimization in the research networking space.
Third, we’re going to release the results of our survey looking at 52 different universities. We’re going to name names — and if you stick around, you’ll see how your institution ranks compared to everybody else.
Fourth, we’re going to look at three things we learned from these results — what separates the winners and the losers?
And finally, we’re going to share 5 things you can do today to help improve your site’s search engine rankings.
Any questions? Let’s begin!
First: what search engine optimization is, how it works for research networking sites, and why it matters.
[Public domain image: https://pixabay.com/en/trombone-day-ulm-human-group-51221/]
I want you to think back to 2010…
[Photo: Duracell promotional image, via http://inhabitat.com/leds-light-up-new-years-eve-2010-in-times-square-nyc/]
Barack Obama was President…
Katy Perry was on the charts…
A new gadget called the iPad hit the stores…
And a little university in San Francisco was about to launch its first research network system
When the site was launched, our team went all in. We had a high-ranking official email every single member of the faculty. We sent postcards. We were giving away iPads. (We tried to book Katy Perry for the launch, but she was busy that day.) It was the biggest on-campus promotion we had ever done, and it was all coordinated to strike in September 2010.
So in October 2010, a month after this hard-core promotion effort, we had 5,000 visits from on and off campus sources.
[Next]
By July 2015, that number had grown about 19-fold, to 98,000 visits per month
[Next]
So where did all this traffic come from?
So here’s that growth in visits, plotted out over the course of 5 years
And here it is, split up.
That’s a stacked area graph.
On the bottom is traffic from other websites, from bookmarks, from marketing campaigns, etc.
And on the top, is search engine traffic.
[Pause and let folks take that in]
When we launched our website back in 2010, we had this idea that…
[next step]
Users would look at the search options
[next step]
Type in a keyword
[next step]
And then hit the search button
And then the search results come up, and you see a list of people
…you see a list of people…
[next]
And you click on one of them
…and then you land on that researcher’s profile page. And as you look through their profile, you go, “hmm, OK, this is someone I’d love to collaborate with!”
And then you talk. And soon, there’s a beautiful research collaboration forming, just because you knew how to search for people using a research networking system
[Public domain images by ClkerFreeVectorImages https://pixabay.com/en/stick-male-figure-looking-right-29937/ https://pixabay.com/en/stick-figure-face-look-right-30109/]
So that was our original expectation —that we can educate our users to come to our site, to run good queries, evaluate options, and make the perfect connection
[Creative Commons noncommercial no-derivatives photo: https://www.flickr.com/photos/josephwuorigami/4198854549/]
But that was just theory. In reality, at UCSF, most of our traffic comes through search engines. And that SEO traffic is kind of dumb and messy, but it’s also awesome, and totally blows away our original expectations.
[Creative Commons photo: https://flic.kr/p/6cRASC]
The way SEO works, at least at UCSF, is that searchers will typically search Google for the name of a specific researcher
Because of our search engine optimization work, UCSF Profiles pages often rank high in the results, so users click that link
The user arrives on the page, looks around — and more often than not, they leave immediately thereafter, without doing any of the deep interaction or reading we had in mind.
So in current reality, we do have two groups.
The first group is smart and thoughtful, they come to our homepage, they run smart searches, they evaluate the data, etc.
The second group is a drive-by user. They randomly pop in via Google, take a look around, and usually leave right afterwards
And you see the differences when we look at the numbers. For the first group, 68%, 2 out of 3 of them, look at more than one page when they use the site. While for the second group, only 18% get beyond the first page that they look at. Which kind of makes sense. They’ve come from a site like Google, they just wanted to know about someone, maybe they glance at the page, and then leave. The first group is obviously way more hard core.
But things look different when we consider the volume of traffic from the two sources. In July, there were only three thousand people in the first group, so we had about 2,000 of those users who went two or more pages into the site.
But in the second group, n is close to 100,000 visits a month, so 18% of that is huge, 16,000 people a month. Even when it comes to engaged users, most of our engaged users come out of the pool of drive-by SEO users.
How many times do you hear “Google it!” during your day?
USAGE is why SEO matters - Everyone goes to a search engine for everything.
SEO has really helped drive traffic to UCSF Profiles over the years…
And we think we’re doing pretty great…
…there’s also a whole community of RNS implementors who have been investing in search engine optimization, and seeing the rewards.
How we tested SEO for 52 websites
So we decided that we needed to evaluate how we—meaning the larger research networking community—are actually doing on search engine optimization
[Public domain image: https://pixabay.com/en/ruler-dimension-measure-684005/]
We looked at 52 public English-language research networking sites run by a single university or organization, featuring profiles of its own people.
Here are the 52 institutions we looked at.
If you’re from one of these institutions, could you raise your hands?
These 52 institutions represent a wide range of platforms: VIVO, Profiles, SciVal Experts, Elsevier Pure, and even some homegrown systems
First, for every single site, we started off by getting a list of every profile on the system, or as many as we could get.
In some cases, we got the list via R2R, which is based on Dave Eichmann’s CTSAsearch at the University of Iowa.
In other cases, we were able to look at a sitemap.xml file
In other cases, we ran a search on the site, to get back a list of every person in a system, and just scraped that list
No matter the mechanism, we ended up with a list of names and URLs for pretty much every single profile in the system
Second, we went through and picked out 500 random users from every single system.
In cases where there were fewer than 500 total profiles in the system, we just selected all of them.
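Steps one and two can be sketched as a small script. This is a hypothetical sketch, assuming the site publishes a standard sitemap.xml; the function names are ours, and the seed is just there to make a sample reproducible:

```python
import random
import xml.etree.ElementTree as ET

# Standard sitemap namespace, per the sitemaps.org protocol
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def profile_urls_from_sitemap(sitemap_xml):
    """Extract every profile URL listed in a sitemap.xml document."""
    root = ET.fromstring(sitemap_xml)
    return [loc.text.strip() for loc in root.iter(SITEMAP_NS + "loc")]

def sample_profiles(urls, n=500, seed=42):
    """Pick up to n random profiles; if the site has fewer, take them all."""
    rng = random.Random(seed)
    return list(urls) if len(urls) <= n else rng.sample(urls, n)
```

For sites without a sitemap, the same list would come from R2R or from scraping an everyone-in-the-system search result, as described above.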
Third, for each of the names, we basically searched Google for the name of the person, and then the name of that person’s institution
We included the institution to be extra clear about which person we were looking for. Including the institution name is a common search behavior, according to our web analytics data for UCSF Profiles.
For the name of the institution, we used a common short form. So for example, we’d say “John Doe UCSF,” not “John Doe University of California San Francisco”
If you’re interested, here are the gory details for how we actually determined the short form of the name of each institution.
But this is boring, so I’m going to skip ahead…
And finally, for every single name we tested, we looked at the Google results to see if the profile showed up among the first 3 search results
There are lots of ways to measure SEO success. We picked whether or not a profile page shows up among the first 3 organic — or non-advertising — search results.
According to a 2014 analysis by Advanced Web Ranking, if a desktop Google user was going to click on a normal organic (or unpaid) link, 71% of the time they'd do so on one of the first 3 results. (And according to their latest numbers, that's even more pronounced on mobile devices.)
We checked the search rankings of almost 25,000 profile pages across 52 different institutions. We could have done that by hand, but it would have been incredibly slow and difficult, and Google may have blocked us. So we used a tool called RankTank keyword rank checker to do the bulk checking.
We’re ranking sites by what percent of a site’s profiles show up in the top 3 results on Google when you search for a person’s name and their institution
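The scoring itself boils down to a couple of lines. This is a hypothetical sketch that assumes the ranked organic result URLs for each query have already been fetched (we used RankTank for the actual bulk lookups); the function names are ours:

```python
def in_top3(profile_url, organic_results):
    """True if the profile appears among the first 3 organic results."""
    return profile_url in organic_results[:3]

def site_score(profiles):
    """profiles: list of (profile_url, organic_results) pairs, one per
    tested name. Returns the percent of profiles found in Google's top 3."""
    hits = sum(1 for url, results in profiles if in_top3(url, results))
    return 100.0 * hits / len(profiles)
```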
And after doing all that work, we finally have some results to share…
And here are the winners…
In first place is the University of Melbourne, then UMass, and UCSF, Cornell, Denver, BU, Harvard, Georgia Regents University, University of Minnesota, and Penn State
Each one of these schools had their profile pages come up in the top 3 results over 60% of the time.
[Hoot and holler! Encourage the audience to clap. If anyone from the institution’s here, ask them to stand up. This is a moment for over-the-top silliness.]
Next up, we have the folks doing a pretty good job on the SEO front. Is anyone here from these schools? Give them a round of applause!
Is anyone here from the third group?
[Make a joke about them being brave, or chicken, as the case may be]
And in last place for Google discoverability… Does anyone notice a pattern here? [wait for someone to mention SciVal Experts] SciVal Experts seems to have this category almost locked up — but it’s more complicated than that, and I’m going to explain why in a second
OK, so we have the rankings, but what can we learn from them?
[Public domain photo by HamiJeezy @ Pixabay: https://pixabay.com/en/school-chalkboard-pear-hand-colored-172345/]
First, the single most important takeaway from this exercise is that if you’re implementing an RNS for an institution, you should always put it under your own domain name, not the domain name of your vendor. Systems that used their own domain name had scores more than double those of systems that used some other name. Using your institution’s domain is a strong signal to Google that the contents are closely related to that institution.
Second, we can look at the top software. When we look at the top 10 sites, each of the platforms we looked at shows up at least once. Profiles shows up 6 times, and then 1 slot each for Custom, Vivo, Pure, and SciVal Experts. So while Profiles is obviously a really strong option, it’s actually possible to do pretty well with any one of the packages.
And here are the platforms, broken out by average score. Folks who used either Profiles RNS or a custom system did way better than everyone else, on average.
Look at the bottom of the list. SciVal Experts looks pretty bad, huh?
But here’s what happens when we look at the average score, based on your software and the kind of domain name you use. I see three things here.
First, institutions that use SciVal, Profiles, or a custom system on their own domain name did really well on average.
But then I look at the top and bottom of the list — and SciVal’s listed in both places. If you’re using SciVal with your own domain name, that’s about as good as it gets. But if you’re using their domain name, it’s about as bad as it gets.
And finally, I have to admit, I’m a little confused by VIVO. I don’t understand why, on average, VIVO installations do worse than every other platform. If you have an idea, maybe we can talk about it during the Q&A.
The third big takeaway from the data is the importance of incoming links. A lot of us here are software people, and this is veering a little bit toward the realm of user engagement and product marketing, but this is incredibly important, and I’m going to show you why.
Have any of you heard of PageRank? That’s the name for the original algorithm powering Google.
Basically, the more links you get from other sites — and particularly other “important” sites — the more “important” you are.
PageRank was critical to the original Google algorithm.
[Public domain image: https://en.wikipedia.org/wiki/File:PageRanks-Example.svg]
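That intuition, that a page matters more when important pages link to it, can be shown with a toy power-iteration sketch. This is an illustration only, not Google's actual implementation:

```python
def pagerank(links, damping=0.85, iters=50):
    """Toy PageRank. links: {page: [pages it links to]}.
    Returns {page: score}; scores sum to 1."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        # Every page keeps a small baseline of importance...
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:  # ...and passes the rest along its outgoing links
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank
```

Run it on a tiny link graph and the page with the most (and best-connected) incoming links comes out on top, which is exactly the dynamic driving the incoming-links takeaway below.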
The Google algorithm has gotten way more complex over the years. But one of the best guides to reverse-engineer how Google’s algorithm works is the Moz search engine ranking factors document, which describes how 122 different factors seem to correlate with Google search rankings.
Moz looked at 122 different factors, and here are the top factors most predictive of high search rankings, broken out by category.
When we look at the top factors, two categories stand out.
All those factors labeled in light yellow are all about the volume and diversity of links you’re getting to a given page.
And the factors labeled in blue are about the volume and diversity of links you’re getting to other web content on your whole website, or across the rest of your domain.
Out of the 122 factors, we’re going to look at one in particular: linking root domains.
A linking root domain is a domain under a public suffix (like .edu for educational sites, or .ma.us for sites in Massachusetts) that includes one or more links to your site.
So if my blog has 100 links from cnn.com, and one link from a harvard.edu site, that’s two linking root domains.
Does that make sense?
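Counting linking root domains can be sketched like this. Note this is a toy illustration: correctly collapsing a host to its registrable domain requires the full Public Suffix List (for example via the tldextract package); the tiny suffix set here is just for demonstration:

```python
from urllib.parse import urlparse

# Tiny stand-in for the Public Suffix List; real code should use the
# full list rather than this hand-picked sample.
TWO_LABEL_SUFFIXES = {"ma.us", "co.uk", "com.au"}

def root_domain(url):
    """Collapse a URL to its registrable ("linking root") domain."""
    host = urlparse(url).hostname.lower()
    parts = host.split(".")
    if ".".join(parts[-2:]) in TWO_LABEL_SUFFIXES:
        return ".".join(parts[-3:])
    return ".".join(parts[-2:])

def linking_root_domains(incoming_link_urls):
    """Count distinct root domains among a list of incoming-link URLs."""
    return len({root_domain(u) for u in incoming_link_urls})
```

Feeding it the example above, 100 cnn.com links plus one harvard.edu link, yields 2.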
Here are our top 3 sites. We started digging into how many linking root domains each of the top sites got…
The University of Melbourne’s research profiling site has links from 488 different domains
The University of Massachusetts Medical School’s Profiles installation has incoming links from 249 sites
And UCSF’s installation of Profiles has links from 858 different sites
Here’s what it looks like, plotted out.
On the x axis is the number of linking root domains, or basically, the number of distinct sites linking to a website
On the y axis is the likelihood that its profiles show up in the top 3 search results when people search for them
We see a very clear correlation between how many diverse sites are linking to you, and how well you do in search rankings
So with all that in mind, let’s dive into how we can get people to link to our sites and click our links on Google
[Photo used under Creative Commons Attribution 2.0: https://www.flickr.com/photos/zieak/3360293485]
First, make sure your content is worthy of love. Your home page is important, but for most sites, people will be way more interested in the individual profile pages. If your content isn’t useful, then nobody will want to link to it. At UCSF, we include publications, photos, grants, bios, Twitter handles, etc. for researchers. And we work hard to make it as easy to read and browse as possible.
Next you have to have some kind of analytics package set up, so you know how you’re doing. We recommend Google Analytics.
Third, make sure your site is getting properly indexed by Google. Use sitemaps, and set up Google Webmaster Tools to see if there are any hiccups in how your site’s seen by Google.
Fourth, make sure your results look good on Google. And you can control almost every part of this just by tweaking your HTML.
[next] The main title should be clean, and include the person’s name
[next] if you have a clean, authoritative-looking URL, more people are likely to trust it and link to it
[next] you can add extra job title metadata with Schema.org
[next] and you can add a short description of the page. In this case, we always say “Name’s profile, publications, research topic, and co-authors”
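Putting those HTML tweaks together, here is a sketch of how a profile page's head tags might be generated. The title wording and URL are illustrative, the JSON-LD form of Schema.org markup is one of several valid options, and real code would also HTML-escape the values:

```python
import json

def profile_head_html(name, job_title, page_url):
    """Build the <head> tags that shape how a profile looks on Google:
    a clean title, a meta description, and Schema.org person metadata."""
    description = (f"{name}'s profile, publications, research topic, "
                   "and co-authors")
    person = {
        "@context": "http://schema.org",
        "@type": "Person",
        "name": name,
        "jobTitle": job_title,
        "url": page_url,
    }
    return "\n".join([
        f"<title>{name} | UCSF Profiles</title>",
        f'<meta name="description" content="{description}">',
        '<script type="application/ld+json">',
        json.dumps(person, indent=2),
        "</script>",
    ])
```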
There’s a virtuous cycle here. You need to start getting people to link to you. If you do that, more people will see your site, add links of their own, and you’ll do better on Google.
And if you keep doing this, your SEO performance just keeps increasing.
But you need a way to kickstart this process. And you do that by priming the pump on stage 1.
Start off by maximizing links to your site from your own campus — because if your friends won’t link to you, nobody else will. And being linked to by folks on your own campus is a signal to Google that you’re legit.
The way you do this will vary at each institution, but at UCSF, we worked with a variety of departments and campus-wide websites to make sure they had a link to our RNS on their departmental websites.
But we went deeper, building in links to individual profile pages. For example, our campus-wide people directory now always includes links back to that person’s profile page.
And every time UCSF.edu publishes a news story about someone, they also link back to their profile page. There’s no silver bullet, but the more of this you do, the stronger signal it is to Google that this is valid and relevant content — otherwise they wouldn’t be linking to you.
If that works, you may start seeing signs of early traction…
All of these are further signals to Google that your pages are relevant, that they’re interesting, that they’re worthy of love
The research networking community has been at this for years, and I’m pleased to report that the state of the RNS SEO union is strong. Our research shows that a range of research networking sites have figured out how to make SEO work for them, often successfully pivoting away from their original plans to make it succeed.
But the state of the union is also weak. Looking across 52 sites, far too many of them are being run on the assumption that just because you build it, they will come.
Some of us are paying good money for sites that fundamentally don’t work, that fundamentally aren’t meeting people’s needs because they’re simply not discoverable, and that’s not OK.
And in the end, what matters right now isn’t just what we’ve done, but how we help each other as a community. We hope you’ll congratulate our winners, and share best practices, so we can come back next year with better scores for everyone.
Thank you.
We’re sharing the results of this talk online, and we’d really appreciate it if you’d share it with your teams.
So can everyone pull out your phone, and go to bit.ly/rnsseo