This presentation introduces RAMP (the Repository Analytics and Metrics Portal), a prototype web service that accurately counts file downloads from institutional repositories (IR). The slides begin with the problems associated with commonly applied web analytics methods, page tagging and log file analysis, and describe how reporting from four IRs improved dramatically through the use of RAMP. Research conducted for this study was funded by IMLS, and partners include Montana State University, OCLC Research, the University of New Mexico, and the Association of Research Libraries (ARL). The presentation was given at the ARL Assessment Forum at the American Library Association 2017 Midwinter conference in Atlanta, GA.
Walk Before You Run: Prerequisites to Linked Data (Kenning Arlitsch)
Presentation on April 23, 2015 at the Amigos Library Services online conference: "Linked Data & RDF: New Frontiers in Metadata and Access"
Covers traditional SEO and Semantic Web Optimization, including Semantic Web Identity and a Schema.org project at Montana State University Library.
Shows the use of Excel and Esri ArcGIS Desktop 10.1 to make statistics reports from Innovative Interfaces circulation data. Originally presented at IUG 2014 as part of panel: Slinging statistics and dicing data in the public library.
Data Stories: Using Narratives to Reflect on a Data Purchase Pilot Program (NASIG)
Anita Foster and Gene R. Springs, presenters
The Ohio State University Libraries, driven by campus demand, developed and implemented a data resource purchase pilot program that took place over one fiscal year. Having previously prioritized only small-scale purchases of subject-related data resources, this initiative included large data resources, most of which can meet the research and teaching needs of a variety of academic disciplines. Beginning the pilot with very few criteria for selection and potential acquisition, the Collections Strategist and Electronic Resources Officer encountered various challenges along the way, each requiring additional exploration, research, and eventual resolution. As the pilot program proceeded, other criteria emerged as important considerations when examining data resources, particularly for content and licensing.
To best develop an understanding of what was learned over the year of this pilot program, the Collections Strategist and Electronic Resources Officer collaborated in writing "data stories," or narratives about each of the data resource options investigated for acquisition. Each narrative is structured similarly, from the requestor and initial stated need through the end result. Any pertinent details regarding content, access, or licensing were incorporated to complete the narratives. The data stories will be further analyzed to track commonalities among both the successful and unsuccessful acquisitions, with the proposed outcome of developing tested criteria for future acquisition of data resources.
Library review: improving back-of-house processes through richer integrations (Talis)
Library review: improving back-of-house processes through richer integrations (Richard Cross, Nottingham Trent University)
How far can Talis develop back-of-house integrations joining Aspire to other local library and information systems, and how much integration will customers need to develop for themselves? This session presents an outline of the technical developments being planned at Nottingham Trent University to integrate Review data in Aspire with other sources of library data and intelligence, to improve informed, process-driven acquisitions decision-making. Does our thinking make sense? Does it sound technically feasible? Is this work that customers should expect Talis to deliver for us, or will customised acquisitions integrations be something for customers to self-manage?
As libraries move to become centers of digital collections, maintaining information on the usage of these collections is ever more critical. It's also essential to be able to maintain common measures across heterogeneous collections, in order to be able to effectively analyze how the library's collection dollars are being spent. The Project COUNTER Code of Practice and the SUSHI protocol aid in this work. This session will explore the newly-published Release 4 of the COUNTER Code of Practice for e-Resources and highlight its use in conjunction with the SUSHI (Standardized Usage Statistics Harvesting Initiative) protocol in an active library environment.
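The harvest-and-aggregate workflow that COUNTER and SUSHI standardize can be sketched in miniature. The payload shape below is invented for illustration only; the actual Release 4 SUSHI exchange is SOAP/XML and uses different element names:

```python
import json

# Hypothetical JSON payload loosely modeled on a SUSHI-harvested COUNTER
# report; real Release 4 reports are SOAP/XML with different field names.
payload = """
{
  "report_items": [
    {"title": "Journal A", "metric": "ft_total", "period": "2012-01", "count": 120},
    {"title": "Journal A", "metric": "ft_total", "period": "2012-02", "count": 95},
    {"title": "Journal B", "metric": "ft_total", "period": "2012-01", "count": 40}
  ]
}
"""

def totals_by_title(raw):
    """Aggregate monthly full-text counts into one comparable figure per title."""
    totals = {}
    for item in json.loads(raw)["report_items"]:
        totals[item["title"]] = totals.get(item["title"], 0) + item["count"]
    return totals

print(totals_by_title(payload))  # {'Journal A': 215, 'Journal B': 40}
```

The point of the standard is exactly this comparability: once every vendor reports the same metric the same way, cost-per-use comparisons across heterogeneous collections become mechanical.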
The first workshop of the series "Services to support FAIR data" took place in Prague during the EOSC-hub week (on April 12, 2019).
Speaker: Maajke de Jong
The Meta-Barcoding Research And Visualization Environment (mBRAVE) platform presentation from the 7th International Barcode of Life Conference. An overview of the platform, priorities, and features are provided.
Presented at the 2010 Electronic Resources & Libraries Conference. --
Mary Feeney, Jim Martin, Ping Situ, University of Arizona --
Abstract: Searches, sessions, article requests - have access to data, but what's the next step? Learn how the University of Arizona Libraries' Spending Reductions Project analyzed usage of different types of resources to assess them against quality standards and make cancellation decisions. Tools, challenges, and organizational approaches will also be discussed.
Increasing traceability of physical library items through Koha: the case of S... (Giannis Tsakonas)
Presentation in KohaCon2016, the major event of Koha community, on May 31, 2016. The Library & Information Center, University of Patras, Greece has developed the SELIDA framework, which integrates a set of standardized and widespread library technologies in order to increase the identification and traceability of physical items, such as books. The framework makes use of RFID tags in order to assign unique identification marks, in the form of URIs that can be globally exchanged. The framework has been implemented in the fully translated and customized Koha installation of our Library and its core services support checking in/out of books and browsing of history transactions with geospatial visualization. Its use can support transactions between various libraries or branches of the same library. The proposed presentation will describe the architecture of the framework and how it connects to Koha, as well as the challenges we faced during its development.
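The tag-to-URI mapping at the heart of such a framework can be sketched as follows; the base URL and tag format below are assumptions for illustration, not SELIDA's actual scheme:

```python
# Illustrative only: SELIDA's real URI scheme and Koha integration are not
# documented here; the base URL and tag format are assumptions.
BASE = "http://library.example.org/item/"

def tag_to_uri(rfid_tag: str) -> str:
    """Derive a globally exchangeable URI from an RFID tag identifier."""
    clean = rfid_tag.strip().upper()
    if not clean.isalnum():
        raise ValueError("unexpected characters in RFID tag: %r" % rfid_tag)
    return BASE + clean

print(tag_to_uri("e00401001234abcd"))  # http://library.example.org/item/E00401001234ABCD
```

Because the identifier is a plain HTTP URI rather than a locally scoped barcode, any cooperating library or branch can resolve and exchange it, which is what enables the cross-library transactions the framework supports.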
The “Nomenclature of Multidimensionality” in the Digital Libraries Evaluation... (Giannis Tsakonas)
Digital library evaluation is characterised as an interdisciplinary and multidisciplinary domain, posing a set of challenges to the research communities that intend to utilise and assess criteria, methods and tools. The amount of scientific production published in the field hinders and disorients researchers interested in the domain. Researchers need guidance in order to exploit the considerable amount of data and the diversity of methods effectively, as well as to identify new research goals and develop plans for future work. This paper proposes a methodological pathway to investigate the core topics of the digital library evaluation domain, author communities and their relationships, as well as the researchers who contribute significantly to major topics. The proposed methodology applies topic modelling algorithms and network analysis to a corpus consisting of the digital library evaluation papers presented at the JCDL, ECDL/TPDL and ICADL conferences in the period 2001–2013.
Full text at: dx.doi.org/10.1007/978-3-319-43997-6_19
Session: Digital Library Evaluation
Time: Thursday, 08/Sep/2016, 9:00am - 10:30am
Chair: Claus-Peter Klas
Location: Blauer Saal, Hannover Congress Centrum
This presentation was provided by Kate Byrne of Symplectic during the NISO virtual conference held on Feb 15, 2017, entitled Institutional Repositories: Ensuring Yours is Populated, Useful and Thriving.
This talk was provided by Sarah Shreeves of the University of Miami, during the NISO Virtual Conference held on Feb 15, 2017, entitled Institutional Repositories: Ensuring Yours Is Populated, Useful and Thriving.
This presentation by David Wilcox was part of the NISO Virtual Conference, held on Feb 15, 2017, entitled Institutional Repositories: Ensuring Yours Is Populated, Useful and Thriving.
This presentation was provided by Violeta Ilik of Northwestern University during the NISO Virtual Conference held on Feb 15, 2017, entitled Institutional Repositories: Ensuring Yours is Populated, Useful and Thriving. The DOI for this presentation is http://dx.doi.org/10.18131/G3VP6R
This conversation with Cliff Lynch was the opening segment of the February 15, 2017 program, sponsored by NISO, entitled Institutional Repositories: Ensuring Yours Is Populated, Useful and Thriving
This presentation was provided by Sandi Caldrone of Purdue during the NISO Virtual Conference held on Feb 15, 2017, entitled Institutional Repositories: Ensuring Yours is Populated, Useful and Thriving.
This presentation was provided by Todd Digby and Robert Phillips of the University of Florida during the NISO Virtual Conference held on Feb 15, 2017, entitled Institutional Repositories: Ensuring Yours is Populated, Useful and Thriving.
This presentation was provided by Christine Stohn of ExLibris/Proquest during the NISO Virtual Conference held on February 15, 2017, entitled Institutional Repositories: Ensuring Yours is Populated, Useful and Thriving.
OCLC Research Update at ALA Chicago, June 26, 2017 (OCLC)
Rachel Frick, OCLC Executive Director of the OCLC Research Library Partnership, reviews some of the broad agenda items and recent publications related to the work of OCLC Research. Rachel is then joined for two presentations on specific research topics. First, Sharon Streams (OCLC Director of WebJunction) and Monika Sengul-Jones (OCLC Wikipedian-in-Residence) present on “Public Libraries and Wikipedia.” Next, Kenning Arlitsch (Dean, Montana State University Library) and Jeff Mixter (OCLC Senior Software Engineer) share their findings on “Accurate Institutional Repository Download Measurement using RAMP, the Repository Analytics and Metrics Portal.”
Jean-Claude Bradley presents on "SMIRP: Effective use of a self-evolving database for information capture and retrieval in an R&D environment" on August 14, 2002 at the Barnett International Conference on Laboratory Notebooks. Specific implementations of integrating human and automated workflows in chemistry and nanotechnology applications are detailed.
Comparison of Various PageRank Algorithms (Editor IJCTER)
The web is expanding day by day, and people generally rely on search engines to explore it; thus, it has become very important for sources to give relevant, qualified results. The main aim of this paper is to survey various page rank algorithms and to identify the optimal one among them. The comparison is done on the basis of their speed, limitations, benefits, input parameters, and efficiency of results.
Enhanced Web Usage Mining Using Fuzzy Clustering and Collaborative Filtering... (inventionjournals)
Information on the Internet is overloaded due to its rapid growth, which makes information search a complicated process. A Recommendation System (RS) is a tool now widely used in many areas to suggest items of interest to users. With the development of e-commerce and information access, recommender systems have become a popular technique to prune large information spaces so that users are directed toward the items that best meet their needs and preferences. With the exponential explosion of content generated on the Web, recommendation techniques have become increasingly indispensable. Web recommendation systems assist users in finding exact information and make information search easier. Web recommendation is a web personalization technique that recommends web pages or items to the user based on previous browsing history. However, the tremendous growth in the amount of available information and in the number of visitors to web sites in recent years poses key challenges for recommender systems: recent systems struggle to produce high-quality recommendations from large information spaces, often returning unwanted items instead of targeted ones, and to perform many recommendations per second for millions of users and items. To overcome these challenges, new recommender system technologies are needed that can quickly produce high-quality recommendations even for very large-scale problems. To address these issues we combine two processes: fuzzy clustering and collaborative filtering. Fuzzy clustering is used to predict the items or products that will be accessed in the future based on users' previous browsing behavior. Collaborative filtering is then used to produce the results the user expects from the output of the fuzzy clustering and the collection of Web database items.
Using this new recommendation system, the user's expected product or item is returned in minimal time. The system reduces unrelated and unwanted items and provides results within the user's domain of interest.
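The collaborative filtering half of such a pipeline can be sketched with a toy item-based recommender. The data is invented, and the fuzzy clustering stage is omitted, so this is a simplification for illustration only:

```python
from math import sqrt

# Toy user-page interaction matrix; values are visit counts. The paper's
# pipeline (fuzzy clustering first, then collaborative filtering) is
# simplified here to the filtering step alone.
ratings = {
    "u1": {"page_a": 3, "page_b": 1},
    "u2": {"page_a": 2, "page_b": 1, "page_c": 4},
    "u3": {"page_b": 5, "page_c": 2},
}

def cosine(u, v):
    """Cosine similarity between two sparse interaction vectors."""
    shared = set(u) & set(v)
    num = sum(u[i] * v[i] for i in shared)
    den = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

def recommend(user, k=1):
    """Score unseen items by similarity-weighted neighbor interest."""
    seen = ratings[user]
    scores = {}
    for other, prefs in ratings.items():
        if other == user:
            continue
        sim = cosine(seen, prefs)
        for item, count in prefs.items():
            if item not in seen:
                scores[item] = scores.get(item, 0.0) + sim * count
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("u1"))  # ['page_c']
```

In the full design described above, fuzzy clustering would first narrow the candidate set to a user's likely-interest cluster, and the filtering step would then run over that smaller space, which is what makes the approach tractable at scale.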
Unobtrusive Usability Testing: Creating Measurable Goals to Evaluate a Website (Tabby Farney)
Presented at the 2013 ACRL Conference. Full paper available at: http://www.ala.org/acrl/sites/ala.org.acrl/files/content/conferences/confsandpreconfs/2013/papers/Farney_Unobtrusive.pdf
CONTENT AND USER CLICK BASED PAGE RANKING FOR IMPROVED WEB INFORMATION RETRIEVAL (ijcsa)
Search engines today retrieve more than a few thousand web pages for a single query, most of which are irrelevant. Listing results according to user needs is, therefore, a very real necessity. The challenge lies in ordering retrieved pages and presenting them to users in line with their interests. Search engines, therefore, utilize page rank algorithms to analyze and re-rank search results according to the relevance of the user's query by estimating (over the web) the importance of a web page. The proposed work investigates web page ranking methods and recently developed improvements in web page ranking. Further, a new content-based web page rank technique is also proposed for implementation. The proposed technique determines how important a particular web page is by evaluating the data a user has clicked on, as well as the contents available on those web pages. The results demonstrate the effectiveness and efficiency of the proposed page ranking technique.
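As a point of reference for the ranking methods surveyed, the classic PageRank power iteration can be written as a minimal sketch (toy graph; the click- and content-weighted variant proposed in the paper is not reproduced here, but it would replace the uniform out-link weights with click-derived ones):

```python
# Minimal power-iteration PageRank over a toy link graph with no dangling nodes.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}

def pagerank(graph, damping=0.85, iters=50):
    """Iteratively redistribute rank along out-links until scores stabilize."""
    n = len(graph)
    rank = {p: 1.0 / n for p in graph}
    for _ in range(iters):
        # Each page keeps a (1 - damping) baseline and receives damped shares
        # of rank from every page that links to it.
        new = {p: (1 - damping) / n for p in graph}
        for page, outs in graph.items():
            share = rank[page] / len(outs)
            for out in outs:
                new[out] += damping * share
        rank = new
    return rank

ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # "c" accumulates the most link equity
```

Page "c" ranks highest because it is linked by both "a" and "b", while the others receive only a single in-link each; this link-only view of importance is exactly what content- and click-based refinements aim to enrich.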
Nowadays, the explosive growth of the World Wide Web generates a tremendous amount of web data, and consequently web data mining has become an important technique for discovering useful information and knowledge. Web mining is a vivid research area closely related to Information Extraction (IE). Automatic content extraction from web pages is a challenging yet significant problem in the fields of information retrieval and data mining. Web content mining refers to the discovery of useful information from web content such as text, images, and videos, and extracts or mines useful information or knowledge from web page contents. Web content extraction organizes extracted data instances into groups whose members are similar in some way; it helps the user easily select topics of interest and is useful in management information systems. This paper presents a study of web content extraction techniques. Aye Pwint Phyu and Khaing Khaing Wai, "Study on Web Content Extraction Techniques," published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN 2456-6470, Volume 3, Issue 5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd27931.pdf Paper URL: https://www.ijtsrd.com/computer-science/data-miining/27931/study-on-web-content-extraction-techniques/aye-pwint-phyu
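A minimal tag-filtering extractor, written with Python's standard-library HTML parser, illustrates the simplest end of the techniques surveyed; real extraction systems rely on far more sophisticated DOM analysis and text-density heuristics:

```python
from html.parser import HTMLParser

# Bare-bones content extractor: keeps text that appears outside
# <script>/<style>/<nav> blocks, discarding obvious non-content regions.
class TextExtractor(HTMLParser):
    SKIP = {"script", "style", "nav"}

    def __init__(self):
        super().__init__()
        self.depth = 0          # nesting level inside skipped tags
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

doc = "<html><nav>menu</nav><p>Main article text.</p><script>x=1</script></html>"
parser = TextExtractor()
parser.feed(doc)
print(" ".join(parser.chunks))  # Main article text.
```

Tag-based filtering like this fails on pages where boilerplate and content share the same markup, which is precisely why the survey covers statistical and structural techniques as well.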
What is the current status of the Semantic Web, as first described by Tim Berners-Lee in 2001?
Ten blue links are no longer the only way to drive traffic: Google has added many so-called Knowledge cards and panels to answer the specific informational needs of its users. Sounds complicated, but it isn't: if you ask for information, Google will try to answer it within the result pages.
I'll share my research from a theoretical point of view, exploring patents and papers, as well as actual test cases in the live indices of Google. Getting your site listed as the source of an Answer Card can increase CTR by as much as 16%. How do you get listed? Join my session and I'll shine some light on the factors that come into play when optimizing for Google's Knowledge Graph.
How to Build Fast Data Applications: Evaluating the Top Contenders (VoltDB)
Massive amounts of data generated from mobile devices, M2M communications, sensors and other IoT devices are redefining the world. What kind of applications will you build to take advantage of this data and provide value to your customers? What technologies are out there to help you? This deck illustrates the differences between fast OLAP, stream-processing, and OLTP database solutions. You will also learn the importance of state, real-time analytics and real-time decisions when building applications on streaming data, and how streaming applications deliver more value when built on a super-fast in-memory SQL database. To view the webinar in its entirety, click here: http://learn.voltdb.com/WRFastDataAppsTopContenders.html
International Conference on Computer Science and Technology (anchalsinghdm)
ICGCET 2019 | 5th International Conference on Green Computing and Engineering Technologies. The conference will be held 7-9 September 2019 in Morocco.
The conference aims to promote the work of researchers, scientists, engineers and students from across the world on advancement in electronic and computer systems.
Being found in commercial search engines, like Google, and writing indexable content have largely been on the periphery of library web development practice. In this session, we will explore the mechanics and principles of white hat SEO, identify components that contribute to successful harvesting of library web sites and microsites, and discuss the need to make library content findable in broader online settings. Come learn why SEO is not just "snake oil" and can be an integral part of library marketing and outreach initiatives.
Presented by Jason Clark: Jason is the Head of Digital Access and Web Services at Montana State University Library, where he builds library web applications and sets digital content strategies. You can find him online at http://jasonclark.info/ or on Twitter @jaclark.
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2... (pchutichetpong)
M Capital Group (“MCG”) expects demand to keep growing and supply to evolve, facilitated by institutional investors rotating out of offices and into work-from-home (“WFH”) assets, while the need for data storage expands with global internet usage; experts predict 5.3 billion users by 2023. These market factors will be underpinned by technological changes, such as progressing cloud services and edge sites, allowing the industry to expect strong annual growth of 13% over the next 4 years.
Whilst competitive headwinds remain, exemplified by the recent second bankruptcy filing of Sungard, which blames “COVID-19 and other macroeconomic trends including delayed customer spending decisions, insourcing and reductions in IT spending, energy inflation and reduction in demand for certain services”, the industry has seen key adjustments, and MCG believes that engineering cost management and technological innovation will be paramount to success.
MCG reports that the more favorable market conditions expected over the next few years, helped by the winding down of pandemic restrictions and a hybrid working environment, will drive market momentum forward. The continuous injection of capital by alternative investment firms, as well as growing infrastructure investment from cloud service providers and social media companies, whose revenues are expected to grow more than 3.6x by value by 2026, will likely help propel data center provision and innovation. These factors paint a promising picture for the industry players that offset rising input costs and adapt to new technologies.
According to M Capital Group: “Specifically, the long-term cost-saving opportunities available from the rise of remote managing will likely aid value growth for the industry. Through margin optimization and further availability of capital for reinvestment, strong players will maintain their competitive foothold, while weaker players exit the market to balance supply and demand.”
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
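The automated validation idea above can be sketched as a simple rule-based quality gate; the field names and rules below are invented for the example:

```python
# Minimal automated data-quality gate, illustrating "validate at the source";
# the fields and rules are hypothetical, not from any particular platform.
rules = {
    "order_id": lambda v: isinstance(v, int) and v > 0,
    "amount":   lambda v: isinstance(v, (int, float)) and v >= 0,
    "currency": lambda v: v in {"USD", "EUR", "GBP"},
}

def validate(record):
    """Return the list of fields that fail their quality rule."""
    return [f for f, ok in rules.items() if not ok(record.get(f))]

good = {"order_id": 17, "amount": 9.5, "currency": "EUR"}
bad  = {"order_id": -1, "amount": 9.5, "currency": "JPY"}
print(validate(good), validate(bad))  # [] ['order_id', 'currency']
```

Running checks like these at ingestion time, rather than during analysis, is what prevents a single bad record from contaminating every downstream dashboard and model.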
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value by driving data-driven decisions and maximizing the return on their data investment.
As Europe's leading economic powerhouse and the fourth-largest economy globally, Germany stands at the forefront of innovation and industrial might. Renowned for its precision engineering and high-tech sectors, Germany's economic structure is heavily supported by a robust service industry, accounting for approximately 68% of its GDP. This economic clout and strategic geopolitical stance position Germany as a focal point in the global cyber threat landscape.
In the face of escalating global tensions, particularly those emanating from geopolitical disputes with nations like Russia and China, Germany has witnessed a significant uptick in targeted cyber operations. Our analysis indicates a marked increase in the sophistication of cyberattacks aimed at critical infrastructure and key industrial sectors. These attacks range from ransomware campaigns to advanced persistent threats (APTs), threatening national security and business integrity.
🔑 Key findings include:
🔍 Increased frequency and complexity of cyber threats.
🔍 Escalation of state-sponsored and criminally motivated cyber operations.
🔍 Active dark web exchanges of malicious tools and tactics.
Our comprehensive report delves into these challenges, using a blend of open-source and proprietary data collection techniques. By monitoring activity on critical networks and analyzing attack patterns, our team provides a detailed overview of the threats facing German entities.
This report aims to equip stakeholders across public and private sectors with the knowledge to enhance their defensive strategies, reduce exposure to cyber risks, and reinforce Germany's resilience against cyber threats.
Improving the reported use and impact of institutional repositories
1. Kenning Arlitsch, Dean - @kenning_msu
Patrick OBrien, Semantic Web Research Director - @sempob
Montana State University Library
Coalition for Networked Information
Washington, D.C.
December 13, 2016
2. Theme
Accurately assessing the impact of institutional repositories
Research funded by the Institute of Museum and Library Services (IMLS)
“Measuring Up: Assessing Accuracy of Reported Use and Impact of Digital Repositories”
http://scholarworks.montana.edu/xmlui/handle/1/8924
3. Agenda
❖A New Reporting Model
❖Accuracy of Analytics Reporting Tools
● Page Tagging
● Log file
❖RAMP
● New Prototype Web Service for IR reporting
4. A New Reporting Model

Page Type: Citable Content Downloads
Definition: Non-HTML scholarly content that may be formally cited in the research process
Examples: Publication (.pdf), Presentation (.ppt), Data Sets (.csv)

Page Type: Item Summary
Definition: HTML pages that help the user decide to download the full publication
Examples: Title & Abstract, Item Metadata

Page Type: Ancillary
Definition: HTML pages that provide general information or navigation
Examples: Search Results, Browse by Author, Statistics
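The three-way page classification above can be sketched as a simple URL classifier. The citable file extensions come from the slide; the ancillary path patterns are hypothetical examples, since each repository platform uses its own URL conventions.

```python
from urllib.parse import urlparse

# Citable, non-HTML formats named on the slide; a real repository
# would include more (e.g. .docx, .xlsx, .zip).
CITABLE_EXTENSIONS = (".pdf", ".ppt", ".csv")
# Hypothetical path fragments for ancillary (navigation) pages.
ANCILLARY_PATTERNS = ("/search", "/browse", "/statistics")

def classify_page(url):
    """Classify an IR URL into one of the three reporting page types."""
    path = urlparse(url).path.lower()
    if path.endswith(CITABLE_EXTENSIONS):
        return "Citable Content Download"
    if any(pattern in path for pattern in ANCILLARY_PATTERNS):
        return "Ancillary"
    # Default: an HTML item record (title, abstract, metadata).
    return "Item Summary"
```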
5.–7. [image-only slides]
8. Two Classes of Web Analytics
1. Page Tagging: JavaScript embedded in HTML pages sends data to an analytics service (SaaS)
2. Log Files: the web server records every request in its own log files
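Of the two classes, log file analysis is the one that sees non-HTML download requests directly, because the server logs every request regardless of whether a JavaScript tag could fire. A minimal sketch, assuming Apache-style combined log lines; the regex and extension list are illustrative, not RAMP's actual implementation:

```python
import re

# Matches the request and status portion of a combined-format log line, e.g.:
# 1.2.3.4 - - [05/Jan/2016:10:00:00 -0700] "GET /bitstream/1/8924/thesis.pdf HTTP/1.1" 200 1234
REQUEST_RE = re.compile(r'"(?:GET|POST) (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

def count_citable_downloads(log_lines, extensions=(".pdf", ".ppt", ".csv")):
    """Count successful (2xx) requests for non-HTML citable files."""
    count = 0
    for line in log_lines:
        match = REQUEST_RE.search(line)
        if not match:
            continue
        # Strip any query string before checking the file extension.
        path = match.group("path").split("?", 1)[0].lower()
        if match.group("status").startswith("2") and path.endswith(extensions):
            count += 1
    return count
```

A real log analyzer would also have to filter out robot and crawler traffic, which is one of the hard problems of this class of analytics.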
9. Google Analytics used by most academic libraries
❖Tested 279 academic library websites
● ARL
● DLF
● OCLC-RLP
❖90% of US libraries contained Google tracking code
14. Four IR During 2016 Spring Semester, January 5 to May 17 (n = 134 days)

Study Participant | IR | Platform | URL
Montana State University | ScholarWorks | DSpace | scholarworks.montana.edu
McMaster University | MacSphere | DSpace | macsphere.mcmaster.ca
University of New Mexico | LoboVault | DSpace | repository.unm.edu
University of Utah | USpace | CONTENTdm | uspace.utah.edu
15. Page Tagging does not track non-HTML Citable Content Downloads; Search Analytics does!
17. All four IR using Google Analytics

Page Type | Analytics Pages | Analytics Events | Search Console Search Analytics
Citable Content Downloads | - | 26,355 | 562,933
Item Summary | 284,303 | - | -
Ancillary | 201,793 | - | -

CCD Tracking Improvement: 2,000%
18. Most IR Activity is Unreported by Google Analytics, 01/05/16 - 05/17/16 (days = 134)
[Bar chart: across the four IR, reported activity ranged from 42.5% to 54.3%, while invisible (unreported) activity ranged from 45.7% to 57.5%]
19. ~51% to 63% of Citable IR Activity is Unreported by Google Analytics, 01/05/16 - 05/17/16 (days = 134)
[Bar chart: across the four IR, Citable Content Downloads made up 51% to 63% of activity, Item Summary page views 20% to 35%, and Ancillary page views 14% to 28%]
Most IR activity consists of Citable Content Downloads
20. Montana Method Challenges
❖ Missing non-Google direct link CCD events
● Yahoo
● Bing
● Email
❖ Search Analytics limits time and access
● Moving 90-day window
● Granular data = programming skills to access API
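The moving 90-day window means Search Analytics data must be harvested regularly before it ages out. A minimal sketch of the scheduling arithmetic, assuming the 90-day limit described on the slide (function names are illustrative):

```python
from datetime import date, timedelta

# Search Analytics only exposes a moving window of recent data,
# so any day not harvested within WINDOW_DAYS is lost for good.
WINDOW_DAYS = 90

def retrievable_range(today):
    """Return the (oldest, newest) dates still available in the moving window."""
    return today - timedelta(days=WINDOW_DAYS), today

def needs_harvest(last_harvest, today):
    """True if waiting longer would let un-harvested days fall out of the window."""
    return (today - last_harvest).days >= WINDOW_DAYS
```

This is why a service like RAMP, which harvests continuously on the repository's behalf, can accumulate a history longer than the window the API itself exposes.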
21. RAMP - Repository Analytics and Metrics Portal
❖ Cloud web service
❖ Currently accessible and free during the grant period (through December 2017)
❖ No training or configuration required
❖ Consistent method and terminology
❖ Benchmarking across time and organizations
❖ Request access: mixterj@oclc.org
22. Two New Publications
Published:
Patrick OBrien, Kenning Arlitsch, Leila Sterman, Jeff Mixter, Jonathan Wheeler, and Susan Borda. “Undercounting File Downloads from Institutional Repositories,” Journal of Library Administration, vol. 56, no. 7, 2016.
Forthcoming:
Patrick OBrien, Kenning Arlitsch, Leila Sterman, Jeff Mixter, Jonathan Wheeler, and Susan Borda. “RAMP: Repository Analytics and Metrics Portal: A Prototype Web Service that Accurately Counts Item Downloads from Institutional Repositories,” submitted to Library Hi Tech, expected publication early 2017.
23. Undercounting Research Team
❖Montana State University
●Kenning Arlitsch, Dean @kenning_msu
●Patrick OBrien, Semantic Web Research Director @sempob
●Leila Sterman, Scholarly Communication Librarian @calamityleila
●Susan Borda, Digital Technologies Librarian @mutanthumb
❖OCLC Research
●Jeff Mixter, Software Engineer @jeffmixter
❖University of New Mexico
●Jonathan Wheeler, Data Curation Librarian