Shows how Excel and Esri ArcGIS Desktop 10.1 can be used to build statistical reports from Innovative Interfaces circulation data. Originally presented at IUG 2014 as part of the panel "Slinging statistics and dicing data in the public library."
Beyond COUNTER Compliant: Ways to Assess E-Resources Reporting Tools (NASIG)
Kelly Marie Blanchat, presenter
The need to continually evaluate electronic resources should not be limited to a metric for how resources perform. The reporting tools that monitor and collect e-resource usage need to have their performance evaluated as well. This presentation will cover how vendor-provided systems, designed to aid in the decision-making process of the e-resources lifecycle, can be assessed for reporting accuracy. Following this session, participants will have an understanding of what data points to review when assessing vendor-provided usage statistics tools, and will have a method to begin evaluating their own systems. In summer 2015, Yale Library brought up ProQuest’s 360 COUNTER Data Retrieval Service (DRS), a service in which COUNTER-compliant usage statistics are uploaded, archived, and normalized into consolidated reports twice per year. To date, 360 COUNTER has freed up a significant amount of time for Yale's E-Resources Group, allowing staff resources to be allocated elsewhere in the e-resources lifecycle. This extra staff time also allowed time to “kick the tires” of the system, which resulted in an assessment workflow using Microsoft Excel to compare how raw COUNTER data uploaded to the system was affected by title normalization in the knowledgebase. This assessment workflow helped to identify the volume of data available in the system, and also gave clarity to how the 360 COUNTER system works and what steps need to be taken, by both ProQuest and Yale Library, to improve reporting accuracy. Please note that this presentation will touch on issues found within the system, how ProQuest worked with Yale to identify their source through title normalization decisions, and how errors were corrected when possible. The primary purpose is to raise awareness of the need for reporting tool assessment, which can be applied to any assessment tool, not just 360 COUNTER.
Turning the Corner at High Speed: How Collections Metrics Are Changing in a H... (NASIG)
Collections metrics have always been an important component of effectively managing libraries. But today they are more important than ever before as user-focused libraries and information centers attempt to adjust their collections to current and future library user needs. Frequently this requires sharp turns, smart traffic control, and even drafting behind other libraries who might be in the lead at any given stretch in order to achieve ultimate success. In this presentation, perspectives from a corporate library context and a liberal arts college library will be presented. What are the key metrics today vs. five years ago? What factors are at work that create changes in metrics value over time? What changes might we expect to see in the future? These and other questions will be addressed.
Speakers:
Marija Markovic, Independent Consultant
Steve Oberg, Wheaton College (IL)
Improving the reported use and impact of institutional repositories (Kenning Arlitsch)
This presentation describes the problems of accurately counting file downloads from institutional repositories using commonly applied web analytics methods: page tagging and log file analysis. The presentation introduces a new prototype web service called RAMP (Repository Analytics and Metrics Portal) that provides a much more accurate method of counting file downloads.
This presentation describes a prototype web service (RAMP) that accurately counts file downloads from institutional repositories (IRs). The slides begin with the problems associated with current web analytics methods such as page tagging and log file analysis, and describe how the reporting from four IRs dramatically improved through the use of RAMP. Research conducted for this study was funded by IMLS, and partners include Montana State University, OCLC Research, the University of New Mexico, and the Association of Research Libraries (ARL). The presentation was given at the ARL Assessment Forum at the American Library Association 2017 Midwinter conference in Atlanta, GA.
Data Stories: Using Narratives to Reflect on a Data Purchase Pilot Program (NASIG)
Anita Foster and Gene R. Springs, presenters
The Ohio State University Libraries, driven by campus demand, developed and implemented a data resource purchase pilot program that took place over one fiscal year. Having previously only prioritized the purchasing of subject-related data resources on a small scale, this initiative included large data resources, most of which can meet the research and teaching needs of a variety of academic disciplines. Beginning the pilot with very few criteria for selection and potential acquisition, the Collections Strategist and Electronic Resources Officer encountered various challenges along the way, each requiring additional exploration, research, and eventual resolution. As the pilot program proceeded, other criteria emerged as important considerations when examining data resources, particularly for content and licensing.
To best develop an understanding of what was learned over the year of this pilot program, the Collections Strategist and Electronic Resources Officer collaborated in writing "data stories," or narratives about each of the data resource options investigated for acquisition. Each narrative is structured similarly, from the requestor and initial stated need through the end result. Any pertinent details regarding content, access, or licensing were incorporated to complete the narratives. The data stories will be further analyzed to track commonalities among both the successful and unsuccessful acquisitions, with the proposed outcome of developing tested criteria for future acquisition of data resources.
Walk Before You Run: Prerequisites to Linked Data (Kenning Arlitsch)
Presentation on April 23, 2015 at the Amigos Library Services online conference: "Linked Data & RDF: New Frontiers in Metadata and Access"
Covers traditional SEO and Semantic Web Optimization, including Semantic Web Identity and a Schema.org project at Montana State University Library.
NCompass Live - http://nlc.nebraska.gov/ncompasslive/
June 24, 2015.
Are you looking for ways to edit your catalog records more efficiently, transform your library data from one format to another, and easily detect misspellings and other inaccuracies in your metadata? MarcEdit and Open Refine are powerful tools that can help you deal with all of these issues. Emily Nimsakont, Head of Cataloging & Resource Management, Schmid Law Library, University of Nebraska - Lincoln, will show how you can harness the power of these tools to make your work easier.
Going All-Electronic and Keeping Track of It: Clickthrough Statistics for On... (Christopher Brown)
Brown, Christopher C. “Going All-Electronic and Keeping Track of It: Clickthrough Statistics for Online Document Usage.” Presentation given at the 2011 Missouri Government Documents Conference, 7 June 2011, Columbia, MO.
The process and steps followed in creating a successful visualization, using Encyclopedia of Life data and a Tableau visualization prototype as an example.
Visualising statistical Linked Data with Plone (Eau de Web)
Presentation of a Plone-based tool that can create graphical visualisations of semantic statistical data expressed using the RDF Data Cube Vocabulary and queried using generated SPARQL statements. The tool was developed under a project funded by the European Commission and is publicly available at www.digital-agenda-data.eu
Datavi$: Negotiate Resource Pricing Using Data Visualization (NASIG)
Stephanie J. Spratt, presenter
Ready to ask for a reduction in the annual increase of an e-resource product but unclear on how to make your case? Want to try some innovative strategies to avoid spending more than your budget? Want to reduce the amount of heavy renewal work falling right at fiscal close? Attend this presentation to learn techniques on all of that and more!
The speaker will use commonly collected data to show how to combine and visualize metrics to help make a library’s case for requesting reductions in pricing, adjusting service fees, and asking for changes to subscription periods to balance out the renewal workload. Attendees will learn which data to analyze and combine as it relates to pricing negotiations along with the steps involved to make that data come alive in Excel graphs and charts. Alternate data visualization products will also be discussed. The data visualization techniques, not outcomes, will be the focus of this presentation with the goal of attendees taking back which techniques might be worthwhile endeavors at their own institutions. Attendees will also learn about negotiation strategies and internal and external considerations when preparing to negotiate.
Growing an awareness of negotiation techniques and factors in play both inside and outside the library will help librarians make their cases for equitable pricing and models for library resources. The data visualization techniques shown in this presentation will serve as a stepping-off point for any librarian who wishes to use honesty, directness, and real-world scenarios to negotiate pricing for content and other library expenditures.
A snake, a planet, and a bear ditching spreadsheets for quick, reproducible r... (NASIG)
Presenter: Andrew Kelly, Cataloging & E-Resources Librarian, Paul Smith's College
This poster has two accompanying handouts: https://www.slideshare.net/NASIG/a-snake-a-planet-and-a-bear-ditching-spreadsheets-handout1 and https://www.slideshare.net/NASIG/a-snake-a-planet-and-a-bear-ditching-spreadsheets-handout2slides.
LoCloud Geolocation enrichment tools, Siri Slettvag, Asplan Viak Internet (Av... (locloud)
Presentation about the Geolocation Enrichment tools developed by Avinet as part of the LoCloud project. The tool can be used to add geographic locations to existing datasets provided by cultural institutions. It can be used as part of existing workflows by curators, or in crowd-sourcing projects with users finding places and adding coordinates.
http://www.locloud.eu
An early experimenter with Zepheira's Linked Data for libraries discusses their experience with converting their MARC records to BIBFRAME/Linked Data and trying to measure the impact of this service on circulation, new borrower registrations, traffic counts, and Inter-Library Loans in 2016.
Rethinking Library Acquisition: Demand-Driven Purchasing for Scholarly Books
Librarians must reconsider how they collect monographs. Traditionally, academic libraries purchase books to support their curricular and research needs, without much consideration of use. Even though 40% or more of books in most academic libraries never get used, this model makes sense in a world in which books go out of print, shelf space is available, and collection budgets are stable. But the world has changed: as publishers shift to an electronic model, books will not go out of print, libraries are under pressure to convert shelf space to study space, and libraries have fewer funds to purchase books annually. This panel will discuss approaches to demand-driven acquisition of monographs at two institutions: the University of Arizona and the University of Denver. While discussing plans being developed at these libraries, we will also look at implications for libraries in general, scholarly publishing, book vendors, and academia.
Moderator: Becky Clark, Marketing Director, Johns Hopkins University Press
Panelists: Matt Nauman, Director of Publisher Relations, Blackwell; Michael Levine-Clark, Collections Librarian, University of Denver; Stephen Bosch, Materials Budget, Procurement, and Licensing Librarian, University of Arizona Library; Kim Anderson, Senior Collection Development Manager and Bibliographer, YBP Library Services
Establishing the Connection: Creating a Linked Data Version of the BNB (nw13)
Presentation for Talis Linked Data in Libraries event July 14 2011
Describes some of the choices made and lessons learned in migrating from traditional bibliographic metadata to linked open data.
OCLC Research Update at ALA Chicago. June 26, 2017. (OCLC)
Rachel Frick, OCLC Executive Director of the OCLC Research Library Partnership, reviews some of the broad agenda items and recent publications related to the work of OCLC Research. Rachel is then joined for two presentations on specific research topics. First, Sharon Streams (OCLC Director of WebJunction) and Monika Sengul-Jones (OCLC Wikipedian-in-Residence) present on “Public Libraries and Wikipedia.” Next, Kenning Arlitsch (Dean, Montana State University Library) and Jeff Mixter (OCLC Senior Software Engineer) share their findings on “Accurate Institutional Repository Download Measurement using RAMP, the Repository Analytics and Metrics Portal.”
Using linked data in a heterogeneous sensor web: Challenges, experiments and ... (Cybera Inc.)
Presentation by Liang Yu during the Sensor Web Ontology and Semantics paper session of the Sensor Web Enablement workshop (held during the 2011 Cybera Summit).
2009 E Usage Stats LibMeter Zbw Hh Workshop (LibMeter)
NatLibStats, dbs, bix, local: COUNTER, OpenURL. Part 3-4 of 5; beta version 0.8 of transparencies for a management workshop on usage analysis of electronic library services, held at ZBW Hamburg, 2009-11-06. © LibMeter 2009, Peter Ahrens. All rights reserved.
The slides for the keynote talk I presented at the 2nd National Open Data Meetup in Brazil (http://2.encontro.dados.gov.br/encontro.html).
Talking about open data, open government, and how opening data in and of itself won't be a magic solution: we need open processes and an engaged civil society and media sector. Some steps and some challenges. The distinction between personal data and open data, how to keep the internet open, etc.
Apache CarbonData+Spark to realize data convergence and Unified high performa... (Tech Triveni)
Challenges in Data Analytics:
Different application scenarios need different storage solutions: HBase is ideal for point-query scenarios but unsuitable for multi-dimensional queries. MPP is suitable for data warehouse scenarios, but the engine and data are coupled together, which hampers scalability. OLAP stores used in BI applications perform best for aggregate queries, but full-scan queries perform sub-optimally; moreover, they are not suitable for real-time analysis. These distinct systems lead to low resource sharing and need different pipelines for data and application management.
Techniques to optimize the PageRank algorithm usually fall into two categories: one tries to reduce the work per iteration, and the other tries to reduce the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged has the potential to save iteration time. Skipping in-identical vertices, those with the same in-links, helps reduce duplicate computations and thus could help reduce iteration time. Road networks often have chains which can be short-circuited before PageRank computation to improve performance; the final ranks of chain nodes can be easily calculated. This could reduce both the iteration time and the number of iterations. If a graph has no dangling nodes, the PageRank of each strongly connected component can be computed in topological order. This could help reduce the iteration time and the number of iterations, and also enable multi-iteration concurrency in PageRank computation. The combination of all of the above methods is the STICD algorithm [sticd]. For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
Opendatabay - Open Data Marketplace.pptx (Opendatabay)
Opendatabay.com unlocks the power of data for everyone. Open Data Marketplace fosters a collaborative hub for data enthusiasts to explore, share, and contribute to a vast collection of datasets.
First ever open hub for data enthusiasts to collaborate and innovate. A platform to explore, share, and contribute to a vast collection of datasets. Through robust quality control and innovative technologies like blockchain verification, opendatabay ensures the authenticity and reliability of datasets, empowering users to make data-driven decisions with confidence. Leverage cutting-edge AI technologies to enhance the data exploration, analysis, and discovery experience.
From intelligent search and recommendations to automated data productisation and quotation, Opendatabay's AI-driven features streamline the data workflow. Finding the data you need shouldn't be complex. Opendatabay simplifies the data acquisition process with an intuitive interface and robust search tools. Effortlessly explore, discover, and access the data you need, allowing you to focus on extracting valuable insights. Opendatabay breaks new ground with dedicated, AI-generated synthetic datasets.
Leverage these privacy-preserving datasets for training and testing AI models without compromising sensitive information. Opendatabay prioritizes transparency by providing detailed metadata, provenance information, and usage guidelines for each dataset, ensuring users have a comprehensive understanding of the data they're working with. By leveraging a powerful combination of distributed ledger technology and rigorous third-party audits Opendatabay ensures the authenticity and reliability of every dataset. Security is at the core of Opendatabay. Marketplace implements stringent security measures, including encryption, access controls, and regular vulnerability assessments, to safeguard your data and protect your privacy.
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value by driving data-driven decisions and maximizing the return on their data investment.
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23... (John Andrews)
SlideShare Description for "Chatty Kathy - UNC Bootcamp Final Project Presentation"
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
StarCompliance is a leading firm specializing in the recovery of stolen cryptocurrency. Our comprehensive services are designed to assist individuals and organizations in navigating the complex process of fraud reporting, investigation, and fund recovery. We combine cutting-edge technology with expert legal support to provide a robust solution for victims of crypto theft.
Our Services Include:
Reporting to Tracking Authorities:
We immediately notify all relevant centralized exchanges (CEX), decentralized exchanges (DEX), and wallet providers about the stolen cryptocurrency. This ensures that the stolen assets are flagged as scam transactions, making it impossible for the thief to use them.
Assistance with Filing Police Reports:
We guide you through the process of filing a valid police report. Our support team provides detailed instructions on which police department to contact and helps you complete the necessary paperwork within the critical 72-hour window.
Launching the Refund Process:
Our team of experienced lawyers can initiate lawsuits on your behalf and represent you in various jurisdictions around the world. They work diligently to recover your stolen funds and ensure that justice is served.
At StarCompliance, we understand the urgency and stress involved in dealing with cryptocurrency theft. Our dedicated team works quickly and efficiently to provide you with the support and expertise needed to recover your assets. Trust us to be your partner in navigating the complexities of the crypto world and safeguarding your investments.
38. Thank You!
Thank You!
Susan Lytinen
Data Projects Specialist
Gail Borden Public Library District
slytinen@gailborden.info
847-608-5013
39. Thank You!
Thank You from all of us!
Susan Lytinen
Data Projects Specialist
Gail Borden Public Library District
slytinen@gailborden.info
847-608-5013
Margaret Jasinski
Manager, Collection Services
Arlington Heights Memorial Library
mjasinsk@ahml.info
847-506-2643
Jan Sissors
Manager, Circulation Services
Arlington Heights Memorial Library
jsissors@ahml.info
847-506-2625
Pam Skittino
Head of Support Services
Deerfield Public Library
pskittino@deerfieldlibrary.org
847-580-8970
Editor's Notes
I am Susan Lytinen, a Data Projects Specialist for the Gail Borden Public Library District in Elgin, IL, about 50 miles west of Chicago. My position was created to gather data to help us make decisions. I’ll be motoring through this presentation fairly quickly, but it’s online, with practically every word I’m saying in the presenter notes, and you can always contact me with any questions.
I started saving daily checkout data from our Innovative ILS to make maps, but I soon realized that it could be used to gain many different insights about our 3 library service points: the Main Library, the Rakow Branch, and the MediaBank disc dispenser which is built into the outside wall of the Rakow Branch. Please note that the door that opens into the Rakow Branch is mere feet from the MediaBank. For instance, our Innovative reports of circulation by location code told us how many items were being checked out of the MediaBank, but analyzing daily checkout data told us how many individual people were using it, how often, and whether they also checked out materials from inside the Rakow Branch during the same visit. We can also produce lists of materials sent from the Main Library to the Rakow Branch, and vice versa, for collection development purposes. And these are just the first projects that occurred to me. Since I enjoy mapping, I am also including some maps that I made to show where the patrons live that use each of our library buildings.
It is always pleasant to hear an acknowledged expert recommend something you are already doing. On March 4, I watched a webinar given by the well-known library consultant Joan Frye Williams: Measurements that matter: analyzing patron behavior: an Infopeople webinar / Joan Frye Williams (https://infopeople.org/civicrm/event/info?id=377&reset=1). Ms. Williams stressed the importance of this kind of information gathering and analysis.
I have been collecting daily checkout data since July 1, 2012. The following dates are included in the reports shown in this presentation: checkout data from 7/1/12-12/31/13, and patron records as of 1/27/14. These screen shots show which fields I have been saving.
Some libraries may not have OUT LOC. This field is useful because it tells you the Innovative terminal number for the computer where the item was checked out. Are your patrons using your self-checks, or still taking everything to the Circulation Desk? Are you sending a lot of material from one library branch to another?
When you are preserving information for checkouts, depending on how often you search and how short your loan periods are, you may need to search for checkouts by LOUTDATE as well. This search finds items that were checked out on your specified date but were returned before you executed your search. Some libraries may not have LPATRON, LOUTDATE. There is no LAST OUT LOC field. You are going to miss some checkouts no matter what, but it is no use agonizing about it. I have seen recent Sierra discussion list posts about using SQL to search circ_trans and item_circ_history, but I am not there yet.
These are the Innovative Create Lists searches and exports I use. I eliminated the location “zfly” because we use that for temporary items that are created at the Circulation Desk when someone wants to check out material that is not in our database. I have started exporting CSV files instead of TXT because CSV files will open right up in Excel; you don’t have to import them.
I cumulate the 2 daily CSV files (outdate and last outdate) into a single monthly spreadsheet. These screen shots show the daily files for December 15, 2013, and the December 2013 spreadsheet. In order to do that: I use macros to add columns and change the column headers on the daily files. In the LOUTDATE files, I fake an OUT LOC by assuming that the item was checked out at the library where it is normally shelved. After all, why would someone request an item to be sent to a different building, only to return it on the same day? I add the last 5 columns. I copy the LocationID (OUT LOC) into the Checkout library field, then use “find and replace” to change it to a one-letter library code. The Owning library is the 1st letter of the CollID (LOCATION). Check/Own is the Checkout library concatenated to the Owning library. Date is the date from the DateTime field. Month is the 1st day of the month. I am rather inept, and cannot figure out how to get Excel to just say the month and year. Then I copy and paste the multiple files into a single spreadsheet. There must be more streamlined ways to do this; I just have to find out what they are! Annual spreadsheets get to be a little unwieldy for Excel to process, and in most cases I want monthly statistics throughout the year.
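One more streamlined way could be a short pandas script that consolidates the daily CSV exports and derives the same helper columns. This is only a hedged sketch: the file names, column headers (OUT LOC, LOCATION, DateTime), and the terminal-to-library mapping are assumptions for illustration, not the actual export layout.

```python
# Hypothetical sketch: combine daily Create Lists CSV exports into one table
# and derive the helper columns described in these notes.
import glob
import pandas as pd

daily_files = sorted(glob.glob("exports/*.csv"))   # one OUTDATE and one LOUTDATE file per day (assumed layout)
checkouts = pd.concat((pd.read_csv(f) for f in daily_files), ignore_index=True)

terminal_to_library = {"101": "g", "201": "r", "202": "m"}   # hypothetical OUT LOC terminal codes
checkouts["Checkout library"] = checkouts["OUT LOC"].astype(str).map(terminal_to_library)
checkouts["Owning library"] = checkouts["LOCATION"].astype(str).str[0]
checkouts["Check/Own"] = checkouts["Checkout library"] + checkouts["Owning library"]
checkouts["DateTime"] = pd.to_datetime(checkouts["DateTime"], errors="coerce")
checkouts["Date"] = checkouts["DateTime"].dt.date
checkouts["Month"] = checkouts["DateTime"].dt.to_period("M").astype(str)   # e.g. "2013-12": month and year only

checkouts.to_csv("checkouts_combined.csv", index=False)
```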
Excel PivotTables are an easy way to analyze data. If you have not used them, they are not too difficult to make. The screen shots show the fields I used in the PivotTables. These tables are for the month of February 2013. The first PivotTable shows the number of items checked out per patron at each of 3 service points: m = MediaBank at the Rakow Branch, g = Main Library, r = Rakow Branch. The first patron in the table, however, is the enigmatic Patron 0. I do not know why, but a number of incomplete entries appear in each checkout file. Usually the item record information is complete, but the PatronID is 0, and the DateTime is blank. I exclude these entries from the final count. A more conventional patron, Patron 1000792, had 29 MediaBank checkouts and 1 Rakow Branch checkout during the month. The second PivotTable adds the date to show whether people who checked items out from the MediaBank also went into the Rakow Branch and checked things out on the same day. You can see that Patron 1000792 checked out from the MediaBank on 8 different days, and checked out from Rakow on 1 of the days he used the MediaBank. A better question: do MediaBank users ever visit the Rakow Branch without using the MediaBank? But I didn’t think of that in time for this presentation.
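The two PivotTables described above could also be produced with a couple of lines of pandas. This sketch assumes the combined table built in the previous sketch and the same hypothetical column names.

```python
# Sketch: pandas equivalents of the two February 2013 PivotTables.
import pandas as pd

checkouts = pd.read_csv("checkouts_combined.csv", parse_dates=["Date"])
feb = checkouts[(checkouts["Month"] == "2013-02") & (checkouts["PatronID"] != 0)]   # drop the enigmatic Patron 0

# PivotTable 1: items checked out per patron at each service point (g, m, r).
items_per_patron = pd.crosstab(feb["PatronID"], feb["Checkout library"])

# PivotTable 2: add the date, so each row is one patron on one day, with the
# number of items checked out at each service point that day.
per_patron_day = (feb.groupby(["PatronID", "Date", "Checkout library"])
                     .size()
                     .unstack(fill_value=0))

print(items_per_patron.head())
print(per_patron_day.head())
```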
I decided to consider that all items checked out by the same patron at the same site on the same day would count as one session or visit. To find out how many MediaBank visits there were during the month, I copied the second PivotTable, then sorted by MediaBank checkouts (Column C). There were 1,862 MediaBank visits in February 2013. To find out how many times people checked items out from the MediaBank and also checked items out from the Rakow Branch on the same day, I sorted lines 2 through 1,863 of the spreadsheet by number of Rakow Branch checkouts (Column E). There were 290 occasions on which patrons checked items out from both the MediaBank and the Rakow Branch on the same day. Why is it that most people who visit the MediaBank, which is right by the door of the Rakow Branch, do not also go inside the building and check something out? Are they only interested in Blu-rays, DVDs, and videogames? Are they visiting when the Rakow Branch is closed? …?
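The visit counts described above (1,862 MediaBank visits, 290 overlapping visits) could also be checked with a groupby instead of copy-and-sort; a sketch under the same assumed file and column names:

```python
# Sketch: one patron at one site on one day counts as one visit.
# Count MediaBank visits and how many of them also included a Rakow checkout.
import pandas as pd

checkouts = pd.read_csv("checkouts_combined.csv", parse_dates=["Date"])
feb = checkouts[(checkouts["Month"] == "2013-02") & (checkouts["PatronID"] != 0)]

per_patron_day = (feb.groupby(["PatronID", "Date", "Checkout library"])
                     .size()
                     .unstack(fill_value=0)
                     .reindex(columns=["g", "m", "r"], fill_value=0))

mediabank_visits = per_patron_day[per_patron_day["m"] > 0]
print("MediaBank visits:", len(mediabank_visits))                        # 1,862 in the notes
print("...that also used Rakow:", int((mediabank_visits["r"] > 0).sum()))  # 290 in the notes
```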
Our fiscal year runs from July to June. This report includes Joan Frye Williams’ favorite statistic, “mode”, the value which occurs most often. The statistic “days between visits” is messy to figure out, but I will continue it for a while to see if it is used. I get all the dates of the visits into a spreadsheet, insert columns to hold the number of days between the visits, and then get averages, etc., for those numbers.
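The "days between visits" gaps could also be computed without inserting helper columns; this is a sketch under the same assumptions about the combined checkout file.

```python
# Sketch: days between visits per patron, computed straight from visit dates.
import pandas as pd

checkouts = pd.read_csv("checkouts_combined.csv", parse_dates=["Date"])
visits = (checkouts[checkouts["PatronID"] != 0][["PatronID", "Date"]]
          .drop_duplicates()
          .sort_values(["PatronID", "Date"]))

gaps = visits.groupby("PatronID")["Date"].diff().dt.days.dropna()

print("average days between visits:", round(gaps.mean(), 1))
print("mode:", int(gaps.mode().iloc[0]))   # the value that occurs most often
```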
It is easy to get a report of items sent from one building to the other from the monthly spreadsheet. Items owned by the Rakow Branch have Owning library = r. Items checked out at the Main Library have Checkout library = g. I found it easier to look for these 2 codes in 1 field, so Check/Own = gr. I have a list of staff administrative cards, and I use VLOOKUP to eliminate checkouts on those cards. Collection Development staff like a list of the actual items. I sort it by location code, call number, title, and datetime so that repeated loans of the same title will appear together. I delete the PatronID column.
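A sketch of the same report: filter on the combined Check/Own code, drop the staff administrative cards, and sort so repeated loans of a title sit together. The staff-card file and the call number/title column names are hypothetical.

```python
# Sketch: items owned by the Rakow Branch but checked out at the Main Library
# (Check/Own == "gr"), minus staff administrative cards.
import pandas as pd

checkouts = pd.read_csv("checkouts_combined.csv")
staff_cards = set(pd.read_csv("staff_admin_cards.csv")["PatronID"])   # hypothetical list of staff card IDs

sent = checkouts[(checkouts["Check/Own"] == "gr")
                 & ~checkouts["PatronID"].isin(staff_cards)]

report = (sent.sort_values(["LOCATION", "CALL #", "TITLE", "DateTime"])   # assumed export field names
              .drop(columns=["PatronID"]))
report.to_csv("rakow_items_checked_out_at_main.csv", index=False)
```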
I use a PivotTable to get the number of items sent each month by location code, and add the caption for the location code to the finished report.
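A crosstab reproduces that monthly count by location code; adding the captions would still be a separate lookup. Column names here are the same assumptions as in the earlier sketches.

```python
# Sketch: number of items sent each month, by shelving location code.
import pandas as pd

sent = pd.read_csv("rakow_items_checked_out_at_main.csv")
by_location = pd.crosstab(sent["LOCATION"], sent["Month"])
print(by_location)
```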
A map is a format that helps you visualize data. It’s also a lot of fun to make.
The most widely used mapping software is Esri’s ArcGIS. When I started thinking about it, I heard that it is too expensive and too difficult. The standard exclamation is “They give advanced degrees in that!” Not too difficult: we teach ourselves Excel, Access, etc. Tools: the Esri tutorial, books, classes. Not too expensive: ArcGIS Desktop Basic is $1,500/year, but only $250/year for an educational or nonprofit institution using it only for administrative purposes. I used GIS Tutorial 1: Basic Workbook, by Wilpen L. Gorr and Kristen S. Kurland (Redlands, CA: Esri Press). The 5th edition came out May 3, 2013 (ISBN 978-158948-335-4; list price $79.95, Amazon price $43.86). It includes access to a 180-day trial of ArcGIS® 10.1 for Desktop Advanced software and a DVD with data for working through the exercises. There are also open source GIS programs, such as Quantum GIS. This screen shows ArcMap, the ArcGIS element that you use to make maps. Maps are made up of various files. The right pane shows the files you can choose to make the map. The center pane holds the map itself. The left pane shows the files that make up the map you are working on. You can also draw on the map, but whenever possible you want to use files that already exist. The most interesting thing about ArcMap for this project is its ability to take a spreadsheet of addresses and geocode them, that is, locate them on a map.
The first spreadsheet shows active patrons and their addresses (which I have whited out in the screen shot). As you know, Innovative patron records have one field that holds the entire address, but with patience it is possible to parse out the various elements. Perhaps this is easier when you get patron information via SQL? To make the map I want, I need to add a code to each patron showing which libraries that patron used. In the second spreadsheet, Columns A – D show the results of a PivotTable of checkout records. You can see how many items each patron checked out from each service point: g = Main Library, m = MediaBank, r = Rakow Branch. In Columns E – G, I used formulas to change a checkout number greater than 0 into the letter for each building. You can see the formula I used for Column E in the screen shot. In Column H, I concatenated all 3 building codes, so you could see where the patron checked out items. But I was worried that the map would be too hard to read if I used all 3 locations (too many code combinations). So I simplified the codes to only 2 locations in Column I. Since the MediaBank is located at the Rakow Branch, I used “r” to mean either the MediaBank or the Rakow Branch. Another thing to notice is that the PatronID in the patron record is 2 characters longer than the PatronID in the checkout record. The PatronID in the patron record has a “p” at the beginning and a check digit at the end.
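The building-code columns (E through I in the notes) could also be derived in one pass. This sketch mirrors the described letter-or-blank formulas and the concatenation, with assumed names; it collapses MediaBank use into the Rakow code the same way.

```python
# Sketch: per-patron codes showing which service points a patron used,
# plus the simplified two-location code (MediaBank counts as Rakow).
import pandas as pd

checkouts = pd.read_csv("checkouts_combined.csv")
counts = (pd.crosstab(checkouts["PatronID"], checkouts["Checkout library"])
            .reindex(columns=["g", "m", "r"], fill_value=0))

codes = pd.DataFrame(index=counts.index)
for lib in ["g", "m", "r"]:
    # Same spirit as the formula described for Column E: a letter if the
    # patron has any checkouts at that service point, otherwise blank.
    codes[lib] = counts[lib].gt(0).map({True: lib, False: ""})
codes["AllThree"] = codes["g"] + codes["m"] + codes["r"]

used_rakow_side = (counts["m"] + counts["r"]) > 0
codes["Two"] = codes["g"] + used_rakow_side.map({True: "r", False: ""})

codes.to_csv("patron_library_codes.csv")
```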
I used the Excel VLOOKUP function to add the checkout library code to the patron records. First I copied the PatronID from Column A of the patron spreadsheet into Column J and chopped off the beginning “p” and the ending check digit to make the ShortID. Then I added Column K to hold the checkout library code. Then I copied the PatronID and the checkout library code from the checkout record spreadsheet into Columns O – P of the patron spreadsheet. I used VLOOKUP to copy the checkout library code from Column P to Column K. You can see the formula in the screen shot. If the ShortID in Column J does not match any PatronID in Column O, the formula returns “#N/A”. That means that the patron did not check out any items in the time period covered by my report, and for this project I do not want to map patrons without checkouts. I copied Column K and pasted the values back into the spreadsheet, deleted Columns O – P, and deleted the lines with “#N/A”. That made the spreadsheet ready to pull into ArcMap.
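The VLOOKUP-and-delete sequence is essentially a join; a sketch with hypothetical file and column names, keeping only patrons who have checkouts:

```python
# Sketch: strip the leading "p" and trailing check digit from the full patron
# record number, then join to the checkout-based codes. An inner join keeps
# only patrons with checkouts, so there are no "#N/A" rows to delete.
import pandas as pd

patrons = pd.read_csv("active_patrons.csv", dtype=str)        # hypothetical export with the full "p...x" RECORD #
codes = pd.read_csv("patron_library_codes.csv", dtype=str)    # built in the earlier sketch

patrons["ShortID"] = patrons["RECORD #"].str[1:-1]            # drop the "p" prefix and the check digit

mapped = patrons.merge(codes, left_on="ShortID", right_on="PatronID", how="inner")
mapped.to_csv("patrons_for_arcmap.csv", index=False)
```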
I pulled the spreadsheet of patrons into ArcMap and geocoded the addresses. If you have your own file of mapped reference addresses, the geocoding operation is free. However, if you want to use Esri’s online World Geocode Service, as I did, there is an additional charge. You need to tell ArcMap which fields in your table contain the address parts.
Of 44,557 patrons in the spreadsheet, 36,374 (~82%) were geocoded. You have a chance to go over the records that did not geocode and match them manually to possible addresses in the reference file, but I did not do that in this case. The file that was formed by geocoding is called a shapefile. The shapefile is accompanied by an attribute table that has the fields from the Excel spreadsheet, and additional fields with address information from Esri. Again, I have whited out the addresses. You can use data from the attribute table to change the way the shapefile looks. You can also add data to the table.
I threw in a background map, added shapefiles of the library district boundaries and the 2 library buildings, and colored the dots by checkout library: Main Library only (27,077), Rakow Branch only (3,215), or both (6,082). Unfortunately, the dots cover each other, although I tried to layer them so that the Main Library only patrons were on the bottom with the largest dot and the faintest color, and the Rakow Branch only patrons on the top with the smallest dot and the most vivid color.
When you look at maps which show only one type of patron at a time, you can see how misleading the 3-color map is.
I made the dots smaller, so they would not overlay each other. You can see how difficult it can be to make a map communicate information effectively. However, it is obvious that patrons do not always go to the library that is closest to them. I wanted to find out how many patrons go to the library that is farther from them.
ArcMap has a “measure” tool that tells you the distance between 2 points, but not between 1 point and 36,374 other points. There is a tool that will measure the distances between large numbers of points, but it is not included in the basic version of ArcGIS. However, there is another way to record the approximate distance of each patron from each library building. I used the “measure” tool to find out that the distance from the Rakow Branch to the farthest patron in the northeast corner of the district is 8.2 miles.
“Select by location” let me select all patrons within a certain distance of the Rakow Branch. I used ¼-mile increments to measure how far patrons live from the branch. Since we have already established that the distance from the Rakow Branch to the farthest patron is 8.2 miles, it is not surprising that when I searched by 8.25 miles, I got all 36,374 patrons. However, when I searched by 8.0 miles, a few patrons in the northeast corner of the district were not selected. That means that those patrons live between 8.25 and 8.0 miles from the Rakow Branch. They can be seen on the map because their dots are dark, instead of the fluorescent blue that you see when a feature is selected. I wanted to label those patrons “8.25” by adding the information to the shapefile’s attribute table.
When you select features on the map, the attribute table’s lines for those features are also selected (highlighted in fluorescent blue). You can choose to see either all the lines in the table, or just the lines that have been selected. I chose to see the selected lines, and the attribute table told me there were 36,242 out of 36,374. That means that 132 patrons live between 8.25 and 8.0 miles from the Rakow Branch, and I wanted to label them “8.25”. I added a field to the attribute table called “RakowDist” to hold this information. Fortunately, the attribute table has a handy icon that lets you reverse the selection on the map (and on the attribute table). When you click on that icon, the dots that were highlighted turn dark, and the dots that were dark become highlighted. As you will see on the next page, the highlighted lines in the attribute table change, too.
When you choose to look at only the selected lines in the attribute table, there are now only 132. After you tell ArcMap that you want to edit the table, you can copy 132 lines from an Excel spreadsheet and paste them into the attribute table.
The next step is to search by location for patrons who live within 7.75 miles of the Rakow Branch. Then I reversed the selection using the attribute table. You can see that the little triangle of patrons in the northeast corner is bigger. These people live between 8.25 and 7.75 miles of the branch. This includes the 132 people who live between 8.25 and 8.0 miles of the branch, the ones we found in the previous search.
The attribute table tells us that 351 people live between 8.25 and 7.75 miles from the branch. However, 132 of those people already have “8.25” in the “RakowDist” column. I sorted the column largest to smallest, then pasted “8.0” into the lines where RakowDist = 0, lines 133 – 351. I repeated this, decreasing the distance by 0.25 miles each time, until I had a distance from the Rakow Branch for each patron. I did the whole thing over again to get the distance from the Main Library for each patron.
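If the geocoded coordinates are exported from the attribute table, the per-patron distances could also be computed in one pass rather than by quarter-mile selection rings. This is only a sketch: the file, the Lat/Lon column names, and the two sets of building coordinates are hypothetical placeholders.

```python
# Sketch: straight-line distance (miles) from each patron to each building,
# using a plain haversine formula. Coordinates below are placeholders.
import math
import pandas as pd

def haversine_miles(lat1, lon1, lat2, lon2):
    # Great-circle distance between two latitude/longitude points, in miles.
    earth_radius = 3958.8
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * earth_radius * math.asin(math.sqrt(a))

MAIN = (42.04, -88.28)    # hypothetical Main Library coordinates
RAKOW = (42.01, -88.34)   # hypothetical Rakow Branch coordinates

patrons = pd.read_csv("geocoded_patrons.csv")   # assumed columns: Lat, Lon, Two, ...
patrons["MainDist"] = patrons.apply(lambda row: haversine_miles(row["Lat"], row["Lon"], *MAIN), axis=1)
patrons["RakowDist"] = patrons.apply(lambda row: haversine_miles(row["Lat"], row["Lon"], *RAKOW), axis=1)
patrons.to_csv("geocoded_patrons_with_distances.csv", index=False)
```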
The population around the Main Library is denser, so a map color-coded by distance for the Main Library is more striking than the map for the Rakow Branch. I thought a “heat map”, with the colors gradually going more blue the closer they were to the building, would be effective, but it is hard to read.
Concentric bands of contrasting colors are easier to see.
To find out how many patrons live closer to the Main Library, but go only to the Rakow Branch, I used “Select by attributes.” I chose patrons where the column “Two” (the code for the checkout library) = “r” and the distance to the Main Library is less than the distance to the Rakow Branch. There are 3,215 patrons who go only to the Rakow Branch. 235, or 7.3%, live closer to the Main Library than to the Rakow Branch. Are these people drawn to the Rakow Branch by the MediaBank? Not all of them, as you can see from the attribute table. Several patrons have an “r” in the column that shows Rakow Branch use, but no “m” in the column that shows MediaBank use.
What other factors would there be? The yellow dots showing these patrons are not grouped in a limited geographic area. Perhaps they work or shop by the Rakow Branch?
To find out how many patrons live closer to the Rakow Branch, but go only to the Main Library, I chose patrons where the column “Two” (the code for the checkout library) = “g” and the distance to the Main Library is more than the distance to the Rakow Branch. There are 27,077 patrons who go only to the Main Library. 3,079, or 11.4%, live closer to the Rakow Branch than to the Main Library.
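Both "Select by attributes" queries translate directly into boolean filters over the assumed Two/MainDist/RakowDist columns from the earlier distance sketch:

```python
# Sketch: the two attribute selections described above, as pandas filters.
import pandas as pd

patrons = pd.read_csv("geocoded_patrons_with_distances.csv")

rakow_only = patrons[patrons["Two"] == "r"]
closer_to_main = rakow_only[rakow_only["MainDist"] < rakow_only["RakowDist"]]
print(len(rakow_only), len(closer_to_main))      # 3,215 and 235 in the notes

main_only = patrons[patrons["Two"] == "g"]
closer_to_rakow = main_only[main_only["MainDist"] > main_only["RakowDist"]]
print(len(main_only), len(closer_to_rakow))      # 27,077 and 3,079 in the notes
```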
As you can see, some of the Main Library only patrons live very close to the Rakow Branch. There are 3,215 patrons who exclusively go to the Rakow Branch, and 3,079 patrons who live closer to Rakow yet shun it.
To avoid taking too much time and inducing boredom, I skipped steps that I used in making these reports. Please feel free to email or call me with any questions. My ArcGIS skills are not extensive, but, as you can see, the software is fun to experiment with.