Research Data Access and Preservation Summit, 2016
Atlanta, GA
May 4-7, 2016
Part of Panel 4, "Measuring Up: How Are We Defining Success for Research Data Services?"
Presenter:
Yasmeen Shorish, James Madison University
1. Building Without a Plan
How do you assess structural strength?
Yasmeen Shorish (@yasmeen_azadi)
Physical & Life Sciences Librarian
James Madison University
Research Data Access and Preservation Summit 2016
2. Remember yesterday?
• Data services started in an ad hoc manner
• No infrastructure support to help strategically plan
• All individual planning incorporated alongside existing
liaison role
3. Ad hoc assessment
• Half-time data services = no time to develop assessment
tools
• Catch as catch can
• Time Time Time Time Time Time Time
4. Demand as measurement
• This is not an ideal metric
• Measures awareness (yay!) and not necessarily
impact/quality ( ¯\_(ツ)_/¯ )
5. Measuring demand?
• Prior to 2015, no organizational analytics-capture
software; relied on email records.
• Difficult to untangle and categorize multi-topic emails
6. Measuring demand?
• Starting in 2015, began recording data consultation
statistics in LibAnswers
• Outstanding issue – operator negligence
9. Outreach as measurement
• Coordinated with Office of Sponsored Programs to be
notified of upcoming grant submissions
• Email PI with reference to DMPTool and an invitation to
meet to discuss further
10. Setting up success
• Build intentional, internal networks:
• Liaisons
• Office of Research & Scholarship
11. Setting up success
• Establish best practices
• Capture multiple metrics
Image by Krista Quiroga
12. Drawing the plans
• Outcomes:
• Measure needs/demand over time
• Identify service gaps
• Document resource use (time, software, personnel)
Image by Dan Hetteix
I took more time to type “time” over and over again than to actually plan how to assess my data services. Seriously.
Recording all data-related encounters in LibAnswers helps assess the depth of the queries and answers, but it is only effective if I actually remember to enter email exchanges into it. Mostly a time issue. And, as such, I haven’t been exceptional about capturing the richness of the answers.
Roughly 40-50% of these emails result in an accepted invitation; closer to 75% of PIs go on to use the DMPTool.
Single coordinator role = more time on one topic, but still requires a distributed network of expertise
Single coordinator role = a single point person establishing organizational best practices, such as how liaisons use LibAnswers with respect to data metrics. That internal network of collaborators means that I can deploy needs-assessment surveys more easily, which will hopefully give us trend data over time.
Our sad assessment identified what we need to have in place moving forward. We want to be able to achieve these outcomes, so what do we need to measure, and what data do we actually need to gather, to do so? Still just one person, leading no defined department. We must be discriminating about what data we gather, since I rely on other people to do some of the data gathering.
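As a minimal sketch of the kind of trend measurement described above, consultation records could be tallied by topic and year. All field names and sample records here are hypothetical, not drawn from LibAnswers or its actual export format:

```python
from collections import Counter

# Hypothetical consultation records; in practice these might be
# exported from a system such as LibAnswers.
consultations = [
    {"year": 2015, "topic": "DMP review"},
    {"year": 2015, "topic": "data storage"},
    {"year": 2016, "topic": "DMP review"},
    {"year": 2016, "topic": "DMP review"},
    {"year": 2016, "topic": "metadata"},
]

# Count consultations per (year, topic) to surface demand trends
# and potential service gaps over time.
counts = Counter((c["year"], c["topic"]) for c in consultations)

for (year, topic), n in sorted(counts.items()):
    print(f"{year}  {topic}: {n}")
```

Even a simple tally like this, kept consistently across years, would support the outcomes above: demand over time, service gaps, and where staff time is going.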