This document provides an overview of DataShare, a system being developed to facilitate research data sharing across the University of California campuses. It begins with background on the goals of catalyzing data sharing and lowering barriers to it. It then demonstrates the UCSF DataShare instance and presents technical details of the system components and their interactions for depositing and downloading data. Other topics covered include branding, customization, costs, and governance agreements. The document concludes with a discussion of next steps, including potential additional features, communication plans, and timelines for setting up and customizing initial instances at each campus.
EZID makes it simple for researchers and others to obtain and manage long-term identifiers for their digital content. The service can create and resolve identifiers, and it also allows entry and maintenance of information about the identifier (metadata). This presentation was given as part of a webinar series.
To facilitate data sharing from within the University of California system and beyond, the University of California Curation Center (UC3) is developing a new ingest and discovery layer for our data curation service, Dash. Dash uses the Merritt repository for preservation and a self-service overlay layer for submission and discovery of research datasets. The new overlay, dubbed Stash (STore And SHare), will feature an enhanced user interface with a simple and intuitive deposit workflow, while still accommodating rich metadata. Stash will enable individual scholars to upload data through local file browse or drag-and-drop operations; describe data in terms of scientifically meaningful metadata, including methods, references, and geospatial information; identify datasets for persistent citation and retrieval; preserve and share data in an appropriate repository; and discover, retrieve, and reuse data through faceted search and browse. Stash can be implemented in conjunction with any standards-compliant repository that supports the SWORD protocol for deposit and the OAI-PMH protocol for metadata harvesting. Stash will feature native support for the DataCite and Dublin Core metadata schemas, but is designed to accommodate other schemas to support discipline-specific applications. By alleviating many of the barriers that have historically precluded wider adoption of open data principles, Stash empowers individual scholars to assert active curation control over their research outputs; encourages more widespread data preservation, publication, sharing, and reuse; and promotes open scholarly inquiry and advancement.
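To make the harvesting side of this architecture concrete: an OAI-PMH harvester issues simple HTTP requests such as ListRecords and parses the XML it gets back. The sketch below builds such a request URL and extracts Dublin Core titles from a response; the endpoint URL and the abbreviated sample response are illustrative only, not the actual Stash or Merritt service.

```python
import urllib.parse
import xml.etree.ElementTree as ET

DC_NS = "{http://purl.org/dc/elements/1.1/}"

def list_records_url(base_url, metadata_prefix="oai_dc"):
    """Build an OAI-PMH ListRecords request URL against a harvesting endpoint."""
    query = urllib.parse.urlencode(
        {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    )
    return f"{base_url}?{query}"

def parse_titles(oai_xml):
    """Extract Dublin Core titles from a ListRecords response document."""
    root = ET.fromstring(oai_xml)
    return [t.text for t in root.iter(f"{DC_NS}title")]

# Abbreviated sample of the kind of response a harvester would receive:
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Example dataset</dc:title>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

print(list_records_url("https://example.org/oai"))
print(parse_titles(SAMPLE))
```

Because the protocol is just parameterized HTTP plus namespaced XML, any standards-compliant repository can expose its metadata to discovery services this way, which is what makes the repository-agnostic design of Stash feasible.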
Presentation by Lisa Federer (UCLA) on 16 July 2013 as part of the IMLS-sponsored DMPTool Webinar Series.
Description: This webinar will discuss the special needs of health sciences researchers and help you learn how to talk to researchers in the health and medical fields about their data management needs. We will cover NIH Data Sharing Policy and how to write a data management plan that meets NIH’s requirements. After viewing this webinar, participants will understand: who is required to submit a plan; specific information that should be included in a plan; how to use the DMPTool to write an NIH-specific DMP; and where to find additional resources for help.
Data Publishing Models by Sünje Dallmeier-Tiessen
Data Publishing is becoming an integral part of scholarly communication today. Thus, it is indispensable to understand how data publishing works across disciplines. Are there best practices others can learn from or even data publishing standards? How do they impact interoperability in the Open Science landscape? The presentation will look at a range of examples, and the main building blocks of data publishing today. The work has been conducted as part of the RDA Data Publishing Workflows group.
Dataverse in the Universe of Data by Christine L. Borgman
Data repositories are much more than "black boxes" where data go in but may never come out. Rather, they are situated in communities, with contributors, users, reusers, and repository staff who may engage actively or passively with participants. This talk will explore the roles that Dataverse plays – or could play – in individual communities.
Metadata & Data Curation Services by Thu-Mai Christian
The Odum Institute was an early adopter of the Dataverse Network™ (DVN) virtual archive platform, transferring all of its holdings to the Virtual Data Center (VDC), the DVN’s precursor, in 2005. This presentation will illustrate the Odum Institute Data Archive’s integration of the Dataverse Network™ into its current data curation pipeline process and discuss the Dataverse Network’s role in the Institute’s tiered levels of data curation services.
February 18, 2015 NISO Virtual Conference: “Scientific Data Management: Caring for Your Institution and its Intellectual Wealth”
Learning to Curate Research Data
Jennifer Doty, Research Data Librarian, Emory Center for Digital Scholarship, Emory University, Robert W. Woodruff Library
Center for Open Science and the Open Science Framework: Dataverse Add-on by S...
The Open Science Framework (OSF: http://osf.io; supported and maintained by the Center for Open Science - COS: http://centerforopenscience.org/) is a free, open source workflow management service and repository designed for scientists to manage and connect everything across their research process. One of the first add-on connections was Dataverse, which provides value to users through an easy connection as a repository service. This talk will introduce the Dataverse add-on connection and provide a technical view of how it was built and how it connects the OSF and Dataverse.
Dataverse in China: Internationalization, Curation and Promotion by Yin Shenqin
Zhang Jilong & Yin Shenqin will discuss the internationalization development work done by Fudan University to support a Chinese language user interface in Dataverse. Additionally, the practice of data curation at Fudan University will be presented, as well as the branding and dissemination of Dataverse in China.
This presentation was given by Sandi Caldrone of Purdue during the NISO Virtual Conference held on February 15, 2017, entitled “Institutional Repositories: Ensuring Yours is Populated, Useful and Thriving.”
RDAP13 Elizabeth Moss: The impact of data reuse (ASIS&T)
Kathleen Fear, ICPSR, University of Michigan
“The impact of data reuse: a pilot study of 5 measures”
Panel: Data citation and altmetrics
Research Data Access & Preservation Summit 2013
Baltimore, MD April 4, 2013 #rdap13
NISO Webinar on data curation services at the CDL (Carly Strasser)
“Building Communities and Services in Support of Data-Intensive Research,” a webinar presented on 18 September 2013 for the NISO Webinar Series; part 2 of 2 on data curation.
A demonstration of the DMPTool, which helps researchers create data management plans now required by the National Science Foundation and other US grant funding agencies. See http://www.cdlib.org/uc3/webinars/20111019/ for the recording.
February 18 2015 NISO Virtual Conference
Scientific Data Management: Caring for Your Institution and its Intellectual Wealth
Improving Integrity, Transparency, and Reproducibility Through Connection of the Scholarly Workflow
Andrew Sallans, Partnerships, Collaborations, and Funding, Center for Open Science
This webinar is intended for librarians, staff, and information professionals interested in improving the usability of the DMPTool at their institutions. It will also help institutions begin to formalize which individuals or resources will be available to assist researchers using the tool, and it will be most useful for those who need to customize the tool for their institution.
Although there is consensus that datasets should be treated like “first class” research objects in how they are discovered, cited, and recognized, this is still far from a reality. Datasets are poorly indexed by search engines, and they are rarely cited in formal reference lists. A solution that a number of journals are implementing is to publish discovery and citation proxy objects in the form of peer-reviewed “data papers.” A strength of this approach is that it requires dataset creators to write up rich and useful metadata for the paper, but an accompanying weakness is that busy creators are not always willing to invest the necessary time and energy. To enhance dataset discoverability without burdening creators, EZID (easy-eye-dee) will begin using dataset metadata to automatically generate lightweight, non-peer-reviewed publications that will increase the exposure of the metadata to search engines. EZID (ezid.cdlib.org) maintains public DataCite metadata records for over 167,000 datasets, any of which can be viewed as HTML or as a dynamically generated PDF. In cases where the creator has submitted only the required DataCite metadata, the document functions as a cover sheet or landing page. If the creator chooses to submit the optional Abstract and Methods metadata (over 2,000 records already contain Abstracts), the document expands to more closely resemble a traditional journal article, while retaining the linking functionality of a landing page. A potential bonus is that providing an incrementally improved document in exchange for the effort of submitting incrementally improved metadata may encourage authors to submit more than the minimum required metadata.
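The idea of a document that grows with the richness of its metadata can be sketched in a few lines. The rendering below is purely illustrative (it is not EZID's actual generation logic, and the field names and example DOI are hypothetical): required DataCite fields yield a minimal cover sheet, and an optional abstract expands the page toward an article-like document.

```python
import html

def landing_page(record):
    """Render a minimal HTML cover sheet from required DataCite-style fields;
    the page expands when optional abstract metadata is present.
    (A sketch only, not EZID's actual rendering.)"""
    e = {k: html.escape(str(v)) for k, v in record.items()}
    parts = [
        f"<h1>{e['title']}</h1>",
        f"<p>{e['creator']} ({e['publicationYear']}). {e['publisher']}.</p>",
        f'<p>Dataset: <a href="https://doi.org/{e["identifier"]}">'
        f'doi:{e["identifier"]}</a></p>',
    ]
    if "abstract" in e:  # optional metadata makes the document article-like
        parts.append(f"<h2>Abstract</h2><p>{e['abstract']}</p>")
    return "\n".join(parts)

# Only the required fields: the result functions as a bare cover sheet.
minimal = {
    "identifier": "10.5072/FK2EXAMPLE",  # 10.5072 is a test DOI prefix
    "creator": "Doe, Jane",
    "title": "Example dataset",
    "publisher": "Example University",
    "publicationYear": 2014,
}
print(landing_page(minimal))
```

Submitting an `abstract` key in the record is the "incrementally improved metadata" trade described above: a small extra effort by the creator yields a visibly richer document.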
10-1-13 “Research Data Curation at UC San Diego: An Overview” Presentation Sl... (DuraSpace)
“Hot Topics: The DuraSpace Community Webinar Series,” Series Six: “Research Data in Repositories.” Curated by David Minor, Research Data Curation Program, UC San Diego Library. Webinar 1: “Research Data Curation at UC San Diego: An Overview.”
Presented by David Minor & Declan Fleming, Chief Technology Strategist, UC San Diego Library
A presentation given at the Coalition for Networked Information describing efforts undertaken by three partner organizations (UCSF CTSI, UCSF Library, and the California Digital Library) to support sharing of research data by UCSF investigators.
Information technology and resources are an integral and indispensable part of the contemporary academic enterprise. In particular, technological advances have nurtured a new paradigm of data-intensive research. However, far too much of this activity still takes place in silos, to the detriment of open scholarly inquiry, integrity, and advancement. To counteract this tendency, the University of California Curation Center (UC3) has been developing and deploying a comprehensive suite of curation services that facilitate widespread data management, preservation, publication, sharing, and reuse. Through these services UC3 is engaging with new communities of use: in addition to its traditional stakeholders in cultural heritage memory organizations, e.g., libraries, museums, and archives, the UC3 service suite is now attracting significant adoption by research projects, laboratories, and individual faculty researchers. This webinar will present an introduction to five specific services – DMPTool, DataUp, EZID, Merritt, Web Archiving Service (WAS) – applicable to data curation throughout the scholarly lifecycle, two recent initiatives in collaboration with UC campuses, UC Berkeley Research Hub and UC San Francisco DataShare, and the ways in which they encourage and promote new communities of practice and greater transparency in scholarly research.
In this presentation from the DDN User Meeting at SC13, Erik Deumans from SSERCA describes how the institution is sharing data with WOS from DDN.
Watch the video presentation: http://insidehpc.com/2013/11/13/ddn-user-meeting-coming-sc13-nov-18/
Merritt’s micro-services-based architecture provides a number of options for easy integration with diverse external discovery services with specific disciplinary focus on scientific data sharing. By removing many of the barriers faced by researchers interested in data publication, the integrations of Merritt with DataShare and Research Hub exemplify a new service model for cooperative and distributed data sharing. The widespread adoption of such sharing is critical to open scientific inquiry and advancement.
RDAP 16 Lightning: An Open Science Framework for Solving Institutional Challe... (ASIS&T)
Research Data Access and Preservation Summit, 2016
Atlanta, GA
May 4-7, 2016
Lightning Rounds (Thursday, May 5)
Presenter:
Matthew Spitzer, Center for Open Science
Linked Data Love: research representation, discovery, and assessment
#ALAAC15
The explosion of linked data platforms and data stores over the last five years has been profound – both in terms of quantity of data as well as its potential impact. Research information systems such as VIVO (www.vivoweb.org) play a significant role in enabling this work. VIVO is an open source, Semantic Web-based application that provides an integrated, searchable view of the scholarly activities of an organization. The uniform semantic structure of VIVO-ISF data enables a new class of tools to advance science. This presentation will provide a brief introduction and update to VIVO and present ways that this semantically-rich data can enable visualizations, reporting and assessment, next-generation collaboration and team building, and enhanced multi-site search. Libraries are uniquely positioned to facilitate the open representation of research information and its subsequent use to spur collaboration, discovery, and assessment. The talk will conclude with a description of ways librarians are engaged in this work – including visioning, metadata and ontology creation, policy creation, data curation and management, technical, and engagement activities.
Kristi Holmes, PhD
Director, Galter Health Sciences Library
Director of Evaluation, NUCATS
Associate Professor, Preventive Medicine-Health and Biomedical Informatics
Northwestern University Feinberg School of Medicine
10-15-13 “Metadata and Repository Services for Research Data Curation” Presen... (DuraSpace)
“Hot Topics: The DuraSpace Community Webinar Series,” Series Six: “Research Data in Repositories.” Curated by David Minor, Research Data Curation Program, UC San Diego Library. Webinar 2: “Metadata and Repository Services for Research Data Curation.”
Presented by Declan Fleming, Chief Technology Strategist; Arwen Hutt, Metadata Librarian; and Matt Critchlow, Manager of Development and Web Services, UC San Diego Library.
How Cyverse.org enables scalable data discoverability and re-use (Matthew Vaughn)
Cyverse.org designs, builds, and operates an innovative, integrated life sciences cyberinfrastructure. It provides data management and analysis capabilities with point and click, cloud, API, and command-line interfaces that engage users of any computing proficiency and is based on an extensible platform that integrates local and national-scale HPC, storage, and cloud resources. Cyverse directly supports thousands of users who store and access over 2PB of research data, use millions of compute hours annually, and participate in the platform's improvement, plus a secondary user community from partner projects that have built atop it. Cyverse is organized around "Data Store" and "App Catalog" services, each of which enables users to upload digital research assets that can be kept private, shared, or made public. Recently, Cyverse has been transitioning from passively enabling digital sharing towards active facilitation. It is partnering with repositories like NCBI SRA to enable direct submission from Cyverse applications, adopting commonly-used ontologies, enabling import/export of virtual machine images, developing metadata-driven persistent landing pages for data sets, and providing DOI (and other identifier) services. These new features are expected to further catalyze growth of an interoperable, interconnected network of shared research infrastructure across the biological sciences.
Data grids are an emerging technology that enables the formation of sharable collections from data distributed across multiple storage resources. The integrated Rule Oriented Data System (iRODS) is a data grid developed by the DICE Center at UNC-CH. The iRODS data grid enforces management policies that control properties of the collection. Examples of policies include retention, disposition, distribution, replication, metadata extraction, time-dependent access controls, data processing, data redaction, and integrity checking. Policies can be defined that automate administrative functions (file migration and replication) and that validate assessment criteria (authenticity, integrity, chain of custody). iRODS is used to build data sharing environments, digital libraries, and preservation environments. The iRODS data grid is used at UNC-CH to support the Carolina Digital Repository, the LifeTime Library for the School of Information and Library Science, data grids for the Renaissance Computing Institute (RENCI), collaborations within North Carolina, and both national and international data sharing. At RENCI, the TUCASI data grid supports shared collections between UNC-CH, Duke, and NCSU. The RENCI data grid is federated with ten other data grids including the National Climatic Data Center, the Texas Advanced Computing Center data grid, and the Ocean Observatories Initiative data grid. International applications include the CyberSKA Square Kilometer Array for radio astronomy and the French National Institute for Nuclear Physics and Particle Physics. The collections that are assembled may contain hundreds of millions of files, and petabytes of data. A specific goal is the integration of institutional repositories with the national data infrastructure that is being assembled under the NSF DataNet program. The software is available as an open source distribution from http://irods.diceresearch.org.
Improving user engagement in a data repository with web analytics (IUPUI)
Presented at LITA Forum 2013
Abstract: A goal of data curation activities is to enable discovery and reuse of valuable data sets. How well repositories facilitate these activities is difficult to measure with existing metrics. In this presentation we will discuss how to utilize usage statistics from DSpace (Apache SOLR) and Google Analytics to better understand how researchers discover, access, and use datasets archived in an institutional repository. Our focus will be on data analysis to explore the information seeking needs and behavior of data repository users. Ultimately, this analytic approach will inform the outreach, marketing, and impact evaluation of data repositories.
Also available at: http://hdl.handle.net/1805/3665
Identity and access management for user login at the departmental and federation levels; users can be easily managed through identity and access management.
Data “publication” attempts to appropriate for data the prestige of publication in the scholarly literature. While the scholarly communication community substantially endorses the idea, it hasn’t fully resolved what a data publication should look like or how data peer review should work. To contribute an important and neglected perspective on these issues, we surveyed ~250 researchers across the sciences and social sciences, asking what expectations “data publication” raises and what features would be useful to evaluate the trustworthiness and impact of a data publication and the contribution of its creator(s).
In early 2014, we asked science and social science researchers...
• What expectations do the terms publication and peer review raise in reference to data?
• What features would be useful to evaluate the trustworthiness, evaluate the impact, and enhance the prestige of a data publication?
Software development should build on the successful work of others. The DMPTool helps researchers with data management planning, but what about other phases of the data life cycle? In this webinar, we will discuss what software integration with the DMPTool might look like, and why it is important. Topics include:
1. Background: why tools integration is important; why we are talking about this in terms of the DMPTool.
2. Details and plans for DMPTool2 regarding software integration and compatibility.
3. Future possibilities for software integration for DMPTool2.
4. Example of successful integration of tools: work at the Center for Open Science.
Data management plans existed long before the NSF started requiring them. DMPs have inherent value despite being relatively unknown to researchers until now. Proper, thorough data management plans are potentially a major time saver and a huge asset for a project. In this webinar, we will cover how to go beyond funder requirements and develop more thorough DMPs. The Gulf of Mexico Research Initiative requires an extensive data management plan for projects it funds; we will hear about their efforts and how they plan to use the DMPTool going forward.
This webinar will discuss the special needs of digital humanities researchers and help you learn how to talk to them about their information management needs.
Topics that will be covered:
What is humanities data?
What special considerations are involved in creating DMPs for humanities data?
Where can you store humanities data?
What will humanities funding agencies be looking for? What regulations apply to humanities data (e.g., data sharing, data management, data availability)?
What librarians should know before meeting with a humanist; how humanists differ from other researchers in the way they think about their version of data.
The thorough integration of information technology and resources into scientific workflows has nurtured a new paradigm of data-intensive science. However, far too much research activity still takes place in silos, to the detriment of open scientific inquiry and advancement. Data-intensive science would be facilitated by more universal adoption of good data management practices ensuring the ongoing viability and usability of all legitimate research outputs, including data, and the encouragement of data publication and sharing for reuse. The centerpiece of such data sharing is the digital repository, acting as the foundation for external value-added services supporting and promoting effective data acquisition, publication, discovery, and dissemination. Since a general-purpose curation repository will not be able to offer the same level of specialized user experience provided by disciplinary tools and portals, a layered model built on a stable repository core is an appropriate division of labor, taking best advantage of the relative strengths of the concerned systems.
The Merritt repository, operated by the University of California Curation Center (UC3) at the California Digital Library (CDL), functions as a curation core for several data sharing initiatives, including the eScholarship open access publishing platform, the DataONE network, and the Open Context archaeological portal. This presentation will highlight two recent examples of external integration for purposes of research data sharing: DataShare, an open portal for biomedical data at UC San Francisco; and Research Hub, an Alfresco-based content management system at UC Berkeley. Both significantly extend Merritt’s coverage of the full research data lifecycle and workflows, upstream, with augmented capabilities for data description, packaging, and deposit, and downstream, with enhanced domain-specific discovery. These efforts showcase the catalyzing effect that coupled integration of curation repositories and well-known public disciplinary search environments can have on research data sharing and scientific advancement.
9 July 2013; presented by Joan Starr and Carly Strasser. Description: EZID makes it simple for researchers and others to obtain and manage long-term identifiers (DOIs and ARKs) for their digital content. EZID is a great tool for data management, and researchers can build EZID and identifiers into their data management plans. In this free summer webinar, we cover: the advantages of EZID and identifiers for data management; how to configure the DMPTool to point to your library's EZID services; and how to use the DMPTool as a ready source of contact information for your outreach.
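For a sense of what building EZID into a workflow looks like: the EZID API exchanges identifier metadata as simple line-oriented “name: value” (ANVL) text, with a few characters percent-encoded. The serializer below is a minimal sketch under that assumption; the target URL and metadata values are illustrative, and the full escaping rules should be taken from the EZID API documentation rather than from this example.

```python
def anvl_escape(s):
    """Percent-encode the characters that would break ANVL framing.
    '%' must be escaped first to avoid double-escaping."""
    return s.replace("%", "%25").replace("\n", "%0A").replace("\r", "%0D")

def to_anvl(metadata):
    """Serialize a metadata dict into ANVL 'name: value' lines,
    the request-body format EZID's API accepts."""
    return "\n".join(
        f"{anvl_escape(str(k))}: {anvl_escape(str(v))}"
        for k, v in metadata.items()
    )

# Illustrative record: '_target' points the identifier at a landing page,
# and 'datacite.*' fields carry citation metadata.
record = {
    "_target": "https://example.org/dataset/42",  # hypothetical landing page
    "datacite.title": "Example dataset",
    "datacite.creator": "Doe, Jane",
}
print(to_anvl(record))
```

A script would send this text as the body of an authenticated HTTP request to the EZID service to create or update an identifier; because the format is plain text, it is easy to generate from whatever system already holds the metadata.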
2. Where we’re going
• Background
• Demo of UCSF DataShare
• Technical details
• Other details
• Future plans
• Q&A
(From Flickr by Leo Hidalgo)
4. Goal: Catalyze widespread research data sharing
How: Develop a system that lowers data sharing barriers and builds an engaged user community
5. Survey of users by Angela Rizk-Jackson
(Chart: “Has your research group provided public access to data?” Yes/No, with follow-ups “Why?” — journal required, funder required, other — and “How?” — repository, website, other. n = 114.)
7. Repository choices…
Repositories for data:
• Discipline-specific
• General content
• Institutional
• Non-institutional
• Publishers/for-profits
• Short-term projects
8. Repository choices…
Which is more important? Depends. Which should a researcher use? Both.
Institutional:
• All data associated with a paper
• Tells a story
• Clearinghouse for researcher’s works
Discipline-specific:
• Some of the data for a given paper
• Discoverable
• Integrated systems
• Collection policies
9. Institutional:
• All data associated with a paper
• Tells a story
• Clearinghouse for researcher’s works
10. IRs are SO 2002.
(From Flickr by Colin ZHU, johnsons531, Ludie Cochrane, and Kapil Karekar)
11. Last year…
“Federal agencies investing in research and development (more than $100 million in annual expenditures) must have clear and coordinated policies for increasing public access to research products.”
13. But…
• Not always self-service
• Sometimes complicated
• Data?
• “Old” user interfaces
(From Flickr by jackcheng)
14. Simplify data deposit for UC researchers
• Simple metadata
• Self-service upload and download
• Branded for campus
Most important: institutional control over data
17. Technical goals
• Easy submission
• Persistent citation
• Preservation assurance
• Effective discovery
• Control over terms of use
• All the benefits of a centrally hosted service, while maintaining campus branding and identity
(From www.dimensionsinfo.com and Flickr by Eric Peacock)
18. System components
• Easy submission: UCSF drag-n-drop client
• Persistent citation
• Preservation assurance
• Effective discovery
• Control over terms of use: data use agreements (DUAs)
• All the benefits of a centrally hosted service, while maintaining campus branding and identity: DNS, Apache, CSS, and campus Shibboleth IdPs (datashare.berkeley.edu, datashare.ucdavis.edu, datashare.uci.edu, datashare.ucla.edu, …)
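As a rough illustration of how “DNS, Apache, CSS, and campus Shibboleth IdPs” could combine to give each campus its own branded entry point on a single central host, here is a hypothetical sketch (not the actual DataShare configuration; all paths, hostnames, and the IdP entityID are illustrative):

```apache
# Hypothetical vhost: campus DNS points datashare.berkeley.edu at the
# central service; the name-based vhost selects campus branding and IdP.
<VirtualHost *:443>
    ServerName datashare.berkeley.edu
    DocumentRoot /var/www/datashare              # shared application code
    Alias /branding /var/www/branding/berkeley   # campus-specific CSS and logo

    <Location />
        AuthType shibboleth
        ShibRequestSetting requireSession 1
        # Route logins to the campus IdP (entityID is illustrative)
        ShibRequestSetting entityID urn:mace:incommon:berkeley.edu
        Require valid-user
    </Location>
</VirtualHost>
```

One vhost per campus keeps the application code shared while the branding alias and the per-vhost IdP setting vary.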
19. Deposit interactions
(Diagram.) The researcher (data producer) authenticates to the DataShare portal at datashare.campus.edu with campus credentials, via Shibboleth and the campus IdP. In the drag-n-drop client, the researcher assembles the dataset, adds metadata, accepts the data use agreement, and submits to Merritt, which preserves the content on SDSC cloud preservation storage. An Atom feed populates the XTF discovery index. A DOI is requested from EZID, which registers the metadata with DataCite and assigns the DOI. Primo and the Data Citation Index harvest the metadata for A&I discovery.
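The EZID step above exchanges metadata in ANVL, a line-oriented “key: value” format, over EZID’s REST API. A minimal sketch of building such a payload (the metadata values and the landing-page URL are hypothetical; the exact metadata profile used by DataShare is an assumption):

```python
def to_anvl(metadata):
    """Serialize a metadata dict into ANVL ('key: value' lines), the
    format the EZID REST API accepts. Percent-encode characters that
    would break the line structure ('%' and line terminators)."""
    def esc(s):
        return s.replace("%", "%25").replace("\n", "%0A").replace("\r", "%0D")
    return "\n".join(f"{esc(k)}: {esc(v)}" for k, v in metadata.items())

# Hypothetical request body for minting a DOI for a deposited dataset
payload = to_anvl({
    "datacite.title": "Example dataset",
    "datacite.creator": "Doe, Jane",
    "datacite.publicationyear": "2014",
    "_target": "https://datashare.ucsf.edu/xtf/view?docId=example",
})
```

The payload would be POSTed as text/plain to an EZID mint endpoint with the depositor’s credentials.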
20. Download interactions
(Diagram.) The researcher (data consumer) discovers datasets through faceted search and browse in the XTF discovery index, Primo, or the Data Citation Index, and authenticates via the campus IdP at the DataShare portal (datashare.campus.edu). EZID-assigned DOIs provide persistent links into the portal. After accepting the DUA terms, the researcher downloads the data, which Merritt retrieves from SDSC cloud storage — synchronously for small datasets, asynchronously for large ones (> 500 MB).
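The synchronous/asynchronous split above can be sketched as a simple size check. The 500 MB cutoff comes from the slide; the function name and mode labels are illustrative:

```python
SYNC_LIMIT_BYTES = 500 * 1024**2  # 500 MB threshold from the slide

def download_mode(total_size_bytes):
    """Small datasets stream back in the same request; large ones are
    assembled in the background and fetched when ready."""
    return "synchronous" if total_size_bytes <= SYNC_LIMIT_BYTES else "asynchronous"
```

A request for a 10 MB dataset would stream immediately, while a 2 GB dataset would be queued for background packaging.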
22. Roles
Campus Library:
• Delivers service to community
• Shapes user interface, URL, branding
• Customizes key components
• Develops help, training
UC3 / CDL:
• Guides the campus
• Preserves content in Merritt
• Connects to EZID
• Deploys XTF for discovery
• Works with vendors
SDSC:
• Maintains production storage infrastructure
• Holds three independent copies of content
23. Branding & Customization
• Logo
• URL
• Contact information
• Other…?
(From Flickr by Diorama Sky)
25. Cost
Anticipated cost of providing all campus ladder-track faculty with 5 GB for 10 years:

Campus           Faculty   Threshold   Paid-up cost
Berkeley           1,260     10 TB       $29,300
Davis              1,240     10 TB       $29,300
Irvine             1,051     10 TB       $29,300
Los Angeles        1,701     10 TB       $29,300
Merced               159      1 TB        $2,930
Riverside            561      5 TB       $14,650
San Diego          1,109     10 TB       $29,300
San Francisco        366      2 TB        $5,860
Santa Barbara        746      5 TB       $14,650
Santa Cruz           485      5 TB       $14,650

Source: http://legacy-its.ucop.edu/uwnews/stat/headcount_fte/oct2013/welcome.html
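The paid-up costs in the table scale linearly with the storage threshold at $2,930 per TB (e.g., $29,300 / 10 TB), with each campus threshold sized to cover its faculty headcount at 5 GB each. A quick check:

```python
RATE_PER_TB = 2930  # dollars; implied by the table ($29,300 / 10 TB)

def paid_up_cost(threshold_tb):
    """Paid-up cost for a campus storage threshold, in dollars."""
    return threshold_tb * RATE_PER_TB

print(paid_up_cost(10))  # 29300, matching the Berkeley row
```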
26. Governance & Agreements
Goal: simplify & scale data use & deposit agreements
27. Governance & Agreements
(Diagram linking the Data User, CDL, the UC Campus, and the Data Depositor through terms of service and an ODL or similar license.)
28.
• Background
• Demo of UCSF DataShare
• Technical details
• Other details
• Next steps & future plans
• Q&A
(From Flickr by Leo Hidalgo)
29. Who Decides?
• CDL to work with each campus to implement & shape service
• Campus-to-campus interaction
• Group meetings as needed
• SAG1 check-ins
• Communication (…)
37. DASH: Helping Community Repositories
(Slide watermarked “To Be Revised”)
What makes DASH unique:
• Modern, intuitive user interface for superior user experience
• Freely available code for download and use by anyone
• User-friendly API(s) to ensure interoperability with existing repositories (e.g., SWORD for deposit; Atom, OAI-PMH, ResourceSync for populating the discovery index)
• Customizable interfaces that can be altered easily to reflect service provider branding
• Authentication via institutional Identity Management Systems
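Of the harvesting protocols named above, OAI-PMH is the simplest to picture: a harvester issues HTTP GET requests with a `verb` parameter and pages through results with a resumption token. A minimal sketch of building such requests (the endpoint URL is hypothetical):

```python
from urllib.parse import urlencode

def listrecords_url(base_url, metadata_prefix="oai_dc", resumption_token=None):
    """Build an OAI-PMH ListRecords request URL. Per the protocol, a
    resumptionToken request carries only the verb and the token."""
    params = {"verb": "ListRecords"}
    if resumption_token:
        params["resumptionToken"] = resumption_token  # continue a paged harvest
    else:
        params["metadataPrefix"] = metadata_prefix    # first page of a harvest
    return f"{base_url}?{urlencode(params)}"

# Hypothetical DASH OAI-PMH endpoint
url = listrecords_url("https://datashare.ucsf.edu/oai")
```

A discovery index would call this repeatedly, feeding back each response's resumption token until the repository signals the end of the list.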
38. Next Steps – Next 2 Weeks
• Details to be established:
  – who’s interested
  – tech contact for interested campuses
  – communication lines
(From Flickr by Themactep)
39. Next Steps – Next 2 Months
• Get DataShare up and running:
  – Shibboleth configuration & other authentication
  – domains/URLs established
  – customizations – logos etc.
(From Flickr by Themactep)
40. Next Steps – Longer Term
• In-person meeting?
• CDL camp?
• Communication/outreach?
(From Flickr by Themactep)