DMVitals is a data management assessment tool that provides recommendations to help researchers improve their data management practices. Drawing on data-interview responses, it assesses a researcher's current practices, assigns a score based on a maturity model, and generates a report with suggested tasks to increase sustainability. The tool aims to standardize assessments while addressing each project's unique needs. It is being developed and tested through interviews at the University of Virginia, with the goal of releasing versions for broader use and expanding through collaborations.
eMadrid/SHEILA Seminar on "Learning Analytics": How do we get... - eMadrid network
eMadrid/SHEILA seminar on "Learning Analytics": How do we get there? Steps toward the systemic adoption of learning analytics. Dragan Gasevic. University of Edinburgh. 21/10/2016.
Krishnaprasad Thirunarayan, Trust Management: Multimodal Data Perspective, Invited Tutorial, The 2015 International Conference on Collaboration Technologies and Systems (CTS 2015), June 2015
UVa Library Scientific Data Consulting Group (SciDaC): New Partnerships and... - Andrew Sallans
A. Sallans. "UVa Library Scientific Data Consulting Group (SciDaC): New Partnerships and Services to Support Scientific Data in the Library." Presented at the 2011 International Association for Social Science Information Services and Technology.
Open Science Framework (OSF): Presentation and Training - Andrew Sallans
Presentation Date: December 12, 2013.
Location: UC Berkeley, CA
Presenters: Johanna Cohoon & Andrew Sallans (Center for Open Science)
Center for Open Science website: http://centerforopenscience.org
Berkeley Initiative for Transparency in the Social Sciences website: http://bitss.org/annual-meeting/2013-2/
Improving Integrity, Transparency, and Reproducibility Through Connection of ... - Andrew Sallans
The Center for Open Science (COS) was founded as a non-profit technology start-up in 2013 with the goal of improving transparency and reproducibility by connecting the scholarly workflow. COS achieves this goal through the development of a free, open-source web application called the Open Science Framework (OSF), providing features like file sharing and citing, persistent URLs, provenance tracking, and automated versioning. Initial workflow API connections focused on storage services and included Figshare, GitHub, Amazon S3, Dropbox, and Dataverse. The team is now working to connect other parts of the workflow with services like DMPTool, Databib/re3data, and Databrary. This session will introduce the core architecture and the problems that it solves, and illustrate how connecting services can benefit everyone involved in supporting the research ecosystem. COS is funded through the generosity of grants from the Laura and John Arnold Foundation, the John Templeton Foundation, the Alfred P. Sloan Foundation, the Association of Research Libraries, and others.
Presented at CNI Fall 2014, Washington, DC.
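As a concrete illustration of the kind of programmatic connection the OSF exposes, the sketch below lists public OSF projects ("nodes") through the public v2 REST API at https://api.osf.io/v2/. This is a minimal sketch assuming only that endpoint and the standard JSON:API response shape; the pagination parameter and printed fields are illustrative, not an official client.

```python
import requests

OSF_API = "https://api.osf.io/v2"

def list_public_nodes(page_size=10):
    """Fetch one page of public OSF projects ("nodes") from the v2 REST API."""
    resp = requests.get(f"{OSF_API}/nodes/", params={"page[size]": page_size}, timeout=30)
    resp.raise_for_status()
    for node in resp.json()["data"]:          # JSON:API: records live under "data"
        attrs = node.get("attributes", {})    # title, description, dates, etc.
        print(node["id"], "-", attrs.get("title", "(untitled)"))

if __name__ == "__main__":
    list_public_nodes()
```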
Coming to an Understanding: a Cross-institutional Examination of Assessments ... - Stephanie Wright
Data curation has emerged as a strategic growth area for academic libraries. Many libraries have conducted needs assessments as a precursor towards developing services; however there have been few comparisons of the findings across institutions. This panel brings together four librarians from different institutions to discuss both common and distinct findings from their respective needs assessments. The panelists will speculate on the application of these findings at their specific libraries and in academic libraries generally.
Confronting Reality with Big Data & Learning Analytics
We are experiencing an explosion in the quantity of data available online from archives and live streams. Learning Analytics is concerned with how educational research, and learning platform design, can make more effective use of such data (Long & Siemens, 2011). Improving outcomes through the analysis of data is of interest to researchers, administrators, systems architects, social media developers, educators and learners. Analytics are being held up by some as a way to confront, and tackle, the tough new realities of less money, less attention, and higher accountability for quality of learning.
Researchers and vendors are building reporting capabilities into tools that provide unprecedented levels of data on learners. This symposium will show what is possible, and what's coming soon. What objections could possibly be raised to such progress?
However, information infrastructure embodies and shapes worldviews: classification schemes are not only systematic ways to capture and preserve, but also to forget, by virtue of what remains invisible (Bowker & Star, 1999). Learning analytics and recommendation engines are designed with a particular conception of ‘success’, driving the patterns deemed to be evidence of progress, the interventions that are deemed appropriate, the data captured and the rules that fire in software.
This symposium will air some of the critical arguments around the limits of decontextualised data and automated analytics, which often appear reductionist in nature, failing to illuminate higher order learning. There are complex ethical issues around data fusion, and it is not clear to what extent learners are empowered, in contrast to being merely the objects of tracking technology. Educators may also find themselves at the receiving end of a new battery of institutional ‘performance indicators’ that do not reflect what they consider to be authentic learning and teaching.
This Symposium will provide the opportunity to hear a series of brief presentations introducing contrasting perspectives, before the debate is opened to all. Speakers from a cross-section of The Open University will describe how we are connecting datasets, analysing student data and prototyping next generation analytics. Complementing this, JISC will present a national capability perspective, with an update on the JISC CETIS ‘landscape analysis’ of the field, which will clarify potential benefits, issues to consider, and help institutions to assess their current capability and possible next steps.
Participants will catch up with developments in this fast moving field, through exposure to the possibilities of analytics, as well as issues to be alert to.
The Modern Columbian Exchange: Biovision 2012 Presentation - Merck
The Columbian Exchange is a term used to describe what happened when the arrival of European settlers exposed the Native peoples of North America to ideas, animals, plants, and diseases they had not previously encountered. Today, the Modern Columbian Exchange is occurring at a global scale, driven by unprecedented global travel and the Internet. An outcome of this Modern Columbian Exchange is disease outbreaks, which have affected, and will continue to affect, dozens of countries in a very short time, impacting agriculture and tourism and ultimately resulting in social tensions and the loss of life. The global response requires tight and timely coordination across countries. This necessitates the processing of large volumes of data – "BIG DATA" – which implies variety, variability and velocity. In this presentation, we explore the challenges of BIG DATA for preventative global health care. We answer the questions: a) how can human intelligence be more effectively leveraged to develop new insights, and b) how does this impact the design of data and information repositories? We conclude "The Time is NOW" for a new real-time analytics paradigm to transform the discovery and learning process.
A Big Picture in Research Data Management - Carole Goble
A personal view of the big picture in Research Data Management, given at the GFBio - de.NBI Summer School 2018 "Riding the Data Life Cycle!", Braunschweig Integrated Centre of Systems Biology (BRICS), 3-7 September 2018.
Bridging the Gap from Knowledge to Action: Putting Analytics in the Hands of ... - Steven Lonn
Short Paper Presentation at Learning Analytics and Knowledge Conference 2012, May 1. #LAK12
This paper presents current findings from an ongoing design-based research project aimed at developing an early warning system (EWS) for academic mentors in an undergraduate engineering mentoring program. This paper details our progress in mining Learning Management System data and translating these data into an EWS for academic mentors. We focus on the role of mentors and advisors, and elaborate on their importance in learning analytics-based interventions developed for higher education.
A short presentation about the challenges associated with balancing IT innovation and operational excellence, and how Katz IS research and education focus on these issues.
I shall provide a summary of JISC work in the area of ‘Big Data’. My primary focus will be on how to manage the huge amount of research data produced in UK Universities. I shall cover the history of JISC interventions to improve research data management and look at next steps. I shall touch on some other areas of work like ‘Digging into Data’ and web archiving which also deal with ‘big data’.
Data Science: An Emerging Field for Future Jobs - Jian Qin
The data deluge has become a reality in today's scientific research. What does it mean for the future science workforce? How can you prepare yourself to embrace the data challenges and opportunities? This presentation will provide you with an overview of data science and what it means to you as future researchers and career scientists.
Opening/Framing Comments: John Behrens, Vice President, Center for Digital Data, Analytics, & Adaptive Learning, Pearson
Discussion of how the field of educational measurement is changing, how long-held assumptions may no longer be taken for granted, and how new terminology and language are coming into use.
Panel 1: Beyond the Construct: New Forms of Measurement
This panel presents new views of what assessment can be and new species of big data that push our understanding for what can be used in evidentiary arguments.
Marcia Linn, Lydia Liu from UC Berkeley and ETS discuss continuous assessment of science and new kinds of constructs that relate to collaboration and student reasoning.
John Byrnes from SRI International discusses text and other semi-structured data sources and different methods of analysis.
Kristin Dicerbo from Pearson discusses hidden assessments and the different student interactions and events that can be used in inferential processes.
Panel 2: The Test is Just the Beginning: Assessments Meet Systems Context
This panel looks at how assessments are not the end game, but often the first step in larger big-data practices at districts/state/national levels.
Gerald Tindal from the University of Oregon discusses State data systems and special education, including curriculum-based measurement across geographic settings.
Jack Buckley, Commissioner of the National Center for Education Statistics, discusses national datasets where tests and other data connect.
Lindsay Page and Will Marinell from the Strategic Data Project at Harvard discuss state and district datasets used for evaluating teachers, colleges of education, and student progress.
Panel 3: Connecting the Dots: Research Agendas to Integrate Different Worlds
This panel will look at how research organizations are viewing the connections between the perspectives presented in Panels 1 and 2: what is known, and what is still to be discovered, in order to achieve the promise of big connected data in education.
Andrea Conklin Bueschel, Program Director at the Spencer Foundation
Ed Dieterle, Senior Program Officer at the Bill and Melinda Gates Foundation
Edith Gummer, Program Manager at the National Science Foundation
This presentation was provided by Tim McGeary of Duke University during the NISO virtual conference, Open Data Projects, held on Wednesday, June 13, 2018.
Co-developing bespoke, enterprise-scale analytics systems with teaching staff - Danny Liu
Presentation at the NSW Learning Analytics Working Group meeting, 3 February 2016, at the University of Technology, Sydney. Covering projects from Macquarie University and the University of Sydney.
Learning analytics are more than measurement - Dragan Gasevic
Slides used for the keynote "Learning analytics are more than measurement" at the Policies for Educational Data Mining and Learning Analytics Briefing, organized by http://www.laceproject.eu/.
Predictive Analytics - How to get stuff out of your Crystal Ball - DATAVERSITY
Everyone wants to leverage data. The optimal implementation of analytics is an organization-wide set of capabilities. These are called advantageous organizational analytic capabilities, in that a clear ROI is demonstrable from these efforts. It turns out that there are a number of prerequisites to advantageous organizational analytics. These include:
Adopting a crawl, walk, run strategy
Understanding current and potential organizational maturity and corresponding capabilities
Achieving an appropriate technology/human capability balance
Implementing useful IT systems development practices
Installing necessary non-IT leadership
This webinar will explore these and other topics using examples drawn from DOD, healthcare researchers, and donation center operations.
Building an Intelligent Biobank to Power Research Decision-Making - Denodo
This presentation is from the workshop "Building an Intelligent Biobank to Power Research Decision-Making" at the ISBER 2015 Annual Meeting, by Lori A. Ball (Chief Operating Officer, President of Integrated Client Solutions at BioStorage Technologies, Inc.), Brian Brunner (Senior Manager, Clinical Practice at LabAnswer) and Suresh Chandrasekaran (Senior Vice President at Denodo).
The workshop covers three different topic areas:
- Research sample intelligence: the growing need for Global Data Integration (Biobank Sample and Data Stakeholders).
- Building a research data integration plan and cloud sourcing strategy (data integration).
- How data virtualization works and the value it delivers (a data virtualization introduction, solution portfolio and current customers in Life Sciences industry).
The biomedical R&D environment is increasingly dependent on data meta-analysis and bioinformatics to support research advancements. The integration of biorepository sample inventory data with biomarker and clinical research information has become a priority to R&D organizations. Therefore, a flexible IT system for managing sample collections, integrating sample data with clinical data and providing a data virtualization platform will enable the advancement of research studies. This workshop provides an overview of how sample data integration, virtualization and analytics can lead to more streamlined and unified sample intelligence to support global biobanking for future research.
Similar to DMVitals: A Data Management Assessment Recommendations Tool - IASSIST 2012
A. Sallans. "Practical Applications of e-Science." Presented at the 2011 eScience Bootcamp at the University of Virginia's Claude Moore Health Sciences Library. 4 March 2011
Understanding the Big Picture of e-Science - Andrew Sallans
A. Sallans. "Understanding the Big Picture of e-Science." Presented at the 2011 eScience Bootcamp at the University of Virginia's Claude Moore Health Sciences Library. 4 March 2011
NSF Data Management Plan - Implications for Librarians - Andrew Sallans
A. Sallans. "NSF Data Management Plan - Implications for Librarians." Presented at the Science and Technology Section (STS) Hot Topics Discussion Group Meeting of the American Library Association's 2011 Midwinter Meeting. 8 January 2011
DMVitals: A Data Management Assessment Recommendations Tool - IASSIST 2012
1. DMVitals: A Data Management Assessment Recommendations Tool
Andrew Sallans, Head of Strategic Data Initiatives
Sherry Lake, Senior Scientific Data Consultant
IASSIST 2012 - June 6, 2012
2. Interviews/Assessment Preface
• Over past two years we conducted about 25 data interviews
– Focus on learning about research data practices at UVa and identifying service needs/opportunities
– Intention of leading into consulting opportunities
• Ended up with conundrum of how to manage "unique" conditions of each research environment against common characteristics of data management within domains and institutional framework
3. Consulting Workflow
Conduct Data Interview → Produce "Data Interview Report" → Code Data Interview answers in the "DM Vitals" tool → Extract action statements from the "DM Vitals Recommendations Report" → Send initial report to researcher for approval/review → Distribute final report and begin DM Implementation with Researcher
4. Recommendation Requirements
• Must be a fast process
• Must create actionable and repeatable recommendations
• Must reduce subjectivity
• Must weigh all assessment factors
• Must address present DM condition while showing path for improvement
5. Components of the DMVitals
• Data management best practice statements
– UVa sources (ISPRO, SciDaC Guidelines)
– ANDS long-term sustainability scoring model
• 8 data management categories
• Data interview questions and responses
• Data management maturity index of Crowston & Qin Capability Maturity Model (CMM) for Scientific Data Management (SDM)
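To make the assembly of these components concrete, here is a minimal illustrative sketch (hypothetical field and variable names; the actual DMVitals is implemented as a spreadsheet, as the editor's notes later describe) of a best-practice statement tied to a category, a sustainability level, and a coded interview response:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BestPractice:
    statement: str                   # e.g. "Files are backed up to a second location"
    category: str                    # one of the 8 DM categories, e.g. "Security/Storage/Backups"
    level: str                       # sustainability level this practice counts toward
                                     # (least sustainable, fair, satisfactory, good, more sustainable)
    response: Optional[bool] = None  # interview answer coded YES (True), NO (False), or NULL (None)

# Hypothetical records coded from a data interview.
practices = [
    BestPractice("Uses open, documented file formats", "File Formats/Data Types", "good", True),
    BestPractice("Files are backed up to a second location", "Security/Storage/Backups", "satisfactory", False),
]
```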
6. Crowston & Qin Capability Maturity Model for SDM
Crowston, K. & Qin, J. (2010). A capability maturity model for scientific data management. In: Proceedings of the American Society for Information Science and Technology, October 24-26, 2010, Pittsburgh, PA. (Poster)
11. DMVitals Workflow Recap
Associate DM practices with research data interview → Rank researchers' current DM practices according to level of "sustainability" → Create "action" statements for researchers that correlate with each level → Create report with a grade for sustainability and list of tasks divided into implementation phases
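The scoring step in this recap (and in the editor's notes at the end of the transcript) is a ratio of best practices in use to total possible, bucketed into five 20%-wide sustainability groupings. Below is a self-contained, illustrative sketch of that calculation with made-up responses; it is not the tool's actual spreadsheet formulas.

```python
# Interview answers coded per best practice, grouped by DM category.
# True = YES (practice in use), False = NO, None = NULL (no definite answer).
responses = {
    "Security/Storage/Backups": [True, True, False, None],
    "Data Documentation/Metadata": [True, False, False, False],
}

LEVELS = ["least sustainable", "fair", "satisfactory", "good", "more sustainable"]

def sustainability_index(answers):
    """Ratio of practices currently met (YES) to all practices with a definite answer."""
    answered = [a for a in answers if a is not None]
    return sum(answered) / len(answered) if answered else 0.0

def sustainability_level(index):
    """Bucket the ratio into the five groupings (0-20%, 21-40%, 41-60%, 61-80%, 81-100%)."""
    for cutoff, label in zip((0.20, 0.40, 0.60, 0.80, 1.00), LEVELS):
        if index <= cutoff:
            return label
    return LEVELS[-1]

per_category = {cat: sustainability_index(ans) for cat, ans in responses.items()}
average = sum(per_category.values()) / len(per_category)
print(f"Average sustainability index: {average:.0%} -> {sustainability_level(average)}")
```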
12. DMVitals for Aggregate Learning
[Bar chart: "Research Data Management Sustainability (Example - not real data)"; y-axis: % of Sustainability Guidelines Met (0-70%); x-axis (departments): STS, ASTRO, BIO, BME, CHEME, CHEM, CIVE, CS, PATH, MOLPHYS, CELLBIO]
13. Major Challenges
1. Assessment tool design, specifically dealing with appropriate weighting, false positives, double negatives
2. Social/ethical implications of giving such focused feedback and criticism to researchers
3. Broader issue of motivations and incentives
14. DMVitals Next Steps
• Release plan for others to use
– Starting to develop versions of package
– Will begin to make stable releases available on our website, along with development roadmap
• Collaboration opportunities for expansion
– Interested in collaborations to drive further development and integration into services
– Seeking collaborators now!
15. References
• Australian National Data Service. (2011). ANDS and Data Storage. Available: http://ands.org.au/guides/storage.html. Last accessed May 30, 2012.
• Crowston, K., & Qin, J. (2010). A capability maturity model for scientific data management. American Society for Information Science and Technology Annual Meeting, Pittsburgh, PA. Working paper available: http://crowston.syr.edu/content/capability-maturity-model-scientific-data-management-0. Last accessed May 23, 2012.
• Digital Curation Centre. (2011). CARDIO. Available: http://cardio.dcc.ac.uk/. Last accessed May 30, 2012.
• Information Technology Security. (2010). University of Virginia Information Technology Security Risk Management (ITS-RM) Program. Available: http://its.virginia.edu/security/riskmanagement/docs/ITS-RM_3-0.pdf. Last accessed May 23, 2012.
• University of Virginia Library. (2011). Scientific Data Consulting Data Management Home. Available: http://www.lib.virginia.edu/brown/data/. Last accessed May 30, 2012.
16. Acknowledgements and Contact Information
• Acknowledgements
– Susan Borda
• UVa SciDaC Intern, Summer 2011
• Recent graduate of Syracuse GSLIS program
• Starting as Digital Curation Librarian at University of California – Merced this summer
• Contact Information
– Andrew Sallans, als9q@virginia.edu
– Sherry Lake, slake@virginia.edu
Editor's Notes
Andrew (1 of 6)
Andrew (2 of 6)
Andrew (3 of 6)
Andrew (4 of 6)
Andrew (5 of 6)
Andrew (6 of 6)
Sherry (1 of 5) (start at around 7 min.): The columns represent the questions from our Data Interview process, using data management best practice statements from UVa sources (ISPRO, SciDaC Guidelines) - the Information Security, Policy and Records Office (ISPRO) Information Technology Security Risk Management program - and the ANDS long-term sustainability storage model. We then associated DM practices with research data interview questions and responses; the data management best practices are listed under them. Using the answers from the interview, each corresponding best practice is then coded "YES", "NO", or "NULL".
Sherry (2 of 5): Each best practice statement is then mapped to one of the 8 data management categories, shown as worksheet tabs at the bottom. Note that in this current version we are only using 5 of the categories (File Formats/Data Types, Organization of Files, Security/Storage/Backups, Copyright/Privacy/Confidentiality, Data Documentation/Metadata); Funding Guidelines, Archiving & Sharing, and Citing Data are not yet included. Each practice is given a "weight" across the 5 sustainability levels (least sustainable, fair, satisfactory, good, more sustainable). The responses from the Interview sheet are linked to create a ratio of the total number of "YES" answers (current best practices) to the total possible score, which (subjectively) ranks practices into sustainability levels. This is done for each category, and the ratio for each is then recorded on the Report sheet.
Sherry (3 of 5): The top chart shows each DM category and the resultant sustainability index (displayed as a %, per the ratio). We rank researchers' current DM practices according to level of "sustainability" and get an Average Sustainability Index (less sustainable, fair, satisfactory, good, more sustainable) based on the ratio of best practices in use vs. total possible best practices. With 5 levels of sustainability, we divided the ratio values into 5 groupings: 0-20%, 21-40%, 41-60%, 61-80%, 81-100%. The Average Sustainability Index is then mapped onto the Crowston/Qin Capability Maturity Model for Scientific Data Management, i.e., the data management maturity index of the Crowston & Qin Capability Maturity Model (CMM) for Scientific Data Management (SDM). Along with the score, a target to improve toward is generated, and the report includes actionable recommendations: practices not being done are marked with an "X" and include action statements on how to improve. The DM consultant then sorts the action statements by phases; this is customizable, to help researchers get things done, such as moving some actions to later phases.
Sherry (4 of 5): General information, the Sustainability Chart, and then the action statements grouped into phases: Phase 1 (short-term), Phase 2 (long-term), Phase 3 (future). Once the report is created in DMVitals (the spreadsheet), we create a report with their grade of DM sustainability and a list of tasks divided into implementation phases. We then sit down with the researcher, go over the recommendations, and make adjustments on which actions are done in each phase. It is the start of our Data Management Implementation. Action statements in each phase are tweaked as needed; the default gives a relative sense of sustainability and what action to take, but it can be customized.
Sherry (5 of 5) (finish by 12 min., 15 at most): Just to recap how we use DMVitals to create DM recommendations from our assessments. In this step you could add your institution's policies and other best practices local to you. Even the "ranking" of sustainability can be adjusted per discipline or institution (where we put the best practices in columns, from least sustainable to more sustainable). Action statements will definitely require local customization; they are the actions that your researchers need to take at your institution, and they can include naming specific groups to go to for help. As I said, at UVa we meet with the researcher and customize the recommendations. Finally, the report that you create is slightly different for each researcher based on their time and needs.
Andrew (1 of 3): Helps with identifying gaps in domain knowledge and/or skill areas (in which topical areas are people weakest, is it limited to certain domains, etc.). Very useful for targeted training and promotion of services and software/tools.