Dave de Roure's talk on myExperiment, including thoughts on protocol and workflow sharing and online communities. Presented at the Open Science workshop at the Pacific Symposium on Biocomputing, January 5th, 2009
This document discusses data citation and using identifiers to cite datasets. It explains that identifiers provide exposure, transparency, citation tracking and verification for datasets. Identifiers associate an alphanumeric string with the location of an object, like a dataset, and can include optional metadata. Common identifier systems like DOIs provide a precise way to identify and cite datasets. Services like EZID make it easy to create and manage identifiers for datasets. The document encourages attendees to get started with data citation by creating test identifiers and discussing options with librarians.
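The identifier-to-citation pattern the document describes can be sketched in a few lines. This is an illustrative sketch, not any specific service's API: the DOI and metadata values below are hypothetical, and the citation format loosely follows the common DataCite style (Creator (Year): Title. Publisher. Identifier).

```python
def doi_url(doi: str) -> str:
    """Turn a bare DOI into a resolvable URL via the doi.org proxy."""
    return f"https://doi.org/{doi}"

def cite_dataset(creator: str, year: int, title: str,
                 publisher: str, doi: str) -> str:
    """Format a DataCite-style dataset citation string."""
    return f"{creator} ({year}): {title}. {publisher}. {doi_url(doi)}"

# Hypothetical example values, for illustration only:
citation = cite_dataset("Smith, J.", 2009, "Ocean Temperature Records",
                        "Example Data Archive", "10.1234/abcd")
# -> "Smith, J. (2009): Ocean Temperature Records. Example Data Archive.
#     https://doi.org/10.1234/abcd"
```

Because the identifier resolves to the dataset's current location, the citation string stays valid even if the dataset moves.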
Libraries and Linked Data: Looking to the Future (1) - ALATechSource
The presentation will cover an introduction to linked data, options for a new bibliographic framework in linked data terms, and some tools for working with linked data as well as how to discover other tools.
This document discusses the importance of publishing archaeological data openly on the web. It notes that raw data is not sufficient and requires cleaning, description and organization to be intelligible and reusable. Data publishing involves putting data in a form that is editorially vetted, machine-readable, linked to other resources, and archived for long-term access. This enhances data presentation, search, discovery and allows new types of research across multiple datasets. While challenging, data publishing provides benefits like reproducibility, new research opportunities, and professional advancement when done according to best practices.
The document describes the SFX framework for context-sensitive reference linking, which allows a user accessing a citation to be redirected to an appropriate full text or service based on their context. The framework uses an OpenURL standard to pass citation metadata from a link source to a parsing server, which then sends the metadata to a linking server to determine the most relevant services and create dynamic links to them based on the user's access and the available library collections and resources. The goal is to provide context-sensitive services to users based on their access and the cited item metadata rather than relying on pre-computed static links.
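The OpenURL mechanism the SFX framework relies on amounts to appending citation metadata as query parameters to the URL of the user's institutional link resolver. A minimal sketch, assuming a hypothetical resolver address (the metadata keys shown — genre, issn, volume, spage, date, aulast — are typical OpenURL 0.1 citation fields):

```python
from urllib.parse import urlencode

def openurl(resolver_base: str, **citation) -> str:
    """Build an OpenURL: citation metadata becomes the query string,
    and the resolver decides the best full-text target for this user."""
    return f"{resolver_base}?{urlencode(citation)}"

# Hypothetical institutional resolver; the citation values are illustrative.
link = openurl("https://resolver.example.edu/sfx",
               genre="article", issn="0028-0836",
               volume="435", spage="737", date="2005",
               aulast="Smith")
```

Because only the metadata travels in the link, two users clicking the same citation can be routed to different copies, each appropriate to their own library's subscriptions.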
Introduction to Research Objects - Collaborations Workshop 2015, Oxford - matthewgamble
Introduction to Research Objects - http://www.researchobject.org. Presented at the Software Sustainability Institute's Collaborations Workshop 2015, University of Oxford, March 2015
Tripal v3, the Collaborative Online Database Platform Supporting an Internati... - Bradford Condon
Talk given by Dr. Bradford Condon at the NSRP10 session of the Plant and Animal Genomes conference (PAG) 2019. Covers the basics of the biological database toolkit Tripal, and how Tripal enables FAIR data.
The document discusses using linked open data and linked data principles for libraries. It covers key concepts like URIs, RDF triples, ontologies and vocabularies. It then outlines options for libraries to both consume and publish linked data, such as enriching existing catalog data by linking to external sources, creating new information aggregates, and publishing library holdings and metadata as linked open data. Challenges include a lack of common identifiers, FRBRization of existing data, and the need for content curation and new technical systems to fully realize the benefits of linked open data for libraries.
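The RDF triples mentioned above can be illustrated without any special tooling: each statement is a (subject, predicate, object) tuple, with URIs naming the things and properties involved. A minimal sketch using the real Dublin Core terms namespace but a hypothetical author URI:

```python
DC = "http://purl.org/dc/terms/"  # Dublin Core terms namespace

# Each statement is a (subject, predicate, object) triple.
# The author URI below is hypothetical, for illustration only.
triples = [
    ("urn:isbn:0451526538", DC + "title",   "The Adventures of Tom Sawyer"),
    ("urn:isbn:0451526538", DC + "creator", "http://example.org/authors/twain"),
]

def match(triples, s=None, p=None, o=None):
    """Query by pattern matching; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# All creator statements, regardless of subject:
creators = match(triples, p=DC + "creator")
```

Linking catalog records to external sources then means asserting triples whose objects are URIs minted elsewhere, which is what lets independently published datasets be joined.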
Heather Piwowar - Measuring the adoption of Open Science - shwu
Heather Piwowar's talk which presented a review of studies that measure data sharing behaviors among scientists. Presented at the Open Science workshop at the Pacific Symposium on Biocomputing, January 5th, 2009
myExperiment - Defining the Social Virtual Research Environment - David De Roure
myExperiment is a social networking site for scientists to share workflows, data, and other research objects. It allows users to create profiles, join groups, and share content while maintaining control over privacy. The site aims to facilitate collaboration and reuse in scientific research. It was launched in 2007 and has over 1000 registered users sharing hundreds of workflows and other research objects. The open source software powering the site can also be downloaded and customized for specific communities or projects.
1) myExperiment is a social software platform that allows scientists to share, reuse, and repurpose workflows in order to reduce time spent on experiments and avoid duplicating work.
2) It has over 950 registered users who have shared over 290 workflows and 100 files across 80 groups. Content on the site sees thousands of downloads and views each month.
3) The platform provides functionality for discovering, executing, and collaborating on workflows. It aims to promote sharing and reuse of workflows across disciplines and experience levels.
Digital identity is fundamental to collaboration in bioinformatics research and development because it enables attribution, contributions, and publications to be recorded and quantified.
However, current models of identity are often obsolete and struggle to capture both small contributions ("micro-attribution") and large contributions ("mega-attribution") in science. Without adequate identity mechanisms, the incentive for collaboration is reduced and the utility of collaborative social tools hindered.
Using examples of metabolic pathway analysis with the Taverna workbench and myExperiment.org, this talk will illustrate problems and solutions to identifying scientists accurately and effectively in collaborative bioinformatics networks on the Web.
MyExperiment.org is a social networking site and marketplace aimed at scientists who use workflows and services for their research. It allows users to publish, discover, share, and reuse experimental artifacts like workflows. The site aims to make these tools easy to use with a familiar social media-style interface. Key goals include crossing boundaries between individual experiments, disciplines, and systems to facilitate collaboration and intellectual fusion. Challenges include addressing issues around user incentives, metadata, provenance, intellectual property, and quality control as experiments are shared in an open yet curated environment.
See the WEBCAST as well!! mms://wmedia.it.su.se/SUB/NordLib/3.wmv
Presentation at Nordlib 2.0 in Stockholm, November 21st, 2008
http://www.nordlib20.org/programme/
myExperiment and the Rise of Social Machines - David De Roure
Talk at hubbub 2012, Indianapolis, 25 September 2012. The talk introduces myExperiment and Wf4Ever, discusses the future of research communication including FORCE11, and introduces the SOCIAM project (Theory and Practice of Social Machines) which launches in October 2012.
Presentation at EMTACL10, http://www.ntnu.no/ub/emtacl/
Guus van den Brekel
Central medical library, UMCG
Virtual Research Networks: towards Research 2.0
In the next few years, the further development of social, educational and research networks – with their extensive collaborative possibilities – will dictate how users search for, manage and exchange information. The network, evolving with technology, is changing users' behaviour, and that will affect the future of information services. Many envision a possible leading role for libraries in collaboration and community-building services.
Users are not only heavily using new tools, but are also creating and shaping their own preferred tools.
Today's students are incorporating Web 2.0 skills in daily life, in their social and learning environments.
Tomorrow's research staff will expect to be able to use their preferred tools and resources within their work environment.
Today's and tomorrow's libraries should support students and staff in the learning and research process by integrating library services and resources into their environments.
Understanding Research 2.0 from a Socio-technical Perspective - Yuwei Lin
This document discusses Research 2.0 from a socio-technical perspective. It outlines key concepts of Web 2.0 like blogging, social networking, and wikis. It also discusses O'Reilly's design patterns for Web 2.0 and De Roure and Goble's six principles for software design. The document examines challenges in developing Research 2.0 environments like involving users and addressing ethical and legal issues. It argues a socio-technical approach is needed to develop Research 2.0 that considers both technological and social aspects.
Knowledge Infrastructure for Global Systems Science - David De Roure
Presentation at the First Open Global Systems Science Conference, Brussels, 8-10 November 2012
http://www.gsdp.eu/nc/news/news/date/2012/10/31/first-open-global-systems-science-conference/
The ELIXIR FAIR Knowledge Ecosystem for practical know-how: RDMkit and FAIRCo... - Carole Goble
Presented at the FAIR Data in Practice Symposium, 16 May 2023, at BioITWorld Boston. https://www.bio-itworldexpo.com/fair-data. ELIXIR, the European Research Infrastructure for life science data, is an inter-governmental organization coordinating, integrating and sustaining FAIR data and software resources across its 23 member nations. To help advise users, data stewards, project managers and service providers, ELIXIR has developed complementary community-driven, open knowledge resources for guiding FAIR Research Data Management (RDMkit) and providing FAIRification recipes (FAIRCookbook). 150+ people have contributed content so far, including representatives of the pharmaceutical industry.
The document discusses a project called CollOnBus that aims to extract knowledge from social tagging data on the web. It presents an approach called "metadata first, ontologies second" which involves mapping tags to Dublin Core metadata structures before converting them to ontologies. The document also describes tools developed as part of CollOnBus called folk2onto and Tag Distiller that are used to filter tags, map them to senses, and generate XML files representing the mapped tags and their relationships.
Research Objects: more than the sum of the parts - Carole Goble
Workshop on Managing Digital Research Objects in an Expanding Science Ecosystem, 15 Nov 2017, Bethesda, USA
https://www.rd-alliance.org/managing-digital-research-objects-expanding-science-ecosystem
Research output is more than just the rhetorical narrative. The experimental methods, computational codes, data, algorithms, workflows, Standard Operating Procedures, samples and so on are the objects of research that enable reuse and reproduction of scientific experiments, and they too need to be examined and exchanged as research knowledge.
A first step is to think of Digital Research Objects as a broadening out to embrace these artefacts or assets of research. The next is to recognise that investigations use multiple, interlinked, evolving artefacts. Multiple datasets and multiple models support a study; each model is associated with datasets for construction, validation and prediction; an analytic pipeline has multiple codes and may be made up of nested sub-pipelines, and so on. Research Objects (http://researchobject.org/) is a framework by which the many, nested and contributed components of research can be packaged together in a systematic way, and their context, provenance and relationships richly described.
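The packaging idea described above — many nested, interlinked components aggregated with rich context — can be sketched as a manifest. This is an illustrative structure only, loosely modeled on the Research Object pattern of aggregation plus annotation; the field names and file paths below are hypothetical, not the normative Research Object vocabulary:

```python
import json

# Hypothetical manifest: an aggregation of research artefacts plus
# annotations stating how they relate to one another.
research_object = {
    "id": "ro-example-001",
    "aggregates": [
        {"uri": "workflow.t2flow",  "type": "Workflow"},
        {"uri": "inputs/genes.csv", "type": "Dataset"},
        {"uri": "results/plot.png", "type": "Result"},
    ],
    "annotations": [
        # Provenance links expressed as subject/predicate/object.
        {"about": "results/plot.png",
         "predicate": "wasDerivedFrom",
         "target": "inputs/genes.csv"},
    ],
}

manifest = json.dumps(research_object, indent=2)
```

Keeping the relationships in the manifest, rather than implicit in a directory layout, is what makes the package machine-interpretable: a consumer can ask which dataset a result was derived from without running anything.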
Presentation for Harvard's ABCD Technology in Education group:
The Institute for Quantitative Social Science (IQSS) is a unique entity at Harvard - it combines research, software development, and specialized services to provide innovative solutions to research and scholarship problems at Harvard and beyond. I will talk about the software projects that IQSS is currently working on (Dataverse, Zelig, Consilience, and OpenScholar), including the research and development processes, the benefits provided to the Harvard community, and the impacts on research and scholarship.
The Liber 2009 presentation repeated for a Dutch audience in Dutch, but with the English slides (only the first one is in Dutch :-)
Samenwerking Hogeschool bibliotheken SHB, 5 november 2009
OSFair2017 Workshop | How FAIR friendly is the FAIRDOM Hub? Exposing metadata... - Open Science Fair
Carole Goble presents the FAIRDOM | OSFair2017 Workshop
Workshop title: How FAIR friendly is your data catalogue?
Workshop overview:
This workshop will build upon the work planned by the EOSCpilot data interoperability task and the BlueBridge workshop held on April 3 at the RDA meeting. We will investigate common mechanisms for interoperation of data catalogues that preserve established community standards, norms and resources, while simplifying the process of being/becoming FAIR. Can we have a simple interoperability architecture based on a common set of metadata types? What are the minimum metadata requirements to expose FAIR data to EOSC services and EOSC users?
DAY 3 - PARALLEL SESSION 6 & 7
Being FAIR: FAIR data and model management SSBSS 2017 Summer School - Carole Goble
Lecture 1:
Being FAIR: FAIR data and model management
In recent years we have seen a change in expectations for the management of all the outcomes of research – that is, the "assets" of data, models, codes, SOPs and workflows. The "FAIR" (Findable, Accessible, Interoperable, Reusable) Guiding Principles for scientific data management and stewardship [1] have proved to be an effective rallying cry. Funding agencies expect data (and increasingly software) management, retention and access plans. Journals are raising their expectations of the availability of data and codes for pre- and post-publication. The multi-component, multi-disciplinary nature of Systems and Synthetic Biology demands the interlinking and exchange of assets and the systematic recording of metadata for their interpretation.
Our FAIRDOM project (http://www.fair-dom.org) supports Systems Biology research projects with their research data, methods and model management, with an emphasis on standards smuggled in by stealth and sensitivity to asset sharing and credit anxiety. The FAIRDOM Platform has been installed by over 30 labs or projects. Our public, centrally hosted Asset Commons, the FAIRDOMHub.org, supports the outcomes of 50+ projects.
Now established as a grassroots association, FAIRDOM has over 8 years of experience of practical asset sharing and data infrastructure at the researcher coal-face ranging across European programmes (SysMO and ERASysAPP ERANets), national initiatives (Germany's de.NBI and Systems Medicine of the Liver; Norway's Digital Life) and European Research Infrastructures (ISBE) as well as in PI's labs and Centres such as the SynBioChem Centre at Manchester.
In this talk I will explore how FAIRDOM has been designed to support Systems Biology projects and show examples of its configuration and use. I will also explore the technical and social challenges we face.
I will also refer to European efforts to support public archives for the life sciences. ELIXIR (http://www.elixir-europe.org/) is the European Research Infrastructure of 21 national nodes and a hub, funded by national agreements to coordinate and sustain key data repositories and archives for the Life Science community, improve access to them and related tools, support training, and create a platform for dataset interoperability. As the Head of the ELIXIR-UK Node and co-lead of the ELIXIR Interoperability Platform, I will show how this work relates to your projects.
[1] Wilkinson et al, The FAIR Guiding Principles for scientific data management and stewardship Scientific Data 3, doi:10.1038/sdata.2016.18
2013-07-19 myExperiment research objects, beyond workflows and packs (PPTX) - Stian Soiland-Reyes
Presentation at BOSC 2013 / ISMB 2013. (PowerPoint 2013 source)
PDF: https://www.slideshare.net/soilandreyes/2013-0719bosc-2013myexperimentresearchobjectsslides
See also poster at http://www.slideshare.net/soilandreyes/2013-0718bosc-2013myexperimentresearchobjectsposter-24242509 or
submitted abstract: https://docs.google.com/document/d/1jaAuPV-EnbsyI14L56HKHBQP7eDVfeXGLlK-LwohnWw/edit?usp=sharing
We have evolved Research Objects as a mechanism to preserve digital resources related to research, by providing mechanisms, formats and architecture for describing aggregated resources (hypothesis, workflow, datasets, scripts, services), their relations (is input for, explains, used by), provenance (graph was derived from dataset A, B and C) and attribution (who contributed what, and when?).
The website myExperiment is already popular for collaborating on, publishing and sharing scientific workflows. However, we have found that, for understanding and preserving a workflow over time, its definition alone is not enough, especially when faced with workflow decay and services and tools that change over time. We have therefore adapted the research object model as a foundation for myExperiment packs, allowing uploading of workflow runs, inputs, outputs and other files relevant to the workflow, relating them with annotations, and integrating the Wf4Ever architecture for performing decay analysis and tracking a research object's evolution as it and its constituent resources change over time.
Research Objects for improved sharing and reproducibility - Oscar Corcho
Presentation about the usage of Research Objects to improve scientific experiment sharing and reproducibility, given at the Dagstuhl Perspective Workshop on the intersection between Computer Sciences and Psychology (July 2015)
Similar to Dave de Roure - The myExperiment approach towards Open Science (20)
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI?
Test automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Securing your Kubernetes cluster: a step-by-step guide to success! – KatiaHIMEUR1
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been easier to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that security in Kubernetes is often left for later, or even neglected, exposing companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
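As a flavour of the kind of step such a guide might automate, here is a toy checker — hypothetical, not from the talk — that scans a pod spec, parsed into a Python dict the way `kubectl get pod -o json` would yield it, for a few common insecure settings:

```python
# Toy checker for a few common insecure pod settings. The checks are
# illustrative only, not an exhaustive Kubernetes security audit.

def audit_pod(pod: dict) -> list:
    findings = []
    spec = pod.get("spec", {})
    if spec.get("hostNetwork"):
        findings.append("pod shares the host network namespace")
    for c in spec.get("containers", []):
        sc = c.get("securityContext") or {}
        if sc.get("privileged"):
            findings.append(f"container {c['name']} runs privileged")
        if not sc.get("runAsNonRoot"):
            findings.append(f"container {c['name']} may run as root")
    return findings

# Example: a pod with two classic misconfigurations.
pod = {
    "spec": {
        "hostNetwork": True,
        "containers": [
            {"name": "app", "securityContext": {"privileged": True}},
        ],
    }
}
findings = audit_pod(pod)
```

In practice such rules are enforced cluster-wide by admission control (e.g. Pod Security Standards) rather than by ad-hoc scripts, but the dict-walking logic is the same idea.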
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... – DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
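To give a sense of what a power-flow computation does, here is a toy DC power flow on a three-bus network in plain Python. This deliberately avoids the pypowsybl API; the network topology, susceptances and injections are invented for illustration:

```python
# Toy DC power flow on a 3-bus network (NOT the pypowsybl API).
# Bus 1 is the slack bus with angle fixed at 0; buses 2 and 3 have
# net injections P (in per-unit). Every line has susceptance b = 10 p.u.
# DC approximation: line flow f_ij = b * (theta_i - theta_j).

b = 10.0
# Reduced susceptance matrix for buses 2 and 3 (slack bus eliminated);
# each bus connects to the two others, hence 2*b on the diagonal.
B = [[2 * b, -b],
     [-b, 2 * b]]
P = [0.5, -1.0]  # bus 2 injects 0.5 p.u., bus 3 withdraws 1.0 p.u.

# Solve the linear system B * theta = P for the two unknown angles
# (2x2, so Cramer's rule is enough).
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
theta2 = (P[0] * B[1][1] - B[0][1] * P[1]) / det
theta3 = (B[0][0] * P[1] - P[0] * B[1][0]) / det

# Resulting line flows, with theta1 = 0 at the slack bus.
flows = {
    "1-2": b * (0.0 - theta2),
    "1-3": b * (0.0 - theta3),
    "2-3": b * (theta2 - theta3),
}
```

The slack bus supplies the 0.5 p.u. imbalance, and bus 3's 1.0 p.u. demand arrives half directly from bus 1 and half via bus 2 — the kind of result a tool like PowSyBl computes (with full AC models) on networks of thousands of buses.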
DevOps and Testing slides at DASA Connect – Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We also ran a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Generative AI Deep Dive: Advancing from Proof of Concept to Production – Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! – SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
UiPath Test Automation using UiPath Test Suite series, part 5 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Full-RAG: A modern architecture for hyper-personalization – Zilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
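The retrieve-then-rerank pattern at the heart of such architectures can be sketched in a few lines. The scoring functions below are invented stand-ins for the embedding search and learned rerankers a real system would use:

```python
# Toy illustration of retrieval followed by reranking. A production
# RAG system would use vector embeddings for retrieval and a trained
# model for reranking; word overlap and a "freshness" score stand in
# for both here.

def retrieve(query: str, docs: list, k: int = 3) -> list:
    # First pass: cheap relevance scoring by word overlap with the query.
    q = set(query.lower().split())
    scored = [(len(q & set(d["text"].lower().split())), d) for d in docs]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [d for score, d in scored[:k] if score > 0]

def rerank(candidates: list) -> list:
    # Second pass: reorder the shortlist using real-time context
    # (here, a precomputed recency signal stands in for a reranker).
    return sorted(candidates, key=lambda d: d["freshness"], reverse=True)

docs = [
    {"text": "user clicked hiking boots", "freshness": 0.9},
    {"text": "user browsed running shoes", "freshness": 0.2},
    {"text": "weekly newsletter opened",   "freshness": 0.5},
]
hits = rerank(retrieve("hiking boots for user", docs, k=2))
```

The two-stage split is the key design point: the first stage keeps the candidate set small and cheap to produce, so the second, more expensive stage can afford to use richer real-time signals on just the shortlist.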
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 – Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... – James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) involves many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, combined with traditionally slow and manual security checks, has created gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its delivery process to avoid vulnerabilities and security breaches, and this needs to be achieved with existing toolchains and without extensive rework of the delivery process. This talk presents strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues into the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
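As a rough sketch of the DBOM idea, one can record each deployed artifact with a content digest and gate deployments against a block list. The structure and rule format below are invented for illustration — they are not the OpsMx implementation:

```python
import hashlib

# Toy sketch: capture a deployment bill of materials (DBOM) and apply a
# simple policy gate in the spirit of a "deployment firewall".

def record_artifact(name: str, version: str, content: bytes) -> dict:
    # Hash the artifact content so the DBOM can later verify that what
    # runs in production is exactly what was recorded at deploy time.
    return {
        "name": name,
        "version": version,
        "sha256": hashlib.sha256(content).hexdigest(),
    }

def gate(dbom: list, blocked_versions: set) -> list:
    """Return the artifacts the policy would block from deployment."""
    return [a for a in dbom if (a["name"], a["version"]) in blocked_versions]

# Record what a deployment ships, then check it against known-bad versions.
dbom = [
    record_artifact("payments-svc", "2.4.1", b"payments build"),
    record_artifact("log4j-core", "2.14.1", b"logging build"),
]
blocked = gate(dbom, {("log4j-core", "2.14.1")})
```

Capturing the digest at deploy time is what makes the record useful after an incident: the DBOM answers "which environments are running the vulnerable artifact?" without re-scanning every cluster.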
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms, and is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs – Alex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
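Reef's SAFA goes far beyond a toy, but the classical starting point — running a finite automaton over the document character by character, so that acceptance depends only on the sequence of transitions taken (which is what a proof can then attest to) — can be illustrated with a plain DFA. The pattern a(b|c)*d and the table encoding below are purely illustrative:

```python
# Automaton-based matching: the document drives a DFA one character at a
# time, and acceptance depends only on the final state reached. Reef's
# SAFA adds alternation and the ability to skip irrelevant parts of the
# document on top of this basic idea; this toy DFA matches a(b|c)*d.

DFA = {
    ("start", "a"): "middle",
    ("middle", "b"): "middle",
    ("middle", "c"): "middle",
    ("middle", "d"): "accept",
}

def matches(doc: str) -> bool:
    state = "start"
    for ch in doc:
        state = DFA.get((state, ch))  # None = no transition: reject
        if state is None:
            return False
    return state == "accept"
```

Note that a plain DFA must touch every character; the point of Reef's skipping automata is precisely to avoid paying proof cost for the irrelevant stretches of a long document.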
2. The social process of Science 2.0 – diagram linking scientists, graduate students and undergraduate students through experimentation (data, metadata, provenance, workflows, ontologies) and digital libraries, spanning local Web repositories, virtual learning environments, technical reports, reprints, preprints & metadata, peer-reviewed journal & conference papers, and certified experimental results & analyses.
3. Sharing pieces of process. Carole Goble and David De Roure, "Curating Scientific Web Services and Workflows", EDUCAUSE Review, vol. 43, no. 5 (September/October 2008). http://usefulchem.wikispaces.com/page/code/EXPLAN001 http://www.microsoft.com/mscorp/tc/trident.mspx http://www.mygrid.org.uk/tools/taverna/
10. “Do I want my Science associated with a medium like this?”
25. Content Capture and Curation – diagram of workflows and services being seeded, refined and validated along four routes: by experts, socially by the user community, by the service providers themselves, and automatically; labelled "Reuse and Symbiosis".