Session 1
How to implement Open Science
Antónia Correia & Pedro Principe, University of Minho
Open Access Publishing
How to implement Open Access and Open Science
What is Open Access and how to provide Open Access
Open Access in Horizon 2020: how to comply with H2020 Open Science requirements
Managing and Sharing Research Data
Open, closed and shared data
Data Management Plans
Open Data in Horizon 2020: how to comply with H2020 Open Science requirements
Open Science and European Open Access Policies in H2020 – Reme Melero
GEOTEC UJI and FOSTER project organized a training seminar in the context of GEO-C ESR entitled “Open Science and European Open Access policies in H2020”.
The seminar took place in Castellon (Spain), Feb 12th from 9.30 to 14.00.
OpenAIRE services and tools for researchers/authors and projects (FOSTER work... – Pedro Príncipe
GEOTEC UJI and FOSTER project organized a training seminar in the context of GEO-C ESR titled “Open Science and European Open Access policies in H2020”.
The seminar took place in Castellon (Spain), Feb 12th from 9.30 to 14.00.
This presentation shows the end results of the SURE2 project, which delivered an API and dashboard functionality that let researchers and repository managers see the usage of their publications.
Visualizing the information of a Linked Open Data enabled Research Informatio... – andimou
The Open Access movement and research management can take a new turn if research information is published as Linked Open Data. The management of research information within and across institutions can be facilitated, the quality of the available data can be improved, and its availability to the public is assured. However, non-expert users lack an understanding of how to take advantage of the interlinked information offered by Linked Open Data. To address this limitation, this paper presents a use case of publishing research metadata as Linked Open Data and, principally, supporting users in consuming it through visualizations.
Presentation of http://dspacecris.eurocris.org/jspui/handle/123456789/191
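The idea in the abstract above, research metadata published as interlinked triples and then consumed by applications, can be sketched with a toy in-memory triple store. This is only an illustration (all identifiers and data below are invented); a real deployment would use an RDF store queried via SPARQL.

```python
# Toy triple store for research metadata, illustrating the Linked Open Data
# idea at a very small scale. All subjects, predicates, and objects here are
# hypothetical examples, not data from the presented use case.
TRIPLES = [
    ("paper:1", "dc:title", "Visualizing Linked Research Information"),
    ("paper:1", "dc:creator", "person:andimou"),
    ("paper:1", "dc:partOf", "proj:lod-cris"),
    ("paper:2", "dc:creator", "person:andimou"),
    ("person:andimou", "foaf:name", "A. Dimou"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern (None acts as a wildcard)."""
    return [
        (s, p, o) for (s, p, o) in TRIPLES
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Interlinking in action: resolve an author by name, then find their papers.
author = [s for s, _, _ in query(predicate="foaf:name", obj="A. Dimou")][0]
papers = [s for s, _, _ in query(predicate="dc:creator", obj=author)]
```

A visualization layer would run exactly this kind of pattern query and render the resulting links, for example as a co-authorship graph.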
Moving from an IR to a CRIS, the why & how – David T Palmer
IRs collect, manage and display publications and their metadata. However, an institution’s research, expertise and capacity are described by more than publications. The HKU Scholars Hub, hosted in DSpace, began as the IR of The University of Hong Kong (HKU) in 2005. Asking for voluntary deposit of publications from HKU academics, it received little notice and, more importantly, little support from University senior management. In 2009 a new HKU initiative, Knowledge Exchange, adopted the Hub as a key vehicle to share knowledge and skill with the community outside HKU. With funding support from the Office of KE, we extended the data model of DSpace to include relational tables on non-publication objects, including people, grants, and patents, holding attributes of these objects such as co-investigators, co-inventors, co-prize winners, research interests, languages spoken, supervision of postgraduate theses, etc. The DSpace user interface now delivers integrated search and display on these objects and attributes, as well as on newly derived ones, such as authority work on name disambiguation and synonymy in Roman and Hanzi (漢字) scripts; visualizations of networks of co-authors, co-investigators, etc.; metrics extracted from external sources such as Scopus, WoS, PubMed and Google Scholar Citations; internal alt-metrics of view and download counts; and more. Beyond the functions of an IR, the Hub now performs as a system for reputation management, impact management, and research networking and profiling -- all concepts included in the broad term “Current Research Information System” (CRIS). These new objects and attributes, curated from several trusted sources and integrated into the present mashup, contextualize and highlight HKU research, and attract more hits than an IR with only publications.
The HKU Office of Knowledge Exchange has now funded the modularization of these new HKU features of DSpace. Together with our partner, CINECA of Italy, we are making this work available in open source for the DSpace community.
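The data-model extension described above, relational tables for people and grants alongside publications, can be sketched with SQLite. This is a minimal sketch only; the table and column names below are illustrative inventions, not the Hub's actual schema.

```python
import sqlite3

# Sketch of a CRIS-style data model: publications plus non-publication
# entities (people, grants) linked by relation tables. Schema and data are
# hypothetical examples, not the HKU Scholars Hub implementation.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE person       (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE publication  (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE grant_       (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE authored     (person_id INTEGER, publication_id INTEGER);
    CREATE TABLE investigates (person_id INTEGER, grant_id INTEGER);
    INSERT INTO person       VALUES (1, 'D. Palmer');
    INSERT INTO publication  VALUES (1, 'IR to CRIS');
    INSERT INTO grant_       VALUES (1, 'Knowledge Exchange');
    INSERT INTO authored     VALUES (1, 1);
    INSERT INTO investigates VALUES (1, 1);
""")

# Integrated profile: one query spanning publications AND grants, the kind
# of cross-entity view an IR limited to publications cannot provide.
profile = db.execute("""
    SELECT p.name, pub.title, g.title
    FROM person p
    JOIN authored a       ON a.person_id = p.id
    JOIN publication pub  ON pub.id = a.publication_id
    JOIN investigates i   ON i.person_id = p.id
    JOIN grant_ g         ON g.id = i.grant_id
""").fetchall()
```

The design point is the relation tables: once people and grants are first-class rows rather than free-text metadata fields, joins give the integrated search and profiling the abstract describes.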
Institutional Repositories have grown in importance over the last 10 years to offer a core university and library service; however, their role is now developing faster than it ever has. Funder Open Access requirements, internal reporting, research data, REF2020 and more are increasing the demands on the traditional repository, putting pressure on staff resources and challenging the underlying software.
This webinar outlines these issues and looks at how the needs and use of repositories may change in the future.
Please respect the CC BY 2.5 licence.
2.24.16 Slides, “VIVO plus SHARE: Closing the Loop on Tracking Scholarly Acti... – DuraSpace
Hot Topics: The DuraSpace Community Webinar Series
Series 13: “VIVO plus SHARE: Closing the Loop on Scholarly Activity”
Webinar 1: “VIVO plus SHARE: Closing the Loop on Tracking Scholarly Activity”, 2.24.16
Curated by Rick Johnson, Program Co-Director, Digital Initiatives and Scholarship; Head, Data Curation and Digital Library Solutions, Hesburgh Libraries, University of Notre Dame; Visiting Program Officer for SHARE at the Association of Research Libraries. Presented by Rick Johnson & Mike Conlon, VIVO Project Director, DuraSpace
4.16.15 Slides, “Enhancing Early Career Researcher Profiles: VIVO & ORCID Int... – DuraSpace
Hot Topics: The DuraSpace Community Webinar Series
Series 11: Integrating ORCID Persistent Identifiers with DSpace, Fedora and VIVO
Webinar 3: “Enhancing Early Career Researcher Profiles: VIVO & ORCID Integration”
April 16, 2015
Curated by Josh Brown, ORCID
Presented by: Simeon Warner, Library Information Systems, Cornell University, Jon Corson-Rikert, Head of Information Technology Services, Cornell University and Kristi Holmes, Director, Galter Health Sciences Library, Northwestern University
Data Science: History repeated? – The heritage of the Free and Open Source GI... – Peter Löwe
Data Science is described as the process of knowledge extraction from large data sets by means of scientific methods. The discipline draws heavily on techniques and theories from many fields, which are jointly used to further develop information retrieval on structured or unstructured very large datasets. While the term Data Science was already coined in 1960, the current perception still places the field in the first section of the hype cycle according to Gartner, well en route from the technology trigger stage to the peak of inflated expectations.
In our view, the future development of Data Science could benefit from an analysis of experiences from related evolutionary processes. One predecessor is the area of Geographic Information Systems (GIS). The intrinsic scope of GIS is the integration and storage of spatial information from often heterogeneous sources, data analysis, and the sharing of reconstructed or aggregated results in visual form or via data transfer. GIS is successfully applied to process and analyse spatially referenced content in a wide and still expanding range of science areas, spanning from human and social sciences like archeology, politics and architecture to environmental and geoscientific applications, even including planetology.
This paper presents proven patterns for innovation and organisation derived from the evolution of GIS, which can be ported to Data Science. Within the GIS landscape, three strategic interacting tiers can be denoted: i) standardisation; ii) applications based on closed-source software, without the option of access to and analysis of the implemented algorithms; and iii) Free and Open Source Software (FOSS), based on freely accessible program code enabling analysis, education and improvement by everyone. This paper focuses on patterns gained from the synthesis of three decades of FOSS development. We identify best practices which evolved from long-term FOSS projects, describe the role of community-driven global umbrella organisations such as OSGeo, as well as the standardization of innovative services. The main driver is the acknowledgement of a meritocratic attitude.
These patterns follow evolutionary processes of establishing and maintaining a web-based democratic culture, spawning new kinds of communication and projects. This culture transcends the established compartmentation and stratification of science by creating mutual benefits for the participants, irrespective of their research interest and standing. Adopting these best practices will enable …
4.2.15 Slides, “Hydra: many heads, many connections. Enriching Fedora Reposit... – DuraSpace
Hot Topics: The DuraSpace Community Webinar Series
Series 11: Integrating ORCID Persistent Identifiers with DSpace, Fedora and VIVO
Webinar 2: “Hydra: many heads, many connections. Enriching Fedora Repositories with ORCID.”
Thursday, April 2, 2015
Curated by Josh Brown, ORCID
Presented by: Laura Paglione, Technical Director, ORCID and Rick Johnson, Head of Digital Library Services, University of Notre Dame
DSpace-CRIS: An open source solution for Research (EDU15) – Michele Mennielli
The research area is a complex world to manage. It involves collecting data, supporting researchers and administrators, monitoring results, allocating resources efficiently, enhancing visibility, and strengthening national and international collaborations. RIM systems manage these activities, but they can be too expensive. This is why Cineca developed DSpace-CRIS and released it as open source.
Session 1 and 2 "Challenges and Opportunities with Big Linked Data Visualiza... – Laura Po
"Challenges and Opportunities with Big Linked Data Visualization" tutorial @ISWC 2018
A book on the topic published by the author is
"Linked Data Visualization: Techniques, Tools and Big Data"
Laura Po, Nikos Bikakis, Federico Desimoni & George Papastefanatos
Synthesis Lectures on Data, Semantics and Knowledge
Morgan & Claypool, 2020
ISBN: 9781681737256 | 9781681737263 (ebook)
DOI: 10.2200/S00967ED1V01Y201911WBE019
Morgan & Claypool: https://www.morganclaypool.com/doi/abs/10.2200/S00967ED1V01Y201911WBE019
Homepage: http://www.linkeddatavisualization.com
Presentation held at the Dutch Digital Author Identifier conference, 2013-03-14, to demonstrate the range of metrics services that can become available to researchers once the identifier infrastructure has been put in place.
Just an idea I have about Research Analytics. The Dutch government would like to see performance indicators from Higher Education institutions. This information can be drawn from many silos and simply yield the indicator. However, when the information from these silos is combined, more advanced analytics can be derived from it using data- and text-mining techniques.
But the first step is to integrate the information in a scalable data infrastructure. Research information is not big data, but it is wise to think about scalability to overcome performance issues in the future.
Multiple sources provide the data with a daily update, such as repositories' metadata, web log files, altmetrics data, citation references, funding and grant information, etc. The information is calculated and implicit relations are made explicit. In the end it is provided as open data in various formats, including RESTful APIs. Services can draw information from these APIs to base their offerings on. Examples are metrics and analytics services, but also resolution and information portals.
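The integration step proposed above, merging per-publication signals from several silos before computing indicators, might look like this in outline. All field names and numbers below are made up for illustration.

```python
from collections import defaultdict

# Hypothetical per-publication signals from separate silos: web log files
# (downloads), altmetrics mentions, and citation references.
downloads = {"pub:1": 120, "pub:2": 30}
mentions  = {"pub:1": 4}
citations = {"pub:2": 7}

def combine(*silos):
    """Merge named per-publication counts from each silo into one record."""
    merged = defaultdict(dict)
    for name, silo in silos:
        for pub, value in silo.items():
            merged[pub][name] = value
    return dict(merged)

records = combine(("downloads", downloads),
                  ("mentions", mentions),
                  ("citations", citations))

# A naive composite indicator over the merged record (illustrative only;
# a real indicator would weight and normalize the signals).
score = {pub: sum(rec.values()) for pub, rec in records.items()}
```

The merged records are what a RESTful API would expose; indicator services, like the `score` above, then build on that single integrated view rather than on each silo separately.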
This presentation depicts the contrast between the conservatism of the academic tradition and the new trends of the digital information age.
The MyResearch Portal is a proof-of-concept online collaboration tool for researchers. On this portal they can add third-party gadgets and securely share them with their peers and co-workers. In this particular case they can make use of almost all products resulting from the SURFshare programme, especially for making Enhanced Publications and depositing Research Data.
This presentation is in DUTCH.
Brief introduction about SURF.
Zooming in to Enhanced Publication Activities
This is the version WITH the videos:
http://dl.dropbox.com/u/1120383/SURF-VerrijktePublicaties-OCW-OWB-2011-06-23-compressed-shared.pptx
Enhanced Publications - Guest Lecture @Utrecht University - Design of Interac... – maurice.vanderfeesten
The evolution of academic papers involves a paradigm shift in scholarly communication as the medium changes to electronic distribution of knowledge.
Pushing the limits of ePRTC: 100ns holdover for 100 days – Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... – SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Securing your Kubernetes cluster: a step-by-step guide to success! – KatiaHIMEUR1
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been easier to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
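One concrete step of the kind such a guide covers, auditing workload security contexts, can be sketched as a check over a pod manifest. This is a toy sketch only (the manifest and policy below are invented); a real cluster would enforce this with admission controls such as Pod Security Admission or dedicated scanners.

```python
# Minimal sketch of a pod-manifest audit: flag containers that run
# privileged or that are not required to run as non-root. The manifest
# below is a hypothetical example, not a real workload.
def audit_pod(manifest):
    findings = []
    for c in manifest.get("spec", {}).get("containers", []):
        ctx = c.get("securityContext", {})
        if ctx.get("privileged"):
            findings.append(f"{c['name']}: privileged container")
        if ctx.get("runAsNonRoot") is not True:
            findings.append(f"{c['name']}: may run as root")
    return findings

pod = {"spec": {"containers": [
    {"name": "app", "securityContext": {"runAsNonRoot": True}},
    {"name": "sidecar", "securityContext": {"privileged": True}},
]}}
issues = audit_pod(pod)  # non-empty list means the pod fails the check
```

Note the default-deny stance: a container with no `securityContext` at all is flagged, which is usually the safer posture for this kind of check.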
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
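Whatever model produces the markup in a workflow like the one described, checking its output before it enters the pipeline is cheap insurance against malformed results. A minimal well-formedness and structure check using Python's standard library might look like this (the element names and sample output are invented for the example):

```python
import xml.etree.ElementTree as ET

# Hypothetical AI-generated output: plain text enriched with XML markup.
generated = "<article><title>AI and XML</title><p>Body text.</p></article>"

def check_generated_xml(text, required_children=("title",)):
    """Parse model output; verify it is well-formed XML and that the root
    element has the expected top-level children. Returns (ok, detail)."""
    try:
        root = ET.fromstring(text)
    except ET.ParseError as e:
        return False, f"not well-formed: {e}"
    present = {child.tag for child in root}
    missing = [tag for tag in required_children if tag not in present]
    if missing:
        return False, f"missing elements: {missing}"
    return True, root.tag

ok, detail = check_generated_xml(generated)
```

In a fuller pipeline the same gate is where schema validation (XSD, Schematron) would run, so that AI-produced markup is held to the same rules as hand-authored content.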
Threats to mobile devices are more prevalent than ever and are increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features provide convenience and capability at the expense of security. This best-practices guide outlines steps users can take to better protect personal devices and information.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
A tale of scale & speed: How the US Navy is enabling software delivery from l... – sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
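The automated policy checks in the list above can be sketched as a small gate over an SBOM. This is an illustrative toy, not the Party Barge implementation: the SBOM below is a heavily simplified CycloneDX-like structure, and the deny-list and vulnerability data are invented.

```python
# Toy SBOM policy gate: fail the pipeline if any component is on a
# deny-list or carries a known vulnerability. Data is hypothetical.
DENYLIST = {"log4j-core@2.14.1"}

sbom = {"components": [
    {"name": "log4j-core", "version": "2.14.1", "vulns": ["CVE-2021-44228"]},
    {"name": "requests",   "version": "2.31.0", "vulns": []},
]}

def policy_check(sbom):
    """Return the list of policy violations found in the SBOM."""
    violations = []
    for c in sbom["components"]:
        ref = f"{c['name']}@{c['version']}"
        if ref in DENYLIST:
            violations.append(f"{ref}: denied component")
        for cve in c.get("vulns", []):
            violations.append(f"{ref}: {cve}")
    return violations

violations = policy_check(sbom)  # non-empty -> block the deployment
```

The violation list itself is the kind of policy evidence an Authorizing Official can review: the gate's decision is reproducible from the SBOM rather than from a manual checklist.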
20 Comprehensive Checklist of Designing and Developing a Website – Pixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs – Alex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
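The skipping intuition behind SAFA (though none of Reef's cryptographic machinery) can be illustrated in plain code: for a pattern like `.*ERROR.*`, a matcher only needs to account for the characters it actually verifies and may jump over irrelevant regions, which is what keeps the proved work small. A toy sketch, assuming a non-empty literal (this is an invented illustration, not Reef's algorithm):

```python
def match_contains(doc: str, literal: str):
    """Return (matched, inspected) for the pattern '.*literal.*', where
    `inspected` counts only the characters the matcher actually verified.
    Toy illustration of automaton skipping; assumes literal is non-empty."""
    i = 0
    inspected = 0
    while True:
        j = doc.find(literal[0], i)   # skip: jump over irrelevant text
        if j == -1:
            return False, inspected
        k = 0
        while k < len(literal) and j + k < len(doc) and doc[j + k] == literal[k]:
            inspected += 1            # this character was verified
            k += 1
        if k == len(literal):
            return True, inspected    # full literal verified at position j
        if j + k < len(doc):
            inspected += 1            # the character that broke the match
        i = j + 1
```

On a long document, `inspected` stays proportional to the verified window rather than the document length, mirroring why skipping helps proof size.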
Climate Impact of Software Testing at Nordic Testing Days – Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability, which can then be measured continuously. Test environments can be used less, at smaller scale and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
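One standard way to minimize the number of tests, in the spirit the talk suggests, is greedy test-suite minimization: repeatedly pick the test covering the most still-uncovered code until everything is covered. A small sketch (the coverage data is invented for the example):

```python
# Greedy test-suite minimization: choose a subset of tests that still hits
# every covered code unit, so fewer tests run and less energy is spent.
# The coverage map below is a made-up example.
coverage = {
    "test_login":    {"auth", "session"},
    "test_logout":   {"session"},
    "test_checkout": {"cart", "payment", "session"},
    "test_cart":     {"cart"},
}

def minimize(coverage):
    """Return a subset of tests preserving the suite's total coverage."""
    needed = set().union(*coverage.values())
    chosen = []
    while needed:
        # pick the test covering the most not-yet-covered units
        best = max(coverage, key=lambda t: len(coverage[t] & needed))
        if not coverage[best] & needed:
            break
        chosen.append(best)
        needed -= coverage[best]
    return chosen

selected = minimize(coverage)
```

Greedy set cover is not guaranteed optimal, but it is simple, fast, and in practice cuts suites substantially while preserving coverage.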
DevOps and Testing slides at DASA Connect – Kari Kakkonen
Slides by me and Rik Marselis at the 30.5.2024 DASA Connect conference. We discuss what testing is, then what agile testing is, and finally what Testing in DevOps is. We ended with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats because of the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He has around 20 years of solution-engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and on application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Communications Mining Series - Zero to Hero - Session 1 - DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
20240605 QFM017 Machine Intelligence Reading List May 2024
SURE2 Statistics dashboard mockup
1. INSIGHT USAGE STATISTICS
Mockup presentation for in-depth usage analysis of Repositories, Publications and Authors
maurice.vanderfeesten 1
2. “Since it is their [Funding Agencies] mission to promote research, many funding agencies support open access. [...] Thus the question arises whether the money invested in open access is on a par with its research impact.”
Johannes Fournier (German Research Foundation)
Berlin6, 13 May 2008, Session 7 - Pondering on Research Impact and Cost Effectiveness
http://oa.mpg.de/berlin6/index8a92.html?page_id=75
3. “Scientific information has the power to transform our lives for the better – it is too valuable to be locked away. In addition, every EU citizen has the right to access and benefit from knowledge produced using public funds.”
Neelie Kroes, Vice-President of the European Commission for the Digital Agenda
Ghent, 2 December 2010
4. The question of how to measure the effectiveness of such a publishing model [Open Access] was simple, according to Liebrand – by usage levels.
Wim Liebrand, chief executive of SURFfoundation
Research Information, January 2011, Issue 51, page 17
36. Business case
• Present the impact and usage of Open Access publications from repositories in the Netherlands in an attractive way.
• Offers an additional way to measure impact (in society).
• For whom: Repository Managers & Researchers
37. Results
• Downloads and views per Author and per Publication
• Statistics Dashboard
• Further rollout and embedding in the national repository community and Narcis
• Result at Open Access Week 2011 (Oct)
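The per-author and per-publication counts above can be sketched as a simple aggregation over usage events. This is a minimal illustration only; the event records, field names, and author names below are invented, not the SURE2 data model:

```python
from collections import Counter

# Hypothetical usage events as the dashboard backend might receive them.
events = [
    {"author": "A. Jansen",   "publication": "pub-001", "type": "download"},
    {"author": "A. Jansen",   "publication": "pub-001", "type": "view"},
    {"author": "B. de Vries", "publication": "pub-002", "type": "download"},
    {"author": "A. Jansen",   "publication": "pub-003", "type": "download"},
]

def usage_per(key, events):
    """Count downloads and views grouped by the given field ('author' or 'publication')."""
    counts = {}
    for e in events:
        counts.setdefault(e[key], Counter())[e["type"]] += 1
    return counts

per_author = usage_per("author", events)
assert per_author["A. Jansen"]["download"] == 2
```

The same `usage_per` call grouped by `"publication"` yields the per-publication view of the dashboard.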
38. NWO/CWI, TUD, UL, DANS, UM, UU, UvA, UvT, VU, WUR
Embeddable </> HTML code snippets
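An "embeddable HTML code snippet" would let each repository paste its own statistics widget into its pages. As a hedged sketch only — the dashboard URL, query parameter, and default sizes below are invented, not the SURE2 interface:

```python
from html import escape

def embed_snippet(repository_id, width=400, height=300):
    """Build an iframe snippet pointing at a (hypothetical) stats dashboard URL."""
    src = "https://stats.example.nl/dashboard?repo=" + escape(repository_id)
    return ('<iframe src="%s" width="%d" height="%d" frameborder="0"></iframe>'
            % (src, width, height))

print(embed_snippet("UU"))
```

A repository manager would copy the returned string into a page template; the iframe then renders that repository's usage charts.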
39. Project structure
WP1: Project Management
WP2: Repository Dashboard development (outside Narcis) + roadmapping
WP3: End-user Dashboard (in Narcis)
WP4: Log Harvesting
WP5: Repositories implementation and support
WP6: Knowledge dissemination
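WP4, log harvesting, amounts to turning web-server access logs into the usage events the dashboard counts. A minimal sketch, assuming standard Apache combined-format logs; the rule "a `.pdf` request is a download, anything else is a view" is an assumption for illustration, not the project's actual classification:

```python
import re

# Matches the start of an Apache combined log line: IP, timestamp, GET request, status.
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "GET (?P<path>\S+) [^"]*" (?P<status>\d+)'
)

def harvest(line):
    """Parse one access-log line into a usage event, or None for non-200/unparseable lines."""
    m = LOG_RE.match(line)
    if not m or m.group("status") != "200":
        return None
    path = m.group("path")
    return {
        "path": path,
        "timestamp": m.group("ts"),
        "type": "download" if path.endswith(".pdf") else "view",  # assumed rule
    }

line = '192.0.2.1 - - [01/Jun/2011:10:00:00 +0200] "GET /repo/pub-001.pdf HTTP/1.1" 200 1234'
event = harvest(line)
assert event["type"] == "download"
```

A real harvester would also filter robots and double clicks before feeding events to the dashboard, as usage-statistics codes of practice require.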
40. Project execution
• WP1: Project management
– UvT
• WP2: Repository Dashboard development (outside Narcis)
– DANS
• WP3: End-user Dashboard development (in Narcis)
– DANS
• WP4: Log Harvesting
– DANS
• WP5: Repository implementation and support
– Implementation: EUR, NWO/CWI, TUD, UM, UU, UvT, WUR (≈16 hours per IR)
– Support: VU, UvA, UL (≈2 hours per IR to be implemented)
• WP6: Knowledge dissemination
– UL
– Article & conference?
– Exposure in the SURFshare community: newsletter, ScienceGuide, etc.
41. Budget allocation
• WP1: Project Management – €7,500
• WP2: Repository Dashboard development (outside Narcis) + roadmapping
• WP3: End-user Dashboard (in Narcis) – €12,500
• WP4: Log Harvesting
• WP5: Repositories implementation and support – own contribution
• WP6: Knowledge dissemination – €5,000
Funding:
– Lead partner: 100% subsidy
– DANS: 50% subsidy + 50% matching
– Institutions: own contribution (=100% matching)
– Institution: 100% subsidy
42. Next steps
• Project plan – towards BIK
• Letter of intent (parties with final responsibility for the repositories)
• Subsidy conditions for the lead partner
• Communicate the planning
– Kickoff: 1 June
– Result: Open Access Week 2011 (Oct)