“Identity, Privacy, and Data Protection in the Cloud – What is Being Done? Is it Enough?” GOAL Global Outsourcing Lawyers Conference. Capgemini. Mark Skilton
RCA OCORA: Safe Computing Platform using open standardsAdaCore
The railway sector is facing a major transition as it moves towards more fully automated systems on both the train and infrastructure sides. This, in turn, requires the development of appropriate, future-proof connectivity and IT platforms.
The Reference Control Command and Signalling Architecture (RCA) and Open Control Command and Signalling Onboard Reference Architecture (OCORA) have developed a functional architecture for future trackside and onboard functions. The RCA OCORA open Control Command Signalling (CCS) on-board reference architecture introduces a standardized separation of safety-relevant and non-safety-relevant railway applications and the underlying IT platforms. This allows rail operators to decouple the very distinct life cycles of the domains and aggregate multiple railway applications on common IT platforms.
Based on a Safe Computing Platform (SCP), the architecture accommodates a Platform Independent Application Programming Interface (PI API) between safety-relevant railway applications and IT platforms. This approach supports the portability of railway applications among IT platform realisations from different vendors.
Two of its authors will discuss the RCA OCORA architecture with emphasis on its safe computing framework. The talk will review the required operating system standards and discuss the newly released DDS Reference Implementation for Safe Computing Platform Messaging. While designed for rail, this architecture has elements of interest for other industries.
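The decoupling behind the PI API can be illustrated with a minimal sketch (illustrative only; the class and method names below are hypothetical and not the actual PI API): a safety-relevant application codes against an abstract messaging interface, while each vendor's platform supplies a concrete transport, such as DDS, behind it.

```python
from abc import ABC, abstractmethod

class PlatformMessaging(ABC):
    """Hypothetical stand-in for a platform-independent messaging API.
    A real SCP would expose a standardized interface like this so that
    applications never depend on one vendor's transport."""
    @abstractmethod
    def publish(self, topic: str, payload: bytes) -> None: ...
    @abstractmethod
    def subscribe(self, topic: str, callback) -> None: ...

class InMemoryMessaging(PlatformMessaging):
    """Test double; a vendor realisation might wrap a DDS implementation."""
    def __init__(self):
        self._subscribers = {}
    def publish(self, topic, payload):
        for cb in self._subscribers.get(topic, []):
            cb(payload)
    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

# The safety application depends only on the abstract interface,
# so it stays portable across platform realisations from different vendors.
def report_speed(bus: PlatformMessaging, speed_kmh: float) -> None:
    bus.publish("train/speed", str(speed_kmh).encode())
```

Because the application only ever sees `PlatformMessaging`, swapping the in-memory test double for a certified DDS-backed implementation requires no application change, which is the portability property the architecture targets.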
A NoSQL database is ideal for storing, querying, and managing the variably structured information and new data types of the Big Data world … but does that mean a NoSQL database is ready for the enterprise? We say yes. People assume that relational is always ACID and NoSQL is always BASE. Is that actually true? We say no.
In this 45-minute webinar, Jason Hunter, Chief Architect of MarkLogic, and his colleague Diane Burley, Chief Content Strategist, will discuss MarkLogic, the world's only Enterprise NoSQL Database.
You will learn:
- What's different about a NoSQL database
- What makes MarkLogic an Enterprise NoSQL Database
- How you can do ad hoc queries against ad hoc structured data
- How MarkLogic handles the CAP theorem limitations
- How MarkLogic opens up new opportunities in Big Data
Snowflake’s Cloud Data Platform and Modern AnalyticsSenturus
See a demo and learn how Snowflake meets the needs of performant BI. Designed to handle both structured and unstructured data, Snowflake can serve as a single data repository, providing elastic performance and scalability.
Senturus offers a full spectrum of services in business intelligence and training on Tableau, Power BI and Cognos. Our resource library has hundreds of free live and recorded webinars, blog posts, demos and unbiased product reviews available on our website at: http://www.senturus.com/senturus-resources/.
Turtles, Trust and The Future of Cybersecurity
Faith in our institutions is collapsing, and GDPR is at the door. What would cybersecurity look like if we started from scratch, right now, in our hybrid, interdependent world? It would focus relentlessly on data. Learn how a data-centric security approach can reduce risk, increase efficiency and re-engineer trust in a society where faith has been shaken by unstoppable breaches.
This session provides a brief overview of the various models available for adopting cloud and their strategic considerations, ranging from providing Enterprise class service to business alignment. This session also explores the infrastructure, management, and benefits of cloud computing and cloud storage.
Objective 1: Understand the various cloud models and their associated benefits and considerations.
After this session you will be able to:
Objective 2: Gain a high-level understanding of technologies that EMC can provide to accelerate adoption of the cloud models.
Objective 3: Understand the tactical approaches to cloud consumption available to their organization based on its needs and transformation phase.
Watch the recordings via http://www.brainshark.com/emcworld/vu?pi=zGfzHnlI1zB8sLz0
GDPR Compliance: Transparent Handling of Personally Identifiable Information i...confluent
By nature, event-driven systems transform data and propagate it across multiple services. This characteristic makes GDPR compliance challenging. Immutable Kafka logs make it impossible to explicitly delete a published message that may contain Personally Identifiable Information (PII). A general solution has been to choose a short enough retention duration for such topics so that the data is eventually removed within the allowed time limit. As for consumers of the data, one typically has to audit and trace where the data is propagated, and request each of the consuming services to purge their copy. Even then, PII may still continue to exist, for example in backups, intermediate staging environments like S3 buckets, and ad hoc copies of the data used for business analytics, data science, etc.
This talk presents a way to build GDPR compliance into the message propagation protocol itself, and to utilise crypto-shredding to render all copies of PII indecipherable on demand. The talk explains how a message schema such as Protocol Buffers can be extended to allow publishers of data to mark fields as PII. It shows how GDPR compliance can be integrated into existing APIs that were not designed with GDPR in mind, with minimal disruption. It illustrates how the marked data is encrypted before it is stored in Kafka and guarantees that the data remains encrypted throughout its entire propagation journey. The talk shows how the key management system works transparently across thousands of services to control access to data at different granularities and protects against cross-referencing to avoid unauthorized access to data.
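The crypto-shredding idea can be sketched as follows (a minimal illustration, not the speaker's implementation; the key store and field handling are hypothetical): each data subject's PII is encrypted with a per-subject key before publishing, so deleting that key makes every retained copy of the ciphertext, in Kafka logs, backups, or downstream stores, unreadable.

```python
import os
import hashlib

class KeyStore:
    """Hypothetical per-subject key store; deleting a key = crypto-shredding."""
    def __init__(self):
        self._keys = {}
    def key_for(self, subject_id):
        # Create a key on first use for this data subject.
        return self._keys.setdefault(subject_id, os.urandom(32))
    def get(self, subject_id):
        return self._keys.get(subject_id)
    def shred(self, subject_id):
        # All ciphertexts for this subject become unrecoverable.
        self._keys.pop(subject_id, None)

def _keystream(key: bytes, n: int) -> bytes:
    # Toy stream cipher for illustration only; production code should use
    # an authenticated cipher (e.g., AES-GCM) from a vetted library.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_pii(keys: KeyStore, subject_id: str, value: str) -> bytes:
    key = keys.key_for(subject_id)
    data = value.encode()
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def decrypt_pii(keys: KeyStore, subject_id: str, blob: bytes):
    key = keys.get(subject_id)
    if key is None:
        # Key was shredded: the PII is gone even though the log keeps the bytes.
        return None
    return bytes(a ^ b for a, b in zip(blob, _keystream(key, len(blob)))).decode()
```

The point is that an erasure request touches only the key management system; the immutable Kafka topics, and any copies of them, never need to be rewritten.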
MT126 Virtustream Storage Cloud: Hyperscale Cloud Object Storage Built for th...Dell EMC World
Are you looking for an object store for cloud-native applications? Or do you want to archive, backup, or tier data to the cloud? Attend this session to learn about how Virtustream Storage Cloud supports cloud apps, and works with cloud enablers such as EMC CloudArray, CloudBoost, and CloudPools to move data to secure, scalable, and competitively-priced enterprise-class cloud object storage.
Dr. Jeff Daniels and Dr. Ben Amaba discuss performance without limits 2.0 with a special focus on emerging research in industrial control systems, cybersecurity, cloud computing, and the pervasiveness of mobile devices.
This talk was given at the Winspear Opera House in Dallas, Texas.
Cross Domain Cyber Situational Awareness in a Multi Cloud, Multi-Network Fede...SolarWinds
Completely isolating mission critical clouds and networks is fundamental to maximize cyber security in defense, intelligence, law enforcement and certain commercial segments. However, the negative impact on IT operations is significant as each domain must be monitored and managed separately. Cross domain solutions are meant to reduce or eliminate this impact. Join us for a free webinar to learn how SolarWinds and BlueSpace have partnered to enable the SolarWinds Enterprise Operations Console to provide a global, enterprise view of the cyber environment across security domains and how your agency can benefit from a Cross Domain Cyber Situational Awareness solution.
Modern industrial security attacks are growing in volume and sophistication, often targeting control system infrastructure. A single attack can cost millions of dollars for offshore drilling services like Diamond Offshore Drilling. Through Rockwell Automation® Asset Centre and Cisco’s Threat Detection Services, the company now has systems in place to help detect and respond to security threats, and expedite the recovery process for critical infrastructure.
The Journey from Zero to SOC: How Citadel built its Security Operations from ...Elasticsearch
See how Citadel Group replaced their IT ops infrastructure monitoring tool with Elastic Security and Elastic Cloud Enterprise — and how it positively impacted their managed enterprise software and services offerings for their end customers across the world.
Aerial Data Management and The Digital EnterpriseAdvisian
Broader industries have seen improved performance across the whole asset lifecycle through the extensive application of digital technologies and increased integration of information systems. We explain how.
Rethink Protecting Your Data in a Virtual World - StorageIO at VMworld 2014Nexsan by Imation
For those who joined us for our VMworld breakfast in San Francisco this morning, we want to thank you for your participation and please feel free to review the presentation to make sure you got all the facts!
Weren't able to join us? No sweat! Get the full presentation given by Greg Schulz of StorageIO!
Looking for more information about storage solutions? Visit the website - http://www.nexsan.com/ - or get the latest updates on our social media accounts:
twitter.com/nexsan
linkedin.com/nexsan
facebook.com/nexsan
youtube.com/nexsan
Welcome to "Cybersecurity for SAP, Everywhere You Need It." This presentation is about Securing SAP Solutions for the Digital Enterprise. It is audio enabled, so the best experience comes from downloading the PPSX file (90 MB) and watching it offline. Or watch the YouTube video at https://youtu.be/rNaG5QvmFs4.
Globus Compute with IRI Workflows - GlobusWorld 2024Globus
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of this work, the team is investigating ways to speed up the time to solution for many different parts of the DIII-D workflow, including how jobs are run on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks, and we describe a brief proof of concept showing how Globus Compute could help schedule jobs and serve as a tool to connect compute at different facilities.
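The pattern described, dispatching tasks to a remote HPC system through Globus Compute, looks roughly like the sketch below. This is a minimal illustration: the analysis function is a hypothetical stand-in for real DIII-D code, and the endpoint UUID must be replaced with one you operate.

```python
# Requires: pip install globus-compute-sdk
# The task is plain Python; Globus Compute serializes it and executes it
# on the chosen endpoint, returning results via futures.
def analyze_shot(shot_number: int) -> dict:
    # Hypothetical stand-in for a real DIII-D analysis step.
    return {"shot": shot_number, "status": "processed"}

def run_remote(endpoint_id: str, shots):
    """Submit one task per shot to a Globus Compute endpoint."""
    from globus_compute_sdk import Executor  # imported lazily; requires auth
    with Executor(endpoint_id=endpoint_id) as ex:
        futures = [ex.submit(analyze_shot, s) for s in shots]
        return [f.result() for f in futures]
```

A caller would invoke something like `run_remote("<endpoint-uuid>", [180000, 180001])`; the same code can target endpoints at different facilities by changing only the UUID, which is what makes this attractive for cross-facility scheduling.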
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart...Globus
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data and applying computations on a different system. As part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined data workflows, which can be run on demand, capable of applying many data reduction and data analysis operations to the large ESGF data archives and transferring only the resultant analysis (e.g., visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
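Flows of this kind are expressed as JSON state machines. The sketch below builds a minimal two-step definition (transfer, then analyze) in Python; it is a hedged illustration, not an ESGF 2.0 flow. The transfer step uses the public Globus Transfer action provider URL, while the second step's action URL and parameters are hypothetical placeholders.

```python
def make_flow_definition(src, dst, path):
    """Minimal Globus Flows-style definition: move data, then run analysis.
    Endpoint IDs and paths are placeholders supplied by the caller."""
    return {
        "StartAt": "TransferData",
        "States": {
            "TransferData": {
                "Type": "Action",
                "ActionUrl": "https://actions.globus.org/transfer/transfer",
                "Parameters": {
                    "source_endpoint_id": src,
                    "destination_endpoint_id": dst,
                    "transfer_items": [
                        {"source_path": path, "destination_path": path}
                    ],
                },
                "ResultPath": "$.TransferResult",
                "Next": "Analyze",
            },
            "Analyze": {
                # Hypothetical analysis step (e.g., a compute action that
                # runs a data-reduction script on the destination system).
                "Type": "Action",
                "ActionUrl": "https://example.org/actions/reduce",
                "Parameters": {"task": "reduce_and_visualize"},
                "ResultPath": "$.AnalysisResult",
                "End": True,
            },
        },
    }
```

Registering such a definition with the Flows service lets users run the transfer-plus-reduction pipeline on demand, returning only the small analysis products rather than the petabyte-scale inputs.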
Similar to Solving the 100-Year Data Lifecycle Dilemma
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
Globus Connect Server Deep Dive - GlobusWorld 2024Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...Globus
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
Providing Globus Services to Users of JASMIN for Environmental Data AnalysisGlobus
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
First Steps with Globus Compute Multi-User EndpointsGlobus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researchers' workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we encountered were that each researcher had to set up and manage their own single-user Globus Compute endpoint, and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges, and share an update on our progress here.
Enhancing Research Orchestration Capabilities at ORNL.pdfGlobus
Cross-facility research orchestration comes with ever-changing constraints regarding the availability and suitability of various compute and data resources. In short, a flexible data and processing fabric is needed to enable the dynamic redirection of data and compute tasks throughout the lifecycle of an experiment. In this talk, we illustrate how we easily leveraged Globus services to instrument the ACE research testbed at the Oak Ridge Leadership Computing Facility with flexible data and task orchestration capabilities.
Understanding Globus Data Transfers with NetSageGlobus
NetSage is an open, privacy-aware network measurement, analysis, and visualization service designed to help end users visualize and reason about large data transfers. NetSage has traditionally used a combination of passive measurements, including SNMP and flow data, as well as active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks worldwide, and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs for Globus data transfers, following the same privacy-preserving approach as for flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk will walk through several example use cases that NetSage can answer, including:
- Who is using Globus to share data with my institution, and what kind of performance are they able to achieve?
- How many transfers has Globus supported for us?
- Which sites are we sharing the most data with, and how is that changing over time?
- How is my site using Globus to move data internally, and what kind of performance do we see for those transfers?
- What percentage of data transfers at my institution used Globus, and how did the overall data transfer performance compare to the Globus users?
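Questions like "which sites are we sharing the most data with" and "what fraction of transfers used Globus" reduce to simple aggregations over transfer-log records. A stdlib-only sketch, using a hypothetical log schema rather than NetSage's actual format:

```python
from collections import defaultdict

# Hypothetical transfer-log records; NetSage's real schema differs.
logs = [
    {"dest_org": "TACC", "bytes": 5_000_000_000, "tool": "globus"},
    {"dest_org": "NCSA", "bytes": 1_200_000_000, "tool": "globus"},
    {"dest_org": "TACC", "bytes": 800_000_000,   "tool": "scp"},
]

def bytes_by_destination(records, tool=None):
    """Total bytes transferred per destination, optionally for one tool."""
    totals = defaultdict(int)
    for r in records:
        if tool is None or r["tool"] == tool:
            totals[r["dest_org"]] += r["bytes"]
    return dict(totals)

def globus_share(records):
    """Fraction of all transferred bytes that moved via Globus."""
    total = sum(r["bytes"] for r in records)
    globus = sum(r["bytes"] for r in records if r["tool"] == "globus")
    return globus / total if total else 0.0
```

Tracking the same aggregates per month turns "how is that changing over time" into one extra grouping key, which is essentially what the dashboard-style views described in the talk present.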
How to Position Your Globus Data Portal for Success Ten Good PracticesGlobus
Science gateways allow science and engineering communities to access shared data, software, computing services, and instruments. Science gateways have gained a lot of traction in the last twenty years, as evidenced by projects such as the Science Gateways Community Institute (SGCI) and the Center of Excellence on Science Gateways (SGX3) in the US, The Australian Research Data Commons (ARDC) and its platforms in Australia, and the projects around Virtual Research Environments in Europe. A few mature frameworks have evolved with their different strengths and foci and have been taken up by a larger community such as the Globus Data Portal, Hubzero, Tapis, and Galaxy. However, even when gateways are built on successful frameworks, they continue to face the challenges of ongoing maintenance costs and how to meet the ever-expanding needs of the community they serve with enhanced features. It is not uncommon that gateways with compelling use cases are nonetheless unable to get past the prototype phase and become a full production service, or if they do, they don't survive more than a couple of years. While there is no guaranteed pathway to success, it seems likely that for any gateway there is a need for a strong community and/or solid funding streams to create and sustain its success. With over twenty years of examples to draw from, this presentation goes into detail for ten factors common to successful and enduring gateways that effectively serve as best practices for any new or developing gateway.
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G...Globus
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
Developing Distributed High-performance Computing Capabilities of an Open Sci...Globus
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic, and the scientific community's broad response to it, forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our team's work applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
The Department of Energy's Integrated Research Infrastructure (IRI)Globus
We will provide an overview of DOE’s IRI initiative as it moves into early implementation, what drives the IRI vision, and the role of DOE in the larger national research ecosystem.
Listen to the keynote address from Rachana Ananthakrishnan and Ian Foster, who review the latest updates to the Globus platform and service, and discuss the relevance of Globus to the scientific community as an automation platform for accelerating scientific discovery.
Enhancing Performance with Globus and the Science DMZGlobus
ESnet has led the way in helping national facilities—and many other institutions in the research community—configure Science DMZs and troubleshoot network issues to maximize data transfer performance. In this talk we will present a summary of approaches and tips for getting the most out of your network infrastructure using Globus Connect Server.
Extending Globus into a Site-wide Automated Data Infrastructure.pdfGlobus
The Rosalind Franklin Institute hosts a variety of scientific instruments, which allow us to capture a multifaceted and multilevel view of biological systems, generating around 70 terabytes of data a month. Distributed solutions such as Globus and Ceph facilitate the storage, access, and transfer of large amounts of data. However, we still must deal with the heterogeneity of the file formats and directory structures at acquisition, which are optimised for fast recording rather than for efficient storage and processing. Our data infrastructure includes local storage at the instruments and workstations, distributed object stores with POSIX and S3 access, remote storage on HPCs, and tape backup. This can pose a challenge in ensuring fast, secure, and efficient data transfer. Globus allows us to handle this heterogeneity, while its Python SDK allows us to automate our data infrastructure using Globus microservices integrated with our data access models. Our data management workflows are becoming increasingly complex and heterogeneous, spanning desktop PCs, virtual machines, and offsite HPCs, as well as several open-source software tools with different computing and data structure requirements. This complexity demands that data be annotated with enough detail about the experiments and the analysis to ensure efficient and reproducible workflows. This talk explores how we extend Globus into different parts of our data lifecycle to create a secure, scalable, and high-performing automated data infrastructure that can provide FAIR [1,2] data for all our science.
1. https://doi.org/10.1038/sdata.2016.18
2. https://www.go-fair.org/fair-principles
Providing Globus Services to Users of JASMIN for Environmental Data AnalysisGlobus
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science, and Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, giving nearly 400 environmental science projects working space, compute resources, and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service for reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, which pushes data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Globus Compute with Integrated Research Infrastructure (IRI) workflowsGlobus
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of this work, the team is investigating ways to speed up the time to solution for many different parts of the DIII-D workflow, including how jobs are run on HPC systems. One of these routes is looking at Globus Compute as a replacement for the current method of managing tasks, and I will give a brief proof of concept showing how Globus Compute could help schedule jobs and serve as a tool to connect compute at different facilities.
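Globus Compute exposes remote function execution through an executor/futures interface modelled on Python's own `concurrent.futures`. The stdlib-only sketch below illustrates that submit-and-collect pattern; `analyze_shot` is a hypothetical stand-in for a DIII-D analysis task, and the thread pool stands in for a remote Globus Compute endpoint.

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_shot(shot_id, signal):
    # Hypothetical stand-in for a DIII-D analysis task that would
    # normally run on an HPC system behind a Globus Compute endpoint.
    return shot_id, sum(signal) / len(signal)

# With Globus Compute, the ThreadPoolExecutor would be replaced by the
# SDK's Executor bound to an endpoint; the submit/result pattern stays
# the same, which is what makes the swap attractive for IRI workflows.
with ThreadPoolExecutor(max_workers=4) as ex:
    futures = [ex.submit(analyze_shot, i, [i, i + 2]) for i in range(3)]
    results = [f.result() for f in futures]
```

Because only the executor changes, the same task graph can be redirected from local threads to compute at another facility.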
Reactive Documents and Computational Pipelines - Bridging the GapGlobus
As scientific discovery and experimentation become increasingly reliant on computational methods, the static nature of traditional publications renders them progressively fragmented and unreproducible. How can workflow automation tools, such as Globus, be leveraged to address these issues and potentially create a new, higher-value form of publication? LivePublication leverages Globus’s custom Action Provider integrations and Compute nodes to capture semantic and provenance information during distributed flow executions. This information is then embedded within an RO-crate and interfaced with a programmatic document, creating a seamless pipeline from instruments, to computation, to publication.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
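The core idea these two papers explore can be reduced to a small retrieval step: link entities mentioned in a question to a knowledge graph, pull in their facts, and hand those facts to an LLM as grounded context. A toy, stdlib-only sketch; the graph contents and the `retrieve_context` helper are illustrative, not Microsoft's GraphRAG implementation:

```python
# A toy knowledge graph: subject -> list of (relation, object) edges.
KG = {
    "FalkorDB": [("is_a", "graph database"), ("founded_by", "Guy Korland")],
    "GraphRAG": [("combines", "LLMs"), ("combines", "knowledge graphs")],
}

def retrieve_context(question, kg):
    """Collect facts for every graph entity mentioned in the question."""
    facts = []
    for entity, edges in kg.items():
        if entity.lower() in question.lower():
            facts += [f"{entity} {rel} {obj}" for rel, obj in edges]
    return facts

# The retrieved facts become grounded context for the language model.
context = retrieve_context("What is FalkorDB?", KG)
prompt = "Answer using these facts:\n" + "\n".join(context)
```

Real systems replace the dictionary with a graph database query and the substring match with proper entity linking, but the retrieve-then-prompt shape is the same.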
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to part 4 of the UiPath Test Automation using UiPath Test Suite series. In this session, we will cover an overview of Test Manager along with the SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, combined with traditionally slow and manual security checks, has created gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing toolchains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
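A deployment bill of materials, at minimum, records which artifact reached which environment and when, with a digest that can later be checked against what is actually running. A minimal sketch; the field names and format are illustrative, not a standard DBOM schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def dbom_entry(artifact_name, artifact_bytes, environment):
    """Record what was deployed, where, and a tamper-evident digest."""
    return {
        "artifact": artifact_name,
        "sha256": hashlib.sha256(artifact_bytes).hexdigest(),
        "environment": environment,
        "deployed_at": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical deployment event captured by the delivery pipeline.
entry = dbom_entry("payments-svc-1.4.2.jar", b"...binary...", "prod-eu")
record = json.dumps(entry, indent=2)
```

Emitting one such record per deployment gives auditors and incident responders a queryable history of what ran where, independent of the CI/CD tools that produced it.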
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms, and is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with OpsMx customers, leading design and architecture for strategic implementations. He is a frequent speaker and a well-known leader in continuous delivery and in integrating security into software delivery.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides an introduction to UiPath Communications Mining, its importance, and an overview of the platform. You will acquire a good understanding of the phases in Communications Mining as we go over the platform with you. Topics covered:
• Communications Mining overview
• Why is it important?
• How it can help today’s business, and the benefits
• Phases in Communications Mining
• Demo of the platform
• Q&A
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined tools from two critical Linux libraries: libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns, and DIAR helps you find such seeds.
- These are the slides of the talk given at the IEEE International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2022.
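DIAR's actual byte-ranking analysis is described in the paper; as a first intuition, "uninteresting" bytes are ones whose mutation never changes observed behaviour, so they can be dropped from a seed before fuzzing begins. A toy sketch of that intuition with a hypothetical coverage oracle (a real fuzzer would derive coverage signatures from instrumented executions):

```python
def interesting_bytes(seed, coverage):
    """Keep only byte positions whose value affects observed coverage.
    `coverage` is a stand-in oracle: bytes -> hashable coverage signature."""
    base = coverage(seed)
    keep = []
    for i in range(len(seed)):
        mutated = seed[:i] + bytes([seed[i] ^ 0xFF]) + seed[i + 1:]
        if coverage(mutated) != base:  # flipping this byte changed behaviour
            keep.append(i)
    return keep

# Toy target: only the first two bytes (a magic header) affect behaviour,
# so everything after them is dead weight in the seed.
def toy_coverage(data):
    return ("magic_ok" if data[:2] == b"MZ" else "magic_bad",)

positions = interesting_bytes(b"MZ....padding....", toy_coverage)
```

On this toy target only positions 0 and 1 survive, so a trimmed two-byte seed would drive the same behaviours while giving the fuzzer far fewer meaningless mutation sites.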
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End-to-end testing is a critical piece of ensuring quality and avoiding regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards more flexible and future-proof PHP development.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
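In FME, a directory-watcher trigger is configured in the Automations interface rather than coded by hand, but the concept is easy to see in a few lines: each polling pass reports files that were not present on any previous pass, and those new files are what fire the downstream actions. A stdlib-only sketch; the function and file names are illustrative:

```python
import os
import tempfile

def poll_new_files(directory, seen):
    """One pass of a directory-watcher trigger: report files not
    observed on any previous pass, then mark them as seen."""
    current = set(os.listdir(directory))
    new = sorted(current - seen)
    seen |= current
    return new

# Simulate a watched folder receiving a new dataset.
watched = tempfile.mkdtemp()
seen = set()
open(os.path.join(watched, "parcels.geojson"), "w").close()
first = poll_new_files(watched, seen)   # the new file fires the trigger
second = poll_new_files(watched, seen)  # nothing new, so no action fires
```

Schedules and manual triggers follow the same shape: some external condition produces an event, and the automation maps that event onto workspace runs and other actions.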
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums, so many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company’s observability stack.
While the dev and ops silos continue to crumble, many organizations still relegate monitoring and observability to the purview of ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on: