My introductory talk at GlobusWorld 2012, in which I review the progress made with Globus Online during the last year, and introduce our new Globus Storage, Globus Collaborate, and Globus Integrate services. A great demo showed it all working.
Introduction to the Globus SaaS (GlobusWorld Tour - STFC), by Globus
This document summarizes a presentation about the Globus data management platform. It includes an agenda covering an introduction to the Globus Software as a Service and Platform as a Service, automating research data workflows, facilitating collaboration, and building services. There are demonstrations of file transfers, data sharing, publication, and high assurance endpoints. The sustainability model is discussed, with standard and high assurance subscriptions, branded websites, premium storage connectors, and identity providers. Support resources like documentation, email lists, and professional services are also mentioned.
We provide a summary review of Globus features targeted at those new to Globus. We demonstrate how to transfer and share data, and install a Globus Connect Personal endpoint on your laptop.
Presented at a workshop at Oak Ridge National Laboratory on June 22, 2022.
Globus is a non-profit data management service that allows users to transfer, share, and access data across different storage systems and platforms through software-as-a-service. It has transferred over 1.34 exabytes of data and aims to unify access to research data across different tiers of storage through connectors, APIs, and user interfaces. Globus ensures secure data transfers and sharing by using user identities, access controls, encryption, and audit logging without storing user credentials or data.
Globus Online aims to simplify big data transfer and sharing by providing a user-friendly interface similar to Dropbox. It allows reliable and secure transfer of large research files between different storage systems with high performance. Users can easily share data with others without needing to move files to cloud storage. Globus Online uses Globus Connect software to integrate various local storage systems and networks with its centralized platform, enabling Dropbox-like simplicity for scientific data management and collaboration.
Introduction to Data Transfer and Sharing for Researchers, by Globus
We will provide a summary review of Globus features targeted at those new to Globus. We will present various use cases that illustrate the power of Globus data sharing capabilities.
Introduction to Globus: Research Data Management Software at the ALCF, by Globus
This document provides an introduction and overview of Globus, a research data management platform. It discusses how Globus can be used to move, share, discover, and reproduce data across different storage tiers and resources. Globus delivers fast and reliable big data transfer, sharing, and platform services directly from existing storage systems via software-as-a-service using existing identities, with the goal of unifying access to data across different locations and resources. The document demonstrates how Globus can be used via its web interface, command line interface, REST API, and as a platform for building other research applications and workflows.
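The REST API access mode mentioned above can be made concrete with a short sketch. This is an illustrative reconstruction of the task document the Globus Transfer API accepts when submitting a transfer, not an authoritative client: the endpoint UUIDs and paths are placeholders, and a real submission additionally requires an OAuth access token and a `submission_id` obtained from the API.

```python
# Sketch of the JSON task document submitted to the Globus Transfer
# REST API. Endpoint UUIDs and paths below are placeholders.
import json

def build_transfer_task(src_ep, dst_ep, items):
    """Build a Transfer API task document (DATA_TYPE "transfer").

    `items` is a list of (source_path, destination_path, recursive) tuples.
    """
    return {
        "DATA_TYPE": "transfer",
        "source_endpoint": src_ep,
        "destination_endpoint": dst_ep,
        "DATA": [
            {
                "DATA_TYPE": "transfer_item",
                "source_path": src,
                "destination_path": dst,
                "recursive": recursive,   # True for directory transfers
            }
            for src, dst, recursive in items
        ],
    }

task = build_transfer_task(
    "ddb59aef-0000-0000-0000-placeholder",   # placeholder source endpoint
    "ddb59af0-0000-0000-0000-placeholder",   # placeholder destination endpoint
    [("/share/godata/file1.txt", "/~/file1.txt", False)],
)
print(json.dumps(task, indent=2))
```

The same document shape underlies the web interface and CLI: both are front ends that assemble a task like this and submit it on the user's behalf.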
A talk at NASA Goddard, February 27, 2013
Large and diverse data result in challenging data management problems that researchers and facilities are often ill-equipped to handle. I propose a new approach to these problems based on the outsourcing of research data management tasks to software-as-a-service providers. I argue that this approach can both achieve significant economies of scale and accelerate discovery by allowing researchers to focus on research rather than mundane information technology tasks. I present early results with the approach in the context of Globus Online.
We provide a summary review of Globus features targeted at those new to Globus. We demonstrate how to transfer and share data, and install a Globus Connect Personal endpoint on your laptop.
Simplified Research Data Management with the Globus Platform, by Globus
Overview of the Globus research data management platform, as presented at the Fall 2018 Membership Meeting of the Coalition for Networked Information (CNI), held in Washington, D.C., December 10-11, 2018
This document summarizes a presentation about providing next-generation sequencing analysis capabilities using Globus Genomics. It outlines challenges with current manual approaches to sequencing data analysis, including difficulties moving large datasets between locations and maintaining complex analysis scripts. The presentation introduces Globus Genomics, which uses Globus data transfer services integrated with Galaxy to provide a workflow-based system for sequencing analysis without requiring local installation or configuration. Key benefits include on-demand access to scalable cloud resources, ability to easily modify and reuse analysis workflows, and integration with data sources. The system aims to accelerate genomic research by automating and simplifying analysis.
Globus Genomics: How Science-as-a-Service is Accelerating Discovery (BDT310) ..., by Amazon Web Services
In this talk, hear about two high-performance research services developed and operated by the Computation Institute at the University of Chicago, running on AWS. Globus.org, a high-performance, reliable, robust file transfer service, has over 10,000 registered users who have moved over 25 petabytes of data using the service. The Globus service is operated entirely on AWS, leveraging Amazon EC2, Amazon EBS, Amazon S3, Amazon SES, Amazon SNS, and other services. Globus Genomics is an end-to-end next-gen sequencing analysis service with state-of-the-art research data management capabilities. Globus Genomics uses Amazon EC2 for scaling out analysis, Amazon EBS for persistent storage, and Amazon S3 for archival storage. Attend this session to learn how to move data quickly at any scale, as well as how to use genomic analysis tools and pipelines for next-generation sequencers using Globus on AWS.
Delivering a Campus Research Data Service with Globus, by Ian Foster
Keynote talk at the 2014 GlobusWorld conference (www.globusworld.org). Reviews science success stories, new features introduced over the past year, status of adoption, and our sustainability plans. Previews our new publication service.
This tutorial was given at the 2019 GlobusWorld Conference in Chicago, IL, by Globus Head of Products Rachana Ananthakrishnan and Director of Customer Engagement Greg Nawrocki.
Introduction to Globus (GlobusWorld Tour West), by Globus
This document introduces Globus, which provides fast and reliable data transfer, sharing, and platform services across different storage systems and resources. It does this through software-as-a-service that uses existing user identities, with the goal of unifying access to data across different tiers like HPC, storage, cloud, and personal resources. Key features include secure data transfers without moving files, access control and sharing capabilities, and tools for building automations and integrating with science gateways. It also discusses options for handling protected data like health information with additional security controls and business agreements.
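The access-control and sharing capabilities described above work by attaching access rules to a shared (guest) collection. A minimal sketch, assuming the documented shape of a Globus Transfer API access rule; the identity ID and path here are placeholders:

```python
# Sketch of an access rule ("ACL") document of the kind the Globus
# Transfer API uses to grant a collaborator access to one folder on a
# shared collection. Identity ID and path below are placeholders.
def build_access_rule(identity_id, path, permissions="r"):
    """Build a Transfer API access rule granting `permissions` on `path`."""
    return {
        "DATA_TYPE": "access",
        "principal_type": "identity",   # could also be "group"
        "principal": identity_id,       # Globus identity UUID of the grantee
        "path": path,                   # directory paths end with "/"
        "permissions": permissions,     # "r" (read) or "rw" (read-write)
    }

rule = build_access_rule("0f30b16e-0000-0000-0000-placeholder", "/shared/project1/")
print(rule["permissions"])  # prints "r"
```

The key point the summary makes holds here: granting access is a metadata operation on the rule set, so no files are copied or moved when data is shared.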
GlobusWorld 2021 Tutorial: Introduction to Globus, by Globus
An introduction to the core features of the Globus data management service. This tutorial was presented at the GlobusWorld 2021 conference in Chicago, IL, by Greg Nawrocki.
This presentation is by Ian Foster, director of the Computation Institute at The University of Chicago. It was given at the Great Plains Network Annual Meeting, on May 29, 2013.
For more information on Globus Online, visit globusonline.org.
"What would a Dropbox for science look like?" asks Foster. "It should be trivial to collect, move, sync, share, analyze, annotate, publish, search, backup, and archive Big Data. But in reality it's often very challenging."
Globus Online, a software as a service for data management, solves these problems. This slideshow explains how Globus Online does that for universities and laboratories around the world.
This document discusses building a "discovery cloud" to accelerate scientific discovery through on-demand computing. It proposes identifying time-consuming research activities that can be automated and outsourced as software-as-a-service. This would achieve economies of scale through leveraging infrastructure-as-a-service. The goal is to create a great user experience for scientific data management similar to consumer services like Dropbox. It also discusses integrating services like Globus for data management and Galaxy for analysis to provide flexible and scalable genomics analysis.
Science as a Service: How On-Demand Computing can Accelerate Discovery, by Ian Foster
This document discusses the potential for "Science as a Service" by leveraging on-demand computing capabilities. It notes that most research labs have limited resources, so automation and outsourcing are needed to apply sophisticated methods to larger datasets. The author proposes building a "discovery cloud" by identifying time-consuming research activities that can be automated and delivered as software/platform/infrastructure as a service. This would help accelerate scientific discovery. Globus is highlighted as an example of a platform providing data management services using a software as a service model.
Science Services and Science Platforms: Using the Cloud to Accelerate and Dem..., by Ian Foster
Ever more data- and compute-intensive science makes computing increasingly important for research. But for advanced computing infrastructure to benefit more than the scientific 1%, we need new delivery methods that slash access costs, new sustainability models beyond direct research funding, and new platform capabilities to accelerate the development of new, interoperable tools and services.
The Globus team has been working towards these goals since 2010. We have developed software-as-a-service methods that move complex and time-consuming research IT tasks out of the lab and into the cloud, thus greatly reducing the expertise and resources required to use them. We have demonstrated a subscription-based funding model that engages research institutions in supporting service operations. And we are now also showing how the platform services that underpin Globus applications can accelerate the development and use of an integrated ecosystem of advanced science applications, such as NCAR’s Research Data Archive and OSG Connect, thus enabling access to powerful data and compute resources by many more people than is possible today.
In this talk, I introduce Globus services and the underlying Globus platform. I present representative applications and discuss opportunities that this platform presents for both small science and large facilities.
Globus: Research Data Management as Service and Platform (PEARC17), by Mary Bass
Scientists have embraced the use of specialized cloud-hosted services to perform data management operations. Globus offers a suite of data and user management capabilities to the community, encompassing data transfer and sharing, user identity and authorization, and data publication. Globus capabilities are accessible via both a web browser and REST APIs. Web access allows Globus to address the needs of research labs through a software-as-a-service model; the newer REST APIs address the needs of developers of research services, who can now use Globus as a platform, outsourcing complex user and data management tasks to Globus cloud-hosted services. Here we review Globus capabilities and outline how it is being applied as a platform for scientific services. Presentation by Steve Tuecke from The University of Chicago. Steve is Globus Founder and Project Lead.
BHL in the Cloud: A Pilot Project with DuraCloud, by Chris Freeland
The Biodiversity Heritage Library (BHL) conducted a pilot project using the DuraCloud cloud storage service to evaluate its applicability for the BHL's large-scale digitization activities. The project involved transferring over 10 terabytes of content from the Internet Archive to DuraCloud. While cloud storage could provide benefits like scalability and redundancy, the BHL faces challenges around funding, technical skills of partners, and cultural preferences for control over digital materials. The pilot demonstrated that cloud is viable for storage and transfer but not a perfect solution, as large file sizes in the BHL corpus pose problems. Future uses of cloud infrastructure will depend on the needs and abilities of BHL partners.
The document discusses Google Cloud Platform and its capabilities for big data and analytics. It notes that Google Cloud Platform is built on Google's infrastructure which powers its own services and has 17 years of experience building cloud infrastructure. It then summarizes several key services including Compute Engine, App Engine, BigQuery, Cloud Dataflow, and Cloud Dataproc that can be used for infrastructure, platforms, software, as well as big data, analytics, and machine learning.
Video: https://youtu.be/LuVT0jsIrZk
Global Services for Global Science (March 2023), by Ian Foster
We are on the verge of a global communications revolution based on ubiquitous high-speed 5G, 6G, and free-space optics technologies. The resulting global communications fabric can enable new ultra-collaborative research modalities that pool sensors, data, and computation with unprecedented flexibility and focus. But realizing these modalities requires new services to overcome the tremendous friction currently associated with any actions that traverse institutional boundaries. The solution, I argue, is new global science services to mediate between user intent and infrastructure realities. I describe our experiences building and operating such services and the principles that we have identified as needed for successful deployment and operations.
The Earth System Grid Federation: Origins, Current State, Evolution, by Ian Foster
The Earth System Grid Federation (ESGF) is a distributed network of climate data servers that archives and shares model output data used by scientists worldwide. ESGF has led data archiving for the Coupled Model Intercomparison Project (CMIP) since its inception. The ESGF Holdings have grown significantly from CMIP5 to CMIP6 and are expected to continue growing rapidly. A new ESGF2 project funded by the US Department of Energy aims to modernize ESGF to handle exabyte scale data volumes through a new architecture based on centralized Globus services, improved data discovery tools, and data proximate computing capabilities.
Better Information Faster: Programming the Continuum, by Ian Foster
This document discusses the computing continuum and efforts to enable better information faster through computation. It provides examples of how techniques like executing tasks closer to data sources or on specialized hardware can significantly accelerate applications. Programming models and managed services are explored for specifying and executing workloads across diverse infrastructure. There are still open questions around optimizing networks, algorithms, and applications for the computing continuum.
ESnet6 provides an ultra-fast and reliable network that enables new smart instruments for 21st century science. The network capacity has increased dramatically over time, with 2022 bandwidth being 500,000 times greater than 1993. This network allows rapid data transfer between facilities, such as replicating 7 petabytes of climate data between three labs. It also enables fast assembly and use of new instruments like high energy diffraction microscopy that can perform an analysis in 31 seconds. The integrated research infrastructure provided by Globus further supports use of remote resources and smart instruments that will drive scientific discovery.
Linking Scientific Instruments and Computation, by Ian Foster
[Talk presented at Monterey Data Conference, August 31, 2022]
Powerful detectors at modern experimental facilities routinely collect data at multiple GB/s. Online analysis methods are needed to enable the collection of only interesting subsets of such massive data streams, such as by explicitly discarding some data elements or by directing instruments to relevant areas of experimental space. Thus, methods are required for configuring and running distributed computing pipelines—what we call flows—that link instruments, computers (e.g., for analysis, simulation, AI model training), edge computing (e.g., for analysis), data stores, metadata catalogs, and high-speed networks. We review common patterns associated with such flows and describe methods for instantiating these patterns. We present experiences with the application of these methods to the processing of data from five different scientific instruments, each of which engages powerful computers for data inversion, machine learning model training, or other purposes. We also discuss implications of such methods for operators and users of scientific facilities.
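The flow patterns described above can be sketched, in miniature, as an ordered pipeline of named steps. The step names and actions below are hypothetical stand-ins for illustration only; a real deployment would invoke transfer, compute, and catalog services rather than local functions:

```python
# Minimal sketch of a "flow" as an ordered pipeline of (name, action) steps.
# All step names and actions here are hypothetical, not Globus APIs.

def run_flow(steps, payload):
    """Run each step in order, passing the evolving payload along."""
    for name, action in steps:
        payload = action(payload)
    return payload

# Hypothetical stand-ins for instrument -> analysis -> catalog stages.
flow = [
    ("capture", lambda d: {**d, "frames": 128}),             # detector output
    ("reduce",  lambda d: {**d, "kept": d["frames"] // 4}),  # keep interesting subset
    ("catalog", lambda d: {**d, "indexed": True}),           # register metadata
]

result = run_flow(flow, {"run_id": "scan-001"})
```

A real flow would add error handling and retries at each step, since the abstract's point is that these pipelines must run reliably across institutional boundaries.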
A Global Research Data Platform: How Globus Services Enable Scientific Discovery (Ian Foster)
Talk in the National Science Data Fabric (NSDF) Distinguished Speaker Series
The Globus team has spent more than a decade developing software-as-a-service methods for research data management, available at globus.org. Globus transfer, sharing, search, publication, identity and access management (IAM), automation, and other services enable reliable, secure, and efficient managed access to exabytes of scientific data on tens of thousands of storage systems. For developers, flexible and open platform APIs greatly reduce the cost of developing and operating customized data distribution, sharing, and analysis applications. With 200,000 registered users at more than 2,000 institutions, more than 1.5 exabytes and 100 billion files handled, and hundreds of registered applications and services, the services that comprise the Globus platform have become essential infrastructure for many researchers, projects, and institutions. I describe the design of the Globus platform, present illustrative applications, and discuss lessons learned for cyberinfrastructure software architecture, dissemination, and sustainability.
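As one illustration of the platform-API style described above, a transfer request to a REST service can be expressed as a plain JSON document. The sketch below approximates the shape of a Globus Transfer task document; the field names should be verified against the current Transfer API reference, and the endpoint names and submission ID are made up:

```python
def make_transfer_task(submission_id, src_ep, dst_ep, items, label=None):
    """Build a Transfer-API-style task document (a JSON-serializable dict).

    Field names approximate the Globus Transfer API's task submission
    document; verify the exact shape against the current API reference
    before using this in earnest.
    """
    return {
        "DATA_TYPE": "transfer",
        "submission_id": submission_id,
        "source_endpoint": src_ep,
        "destination_endpoint": dst_ep,
        "label": label or "example transfer",
        "DATA": [
            {
                "DATA_TYPE": "transfer_item",
                "source_path": src,
                "destination_path": dst,
                "recursive": recursive,
            }
            for src, dst, recursive in items
        ],
    }

# Hypothetical endpoints and submission ID, for illustration only.
task = make_transfer_task(
    "0123-abcd",  # a real submission_id comes from the service
    "my#source", "my#dest",
    [("/data/run1/", "/backup/run1/", True)],
)
```

The document would then be POSTed to the service with an access token; the point of the platform model is that this one JSON document replaces a site-specific transfer script.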
Video is at https://www.youtube.com/watch?v=p8pCHkFFq1E
Daniel Lopresti, Bill Gropp, Mark D. Hill, Katie Schuman, and I put together a white paper on "Building a National Discovery Cloud" for the Computing Community Consortium (http://cra.org/ccc). I presented these slides at a Computing Research Association "Best Practices on using the Cloud for Computing Research Workshop" (https://cra.org/industry/events/cloudworkshop/).
Abstract from White Paper:
The nature of computation and its role in our lives have been transformed in the past two decades by three remarkable developments: the emergence of public cloud utilities as a new computing platform; the ability to extract information from enormous quantities of data via machine learning; and the emergence of computational simulation as a research method on par with experimental science. Each development has major implications for how societies function and compete; together, they represent a change in technological foundations of society as profound as the telegraph or electrification. Societies that embrace these changes will lead in the 21st Century; those that do not will decline in prosperity and influence. Nowhere is this stark choice more evident than in research and education, the two sectors that produce the innovations that power the future and prepare a workforce able to exploit those innovations, respectively. In this article, we introduce these developments and suggest steps that the US government might take to prepare the research and education system for its implications.
Big Data, Big Computing, AI, and Environmental Science (Ian Foster)
I presented to the Environmental Data Science group at UChicago, with the goal of getting them excited about the opportunities inherent in big data, big computing, and AI, and to think about how to collaborate with Argonne in those areas. We had a great and long conversation about Takuya Kurihana's work on unsupervised learning for cloud classification. I also mentioned our work making NASA and CMIP data accessible on AI supercomputers.
The document discusses using artificial intelligence (AI) to accelerate materials innovation for clean energy applications. It outlines six elements needed for a Materials Acceleration Platform: 1) automated experimentation, 2) AI for materials discovery, 3) modular robotics for synthesis and characterization, 4) computational methods for inverse design, 5) bridging simulation length and time scales, and 6) data infrastructure. Examples of opportunities include using AI to bridge simulation scales, assist complex measurements, and enable automated materials design. The document argues that a cohesive infrastructure is needed to make effective use of AI, data, computation, and experiments for materials science.
In 2001, as early high-speed networks were deployed, George Gilder observed that “when the network is as fast as the computer's internal links, the machine disintegrates across the net into a set of special purpose appliances.” Two decades later, our networks are 1,000 times faster, our appliances are increasingly specialized, and our computer systems are indeed disintegrating. As hardware acceleration overcomes speed-of-light delays, time and space merge into a computing continuum. Familiar questions like “where should I compute,” “for what workloads should I design computers,” and “where should I place my computers” seem to allow for a myriad of new answers that are exhilarating but also daunting. Are there concepts that can help guide us as we design applications and computer systems in a world that is untethered from familiar landmarks like center, cloud, edge? I propose some ideas and report on experiments in coding the continuum.
Data Tribology: Overcoming Data Friction with Cloud Automation (Ian Foster)
A talk at the CODATA/RDA meeting in Gaborone, Botswana. I made the case that the biggest barriers to effective data sharing and reuse are often those associated with "data friction" and that cloud automation can be used to overcome those barriers.
The image on the first slide shows a few of the more than 20,000 active Globus endpoints.
Research Automation for Data-Driven Discovery (Ian Foster)
This document discusses research automation and data-driven discovery. It notes that data volumes are growing much faster than computational power, creating a productivity crisis in research. However, most labs have limited resources to handle these large data volumes. The document proposes applying lessons from industry to create cloud-based science services with standardized APIs that can automate and outsource common tasks like data transfer, sharing, publishing, and searching. This would help scientists focus on their core research instead of computational infrastructure. Examples of existing services from Argonne National Lab and the University of Chicago Globus project are provided. The goal is to establish robust, scalable, and persistent cloud platforms to help address the challenges of data-driven scientific discovery.
Scaling collaborative data science with Globus and Jupyter (Ian Foster)
The Globus service simplifies the utilization of large and distributed data on the Jupyter platform. Ian Foster explains how to use Globus and Jupyter to seamlessly access notebooks using existing institutional credentials, connect notebooks with data residing on disparate storage systems, and make data securely available to business partners and research collaborators.
Deep learning is finding applications in science such as predicting material properties. DLHub is being developed to facilitate sharing of deep learning models, data, and code for science. It will collect, publish, serve, and enable retraining of models on new data. This will help address challenges of applying deep learning to science like accessing relevant resources and integrating models into workflows. The goal is to deliver deep learning capabilities to thousands of scientists through software for managing data, models and workflows.
Plenary talk at the international Synchrotron Radiation Instrumentation conference in Taiwan, on work with great colleagues Ben Blaiszik, Ryan Chard, Logan Ward, and others.
Rapidly growing data volumes at light sources demand increasingly automated data collection, distribution, and analysis processes, in order to enable new scientific discoveries while not overwhelming finite human capabilities. I present here three projects that use cloud-hosted data automation and enrichment services, institutional computing resources, and high-performance computing facilities to provide cost-effective, scalable, and reliable implementations of such processes. In the first, Globus cloud-hosted data automation services are used to implement data capture, distribution, and analysis workflows for Advanced Photon Source and Advanced Light Source beamlines, leveraging institutional storage and computing. In the second, such services are combined with cloud-hosted data indexing and institutional storage to create a collaborative data publication, indexing, and discovery service, the Materials Data Facility (MDF), built to support a host of informatics applications in materials science. The third integrates components of the previous two projects with machine learning capabilities provided by the Data and Learning Hub for science (DLHub) to enable on-demand access to machine learning models from light source data capture and analysis workflows, and provides simplified interfaces to train new models on data from sources such as MDF on leadership scale computing resources. I draw conclusions about best practices for building next-generation data automation systems for future light sources.
Team Argon proposes a commons platform using reusable components to promote continuous FAIRness of data. These components include Globus Connect Server for standardized data access and transfer across storage systems, Globus Auth for authentication and authorization, and BDBags for exchange of query results and cohorts using a common manifest format. Together these aim to provide uniform, secure, and reliable access, transfer, and sharing of data while supporting identification, search, and virtualization of derived data products.
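BDBags build on the BagIt convention of listing payload checksums in a manifest file, which is what makes a bag's contents verifiable on exchange. A minimal, illustrative sketch of producing such manifest lines (this is the BagIt manifest format, not the bdbag library's API):

```python
import hashlib

def sha256_manifest(files):
    """Build BagIt-style manifest lines ("<checksum>  <path>") from a
    {relative_path: payload_bytes} mapping, as a BDBag-like bag would
    record for its data/ payload directory. Illustrative sketch only.
    """
    lines = []
    for path, data in sorted(files.items()):
        digest = hashlib.sha256(data).hexdigest()
        lines.append(f"{digest}  data/{path}")
    return "\n".join(lines)

# Hypothetical payload: a query-result cohort exported as CSV.
manifest = sha256_manifest({"cohort.csv": b"id,age\n1,42\n"})
```

A receiver recomputes the digests over the payload files and compares against the manifest, which is how bags support reliable exchange of query results and cohorts.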
This document discusses lessons learned for achieving interoperability. It recommends having a clear purpose, starting with basic conventions like identifiers, monitoring commitments to build trust, and focusing on outward-facing interoperability through simple APIs and platforms rather than full software stacks. Adopting industry practices, such as standard authentication methods and cloud-based platforms, is also advised to promote rapid development and distribution of applications.
We presented these slides at the NIH Data Commons kickoff meeting, showing some of the technologies that we propose to integrate in our "full stack" pilot.
Going Smart and Deep on Materials at ALCF (Ian Foster)
As we acquire large quantities of science data from experiment and simulation, it becomes possible to apply machine learning (ML) to those data to build predictive models and to guide future simulations and experiments. Leadership Computing Facilities need to make it easy to assemble such data collections and to develop, deploy, and run associated ML models.
We describe and demonstrate here how we are realizing such capabilities at the Argonne Leadership Computing Facility. In our demonstration, we use large quantities of time-dependent density functional theory (TDDFT) data on proton stopping power in various materials maintained in the Materials Data Facility (MDF) to build machine learning models, ranging from simple linear models to complex artificial neural networks, that are then employed to manage computations, improving their accuracy and reducing their cost. We highlight the use of new services being prototyped at Argonne to organize and assemble large data collections (MDF in this case), associate ML models with data collections, discover available data and models, work with these data and models in an interactive Jupyter environment, and launch new computations on ALCF resources.
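The "simple linear models" mentioned above can be illustrated with a closed-form one-dimensional least-squares fit. The data points below are synthetic stand-ins, not actual TDDFT stopping-power values:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form, 1-D)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var          # slope
    b = my - a * mx        # intercept
    return a, b

# Synthetic (feature, stopping-power-like) pairs; values are made up.
a, b = fit_line([1.0, 2.0, 3.0, 4.0], [2.1, 4.0, 6.1, 8.0])
```

In the work described, such simple baselines sit alongside neural networks; the value of a closed-form fit is that it is cheap enough to steer which expensive simulations to run next.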
3. Run a research project from a coffee shop?
www.globustoolkit.org 3 www.globusonline.org
4. Towards “research IT as a service”
• Dark Energy Survey
• Galaxy genomics
• LIGO observatory
• SBGrid structural biology consortium
• NCAR climate data applications
• Land use change; economics
5. Let’s rethink how we provide research IT
Accelerate discovery and innovation worldwide by providing research IT as a service.
Leverage the cloud to:
• provide millions of researchers with unprecedented access to powerful tools;
• enable a massive shortening of cycle times in time-consuming research processes; and
• reduce research IT costs dramatically via economies of scale.
6. 2011: Grid meets Cloud
• Globus Toolkit (globustoolkit.org): Build the Grid. Components for building custom grid solutions.
• Globus Online (globusonline.org): Use the Grid. Reliable file transfer, delivered as Software-as-a-Service.
7. Globus Online – The first 18 months
• >4,500 registered users; adding ~300/month
• >3.5 PB moved; averaging ~50 TB/week
[Charts: Data Transferred (TB/week), Jan 2011 to Feb 2012; Support Requests Received, from Dec 2010]
8. Globus Online – The first 18 months
• 514 active endpoints (year-to-date), including 62 at Tier 1 research institutions
• >390,000,000 files moved
• >100,000 transfer requests
• 99.9% uptime in 2012
• 18 identity providers supported, using multiple protocols: X.509, MyProxy OAuth, OpenID
11. 2012: Move. Store. Collaborate.
globus online
• SaaS: Globus Transfer, Globus Storage, Globus Collaborate
• Platform: Globus Integrate, Globus Connect Multi User, Globus Connect, Globus Toolkit, Globus Nexus
12. Towards “research IT as a service”
Research Data Management-as-a-Service
• SaaS: Globus Transfer, Globus Storage, Globus Collaborate, Globus Catalog, …
• PaaS: Globus Integrate
13. Globus Storage: For when you want to …
• Place your data where you want
• Access it from anywhere via different protocols (Globus Transfer, HTTP/REST, desktop sync)
• Update it, version it, and take snapshots (Globus Storage volume)
• Share versions with whom you want
• Synchronize among storage locations (commercial storage service, national research computing center, campus center)
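The "synchronize among storage locations" capability amounts to deciding which files differ between two locations. A toy sketch of that decision using per-file checksums (illustrative only, not Globus code):

```python
def plan_sync(source, dest):
    """Decide which files to copy from source to dest.

    Inputs are {path: checksum} maps for each location; a file is copied
    when it is missing at the destination or its checksum differs.
    Returns a sorted list of paths to copy. Illustrative sketch only.
    """
    return sorted(
        path for path, csum in source.items()
        if dest.get(path) != csum
    )

todo = plan_sync(
    {"a.dat": "111", "b.dat": "222", "c.dat": "333"},
    {"a.dat": "111", "b.dat": "000"},  # b.dat is stale, c.dat is missing
)
```

Comparing checksums rather than timestamps makes the sync decision robust when the two locations sit behind different storage systems with different clocks.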
14. Globus Collaborate: For when you want to
Join with a few or many people to:
• Share documents
• Track tasks
• Send email
• Share data
• Do whatever
With:
• Common groups
• Delegated management
15. Globus Integrate: For when you want to…
Write programs that access/manage user identities, profiles, groups, resources, and data …
• Services: Globus Transfer, Globus Storage, Globus Collaborate
• Platform: Globus Integrate, Globus Connect Multi User, Globus Connect, Globus Toolkit, Globus Nexus
… via REST APIs and command-line programs
16. Earth System Grid – Portal integration
• Outsource data transfer to Globus
  – Data download to user machine from search
  – Data transfer to another server by user
  – Replication of data between sites by administrator
• No ESGF client software needed
18. Let’s take a look at what’s coming…
Traumatic Brain Injury (TBI)
Diffusion Tensor Imaging (DTI)
19. Let’s take a look at what’s coming…
Demo walkthrough (DTI Group: Kyle, Bryce), spanning UChicago, SDSC Cloud object store, Amazon S3, and Cornell Red Cloud storage:
• Globus Storage: create a "TBI" volume and share it with the TBI group
• Globus Connect (Kyle): move MRI files to the TBI shared volume
• Globus Nexus: add Bryce to the TBI collaboration
• Globus Transfer: copy TBI data to the compute cluster for DTI analysis
• Globus Transfer: move DTI results to the shared volume
• Globus Connect (Bryce's laptop): move DTI results to PADS
• Globus Storage: create a snapshot to share with the group
• Globus Collaborate: publish DTI data to the TBI web site
20. Globus Toolkit update
GT releases 2011-2012:
• 5/4/11: GT 5.1.0
• 5/18/11: GT 5.0.4
• 8/2/11: GT 5.1.1
• 11/16/11: GT 5.1.3 (5.2 beta)
• 12/15/11: GT 5.2.0
• 3/8/12: GT 5.0.5 (likely the last 5.0.x release)
• 4/5/12: GT 5.2.1rc1 (5.2.1 final planned for April)
Highlights:
• Native packaging
• GridFTP: DCSC support
• New GRAM5 version, in use on OSG
21. Globus Toolkit update – GridFTP
• GT 5.2.0
  – Support for DCSC command
    • Allows a client to specify the credentials used to secure the data channel connection
    • Utilized by Globus Transfer for seamless data movement across multiple security domains
  – Server administrators may now restrict client access to a set of paths
• Coming in 5.2.1
  – MLSC: stream directory listings over the control channel
    • Enables a simple (control channel-only) implementation of the Globus Storage ftp server
    • Can help directory listings for GridFTP servers behind restrictive firewalls
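Restricting client access to a set of paths boils down to a normalized prefix check, so that "../" tricks cannot escape the allowed roots. A toy sketch (illustrative only, not GridFTP's actual implementation):

```python
import posixpath

def is_allowed(requested, allowed_roots):
    """Return True if the normalized path sits under one of the allowed
    roots. Normalizing first defeats '..' escapes. Illustrative sketch,
    not GridFTP's actual path-restriction code.
    """
    norm = posixpath.normpath(requested)
    return any(
        norm == root or norm.startswith(root.rstrip("/") + "/")
        for root in allowed_roots
    )

ok = is_allowed("/data/project1/file.txt", ["/data"])
bad = is_allowed("/data/../etc/passwd", ["/data"])
```

The key detail is normalizing before the prefix test: a naive `startswith` on the raw string would accept the `/data/../etc/passwd` request above.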
22. Delivery plans
• Globus Transfer: generally available; service and Web UI enhancements continue
• Globus Storage: early release; generally available in Fall 2012
• Globus Collaborate: initial projects at UChicago; early release sometime before year-end 2012
• Globus Integrate: Transfer API available; APIs for Storage and Collaborate planned after app release
• Also: Globus Connect Multi User (GA), Globus Connect (GA), Globus Toolkit (GA, v5.2), Globus Nexus (alpha)
23. Delivery plans
• Globus Storage
– Current: Early access – contact us if interested
– Open beta planned in late Spring; GA in late Summer
– V1.0: UChicago and Amazon S3 object stores
• Globus Collaborate
– Current: Pre-release with two UChicago/CI groups
– First release planned around year-end
24. Premium offerings
Motivated by sustainability: resource cost recovery, user support, future development
• Globus Transfer plans (resource providers)
  – www.globusonline.org/premium-plans/
• Globus Storage premium plans
  – Plan variables include: size, object store type, durability, support level, personalization
• Globus Collaborate premium plans
  – Plan variables include: branding capabilities, support level, integration (e.g., identity)
26. The Globus team
Bryce Allen, Rachana Ananthakrishnan, Lisa Childers, Martin Feller, Tom Howe, Raj Kettimuthu, Jack Kordas, Lukasz Lacinski, Lee Liming, JP Navarro, Karl Pickett, Andrew Zich, Peter Zich
Genome data has grown 10,000-fold in 6 years(!); astronomy, social sciences, and HPC show similar growth. Big projects, e.g., LHC, have things under control. What about small and medium labs? They are struggling.
Small and medium businesses deal with these challenges by outsourcing many business functions to third-party "cloud" providers. Last year we introduced the seemingly farfetched notion that one could run a research project from a coffee shop…
We have been investigating over the past few years just what those services may be.
Not (particularly) computing as a service, but the IT functions that researchers need in order to function, including collaboration as a service.
An example of integrating Globus services, in this case Globus Transfer, using Globus Nexus. Mention that branding is/will be available for every Globus service. 100 GB/sec.
Moving beyond the Transfer service… introduce the planned services here. Also introduce the new naming here: today's "Globus Online" becomes "Globus Transfer", moving to "Globus" + service name.