Kubeflow is an open-source project that makes it easy to deploy and manage machine learning workloads on Kubernetes. The Kubeflow organization on GitHub contains many repositories that provide tools and services for Kubeflow, including the main Kubeflow deployment, documentation, examples, a deployment CLI, metadata tracking, testing infrastructure, common libraries, the dashboard frontend, machine learning pipelines, TensorFlow and PyTorch training operators, hyperparameter tuning, and serverless inference.
End to End Machine Learning using Kubeflow - Build, Train, Deploy and Manage (Animesh Singh)
With the sheer breadth of functionality that must be addressed in the machine learning world around building, training, serving, and managing models, getting it done in a consistent, composable, portable, and scalable manner is hard. The Kubernetes framework is well suited to address these issues, which is why it's a great foundation for deploying ML workloads, and Kubeflow is designed to take advantage of these benefits. In this talk, we are going to address how to make it easy for everyone to develop, deploy, and manage portable, scalable ML everywhere, and how to support the full machine learning lifecycle using open-source technologies like Kubeflow, TensorFlow, PyTorch, Tekton, Knative, Istio, and others. We are going to discuss how to enable distributed training of models, model serving, canary rollouts, drift detection, model explainability, metadata management, pipelines, and more. Additionally, we will discuss the Watson productization in progress based on Kubeflow Pipelines and Tekton, and point to Kubeflow Dojo materials and follow-on workshops.
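Of the capabilities listed above, drift detection lends itself to a small illustration. Below is a minimal stdlib-only sketch, not the detector Kubeflow actually ships: it flags drift when the mean of live traffic deviates from a training baseline by more than a few standard errors.

```python
import statistics

def detect_drift(baseline, live, z_threshold=3.0):
    """Toy drift check: flag drift when the mean of live traffic deviates
    from the baseline mean by more than z_threshold standard errors."""
    mu = statistics.mean(baseline)
    se = statistics.stdev(baseline) / (len(live) ** 0.5)
    return abs(statistics.mean(live) - mu) / se > z_threshold
```

In a real serving stack this check would run over logged inference payloads and, on a positive result, trigger a retraining pipeline.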
Deep dive into Kubeflow Pipelines, and details about Tekton backend implementation for KFP, including compiler, logging, artifacts and lineage tracking
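Conceptually, any KFP compiler (the Tekton backend included) must turn a step-dependency graph into an ordered set of tasks. A toy sketch of that scheduling step using stdlib `graphlib` follows; the step names are invented, and the real compiler emits a Tekton PipelineRun manifest with `runAfter` clauses rather than a flat list.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Toy pipeline DAG: step -> set of steps it depends on (names invented).
pipeline = {
    "preprocess": set(),
    "train": {"preprocess"},
    "evaluate": {"train"},
    "deploy": {"evaluate"},
}

def compile_order(dag):
    """Resolve the dependency graph into a valid execution order."""
    return list(TopologicalSorter(dag).static_order())
```

For this linear DAG the only valid order is preprocess, train, evaluate, deploy; branching DAGs admit several valid orders, which is why the Tekton backend expresses dependencies rather than a sequence.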
Kubeflow: Machine Learning in the Cloud for Everyone (Globant)
Speaker: Juan Camilo Díaz
Video: https://youtu.be/jfH93vdRmTk
Kubeflow makes implementing machine learning workflows on Kubernetes simple, portable, and scalable. Kubeflow is the toolkit that lets you implement machine learning processes, extending Kubernetes's capacity to run independent, configurable steps with purpose-specific libraries and frameworks.
KFServing - Serverless Model Inferencing (Animesh Singh)
Deep dive into KFServing, a serverless model inference platform built on top of Knative and Istio. It is part of the Kubeflow project and is deployed in production across organizations.
Kubeflow at Spotify, for the Kubeflow Summit (Josh Baer)
A lightning talk discussing some important challenges facing ML engineers and how the introduction of Kubeflow Pipelines will help.
Full slides w/ speaker notes here: https://docs.google.com/presentation/d/12dwhS_x4568G6XQjI9SEUacD-n4hFQczBcRBLdbHNEM/edit
ODSC webinar "Kubeflow, MLFlow and Beyond — augmenting ML delivery" (Stepan Pu..., Provectus)
What's a machine learning workflow? What open-source tools can you use to automate an ML workflow?
Reproducible ML pipelines in research and production with monitoring insights from live inference clusters could enable and accelerate the delivery of AI solutions for enterprises. There is a growing ecosystem of tools that augment researchers and machine learning engineers in their day to day operations.
Still, there are big gaps in the machine learning workflow when it comes to training dataset versioning, training performance and metadata tracking, integration testing, inference quality monitoring, bias detection, concept drift detection, and other aspects that prevent the adoption of AI in organizations of all sizes.
Kubernetes Helm - Boulder Kubernetes Meetup, June 2016 (Matt Butcher)
Kubernetes Helm is the package manager for Kubernetes. In this presentation, we walk through the basics of Helm, Tiller, and the Helm Charts file format.
Introduction to Helm, the package manager for Kubernetes: create and use Kubernetes charts, deploy releases on a cluster, and roll back your releases. Get, for instance, Prometheus up and running with just a single command.
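The core idea of a chart, rendering manifests from a set of values, can be mimicked in a few lines of stdlib Python. Helm itself uses Go templates, so this is only an analogy, and the manifest keys below are made up for illustration:

```python
from string import Template

# Rough analogy to Helm's values substitution (Helm uses Go templates).
manifest_tpl = Template(
    "apiVersion: apps/v1\n"
    "kind: Deployment\n"
    "metadata:\n"
    "  name: $release-server\n"
    "spec:\n"
    "  replicas: $replicas\n"
)

def render(values):
    """Substitute chart-style values into the manifest template."""
    return manifest_tpl.substitute(values)
```

Calling `render({"release": "prometheus", "replicas": 2})` yields a manifest with the release name baked in, much as `helm install` renders a chart before applying it to the cluster.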
The slides used during the mlops.community meetup on KFServing. We looked inside some popular model formats, like TensorFlow's SavedModel and PyTorch's Model Archiver, to understand how the weights of the network are saved there, along with the graph and signature concepts. We discussed the relevant resources of the deployment stack from Istio (the ingress gateway, the sidecar, and the virtual service) and Knative (the service and revisions), as well as Kubeflow and KFServing. Then we got into the design details of KFServing, its custom resources, and its controller. We spent some time on the monitoring stack: the metrics of the servable, as well as the model metrics, which end up in Prometheus. We looked at inference payload and prediction logging to observe drift and trigger retraining of the pipeline. Finally, a few words about the awesome community and the project roadmap on multi-model serving and the inference routing graph.
Future of Apache Flink Deployments: Containers, Kubernetes and More - Flink F... (Till Rohrmann)
Container technology is seeing ever-increasing adoption throughout many industries. Not only does this technology make your applications portable across different machines and operating systems, it also lets you scale applications in a matter of seconds. Moreover, it significantly simplifies and speeds up deployments, which decreases development and operation costs. Consequently, more and more Flink deployments run in containerized environments, which poses new challenges for Flink.
In this talk, we will take a look at Flink's current and future container support, which will make it a first-class citizen of the container world. First, we will explain how the new reactive execution mode solves the problem of seamless application scaling and how it blends in with any environment. Complementary to the reactive mode, the active execution mode demonstrates its strengths when it comes to changing workloads such as batch jobs. Last but not least, we will look beyond Flink itself and investigate how Flink can be used together with Kubernetes operators or data Artisans' Application Manager. We will conclude the talk with a short demo of Flink's native Kubernetes support and an outlook on future developments in the container realm.
Helm - the Better Way to Deploy on Kubernetes - Reinhard Nägele (Codemotion)
Helm is the official package manager for Kubernetes. This session introduces Helm and illustrates its advantages over "kubectl" with plain Kubernetes manifests. We will learn about its architecture and features, such as lifecycle management, parameterizability using Go templating, chart dependencies, etc. Demos will explain how all the bits and pieces work together.
Continuous Delivery for Kubernetes Apps with Helm and ChartMuseum (Codefresh)
View the full webinar here: https://codefresh.io/cd-helm-chartmuseum-lp/
Sign up for a FREE Codefresh account today: https://codefresh.io/codefresh-signup/
In this webinar, Stef Arnold from SUSE CaaS Platform and Josh Dolitsky from Codefresh talked about streamlining the delivery of Kubernetes-based applications using the open-source tools Helm and ChartMuseum. They showed how to use Helm to package your application as a chart, a deployable collection of Kubernetes files, and then how to release your chart to ChartMuseum, which serves as an artifact repository for Helm charts.
Streaming your Lyft Ride Prices - Flink Forward SF 2019 (Thomas Weise)
At Lyft we dynamically price our rides with a combination of various data sources, machine learning models, and streaming infrastructure for low latency, reliability, and scalability. Dynamic pricing allows us to quickly adapt to real-world changes and be fair to drivers (by, say, raising rates when there's a lot of demand) and fair to passengers (by, say, offering a cheaper rate for returning 10 minutes later). The streaming platform powers pricing by bringing together the best of two worlds using Apache Beam: ML algorithms in Python and Apache Flink as the streaming engine.
https://sf-2019.flink-forward.org/conference-program#streaming-your-lyft-ride-prices
These slides were used during a technical session for the Cloud-Native El Salvador community. They cover the basic Kubernetes components, some installers, and the main Kubernetes resources. The demo used the capabilities provided by the Horizontal Pod Autoscaler.
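The Horizontal Pod Autoscaler used in that demo scales on a documented rule: desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue), clamped to the configured bounds. A direct sketch of that rule (the metric values in the usage note are illustrative):

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    """Kubernetes HPA scaling rule:
    desired = ceil(currentReplicas * currentMetric / targetMetric),
    clamped to the configured min/max replica bounds."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))
```

For example, 3 replicas averaging 800m CPU against a 500m target scale up to 5, while the same replicas at 100m shrink to the minimum.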
Vertex AI brings all of the components of a production machine learning project into one platform in the cloud, based on Google's Kubeflow. It executes ML jobs through pipelines, a set of connected Docker images that perform different functions in the process of training and executing a machine learning model. In this session you will learn how to develop and deploy components of pipelines.
As your company accumulates more data, it's important to leverage all of it to develop new advanced machine learning models. And now you can scale Spark using Kubernetes. Thanks to the native integration between Apache Spark and Kubernetes, scaling data processing has never been easier. Apache Spark is a well-designed, high-level application that can increase your data processing speed and accuracy. It can handle batch and real-time analytics and data processing workloads, and it can be used from Java, Scala, Python, and R. Paired with Kubernetes, the most popular framework for managing compute resources, it becomes more efficient still. Unfortunately, running Apache Spark on Kubernetes can be a pain for first-time users.
Join cnvrg.io CTO Leah Kolben as she walks you through a step-by-step tutorial on how to run Spark on Kubernetes. You'll have Spark up and running on Kubernetes in just 30 minutes.
Running Spark on Kubernetes will help you:
Process larger amounts of data
Segment your data into sub groups
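For orientation, submitting Spark to Kubernetes goes through spark-submit with a `k8s://` master URL and a container image. The flags below are real Spark configuration options; the API server address, image tag, and application jar are placeholder values:

```python
# Sketch of a spark-submit invocation for Kubernetes.
# The flags are real Spark options; the API server URL, image name,
# and application jar below are placeholders.
def build_spark_submit(app_jar, image, k8s_master, executors=2):
    """Assemble the argument list for a cluster-mode submit to Kubernetes."""
    return [
        "spark-submit",
        "--master", f"k8s://{k8s_master}",
        "--deploy-mode", "cluster",
        "--conf", f"spark.executor.instances={executors}",
        "--conf", f"spark.kubernetes.container.image={image}",
        app_jar,
    ]

cmd = build_spark_submit(
    "local:///opt/spark/examples/jars/spark-examples.jar",
    "my-registry/spark:3.5",
    "https://10.0.0.1:6443",
)
```

The driver then runs as a pod in the cluster and requests executor pods from the API server, which is what makes Spark elastic on Kubernetes.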
Watch all our webinars at https://cnvrg.io/webinars-and-workshops/
Learn how to deploy your model to production in 30 minutes and:
• Reduce costs and manpower with auto scaling
• Load balance the traffic
• Monitor natively with Kubernetes
• Update your model continuously: canary deployments, blue/green deployments
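The canary pattern in the last bullet amounts to weighted routing between two model versions. A toy router follows; real deployments delegate this to Istio or Knative traffic splitting rather than application code:

```python
import random

def route(canary_weight, rng=random.random):
    """Send canary_weight fraction of requests to the canary model,
    the rest to the stable model (toy weighted routing)."""
    return "canary" if rng() < canary_weight else "stable"
```

With `canary_weight=0.1`, roughly 10% of requests hit the new model; gradually raising the weight while watching error rates completes the rollout, and dropping it to 0 is the rollback.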
The goal of data science teams is to build and deploy high-impact models. Data scientists prefer to focus on building algorithms, while data engineers focus on performance and productionizing machine learning. Kubernetes is an orchestration platform that can be deployed anywhere and can serve any kind of machine and deep learning environment. Kubernetes is a great tool for data scientists to stay productive and for data engineers to get production-ready results. In this free workshop you'll learn how to build your own Kubernetes cluster to use in your next machine learning pipeline.
Watch all our webinars at https://cnvrg.io/webinars-and-workshops/
A Million Ways of Deploying a Kubernetes Cluster (Jimmy Lu)
Developers and operators tend to build different ways to set up a Kubernetes cluster due to its complexity and openness. Most of the time, it's quite confusing for newcomers to get started with Kubernetes. In this short talk, I'll introduce some popular ways of deploying Kubernetes and briefly discuss the pros and cons of each solution.
Overview of Kubernetes and its use as a DevOps cluster management framework.
Problems with deployment via kube-up.sh, and improving Kubernetes on AWS via a custom CloudFormation template.
We will walk through the exploration, training and serving of a machine learning model by leveraging Kubeflow's main components. We will use Jupyter notebooks on the cluster to train the model and then introduce Kubeflow Pipelines to chain all the steps together, to automate the entire process.
Helm - Application Deployment Management for Kubernetes (Alexei Ledenev)
Use Helm to package and deploy a composed application to any Kubernetes cluster. Manage your releases easily over time and across multiple K8s clusters.
Apache Spark on Kubernetes - Anirudh Ramanathan and Tim Chen (Databricks)
Kubernetes is a fast growing open-source platform which provides container-centric infrastructure. Conceived by Google in 2014, and leveraging over a decade of experience running containers at scale internally, it is one of the fastest moving projects on GitHub with 1000+ contributors and 40,000+ commits. Kubernetes has first class support on Google Cloud Platform, Amazon Web Services, and Microsoft Azure.
Unlike YARN, Kubernetes started as a general purpose orchestration framework with a focus on serving jobs. Support for long-running, data intensive batch workloads required some careful design decisions. Engineers across several organizations have been working on Kubernetes support as a cluster scheduler backend within Spark. During this process, we encountered several challenges in translating Spark considerations into idiomatic Kubernetes constructs. In this talk, we describe the challenges and the ways in which we solved them. This talk will be technical and is aimed at people who are looking to run Spark effectively on their clusters. The talk assumes basic familiarity with cluster orchestration and containers.
Once a model is deployed, you have a responsibility to ensure its reliability and performance in production. That means that in addition to system monitoring, you should be checking and monitoring its ML health and vitals, such as accuracy, bias, and variance, as new data comes in. In this online workshop we'll discuss how to build a system to monitor your machine learning model in production on Kubernetes. You'll learn to keep track of different models and their performance over time, and how to set up custom alerts for your models. We'll discuss which vitals to monitor and how to measure their performance. Join cnvrg.io CTO Leah Kolben in this hands-on workshop on critical practices for monitoring your machine learning models in production. Using the power of Kubernetes, we'll build a complete system for model tracking that ensures high-performing models in production.
Watch the full presentation with video and audio here: https://info.cnvrg.io/monitor-machine-learning-model-workshop
What you’ll learn:
- Why we monitor models in production
- The critical vitals to track and monitor performance
- How to set up automated alerts
- How to set up Kubernetes for monitoring
- Use tools like Grafana and Kibana to monitor and visualize your system and ML health
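The vitals and alerts above reduce to tracking a rolling window of labeled predictions against a threshold. A minimal in-memory sketch follows; the window size and threshold are arbitrary choices, and in the workshop's stack the resulting metric would be exported to Prometheus for Grafana dashboards rather than held in a Python object:

```python
from collections import deque

class AccuracyMonitor:
    """Track rolling accuracy over the last `window` labeled predictions
    and raise an alert flag when it drops below `threshold`."""

    def __init__(self, window=100, threshold=0.9):
        self.results = deque(maxlen=window)
        self.threshold = threshold

    def record(self, correct: bool):
        self.results.append(correct)

    @property
    def accuracy(self):
        return sum(self.results) / len(self.results) if self.results else 1.0

    @property
    def alert(self):
        return self.accuracy < self.threshold
```

Hooking `record()` to a stream of ground-truth labels gives the "custom alerts" behavior: the alert fires as soon as rolling accuracy dips under the threshold.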
Running Kafka on Kubernetes, Across Three Clouds at Adobe (DoKC)
While running a stateful service like Kafka on Kubernetes may be intimidating at first glance, we share our thought process, the tools, and the results that can make this a reality in any organization.
The Kubernetes Operator pattern helped us automate all the operational aspects for the lifecycle of the cluster; abstract away the cloud specifics allowing us to focus on Kafka; achieve increased resilience and elasticity; implement automated Kafka rebalancing using CruiseControl, and harness all the metrics to implement an observable environment. We also plan to demo how these all come together.
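The Operator pattern mentioned above is, at its heart, a reconcile loop that compares the desired spec with observed state and emits corrective actions. A greatly simplified sketch; the field names are invented, and a real operator would read them from the custom resource and the live cluster:

```python
def reconcile(desired, observed):
    """Return the actions needed to drive observed state toward desired
    state (the core of the Operator pattern, greatly simplified)."""
    actions = []
    have = observed.get("brokers", 0)
    want = desired["brokers"]
    if have < want:
        actions.append(("scale_up", want - have))
    elif have > want:
        actions.append(("scale_down", have - want))
    if observed.get("version") != desired["version"]:
        actions.append(("rolling_upgrade", desired["version"]))
    return actions
```

Running this loop on every change event is what lets the operator automate broker scaling and version upgrades instead of a human issuing those commands.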
This talk was given by Adi Muraru for DoK Day Europe @ KubeCon 2022.
In software engineering, the right architecture is essential for robust, scalable platforms. Wix has undergone a pivotal shift from event sourcing to a CRUD-based model for its microservices. This talk will chart the course of this pivotal journey.
Event sourcing, which records state changes as immutable events, provided robust auditing and "time travel" debugging for Wix Stores' microservices. Despite its benefits, the complexity it introduced in state management slowed development. Wix responded by adopting a simpler, unified CRUD model. This talk will explore the challenges of event sourcing and the advantages of Wix's new "CRUD on steroids" approach, which streamlines API integration and domain event management while preserving data integrity and system resilience.
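The trade-off can be shown in miniature with a toy inventory example (not Wix's code): event sourcing derives current state by folding over immutable events, while CRUD stores only the latest row.

```python
# Event sourcing: state is derived by replaying immutable events.
events = [
    {"type": "created", "name": "T-shirt", "qty": 0},
    {"type": "stocked", "qty": 10},
    {"type": "sold", "qty": 3},
]

def replay(events):
    """Fold the event log into the current state."""
    state = {}
    for e in events:
        if e["type"] == "created":
            state = {"name": e["name"], "qty": e["qty"]}
        elif e["type"] == "stocked":
            state["qty"] += e["qty"]
        elif e["type"] == "sold":
            state["qty"] -= e["qty"]
    return state

# CRUD: the same outcome is a single mutable record.
crud_row = {"name": "T-shirt", "qty": 7}
```

The event log buys auditing and "time travel" at the cost of replay and projection machinery; the CRUD row is trivially queryable but forgets its history, which is exactly the tension the talk explores.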
Participants will gain valuable insights into Wix's strategies for ensuring atomicity in database updates and event production, as well as caching, materialization, and performance optimization techniques within a distributed system.
Join us to discover how Wix has mastered the art of balancing simplicity and extensibility, and learn how the re-adoption of the modest CRUD has turbocharged their development velocity, resilience, and scalability in a high-growth environment.
Providing Globus Services to Users of JASMIN for Environmental Data Analysis (Globus)
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Check out the webinar slides to learn more about how XfilesPro transforms Salesforce document management by leveraging its world-class applications. For more details, please connect with sales@xfilespro.com
If you want to watch the on-demand webinar, please click here: https://www.xfilespro.com/webinars/salesforce-document-management-2-0-smarter-faster-better/
Accelerate Enterprise Software Engineering with Platformless (WSO2)
Key takeaways:
Challenges of building platforms and the benefits of platformless.
Key principles of platformless, including API-first, cloud-native middleware, platform engineering, and developer experience.
How Choreo enables the platformless experience.
How key concepts like application architecture, domain-driven design, zero trust, and cell-based architecture are inherently a part of Choreo.
Demo of an end-to-end app built and deployed on Choreo.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
Navigating the Metaverse: A Journey into Virtual Evolution (Donna Lenk)
Join us for an exploration of the Metaverse's evolution, where innovation meets imagination. Discover new dimensions of virtual events, engage with thought-provoking discussions, and witness the transformative power of digital realms.
Custom Healthcare Software for Managing Chronic Conditions and Remote Patient... (Mind IT Systems)
Healthcare providers often struggle with the complexities of chronic conditions and remote patient monitoring, as each patient requires personalized care and ongoing monitoring. Off-the-shelf solutions may not meet these diverse needs, leading to inefficiencies and gaps in care. This is where custom healthcare software offers a tailored solution, ensuring improved care and effectiveness.
Enhancing Research Orchestration Capabilities at ORNL (Globus)
Cross-facility research orchestration comes with ever-changing constraints regarding the availability and suitability of various compute and data resources. In short, a flexible data and processing fabric is needed to enable the dynamic redirection of data and compute tasks throughout the lifecycle of an experiment. In this talk, we illustrate how we easily leveraged Globus services to instrument the ACE research testbed at the Oak Ridge Leadership Computing Facility with flexible data and task orchestration capabilities.
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G...Globus
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
top nidhi software solution freedownloadvrstrong314
This presentation emphasizes the importance of data security and legal compliance for Nidhi companies in India. It highlights how online Nidhi software solutions, like Vector Nidhi Software, offer advanced features tailored to these needs. Key aspects include encryption, access controls, and audit trails to ensure data security. The software complies with regulatory guidelines from the MCA and RBI and adheres to Nidhi Rules, 2014. With customizable, user-friendly interfaces and real-time features, these Nidhi software solutions enhance efficiency, support growth, and provide exceptional member services. The presentation concludes with contact information for further inquiries.
Understanding Globus Data Transfers with NetSageGlobus
NetSage is an open privacy-aware network measurement, analysis, and visualization service designed to help end-users visualize and reason about large data transfers. NetSage traditionally has used a combination of passive measurements, including SNMP and flow data, as well as active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks world wide, and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs for Globus data transfers, following the same privacy-preserving approach as for Flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk will walk through several different example use cases that NetSage can answer, including: Who is using Globus to share data with my institution, and what kind of performance are they able to achieve? How many transfers has Globus supported for us? Which sites are we sharing the most data with, and how is that changing over time? How is my site using Globus to move data internally, and what kind of performance do we see for those transfers? What percentage of data transfers at my institution used Globus, and how did the overall data transfer performance compare to the Globus users?
TROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERRORTier1 app
Even though at surface level ‘java.lang.OutOfMemoryError’ appears as one single error; underlyingly there are 9 types of OutOfMemoryError. Each type of OutOfMemoryError has different causes, diagnosis approaches and solutions. This session equips you with the knowledge, tools, and techniques needed to troubleshoot and conquer OutOfMemoryError in all its forms, ensuring smoother, more efficient Java applications.
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I ...Juraj Vysvader
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I didn't get rich from it but it did have 63K downloads (powered possible tens of thousands of websites).
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart...Globus
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data, and applying computations on a different system. As a part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined data workflows, which can be run on-demand, capable of applying many data reduction and data analysis to the large ESGF data archives, transferring only the resultant analysis (ex. visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...Shahin Sheidaei
Games are powerful teaching tools, fostering hands-on engagement and fun. But they require careful consideration to succeed. Join me to explore factors in running and selecting games, ensuring they serve as effective teaching tools. Learn to maintain focus on learning objectives while playing, and how to measure the ROI of gaming in education. Discover strategies for pitching gaming to leadership. This session offers insights, tips, and examples for coaches, team leads, and enterprise leaders seeking to teach from simple to complex concepts.
A Comprehensive Look at Generative AI in Retail App Testing.pdfkalichargn70th171
Traditional software testing methods are being challenged in retail, where customer expectations and technological advancements continually shape the landscape. Enter generative AI—a transformative subset of artificial intelligence technologies poised to revolutionize software testing.
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
2. Overview of Kubeflow Repositories
• Kubeflow is an open, community-driven
project that makes it easy to deploy and
manage an ML stack on Kubernetes
• https://kubeflow.org
• Kubeflow org (github.com/kubeflow) is a
collection of many repositories
3. Overview of Kubeflow Repositories
https://www.kubeflow.org/docs/started/kubeflow-overview/
4. Overview of Kubeflow Repositories
• kubeflow – machine learning toolkit for Kubernetes
• Main repo; provides access management, the central dashboard, the Jupyter web app, the notebook
controller, the profile controller, and other cloud-native deployments and services
• website – Kubeflow’s public website
• Documentation
• community – information about the Kubeflow community including proposals and
governance information
• examples – a repository to host extended examples and tutorials
• End-to-end examples, component-focused examples, and demos
• code-intelligence – ML-powered developer tools built with Kubeflow
• manifests – a repository for Kustomize manifests
• Kubeflow installs with Kustomize; this repo provides the KfDef manifests that reference the Kustomize
applications
• kfctl – a CLI for deploying and managing Kubeflow
• Build, deploy and manage Kubeflow
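To make the manifests/kfctl bullets above concrete, here is a trimmed KfDef manifest of the kind kfctl consumes. This is a sketch modeled on the v1.0-era KfDef format; the application names, repo paths, and archive URI are illustrative, not an exact copy of any shipped config.

```yaml
apiVersion: kfdef.apps.kubeflow.org/v1
kind: KfDef
metadata:
  name: kubeflow
  namespace: kubeflow
spec:
  # Each application points at a Kustomize package in the manifests repo
  applications:
    - name: istio
      kustomizeConfig:
        repoRef:
          name: manifests
          path: istio/istio
    - name: jupyter-web-app
      kustomizeConfig:
        repoRef:
          name: manifests
          path: jupyter/jupyter-web-app
  # Source archive the repoRefs above resolve against (illustrative version)
  repos:
    - name: manifests
      uri: https://github.com/kubeflow/manifests/archive/v1.0.2.tar.gz
```

A file like this is what `kfctl build -f kfdef.yaml` renders into concrete manifests and `kfctl apply -f kfdef.yaml` deploys to the cluster.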
5. Overview of Kubeflow Repositories
• metadata – repository for assets related to Metadata
• Tracking and managing metadata of machine learning workflows
• Includes info about executions, models, datasets and other artifacts
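To illustrate the idea behind metadata tracking, here is a minimal, self-contained sketch in plain Python. This is a conceptual model only, not the actual kubeflow/metadata SDK: it shows how an execution links input datasets to output model artifacts so lineage can be reconstructed.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Artifact:
    # A tracked asset such as a dataset or a trained model
    name: str
    uri: str
    kind: str  # e.g. "dataset" or "model"

@dataclass
class Execution:
    # One run of a workflow step, linking its inputs to its outputs
    name: str
    inputs: List[Artifact] = field(default_factory=list)
    outputs: List[Artifact] = field(default_factory=list)

    def lineage(self) -> List[str]:
        # Human-readable lineage records: input -> execution -> output
        return [f"{i.uri} -> {self.name} -> {o.uri}"
                for i in self.inputs for o in self.outputs]

train = Execution(
    name="train-model",
    inputs=[Artifact("mnist", "s3://data/mnist.csv", "dataset")],
    outputs=[Artifact("mnist-cnn", "s3://models/mnist-cnn.pt", "model")],
)
print(train.lineage())
# → ['s3://data/mnist.csv -> train-model -> s3://models/mnist-cnn.pt']
```

A real metadata store persists records like these in a database and exposes queries over them; the point here is only the shape of the execution/artifact relationship.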
• testing – test infrastructure and tooling for Kubeflow
• common – common APIs and libraries shared by other Kubeflow operator
repositories
• Common controller code and helpers for operators generated by kubebuilder or operator-sdk
• frontend – repository for Kubeflow frontend
• pipelines – machine learning pipelines for Kubeflow
• Reusable end-to-end ML workflows built using the Kubeflow Pipelines SDK
• Argo as the workflow engine, plus the Pipelines SDK and UI
• kfp-tekton – experimental project exploring Tekton and KFP integration
• Kubeflow Pipelines with Tekton
6. Overview of Kubeflow Repositories
• tf-operator – tools for ML/Tensorflow on Kubernetes
• mpi-operator – Kubernetes operator for allreduce-style distributed training
• mxnet-operator – a Kubernetes operator for MXNet jobs
• pytorch-operator – a Kubernetes operator for PyTorch jobs
• katib – repository for hyperparameter tuning
• fairing – Python SDK for building, training, and deploying ML models
• kfserving – serverless inferencing on Kubernetes
• Others
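As a concrete example of the kfserving repo listed above, here is a minimal InferenceService manifest. This is a sketch based on the v1alpha2 API of that period (the API group and storage URI follow the public KFServing samples and may differ in later releases).

```yaml
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: flowers-sample
spec:
  default:
    predictor:
      tensorflow:
        # Model artifacts are pulled from object storage at deploy time
        storageUri: "gs://kfserving-samples/models/tensorflow/flowers"
```

Applying a manifest like this with `kubectl apply -f` gives a scale-to-zero HTTP endpoint for the model, with canary rollouts handled by adding a `canary` section alongside `default`.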