A Cloud Native London talk about the control layer of Hopsworks.ai and our choice of cloud-native services. For the most part, we built our own multi-tenant services as cloud-native services.
This session gives an overview of the features and architecture of SQL Server on Linux and in containers. It covers install, config, performance, security, HADR, Docker containers, and tools. Find the demos at http://aka.ms/bobwardms
Making Clouds: Turning OpenNebula into a Product (NETWAYS)
What does it take to bring innovations like private clouds to small and medium enterprises? In the course of this talk we will present our experience in creating a self-service toolkit for building a complete virtualization and cloud platform based on OpenNebula, as well as our experience gathered in tens of installations of all sizes. From scalable storage (with benchmarks!) to autonomic optimization, we will present what in our view is needed to bring private clouds to everyone, what components and additions we created to better solve our customers' problems (from replacing industrial control systems to medium-scale virtual desktop infrastructures), and why OpenNebula was chosen over other competing cloud toolkits.
Bio:
Carlo Daffara is the technical director of Cloudweavers and formerly head of research and development at Conecta, a consulting firm specializing in open source systems and distributed computing. He is the Italian member of the European Working Group on Libre Software and co-coordinator of the working group on SMEs of the EU ICT task force on competitiveness. Since 1999 he has worked as an evaluator for IST programme submissions in the fields of component-based software engineering, GRIDs, and international cooperation. He is coordinator of the open source platforms technical area of the IEEE technical committee on scalable computing, co-chair of the SIENA EU cloud initiative roadmap editorial board, and a member of the editorial review board of the International Journal of Open Source Software & Processes (IJOSSP).
JBoss Architect Forum London - October 2013 - Platform as a What? (JBossArchitectForum)
• State of the Container: From Tomcat to JEE and beyond
• In-Memory Computing: How can a Data Grid accelerate your applications?
• PaaS: Learn how Red Hat's OpenShift has helped PayPal increase developer productivity
Docker - Demo on PHP Application Deployment (Arun Prasath)
Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.
In this demo, I will show how to build an Apache image from a Dockerfile and deploy a PHP application stored in an external folder, using custom configuration files.
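For readers who want a concrete starting point, a minimal Dockerfile along the lines the demo describes might look like this (the base image, folder names, and config path are assumptions for illustration, not taken from the original demo):

```dockerfile
# Hypothetical example: serve a PHP application with Apache.
# Assumes the app lives in ./app and a custom vhost config in ./config.
FROM php:8.2-apache

# Overwrite the default Apache vhost with a custom configuration file
COPY config/000-default.conf /etc/apache2/sites-available/000-default.conf

# Copy the PHP application from the external folder into the web root
COPY app/ /var/www/html/

EXPOSE 80
```

It would typically be built and run with `docker build -t php-demo .` followed by `docker run -p 8080:80 php-demo`.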
Scalable PHP Applications in Kubernetes (Robert Lemke)
Kubernetes is also called the "distributed Linux of the cloud", which implies that it provides fundamental infrastructure that can solve a lot of challenges. Let's see how PHP applications fit into this picture. In this presentation, we are going to explore when Kubernetes is a good fit for operating your PHP application and how it can be done in practice. We'll look at the whole lifecycle: how to build your application, create or choose the right Docker images, deploy and scale, and how to deal with performance and monitoring. At the end you will have a good understanding of all the different stages and building blocks for running a PHP application with Kubernetes in production.
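As a rough sketch of the deploy-and-scale stage (image name, labels, and replica count are illustrative assumptions, not taken from the talk), a PHP application could be declared as a Kubernetes Deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-app                 # hypothetical name
spec:
  replicas: 3                   # scale horizontally by changing this value
  selector:
    matchLabels:
      app: php-app
  template:
    metadata:
      labels:
        app: php-app
    spec:
      containers:
        - name: web
          image: registry.example.com/php-app:1.0   # assumed image
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
```

Applied with `kubectl apply -f deployment.yaml`, this gives Kubernetes the desired state; scaling is then a matter of adjusting `replicas` (or adding a HorizontalPodAutoscaler).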
Data processing at the speed of 100 Gbps @ Apache Crail (Incubating) (DataWorks Summit)
Once the staple of HPC clusters, today high-performance network and storage devices are everywhere. For a fraction of the cost, one can rent 40/100 Gbps RDMA networks and high-end NVMe flash devices supporting tens of GB/s of bandwidth, sub-100-microsecond latencies, and millions of IOPS. How does one leverage this phenomenal performance for popular data processing frameworks such as Apache Spark, Flink, and Hadoop that we all know and love?
In this talk, I will introduce Apache Crail (Incubating), a fast, distributed data store that is designed specifically for high-performance network and storage devices. The goal of the project is to deliver the true hardware performance to Apache data processing frameworks in the most accessible way. With its modular design, Crail supports multiple storage back ends (DRAM, NVMe Flash, and 3D XPoint) and networking protocols (RDMA and TCP/sockets). Crail provides multiple flexible APIs (file system, KV, HDFS, streaming) for better integration with the high-level data access operations in Apache compute frameworks. As a result, on a 100 Gbps network infrastructure, Crail delivers all-to-all shuffle operations at 80+ Gbps, broadcast operations at less than 10 usec latencies, and more than 8M lookups per namenode. Moreover, Crail is a generic solution that integrates well with the Apache ecosystem, including frameworks like Spark, Hadoop, and Hive.
I will present the case for Crail, its current status, and future plans. As Crail is a young Apache project, we are seeking to build a community and expand its application to other interesting domains.
Speaker
Animesh Trivedi, IBM Research, Research Staff Member (RSM)
Kubernetes is exploding in popularity right now and has all the buzz and cargo-culting that Docker enjoyed just a few years ago. But what even is Kubernetes? How do I run my PHP apps in it? Should I run my PHP apps in it?
IBM BP Session - Multiple Cloud Paks and Cloud Paks Foundational Services.pptx (Georg Ember)
This presentation covers experiences, recommendations, and planning considerations to keep in mind when installing and deploying multiple IBM Cloud Paks on the OpenShift container platform. It explains the basics of the "common services" (also called "foundational services") that these Cloud Paks rely on to run on OpenShift, and how Cloud Paks can be logically separated across OpenShift worker nodes using taints and node selectors.
Serverless frameworks are changing the way we do computing. In the open-source container world, Kubernetes is playing a pivotal role in making this happen. This presentation will go deep into various features of Kubernetes for creating serverless functions.
It also includes a comparative study of serverless frameworks available in the open-source world, such as Kubeless, Fission, and Funktion, and concludes with an implementation demo and some real-world use cases.
Presented at Serverless Summit 2017: www.inserverless.com
Kubernetes for FaaS (Function as a Service) - serverless evolution, some basic constructs, Kubernetes features, comparisons - from the Serverless conference 2017, Bangalore.
It’s no longer a world of just relational databases. Companies are increasingly adopting specialized datastores such as Hadoop, HBase, MongoDB, Elasticsearch, Solr and S3. Apache Drill, an open source, in-memory, columnar SQL execution engine, enables interactive SQL queries against more datastores.
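To illustrate (this example follows Drill's own quick-start documentation rather than these slides), Drill can run SQL directly against a raw JSON file with no schema definition; `cp` is the classpath storage plugin and `employee.json` is a sample file bundled with Drill:

```sql
-- Query a JSON file in place; no table definition required.
SELECT full_name, salary
FROM cp.`employee.json`
LIMIT 3;
```

The same pattern works against files in HDFS or S3 via Drill's file-system storage plugins.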
These slides accompanied a live install of Triton Elastic Container Infrastructure as described in the following blog post:
https://www.joyent.com/blog/spin-up-a-docker-dev-test-environment-in-60-minutes-or-less
Presentation abstract:
Hardware hypervisors were a first generation approach to the challenges of resource and security isolation, but they’re unnecessarily shackling operations and developers with limitations that are no longer relevant to containerized deployments.
We need bare metal performance, but how can we get the security isolation and elasticity we need without VMs? Containers -- truly secure, bare-metal containers -- offer an alternative that improves performance while reducing costs (and CO2 emissions too!).
What are they, how do they work, and how does containerization affect my apps?
These slides were presented at:
http://www.meetup.com/austin-devops/events/223284754/
http://www.meetup.com/PhillyDevOps/events/223197735/
http://www.meetup.com/DevOpsandAutomationNJ/events/223432942/
OSDC 2015: Bernd Mathiske | Why the Datacenter Needs an Operating System (NETWAYS)
Developers are moving away from their host-based patterns and adopting a new mindset around the idea that the datacenter is the computer. It's quickly becoming a mainstream model that you can view a warehouse full of servers as a single computer (with terabytes of memory and tens of thousands of cores). There is a key missing piece, which is an operating system for the datacenter (DCOS), which would provide the same OS functionality and core OS abstractions across thousands of machines that an OS provides on a single machine today. In this session, we will discuss:
How the abstraction of an OS has evolved over time and can cleanly scale to span thousands of machines in a datacenter.
How key open source technologies like the Apache Mesos distributed systems kernel provide the key underpinnings for a DCOS.
How developers can layer core system services on top of a distributed systems kernel, including an init system (Marathon), cron (Chronos), service discovery (DNS), and storage (HDFS)
What would the interface to the DCOS look like? How would you use it?
How you would install and operate datacenter services, including Apache Spark, Apache Cassandra, Apache Kafka, Apache Hadoop, Apache YARN, Apache HDFS, and Google's Kubernetes.
How will developers build datacenter-scale apps, programmed against the datacenter OS like it's a single machine?
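For context on the init-system analogy above, Marathon takes application definitions as plain JSON; a hypothetical minimal app (all values here are illustrative) might look like:

```json
{
  "id": "/hello-service",
  "cmd": "python3 -m http.server 8080",
  "cpus": 0.25,
  "mem": 64,
  "instances": 2
}
```

POSTing this to Marathon's `/v2/apps` endpoint asks the scheduler to keep two instances of the command running somewhere in the cluster.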
Open-source API server based on a Node.js API framework, built on a supported Node.js platform with tooling and DevOps. Use cases include an omni-channel API server, Mobile Backend as a Service (mBaaS), or a next-generation enterprise service bus. Key functionality includes built-in enterprise connectors, ORM, offline sync, mobile and JS SDKs, isomorphic JavaScript, and a graphical API creation tool.
Extending DevOps to Big Data Applications with Kubernetes (Nicola Ferraro)
DevOps, continuous delivery, and modern architectural trends can dramatically speed up the software development process. Big data applications are no exception and need to keep the same pace.
PyData Berlin 2023 - Mythical ML Pipeline.pdf (Jim Dowling)
This talk is a mental map for building ML systems as ML Pipelines that are factored into Feature Pipelines, Training Pipelines, and Inference Pipelines.
More Related Content
Similar to Building Hopsworks, a cloud-native managed feature store for machine learning
Metadata and Provenance for ML Pipelines with Hopsworks (Jim Dowling)
This talk describes the scale-out, consistent metadata architecture of Hopsworks and how we use it to support custom metadata and provenance for ML Pipelines with Hopsworks Feature Store, NDB, and ePipe. The talk is here: https://www.youtube.com/watch?v=oPp8PJ9QBnU&feature=emb_logo
Asynchronous Hyperparameter Search with Spark on Hopsworks and Maggy (Jim Dowling)
Spark AI Summit Europe 2019 talk: Asynchronous Hyperparameter Search with Spark on Hopsworks and Maggy. How can you do directed search efficiently with Spark? The answer is Maggy - asynchronous directed search on PySpark.
Hopsworks at Google AI Huddle, Sunnyvale (Jim Dowling)
Hopsworks is a platform for designing and operating End to End Machine Learning using PySpark and TensorFlow/PyTorch. Early access is now available on GCP. Hopsworks includes the industry's first Feature Store. Hopsworks is open-source.
Hopsworks in the Cloud - Berlin Buzzwords 2019 (Jim Dowling)
This talk, given at Berlin Buzzwords 2019, describes the recent progress in making Hopsworks a cloud-native platform, with HA data-center support added for HopsFS.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Search and Society: Reimagining Information Access for Radical Futures (Bhaskar Mitra)
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
PHP Frameworks: I want to break free (IPC Berlin 2024) (Ralf Eggert)
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality - Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 - Tobias Schneck
As AI pushes into IT, I asked myself, as an “infrastructure container Kubernetes guy”: how does this fancy AI technology get managed from an infrastructure operations point of view? Is it possible to apply our beloved cloud native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we will discuss which cloud/on-premise strategy may be needed to make AI work on our own infrastructure from an enterprise perspective. I will give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Building Hopsworks, a cloud-native managed feature store for machine learning
1. Jim Dowling
CEO, Logical Clocks
Cloud Native London Meetup, March 3 2021
Building Hopsworks, a cloud-native managed feature store for machine learning
3. The Hopsworks Feature Store - Available on all Platforms as Managed, Enterprise, and Community
hopsworks.ai (managed platform)
Enterprise Hopsworks (self-hosted platform) - Runs on any Platform* (On-premise, Cloud, VMs, etc)
Community Hopsworks** (self-hosted platform) - Runs on any Platform* (On-premise, Cloud, VMs, etc)
*Supported operating systems: RHEL/CentOS 7.x and Ubuntu 18.04. Minimum requirements: 32GB RAM, 100GB disk, 8 CPUs. Runs in air-gapped environments.
**Community Hopsworks does not include (1) Feature Store Connectors to Third-Party Platforms and (2) SSO with Active Directory/OAuth-2/Azure-AD/AWS.
2016 2018 2020
The only managed Feature Store available today on both AWS and Azure
4. When do I need a Feature Store for Machine Learning, and what is it anyway?
5. Business Problem: Use Machine Learning to Predict Money Laundering
Reference: Whitepaper, Webinar
6. What data can I use to solve my Anti-Money Laundering Problem with?
[Diagram: Know Your Customer Data, Historical Financial Transactions, and Recent Financial Transactions, held in a Data Warehouse, Data Lake, and Message Bus, feed the TRAIN and SERVE paths.]
7. It is not always easy to get access to Enterprise data for training and serving.
[Diagram: same data sources and TRAIN/SERVE paths as slide 6.]
8. What data can I use to make predictions with?
[Diagram: the same data sources, with a Feature Store now sitting between the Data Warehouse, Data Lake, and Message Bus and the TRAIN/SERVE paths.]
9. Where does the Feature Store fit into the ML Pipeline?
[Diagram: FEATURIZE → FEATURE STORE → TRAIN / SERVE]
10. Offline Feature Store - Create Training Data and Batch Predictions
# fs is a feature store handle, e.g. fs = hsfs.connection().get_feature_store()
df = kycFG.select_all().join(rftFG.select_all()).join(hftFG.select_all())
td = fs.create_training_dataset("precipitation_training_dataset",
                                version=1,
                                data_format="tfrecord",
                                description="Precipitation Training dataset",
                                splits={'train': 0.7, 'test': 0.2, 'validate': 0.1})
td.save(df)
FG=Feature Group https://docs.hopsworks.ai/
[Diagram: Feature Store (kycFG, rftFG, hftFG) → Training Data (.tfrecord) → train → Model]
11. Online Feature Store - the Data Layer for Operational (Online) Models
[Diagram: RonDB nodes 1-3 and Model replicas deployed across the US-West-1a, US-West-1b, and US-West-1c availability zones, serving an Online Application.]
1. Build Feature Vector using the Online Feature Store (JDBC, 2-20ms)
2. Send Feature Vector to the Model for Prediction (~5-50ms end to end)
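The two serving steps can be sketched in Python. This is a minimal, self-contained illustration: sqlite3 stands in for RonDB's MySQL/JDBC endpoint, and the table names, columns, and threshold "model" are all made up for the example, not the Hopsworks API.

```python
import sqlite3

# Stand-in for the online feature store: RonDB is queried over MySQL/JDBC in
# Hopsworks; an in-memory sqlite3 database simulates the same single-query
# lookup pattern here. Schema and values are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE kyc (customer_id INTEGER PRIMARY KEY, risk_score REAL);
    CREATE TABLE recent_txns (customer_id INTEGER PRIMARY KEY, txn_count_24h INTEGER);
    INSERT INTO kyc VALUES (42, 0.7);
    INSERT INTO recent_txns VALUES (42, 13);
""")

def build_feature_vector(customer_id):
    # Step 1: a single SQL join assembles the feature vector (2-20 ms on RonDB).
    row = conn.execute(
        "SELECT k.risk_score, r.txn_count_24h "
        "FROM kyc k JOIN recent_txns r USING (customer_id) "
        "WHERE k.customer_id = ?", (customer_id,)).fetchone()
    return list(row)

def predict(features):
    # Step 2: placeholder for the model-serving call (~5-50 ms end to end).
    risk_score, txn_count = features
    return "suspicious" if risk_score > 0.5 and txn_count > 10 else "ok"

vector = build_feature_vector(42)
print(predict(vector))  # -> suspicious
```

The point of the pattern is that the online application never computes features itself; it fetches precomputed ones by key and forwards them to the model.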
13. Hopsworks - Develop and Operate ML Applications at Scale
[Architecture diagram: Applications, APIs, and Dashboards consume Hopsworks; data flows in from external data sources.]
ORCHESTRATION: Airflow
BATCH: Apache Spark
STREAMING: Apache Spark, Apache Flink, Apache Kafka
HOPSWORKS FEATURE STORE
ML DEVELOP AND TRAIN: Notebooks as Jobs, TensorFlow, Scikit-Learn, PyTorch, TensorBoard
FILESYSTEM & METASTORE: HopsFS
MODEL SERVING AND MONITORING: KFServing, TF-Serving, Flask
Stages: Data Preparation & Ingestion → Experimentation & Model Training → Deploy & Productionalize
18. Moving to the Cloud - Connectors and Integrations
[Diagram: Hopsworks with Project-Based Multi-Tenant Security, separating Dev, Staging, and Prod Feature Stores.]
Authentication for Users and Jobs: API Key, IAM Profile or Federated IAM Role, User Login (LDAP, AD, OAuth2, 2FA)
Platform integrations: Databricks, SageMaker, Kubeflow, Amazon EMR
Data connectors: Delta Lake, Snowflake, Amazon S3, Amazon Redshift
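The project-based multi-tenant model can be illustrated with a small sketch: an API key is scoped to one project and a set of feature stores, and every request from an external client is checked against that scope. The key names, project name, and store names below are invented for the example; this is not the Hopsworks authorization code.

```python
# Illustrative sketch of project-scoped API keys. External clients (e.g. a
# Databricks or SageMaker job) present a key; access is granted only within
# the key's project and feature-store scope. All names here are made up.
API_KEYS = {
    "key-dev-123":  {"project": "aml", "stores": {"dev"}},
    "key-prod-456": {"project": "aml", "stores": {"staging", "prod"}},
}

def authorize(api_key, project, store):
    # Deny unknown keys, cross-project access, and out-of-scope stores.
    scope = API_KEYS.get(api_key)
    return scope is not None and scope["project"] == project and store in scope["stores"]

print(authorize("key-dev-123", "aml", "dev"))   # -> True
print(authorize("key-dev-123", "aml", "prod"))  # -> False: a dev key cannot touch prod
```

Scoping keys to a project is what lets one Hopsworks cluster serve many teams while keeping their dev, staging, and prod feature stores isolated.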
19. Making Hopsworks Cloud-Native

Hopsworks Open Source          | Cloud Native Service
Open-Source Docker Repository  | ECR / ACR
Kubernetes                     | EKS / AKS

Hopsworks Services             | Rejected Cloud Native Versions
Spark-on-YARN                  | Databricks / EMR
HopsFS                         | S3
RonDB                          | DynamoDB / Elasticache
Kafka                          | Managed Kafka
Elastic Open Distro            | AWS Elastic
27. RonDB - a new open-source cloud-native distribution of NDB (MySQL Cluster)
Inventor of NDB (MySQL Cluster)
www.rondb.com
RonDB vs Redis - RonDB outperforms on 1 CPU Core and Keeps on Scaling
MySQL Cluster (NDB) - the world’s highest throughput transactional datastore
200m ops/second with NDB - world’s fastest key-value store
28. RonDB - the first LATS Database in the Cloud. Launched in private beta in Feb 2021.
RonDB is a LATS Database
low Latency, high Availability, high Throughput, scalable Storage
< 1ms KV lookup
>10M KV Lookups/sec
>99.999% availability
30. Lessons Learnt (so far) in building a Cloud Native Managed Data/AI Platform
Shiny new Toys are not always the best
● Lambda functions are poor for synchronous events (e.g. request-reply) due to their slow response times
○ Unsuitable for "web" endpoints - 500-2000 ms response times
○ Cold lambdas, but also JS JIT warmup
○ Parallel operations are difficult due to lack of support in Lambda
● “Amplifeck’d” is a common word on our Slack
● SQL > Key Value APIs
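The "SQL > Key Value APIs" lesson comes down to round trips: assembling one feature vector from several feature groups costs one network round trip per group with a pure key-value API, but a single round trip with a SQL join. The sketch below counts round trips with an invented in-process store; the feature group names and values are made up for illustration.

```python
# Toy store: (feature_group, primary_key) -> feature row. Contents invented.
kv_store = {
    ("kyc", 42):             {"risk_score": 0.7},
    ("recent_txns", 42):     {"txn_count_24h": 13},
    ("historical_txns", 42): {"avg_amount_90d": 250.0},
}

def kv_feature_vector(customer_id, groups):
    # A pure KV API needs one GET (one network round trip) per feature group.
    round_trips = 0
    vector = {}
    for g in groups:
        round_trips += 1
        vector.update(kv_store[(g, customer_id)])
    return vector, round_trips

def sql_feature_vector(customer_id, groups):
    # With SQL, one JOIN query returns the whole vector in a single round trip.
    vector = {}
    for g in groups:
        vector.update(kv_store[(g, customer_id)])
    return vector, 1

groups = ["kyc", "recent_txns", "historical_txns"]
_, kv_trips = kv_feature_vector(42, groups)
_, sql_trips = sql_feature_vector(42, groups)
print(kv_trips, sql_trips)  # -> 3 1
```

At millisecond network latencies per round trip, that difference dominates the online serving budget, which is one reason a SQL-speaking store like RonDB was preferred over a key-value service.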
33. Feature Engineering and Model Training Pipeline - With a Feature Store
[Pipeline diagram: data flows from Kafka, the Data Warehouse, and the Data Lake through Feature Engineering into the Feature Store.]
- Offline Feature Store → Train/Test Data (S3, HDFS, etc) → Model Training → Model Repository → Deploy → Model Serving (with Monitoring)
- Online Feature Store → Feature Vectors → Model Serving → Online Application
- Offline Feature Store → Batch Access → Batch Scoring → Result Sink (DB)
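The dual-write at the heart of this pipeline can be sketched in a few lines: feature engineering materializes each feature group to both the offline store (full history, for training and batch scoring) and the online store (latest row per key, for serving). The stores, group name, and rows below are invented stand-ins, not the Hopsworks API.

```python
# Minimal in-memory sketch of the write path: write once, serve twice.
offline_store = {}   # feature_group -> full history (list of rows)
online_store = {}    # (feature_group, key) -> latest row only

def ingest(feature_group, rows, key):
    # Feature engineering output is appended to the offline store for
    # point-in-time training data, and upserted into the online store so
    # serving always sees the latest value per primary key.
    offline_store.setdefault(feature_group, []).extend(rows)
    for row in rows:
        online_store[(feature_group, row[key])] = row

ingest("recent_txns", [
    {"customer_id": 42, "txn_count_24h": 13},
    {"customer_id": 42, "txn_count_24h": 9},   # newer row overwrites online
], key="customer_id")

print(len(offline_store["recent_txns"]))                   # -> 2 (history kept for training)
print(online_store[("recent_txns", 42)]["txn_count_24h"])  # -> 9 (latest value for serving)
```

Keeping one ingestion path that feeds both stores is what prevents training/serving skew: the model trains on exactly the features the online application will later look up.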