View the Big Data technology stack in a nutshell. This deck covers the different layers of the Big Data world and summarizes the major technologies in vogue today.
RWE & Patient Analytics Leveraging Databricks – A Use Case (Databricks)
Gaining insights and knowledge from real-world health data (RWD), i.e., data acquired outside the context of randomized clinical trials, has been an area of continued opportunity for pharma organizations.
What is real-world data and real-world evidence – how it is generated, what value it drives for life sciences in general and what kind of analytics are performed.
What are some considerations and challenges related to data security, privacy, and industrialization of a big data platform hosted in the cloud.
How we leveraged Databricks to perform big data ingestion, and its advantages over native AWS Batch/Glue. Examples of some of the advanced analytics use cases downstream that leveraged Databricks for RWE. (A sketch of a typical ingestion step follows below.)
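To make the ingestion step concrete, here is a minimal sketch of the kind of batch ingestion described above, as it might look on Databricks with PySpark and Delta Lake. The bucket path, column names, and table name are illustrative assumptions, not details from the talk.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # already provided in Databricks notebooks

# Hypothetical landing zone and schema: raw claims files dropped into S3.
raw = (spark.read.option("header", True)
       .csv("s3://rwe-landing/claims/2020/*.csv"))

# Light cleansing before the data becomes queryable downstream.
cleaned = (raw
           .withColumn("service_date", F.to_date("service_date", "yyyy-MM-dd"))
           .dropDuplicates(["claim_id"]))  # claim_id is an assumed key column

# Append into a governed Delta table for downstream RWE analytics.
cleaned.write.format("delta").mode("append").saveAsTable("rwe.claims")
```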
Note: This solution, and one of the use cases leveraging it, won the 2020 Gartner Healthcare and Life Sciences Eye on Innovation Award.
https://www.gartner.com/en/newsroom/press-releases/2020-11-17-gartner-announces-winners-of-the-2020-gartner-healthcare-and-life-sciences-eye-on-innovation-award
Apache Spark Training | Spark Tutorial For Beginners | Apache Spark Certifica... (Edureka!)
This Edureka "Apache Spark Training" tutorial explains how Apache Spark works in practice, demonstrating a movie recommendation project built with Apache Spark. Below are the topics covered in this tutorial (a short code sketch of the core technique follows the list):
1) Use Cases Of Real Time Analytics
2) Movie Recommendation System Using Spark
3) What Is Spark?
4) Getting Movie Dataset
5) Spark Streaming
6) Collaborative Filtering
7) Spark MLlib
8) Fetching Results
9) Storing Results
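As a taste of the core technique (topics 6 and 7), here is a minimal collaborative-filtering sketch using Spark MLlib's ALS, assuming a MovieLens-style ratings CSV with userId, movieId, and rating columns; the file name is a placeholder.

```python
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("movie-recs").getOrCreate()

# Assumed input: MovieLens-style CSV with userId,movieId,rating columns.
ratings = (spark.read.option("header", True)
           .option("inferSchema", True)
           .csv("ratings.csv"))

# Alternating Least Squares: the classic collaborative-filtering model in MLlib.
als = ALS(userCol="userId", itemCol="movieId", ratingCol="rating",
          coldStartStrategy="drop")
model = als.fit(ratings)

# Top 5 movie recommendations for every user.
model.recommendForAllUsers(5).show(truncate=False)
```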
Online Bookstore is a web application through which customers can purchase books online. A customer can select a book, add it to the shopping cart, and finally complete the purchase using one of the supported payment methods.
Data Engineer's Lunch #81: Reverse ETL Tools for Modern Data Platforms (Anant Corporation)
During this lunch, we’ll review open-source reverse ETL tools to uncover how to send data back to SaaS systems.
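For readers new to the term, here is a toy sketch of what a reverse ETL step does: read modelled rows out of the warehouse and push them into a SaaS tool. The warehouse table, CRM endpoint, and credentials are all hypothetical; real reverse ETL tools add batching, retries, and field mapping on top of this idea.

```python
import psycopg2
import requests

# Connect to the warehouse (connection string is a placeholder).
conn = psycopg2.connect("dbname=warehouse user=etl")

with conn.cursor() as cur:
    # A modelled table produced by the analytics pipeline (hypothetical name).
    cur.execute("SELECT email, lifetime_value FROM analytics.customer_facts")
    for email, ltv in cur.fetchall():
        # Push each row back into a SaaS CRM via its REST API (fictional endpoint).
        requests.patch(
            f"https://api.example-crm.com/contacts/{email}",
            json={"lifetime_value": float(ltv)},
            timeout=10,
        )
```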
PostgreSQL + Kafka: The Delight of Change Data Capture (Jeff Klukas)
PostgreSQL is an open source relational database. Kafka is an open source log-based messaging system. Because both systems are powerful and flexible, they’re devouring whole categories of infrastructure. And they’re even better together.
In this talk, you’ll learn about commit logs and how that fundamental data structure underlies both PostgreSQL and Kafka. We’ll use that basis to understand what Kafka is, what advantages it has over traditional messaging systems, and why it’s perfect for modeling database tables as streams. From there, we’ll introduce the concept of change data capture (CDC) and run a live demo of Bottled Water, an open source CDC pipeline, watching INSERT, UPDATE, and DELETE operations in PostgreSQL stream into Kafka. We’ll wrap up with a discussion of use cases for this pipeline: messaging between systems with transactional guarantees, transmitting database changes to a data warehouse, and stream processing.
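To give a feel for the consumer side of such a CDC pipeline, here is a minimal sketch that reads change events off a Kafka topic with the confluent-kafka Python client. The topic name and JSON serialization are assumptions for the example; Bottled Water publishes roughly one topic per table, with deletes arriving as null-valued tombstones.

```python
import json
from confluent_kafka import Consumer

# Hypothetical broker address and per-table topic name.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "cdc-demo",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["users"])

while True:
    msg = consumer.poll(1.0)
    if msg is None:
        continue
    if msg.error():
        raise RuntimeError(msg.error())
    # An INSERT/UPDATE carries the new row; a DELETE arrives as a null value.
    key = json.loads(msg.key()) if msg.key() else None
    row = json.loads(msg.value()) if msg.value() else None
    print("DELETE" if row is None else "UPSERT", key, row)
```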
Build Large-Scale Data Analytics and AI Pipeline Using RayDP (Databricks)
A large-scale end-to-end data analytics and AI pipeline usually involves data processing frameworks such as Apache Spark for massive data preprocessing, and ML/DL frameworks for distributed training on the preprocessed data. A conventional approach is to use two separate clusters and glue multiple jobs together. Other solutions include running deep learning frameworks in an Apache Spark cluster, or using workflow orchestrators like Kubeflow to stitch distributed programs together. All these options have their own limitations. We introduce Ray as a single substrate for distributed data processing and machine learning. We also introduce RayDP, which allows you to start an Apache Spark job on Ray in your Python program and utilize Ray's in-memory object store to efficiently exchange data between Apache Spark and other libraries. We will demonstrate how this makes building an end-to-end data analytics and AI pipeline simpler and more efficient.
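A minimal sketch of the RayDP pattern described above, assuming `ray` and `raydp` are installed and a Ray version that provides `ray.data.from_spark`; sizes, names, and the toy transformation are placeholders.

```python
import ray
import raydp

ray.init()

# Start a Spark job inside the Ray cluster (RayDP's documented entry point).
spark = raydp.init_spark(app_name="etl-on-ray",
                         num_executors=2,
                         executor_cores=2,
                         executor_memory="2GB")

# Toy preprocessing step in Spark.
df = spark.range(0, 1_000_000).withColumnRenamed("id", "x")

# Hand the preprocessed data to other Ray libraries via the shared object store
# (assumes a Ray build where ray.data.from_spark is available).
ds = ray.data.from_spark(df)
print(ds.count())

raydp.stop_spark()
ray.shutdown()
```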
Kafka High Availability in multi data center setup with floating Observers wi... (HostedbyConfluent)
Enabling high availability in a cluster setup that spans different data centers is challenging, and even more so when there are only two data centers – not ideal for Kafka HA at all. But this is the reality for most organizations, as they use the same data centers previously used for database HA.
In this presentation we will see how to use the Kafka Observer feature to address this challenge, with an additional tweak to distribute load evenly among Observers and ordinary Brokers and make them float between data centers. The whole demo is supported by Infrastructure as Code automation through Ansible.
Battle of the Stream Processing Titans – Flink versus RisingWave (Yingjun Wu)
The world of real-time data processing is constantly evolving, with new technologies and platforms emerging to meet the ever-increasing demands of modern data-driven businesses. Apache Flink and RisingWave are two powerful stream processing solutions that have gained significant traction in recent years. But which platform is right for your organization? Karin Wolok and Yingjun Wu go head-to-head to compare and contrast the strengths and limitations of Flink and RisingWave. They’ll also share real-world use cases, best practices for optimizing performance and efficiency, and key considerations for selecting the right solution for your specific business needs.
Bighead: Airbnb’s End-to-End Machine Learning Platform with Krishna Puttaswa... (Databricks)
Airbnb has a wide variety of ML problems ranging from models on traditional structured data to models built on unstructured data such as user reviews, messages and listing images. The ability to build, iterate on, and maintain healthy machine learning models is critical to Airbnb’s success. Many ML Platforms cover data collection, feature engineering, training, deploying, productionalization, and monitoring but few, if any, do all of the above seamlessly.
Bighead aims to tie together various open source and in-house projects to remove incidental complexity from ML workflows. Bighead is built on Python and Spark and can be used in modular pieces as each ML problem presents unique challenges. Through standardization of the path to production, training environments and the methods for collecting and transforming data on Spark, each model is reproducible and iterable.
This talk covers the architecture, the problems that each individual component and the overall system aim to solve, and a vision for the future of machine learning infrastructure. It is widely adopted at Airbnb and we have a variety of models running in production. We have seen overall model development time go down from many months to days on Bighead. We plan to open source Bighead to allow the wider community to benefit from our work.
Deploying Kafka Streams Applications with Docker and Kubernetes (confluent)
(Gwen Shapira + Matthias J. Sax, Confluent) Kafka Summit SF 2018
Kafka Streams, Apache Kafka’s stream processing library, allows developers to build sophisticated stateful stream processing applications which you can deploy in an environment of your choice. Kafka Streams is not only scalable but fully elastic, allowing for dynamic scale-in and scale-out as the library handles state migration transparently in the background. By running Kafka Streams applications on Kubernetes, you will be able to use Kubernetes' powerful control plane to standardize and simplify application management – from deployment to dynamic scaling.
In this technical deep dive, we’ll explain the internals of dynamic scaling and state migration in Kafka Streams. We’ll then show, with a live demo, how a Kafka Streams application can run in a Docker container on Kubernetes and the dynamic scaling of an application running in Kubernetes.
Exactly-once Stream Processing with Kafka Streams (Guozhang Wang)
I will present the recent additions to Kafka to achieve exactly-once semantics (0.11.0) within its Streams API for stream processing use cases. This is achieved by leveraging the underlying idempotent and transactional client features. The main focus will be the specific semantics that Kafka distributed transactions enable in Streams and the underlying mechanics to let Streams scale efficiently.
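The idempotent and transactional client features mentioned here can be seen in miniature with a plain producer. Below is a hedged sketch using the confluent-kafka Python client, with broker address, transactional id, and topic as placeholders; Kafka Streams builds on the same primitives (plus consumer offsets) to achieve end-to-end exactly-once.

```python
from confluent_kafka import Producer

# Setting transactional.id enables idempotence and transactions (placeholders below).
producer = Producer({
    "bootstrap.servers": "localhost:9092",
    "transactional.id": "demo-txn-1",
})
producer.init_transactions()

producer.begin_transaction()
try:
    producer.produce("output-topic", key=b"k", value=b"processed")
    # All records in the transaction become visible downstream atomically.
    producer.commit_transaction()
except Exception:
    producer.abort_transaction()
    raise
```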
Serverless Kafka on AWS as Part of a Cloud-native Data Lake Architecture (Kai Wähner)
AWS Data Lake / Lake House + Confluent Cloud for Serverless Apache Kafka. Learn about use cases, architectures, and features.
Data must be continuously collected, processed, and reactively used in applications across the entire enterprise - some in real time, some in batch mode. In other words: As an enterprise becomes increasingly software-defined, it needs a data platform designed primarily for "data in motion" rather than "data at rest."
Apache Kafka is now mainstream when it comes to data in motion! The Kafka API has become the de facto standard for event-driven architectures and event streaming. Unfortunately, the cost of running it yourself is very often too expensive when you add factors like scaling, administration, support, security, creating connectors...and everything else that goes with it. Resources in enterprises are scarce: this applies to both the best team members and the budget.
The cloud - as we all know - offers the perfect solution to such challenges.
Most likely, fully-managed cloud services such as AWS S3, DynamoDB or Redshift are already in use. Now it is time to implement "fully-managed" for Kafka as well - with Confluent Cloud on AWS.
- Building a central integration layer that doesn't care where or how much data is coming from
- Implementing scalable data stream processing to gain real-time insights
- Leveraging fully managed connectors (like S3, Redshift, Kinesis, MongoDB Atlas and more) to quickly access data
Confluent Cloud in action? Let's show how ao.com made it happen!
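As a rough idea of what "fully managed Kafka" looks like from application code, here is a minimal producer configured for Confluent Cloud; the bootstrap server, API key/secret, and topic are placeholders you would take from your own cluster.

```python
from confluent_kafka import Producer

# Standard Confluent Cloud connection settings (all values are placeholders).
producer = Producer({
    "bootstrap.servers": "<cluster>.confluent.cloud:9092",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<api-key>",
    "sasl.password": "<api-secret>",
})

producer.produce("orders", key=b"order-1", value=b'{"total": 42.0}')
producer.flush()  # block until delivery for this short example
```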
Microservices, Apache Kafka, Node, Dapr and more - Part Two (Fontys Hogeschoo... (Lucas Jellema)
Apache Kafka is one of the best known enterprise-grade message brokers – created at LinkedIn, donated to the Apache Software Foundation, and used in an ever-growing number of organizations to provide a backbone for asynchronous communication. This session introduces Apache Kafka – history, concepts, community and tooling. In a hands-on lab, participants will create topics, publish and consume messages, and get a general feel for Kafka. Simple microservices are developed in NodeJS – publishing to and consuming from Apache Kafka.
Dapr.io has support for Apache Kafka. Using Kafka through Dapr is very straightforward, as is explained, demonstrated and applied in a second hands-on lab – with applications in various programming languages. Participants will even be able to exchange events across their laptops – through a cloud-based Kafka broker.
Use of Apache Kafka in several architecture patterns is discussed – such as data integration, microservices, CQRS, Event Sourcing – along with a number of real-world use cases from several well-known organizations. The Kafka Connect framework is introduced – a set of adapters that make it easy to connect Kafka to sources and sinks, from which change events are captured and to which messages are published, respectively.
Bonus Lab: Apache Kafka runs on Kubernetes, as does Dapr.io. Multiple mutually interacting microservices are deployed on the same local Kubernetes cluster.
Building Pinterest Real-Time Ads Platform Using Kafka Streams (confluent)
Building Pinterest Real-Time Ads Platform Using Kafka Streams (Liquan Pei + Boyang Chen, Pinterest) Kafka Summit SF 2018
In this talk, we are sharing the experience of building Pinterest's real-time Ads Platform utilizing Kafka Streams. The real-time budgeting system is the most mission-critical component of the Ads Platform, as it controls how each ad is delivered to maximize user, advertiser and Pinterest value. The system needs to handle over 50,000 queries per second (QPS) of impressions, requires less than five seconds of end-to-end latency, and must recover within five minutes during outages. It also needs to be scalable to handle the fast growth of Pinterest's ads business.
The real-time budgeting system is composed of a real-time stream-stream joiner, a real-time spend aggregator and a spend predictor. At Pinterest's scale, we had to overcome quite a few challenges to make each component work. For example, the stream-stream joiner needs to maintain terabyte-size state while supporting fast recovery, and the real-time spend aggregator needs to publish to thousands of ads servers while supporting over one million read QPS. We chose Kafka Streams as it provides millisecond-level latency guarantees, scalable event-based processing and easy-to-use APIs. In the process of building the system, we performed extensive tuning of RocksDB and the Kafka Producer and Consumer, and pushed several open source contributions to Apache Kafka. We are also working on adding a remote checkpoint for Kafka Streams state to reduce cold-start time when adding more machines to the application. We believe our experience can benefit people who want to build real-time streaming solutions at large scale and deeply understand Kafka Streams.
Data Driving Yahoo Mail Growth and Evolution with a 50 PB Hadoop Warehouse (DataWorks Summit)
Yahoo Mail has 200+ million users a month and generates hundreds of terabytes of data per day, which continues to grow steadily. The nature of email messages has also evolved: for example, today the majority of them are generated by machines, consisting of newsletters, social media notifications, purchase invoices, travel bookings, and the like, which drove innovations in product development to help users organize their inboxes.
Since 2014, the Yahoo Mail Data Engineering team took on the task of revamping the Mail data warehouse and analytics infrastructure in order to drive the continued growth and evolution of Yahoo Mail. Along the way we have built a 50 PB Hadoop warehouse, and surrounding analytics and machine learning programs that have transformed the way data plays in Yahoo Mail.
In this session we will share our experience from this 3-year journey: the system architecture, the analytics systems built, and the learnings from development and the drive for adoption.
Distributed Databases Deconstructed: CockroachDB, TiDB and YugaByte DB (YugabyteDB)
Slides for "Distributed Databases Deconstructed: CockroachDB, TiDB and YugaByte DB", a webinar by Amey Banarse, Principal Data Architect at Yugabyte, recorded on Oct 30, 2019 at 11 AM Pacific.
Playback here: https://vimeo.com/369929255
Bringing Kafka Without Zookeeper Into Production with Colin McCabe | Kafka Su... (HostedbyConfluent)
Over the last few years, we have been working on removing the dependency on ZooKeeper from Apache Kafka®. Instead of using an external system to store metadata, Kafka can now manage its own metadata. This new mode of operation is called Kafka Raft mode, or "KRaft" for short. It has many performance and scalability benefits.
This talk will discuss our efforts to get KRaft mode production-ready. We will talk about the old and new architectures, and how we adapted features to work in both worlds. We will also talk about our experiences with testing and deploying the new software. Finally, we'll talk about what's planned for the future.
Streaming ETL to Elastic with Apache Kafka and KSQL (confluent)
Companies are recognizing the importance of a low-latency, scalable, fault-tolerant data backbone, in the form of the Apache Kafka streaming platform. With Kafka, developers can integrate multiple sources and systems, enabling low-latency analytics, event-driven architectures and the population of multiple downstream systems. These data pipelines can be built using configuration alone.
In this talk we’ll see how easy it is to stream data from sources such as databases and into Kafka using the Kafka Connect API. We’ll use KSQL to filter, aggregate and join it to other data, and then stream this enriched data from Kafka out into targets such as Elasticsearch. All of this can be accomplished without a single line of code!
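A sketch of what driving ksqlDB from a client might look like, using its REST endpoint from Python; the stream and table names and the join are hypothetical, and assume an `orders` stream and a `customers` table have already been declared.

```python
import requests

# A hypothetical enrichment statement: join a stream to a lookup table.
stmt = """
CREATE STREAM enriched AS
  SELECT o.id, o.amount, c.name
  FROM orders o
  LEFT JOIN customers c ON o.customer_id = c.id;
"""

# ksqlDB accepts statements via its /ksql REST endpoint (address is a placeholder).
resp = requests.post(
    "http://localhost:8088/ksql",
    json={"ksql": stmt, "streamsProperties": {}},
)
resp.raise_for_status()
print(resp.json())
```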
The following report describes the DBMS of an online Art Gallery shopping store built using Django (front end) and MySQL 8.0 (for database storage).
Feature Store as a Data Foundation for Machine Learning (Provectus)
Looking to design and build a centralized, scalable feature store for your data science and machine learning teams to take advantage of? Come and learn how from experts at Provectus and Amazon Web Services (AWS)!
A feature store is a key component of the ML stack and data infrastructure that enables feature engineering and management. By having a feature store, organizations can save massive amounts of resources, innovate faster, and drive ML processes at scale. In this webinar, you will learn how to build a feature store with a data mesh pattern, how to achieve consistency between real-time and training features, and how to improve reproducibility with time travel for data.
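The webinar does not prescribe a specific tool, but as one concrete illustration of train/serve feature consistency, here is a sketch using the open-source Feast feature store: the point-in-time join returns feature values as they were at each row's event timestamp. The repo path, entity ids, and feature view name are placeholders.

```python
import pandas as pd
from feast import FeatureStore  # one possible open-source choice, not the webinar's

store = FeatureStore(repo_path=".")

# Point-in-time join: each row gets feature values as of its event_timestamp,
# which keeps training features consistent with what serving would have seen.
entity_df = pd.DataFrame({
    "driver_id": [1001, 1002],
    "event_timestamp": pd.to_datetime(["2024-01-01", "2024-01-02"]),
})
training_df = store.get_historical_features(
    entity_df=entity_df,
    features=["driver_stats:avg_daily_trips"],  # hypothetical feature view
).to_df()
print(training_df.head())
```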
Agenda
- Modern Data Lakes & Modern ML Infrastructure
- Existing and Emerging Architectural Shifts
- Feature Store: Overview and Reference Architecture
- AWS Perspective on Feature Store
Intended Audience
Technology executives & decision makers, manager-level tech roles, data architects & analysts, data engineers & data scientists, ML practitioners & ML engineers, and developers
Presenters
- Stepan Pushkarev, Chief Technology Officer, Provectus
- Gandhi Raketla, Senior Solutions Architect, AWS
- German Osin, Senior Solutions Architect, Provectus
Feel free to share this presentation with your colleagues and don't hesitate to reach out to us at info@provectus.com if you have any questions!
REQUEST WEBINAR: https://provectus.com/webinar-feature-store-as-data-foundation-for-ml-nov-2020/
Data scientists and machine learning practitioners nowadays seem to be churning out models by the dozen, and they continuously experiment to find ways to improve their accuracy. They also use a variety of ML and DL frameworks and languages, and a typical organization may find that this results in a heterogeneous, complicated collection of assets that require different types of runtimes, resources and sometimes even specialized compute to operate efficiently.
But what does it mean for an enterprise to actually take these models to "production"? How does an organization scale inference engines out and make them available for real-time applications without significant latencies? Different techniques are needed for batch (offline) inference and for instant, online scoring. Data needs to be accessed from various sources, and cleansing and transformation of the data need to be enabled prior to any predictions. In many cases, there may be no substitute for customized data handling with scripting either.
Enterprises also require auditing and authorization built in, along with approval processes, while still supporting a "continuous delivery" paradigm whereby a data scientist can enable insights faster. Not all models are created equal, nor are the consumers of a model, so enterprises require both metering and allocation of compute resources for SLAs.
In this session, we will take a look at how machine learning is operationalized in IBM Data Science Experience (DSX), a Kubernetes-based offering for the private cloud, optimized for the Hortonworks Data Platform. DSX essentially brings typical software engineering development practices to data science, organizing the dev->test->production flow for machine learning assets in much the same way as typical software deployments. We will also see what it means to deploy, monitor the accuracy of, and even roll back models and custom scorers, as well as how API-based techniques enable consuming business processes and applications to remain relatively stable amidst all the chaos.
Speaker
Piotr Mierzejewski, Program Director Development IBM DSX Local, IBM
Data Con LA 2022 - Pre-Recorded - Simplifying AI/ML using Databricks feature... (Data Con LA)
Debu Sinha, Sr Specialist Solutions Architect - AI/ML at Databricks
AI / ML / Data Science
1. What are feature stores?
2. Why are they important?
3. Using Databricks and the feature store offering to streamline ML.
This holds true for small companies as well. How we frame our approach to AI initiatives will determine its success. Don't worry, I am not a zealot; I will not tell you AI and ML are a cure-all that will solve all your problems. Some tasks are particularly well suited to these techniques, but not all. What I love about them is that they allow us to tackle difficult problems that might otherwise be too daunting.
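As a minimal illustration of point 3, here is a sketch using the Databricks Feature Store client; the table name and schema are made up for the example.

```python
from pyspark.sql import SparkSession
from databricks.feature_store import FeatureStoreClient

spark = SparkSession.builder.getOrCreate()  # ambient on Databricks
fs = FeatureStoreClient()

# Toy feature DataFrame (hypothetical schema).
user_features_df = spark.createDataFrame(
    [(1, 0.42), (2, 0.17)], ["user_id", "avg_session_hours"])

# Register it as a governed feature table (hypothetical name).
fs.create_table(
    name="ml.recsys.user_features",
    primary_keys=["user_id"],
    df=user_features_df,
    description="Aggregated user behaviour features",
)

# Reading the same table at training time keeps train/serve features consistent.
features = fs.read_table("ml.recsys.user_features")
features.show()
```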
How to Productionize Your Machine Learning Models Using Apache Spark MLlib 2.... (Databricks)
Apache Spark has rapidly become a key tool for data scientists to explore, understand and transform massive datasets and to build and train advanced machine learning models. The question then becomes: how do you deploy these ML models to a production environment? How do you embed what you've learned into customer-facing data applications?
In this talk I will discuss best practices on how data scientists productionize machine learning models, do a deep dive with actual case studies, and show live tutorials of a few example architectures and code in Python, Scala, Java and SQL.
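One common productionization pattern with Spark MLlib is to persist the fitted pipeline and reload it in a separate scoring job; here is a hedged sketch with toy data and a placeholder model path.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.getOrCreate()

# Toy training data: two features and a binary label.
train_df = spark.createDataFrame(
    [(1.0, 0.3, 0.0), (0.2, 0.9, 1.0)], ["f1", "f2", "label"])

pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=["f1", "f2"], outputCol="features"),
    LogisticRegression(labelCol="label"),
])
model = pipeline.fit(train_df)

# Persist once; the production scoring job reloads the same pipeline
# (path is a placeholder for a shared filesystem or object store location).
model.write().overwrite().save("/tmp/models/churn/v1")
served = PipelineModel.load("/tmp/models/churn/v1")
served.transform(train_df).select("prediction").show()
```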
Apache® Spark™ MLlib 2.x: How to Productionize your Machine Learning Models (Anyscale)
Apache Spark has rapidly become a key tool for data scientists to explore, understand and transform massive datasets and to build and train advanced machine learning models. The question then becomes: how do I deploy these models to a production environment? How do I embed what I have learned into customer-facing data applications?
In this webinar, we will:
- discuss best practices from Databricks on how our customers productionize machine learning models
- do a deep dive with actual customer case studies
- show live tutorials of a few example architectures and code in Python, Scala, Java and SQL
Modern machine learning systems can be very complex and fall into many pitfalls. It's very easy to unintentionally introduce technical debt into such a complex structure. One approach that addresses some of these anti-patterns is a feature store: a missing piece that fills the gap between raw data and machine learning models. Not only will it help you handle technical debt but, even more importantly, it speeds up the development of new models.
Simplifying the Creation of Machine Learning Workflow Pipelines for IoT Appli... (ScyllaDB)
SmartDeployAI builds data workflow pipelines for running large-scale Industrial IoT applications. Their software platform is a shared multi-tenant Kubernetes cluster environment where multiple workflow pipelines can be bootstrapped and scheduled to run concurrently. Learn how IoT sensors and devices are provisioned on their platform. This process requires them to track markers and parameters in their metadata store in order to run various pipeline models. They need to persist this data and make it available throughout the entire data workflow pipeline life-cycle.
Learn how their journey led to Scylla, and how they minimized latencies, maintained data storage isolation for each workflow pipeline in a shared Kubernetes cluster, bootstrapped pipeline artifacts and resources on demand and reduced their resource consumption footprint.
Bring Your SAP and Enterprise Data to Hadoop, Kafka, and the Cloud (DataWorks Summit)
The world’s largest enterprises run their infrastructure on Oracle, DB2 and SQL and their critical business operations on SAP applications. Organisations need this data to be available in real time to conduct necessary analytics. However, delivering this heterogeneous data at the speed required can be a huge challenge, because of the complex underlying data models and structures and legacy manual processes that are prone to errors and delays.
Unlock these silos of data and enable the new advanced analytics platforms by attending this session.
Find out:
• How to overcome common challenges faced by enterprises trying to access their SAP data
• How you can integrate SAP data in real time with change data capture (CDC) technology
• How organisations are using Attunity Replicate for SAP to stream SAP data into Kafka
Speakers:
John Hol, Regional Director, Attunity
Mike Hollobon, Director Business Development, IBT
MLOps journey at Swisscom: AI Use Cases, Architecture and Future Vision (BATbern)
What powers the AI/ML services of Switzerland's leading telecommunication company? In this talk, we will provide an overview of the different AI/ML projects at Swisscom, from Conversational AI and Recommender Systems to Anomaly Detection. Moreover, we will show how we automate, scale, and operationalise these ML pipelines in production, highlighting the MLOps techniques and open source tools that are used. Finally, we will present Swisscom's roadmap towards the cloud with AWS and discuss how we envision a common MLOps solution for the organisation.
Building a MLOps Platform Around MLflow to Enable Model Productionalization i... (Databricks)
Getting machine learning models to production is notoriously difficult: it involves multiple teams (data scientists, data and machine learning engineers, operations, …) who often do not speak to each other very well; the model can be trained in one environment and then productionalized in a completely different environment; and it is not just about the code, but also about the data (features) and the model itself. At DataSentics, as a machine learning and cloud engineering studio, we see this struggle firsthand – on our internal projects and on clients' projects as well.
Operationalizing Machine Learning at Scale at Starbucks (Databricks)
As ML-driven innovations are propelled by the self-service capabilities in the Enterprise Data and Analytics Platform, teams face a significant entry barrier and productivity issues in moving from POCs to operating ML-powered apps at scale in production.
Flink Forward San Francisco 2018: Dave Torok & Sameer Wadkar - "Embedding Fl..." (Flink Forward)
Operationalizing Machine Learning models is never easy. Our team at Comcast has been challenged with operationalizing predictive ML models to improve customer care experiences. Using Apache Flink we have been able to apply real-time streaming to all aspects of the Machine Learning lifecycle. This includes data feature exploration and preparation by data scientists, deploying live models to serve near-real-time predictions, and validating results for model retraining and iteration. We will share best practices and lessons learned from Flink’s role in our operationalized lifecycle including:
• Executing as the “Prediction Pipeline” – a model container environment for near-real-time streaming and batch predictions
• Preparing streaming features and data sets for model training, as input for production model predictions, and for a continually-updated customer context
• Using connected streams and savepoints for “Live in the Dark”, multi-variant testing, and validation scenarios
• Incorporating Flink’s Queryable State as an approach to the online “Feature Store” – a data catalog for reuse by multiple models and use cases
• Enabling versioned models, versioned feature sets, and versioned data through DevOps approaches.
MLOps Virtual Event | Building Machine Learning Platforms for the Full Lifecycle (Databricks)
Successfully building a machine learning model is hard enough. Reproducing your results at scale — enabling others to reproduce pipelines, comparing results from other versions, moving models into production, redeploying and rolling out updated models — is exponentially harder. To address these challenges and accelerate innovation, many companies are building custom “ML platforms” to automate the end-to-end ML lifecycle.
Watch a replay of this MLOps Virtual Event to hear more about the latest developments and best practices for managing the full ML lifecycle on Databricks with MLflow. We covered a checklist of capabilities you’ll need, common pitfalls, technological and organizational challenges, and how to overcome them.
https://www.youtube.com/playlist?list=PLTPXxbhUt-YUFNBwBsSIlknoNbS7GExZw
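For orientation, here is a small sketch of the MLflow workflow the event covers: track a run, log the model, then register it in the Model Registry for controlled rollout. The registered model name is a placeholder, and the toy sklearn model stands in for a real pipeline.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

with mlflow.start_run() as run:
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model")

# Register the logged model so it can be promoted through stages.
mlflow.register_model(f"runs:/{run.info.run_id}/model", "iris-classifier")
```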
ML Model Deployment and Scoring on the Edge with Automatic ML & DF (Sri Ambati)
Machine Learning Model Deployment and Scoring on the Edge with Automatic Machine Learning and Data Flow
YouTube Video URL: https://youtu.be/gB0bTH-L6DE
Deploying machine learning models to the edge can present significant ML/IoT challenges, centered around the need for low-latency and accurate scoring in minimal-resource environments. H2O.ai's Driverless AI AutoML and Cloudera Data Flow work nicely together to solve this challenge. Driverless AI automates the building of accurate machine learning models, which are deployed as light-footprint, low-latency Java or C++ artifacts known as MOJOs (Model Object, Optimized). Cloudera Data Flow leverages Apache NiFi, which offers an innovative data flow framework for hosting MOJOs to make predictions on data moving at the edge.
Data Streaming with Apache Kafka & MongoDB - EMEA (Andrew Morgan)
A new generation of technologies is needed to consume and exploit today's real time, fast moving data sources. Apache Kafka, originally developed at LinkedIn, has emerged as one of these key new technologies.
This webinar explores the use-cases and architecture for Kafka, and how it integrates with MongoDB to build sophisticated data-driven applications that exploit new sources of data.
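As a toy illustration of Kafka feeding MongoDB, here is a sketch that drains a topic into a collection with the confluent-kafka and pymongo clients; topic, database, and collection names are placeholders, and a production setup would typically use a connector rather than hand-rolled glue.

```python
import json
from confluent_kafka import Consumer
from pymongo import MongoClient

# Placeholder broker, topic, and MongoDB endpoint.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "mongo-sink",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["events"])
coll = MongoClient("mongodb://localhost:27017")["demo"]["events"]

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    # Assumes JSON-serialized event payloads.
    coll.insert_one(json.loads(msg.value()))
```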
Webinar: Data Streaming with Apache Kafka & MongoDB (MongoDB)
A new generation of technologies is needed to consume and exploit today's real time, fast moving data sources. Apache Kafka, originally developed at LinkedIn, has emerged as one of these key new technologies.
In this presentation, we show how Data Reply helped an Austrian fintech customer to overcome previous performance limitations in their data analytics landscape, leverage real-time pipelines, break down monoliths, and foster a self-service data culture to enable new event-driven and business-critical use cases.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation, however, takes real work: vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Generating a custom Ruby SDK for your web service or Rails API using Smithy – g2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Elevating Tactical DDD Patterns Through Object Calisthenics – Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: how do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes aligns remarkably well with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
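As a hedged illustration (not taken from the talk itself), here is one Object Calisthenics rule, “wrap all primitives and strings”, applied to a DDD value object in Java; the Price type and its behaviour are invented for the example:

```java
import java.math.BigDecimal;
import java.util.Objects;

// A DDD value object following the Object Calisthenics rule
// "wrap all primitives": the domain concept is its own type
// instead of a bare BigDecimal floating around the codebase.
public final class Price {
    private final BigDecimal amount; // rule: few instance variables, no setters

    public Price(BigDecimal amount) {
        if (amount.signum() < 0) {
            throw new IllegalArgumentException("price cannot be negative");
        }
        this.amount = amount;
    }

    // Behaviour lives with the data instead of leaking via getters.
    public Price add(Price other) {
        return new Price(amount.add(other.amount));
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof Price && amount.compareTo(((Price) o).amount) == 0;
    }

    @Override
    public int hashCode() {
        return Objects.hash(amount.stripTrailingZeros());
    }
}
```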
DevOps and Testing slides at DASA Connect – Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
The Art of the Pitch: WordPress Relationships and Sales – Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers, without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... – UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
- See how to accelerate model training and optimize model performance with active learning
- Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
- Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Knowledge engineering: from people to machines and back
Real-time data processing and model inferencing platform with Kafka Streams (Navinder Singh - Walmart)
1. Building real-time data processing and model inferencing platform with Kafka Streams – Navinder Pal Singh Brar
2. ML @ Walmart: Personalization, Fraud Detection, Display Advertisement, Email Advertisement, Omnichannel Reorder, Autosuggest for Out-of-Stock Products, Delivery Optimization, Smart Pricing, Inventory Forecasting, Voice Commerce.
3. Data Science Model Life Cycle: Business Understanding → Data Collection → Data Preparation → Exploratory Data Analysis → Modelling → Model Evaluation → Model Deployment. More than 50% of the time is spent on data collection and cleaning; the remaining 30–40% goes into making the model production-ready with the help of developers. (Images courtesy of http://www.oogazone.com and https://www.vectorstock.com)
4. Mission Statement: build a platform to process events, derive inferences, and serve knowledge. Goals:
- Reliable, highly available, and scalable
- High throughput and low latency
- Universal feature store across models
- Pluggable design to onboard new models
- Reduced dev-to-prod time
5. Customer Backbone (CBB): a distributed stream-processing platform built on Kafka Streams. Data scientists can bring their trained models and host them on top of CBB, which takes care of:
- Data Ingestion
- Data Transformation
- Feature Extraction
- Model Inferencing/Scoring
- Post Processing
Motto: Depth, Freshness & Reach.
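The list of stages above suggests a natural plug-in seam for onboarding new models. As a hedged sketch (this interface is illustrative, not CBB's actual API), hosting a model could mean implementing one contract per stage:

```java
// Hypothetical plug-in contract for hosting a tenant model on CBB.
// Names and type parameters are assumptions made for illustration.
public interface CbbModel<E, F, S> {
    F extractFeatures(E event);   // Feature Extraction from ingested, transformed events
    S score(F features);          // Model Inferencing/Scoring
    S postProcess(S rawScore);    // Post Processing before results are served
}
```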
7. Why Streams?
- Simple
- Library, not a framework
- Embedded DB
- Interactive Queries
- Highly scalable
- DSL / low-level APIs
- At-least-once / exactly-once guarantees
Other alternatives considered: Apache Samza, Apache Spark, Apache Flink, Dynomite.
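To make the “library, not a framework” point concrete, here is a minimal Kafka Streams sketch: the topology runs inside your own JVM with no cluster scheduler. Topic names and the toy transformation are illustrative, not from the deck:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class MinimalStreamsApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "cbb-demo"); // illustrative id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> events = builder.stream("customer-events"); // hypothetical topic
        events.filter((key, value) -> value != null)
              .mapValues(String::toUpperCase)  // stand-in transformation
              .to("enriched-events");          // hypothetical topic

        // Embedded in the application JVM: start and stop like any library.
        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```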
8. Multitenancy: the challenges
1. Sequential execution of tenant models
2. Any corrupt model can bring down the JVM
3. Any model upgrade requires a JVM restart
4. Client isolation
9. CBB Data Pipeline (diagram): the CBB Platform, built on Kafka Streams, sits between CBB Internal Kafka and the tenant models – Recommendation, Personalization, Fraud Detection, and more.
10. CBB Internals (diagram): before, each model (A, B, C) was coupled to its own store (A store, B store, C store) inside the processing path; after, the CBB Processor updates the CBB Store and each model reads from its own store independently (see KIP-408: Add Asynchronous Processing To Kafka Streams).
11. Multitenancy: the solution
- Process events and update CBB stores
- Different clients can pull events at their own pace
- Appropriate sharing and isolation
12. Data Model: Tenant Stores, Hop-On Store, and Platform Stores (the LEAF Store and the Sequence Store). The LEAF Store holds:
1. Linkages – customer graph
2. Events – customer interactions
3. Address – addressable entities
4. Facets – customer features
13. Sequence Store (diagram): an ordered log of sequence slots (0, 1, 2, … 11); the CBB Processor writes at the head while each model tracks its own read position, e.g. Model A at offset=3 and Model B at offset=8.
14. Model Inferencing
Problem: data scientists use various machine learning libraries (e.g. Spark ML, Scikit-learn, TensorFlow), and all of them need to be supported in production.
Solution: MLeap Runtime
- Provides production-level scoring infrastructure independent of the core libraries
- Executes Spark ML pipelines without a dependency on the Spark context
- Executes Scikit-learn pipelines without a dependency on numpy or pandas
16. Global Datastores
Problem: global data (e.g. the product catalog) requires one copy of the global store per JVM; processing global topics doesn't work with huge data; and the global data must be present before an active task moves to a VM.
Solution: create the global stores in a separate Kafka Streams app and bootstrap each JVM on update.
17. Walmart Scale: 11,000 stores in 27 countries; 100 million weekly customers in stores; 100 million unique monthly visitors at Walmart.com; 55 banners, including Jet.com and Hayneedle. (Source: https://corprate.walmart.com/our-story/our-business)
18. Identity Graph Processing
Problem: link different ids' data together when they are identified to be the same person.
Solution: real-time identity graph conflation, which aims to provide a coherent view of a customer by building an identity graph uniting all customer identities across channels and across Walmart subsidiaries.
19. Customer Identity Graph (diagram): graph processing co-locates the data of two or more customer identities that are linked to each other on the same physical node; identities id1–id6, initially spread across Node A and Node B, end up together on Node A once linked.
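The deck doesn't show the conflation algorithm itself; as a hedged illustration, linking identities that turn out to be the same person is commonly modelled as union-find over the identity graph, after which every member of a component can be routed to the same node. The class and routing scheme below are assumptions made for the example:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical union-find over customer identity ids; not the platform's actual code.
public class IdentityConflation {
    private final Map<String, String> parent = new HashMap<>();

    private String find(String id) {
        parent.putIfAbsent(id, id);
        String root = parent.get(id);
        if (!root.equals(id)) {
            root = find(root);
            parent.put(id, root); // path compression
        }
        return root;
    }

    // Called when two identities are identified as the same person.
    public void link(String a, String b) {
        parent.put(find(a), find(b));
    }

    // Route all identities of one person to the same physical node.
    public int nodeFor(String id, int numNodes) {
        return Math.floorMod(find(id).hashCode(), numNodes);
    }

    public static void main(String[] args) {
        IdentityConflation graph = new IdentityConflation();
        graph.link("id1", "id2");
        graph.link("id2", "id6");
        // id1, id2, and id6 now share a component and land on the same node.
        System.out.println(graph.nodeFor("id1", 8) == graph.nodeFor("id6", 8)); // true
    }
}
```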