KeyBank is using an iterative design approach to scale its end-to-end enterprise monitoring system, with Kafka and Elasticsearch at its core. See how they did it and the lessons learned along the way.
How Apache Spark and Apache Hadoop are being used to keep banking regulators ...DataWorks Summit
The global financial crisis showed that traditional IT systems at banks were ill-equipped to monitor and manage a daily-changing risk landscape. The sheer amount of data that needed to be crunched meant that many banks were days behind in calculating, understanding, and reporting their risk positions. Post-crisis, a regulatory review led to new legislation, BCBS 239: Principles for effective risk data aggregation and risk reporting, which requires banks to meet more stringent timeliness requirements in their ability to aggregate and report on their quickly changing risk positions, or risk fines running into the millions of dollars. To meet these new requirements, banks have been forced to rethink their traditional IT architectures, which are unable to cope with the sheer volume of risk data, and are instead turning to Apache Hadoop and Apache Spark to build out the next generation of risk systems. In this talk you will discover how some of the leading banks in the world are leveraging Apache Hadoop and Apache Spark to meet the BCBS 239 regulation.
Speaker
Kunal Taneja
Journey to Creating a 360 View of the Customer: Implementing Big Data Strateg...Databricks
"The modernization of the tobacco industry is resulting in a shift towards a more data-driven approach to trade, operations and the consumer. The need to scale while maintaining margins is paramount, and today’s consumer requires more personalized engagement and value at every interaction to drive sales and revenue.
At Altria, we’re at the forefront of this evolution, leveraging hundreds of terabytes of big data (such as point-of-sale, clickstream, mobile data, and more) and machine learning to improve our ability to make smarter decisions and outpace the competition. This talk recaps our big data journey from a legacy data infrastructure (Teradata), isolated data systems, and the lack of resources which prevented our ability to move quickly and scale, to our current state where we’ve successfully implemented, architected and on-boarded tools and processes in stages of data acquisition, store, prepare, and business intelligence with Azure Data Lake, Azure Databricks, Azure Data factory, APIs Managements, Streaming and Hosting technologies and provided Data Analytics platform.
We’ll discuss the roadblocks we came across, how we overcame them, and how we employed a unified approach to big data and analytics through the fully managed Azure Databricks platform and the Azure suite of tools which allowed us to streamline workflows, improve operational performance, and ultimately introduce new customer experiences that drive engagement and revenue."
Apache Kafka vs. Integration Middleware (MQ, ETL, ESB)Kai Wähner
Learn the differences between an event-driven streaming platform and middleware like MQ, ETL and ESBs – including best practices and anti-patterns, but also how these concepts and tools complement each other in an enterprise architecture.
Extract-Transform-Load (ETL) is still a widely used pattern for moving data between different systems via batch processing. Due to its challenges in today’s world, where real time is the new standard, an Enterprise Service Bus (ESB) is used in many enterprises as an integration backbone between any kind of microservice, legacy application, or cloud service, moving data via SOAP / REST web services or other technologies. Stream processing is often added as its own component in the enterprise architecture for correlating different events to implement contextual rules and stateful analytics. Using all these components introduces challenges and complexities in development and operations.
This session discusses how teams in different industries solve these challenges by building a native streaming platform from the ground up instead of using ETL and ESB tools in their architecture. This makes it possible to build and deploy independent, mission-critical, real-time streaming applications and microservices. The architecture leverages distributed processing and fault tolerance with fast failover, no-downtime rolling deployments, and the ability to reprocess events, so you can recalculate output when your code changes. Integration and stream processing remain key functionality but can be realized natively in real time instead of using additional ETL, ESB, or stream processing tools.
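The event-reprocessing capability mentioned above is easy to demonstrate. Below is a minimal, hedged sketch in Python using the confluent_kafka client: a consumer rewinds its assigned partitions to the beginning so changed code can recompute its output from the full event history. The broker address, topic, and group id are hypothetical.

```python
# Minimal, hedged sketch: replay a topic from the beginning so changed code
# can recompute its output from the full event history. Broker, topic, and
# group id are hypothetical; requires the confluent_kafka package.
from confluent_kafka import Consumer, OFFSET_BEGINNING

def recompute_output(key, value):
    # Placeholder for the new business logic that motivated the replay.
    print(key, value)

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "stream-app-v2",      # fresh group id => independent offsets
    "auto.offset.reset": "earliest",  # read from the oldest retained event
    "enable.auto.commit": False,
})

def on_assign(c, partitions):
    # Rewind every assigned partition so the consumer sees all history.
    for p in partitions:
        p.offset = OFFSET_BEGINNING
    c.assign(partitions)

consumer.subscribe(["events"], on_assign=on_assign)
try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            raise RuntimeError(msg.error())
        recompute_output(msg.key(), msg.value())
finally:
    consumer.close()
```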
Thomas Lamirault_Mohamed Amine Abdessemed -A brief history of time with Apac...Flink Forward
Many use cases in the telecommunications industry require producing counters, quality metrics, and alarms in a streaming fashion with very low latency. Most of these metrics are only valuable when they are made available as soon as the associated events happen. In our company we were looking for a system able to produce this kind of real-time indicator, one that must handle massive amounts of data (400,000 events per second) with frequent peak loads (like New Year’s Eve) and out-of-order events caused by massive network disorder. Low latency and flexible window management with specific watermark emission are also must-haves. Heterogeneous formats, multi-flow correlation, and the possibility of late data arrival are other challenges. With Flink already widely used at Bouygues Telecom for real-time data integration, its features made it the evident candidate for the future system. In this talk, we'll present a real use case of streaming analytics using Flink, Kafka, and HBase along with other legacy systems.
Want to see a high-level overview of the products in the Microsoft data platform portfolio in Azure? I’ll cover products in the categories of OLTP, OLAP, data warehouse, storage, data transport, data prep, data lake, IaaS, PaaS, SMP/MPP, NoSQL, Hadoop, open source, reporting, machine learning, and AI. It’s a lot to digest but I’ll categorize the products and discuss their use cases to help you narrow down the best products for the solution you want to build.
An Introduction to Confluent Cloud: Apache Kafka as a Serviceconfluent
Business breakout during Confluent’s streaming event in Munich, presented by Hans Jespersen, VP WW Systems Engineering at Confluent. This three-day hands-on course focused on how to build, manage, and monitor clusters using industry best practices developed by the world’s foremost Apache Kafka™ experts. The sessions covered how Kafka and the Confluent Platform work, how their main subsystems interact, and how to set up, manage, monitor, and tune your cluster.
Building Data Quality Audit Framework using Delta Lake at CernerDatabricks
Cerner needs to know what assets it owns, where they are located, and the status of those assets. A configuration management system is an inventory of IT assets such as servers, network devices, storage arrays, and software licenses.
Building Reliable Data Lakes at Scale with Delta LakeDatabricks
Most data practitioners grapple with data reliability issues—it’s the bane of their existence. Data engineers, in particular, strive to design, deploy, and serve reliable data in a performant manner so that their organizations can make the most of their valuable corporate data assets.
Delta Lake is an open-source storage layer that brings ACID transactions to Apache Spark™ and big data workloads. Built on open standards, Delta Lake employs co-designed compute and storage and is compatible with Spark APIs. It delivers high data reliability and query performance to support big data use cases, from batch and streaming ingest to fast interactive queries and machine learning. In this tutorial we will discuss the requirements of modern data engineering, the challenges data engineers face when it comes to data reliability and performance, and how Delta Lake can help. Through presentations, code examples, and notebooks, we will explain these challenges and the use of Delta Lake to address them. You will walk away with an understanding of how you can apply this innovation to your data architecture and the benefits you can gain.
This tutorial will be both an instructor-led and a hands-on interactive session. Instructions on how to get tutorial materials will be covered in class.
What you’ll learn:
Understand the key data reliability challenges
How Delta Lake brings reliability to data lakes at scale
Understand how Delta Lake fits within an Apache Spark™ environment
How to use Delta Lake to realize data reliability improvements
Prerequisites
A fully-charged laptop (8-16GB memory) with Chrome or Firefox
Pre-register for Databricks Community Edition
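For a taste of what the tutorial covers, here is a minimal, hedged PySpark sketch of Delta Lake's core promise: atomic, versioned table commits with snapshot reads and time travel. It assumes a Spark environment with the delta-spark package available; the path and columns are illustrative.

```python
# Minimal, hedged sketch: Delta Lake's ACID tables from PySpark.
# Assumes pyspark and delta-spark are installed; path/columns illustrative.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("delta-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Each write is an atomic, versioned commit to the table's transaction log.
spark.createDataFrame(
    [(1, "open"), (2, "closed")], ["id", "status"]
).write.format("delta").mode("overwrite").save("/tmp/events")

# Readers always see a consistent snapshot, even while writers commit.
spark.read.format("delta").load("/tmp/events").show()

# Time travel: query the table as of an earlier committed version.
spark.read.format("delta").option("versionAsOf", 0).load("/tmp/events").show()
```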
Capital One Delivers Risk Insights in Real Time with Stream Processingconfluent
Speakers: Ravi Dubey, Senior Manager, Software Engineering, Capital One + Jeff Sharpe, Software Engineer, Capital One
Capital One supports interactions with real-time streaming transactional data using Apache Kafka®. Kafka helps deliver information to internal operations teams and bank tellers to assist with assessing risk and protecting customers in a myriad of ways.
Inside the bank, Kafka allows Capital One to build a real-time system that takes advantage of modern data and cloud technologies without exposing customers to unnecessary data breaches or violating privacy regulations. These examples demonstrate how a streaming platform enables Capital One to act on its visions faster and in a more scalable way through the Kafka solution, helping establish Capital One as an innovator in the banking space.
Join us for this online talk on lessons learned, best practices and technical patterns of Capital One’s deployment of Apache Kafka.
-Find out how Kafka delivers on a 5-second service-level agreement (SLA) for inside branch tellers.
-Learn how to combine and host data in-memory and prevent personally identifiable information (PII) violations of in-flight transactions.
-Understand how Capital One manages Kafka Docker containers using Kubernetes.
Watch the recording: https://videos.confluent.io/watch/6e6ukQNnmASwkf9Gkdhh69?.
At Yelp, we have operated our connector ecosystem to feed vital data to domain-specific teams and data stores. We share some of our learnings and experiences from operating such a system, and touch on the next phase of its evolution.
This describes a conceptual-model approach to designing an enterprise data fabric: the set of hardware and software infrastructure, tools, and facilities used to implement, administer, manage, and operate data operations across the entire span of the data within the enterprise. That span covers all data activities, including acquisition, transformation, storage, distribution, integration, replication, availability, security, protection, disaster recovery, presentation, analytics, preservation, retention, backup, retrieval, archival, recall, deletion, monitoring, and capacity planning, across all data storage platforms, enabling use by applications to meet the data needs of the enterprise.
The conceptual data fabric model represents a rich picture of the enterprise’s data context. It embodies an idealised, target view of the data.
Designing a data fabric enables the enterprise to respond to and take advantage of key related data trends:
• Internal and External Digital Expectations
• Cloud Offerings and Services
• Data Regulations
• Analytics Capabilities
It enables the IT function to demonstrate positive data leadership. It shows that the IT function is able and willing to respond to business data needs. It allows the enterprise to meet data challenges:
• More and more data of many different types
• Increasingly distributed platform landscape
• Compliance and regulation
• Newer data technologies
• Shadow IT, which arises where the IT function cannot deliver IT change and new data facilities quickly
This approach is concerned with designing an open and flexible data fabric that improves the responsiveness of the IT function and reduces shadow IT.
MLOps and Data Quality: Deploying Reliable ML Models in ProductionProvectus
Looking to build a robust machine learning infrastructure to streamline MLOps? Learn from Provectus experts how to ensure the success of your MLOps initiative by implementing Data QA components in your ML infrastructure.
For most organizations, the development of multiple machine learning models, their deployment and maintenance in production are relatively new tasks. Join Provectus as we explain how to build an end-to-end infrastructure for machine learning, with a focus on data quality and metadata management, to standardize and streamline machine learning life cycle management (MLOps).
Agenda
- Data Quality and why it matters
- Challenges and solutions of Data Testing
- Challenges and solutions of Model Testing
- MLOps pipelines and why they matter
- How to expand validation pipelines for Data Quality
Microsoft Data Platform - What's includedJames Serra
The pace of Microsoft product innovation is so fast that even though I spend half my days learning, I struggle to keep up. And as I work with customers I find they are often in the dark about many of the products that we have since they are focused on just keeping what they have running and putting out fires. So, let me cover what products you might have missed in the Microsoft data platform world. Be prepared to discover all the various Microsoft technologies and products for collecting data, transforming it, storing it, and visualizing it. My goal is to help you not only understand each product but understand how they all fit together and their proper use cases, allowing you to build the appropriate solution that can incorporate any data in the future no matter the size, frequency, or type. Along the way we will touch on technologies covering NoSQL, Hadoop, and open source.
Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse service that makes it simple and cost-effective to efficiently analyze all your data using your existing business intelligence tools. You can start small for just $0.25 per hour with no commitment or upfront costs and scale to a petabyte or more for $1,000 per terabyte per year, less than a tenth of most other data warehousing solutions.
See a recording of the webinar based on this presentation here on YouTube: https://youtu.be/GgLKodmL5xE
Masterclass series webinars, including on-demand access to all of this year's recorded webinars: http://aws.amazon.com/campaigns/emea/masterclass/
Journey Through the Cloud webinar series, including on-demand access to all webinars so far this year: http://aws.amazon.com/campaigns/emea/journey/
Top 5 Event Streaming Use Cases for 2021 with Apache KafkaKai Wähner
Apache Kafka and Event Streaming are two of the most relevant buzzwords in tech these days. Ever wonder what the predicted TOP 5 Event Streaming Architectures and Use Cases for 2021 are? Check out the following presentation. Learn about edge deployments, hybrid and multi-cloud architectures, service mesh-based microservices, streaming machine learning, and cybersecurity.
On-demand video recording: https://videos.confluent.io/watch/XAjxV3j8hzwCcEKoZVErUJ
Tech talk on what Azure Databricks is, why you should learn it, and how to get started. We'll use PySpark and talk about some real-life examples from the trenches, including the pitfalls of leaving your clusters running accidentally and receiving a huge bill ;)
After this you will hopefully switch to Spark-as-a-service and get rid of your HDInsight/Hadoop clusters.
This is part 1 of an 8 part Data Science for Dummies series:
Databricks for dummies
Titanic survival prediction with Databricks + Python + Spark ML
Titanic with Azure Machine Learning Studio
Titanic with Databricks + Azure Machine Learning Service
Titanic with Databricks + MLS + AutoML
Titanic with Databricks + MLFlow
Titanic with DataRobot
Deployment, DevOps/MLops and Operationalization
Data Con LA 2020
Description
In this session, I introduce the Amazon Redshift lake house architecture which enables you to query data across your data warehouse, data lake, and operational databases to gain faster and deeper insights. With a lake house architecture, you can store data in open file formats in your Amazon S3 data lake.
Speaker
Antje Barth, Amazon Web Services, Sr. Developer Advocate, AI and Machine Learning
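As a hedged illustration of the lake house idea described above (not code from the session), the sketch below uses the redshift_connector Python driver to run a single query that joins a regular warehouse table with an external, S3-backed table exposed through Redshift Spectrum; every identifier here is a placeholder.

```python
# Hedged sketch: one query joining a local Redshift table with an external
# (S3-backed) table via Redshift Spectrum. All identifiers are placeholders.
import redshift_connector

conn = redshift_connector.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    database="dev",
    user="awsuser",
    password="***",
)
cur = conn.cursor()

# spectrum_sales is an external schema mapped to open-format files in S3;
# dim_store is a regular warehouse table. One query spans both.
cur.execute("""
    SELECT s.store_name, SUM(f.amount) AS total
    FROM spectrum_sales.fact_sales f
    JOIN public.dim_store s ON s.store_id = f.store_id
    GROUP BY s.store_name
    ORDER BY total DESC
    LIMIT 10
""")
for row in cur.fetchall():
    print(row)
conn.close()
```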
Building Cloud-Native App Series - Part 3 of 11
Microservices Architecture Series
AWS Kinesis Data Streams
AWS Kinesis Firehose
AWS Kinesis Data Analytics
Apache Flink - Analytics
The Connected Consumer – Real-time Customer 360Capgemini
With Business Data Lake technologies based on EMC’s Big Data portfolio, it becomes possible to move away from channel-specific analytics towards a 360 customer view.
This presentation will show how technologies like Spark, Hadoop, and Kafka help companies gain a real-time view of everything their customers do and make changes to customer touch points whether mobile, web, in-store, direct marketing or existing transactional systems.
Presented by Steve Jones, Vice President, Insights & Data, Capgemini at EMC World 2016
http://www.capgemini.com/emc
What’s New with Databricks Machine LearningDatabricks
In this session, the Databricks product team provides a deeper dive into the machine learning announcements. Join us for a detailed demo that gives you insights into the latest innovations that simplify the ML lifecycle — from preparing data, discovering features, and training and managing models in production.
Cloud-native Semantic Layer on Data LakeDatabricks
With larger volumes and more real-time data stored in the data lake, it becomes more complex to manage this data and serve analytics and applications. With different service interfaces, data calibers, and performance biases across different scenarios, business users begin to lose confidence in the quality and efficiency of getting insights from data.
Agile Big Data Analytics Development: An Architecture-Centric ApproachSoftServe
Presented at The Hawaii International Conference on System Sciences by Hong-Mei Chen and Rick Kazman (University of Hawaii), Serge Haziyev (SoftServe).
Enterprise Data World 2018 - Building Cloud Self-Service Analytical SolutionDmitry Anoshin
This session will cover building the modern data warehouse by migrating from a traditional DW platform into the cloud, using Amazon Redshift and the cloud ETL tool Matillion, in order to provide self-service BI for the business audience. This topic will cover the technical migration path of a DW with PL/SQL ETL to Amazon Redshift via Matillion ETL, with a detailed comparison of modern ETL tools. Moreover, this talk will focus on working backward through the process, i.e. starting from the business audience and the needs that drive changes in the old DW. Finally, this talk will cover the idea of self-service BI, and the author will share a step-by-step plan for building an efficient self-service environment using the modern BI platform Tableau.
An experience-sharing talk on the OpenStack deployment at Suning.com, a large online retailer in China. The talk presents the challenges and opportunities of orchestrating enterprise workloads using Heat.
Thomas Weise, Apache Apex PMC Member and Architect/Co-Founder, DataTorrent - ...Dataconomy Media
Thomas Weise, Apache Apex PMC Member and Architect/Co-Founder of DataTorrent presented "Streaming Analytics with Apache Apex" as part of the Big Data, Berlin v 8.0 meetup organised on the 14th of July 2016 at the WeWork headquarters.
Similar to How KeyBank Used Elastic to Build an Enterprise Monitoring Solution
An introduction to Elasticsearch's advanced relevance ranking toolboxElasticsearch
The hallmark of a great search experience is always delivering the most relevant results, quickly, to every user. The difficulty lies behind the scenes in making that happen elegantly and at scale. From App Search’s intuitive drag-and-drop interface to the advanced relevance capabilities built into the core of Elasticsearch, Elastic offers a range of tools for developers to tune relevance ranking and create incredible search experiences. In this session, we’ll explore some of Elasticsearch’s advanced relevance ranking features, such as dense vector fields, BM25F, ranking evaluation, and more. Plus we’ll give you some ideas for how these features are being used by other Elastic users to create world-class, category-defining search experiences.
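To make the dense-vector feature concrete, here is a minimal, hedged sketch using the official Elasticsearch Python client (8.x-style API): it maps a dense_vector field and ranks documents by cosine similarity with a script_score query. The index name, vector dimension, and documents are hypothetical.

```python
# Minimal sketch: index a dense_vector field and rank by cosine similarity
# with a script_score query. Index name, dims, and data are hypothetical.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.indices.create(index="articles", mappings={
    "properties": {
        "title": {"type": "text"},
        "embedding": {"type": "dense_vector", "dims": 3},
    }
})
es.index(index="articles", id="1",
         document={"title": "tuning relevance", "embedding": [0.2, 0.1, 0.9]},
         refresh=True)

query_vector = [0.1, 0.2, 0.8]
resp = es.search(index="articles", query={
    "script_score": {
        "query": {"match_all": {}},
        "script": {
            # cosineSimilarity ranges [-1, 1]; add 1.0 to keep scores positive
            "source": "cosineSimilarity(params.v, 'embedding') + 1.0",
            "params": {"v": query_vector},
        },
    }
})
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```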
Eze Castle Integration is a managed service provider (MSP), cloud service provider (CSP), and internet service provider (ISP) that delivers services to more than 1,000 clients around the world. Different departments within Eze Castle had devised their own log aggregation solutions in order to provide visibility, meet regulatory compliance requirements, conduct cybersecurity investigations, and help engineers troubleshoot infrastructure issues. In 2019, they partnered with Elastic to consolidate the data generated from different systems into a single pane of glass. And thanks to the ease of deployment on Elastic Cloud, professional consultation services from Elastic engineers, and on-demand training courses available on Elastic Learning, Eze Castle was able to go from proof of concept to a fully functioning "Eze Managed SIEM" product within a month!
Learn about Eze Castle's journey with Elastic and how they grew Eze Managed SIEM from zero to 100 customers in less than 14 months.
How to create great search experiences on websitesElasticsearch
Discover how easy it is to create rich, relevant search on public-facing websites that drives conversions, increases content consumption, and helps visitors find what they need. Take a tour of the Elastic tools you can leverage to easily transform your website, including our powerful new web crawler.
Welcome to a new way of searchingElasticsearch
Like most modern organizations, your teams probably use more than 10 cloud-based applications daily, but spend too much time searching across all of them for the information they need. Thanks to the built-in features of Elastic Workplace Search, see how simple it is to put relevant content at your teams' fingertips with unified search across all the applications they use to get their work done.
Get the most out of Elastic with Elastic CloudElasticsearch
Discover why Elastic Cloud is the ideal solution for taking advantage of everything Elastic offers. Benefit from purchasing and deployment flexibility on Google Cloud, Microsoft Azure, Amazon Web Services, or all three at once. Learn the advantages a managed service offering brings you, and find out which solution lets you manage it yourself with built-in automation and orchestration tools. And that's not all: get familiar with the features that can help you scale your operations as your deployment evolves, store your data cost-effectively, and optimize your searches, so you no longer have to abandon data and can get the actionable insights you need to run your business.
How to turn your data into actionable insightsElasticsearch
Discover strategic features of the Elastic Stack, including Elasticsearch, a data engine like no other, and Kibana, a true window into the Elastic Stack.
In this session, you will learn how to:
ingest data into the Elastic Stack;
store data;
analyze data;
act on data.
Dive into the heart of search in all its formsElasticsearch
Like most modern companies, your teams probably use more than 10 cloud-hosted applications every day, but also spend far too much time searching those tools for the information they need. With the out-of-the-box features of Elastic Workplace Search, discover how easy it is to put relevant content at your teams' fingertips with unified search across all the applications they use to do their work.
Modernising One Legal Se@rch with Elastic Enterprise Search [Customer Story]Elasticsearch
This session covers knowledge management needs in the legal sector, why Linklaters decided to move away from its legacy KM search engine, how Kin+Carta managed the migration process, and how the switch revitalised a well-established system and opened up new possibilities for its future development.
Like most modern organizations, your teams are likely using upwards of 10 cloud-based applications on a daily basis, but spending far too many hours a day searching for the information they need across all of them. With the out-of-the-box capabilities of Elastic Workplace Search, see how easy it is to put relevant content right at your teams’ fingertips with unified search across all the apps they rely on to get work done.
Building great website search experiencesElasticsearch
Discover how easy it is to create rich, relevant search on public facing websites that drives conversion, increases content consumption, and helps visitors find what they need. Get a tour of the Elastic tools you can leverage to easily transform your website, including our powerful new web crawler.
Keynote: Harnessing the power of Elasticsearch for simplified searchElasticsearch
Get an overview of the innovation Elastic is bringing to the Enterprise Search landscape, and learn how you can harness these capabilities across your technology landscape to make the power of search work for you.
How to transform data into insights for decision-makingElasticsearch
Discover the strategic feature areas of the Elastic Stack: Elasticsearch, an unrivalled data engine, and Kibana, the window into the Elastic Stack.
In this session we will cover:
Bringing data into the Elastic Stack
Storing data
Analyzing data
Acting on data
Explore takes on Big Data challenges with Elastic CloudElasticsearch
Specializing in the development and management of document-monitoring and business-intelligence solutions, Explore offers its clients a precise, organized view of market and project news across the territories where they operate. To make its offering more agile and performant, Explore chose Elastic Cloud hosted on Microsoft Azure. Discover how its production and development teams are now able to make better use of data for Explore's clients and save time on managing their infrastructure.
Transforming data into actionable insightsElasticsearch
Learn about the strategic feature areas of the Elastic Stack—Elasticsearch, a data engine like no other, and Kibana, the window into the Elastic Stack.
The session will cover:
Bringing data into the Elastic Stack
Storing data
Analyzing data
Acting on data
"Elastic enables the world’s leading organization to exceed their business objectives and power their mission-critical systems by eliminating data silos, connecting the dots, and transforming data of all types into actionable insights.
Come learn how the power of search can help you quickly surface relevant insights at scale. Whether you are an executive looking to reduce operational costs, a department head striving to do more with fewer tools, or engineer monitoring and protecting your IT environment, this session is for you. "
Empowering agencies using Elastic as a Service inside GovernmentElasticsearch
It has now been four years since the beta release of Elastic Cloud Enterprise which kicked off a wave of the Elastic public sector community running Elastic as a service within Government rather than utilizing purely hosted solutions. Fast forward to 2021 and we have multiple options for multiple mission needs. Learn top tips from Elastic architects and their experience enabling their teams with the automation and provisioning of Elastic tech to change the game in how government delivers solutions.
The opportunities and challenges of data for public goodElasticsearch
Data is an increasingly valuable resource for delivering economic and social benefit. Heather will discuss the challenges and opportunities, and how communities at all levels of the public sector can play a part in leading the change.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply applying machine learning to just any symbolic structure is not sufficient to really harvest the gains of NeSy. Those gains will only be realized when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure and operations point of view. Is it possible to apply our beloved cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and give you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front end. I have also seen, many times, how developers implement features on the front end just by following the standard rules of a framework, thinking that this is enough to successfully launch the project, and then the project fails. How can this be prevented, and which approach should you choose? I have launched dozens of complex projects, and during this talk we will analyze which approaches have worked for me and which have not.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead
How KeyBank Used Elastic to Build an Enterprise Monitoring Solution
1. Mick Miller
Senior Product Manager, Cloud Native
KeyBank
mick_miller@keybank.com
@mickmill
Lessons from the trenches
How KeyBank used Elastic to build an enterprise monitoring solution
4. Business and technology problems
21+ monitoring systems resulted in:
• Slow MTTR, great MTTB
• Difficulty in identifying the root cause of problems
• Poor mobile application satisfaction scores
• Poor branch workstation performance metrics
6. Enterprise Monitoring transformation timeline
The floodgates open: 2017–2018 architecture, 2018 pilot/prod, 2019 scale out.
Late 2017:
• All new monitoring stopped
• Log storage cost and stability issues
• Huge backlog of monitoring requests
Early 2018:
• Critical situation with eastern WA branches
• No visibility into root cause
• Elasticsearch dev environment functional
• Deployed Metricbeat and Winlogbeat
The floodgates opened! Two days later...
• Root causes identified
• Remediation project launched
8. Pathway to production cleared
First production cluster was deployed mid-2018.
Funding provided by retiring 5 of the 21 existing monitoring systems
• More systems retired in 2018 and 2019, with plans for more in 2020
Estimated savings over $5M
• No accurate way to estimate savings from eliminating all the different support and skill sets required to support 21 systems
Urgency: the cluster immediately started to experience performance issues; tuning and scaling became the highest priority.
9. Lessons from KeyBank's monitoring transformation
1. Cluster size estimations
2. Architecture and design approaches
3. Automation
4. Feedback loops and monitoring
5. Unexpected business value
10. Cluster size estimation
Bad news: it's nearly impossible to precisely predict your workload
• Index growth, data ingestion, query usage, etc.
• Expect to iterate three or more times
• You’re always wrong the first time
• Elasticsearch is not an RDBMS and does not scale like a datastore
"There’s no way anyone can tell you how to design a perfect cluster, and those who claim they do are liars." —Fred de Villamil, Elasticsearch for fun and profit
Good news: it is possible to design for growth
• Start small and isolated; KeyBank used a dedicated eight-server hyper-converged cluster
• Plan on scaling out and up, with a preference to scale out
• Automation is the key to scaling out and up without outages
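Even though precise prediction is impossible, a rough capacity model gives the first iteration a starting point. The sketch below is a hedged back-of-the-envelope calculation in Python; the 1.5 TB/day ingest rate comes from the scale-out slide later in this deck, while the replication factor, retention window, indexing overhead, and target shard size are illustrative assumptions.

```python
# Back-of-the-envelope cluster sizing: a starting point, not a prediction.
# Numbers besides the 1.5 TB/day ingest rate are illustrative assumptions.
daily_ingest_tb = 1.5          # raw ingest per day (from the deck)
replicas = 1                   # one replica per primary shard (assumption)
retention_days = 90            # retention target (assumption)
target_shard_gb = 50           # common guidance: shards in the tens of GB
overhead = 1.15                # indexing overhead vs. raw size (assumption)

stored_tb_per_day = daily_ingest_tb * overhead * (1 + replicas)
total_stored_tb = stored_tb_per_day * retention_days
shards_per_day = (daily_ingest_tb * overhead * 1024) / target_shard_gb

print(f"storage/day (with replicas): {stored_tb_per_day:.2f} TB")
print(f"storage for {retention_days} days: {total_stored_tb:.1f} TB")
print(f"primary shards/day at {target_shard_gb} GB each: {shards_per_day:.1f}")
```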
18. Current compute node specs

Type               OS        Heap (GB)   RAM (GB)   Disk (GB)   CPU
Data node          RHEL 7.x  30          80         8000        16
Master node        RHEL 7.x  30          64         40          2
Coordinator node   RHEL 7.x  30          64         40          8

Initial lessons learned:
• Keep heap around 30 GB
• 16 cores are enough
• Monitor garbage collection
• VMs on hyper-converged systems are not great for data nodes
• Hyper-converged nodes are expensive to scale
• Low-cost physical; more compute for your $
19. Current issues and next iteration goals
Pain points with current design:
• Hyper-converged systems / hypervisor disk I/O too high
• Hypervisor VM replication wastes disk space and drive writes
• Logstash ingestion pipeline fragile
Next iteration design goals:
• Move data nodes to 49 physical servers; keep master and coordinator nodes on hyper-converged systems
• Move Logstash ingestion pipelines to containers for isolation and scale on demand
"Unless you plan to use Elasticsearch to power the search of your blog or a small e-commerce website, your first design will fail." —Fred de Villamil, Elasticsearch for fun and profit
20. Next iteration cluster design: massive scale out
Ingestion ≅ 1.5TB/day
49 physical servers
14 VMs on HX
3TB RAM
638 cores
Hot nodes: 14TB
Warm nodes: 228TB
Cold nodes: 182.3TB
Total cluster: 453TB
21. Lessons from KeyBank's monitoring transformation
1. Cluster size estimations
2. Architecture and design approaches
3. Automation
4. Feedback loops and monitoring
5. Unexpected business value
22. Architecture and design approaches
• Independently scalable tiers
• Loosely coupled tiers
• High availability
• Design to maintain the environment during business hours with no outages
24. How to scale
Independently scalable tiers allow for surgical scaling.
HA tiers allow for production live updates.
Physical vs. virtual:
• Virtual → Kafka, Elasticsearch master and coordinator nodes
• Physical → Elasticsearch data nodes
• Containerize Logstash streams for pipeline isolation and on-demand scaling
25. When to scale
Measure ingestion-to-indexed performance (SLAs):
• KeyBank's SLA is under 0.5 sec from inbound raw Kafka topic to indexed in Elasticsearch
• Backlog item: create a Kibana dashboard and alerting
• Ingestion SLAs are only part of the metric; also make sure Kafka output topics are near empty all the time
• When end-to-end indexing speed exceeds SLAs, it's time to tune or scale out
Measure shards per data node:
• When these get too high (approx. 300), it's time to tune or scale out
Monitor query execution times:
• Possible scale-out feedback
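The shard-count check above is straightforward to script against the cluster APIs. Here is a minimal, hedged sketch using the official Elasticsearch Python client; the roughly-300-shards-per-node threshold comes from the slide, while the endpoint and the alerting action are assumptions.

```python
# Minimal sketch: flag data nodes whose shard count nears the ~300/node
# threshold from the slide. Endpoint and alert handling are assumptions.
from collections import Counter
from elasticsearch import Elasticsearch

SHARDS_PER_NODE_LIMIT = 300

es = Elasticsearch("http://localhost:9200")

# _cat/shards lists every shard with the node currently hosting it.
shards = es.cat.shards(format="json", h="node,state")
per_node = Counter(s["node"] for s in shards if s["state"] == "STARTED")

for node, count in sorted(per_node.items(), key=lambda kv: -kv[1]):
    status = "TUNE/SCALE OUT" if count >= SHARDS_PER_NODE_LIMIT else "ok"
    print(f"{node}: {count} shards [{status}]")
```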
26. Lessons from KeyBank's monitoring transformation
1. Cluster size estimations
2. Architecture and design approaches
3. Automation
4. Feedback loops and monitoring
5. Unexpected business value
27. Automate everything
Core principle: Infrastructure as Code (IaC)
Automation platform: Ansible and AWX
Source control: Bitbucket and simplified gitflow
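To illustrate how the IaC pieces can fit together, here is a hedged sketch of launching an Ansible job template through the AWX REST API from Python. The host, job template ID, and token are placeholders, and the endpoint should be verified against your AWX version.

```python
# Hedged sketch: launch an Ansible job template via the AWX REST API.
# Host, template id, and token are placeholders; verify against your AWX.
import requests

AWX_HOST = "https://awx.example.com"                 # placeholder
TEMPLATE_ID = 42                                     # placeholder template id
HEADERS = {"Authorization": "Bearer <oauth-token>"}  # placeholder token

# POSTing to the launch endpoint starts a run of the job template.
resp = requests.post(
    f"{AWX_HOST}/api/v2/job_templates/{TEMPLATE_ID}/launch/",
    headers=HEADERS,
    timeout=30,
)
resp.raise_for_status()
job = resp.json()
print("launched job", job["job"], "status:", job.get("status"))
```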
29. Lessons from KeyBank's monitoring transformation
1. Cluster size estimations
2. Architecture and design approaches
3. Automation
4. Feedback loops and monitoring
5. Unexpected business value
30. Feedback loops and monitoring
Important metrics for monitoring:
• Number of shards per node: a high shard-per-node count is a leading indicator of degraded system performance
• Disk I/O per second
• Heap size
• Excessive garbage collection
• CPU utilization
• Ingestion SLA (time from Kafka raw topic to fully indexed)
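Most of these node-level signals are exposed by Elasticsearch's nodes stats API. Below is a minimal, hedged sketch of polling heap, garbage collection, and CPU with the official Python client; the alert thresholds are illustrative, not KeyBank's actual levels.

```python
# Minimal sketch: poll heap, GC, and CPU per node via the nodes stats API.
# Thresholds are illustrative, not KeyBank's actual alert levels.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
stats = es.nodes.stats(metric="jvm,os")

for node_id, node in stats["nodes"].items():
    name = node["name"]
    heap_pct = node["jvm"]["mem"]["heap_used_percent"]
    cpu_pct = node["os"]["cpu"]["percent"]
    old_gc = node["jvm"]["gc"]["collectors"]["old"]["collection_count"]
    flags = []
    if heap_pct > 85:
        flags.append("heap pressure")
    if cpu_pct > 90:
        flags.append("cpu saturation")
    print(f"{name}: heap {heap_pct}%, cpu {cpu_pct}%, "
          f"old-gen GCs {old_gc} {flags or ''}")
```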
31. Lessons from KeyBank's monitoring transformation
1. Cluster size estimations
2. Architecture and design approaches
3. Automation
4. Feedback loops and monitoring
5. Unexpected business value
32. Unexpected business insight through integrations
Initial focus on enterprise monitoring data led to integrations with:
• Database
• ITSM
• Collaboration
• Development pipeline