Bangalore Meetup - Enable realtime machine learning with streaming data
Christina Lin
Enable realtime machine learning with streaming data
Technology
Recommended
The slide deck that accompanies the workshop.
- Transitioning to real-time data systems: how to move from traditional batch processing to real-time data handling, emphasizing the importance of timely updates in scenarios such as air traffic control, where delays can be hazardous.
- Implementing real-time data solutions: participants learn to implement and integrate a technology stack featuring Apache Spark, Cassandra, and PostgreSQL to manage and process live data feeds from various sources in high-stakes environments.
- Streaming and Change Data Capture: how to use streaming platforms and CDC tools such as Redpanda and Debezium to ensure continuous data flow and immediate capture of updates.
- Data querying: the workshop concludes with advanced querying using Flink's SQL client, incorporating real-time queries into existing systems and integrating with relational and NoSQL databases.
Building Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Christina Lin
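As a loose illustration of the Change Data Capture pattern this workshop covers, the sketch below materializes a stream of change events into an in-memory table. The envelope fields `op`, `before`, and `after` follow Debezium's event convention; the sample rows and the `apply_cdc_event` helper are invented for this sketch and are not from the slides.

```python
# Minimal sketch: applying Debezium-style CDC events to an in-memory table.
# The op codes ("c" create, "r" snapshot read, "u" update, "d" delete) follow
# Debezium's envelope; the data below is made up for illustration.

def apply_cdc_event(table: dict, event: dict) -> None:
    """Apply one change event to a table keyed by the primary key 'id'."""
    op = event["op"]
    if op in ("c", "r", "u"):       # insert, snapshot read, or update
        row = event["after"]
        table[row["id"]] = row
    elif op == "d":                 # delete: key comes from the old row image
        table.pop(event["before"]["id"], None)

events = [
    {"op": "c", "before": None, "after": {"id": 1, "city": "Bangalore"}},
    {"op": "u", "before": {"id": 1, "city": "Bangalore"},
     "after": {"id": 1, "city": "Bengaluru"}},
    {"op": "c", "before": None, "after": {"id": 2, "city": "Pune"}},
    {"op": "d", "before": {"id": 2, "city": "Pune"}, "after": None},
]

table: dict = {}
for e in events:
    apply_cdc_event(table, e)

print(table)  # {1: {'id': 1, 'city': 'Bengaluru'}}
```

In a real pipeline the loop body would consume events from a broker such as Redpanda and write to a downstream store instead of a dict, but the replay logic is the same.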
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...
Christina Lin
Revolutionizing the customer experience - Hello Engagement Database
Dipti Borkar
Example architectures for AWS Data solutions
Delivering business insights and automation utilizing aws data services
Bhuvaneshwaran R
With all the hype around Cloud and SDN, business decision makers are finding themselves trying to navigate through many new concepts and consequently needing to change the way they have traditionally selected their IT infrastructure. Technologies are now becoming more integrated and it is more important than ever to help your business be agile enough to keep up with the demands of your users and your customers. Come hear from Lisa Guess to learn how organizations can embrace Cloud technologies such as automation, SDN and Orchestration platforms to help you build next-generation networks.
Lisa Guess - Embracing the Cloud
centralohioissa
Cardinality-HL-Overview
Harry Frost
Choosing the Right Database: Exploring MySQL Alternatives for Modern Applications by Bhanu Jamwal, Head of Solution Engineering, PingCAP at the Mydbops Opensource Database Meetup 14. This presentation discusses the challenges in choosing the right database for modern applications, focusing on MySQL alternatives. It highlights the growth of new applications, the need to improve infrastructure, and the rise of cloud-native architecture. The presentation explores alternatives to MySQL, such as MySQL forks, database clustering, and distributed SQL. It introduces TiDB as a distributed SQL database for modern applications, highlighting its features and top use cases. Case studies of companies benefiting from TiDB are included. The presentation also outlines TiDB's product roadmap, detailing upcoming features and enhancements.
Choosing the Right Database: Exploring MySQL Alternatives for Modern Applicat...
Mydbops
WASP is a framework for developing big data pipelines, combining streaming analytics, multi-model storage, and machine learning models, all in real time.
Wasp2 - IoT and Streaming Platform
Paolo Platter
The Briefing Room with Dr. Robin Bloor and Teradata RainStor. Live webcast, October 13, 2015. Watch the archive: https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=012bb2c290097165911872b1f241531d Hadoop data lakes are emerging as peers to corporate data warehouses. However, successful data management solutions require a fusion of all relevant data, new and old, which has proven challenging for many companies. With a data lake that's been optimized for fast queries, solid governance, and lifecycle management, users can take data management to a whole new level. Register for this episode of The Briefing Room to learn from veteran analyst Dr. Robin Bloor as he discusses the relevance of data lakes in today's information landscape. He'll be briefed by Mark Cusack of Teradata, who will explain how his company's archiving solution has developed into a storage point for raw data. He'll show how the proven compression, scalability, and governance of Teradata RainStor combined with Hadoop can enable an optimized data lake that serves as both a reservoir for historical data and a "system of record" for the enterprise. Visit InsideAnalysis.com for more information.
First in Class: Optimizing the Data Lake for Tighter Integration
Inside Analysis
Presented at the MariaDB Roadshow Wien (Vienna) 2017
Keynote: Open Source für den geschäftskritischen Einsatz
MariaDB plc
In this presentation, Logitech presents their strategy on how organizations can become more agile and increase time-to-delivery. Logitech's cloud deployment of data virtualization has empowered their IT organization to redefine the way data services are produced and delivered for analytics. This presentation is part of the Fast Data Strategy Conference, and you can watch the video here goo.gl/PkObRj.
Data Virtualization in the Cloud – Accelerating Time-to-Value
Denodo
Slides for the talk given on 20 July 2019 at Nairobi JVM, about building data pipelines with Apache Kafka as a message broker or enterprise bus and Apache Spark as a distributed computing engine for processing large volumes of data efficiently.
Building scalable data with kafka and spark
babatunde ekemode
Discover how to avoid common pitfalls when shifting to an event-driven architecture (EDA) in order to boost system recovery and scalability. We cover Kafka Schema Registry, in-broker transformations, event sourcing, and more.
Event-Driven Architecture Masterclass: Engineering a Robust, High-performance...
ScyllaDB
Amazon Elastic MapReduce (Amazon EMR) makes it easy to provision and manage Hadoop in the AWS Cloud. Hadoop is available in multiple distributions, and Amazon EMR gives you the option of using the Amazon Distribution or the MapR Distribution for Hadoop. This webinar shows examples of how to use Amazon EMR with the MapR Distribution for Hadoop. You will learn how you can free yourself from the heavy lifting required to run Hadoop on-premises, and gain the advantages of using the cloud to increase flexibility and accelerate projects while lowering costs. What we'll learn:
- A live demonstration of how you can quickly and easily launch your first Hadoop cluster in a few steps
- Examples of real-world applications and customer successes in production
- Best practices for maximizing the benefits of using MapR with AWS
AWS Partner Webcast - Hadoop in the Cloud: Unlocking the Potential of Big Dat...
Amazon Web Services
AWS Partner Presentation - Datapipe - Deploying Hybrid IT, AWS Summit 2012 - NYC
Amazon Web Services
SimplifyStreamingArchitecture
Maheedhar Gunturu
Neutron Done the SDN Way Dragonflow is an open source distributed control plane implementation of Neutron which is an integral part of OpenStack. Dragonflow introduces innovative solutions and features to implement networking and distributed network services in a manner that is both lightweight and simple to extend, yet targeted towards performance-intensive and latency-sensitive applications. Dragonflow aims at solving the performance
Dragonflow Austin Summit Talk
Eran Gampel
Supporting Hadoop in containers takes much more than the very primitive support Docker provides with its storage plugin. A production-scale Hadoop deployment inside containers needs to honor affinity/anti-affinity, fault-domain, and data-locality policies. Kubernetes alone, with primitives such as StatefulSets and PersistentVolumeClaims, is not sufficient to support a complex, data-heavy application such as Hadoop. One needs to think about this problem more holistically across the container, networking, and storage stacks. Also, constructs around deployment, scaling, upgrades, etc. in traditional orchestration platforms are designed for applications that have adopted a microservices philosophy, which doesn't fit most Big Data applications across the ingest, store, process, serve, and visualization stages of the pipeline. Come to this technical session to learn how to run and manage the lifecycle of containerized Hadoop and other applications in the data analytics pipeline efficiently and effectively, far beyond simple container orchestration. #BigData, #NoSQL, #Hortonworks, #Cloudera, #Kafka, #Tensorflow, #Cassandra, #MongoDB, #Kudu, #Hive, #HBase. Partha Seetala, CTO, Robin Systems.
Containerized Hadoop beyond Kubernetes
DataWorks Summit
Machine learning (ML) and artificial intelligence (AI) enable intelligent processes that can autonomously make decisions in real-time. The real challenge for effective ML and AI is getting all relevant data to a converged data platform in real-time, where it can be processed using modern technologies and integrated into any downstream systems.
Enabling Real-Time Business with Change Data Capture
MapR Technologies
In this presentation we show best practices for implementing open-source Big Data using HDInsight.
Ai tour 2019 Mejores Practicas en Entornos de Produccion Big Data Open Source...
nnakasone
Cloudera booth talk on data in motion, presented by Tim Spann in Seattle, April 2024. Event agenda: https://www.fbcinc.com/e/nlit/agenda.aspx
April 2024 - NLIT Cloudera Real-Time LLM Streaming 2024
Timothy Spann
Managing 3.8 million e-prescriptions daily for more than 1 million healthcare professionals is no small feat. And, with rapid growth in the number of digital transactions and expansion of its network, Surescripts needed to replace its legacy relational database system to address a new set of data management challenges while meeting their customers’ demanding SLAs. Join us for this on-demand webinar to hear from Keith Willard, Chief Architect at Surescripts, to learn how and why Surescripts leverages DataStax Enterprise to deliver enhanced message processing at scale. View recording: https://youtu.be/1T6V1XAoaJQ Explore all DataStax webinars: https://www.datastax.com/resources/webinars
Webinar - Delivering Enhanced Message Processing at Scale With an Always-on D...
DataStax
Enterprise Architecture (EA) provides a visual blueprint of the organization, and shows key interrelationships between data, process, applications, and more. By abstracting these assets in a graphical view, it’s possible to see key interrelationships, particularly as they relate to data and its business impact across the organization. Join us for a discussion on how data architecture is a key component of an overall enterprise architecture for enhanced business value and success.
Slides: Enterprise Architecture vs. Data Architecture
DATAVERSITY
Born in HP Labs, EsgynDB is an open-source all-in-one SQL database engine for Hadoop. It is the only platform that provides mixed workloads (real-time operational reporting, analytics, and transactions) with high performance and concurrency. EsgynDB gives you the ease and security of a traditional (R)DBMS and the cost-effective scale of Hadoop.
EsgynDB: A Big Data Engine. Simplifying Fast and Reliable Mixed Workloads
Srikanth Ramakrishnan
The story of building the Big Data Platform at Equinix to cater to a number of use cases. It explains the journey and the selection of Cassandra as the NoSQL solution sitting at the heart of the platform, with Storm, Flume, AMQ, Drools, and Solr playing important roles. The platform processes large amounts of data in real time.
Equinix Big Data Platform and Cassandra - A view into the journey
Praveen Kumar
Your team is always under pressure to accelerate the adoption of the most modern and powerful technologies. Simultaneously, your existing investments, such as IBM i, your organization's most critical data asset, remain in a silo. The only practical path forward is to connect the new and the existing with a streaming technology like Apache Kafka to feed real-time applications that power use cases ranging from marketing and order replenishment to fraud detection. Join this Precisely webinar to learn how to unlock the potential of your IBM i data by creating data pipelines that integrate, transform, and deliver it to users when and where they need it. Additionally, hear how Stark Denmark uses Precisely Connect CDC to provide data to their organization in real time. Join this webinar to:
- Understand the benefits and challenges of building data pipelines that access and integrate data from IBM i systems to modern data platforms
- Learn how Precisely can help you build real-time data pipelines
- Hear from Stark Denmark on how they are using Connect CDC from Precisely and the benefits they are getting
Streaming IBM i to Kafka for Next-Gen Use Cases
Precisely
As many of our customers have come to learn, integrating legacy data into a modern data architecture is easier said than done! View this on-demand webinar to learn all about Precisely's seamless data integration solutions and how they have helped thousands of customers like you trust their data. Learn about the two flavors of Precisely's Connect:
- Collect, prepare, transform, and load your data to various targets using Connect ETL, with the flexibility of using clusters and running in many different environments. With our 'design once, deploy anywhere' feature, what is built on-prem today can run on a cloud platform tomorrow with no development or mainframe expertise required.
- Capture data changes in real time with no coding, tuning, or performance impact using Connect CDC, replicating exactly WHAT you need and HOW you need it with over 80 built-in data transformation methods.
Seamless, Real-Time Data Integration with Connect
Precisely
Deck delivered at Architect Council events in November and December of 2008
Azure Services Platform
David Chou
Kafka Summit APAC session
Christina Lin
A very quick overview of the components of serverless integration using Kubernetes, Knative, Kafka, and Camel K.
Serverless integration anatomy
Christina Lin
More Related Content
Similar to Bangalore Meetup - Enable realtime machine learning with streaming data
The Briefing Room with Dr. Robin Bloor and Teradata RainStor Live Webcast October 13, 2015 Watch the archive: https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=012bb2c290097165911872b1f241531d Hadoop data lakes are emerging as peers to corporate data warehouses. However, successful data management solutions require a fusion of all relevant data, new and old, which has proven challenging for many companies. With a data lake that’s been optimized for fast queries, solid governance and lifecycle management, users can take data management to a whole new level. Register for this episode of The Briefing Room to learn from veteran Analyst Dr. Robin Bloor as he discusses the relevance of data lakes in today’s information landscape. He’ll be briefed by Mark Cusack of Teradata, who will explain how his company’s archiving solution has developed into a storage point for raw data. He’ll show how the proven compression, scalability and governance of Teradata RainStor combined with Hadoop can enable an optimized data lake that serves as both reservoir for historical data and as a "system of record” for the enterprise. Visit InsideAnalysis.com for more information.
First in Class: Optimizing the Data Lake for Tighter Integration
First in Class: Optimizing the Data Lake for Tighter Integration
Inside Analysis
Presented at the MariaDB Roadshow Wien (Vienna) 2017
Keynote: Open Source für den geschäftskritischen Einsatz
Keynote: Open Source für den geschäftskritischen Einsatz
MariaDB plc
In this presentation, Logitech presents their strategy on how organizations can become more agile and increase time-to-delivery. Logitech's cloud deployment of data virtualization has empowered their IT organization to redefine the way data services are produced and delivered for analytics. This presentation is part of the Fast Data Strategy Conference, and you can watch the video here goo.gl/PkObRj.
Data Virtualization in the Cloud – Accelerating Time-to-Value
Data Virtualization in the Cloud – Accelerating Time-to-Value
Denodo
Slides for the talk given on 20-07-2019 at Nairobi JVM. It was a talk about building data pipelines with Apache Kafka as a message broker or enterprise bus and Apache spark as a distributed computing engine that enables processing of large volume of data efficiently.
Building scalable data with kafka and spark
Building scalable data with kafka and spark
babatunde ekemode
Discover how to avoid common pitfalls when shifting to an event-driven architecture (EDA) in order to boost system recovery and scalability. We cover Kafka Schema Registry, in-broker transformations, event sourcing, and more.
Event-Driven Architecture Masterclass: Engineering a Robust, High-performance...
Event-Driven Architecture Masterclass: Engineering a Robust, High-performance...
ScyllaDB
Amazon Elastic MapReduce (Amazon EMR) makes it easy to provision and manage Hadoop in the AWS Cloud. Hadoop is available in multiple distributions and Amazon EMR gives you the option of using the Amazon Distribution or the MapR Distribution for Hadoop. This webinar will show you examples of how to use Amazon EMR to with the MapR Distribution for Hadoop. You will learn how you can free yourself from the heavy lifting required to run Hadoop on-premises, and gain the advantages of using the cloud to increase flexibility and accelerate projects while lowering costs. What we'll learn: • See a live demonstration of how you can quickly and easily launch your first Hadoop cluster in a few steps. • Examples of real world applications and customer successes in production • Best practices for maximizing the benefits of using MapR with AWS.
AWS Partner Webcast - Hadoop in the Cloud: Unlocking the Potential of Big Dat...
AWS Partner Webcast - Hadoop in the Cloud: Unlocking the Potential of Big Dat...
Amazon Web Services
AWS Partner Presentation - Datapipe - Deploying Hybrid IT, AWS Summit 2012 - NYC
AWS Partner Presentation - Datapipe - Deploying Hybrid IT, AWS Summit 2012 - NYC
Amazon Web Services
SimplifyStreamingArchitecture
SimplifyStreamingArchitecture
Maheedhar Gunturu
Neutron Done the SDN Way Dragonflow is an open source distributed control plane implementation of Neutron which is an integral part of OpenStack. Dragonflow introduces innovative solutions and features to implement networking and distributed network services in a manner that is both lightweight and simple to extend, yet targeted towards performance-intensive and latency-sensitive applications. Dragonflow aims at solving the performance
Dragonflow Austin Summit Talk
Dragonflow Austin Summit Talk
Eran Gampel
Supporting Hadoop in containers takes much more than the very primitive support Docker provides using the Storage Plugin. A production scale Hadoop deployment inside containers needs to honor anti/affinity, fault-domain and data-locality policies. Kubernetes alone, with primitives such as StatefulSets and PersitentVolumeClaims, is not sufficient to support a complex data-heavy application such as Hadoop. One needs to think about this problem more holistically across containers, networking and storage stacks. Also, constructs around deployment, scaling, upgrade etc in traditional orchestration platforms is designed for applications that have adopted a microservices philosophy, which doesn't fit most Big Data applications across the ingest, store, process, serve and visualization stages of the pipeline. Come to this technical session to learn how to run and manage lifecycle of containerized Hadoop and other applications in the data analytics pipeline efficiently and effectively, far and beyond simple container orchestration. #BigData, #NoSQL, #Hortonworks, #Cloudera, #Kafka, #Tensorflow, #Cassandra, #MongoDB, #Kudu, #Hive, #HBase, PARTHA SEETALA, CTO, Robin Systems.
Containerized Hadoop beyond Kubernetes
Containerized Hadoop beyond Kubernetes
DataWorks Summit
Machine learning (ML) and artificial intelligence (AI) enable intelligent processes that can autonomously make decisions in real-time. The real challenge for effective ML and AI is getting all relevant data to a converged data platform in real-time, where it can be processed using modern technologies and integrated into any downstream systems.
Enabling Real-Time Business with Change Data Capture
Enabling Real-Time Business with Change Data Capture
MapR Technologies
En esta presentacion mostramos las buenas practicas en la implementacion de Big Data Open Source usando HDInsight
Ai tour 2019 Mejores Practicas en Entornos de Produccion Big Data Open Source...
Ai tour 2019 Mejores Practicas en Entornos de Produccion Big Data Open Source...
nnakasone
https://www.fbcinc.com/e/nlit/agenda.aspx Cloudera booth data in motion tim spann seattle April 2024
April 2024 - NLIT Cloudera Real-Time LLM Streaming 2024
April 2024 - NLIT Cloudera Real-Time LLM Streaming 2024
Timothy Spann
Managing 3.8 million e-prescriptions daily for more than 1 million healthcare professionals is no small feat. And, with rapid growth in the number of digital transactions and expansion of its network, Surescripts needed to replace its legacy relational database system to address a new set of data management challenges while meeting their customers’ demanding SLAs. Join us for this on-demand webinar to hear from Keith Willard, Chief Architect at Surescripts, to learn how and why Surescripts leverages DataStax Enterprise to deliver enhanced message processing at scale. View recording: https://youtu.be/1T6V1XAoaJQ Explore all DataStax webinars: https://www.datastax.com/resources/webinars
Webinar - Delivering Enhanced Message Processing at Scale With an Always-on D...
Webinar - Delivering Enhanced Message Processing at Scale With an Always-on D...
DataStax
Enterprise Architecture (EA) provides a visual blueprint of the organization, and shows key interrelationships between data, process, applications, and more. By abstracting these assets in a graphical view, it’s possible to see key interrelationships, particularly as they relate to data and its business impact across the organization. Join us for a discussion on how data architecture is a key component of an overall enterprise architecture for enhanced business value and success.
Slides: Enterprise Architecture vs. Data Architecture
Slides: Enterprise Architecture vs. Data Architecture
DATAVERSITY
Born in HP Labs, EsgynDB is an open sourced All-in-One SQL DB Engine for Hadoop. It is the only platform which provides mixed workloads (Real-Time Operational Reporting, Analytics and Transactions) with high performance and concurrency. EsgynDB gives you the ease and security of traditional (R)DBMS and the cost-effective scale of Hadoop.
EsgynDB: A Big Data Engine. Simplifying Fast and Reliable Mixed Workloads
EsgynDB: A Big Data Engine. Simplifying Fast and Reliable Mixed Workloads
Srikanth Ramakrishnan
Story of building Big Data Platform in Equinix to cater a number of use cases. It explains journey and selection of Cassandra for NoSQL solution sitting in the heart of the platform. Storm , flume, AMQ, Drools, Solr technologies playing an important role in the platform. Platform processing large amounts of data in real-time.
Equinix Big Data Platform and Cassandra - A view into the journey
Equinix Big Data Platform and Cassandra - A view into the journey
Praveen Kumar
Your team is always under pressure to accelerate the adoption of the most modern and powerful technologies. Simultaneously, your existing investments, such as IBM i, your organization’s most critical data asset, remain in a silo. The only practical path forward is to connect the new and existing with a streaming technology like Apache Kafka to feed real-time applications that power use cases ranging from marketing and order replenishment to fraud detection. Join this Precisely webinar to learn how to unlock the potential of your IBM i data by creating data pipelines that integrate, transform, and deliver it to users when and where they need it. Additionally, hear how Stark Denmark, uses Precisely Connect CDC to provide data to their organization in real-time. Join this webinar to: - Understand the benefits and challenges of building data pipelines that access and integrate data from IBM i systems to modern data platforms - Learn how Precisely can help you build real-time data pipelines - Hear from Stark Denmark on how they are using Connect CDC from Precisely and the benefits they are getting
Streaming IBM i to Kafka for Next-Gen Use Cases
Precisely
As many of our customers have come to learn, integrating legacy data into a modern data architecture is easier said than done! View this on-demand webinar to learn about Precisely's seamless data integration solutions and how they have helped thousands of customers like you trust their data. Learn about the two flavors of Precisely Connect:
• Collect, prepare, transform, and load your data to various targets using Connect ETL, with the flexibility of using clusters and running in many different environments. With the 'design once, deploy anywhere' feature, what is built on premises today can run on a cloud platform tomorrow, with no development or mainframe expertise required.
• Capture data changes in real time with no coding, tuning, or performance impact using Connect CDC, replicating exactly WHAT you need and HOW you need it with over 80 built-in data transformation methods.
Seamless, Real-Time Data Integration with Connect
Precisely
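The Change Data Capture idea behind the two webinars above can be sketched in a few lines: each change committed on the source system is captured as an event (operation, key, row) and replayed in order against a replica. A minimal, self-contained illustration, assuming a simple event shape of my own invention (real CDC tools such as Connect CDC or Debezium define their own richer event formats):

```python
# Hypothetical sketch of Change Data Capture (CDC) replication: each source
# change becomes an event, and events are replayed in commit order against
# a replica. A plain list stands in for the change stream here.

def apply_cdc_event(replica: dict, event: dict) -> None:
    """Apply a single insert/update/delete change event to a replica table."""
    op, key = event["op"], event["key"]
    if op in ("insert", "update"):
        replica[key] = event["row"]       # upsert the full row image
    elif op == "delete":
        replica.pop(key, None)            # drop the row if present

# A captured stream of changes, in commit order.
events = [
    {"op": "insert", "key": 1, "row": {"item": "widget", "qty": 5}},
    {"op": "update", "key": 1, "row": {"item": "widget", "qty": 7}},
    {"op": "insert", "key": 2, "row": {"item": "gadget", "qty": 1}},
    {"op": "delete", "key": 2},
]

replica = {}
for event in events:
    apply_cdc_event(replica, event)

print(replica)  # {1: {'item': 'widget', 'qty': 7}}
```

Because the replica is rebuilt purely from ordered events, the same stream can feed a relational target, a cache, or a Kafka topic without touching the source again, which is the property these CDC products rely on.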
Deck delivered at Architect Council events in November and December of 2008
Azure Services Platform
David Chou
Similar to Bangalore Meetup - Enable realtime machine learning with streaming data
First in Class: Optimizing the Data Lake for Tighter Integration
Keynote: Open Source für den geschäftskritischen Einsatz
Data Virtualization in the Cloud – Accelerating Time-to-Value
Building scalable data with kafka and spark
Event-Driven Architecture Masterclass: Engineering a Robust, High-performance...
AWS Partner Webcast - Hadoop in the Cloud: Unlocking the Potential of Big Dat...
AWS Partner Presentation - Datapipe - Deploying Hybrid IT, AWS Summit 2012 - NYC
SimplifyStreamingArchitecture
Dragonflow Austin Summit Talk
Containerized Hadoop beyond Kubernetes
Enabling Real-Time Business with Change Data Capture
Ai tour 2019 Mejores Practicas en Entornos de Produccion Big Data Open Source...
April 2024 - NLIT Cloudera Real-Time LLM Streaming 2024
Webinar - Delivering Enhanced Message Processing at Scale With an Always-on D...
Slides: Enterprise Architecture vs. Data Architecture
EsgynDB: A Big Data Engine. Simplifying Fast and Reliable Mixed Workloads
Equinix Big Data Platform and Cassandra - A view into the journey
Streaming IBM i to Kafka for Next-Gen Use Cases
Seamless, Real-Time Data Integration with Connect
Azure Services Platform
More from Christina Lin
Kafka summit apac session
Christina Lin
A very quick overview of the components in serverless integration using Kubernetes, Knative, Kafka, and Camel K.
Serverless integration anatomy
Christina Lin
Day in the life event-driven workshop
Christina Lin
My slides for ApacheCon
Agile integration cloud native development
Christina Lin
My slides from DevConf.in
DevConf.in - Cloud native reference architecture (advanced)
Christina Lin
My Camel K talk for the Taiwan Java User Group
Camel K Taiwan Java User Group
Christina Lin
My slides for API-centric microservices architecture at #Devoxxma
Devoxxma - API centric microservices Architecture
Christina Lin
JBoss Fuse gives you the flexibility to deploy your Camel application on the two most popular Java container standards, OSGi and Java EE. This workshop walks you through developing your application on JBoss EAP.
JBoss Fuse - Fuse workshop EAP container
Christina Lin
Red Hat Tech Exchange. Integration often involves storing, retrieving, and transforming data. Using a traditional database in your integration is likely to become a bottleneck that is expensive and hard to scale. By combining agile, lightweight, Enterprise Integration Pattern (EIP) based integrations built on open source technology with scalable in-memory storage, it is possible to achieve near-linear scalability and boost the performance of your integration services.
Supercharge Your Integration Services
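The pattern the description above hints at — replacing a per-message database round trip with a lookup into in-memory storage inside the integration flow — can be sketched as follows. All class and field names here are illustrative assumptions of mine, not a Red Hat or Camel API; in a real deployment the dictionary would be a distributed in-memory data grid:

```python
# Illustrative sketch: a Content Enricher step (an EIP) that enriches each
# message from reference data held in memory, avoiding a database query per
# message. Names are hypothetical, not from any real integration framework.

class InMemoryEnricher:
    def __init__(self, reference_data: dict):
        # Reference data kept in memory; an in-memory data grid in production.
        self.reference_data = reference_data

    def process(self, message: dict) -> dict:
        # Look up by key with no database round trip, then enrich the message.
        customer = self.reference_data.get(message["customer_id"], {})
        return {**message, "tier": customer.get("tier", "unknown")}

enricher = InMemoryEnricher({42: {"tier": "gold"}})
out = enricher.process({"customer_id": 42, "order": "A-1001"})
print(out["tier"])  # gold
```

Because the lookup is a local memory access rather than a network call to a database, throughput scales with the number of integration instances, which is the near-linear scalability the description claims.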