Event streaming architectures have prompted a reexamination of application and system architectures across the board. We live in a world where answers are needed now, in a constant real-time flow. Yet beyond the event streaming system itself, what are the corequisites that ensure our large-scale distributed database systems can keep pace with this always-on, always-current flow of data? What are the requirements and expectations for this next tech cycle?
5 Factors When Selecting a High Performance, Low Latency Database | ScyllaDB
There are hundreds of possible databases you can choose from today. Yet if you draw up a short list of critical criteria related to performance and scalability for your use case, the field of choices narrows and your evaluation decision becomes much easier.
In this session, we’ll explore five essential factors to consider when selecting a high-performance, low-latency database, including options, opportunities, and tradeoffs related to software architecture, hardware utilization, interoperability, RASP, and deployment.
Demystifying the Distributed Database Landscape (DevOps) | ScyllaDB
What is the state of high-performance, distributed databases as we head deeper into 2022, and which options are best suited for your own development projects?
The data-intensive applications leading this next tech cycle are typically powered by multiple types of databases and data stores—each satisfying specific needs and often interacting with a broader data ecosystem. Even the very notion of a database is evolving as new hardware architectures and methodologies allow for ever-greater capabilities and expectations for horizontal and vertical scalability, performance and reliability.
In this webinar, Peter Corless, ScyllaDB’s director of technology advocacy, will survey the current landscape of distributed database systems and highlight new directions in the industry.
This talk will cover different database and database-adjacent technologies as well as describe their appropriate use cases, patterns and anti-patterns with a focus on:
- Distributed SQL, NewSQL and NoSQL
- In-memory datastores and caches
- Streaming technologies with persistent data storage
Typesafe & William Hill: Cassandra, Spark, and Kafka - The New Streaming Data... | DataStax Academy
Typesafe did a survey of Spark usage last year and found that a large percentage of Spark users combine it with Cassandra and Kafka. This talk focuses on streaming data scenarios that demonstrate how these three tools complement each other for building robust, scalable, and flexible data applications. Cassandra provides resilient and scalable storage, with flexible data format and query options. Kafka provides durable, scalable collection of streaming data with message-queue semantics. Spark provides very flexible analytics, everything from classic SQL queries to machine learning and graph algorithms, running in a streaming model based on "mini-batches", offline batch jobs, or interactive queries. We'll consider best practices and areas where improvements are needed.
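For readers who want the shape of that pattern in code, here is a minimal PySpark Structured Streaming sketch of the Kafka-to-Cassandra leg. The topic, keyspace, table, and schema are hypothetical, and it assumes the spark-sql-kafka and spark-cassandra-connector packages are supplied via --packages.

```python
# Minimal sketch: consume a Kafka topic and append rows into Cassandra.
# Topic, keyspace, table, and schema are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StringType, DoubleType

spark = (SparkSession.builder
         .appName("kafka-to-cassandra")
         .config("spark.cassandra.connection.host", "127.0.0.1")
         .getOrCreate())

schema = (StructType()
          .add("event_id", StringType())
          .add("user_id", StringType())
          .add("amount", DoubleType()))

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "events")              # hypothetical topic
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

def write_batch(df, epoch_id):
    # Write each micro-batch through the Cassandra connector's DataFrame API.
    (df.write.format("org.apache.spark.sql.cassandra")
       .options(keyspace="demo", table="events")      # hypothetical keyspace/table
       .mode("append")
       .save())

query = events.writeStream.foreachBatch(write_batch).start()
query.awaitTermination()
```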
Choosing the Right Database: Exploring MySQL Alternatives for Modern Applicat... | Mydbops
Choosing the Right Database: Exploring MySQL Alternatives for Modern Applications by Bhanu Jamwal, Head of Solution Engineering, PingCAP at the Mydbops Opensource Database Meetup 14.
This presentation discusses the challenges in choosing the right database for modern applications, focusing on MySQL alternatives. It highlights the growth of new applications, the need to improve infrastructure, and the rise of cloud-native architecture.
The presentation explores alternatives to MySQL, such as MySQL forks, database clustering, and distributed SQL. It introduces TiDB as a distributed SQL database for modern applications, highlighting its features and top use cases.
Case studies of companies benefiting from TiDB are included. The presentation also outlines TiDB's product roadmap, detailing upcoming features and enhancements.
Scaling Security on 100s of Millions of Mobile Devices Using Apache Kafka® an... | Confluent
Watch this talk here: https://www.confluent.io/online-talks/scaling-security-on-100s-of-millions-of-mobile-devices-using-kafka-and-scylla-on-demand
Join mobile cybersecurity leader Lookout as they talk through their data ingestion journey.
Lookout enables enterprises to protect their data by evaluating threats and risks at post-perimeter endpoint devices and providing access to corporate data after conditional security scans. Their continuous assessment of device health creates a massive amount of telemetry data, forcing new approaches to data ingestion. Learn how Lookout changed its approach in order to grow from 1.5 million devices to 100 million devices and beyond, by implementing Confluent Platform and switching to Scylla.
ScyllaDB Open Source 5.0 is the latest evolution of our monstrously fast and scalable NoSQL database – powering instantaneous experiences with massive distributed datasets.
Join us to learn about ScyllaDB Open Source 5.0, which represents the first milestone in ScyllaDB V. ScyllaDB 5.0 introduces a host of functional, performance and stability improvements that resolve longstanding challenges of legacy NoSQL databases.
We’ll cover:
- New capabilities including a new IO model and scheduler, Raft-based schema updates, automated tombstone garbage collection, optimized reverse queries, and support for the latest AWS EC2 instances
- How ScyllaDB 5.0 fits into the evolution of ScyllaDB – and what to expect next
- The first look at benchmarks that quantify the impact of ScyllaDB 5.0's numerous optimizations
This will be an interactive session with ample time for Q & A – bring us your questions and feedback!
Supporting Hadoop in containers takes much more than the very primitive support Docker provides through its storage plugin. A production-scale Hadoop deployment inside containers needs to honor affinity/anti-affinity, fault-domain, and data-locality policies. Kubernetes alone, with primitives such as StatefulSets and PersistentVolumeClaims, is not sufficient to support a complex, data-heavy application such as Hadoop. One needs to think about this problem more holistically across the container, networking, and storage stacks. Also, the constructs around deployment, scaling, upgrades, etc. in traditional orchestration platforms are designed for applications that have adopted a microservices philosophy, which doesn't fit most big data applications across the ingest, store, process, serve, and visualization stages of the pipeline. Come to this technical session to learn how to run and manage the lifecycle of containerized Hadoop and other applications in the data analytics pipeline efficiently and effectively, far beyond simple container orchestration. #BigData #NoSQL #Hortonworks #Cloudera #Kafka #Tensorflow #Cassandra #MongoDB #Kudu #Hive #HBase. Partha Seetala, CTO, Robin Systems.
Streaming Data Analytics with ksqlDB and Superset | Robert Stolz, Preset | Hosted by Confluent
Streaming data systems have been growing rapidly in importance in the modern data stack. Kafka’s ksqlDB provides an interface for analytic tools that speak SQL. Apache Superset, the most popular modern open-source visualization and analytics solution, plugs into nearly any data source that speaks SQL, including Kafka. Here, we review and compare methods for connecting Kafka to Superset to enable streaming analytics use cases including anomaly detection, operational monitoring, and online data integration.
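As a concrete illustration of the ksqlDB leg of that pipeline, here is a hedged Python sketch that submits statements to ksqlDB's REST API. The server address, topic, and field names are assumptions for illustration, not details from the talk.

```python
# Sketch: define a ksqlDB stream over a Kafka topic, then derive a
# continuously updating table that a SQL-speaking BI tool (e.g. Superset)
# can query. Endpoint, topic, and field names are illustrative.
import requests

KSQLDB = "http://localhost:8088"  # assumed ksqlDB server address

requests.post(f"{KSQLDB}/ksql", json={
    "ksql": """
        CREATE STREAM pageviews (user_id VARCHAR, url VARCHAR)
        WITH (KAFKA_TOPIC='pageviews', VALUE_FORMAT='JSON');
    """,
    "streamsProperties": {},
}).raise_for_status()

requests.post(f"{KSQLDB}/ksql", json={
    "ksql": """
        CREATE TABLE views_per_user AS
        SELECT user_id, COUNT(*) AS views
        FROM pageviews GROUP BY user_id EMIT CHANGES;
    """,
    "streamsProperties": {},
}).raise_for_status()
```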
Want to see a high-level overview of the products in the Microsoft data platform portfolio in Azure? I’ll cover products in the categories of OLTP, OLAP, data warehouse, storage, data transport, data prep, data lake, IaaS, PaaS, SMP/MPP, NoSQL, Hadoop, open source, reporting, machine learning, and AI. It’s a lot to digest but I’ll categorize the products and discuss their use cases to help you narrow down the best products for the solution you want to build.
Optimizing Performance in Rust for Low-Latency Database Drivers | ScyllaDB
The process of optimizing shard-aware drivers for ScyllaDB has involved several initiatives, often necessitating a complete rewrite from the ground up. Discover the efforts put into enhancing the performance of ScyllaDB drivers with a focus on Rust, and how its code base will serve as a foundation for drivers using other language bindings in the future. This session emphasizes the performance gains achieved by harnessing the power of the asynchronous Tokio framework as the backbone of a new, high-performance driver while thoughtfully architecting and optimizing various components of the driver.
NewSQL - Deliverance from BASE and back to SQL and ACID | Tony Rogerson
There are a number of NewSQL products now on the market, such as VoltDB and Postgres-XL. These promise NoSQL performance and scalability, but with ACID guarantees and relational concepts implemented in ANSI SQL.
This session will cover why NoSQL came about, why it has had its day, and why NewSQL will become the backbone of the enterprise for OLTP and analytics.
Azure Days 2019: Bigger and More Complex Is Not Always Better (Meinrad Weiss) | Trivadis
"Modern" data warehouse and data lake architectures often teem with layers and services. Systems like these can manage and analyze petabytes of data, but all of this comes at a price (complexity, latency, stability), and not every project ends up happy with this approach.
The talk traces the journey from a technology-infatuated solution to an environment tailored to users' actual needs. It shows the bright and dark sides of massively parallel systems and aims to sharpen your senses for capturing real customer requirements.
Kubernetes is great for deploying stateless containers, but what about the big data ecosystem? Episode 3 of our Kubernetes series covers how DC/OS enables you to connect your Kubernetes-based applications to co-located big data services.
Slides cover:
1. Why persistence is challenging in distributed architectures
2. How DC/OS helps you take advantage of the services available in the big data ecosystem
3. How to connect Kubernetes to your data services through networking
4. How Apache Flink and Apache Spark work with Kubernetes to enable real-time data processing on DC/OS
Keeping Analytics Data Fresh in a Streaming Architecture | John Neal, Qlik | Hosted by Confluent
Qlik is an industry leader across its solution stack, both on the Data Integration side of things with Qlik Replicate (real-time CDC) and Qlik Compose (data warehouse and data lake automation), and on the Analytics side with Qlik Sense. These two “sides” of Qlik are coming together more frequently these days as the need for “always fresh” data increases across organizations.
When real-time streaming applications are the topic du jour, companies look to Apache Kafka to provide the architectural backbone those applications require. Those same companies turn to Qlik Replicate to put the data from their enterprise database systems into motion at scale, whether that data resides in “legacy” mainframe databases; traditional relational databases such as Oracle, MySQL, or SQL Server; or applications such as SAP and Salesforce.
In this session we will look in depth at how Qlik Replicate can be used to continuously stream changes from a source database into Apache Kafka. From there, we will explore how a purpose-built consumer can be used to provide the bridge between Apache Kafka and an analytics application such as Qlik Sense.
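A purpose-built bridge consumer of the kind described could look roughly like this Python sketch. The CDC topic, analytics endpoint, and payload shape are hypothetical stand-ins, not Qlik APIs.

```python
# Sketch of a bridge consumer: read CDC records landed in Kafka and forward
# each change to an analytics endpoint. Topic, endpoint URL, and payload
# shape are hypothetical.
import json
import requests
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "replicate.orders",                      # hypothetical CDC topic
    bootstrap_servers="localhost:9092",
    group_id="analytics-bridge",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    enable_auto_commit=False,
)

for record in consumer:
    change = record.value
    # Push each change event to the analytics layer (hypothetical API).
    requests.post("https://analytics.example.com/refresh",
                  json=change, timeout=10).raise_for_status()
    consumer.commit()  # commit only after the downstream write succeeds
```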
Seamless, Real-Time Data Integration with Connect | Precisely
As many of our customers have come to learn, integrating legacy data into a modern data architecture is easier said than done! View this on-demand webinar to learn all about Precisely's seamless data integration solutions and how they have helped thousands of customers like you trust their data.
Learn about the two flavors of Precisely's Connect:
• Collect, prepare, transform, and load your data to various targets using Connect ETL, with the flexibility of using clusters and running in many different environments. With our 'design once, deploy anywhere' approach, what is built on-prem today can run on a cloud platform tomorrow, with no additional development or mainframe expertise required.
• Capture data changes in real time with no coding, tuning, or performance impact using Connect CDC, replicating exactly WHAT you need and HOW you need it with over 80 built-in data transformation methods.
Cloudera Operational DB (Apache HBase & Apache Phoenix) | Timothy Spann
Using Apache NiFi 1.10 to read/write from HBase
Dec 2019, Timothy Spann, Field Engineer, Data in Motion
Princeton Meetup, 10 Dec 2019
https://www.meetup.com/futureofdata-princeton/events/266496424/
Hosted By PGA Fund at:
https://pga.fund/coworking-space/
Princeton Growth Accelerator
5 Independence Way, 4th Floor, Princeton, NJ
Managing and Deploying High Performance Computing Clusters using Windows HPC ... | Saptak Sen
The new management features built into Windows HPC Server 2008 R2 are the foundation for deploying and managing HPC clusters of up to 1,000 nodes. Join us for a deep dive into the monitoring and diagnostic tools, a review of the updated heat map, and template-based deployment. We also cover the new PowerShell-based scripting capabilities: the basics of the management shell, the underlying design and key concepts, new reporting capabilities, and a discussion of network boot.
Simplifying Big Data Integration with Syncsort DMX and DMX-h | Precisely
Today’s modern data strategies have to manage more than growing data volumes. They must also address the added complexity of integrating diverse data sources and types, adhere to security and governance mandates, and ensure the right tools and skills are in place to deliver business value from the data.
Learn how the latest enhancements to Syncsort DMX and DMX-h can help you achieve your modern data strategy goals with a single interface for accessing and integrating all your enterprise data sources – batch and streaming – across Hadoop, Spark, Linux, Windows or Unix – on premise or in the cloud.
Watch this on-demand customer education webcast to learn the latest product features introduced this year, including:
• Best-in-class data ingestion capabilities with enhanced support for mainframes, RDBMSs, MPP, Avro/Parquet, Kafka, NoSQL, and more.
• Single interface for streaming and batch processes – now with support for Kafka and MapR Streams
• Secure data access, data governance and lineage with seamless integration with Kerberos, Apache Ranger, Apache Ambari, Cloudera Manager, Cloudera Navigator and Sentry.
• Evolution of our design once, deploy anywhere architecture – now with support for Spark!
Capital One Delivers Risk Insights in Real Time with Stream Processing | Confluent
Speakers: Ravi Dubey, Senior Manager, Software Engineering, Capital One + Jeff Sharpe, Software Engineer, Capital One
Capital One supports interactions with real-time streaming transactional data using Apache Kafka®. Kafka helps deliver information to internal operations teams and bank tellers to assist with assessing risk and protecting customers in a myriad of ways.
Inside the bank, Kafka allows Capital One to build a real-time system that takes advantage of modern data and cloud technologies without exposing customers to unnecessary data breaches, or violating privacy regulations. These examples demonstrate how a streaming platform enables Capital One to act on their visions faster and in a more scalable way through the Kafka solution, helping establish Capital One as an innovator in the banking space.
Join us for this online talk on lessons learned, best practices and technical patterns of Capital One’s deployment of Apache Kafka.
-Find out how Kafka delivers on a 5-second service-level agreement (SLA) for in-branch tellers.
-Learn how to combine and host data in memory and prevent personally identifiable information (PII) violations for in-flight transactions (illustrated in the sketch below).
-Understand how Capital One manages Kafka Docker containers using Kubernetes.
Watch the recording: https://videos.confluent.io/watch/6e6ukQNnmASwkf9Gkdhh69?.
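The in-flight PII-masking idea mentioned above can be sketched with stock Kafka clients. This is an illustrative pattern, not Capital One's implementation; the topic names and field list are assumptions.

```python
# Sketch: consume raw transactions, tokenize sensitive fields, and republish
# to a "clean" topic for downstream consumers. Topics and fields are
# hypothetical.
import hashlib
import json
from kafka import KafkaConsumer, KafkaProducer

PII_FIELDS = {"ssn", "card_number", "email"}  # assumed sensitive fields

def tokenize(value: str) -> str:
    # One-way token so records stay joinable without exposing the raw value.
    return hashlib.sha256(value.encode("utf-8")).hexdigest()[:16]

consumer = KafkaConsumer("transactions.raw",
                         bootstrap_servers="localhost:9092",
                         value_deserializer=lambda v: json.loads(v))
producer = KafkaProducer(bootstrap_servers="localhost:9092",
                         value_serializer=lambda v: json.dumps(v).encode())

for record in consumer:
    event = record.value
    for field in PII_FIELDS & event.keys():
        event[field] = tokenize(str(event[field]))
    producer.send("transactions.clean", event)
```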
Microsoft Azure is changing, and its database component (Windows Azure SQL Database) is changing even faster. In this session I would like to show those who haven't seen it, and remind those who already know something, what WASD is about, what changes have taken place, and what we can expect from this database. For the brave, there will be an opportunity to connect to a cloud account and test these solutions yourself.
Machbase Neo is an innovative IoT data processing solution that integrates various features into an all-in-one time-series database.
In the past, development organizations had to invest a lot of time and resources to build a single service or solution. Moreover, they had to navigate complex and challenging processes for data collection and processing. But now, with the introduction of Machbase Neo, these problems have been solved. You can now set up everything using just one Machbase Neo server, allowing developers to focus on their core tasks. This product can save developers over 90% of their time by eliminating unnecessary tasks.
Cosmos DB Real-time Advanced Analytics Workshop | Databricks
The workshop implements an innovative fraud detection solution as a PoC for a bank that provides payment processing services for commerce to merchant customers across the globe, helping them save costs by applying machine learning and advanced analytics to detect fraudulent transactions. Since its customers are around the world, the right solution should minimize any latency experienced using the service by distributing as much of the solution as possible, as closely as possible, to the regions in which customers use the service. The workshop designs a data pipeline solution that leverages Cosmos DB for both the scalable ingest of streaming data and the globally distributed serving of both pre-scored data and machine learning models. Cosmos DB’s major advantage when operating at a global scale is its high concurrency with low latency and predictable results.
This combination is unique to Cosmos DB and ideal for the bank's needs. The solution leverages the Cosmos DB change feed in concert with Azure Databricks Delta and Spark capabilities to enable a modern data warehouse solution that can be used to create risk-reduction solutions for scoring transactions for fraud, in both an offline, batch approach and a near real-time, request/response approach. https://github.com/Microsoft/MCW-Cosmos-DB-Real-Time-Advanced-Analytics Takeaway: how to leverage Azure Cosmos DB and Azure Databricks along with Spark ML to build innovative advanced analytics pipelines.
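A rough PySpark sketch of the change-feed leg follows. It uses the Cosmos DB Spark 3 OLTP connector's option names as best understood, with account, database, and container details as placeholders; treat the specifics as assumptions.

```python
# Sketch: read the Cosmos DB change feed as a stream and score each
# micro-batch. Account details, database, and container are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cosmos-changefeed").getOrCreate()

changes = (spark.readStream
           .format("cosmos.oltp.changeFeed")
           .option("spark.cosmos.accountEndpoint", "https://<account>.documents.azure.com:443/")
           .option("spark.cosmos.accountKey", "<key>")
           .option("spark.cosmos.database", "payments")       # hypothetical
           .option("spark.cosmos.container", "transactions")  # hypothetical
           .option("spark.cosmos.changeFeed.startFrom", "Beginning")
           .option("spark.cosmos.changeFeed.mode", "Incremental")
           .load())

def score_batch(df, epoch_id):
    # Placeholder for Spark ML scoring, e.g. model.transform(df).
    df.show(truncate=False)

changes.writeStream.foreachBatch(score_batch).start().awaitTermination()
```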
Is Using KoP (Kafka-on-Pulsar) a Good Idea? - Pulsar Summit SF 2022 | StreamNative
So, you are a responsible software engineer building microservices for Apache Kafka, and life is good. Eventually, you hear the community talking about the outstanding experience they are having with Apache Pulsar features. They talk about infinite event stream retention, a rebalance-free architecture, native support for event processing, and multi-tenancy. Exciting, right? Most people would want to migrate their code to Pulsar, especially knowing that Pulsar also supports Kafka clients natively via the protocol handler known as KoP, which enables the Kafka client APIs on Pulsar. But, as said before, you are responsible, and you don't believe in fairy tales, just as you don't believe that migrations like this happen effortlessly. This session will discuss the architecture behind protocol handlers, what it means to have one enabled on Pulsar, and how KoP works. It will detail the effort required to migrate a microservice written for Kafka to Pulsar, and whether the code needs to change at all.
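KoP's core promise can be shown in a few lines: a stock Kafka client pointed at a Pulsar broker with the protocol handler enabled. The broker address and topic below are illustrative; KoP listens on the Kafka port (9092 by default).

```python
# Sketch: an unmodified Kafka client talking to a Pulsar broker running KoP.
# Broker host and topic are illustrative.
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(bootstrap_servers="pulsar-broker:9092")
producer.send("orders", b'{"id": 1}')
producer.flush()

consumer = KafkaConsumer("orders",
                         bootstrap_servers="pulsar-broker:9092",
                         auto_offset_reset="earliest",
                         consumer_timeout_ms=5000)
for record in consumer:
    print(record.value)  # served by Pulsar, not a Kafka broker
```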
Building an Asynchronous Application Framework with Python and Pulsar - Pulsa... | StreamNative
This talk describes Klaviyo’s internal messaging system, an asynchronous application framework built around Pulsar that provides a set of high-quality tools for building business-critical asynchronous data flows in unreliable environments. This framework includes: a Pulsar ORM and schema migrator for topic configuration; a retry/replay system; a versioned schema registry; a consumer framework oriented around preventing message loss in hostile environments while maximizing observability; an experimental “online schema change” for topics; and more. Development of this system was informed by lessons learned during heavy use of datastores like RabbitMQ and Kafka, and frameworks like Celery, Spark, and Flink. In addition to the capabilities of this system, this talk will also cover (sometimes painful) lessons learned about the process of converting a heterogeneous async-computing environment onto Pulsar and a unified model.
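Klaviyo's framework itself is internal, but the schema-first building block it layers on can be sketched with the stock Pulsar Python client. The record type and topic below are illustrative, not Klaviyo's code.

```python
# Sketch: a typed record published through Pulsar's schema support, so the
# broker can enforce schema compatibility. Names are illustrative.
import pulsar
from pulsar.schema import AvroSchema, Integer, Record, String

class OrderEvent(Record):
    order_id = String()
    amount_cents = Integer()

client = pulsar.Client("pulsar://localhost:6650")
producer = client.create_producer("persistent://public/default/orders",
                                  schema=AvroSchema(OrderEvent))
producer.send(OrderEvent(order_id="o-1", amount_cents=1250))
client.close()
```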
Similar to Distributed Database Design Decisions to Support High Performance Event Streaming - Pulsar Summit SF 2022
Blue-green deploys with Pulsar & Envoy in an event-driven microservice ecosys... | StreamNative
In this talk, learn how Toast leverages our Envoy control-plane to manage blue-green deploys of Pulsar consumers, and how this has helped drive adoption across the engineering organization. Dive into the history of Pulsar at Toast, starting from its introduction in 2019 to provide event-driven architecture across a rapidly scaling restaurant software platform. We will detail some of the hurdles that we encountered gaining buy-in across a diverse set of teams, and dive deep into how we enforce best practices and integrate with our service control plane.
Simplify Pulsar Functions Development with SQL - Pulsar Summit SF 2022 | StreamNative
Pulsar Functions is a succinct framework provided by Apache Pulsar to conduct real-time data processing. Its use cases include ETL pipeline, event-driven applications, and simple data analytics. While Pulsar Functions already provides an extremely simple programming interface, we want to further lower the barrier for users to access real-time data. Since SQL is one of the universal languages in the technology world and well accepted by the vast majority of data engineers, we decided to add a SQL expressing layer on top of Pulsar Functions runtime. In this talk, we will discuss the architecture and implementation of this new service. We will see how SQL syntax, Pulsar Functions, and Function Mesh can work together to deliver a unique user development experience for real-time data jobs in the cloud environment. We will also walk through use cases like filtering, routing, and projecting messages as well as integrating with the Pulsar IO Connectors framework.
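The SQL-to-function service is StreamNative's own, but the artifact it targets is an ordinary Pulsar Function. Here is a minimal Python sketch of the kind of filter-and-project function such a layer might generate; the field names are assumed for illustration.

```python
# Sketch: a Python Pulsar Function that filters and projects each message,
# roughly equivalent to:
#   SELECT order_id, amount FROM orders WHERE amount > 100
# Field names are illustrative.
import json
from pulsar import Function

class FilterLargeOrders(Function):
    def process(self, input, context):
        event = json.loads(input)
        if event.get("amount", 0) > 100:
            return json.dumps({"order_id": event["order_id"],
                               "amount": event["amount"]})
        return None  # drop rows that do not match the predicate
```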
Towards a ZooKeeper-less Pulsar, etcd, etcd, etcd. - Pulsar Summit SF 2022 | StreamNative
Starting with version 2.10, the Apache ZooKeeper dependency has been eliminated and replaced with a pluggable framework that enables you to reduce the infrastructure footprint of Apache Pulsar by leveraging alternative metadata and coordination systems based on your deployment environment. In this talk, walk through the steps required to utilize the existing etcd service running inside Kubernetes to act as Pulsar's metadata store, thereby eliminating the need to run ZooKeeper entirely, leaving you with a Zookeeper-less Pulsar.
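For orientation, the pluggable metadata framework is driven by broker configuration. A hedged sketch of the relevant broker.conf lines (Pulsar 2.10+) follows, with an illustrative etcd address:

```
# Sketch of broker.conf for an etcd-backed metadata store (Pulsar 2.10+);
# the etcd service address is illustrative.
metadataStoreUrl=etcd:http://etcd.default.svc.cluster.local:2379
configurationMetadataStoreUrl=etcd:http://etcd.default.svc.cluster.local:2379
```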
Apache Pulsar is a highly available, distributed messaging system that provides guarantees of no message loss and strong message ordering with predictable read and write latency. In this talk, learn how this can be validated for Apache Pulsar Kubernetes deployments. Various failures are injected using Chaos Mesh to simulate network and other infrastructure failure conditions. Many questions get asked about failure scenarios, but it can be hard to find the answers to these important questions: When a failure happens, how long does it take to recover? Does it cause unavailability? How does it impact throughput and latency? Are the guarantees of no message loss and strong message ordering kept, even when components fail? If a complete availability zone fails, is the system configured correctly to handle AZ failures? This talk will help you find answers to these questions and apply the tooling and practices to your own testing and validation.
Cross the Streams! Creating Streaming Data Pipelines with Apache Flink + Apac... | StreamNative
Despite what the Ghostbusters said, we’re going to go ahead and cross (or, join) the streams. This session covers getting started with streaming data pipelines, maximizing Pulsar’s messaging system alongside one of the most flexible streaming frameworks available, Apache Flink. Specifically, we’ll demonstrate the use of Flink SQL, which provides various abstractions and allows your pipeline to be language-agnostic. So, if you want to leverage the power of a high-speed, highly customizable stream processing engine without the usual overhead and learning curves of the technologies involved (and their interconnected relationships), then this talk is for you. Watch the step-by-step demo to build a unified batch and streaming pipeline from scratch with Pulsar, via the Flink SQL client. This means you don’t need to be familiar with Flink, (or even a specific programming language). The examples provided are built for highly complex systems, but the talk itself will be accessible to any experience level.
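A rough pyflink rendering of that idea: declare a Pulsar-backed table in SQL, then run a continuous query against it. Connector option names vary across Flink Pulsar connector versions, so treat them, along with the topic and endpoints, as assumptions; the connector jar must be on the classpath.

```python
# Sketch: a language-agnostic pipeline expressed in Flink SQL from Python.
# Connector options and endpoints are illustrative and version-dependent.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

t_env.execute_sql("""
    CREATE TABLE clicks (
        user_id STRING,
        url     STRING
    ) WITH (
        'connector'   = 'pulsar',
        'topics'      = 'persistent://public/default/clicks',
        'service-url' = 'pulsar://localhost:6650',
        'admin-url'   = 'http://localhost:8080',
        'format'      = 'json'
    )
""")

# A continuous aggregation; results update as events arrive.
t_env.execute_sql(
    "SELECT user_id, COUNT(*) AS clicks FROM clicks GROUP BY user_id"
).print()
```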
Message Redelivery: An Unexpected Journey - Pulsar Summit SF 2022 | StreamNative
Apache Pulsar depends upon message acknowledgments to provide at-least-once or exactly-once processing guarantees. With these guarantees, any transmission between the broker and its producers and consumers requires an acknowledgment. But what happens if an acknowledgment is not received? Resending the message introduces the potential for duplicate processing and increases the likelihood of out-of-order processing. Therefore, it is critical to understand Pulsar's message redelivery semantics in order to prevent either of these conditions. In this talk, we will walk you through the redelivery semantics of Apache Pulsar and highlight some of the control mechanisms available to application developers to control this behavior. Finally, we will present best practices for configuring message redelivery to suit various use cases.
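The Python client exposes the main redelivery controls directly. A minimal sketch follows, with an illustrative topic, subscription, and timing values:

```python
# Sketch: redelivery controls in the Pulsar Python client: an unacked-message
# timeout plus explicit negative acknowledgment with a redelivery delay.
import pulsar

def handle(payload: bytes) -> None:
    # Hypothetical processing step; raising here triggers redelivery.
    print(payload)

client = pulsar.Client("pulsar://localhost:6650")
consumer = client.subscribe(
    "persistent://public/default/payments",   # illustrative topic
    subscription_name="risk-engine",
    consumer_type=pulsar.ConsumerType.Shared,
    unacked_messages_timeout_ms=30000,        # redeliver if unacked for 30 s
    negative_ack_redelivery_delay_ms=5000,    # nacked messages retry after 5 s
)

while True:
    msg = consumer.receive()
    try:
        handle(msg.data())
        consumer.acknowledge(msg)             # acked: never redelivered
    except Exception:
        consumer.negative_acknowledge(msg)    # nacked: scheduled for redelivery
```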
Unlocking the Power of Lakehouse Architectures with Apache Pulsar and Apache ... | StreamNative
Lakehouses are quickly growing in popularity as a new approach to data platform architecture, bringing some of the long-established benefits of the OLTP world to OLAP, including transactions, record-level updates/deletes, and change streaming. In this talk, we will discuss Apache Hudi and how it unlocks possibilities for building your own fully open-source Lakehouse featuring a rich set of integrations with existing technologies, including Apache Pulsar. In this session, we will present:
- What Lakehouses are, and why they are needed.
- What Apache Hudi is and how it works.
- A use case and demo that applies Apache Hudi's DeltaStreamer tool to ingest data from Apache Pulsar.
Understanding Broker Load Balancing - Pulsar Summit SF 2022 | StreamNative
Pulsar is a horizontally scalable messaging system, so the traffic in a logical cluster must be balanced across all the available Pulsar brokers as evenly as possible in order to ensure full utilization of the broker layer. You can use multiple settings and tools to control the traffic distribution, which requires a bit of context to understand how traffic is managed in Pulsar. In this talk, we will walk you through the load balancing capabilities of Apache Pulsar and highlight some of the control mechanisms available to control the distribution of load across the Pulsar brokers. Finally, we will discuss the various load shedding strategies that are available. At the end of the talk, you will have a better understanding of how Pulsar's broker-level auto-balancing works, and how to properly configure it to meet your workload demands.
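For reference, auto-balancing is governed by broker configuration. A hedged sketch of a few of the relevant broker.conf knobs follows; names and defaults vary by Pulsar version, and the shedding strategy shown is one of the built-in options:

```
# Sketch of broker.conf settings that govern Pulsar's auto-balancing;
# treat these as illustrative, and check your version's defaults.
loadBalancerEnabled=true
loadBalancerSheddingEnabled=true
loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.ThresholdShedder
```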
Pulsar's Journey in Yahoo!: On-prem, Cloud and Hybrid - Pulsar Summit SF 2022 | StreamNative
In today’s world, we are seeing a big shift toward the cloud. With this shift comes a big shift in the expectations we have for a messaging system, especially when the messaging system is presented as a managed service in a large-scale, multi-tenant environment. For any large-scale enterprise, it’s very important to evaluate a messaging system and be confident in it before expanding complex distributed data systems like Apache Pulsar from on-premise clusters to elastically scalable, fully managed cloud services. We must consider aspects such as migration from and integration with large-scale on-premise clusters, security, cost efficiency, the cloud-friendliness of the architecture, modeling cost and capacity, tenant isolation, deployment robustness, availability, and monitoring. Not every messaging system is built to be cloud-native and run as a managed service with cost efficiency. We have been running large-scale Apache Pulsar at Yahoo for the last 8 years on various platforms and hardware configurations while meeting application SLAs and serving more than 1M topics in a cluster. In this talk, we will cover Pulsar’s journey in Yahoo! from an on-premise platform to a hybrid cloud and on-premise system, and discuss Pulsar’s architecture and the features that make it a good cloud-native messaging-system choice for any enterprise.
Event-Driven Applications Done Right - Pulsar Summit SF 2022 | StreamNative
Pulsar Summit San Francisco is the event dedicated to Apache Pulsar. This one-day, action-packed event will include 5 keynotes, 12 breakout sessions, and 1 amazing happy hour. Speakers are from top companies, including Google, AWS, Databricks, Onehouse, StarTree, Intel, ScyllaDB, and more! It’s the perfect opportunity to network with Pulsar thought leaders in person.
Join developers, architects, data engineers, DevOps professionals, and anyone who wants to learn about messaging and event streaming for this one-day, in-person event. Pulsar Summit San Francisco brings the Apache Pulsar Community together to share best practices and discuss the future of streaming technologies.
Pulsar @ Scale. 200M RPM and 1K instances - Pulsar Summit SF 2022 | StreamNative
Our services team creates, builds, and maintains the as-a-service offering for base platform services within our organization. Several thousand applications use these custom services daily, generating more than 700 million requests per minute. One of these services is our publish/subscribe offering, BQ, built on Apache Pulsar with a custom SDK and custom metrics. BQ is the core communication service within our organization, handling more than 200M RPM. All the core processes of the organization depend on this service for operation: the CDC of any of our RDBMS or NoSQL offerings, all the eventing efforts of the organization, async communication between apps, notification systems, etc. The backend of the solution was Apache Pulsar running on EC2 on AWS, and on top of that we built several components as wrappers around the actual backend, creating our own SDKs and abstractions and in many ways extending the features provided by Pulsar. We had a multi-cluster setup 100% on AWS, with custom Pulsar Docker images running on large ASG setups, along with our own wrapping and admin APIs and DBs. All of this, in turn, made the solution volatile.
Data Democracy: Journey to User-Facing Analytics - Pulsar Summit SF 2022 | StreamNative
There is an increasing need to unleash analytical capabilities directly to the end-users to democratize decision-making. User-Facing Analytics is a new frontier that will shape the products of tomorrow and push the limits of existing technology. It demands a solution that will scale to millions of users to provide fast, real-time insights. In this session, Xiang will talk about his journey to build Apache Pinot to tackle the analytics problem space with the architectural changes and technology inventions made over the past decade. He will also talk about how other big data companies such as LinkedIn, Uber, and Stripe power their user-facing analytical applications.
Beam + Pulsar: Powerful Stream Processing at Scale - Pulsar Summit SF 2022 | StreamNative
Welcome and Opening Remarks - Pulsar Summit SF 2022 | StreamNative
Log System As Backbone – How We Built the World’s Most Advanced Vector Databa... | StreamNative
Milvus is an open-source vector database that leverages a novel data fabric to build and manage vector similarity search applications. As the world's most popular vector database, it has already been adopted in production by thousands of companies around the world, including Lucidworks, Shutterstock, and Cloudinary. With the launch of Milvus 2.0, the community aims to introduce a cloud-native, highly scalable and extendable vector similarity solution, and the key design concept is log as data.
Milvus relies on Pulsar as its log pub/sub system. Pulsar helps Milvus reduce system complexity by loosely decoupling each microservice, and makes the system stateless by disaggregating log storage and computation, which also makes the system further extendable. In this talk, we will introduce the overall design and implementation details of Milvus, along with its roadmap.
Takeaways:
1) Get a general idea of what a vector database is and its real-world use cases.
2) Understand the major design principles of Milvus 2.0.
3) Learn how to build a complex system with the help of a modern log system like Pulsar.
MoP (MQTT on Pulsar) - a Powerful Tool for Apache Pulsar in IoT - Pulsar Summi... | StreamNative
MQTT (Message Queuing Telemetry Transport) is a message protocol based on the pub/sub model, with the advantages of a compact message structure, low resource consumption, and high efficiency, which make it suitable for IoT applications with low bandwidth and unstable network environments.
This session will introduce MQTT on Pulsar, which allows developers users of MQTT transport protocol to use Apache Pulsar. I will share the architecture, principles and future planning of MoP, to help you understand Apache Pulsar's capabilities and practices in the IoT industry.
Improvements Made in KoP 2.9.0 - Pulsar Summit Asia 2021StreamNative
KoP (Kafka on Pulsar) is a protocol handler that provides a convenient way for Kafka clients to connect to Pulsar without any code changes.
Since the first GA version (2.8.0) of KoP was released several months ago, there have been many significant bug fixes and enhancements, such as support for authorization and for lower-version Kafka clients. This talk will walk through the basics of KoP, then introduce the improvements in the 2.9.0 release.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides from my talk with Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what Testing in DevOps looks like. We closed with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Welcome to the first live UiPath Community Day Dubai! Join us for this unique occasion to meet our local and global UiPath Community and leaders. You will get a full view of the MEA region's automation landscape and the AI Powered automation technology capabilities of UiPath. Also, hosted by our local partners Marc Ellis, you will enjoy a half-day packed with industry insights and automation peers networking.
📕 Curious on our agenda? Wait no more!
10:00 Welcome note - UiPath Community in Dubai
Lovely Sinha, UiPath Community Chapter Leader, UiPath MVPx3, Hyper-automation Consultant, First Abu Dhabi Bank
10:20 A UiPath cross-region MEA overview
Ashraf El Zarka, VP and Managing Director MEA, UiPath
10:35 Customer Success Journey
Deepthi Deepak, Head of Intelligent Automation CoE, First Abu Dhabi Bank
11:15 The UiPath approach to GenAI with our three principles: improve accuracy, supercharge productivity, and automate more
Boris Krumrey, Global VP, Automation Innovation, UiPath
12:15 Discover how Marc Ellis leverages tech-driven solutions in recruitment and managed services.
Brendan Lingam, Director of Sales and Business Development, Marc Ellis
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
The Metaverse and AI: how can decision-makers harness the Metaverse for their...Jen Stirrup
The Metaverse was popularized in science fiction, and now it is coming closer to being part of our daily lives through social media and shopping companies. How can businesses survive in a world where Artificial Intelligence is becoming the present as well as the future of technology, and how does the Metaverse fit into business strategy when futurist ideas are developing into reality at accelerated rates? How do we do this when our data isn't up to scratch? How can we move towards success with our data so we are set up for the Metaverse when it arrives?
How can you help your company evolve, adapt, and succeed using Artificial Intelligence and the Metaverse to stay ahead of the competition? What are the potential issues, complications, and benefits that these technologies could bring to us and our organizations? In this session, Jen Stirrup will explain how to start thinking about these technologies as an organisation.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security-analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
These are slides from the talk given at the IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), 2022.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to the purview of ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on.
Enhancing Performance with Globus and the Science DMZGlobus
ESnet has led the way in helping national facilities—and many other institutions in the research community—configure Science DMZs and troubleshoot network issues to maximize data transfer performance. In this talk we will present a summary of approaches and tips for getting the most out of your network infrastructure using Globus Connect Server.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Monitoring Java Application Security with JDK Tools and JFR Events
Distributed Database Design Decisions to Support High Performance Event Streaming - Pulsar Summit SF 2022
1. Pulsar Summit San Francisco
Hotel Nikko • August 18, 2022
Ecosystem
Distributed Database Design Decisions to Support High Performance Event Streaming
Peter Corless
Director of Technical Advocacy • ScyllaDB
2. Peter Corless is the Director of Technical Advocacy at ScyllaDB, the company behind the monstrously fast and scalable NoSQL database.
He is the editor of and a frequent contributor to the ScyllaDB blog, and program chair of ScyllaDB Summit and P99 CONF.
He recently hosted the Distributed Systems Masterclass, co-sponsored by StreamNative and ScyllaDB.
Peter Corless
Director of Technical Advocacy
ScyllaDB
3. Distributed Database Design Decisions to Support High Performance Event Streaming
Requirements for This Next Tech Cycle
5. Hardware Infrastructure is Evolving
+ Compute
  + From 100+ cores → 1,000+ cores per server
  + From multicore CPUs → full System on a Chip (SoC) designs (CPU, GPU, Cache, Memory)
+ Memory
  + Terabyte-scale RAM per server
  + DDR4 — 800 to 1600 MHz, 2011–present
  + DDR5 — 4600 MHz in 2020, 8000 MHz by 2024
  + DDR6 — 9600 MHz by 2025
+ Storage
  + Petabyte-scale storage per server
  + NVMe 2.0 [2021] — separation of base and transport specifications
Distributed Database Design Decisions to Support High Performance Event Streaming
6. Distributed Database Design Decisions to Support High Performance Event Streaming
Databases are Evolving
+ Consistency Models [CAP Model: AP vs. CP]
  + Strong, Eventual, Tunable
  + ACID vs. BASE
+ Data Model / Query Languages [SQL vs. NoSQL]
  + RDBMS / SQL
  + NoSQL [Document, Key-Value, Wide-Column, Graph]
+ Big Data → HUGE Data
  + Data Stored: Gigabytes? Terabytes? Petabytes? Exabytes?
  + Payload Sizes: Kilobytes? Megabytes?
  + OPS / TPS: Hundreds of thousands? Millions?
  + Latencies: Sub-millisecond? Single-digit milliseconds?
7. Distributed Database Design Decisions to Support High Performance Event Streaming
Databases are [or should be] designed for specific kinds of data, specific kinds of workloads, and specific kinds of queries.
How closely a database's design and implementation align with your specific use case determines the resistance of the system.
Variable resistors, anyone?
8. Distributed Database Design Decisions to Support High Performance Event Streaming
Sure, you can use various databases for tasks they were never designed for — but should you?
DATA ENGINEERS
9. Distributed Database Design Decisions to Support High Performance Event Streaming
ΔData / t, where t ≈ n × 0.001 s
For a database to be appropriate for event streaming, it needs to support managing changes to data over time in “real time” — measured in single-digit milliseconds or less.
And it must do so where changes to data can be produced at a rate of hundreds of thousands or millions of events per second [and at greater rates in the future].
11. Distributed Database Design Decisions to Support High Performance Event Streaming
While many database systems have been incrementally adapted to cloud-native environments, they still have underlying architectural limits or presumptions.
+ Strong consistency / record-locking — limits latencies & throughput
+ Single primary server for reads/writes — replicas are read-only or only for failover; bottlenecks write-heavy workloads
+ Local clustering / single-datacenter design — inappropriate for high availability; hampers global distribution; lack of topology-awareness induces fragility
12. Distributed Database Design Decisions to Support High Performance Event Streaming
Two flavors of responses:
+ NoSQL — Designed for non-relational data models, various query languages, and highly available distributed systems
  + Key-value, document, wide-column, graph, etc.
+ NewSQL — Still RDBMS, still SQL, but designed to operate as a highly available distributed system
13. Distributed Database Design Decisions to Support High Performance Event Streaming
Database-as-a-Service (DBaaS)
+ Lift-and-Shift to Cloud — Same base offering as the on-premises version, offered as a cloud-hosted managed service
  + Easy/fast to bring to market, but no fundamental design changes
+ Cloud Native — Designed from the ground up for cloud [only] usage
  + Elasticity — Dynamic provisioning; scale up/down for throughput and storage
  + Serverless — Do I need to know what hardware I’m running on?
  + Microservices & API integration — App integration, connectors, DevEx
  + Billing — Making it easy to consume & measure ROI/TCO
  + Governance — Privacy compliance / data localization
14. Distributed Database Design Decisions to Support High Performance Event Streaming
What does a database need to be, or have, or do, to properly support event streaming in 2022?
+ High Availability [“Always On”]
+ Impedance Match of Database to Event Streaming Systems
  + Similar characteristics for throughput and latency
+ All the Appropriate “Goesintos/Goesouttas”
  + Sink Connector
  + Change Data Capture (CDC) / Source Connector
+ Supports your favorite streaming flavor of the day
  + Kafka, Pulsar, RabbitMQ Streams, etc.
15. Distributed Database Design Decisions to Support High Performance Event Streaming
Event Streaming Journey of a NoSQL Database: ScyllaDB
16. Distributed Database Design Decisions to Support High Performance Event Streaming
ScyllaDB: Building on “Good Bones”
+ Performant: Shard-per-core, async-everywhere, shared-nothing architecture
+ Scalable: both horizontal [100s/1000s of nodes] & vertical [100s/1000s cores]
+ Available: Peer-to-Peer, Active-Active; no single point of failure
+ Distribution: Multi-datacenter clustering & replication, auto-sharding
+ Consistency: tunable; primarily eventual, but also Lightweight Transactions (LWT)
+ Topology Aware: Shard-aware, Node-aware, Rack-aware, Datacenter-aware
+ Compatible: Cassandra CQL & Amazon DynamoDB APIs
19. Distributed Database Design Decisions to Support High Performance Event Streaming
ScyllaDB Journey to Event Streaming — Starting with Kafka
+ Shard-Aware Kafka Sink Connector [January 2020]
  + Github: https://github.com/scylladb/kafka-connect-scylladb
  + Blog: https://www.scylladb.com/2020/02/18/introducing-the-kafka-scylla-connector/
+ Change Data Capture [January 2020 – October 2021]
  + January 2020: ScyllaDB Open Source 3.2 — experimental
  + Over the course of 2020: 3.3, 3.4, 4.0, 4.1, 4.2 — experimental iterations
  + January 2021: 4.3 — production-ready, new API
  + March 2021: 4.4 — new API
  + October 2021: 4.5 — performance & stability
+ CDC Kafka Source Connector [April 2021]
  + Github: https://github.com/scylladb/scylla-cdc-source-connector
  + Blog: https://debezium.io/blog/2021/09/22/deep-dive-into-a-debezium-community-connector-scylla-cdc-source-connector/
21. Distributed Database Design Decisions to Support High Performance Event Streaming
ScyllaDB Journey to Event Streaming with Pulsar
+ Pulsar Consumer: Cassandra Sink Connector
  + Comes by default with Pulsar
  + ScyllaDB is Cassandra CQL compatible
  + Docs: https://pulsar.apache.org/docs/io-cassandra-sink/
  + Github: https://github.com/apache/pulsar/blob/master/site2/docs/io-cassandra-sink.md
+ Pulsar Producer: Can use the ScyllaDB CDC Source Connector via Kafka compatibility
  + Pulsar makes it easy to bring Kafka topics into Pulsar
  + Docs: https://pulsar.apache.org/docs/adaptors-kafka/
+ Potential Developments:
  + Native Pulsar shard-aware ScyllaDB consumer connector — even faster ingestion
  + Native CDC Pulsar producer — unwrap your topics
22. Distributed Database Design Decisions to Support High Performance Event Streaming
ScyllaDB CDC: How Does It Work?
23. ScyllaDB Quickstart: Create a Table and Enable CDC
CREATE TABLE ks.tbl (
    pk int,
    ck int,
    val int,
    col set<int>,
    PRIMARY KEY (pk, ck)
) WITH cdc = { 'enabled': true };
Distributed Database Design Decisions to Support High Performance Event Streaming
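To make the quickstart concrete, here is a minimal sketch (the inserted values are hypothetical) of writing to the CDC-enabled base table and then inspecting the change records. ScyllaDB auto-creates the log table by appending _scylla_cdc_log to the base table name:

-- Write a row to the base table; CDC records the change automatically.
INSERT INTO ks.tbl (pk, ck, val, col) VALUES (1, 10, 100, {1, 2});

-- Read the change records from the auto-created CDC log table.
SELECT * FROM ks.tbl_scylla_cdc_log;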
24. Distributed Database Design Decisions to Support High Performance Event Streaming
CDC Options - Record Types
Delta (What was changed?)
+ 'full': contains information about every modified column
+ 'keys': only the primary key of the change is recorded
Preimage (What was before?)
+ 'false': disables the feature
+ 'true': contains only the columns that were changed by the write
+ 'full': contains the entire row (how it was before the write was made)
Postimage (What's the end result?)
+ 'false': disables the feature
+ 'true': shows the affected row's state after the write; the postimage row always contains all the columns, whether or not they were affected by the change
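As a sketch of how these options fit together (one possible combination, not a recommendation), the record types can be toggled per table with an ALTER TABLE statement:

-- Enable full deltas, full preimages, and postimages on the quickstart table.
ALTER TABLE ks.tbl WITH cdc = {
    'enabled': true,
    'delta': 'full',
    'preimage': 'full',
    'postimage': true
};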
25. Distributed Database Design Decisions to Support High Performance Event Streaming
CDC Options - Enabled and TTL
Enabled
+ 'false': disables the CDC feature
+ 'true': enables the CDC feature
TTL
+ 86400: in seconds; by default, records in the CDC log table expire within 24 hours
+ If set to 0, records never expire, and a separate cleaning mechanism is recommended
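For example (a sketch against the quickstart table), the retention window can be shortened from the default 86400 seconds, or set to 0 to disable expiry entirely:

-- Keep CDC log records for one hour instead of the default 24 hours.
ALTER TABLE ks.tbl WITH cdc = { 'enabled': true, 'ttl': 3600 };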
26. Distributed Database Design Decisions to Support High Performance Event Streaming
CDC Log Table
cqlsh> desc table ks.tbl_scylla_cdc_log;

CREATE TABLE ks.tbl_scylla_cdc_log (
    "cdc$stream_id" blob,
    "cdc$time" timeuuid,
    "cdc$batch_seq_no" int,
    "cdc$deleted_col" boolean,
    "cdc$deleted_elements_col" frozen<set<int>>,
    "cdc$deleted_val" boolean,
    "cdc$end_of_batch" boolean,
    "cdc$operation" tinyint,
    "cdc$ttl" bigint,
    ck int,
    col frozen<set<int>>,
    pk int,
    val int,
    PRIMARY KEY ("cdc$stream_id", "cdc$time", "cdc$batch_seq_no")
)
"cdc$stream_id" is the partition key; within a partition, rows are sorted by time ("cdc$time") and batch sequence ("cdc$batch_seq_no").
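A reader typically filters the log by stream and time window, as in this minimal sketch (the stream id below is a hypothetical placeholder; real ids come from the cluster's CDC stream generations):

SELECT "cdc$time", "cdc$operation", pk, ck, val
FROM ks.tbl_scylla_cdc_log
WHERE "cdc$stream_id" = 0x0dc6417a9d9c46a3b4b26e1aaf2ffc13  -- hypothetical stream id
  AND "cdc$time" > maxTimeuuid('2022-08-18 00:00:00+0000');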
27. Distributed Database Design Decisions to Support High Performance Event Streaming
How Do NoSQL CDC Implementations Compare?
                        Cassandra        DynamoDB             MongoDB              ScyllaDB
Consumer location       on-node          off-node             off-node             off-node
Replication             duplicated       deduplicated         deduplicated         deduplicated
Deltas                  yes              no                   partial              optional
Pre-image               no               yes                  no                   optional
Post-image              no               yes                  yes                  optional
Slow consumer reaction  table stopped    consumer loses data  consumer loses data  consumer loses data
Ordering                no               yes                  yes                  yes
28. Writing to Base Table [No CDC]
CQL write goes to the coordinator node.
INSERT INTO base_table(...)...
Distributed Database Design Decisions to Support High Performance Event Streaming
29. Writing to Base Table [No CDC]
The coordinator node creates write calls to replica nodes.
INSERT INTO base_table(...)...
[Diagram: CQL write fanned out as replicated writes]
Distributed Database Design Decisions to Support High Performance Event Streaming
30. Writing to CDC Enabled Table
CQL write goes to the coordinator node.
INSERT INTO base_table(...)...
Distributed Database Design Decisions to Support High Performance Event Streaming
31. Writing to CDC Enabled Table (Pre-/Postimage)
If required, the coordinator reads existing row data for pre-/postimage generation.
INSERT INTO base_table(...)...
[Diagram: CQL write plus optional preimage read]
Distributed Database Design Decisions to Support High Performance Event Streaming
32. Writing to CDC Enabled Table
The coordinator creates CDC log table writes and piggybacks them on base table writes to the same replica nodes.
While the data size written is larger, the number of write requests does not change.
INSERT INTO base_table(...)...
[Diagram: CQL write accompanied by a CDC log write]
Distributed Database Design Decisions to Support High Performance Event Streaming
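From the client's perspective this piggybacking is invisible, as in the sketch below (hypothetical values): a single statement against the base table is all that is issued, and the coordinator transparently appends the matching CDC records.

-- One client-side write; the coordinator also writes the corresponding
-- delta (and optional pre-/postimage) records to ks.tbl_scylla_cdc_log.
UPDATE ks.tbl SET val = 200 WHERE pk = 1 AND ck = 10;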
33. Distributed Database Design Decisions to Support High Performance Event Streaming
CDC Streams
▪ CDC data is grouped into streams
  • Divides the token ring space
  • Each stream represents a tokenization “slot” in the current topology
  • The stream is the log partition key
  • The stream chosen for a given write is based on the base table PK tokenization
▪ Can read from all, one, or some streams at a time
  • Allows “round-robin” traversal of the data space to avoid too-large or cross-node queries
[Diagram: token ring divided into Stream 1, 2, 3, 4...]
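One way to see this partitioning from cqlsh (a sketch for exploration; production readers should use the CDC client libraries or connectors instead):

-- Each stream id is a partition key of the log table, so listing the
-- distinct stream ids shows the stream "slots" present in the data.
SELECT DISTINCT "cdc$stream_id" FROM ks.tbl_scylla_cdc_log;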
34. Distributed Database Design Decisions to Support High Performance Event Streaming
CDC Streams
[Diagram: CDC streams (Stream 1, 2, 3, 4...) on the token ring feed the CDC Java driver, which handles round-robin traversal; the Kafka source connector forwards the changes to a Kafka broker.]
35. Distributed Database Design Decisions to Support High Performance Event Streaming
Change Data Capture (CDC) lesson here:
https://university.scylladb.com/courses/scylla-operations/lessons/change-data-capture-cdc/
Learn NoSQL for free!
university.scylladb.com