High Availability with MariaDB Enterprise
Presented by Ralf Gebhardt at the MariaDB Roadshow Germany: 4.7.2014 in Hamburg, 8.7.2014 in Berlin and 11.7.2014 in Frankfurt.
Understanding Kafka Produce and Fetch api calls for high throughput applicat... | HostedbyConfluent
The data team at Cloudflare uses Kafka to process tens of petabytes a day. All this data is moved using the 2 foundational Kafka api calls: Produce (api key 0) and Fetch (api key 1). Understanding the structure of these calls (and of the underlying RecordSet structure) is key to building high throughput clients.
The talk describes the basics of the Kafka wire protocol (api keys, correlation id), and the structure of the Produce and Fetch calls. It shows how the asynchronous nature of the wire protocol can combine with the structure of the Produce and Fetch calls to increase latency and reduce client throughput; a solution is offered through use of synchronous single-partition calls.
The RecordSet structure, which is used to encode and store sets (batches) of records, is described, and its implications for Fetch requests are discussed. The relationship between Fetch api calls and "consume" operations is discussed, as is the impact of aligning consumer offsets to RecordSet boundaries.
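To make the wire-protocol basics concrete, here is a minimal sketch (not taken from the talk) of hand-packing a Kafka request header. It assumes the older, non-flexible header layout of int16 api key, int16 api version, int32 correlation id and a length-prefixed client id, with every request framed by a 4-byte big-endian size; the api version and client id shown are arbitrary placeholders.

```python
import struct

def encode_request_header(api_key: int, api_version: int, correlation_id: int, client_id: str) -> bytes:
    # Older (non-flexible) Kafka request header: INT16 api_key, INT16 api_version,
    # INT32 correlation_id, then a length-prefixed client_id string.
    cid = client_id.encode("utf-8")
    return struct.pack(">hhih", api_key, api_version, correlation_id, len(cid)) + cid

def frame(request_body: bytes) -> bytes:
    # Every request on the wire is preceded by a 4-byte big-endian size field.
    return struct.pack(">i", len(request_body)) + request_body

# Produce is api key 0 and Fetch is api key 1; api_version 9 and the client id are
# placeholder values for this sketch.
produce_header = encode_request_header(api_key=0, api_version=9, correlation_id=42, client_id="demo-client")
print(frame(produce_header).hex())
```

The correlation id is what lets a client match asynchronous responses back to their requests on a shared connection.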
Cloud computing is a set of principles and approaches to deliver computing infrastructure, services, platforms, and applications sourced from clouds to users on-demand across a network.
Feed Your SIEM Smart with Kafka Connect (Vitalii Rudenskyi, McKesson Corp) Ka... | HostedbyConfluent
SIEM platforms are essential to the new cybersecurity paradigm, and the data collection layer is a very important piece of them.
When you deliver a new platform, you can easily get lost in the variety of vendors and solutions, and the challenges pile up. What if I change vendors, will I keep my data? How do I feed multiple tools with the same data? How do I collect data from custom apps and services? How do I pay less for an expensive platform? How do I keep data without huge cost?
Join us if you are looking for the answers. In this session, you will learn how we replaced the vendor-provided data collection layer with Kafka Connect and the lessons we learnt. After the talk you will know:
- the architecture and real-life examples of a flexible and highly available data collection platform
- custom connectors that do most of the work for us, and how to extend them to consume new data (we have open-sourced them)
- an easy way to receive data from thousands of servers and many cloud services
- how to archive data at low cost
You will leave armed with a set of free tools and recipes to build a truly vendor-agnostic data collection platform. It will allow you to keep your SIEM costs under control. You will feed your analytics tools with what they need and archive the rest at low cost. You will feed your SIEM smart!
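To make the "archive the rest at low cost" idea concrete, here is a small, hypothetical sketch of registering a sink connector through the Kafka Connect REST API so raw events land in object storage. The worker URL, topic, bucket and connector class (Confluent's S3 sink) are illustrative assumptions, not the setup described in the talk.

```python
import json
import requests  # third-party HTTP client, assumed to be installed

CONNECT_URL = "http://connect.example.internal:8083"  # placeholder Connect worker address

# Hypothetical sink connector that archives a raw event topic to S3 in large batches.
archive_connector = {
    "name": "siem-archive-s3",
    "config": {
        "connector.class": "io.confluent.connect.s3.S3SinkConnector",  # assumed sink connector
        "topics": "security.events.raw",
        "s3.bucket.name": "siem-archive",
        "s3.region": "eu-central-1",
        "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
        "flush.size": "10000",   # large flushes keep the number of objects (and cost) down
        "tasks.max": "2",
    },
}

resp = requests.post(f"{CONNECT_URL}/connectors", json=archive_connector, timeout=10)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))
```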
Taming a massive fleet of Python-based Kafka apps at Robinhood | Chandra Kuch... | HostedbyConfluent
Robinhood uses Kafka in every line of its business, from stock and crypto trading to clearing and data analytics. One interesting aspect of our architecture is that many of our microservices leveraging Kafka are written in Python. When you combine Python's relatively slow performance, its reliance on process-based parallelism and Robinhood's scale, the result is a massive fleet of application processes producing to and consuming from our Kafka clusters. This fleet generates an atypical workload on Kafka that warrants a deeper investment in scalability and reliability.
This talk discusses our investments in Kafka infrastructure for a large-scale Python-based environment:
kafkahood: our librdkafka-based client library wrapper that codifies best practices, sane defaults and deep client-side observability.
kafkaproxy: a Rust-based sidecar proxy that reduces connection fan-in from Python gunicorn worker pools to our Kafka clusters.
We'll also present challenges we encountered along the way and share our learnings with the audience.
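As a rough idea of what a wrapper like kafkahood might codify, the sketch below wraps the librdkafka-based confluent_kafka Producer with assumed defaults plus hooks for statistics and delivery outcomes. It is not Robinhood's library; every default and hook shown is an illustrative choice.

```python
from confluent_kafka import Producer  # librdkafka-based client, assumed installed

class InstrumentedProducer:
    """Illustrative kafkahood-style wrapper: sane defaults plus client-side observability hooks."""

    DEFAULTS = {
        "enable.idempotence": True,       # safer delivery semantics by default
        "acks": "all",
        "compression.type": "lz4",
        "linger.ms": 5,                   # small batching window for throughput
        "statistics.interval.ms": 15000,  # periodic librdkafka stats for observability
    }

    def __init__(self, bootstrap_servers: str, **overrides):
        conf = {"bootstrap.servers": bootstrap_servers, **self.DEFAULTS, **overrides}
        conf["stats_cb"] = self._on_stats
        self._producer = Producer(conf)

    def _on_stats(self, stats_json: str) -> None:
        # Hook point: forward librdkafka's JSON statistics to your metrics pipeline.
        pass

    @staticmethod
    def _on_delivery(err, msg) -> None:
        if err is not None:
            # A real wrapper would increment an error counter and log topic/partition context.
            print(f"delivery failed for {msg.topic()}: {err}")

    def send(self, topic: str, key: bytes, value: bytes) -> None:
        self._producer.produce(topic, value=value, key=key, on_delivery=self._on_delivery)
        self._producer.poll(0)  # serve delivery callbacks without blocking

    def close(self) -> None:
        self._producer.flush()
```

A gunicorn worker pool would typically create one such producer per process; that per-process connection fan-in is what a sidecar like the kafkaproxy described above is meant to collapse.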
Salesforce enabling real time scenarios at scale using Kafka | Thomas Alex
Nishant Gupta from Salesforce talked about Ajna, a service for monitoring system health across global data centers in real time, and how Kafka is at the center of this system. The talk covers the scenario, key challenges, learnings and best practices.
Lessons from the field: Catalog of Kafka Deployments | Joseph Niemiec, Cloudera | HostedbyConfluent
Streaming architectures have been on the rise steadily, and as a result we have seen the adoption of Kafka go up too. With the diverse spread of use cases across multiple industries, we have seen a variety of Kafka deployments across our hundreds of Kafka customers. Along the way, we have learnt some best practices as well as what not to do in mission-critical architectures. Join Joe Niemiec, Sr. Product Manager at Cloudera, as he shares these insights in a session that covers topics such as:
- The many ways that Kafka has been deployed in the field: standalone clusters, multiple clusters in a single data center, and multiple geographically distributed clusters performing replication
- Clusters of all sizes, small and large, from a few messages to hundreds of thousands per second
- A discussion of architecture failure domains
- Configurations tuned and used in specific deployments
How Much Can You Connect? | Bhavesh Raheja, Disney+ Hotstar | HostedbyConfluent
How many connectors can you run in a single cluster? Disney+ Hotstar runs over 10 different Connect clusters with more than 2,000 connectors. In this talk, we share our experience of running Kafka Connect at scale. We will walk through our decisions on using one cluster vs. many, and how improvements in the Connect ecosystem, like incremental rebalancing, have allowed us to scale to thousands of connectors. We will also discuss the challenges of scaling Connect workers up and down while keeping the ecosystem stable, and present a wishlist of missing features in this distributed task framework.
Mainframe Integration, Offloading and Replacement with Apache Kafka | Kai Wae... | HostedbyConfluent
Legacy migration is a journey. Mainframes cannot be replaced in a single project. A big bang will fail. This has to be planned long-term.
Mainframe offloading and replacement with Apache Kafka and its ecosystem can be used to keep a more modern data store in real-time sync with the mainframe, while at the same time persisting the event data on the bus to enable microservices, and deliver the data to other systems such as data warehouses and search indexes.
This session walks through the different steps some companies have already gone through. Technical options like Change Data Capture (CDC), MQ, and third-party tools for mainframe integration, offloading and replacement are explored.
Caveats of hosting MariaDB on Microsoft Azure Cloud | MariaDB plc
Cloud service providers do not provide the same level of flexibility as an on-premises database. In this session, discover how Gaming Innovation Group (GiG) and MariaDB deployed a highly available setup on Microsoft Azure using the following technologies: Azure Load Balancer, Corosync, Pacemaker, MariaDB MaxScale, MariaDB Cluster and MariaDB Server.
WSO2Con ASIA 2016: Deployment Patterns and Capacity Planning | WSO2
Identifying the right deployment architecture is key to the smooth operation of a production system. Deployment architecture is important for providing non-functional requirements such as availability, scalability and security for an enterprise software solution. In addition, a deployment architecture is key to supporting future business and technical requirements. Deployment patterns are a way to quickly determine best practices from production deployments identified across different industry verticals.
After identifying the correct deployment architecture, it's crucial to determine the size of the deployment by understanding the number of servers/VMs/containers necessary to support the minimum, average and possible maximum load that the system is expected to handle. This session will focus on how you can take a fact-based approach to determining the size of your deployment.
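As a toy example of such a fact-based sizing exercise, the back-of-the-envelope calculation below turns a benchmarked per-node capacity and an assumed peak load into a node count with headroom; all the numbers are made up for illustration.

```python
import math

peak_requests_per_second = 12_000    # assumed maximum load the system must handle
per_node_capacity_rps = 1_500        # benchmarked capacity of one server/VM/container
headroom = 0.30                      # keep 30% spare capacity for spikes and slow nodes
redundancy_nodes = 1                 # extra node so maintenance or failure is survivable

usable_per_node = per_node_capacity_rps * (1 - headroom)
needed = math.ceil(peak_requests_per_second / usable_per_node) + redundancy_nodes
print(f"Provision {needed} nodes for the assumed peak load")  # 13 with these numbers
```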
A Look into the Mirror: Patterns and Best Practices for MirrorMaker2 | Cliff ... | HostedbyConfluent
From migrations between Apache Kafka clusters to multi-region deployments across datacenters, the introduction of MirrorMaker2 has expanded the possibilities for Apache Kafka deployments and use cases. In this session you will learn about patterns, best practices, and learnings compiled from running MirrorMaker2 in production at every scale.
Using Kafka as a Database For Real-Time Transaction Processing | Chad Preisle... | HostedbyConfluent
You have learned about Kafka event sourcing with streams and using Kafka as a database, but you may be having a tough time wrapping your head around what that means and what challenges you will face. Kafka's exactly-once semantics, data retention rules, and Streams DSL make it a great database for real-time transaction processing. This talk will focus on how to use Kafka events as a database. We will talk about using KTables vs GlobalKTables, and how to apply them to patterns we use with traditional databases. We will go over a real-world example of joining events against existing data and some issues to be aware of. We will finish by covering some important things to remember about state stores, partitions, and streams to help you avoid problems when your data sets become large.
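The idea underneath a KTable is simply "latest value per key", materialized by replaying a (typically compacted) topic. Kafka Streams itself is a Java/Scala API, so the sketch below only illustrates that idea in Python with confluent_kafka; the brokers, group id and topic name are placeholders.

```python
from confluent_kafka import Consumer  # librdkafka-based client, assumed installed

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # placeholder cluster address
    "group.id": "table-materializer-demo",
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,
})
consumer.subscribe(["account-balances"])      # hypothetical compacted changelog topic

table = {}  # key -> latest value: the "table" view of the event stream
try:
    while True:                               # a real app would also checkpoint or expose this state
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        if msg.value() is None:
            table.pop(msg.key(), None)        # a tombstone record deletes the row
        else:
            table[msg.key()] = msg.value()    # a newer event for the key overwrites the old row
finally:
    consumer.close()
```

A GlobalKTable differs mainly in that every application instance materializes all partitions rather than only the partitions assigned to it.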
SingleStore & Kafka: Better Together to Power Modern Real-Time Data Architect... | HostedbyConfluent
To remain competitive, organizations need to democratize access to fast analytics, not only to gain real-time insights into their business but also to power smart apps that need to react in the moment. In this session, you will learn how Kafka and SingleStore enable a modern yet simple data architecture to analyze both fast-paced incoming data and large historical datasets. In particular, you will understand why SingleStore is well suited to processing data streams coming from Kafka.
How Confluent Completes the Event Streaming Platform (Addison Huddy & Dan Ros...) | HostedbyConfluent
Apache Kafka fundamentally changes how organizations build and deploy a universal data pipeline that is scalable, reliable, and durable enough to meet the needs of digital-first organizations. However, as powerful as Kafka is today, it’s not an event-streaming platform - and getting it there on your own is a long, complicated, and expensive process. Earlier this year Confluent announced Project Metamorphosis - our plan to bring the best characteristics of cloud native systems to Apache Kafka. Since May we’ve begun transforming Confluent Cloud and Confluent Platform to do just that.
Join two of our Product Managers, Dan Rosanova and Addison Huddy, to:
- Learn how we've evolved Confluent Cloud with the first phase of Project Metamorphosis releases
- See how Confluent Platform 6.0 brings these transformational, cloud-like qualities to self-managed Kafka
- Get a sneak peek at our next Metamorphosis theme and how it impacts your Kafka and event-streaming strategy.
The evolution of cloud requires not only virtualised server resources but also on demand, utility consumption of physical hardware resources delivered in real time.
Exposing and Controlling Kafka Event Streaming with Kong Konnect Enterprise | ... | HostedbyConfluent
Event streaming allows companies to build more scalable and loosely coupled real-time applications supporting massive concurrency demands and simplifying the construction of services.
At the same time, API management provides the capabilities to securely control the consumption of upstream services, including the event processing infrastructure.
This session shows how Kong Konnect Enterprise can complement Kafka Event Streaming, exposing it to new and external consumers while applying specific and critical policies to control its consumption, including API key, OAuth/OIDC and others for authentication, rate limiting, caching, log processing, etc.
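One of the policies listed, rate limiting, boils down to a per-consumer token-bucket check; the sketch below shows that general idea only and is not Kong's implementation or configuration (the rate and burst values are arbitrary).

```python
import time

class TokenBucket:
    """Generic token-bucket rate limiter: refill continuously, spend one token per request."""

    def __init__(self, rate_per_sec: float, burst: float):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

limiter = TokenBucket(rate_per_sec=5, burst=10)    # e.g. 5 requests/second per API key
print([limiter.allow() for _ in range(12)])        # roughly the first 10 pass, the rest are throttled
```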
Big Data security: Facing the challenge by Carlos Gómez at Big Data Spain 2017 | Big Data Spain
This talk gives a technical and innovative overview of how companies can face the challenge of protecting the data and services that are in their data-centric platform, focusing on three main aspects: implementing network segmentation, managing AAA and securing data processing.
https://www.bigdataspain.org/2017/talk/big-data-security-facing-the-challenge
Big Data Spain 2017
16th - 17th November Kinépolis Madrid
See Webinar Recording at https://resource.alibabacloud.com/webinar/detail.htm?webinarId=11
Get up to speed with the basics of cloud networking, including network setup, VPC, routing, and switching between multiple subnets.
In this presentation, we will discuss various aspects of how networking varies within the cloud, including:
1. Basic networking and how it maps to the cloud
2. Various Cloud networking concepts that allow customers to design their own network within the cloud
3. Run through examples of basic network setup, VPC, Routing, and switching between multiple subnets
4. Network security with security groups and VPN
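As a small, provider-agnostic illustration of item 3, the snippet below carves an example VPC address range into /24 subnets with Python's ipaddress module; the CIDR blocks and role names are arbitrary examples, not Alibaba Cloud defaults.

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")          # example VPC address space

# Split the VPC into /24 subnets and assign the first few to illustrative roles.
subnets = list(vpc.subnets(new_prefix=24))
plan = {
    "public-web": subnets[0],    # e.g. fronted by a Server Load Balancer
    "private-app": subnets[1],
    "private-db": subnets[2],
}
for role, net in plan.items():
    print(f"{role:12s} {net}  ({net.num_addresses} addresses)")
```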
More Webinars: https://resource.alibabacloud.com/webinar/index.htm
Server Load Balancer: www.alibabacloud.com/product/server-load-balancer
VPC: www.alibabacloud.com/product/vpc
Express Connect: www.alibabacloud.com/product/express-connect
VPN Gateway: www.alibabacloud.com/product/vpn-gateway
The Road Most Traveled: A Kafka Story | Heikki Nousiainen, Aiven | HostedbyConfluent
When moving to a cloud native architecture, Moogsoft knew they needed more scale than Rabbit could provide. Moogsoft moved to Kafka, which is known for fast writes and driving heavy event-driven workloads, on top of niceties such as replayability. Choosing the tool was easy; finding a vendor that ticked all their boxes was not. They needed to ensure scalability, upgradability, builds via existing IaC pipelines, and observability via existing tools. When Moogsoft found Aiven, they were impressed with their offering and ability to scale on demand. During this presentation we will explore how Moogsoft used Aiven for Kafka to manage and scale their data in the cloud.
Scalability with MariaDB and MaxScale - MariaDB Roadshow Summer 2014 Hambur... | MariaDB Corporation
Scalability with MariaDB and MaxScale
Presented by Ralf Gebhardt at the MariaDB Roadshow Germany: 4.7.2014 in Hamburg, 8.7.2014 in Berlin and 11.7.2014 in Frankfurt.
Kaseya Connect 2013: Optimizing Your K Server - Best Practices in Kaseya Infr... | Kaseya
Do you think you have maximized your Kaseya Server for your current environment? Are you running into performance issues that are difficult to address? Are you planning for future growth? Then this session is what you are looking for! Join us in this technical session to hear from Kaseya experts how they have tuned Kaseya to scale and manage thousands of devices on a single virtual machine, including IIS, SQL and Kaseya-specific optimization techniques.
In the 2018 user conference keynote, MariaDB CEO Michael Howard announced an initiative to build a MariaDB DBaaS platform. In this session, the DBaaS team shares how MariaDB is approaching DBaaS, then discusses the role of containers and Kubernetes, the need for infrastructure-agnostic provisioning, support for day-two operations and enterprise requirements for large-scale DBaaS deployments.
Lookout on Scaling Security to 100 Million Devices | ScyllaDB
The massive increase of security-related data requires companies to respond with new approaches to ingestion. Learn how Lookout has changed its approach for ingesting telemetry to meet their goal of growing from 1.5 million devices to 100 million devices and beyond, using Kafka Connect and switching from AWS DynamoDB to Scylla.
Solutions for bi-directional integration between Oracle RDBMS & Apache Kafka | Guido Schmutz
Apache Kafka is a popular distributed streaming data platform and, more and more, the architectural backbone for integrating streaming data with a data lake, microservices and stream processing. A lot of the data needed in stream processing is stored in traditional systems backed by relational databases. This session will present different approaches for integrating relational databases with Kafka, such as Kafka Connect, Oracle GoldenGate, ORDS APIs and bridging Kafka with Oracle AQ.
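For the Kafka Connect approach in particular, integration usually reduces to a connector configuration submitted to a Connect worker. Below is a hypothetical sketch using Confluent's JDBC source connector against an Oracle table; every value (worker URL, JDBC URL, table, columns, credentials) is a placeholder, and the other options mentioned (GoldenGate, ORDS, AQ) are set up through their own tooling instead.

```python
import requests  # third-party HTTP client, assumed to be installed

# Hypothetical JDBC source connector pulling new and updated rows from an Oracle table.
connector = {
    "name": "oracle-orders-source",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",  # assumed source connector
        "connection.url": "jdbc:oracle:thin:@//db.example.internal:1521/ORCLPDB1",
        "connection.user": "kafka_connect",
        "connection.password": "change-me",
        "mode": "timestamp+incrementing",      # detect both inserts and updates
        "timestamp.column.name": "UPDATED_AT",
        "incrementing.column.name": "ORDER_ID",
        "table.whitelist": "ORDERS",
        "topic.prefix": "oracle.",
        "tasks.max": "1",
    },
}

resp = requests.post("http://connect.example.internal:8083/connectors", json=connector, timeout=10)
resp.raise_for_status()
print(resp.json())
```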
Introducing new features in Confluent Platform 5.4 and Apache Kafka 2.4...
CP 5.4 (based on AK 2.4)
Security: Role-Based Access Control (RBAC), Structured Audit Logs
Resilience: Multi-Region Clusters (MRC)
Data Compatibility: Server-side Schema Validation
Management & Monitoring: Control Center enhancements, RBAC management, Replicator monitoring
Performance & Elasticity: Tiered Storage (preview)
Stream Processing: New ksqlDB features like Pull Queries and Kafka Connect Integration (preview)
Webinar Slides: Multi-Region AWS Aurora vs Continuent Tungsten for MySQL & Ma... | Continuent
This webinar has three parts, and takes about 30 minutes:
- Overview of Amazon Aurora cross-region
- Common challenges when using Amazon Aurora
- How can multi-region MySQL deployments be improved?
- Q&A
AGENDA
- Aurora key benefits
- Aurora Cross Region Replica Requirements
- Limitations using Aurora
- Tungsten Multi-Master Clustering
- Tungsten Composite Clustering
- Continuent Tungsten Key Benefits as compared to Aurora
- Tungsten Dashboard
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Pushing the limits of ePRTC: 100ns holdover for 100 days | Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
A tale of scale & speed: How the US Navy is enabling software delivery from l... | sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Epistemic Interaction - tuning interfaces to provide information for AI support | Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Climate Impact of Software Testing at Nordic Testing Days | Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at a smaller scale and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Securing your Kubernetes cluster: a step-by-step guide to success! | KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
The Metaverse and AI: how can decision-makers harness the Metaverse for their... | Jen Stirrup
The Metaverse is popularized in science fiction, and now it is becoming closer to being a part of our daily lives through the use of social media and shopping companies. How can businesses survive in a world where Artificial Intelligence is becoming the present as well as the future of technology, and how does the Metaverse fit into business strategy when futurist ideas are developing into reality at accelerated rates? How do we do this when our data isn't up to scratch? How can we move towards success with our data so we are set up for the Metaverse when it arrives?
How can you help your company evolve, adapt, and succeed using Artificial Intelligence and the Metaverse to stay ahead of the competition? What are the potential issues, complications, and benefits that these technologies could bring to us and our organizations? In this session, Jen Stirrup will explain how to start thinking about these technologies as an organisation.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs | Alex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
Removing Uninteresting Bytes in Software Fuzzing | Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
PHP Frameworks: I want to break free (IPC Berlin 2024) | Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Elevating Tactical DDD Patterns Through Object Calisthenics | Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... | James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... | BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
3. MaxScale Objectives
● To be highly scalable
● Lightweight with small footprint
● Extendable
● Minimum possible latency
● Highly available
● Provide authentication
● Must be transparent to the application
4. MaxScale Core
● Event driven I/O Processor
● Polling, event driven mechanism that is responsible for dispatching events to the various modules that make up MaxScale
● Events in MaxScale = Network requests, such as:
● Handling an incoming connection on a listener socket
● Incoming data for a client connection
● Data arriving on a connection from a backend data server
● A socket error on one of the client or database connections
● A socket closure
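MaxScale's core itself is implemented in C; purely as an illustration of the polling-and-dispatch pattern these bullets describe, here is a minimal Python event loop that registers a listener socket and client connections and dispatches read events to handlers. The port and the echo behavior are placeholders, not MaxScale code.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def accept_client(listener: socket.socket) -> None:
    # "Handling an incoming connection on a listener socket"
    conn, _addr = listener.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle_client_data)

def handle_client_data(conn: socket.socket) -> None:
    data = conn.recv(4096)            # "Incoming data for a client connection"
    if not data:                      # "A socket closure"
        sel.unregister(conn)
        conn.close()
        return
    # A real proxy would route the request to a backend server and register that socket
    # too, so that data arriving on the backend connection is dispatched back as well.
    conn.sendall(data)                # echo back, just for the demo

listener = socket.socket()
listener.setblocking(False)
listener.bind(("127.0.0.1", 4006))    # arbitrary demo port
listener.listen()
sel.register(listener, selectors.EVENT_READ, accept_client)

while True:                           # the polling, event driven dispatch loop
    for key, _mask in sel.select():
        key.data(key.fileobj)         # dispatch the event to its registered handler
```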
17. Possible Future Functionality Ideas
● Database firewall
● Auditing and Logging
● HA for non MySQL and MariaDB databases
● Connection load balancing with MariaDB 10 and Spider
● Statement based load balancing