MySQL Cluster is a distributed database that provides extreme scalability, high availability, and real-time performance. It uses an auto-sharding and auto-replicating architecture to distribute data across multiple low-cost servers. Key benefits include scaling reads and writes, 99.999% availability through its shared-nothing design with no single point of failure, and real-time responsiveness. It supports both SQL and NoSQL interfaces to enable complex queries as well as high-performance key-value access.
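To make the dual-interface point concrete, the same NDB-backed table can be created and queried through standard SQL while other clients reach the data nodes directly through key-value APIs. A hedged sketch (the table and column names here are made up for illustration):

```sql
-- Store the table in the data nodes by choosing the NDB engine
CREATE TABLE user_session (
  user_id  BIGINT NOT NULL PRIMARY KEY,
  payload  VARBINARY(1024)
) ENGINE=NDBCLUSTER;

-- Complex SQL access through any mysqld attached to the cluster
SELECT COUNT(*) FROM user_session WHERE user_id > 1000;

-- The same rows are also reachable as key-value pairs via the NDB API,
-- ClusterJ or the memcached protocol, bypassing the SQL layer entirely.
```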
Virtual Flink Forward 2020: Netflix Data Mesh: Composable Data Processing - J...Flink Forward
Netflix processes trillions of events and petabytes of data a day in the Keystone data pipeline, which is built on top of Apache Flink. As Netflix has scaled up original productions annually enjoyed by more than 150 million global members, data integration across the streaming service and the studio has become a priority. Scalably integrating data across hundreds of different data stores in a way that enables us to holistically optimize cost, performance and operational concerns presented a significant challenge. Learn how we expanded the scope of the Keystone pipeline into the Netflix Data Mesh, our real-time, general-purpose, data transportation platform for moving data between Netflix systems. The Keystone Platform’s unique approach to declarative configuration and schema evolution, as well as our approach to unifying batch and streaming data and processing will be covered in depth.
A firewall remains the better choice for organizations looking to cut security costs, because a single firewall can protect every host on the network at once instead of requiring security software to be installed and maintained on each host individually.
Tutorial: Using GoBGP as an IXP connecting router - Shu Sugimoto
- Show how GoBGP can be used as a software router in conjunction with Quagga
- (Tutorial) Walk through the setup of an IXP connecting router using GoBGP
Developer’s guide to contributing code to Kafka with Mickael Maison and Tom B... - HostedbyConfluent
Contributing code to an open source project can sometimes feel difficult. The process differs from project to project and requires you to develop, build and test your change, which then needs to be accepted by the project. For Kafka, certain types of changes also require you to go through the Kafka Improvement Proposal (KIP) process.
In this talk, we will cover in detail the process of contributing code to Apache Kafka, from setting up a development environment to building the code, running tests and opening a PR. We will also look at the KIP process, describe what each section of the document is for, explain the importance of finding consensus, and cover what happens when a KIP is voted on. We will share, from a committer's point of view, what we look for when reviewing a KIP and give some tips to help you get through the process successfully.
At the end of this talk, you will be able to get started contributing code to Kafka and understand how to get from idea, to KIP, to released feature.
An introduction to cutting-edge end-user (software) development, RIA and semantic technologies that offer a next-generation, end-user-centred web application mashup platform through FIWARE WireCloud.
Building DataCenter networks with VXLAN BGP-EVPN - Cisco Canada
The session covers the requirements and approaches for deploying the underlay, the overlay, and the inter-fabric connectivity of data center networks or fabrics. Within the VXLAN BGP-EVPN based overlay, we focus on forwarding and control plane functions, which are critical to the simplicity and operation of the architecture in achieving scale, small failure domains and consistent configuration. To complete the overlay view of VXLAN BGP-EVPN, we go into the internals of BGP and its EVPN address family, and extend into how multiple DC fabrics can be interconnected, either as stretched fabrics or with true DCI. The session concludes with a brief overview of manageability functions, network orchestration capabilities and multi-tenancy details. This advanced session is intended for network, design and operation engineers from enterprises to service providers.
Webinar topic: VLAN vs VXLAN
Presenter: Achmad Mardiansyah
In this webinar series, we discuss VLAN vs VXLAN.
Please share your feedback or webinar ideas here: http://bit.ly/glcfeedback
Check our schedule for future events: https://www.glcnetworks.com/schedule/
Follow our social media for updates: Facebook, Instagram, YouTube Channel, and Telegram
The recording is available on YouTube:
https://youtu.be/HDo7XVLRd9E
Cilium: Kernel Native Security & DDoS Mitigation for Microservices with BPF - Docker, Inc.
We introduced Cilium at DockerCon US 2017. Cilium provides application-aware network connectivity, security, and load-balancing for containers. This talk follows up on that introduction and dives deep into recent kernel developments that address two fundamental questions: How can I provide application-aware security and routing efficiently, without overhead embedded into every service? How can container hosts protect themselves from internal and external DDoS attacks? The solutions include:
kproxy: a kernel-based socket proxy which allows for application-aware routing and security enforcement with minimal overhead.
XDP: A lightning-fast packet processing datapath using BPF. The technology is intended for DDoS mitigation, load-balancing, and forwarding.
This talk will deep dive into these exciting technologies and show how Cilium makes BPF and these kernel features available on Linux for your Docker containers.
Netflix’s Big Data Platform team manages a data warehouse in Amazon S3 with over 60 petabytes of data, writing hundreds of terabytes of data every day. With a data warehouse at this scale, it is a constant challenge to keep improving performance. This talk focuses on Iceberg, a new table metadata format designed for managing huge tables backed by S3 storage. Iceberg decreases job planning time from minutes to under a second, while also isolating reads from writes to guarantee that jobs always use consistent table snapshots.
In this session, you'll learn:
• Some background about big data at Netflix
• Why Iceberg is needed and the drawbacks of the current tables used by Spark and Hive
• How Iceberg maintains table metadata to make queries fast and reliable
• The benefits of Iceberg's design and how it is changing the way Netflix manages its data warehouse
• How you can get started using Iceberg
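The snapshot isolation described above can be illustrated with a small, self-contained sketch (plain Python, a toy model rather than Iceberg's actual API): readers pin an immutable snapshot of the table's file list, so a concurrent commit never changes what an in-flight job reads.

```python
# Toy model of snapshot-isolated table metadata
# (illustrative only; not Iceberg's real implementation).

class Table:
    def __init__(self):
        self.snapshots = [()]  # each snapshot is an immutable tuple of data files

    def current_snapshot(self):
        """Readers pin this tuple; later commits never mutate it."""
        return self.snapshots[-1]

    def commit(self, added_files):
        # A commit creates a brand-new snapshot instead of editing the old one.
        self.snapshots.append(self.snapshots[-1] + tuple(added_files))

table = Table()
table.commit(["part-0.parquet"])

reader_view = table.current_snapshot()   # a job starts reading here
table.commit(["part-1.parquet"])         # a writer commits meanwhile

assert reader_view == ("part-0.parquet",)  # the in-flight reader is unaffected
assert table.current_snapshot() == ("part-0.parquet", "part-1.parquet")
```

Because every snapshot is immutable, a reader never sees a half-finished write, which is the property that lets jobs plan against a consistent table state.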
Speaker
Ryan Blue, Software Engineer, Netflix
We look at how to forward the client's source IP to backend servers when HAProxy runs in TCP mode.
* There is a typo partway through the slides. I wanted to re-upload a corrected version, but according to SlideShare, the re-upload feature has been removed. Please be understanding of any typos or awkward passages.
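In TCP mode HAProxy cannot inject an X-Forwarded-For header, so the usual approach is the PROXY protocol: HAProxy prepends the client's source address to the connection, and the backend (which must support the protocol) reads it. A minimal sketch (the listener name and addresses are made-up placeholders):

```
# haproxy.cfg (fragment) - TCP mode with PROXY protocol
listen mysql-in
    bind *:3306
    mode tcp
    balance roundrobin
    # send-proxy prepends the PROXY protocol header carrying the
    # client source IP/port; the backend must be configured to accept it.
    server db1 10.0.0.11:3306 check send-proxy
    server db2 10.0.0.12:3306 check send-proxy
```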
Demystifying EVPN in the data center: Part 1 in 2 episode series - Cumulus Networks
Network operators are slowly but surely embracing L3-based leaf-spine designs. However, either due to legacy applications or certain multi-tenancy requirements, the need for L2 across racks is still present. How do you solve the problem of providing L2 across multiple racks? EVPN is quickly emerging as the best answer to this question.
In this episode of our 2-part series on EVPN, we start with a discussion of the use cases, a review of the technologies EVPN competes with, and dive into an evaluation of the pros and cons of each.
For a recording of the live event, go to http://go.cumulusnetworks.com/l/32472/2017-09-22/95t27t
Building the Cancer Research Data Commons with Neo4j: The Bento Framework - Neo4j
Mark Jensen, Director, Data Management and Interoperability, Frederick National Laboratory for Cancer Research
Todd Pihl, Director, Technical Project Manager, Frederick National Laboratory for Cancer Research
Ming Ying, Senior Software Engineer, Frederick National Laboratory for Cancer Research
The Proxy Wars - MySQL Router, ProxySQL, MariaDB MaxScale - Colin Charles
As proxies (and database routers) go, the first one I ever used was the now deprecated MySQL Proxy. Since then, I've used MariaDB MaxScale quite a bit (including its fork, AirBnB MaxScale), played around with ProxySQL recently, and also started taking a look at MySQL Router. In this quick 20-minute overview, we'll discuss why these three exist, compare their features, and consider when to use the right tool for the job.
An introduction to the basic concepts of Open vSwitch. In these slides, we discuss how the Linux kernel and networking stack work together to forward and process network packets, and compare that Linux networking stack functionality with Open vSwitch and OpenFlow.
At the end of the slides, we discuss the challenges of integrating Open vSwitch with Kubernetes, which networking functions we need to resolve, and what benefits we can get from Open vSwitch.
MySQL High Availability Solutions - Feb 2015 webinar - Andrew Morgan
How important is your data? Can you afford to lose it? What about just some of it? What would be the impact if you couldn’t access it for a minute, an hour, a day or a week?
Different applications can have very different requirements for High Availability. Some need 100% data reliability with 24x7x365 read & write access while many others are better served by a simpler approach with more modest HA ambitions.
MySQL has an array of High Availability solutions ranging from simple backups, through replication and shared storage clustering – all the way up to 99.999% available shared nothing, geographically replicated clusters. These solutions also have different ‘bonus’ features such as full InnoDB compatibility, in-memory real-time performance, linear scalability and SQL & NoSQL APIs.
The purpose of this presentation is to help you decide where your application sits in terms of HA requirements and discover which of the MySQL solutions best fit the bill. It will also cover what you need outside of the database to ensure High Availability – state of the art monitoring being a prime example.
Ramp-Tutorial for MySQL Cluster - Scaling with Continuous Availability - Pythian
Rene Cannao's Ramp-Tutorial for MySQL Cluster - Scaling with Continuous Availability. Rene, a Senior Operational DBA at PalominoDB.com, will guide attendees through hands-on experience with the installation, configuration, management and tuning of MySQL Cluster.
Agenda:
- MySQL Cluster Concepts and Architecture: we will review the principle of a fault-tolerant shared nothing architecture, and how this is implemented into NDB;
- MySQL Cluster processes : attendees will understand the various roles and interactions between Data Nodes, API Nodes and Management Nodes;
- Installation : we will install a minimal HA solution with MySQL Cluster on 3 virtual machines;
- Configuration of a basic system : upon describing the most important configuration parameters, Data/API/Management nodes will be configured and the Cluster launched;
- Loading data: the "world" schema will be imported into NDB using "in memory" and "disk based" storage; attendees will experience how data changes are visible across API Nodes;
- Understand the NDB Storage Engine : internal implementation details will be explained, like synchronous replication, transaction coordinator, heartbeat, communication, failure detection and handling, checkpoint, etc;
- Query and schema design : attendees will understand the execution plan of queries with NDB, how SQL and Data Nodes communicate, how indexes and partitions are implemented, condition pushdown, join pushdown, query cache;
- Management and Administration: attendees will test the High Availability of NDB when a node becomes unavailable, learn how to read the log files, how to stop/start any component of the Cluster to perform a rolling restart with no downtime, and how to handle a degraded setup;
- Backup and Recovery: attendees will be driven through the procedure of using NDB-native online backup and restore, and how this differs from mysqldump;
- Monitor and improve performance: attendees will learn how to boost performance by tweaking variables according to hardware configuration and application workload
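As a taste of the configuration step in the agenda above, a minimal MySQL Cluster config.ini read by the management node might look like the following (hostnames and data directories are placeholder values, and a real deployment would size the memory parameters to the workload):

```ini
# config.ini - minimal cluster configuration read by ndb_mgmd
[ndbd default]
NoOfReplicas=2          ; two copies of every fragment for HA
DataMemory=512M         ; in-memory data storage per data node

[ndb_mgmd]
HostName=mgm1.example.com
DataDir=/var/lib/mysql-cluster

[ndbd]
HostName=ndb1.example.com
DataDir=/var/lib/mysql-cluster

[ndbd]
HostName=ndb2.example.com
DataDir=/var/lib/mysql-cluster

[mysqld]
HostName=api1.example.com
```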
Get the best out of MySQL Cluster; this presentation covers:
- Tuning and optimization to exploit the auto-sharded, distributed design of MySQL Cluster
- Using Adaptive Query Localization to scale cross-shard JOINs
- Data access patterns, schema and query optimizations
- Recommended tuning parameters
Tune in to the on-demand webinar: http://www.mysql.com/news-and-events/on-demand-webinars/display-od-719.html
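Adaptive Query Localization, mentioned above, pushes joins down to the data nodes so they execute where the data lives instead of in the MySQL server. A hedged sketch of enabling and checking it (the `ndb_join_pushdown` variable exists in MySQL Cluster 7.2+; the tables and query here are made-up examples):

```sql
-- Enable join pushdown (on by default in recent MySQL Cluster releases)
SET ndb_join_pushdown = ON;

-- A cross-shard join; with AQL the join is evaluated on the data nodes
SELECT u.name, o.total
FROM users u
JOIN orders o ON o.user_id = u.id
WHERE u.country = 'DE';

-- EXPLAIN should indicate that the join was pushed down to the NDB engine
EXPLAIN SELECT u.name, o.total
FROM users u JOIN orders o ON o.user_id = u.id;
```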
Best practices for MySQL High Availability - Colin Charles
The MariaDB/MySQL world is full of tradeoffs, and choosing a high availability (HA) solution is no exception. This session aims to look at all the alternatives in an unbiased way. Preference is of course only given to open source solutions.
How do you choose between: asynchronous/semi-synchronous/synchronous replication, MHA (MySQL high availability tools), DRBD, Tungsten Replicator, or Galera Cluster? Do you integrate Pacemaker and Heartbeat like Percona Replication Manager? The cloud brings even more fun, especially if you are dealing with a hybrid cloud and must think about geographical redundancy.
What about newer solutions like using Consul for MySQL HA?
When you’ve decided on your solution, how do you provision and monitor these solutions?
This and more will be covered in a walkthrough of MySQL HA options and when to apply them.
Software Design Patterns in Laravel by Phill Sparks
Laravel makes use of quite a few well-established design patterns that promote reusable object-oriented code. Together, we will investigate the design patterns used in the core of Laravel 4 and discuss how they encourage reusable software.
We run a busy installation with high levels of activity and architectural changes. Over the years we have developed techniques and mastered tools to help us maintain high levels of reliability and availability.
Here are some of the things we use on a day-to-day basis, and you probably could too.
NewSQL - Deliverance from BASE and back to SQL and ACID - Tony Rogerson
There are a number of NewSQL products now on the market, such as VoltDB and Postgres-XL. These promise NoSQL performance and scalability, but with ACID and relational concepts implemented with ANSI SQL.
This session will cover why NoSQL came about, why it has had its day, and why NewSQL will become the backbone of the enterprise for OLTP and analytics.
Run Cloud Native MySQL NDB Cluster in Kubernetes - Bernd Ocklin
The more your database aligns with cloud native principles such as resilience, scaling, auto-healing and data consistency across all nodes, the better it also runs as DBaaS in Kubernetes. I walk through running databases in Kubernetes and demo both manual deployment and deployment with an NDB operator.
This talk was given at the MySQL Dev Room FOSDEM 2021.
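As an illustration of the manual deployment route, data nodes are a natural fit for a Kubernetes StatefulSet, since each node needs a stable identity and its own storage. A heavily simplified, hypothetical sketch (the image tag, names and sizes are placeholders, and a real setup also needs management and SQL nodes plus a proper config.ini):

```yaml
# Hypothetical sketch: NDB data nodes as a StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ndbd
spec:
  serviceName: ndbd        # headless service gives each pod a stable DNS name
  replicas: 2
  selector:
    matchLabels: {app: ndbd}
  template:
    metadata:
      labels: {app: ndbd}
    spec:
      containers:
      - name: ndbd
        image: mysql/mysql-cluster:8.0
        command: ["ndbd", "--ndb-connectstring=mgmd-0.mgmd"]
        volumeMounts:
        - {name: data, mountPath: /var/lib/ndb}
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources: {requests: {storage: 10Gi}}
```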
CodeFutures - Scaling Your Database in the Cloud - RightScale
RightScale Conference Santa Clara 2011: Scaling an application in the cloud often hits the most common bottleneck – the database tier. Not only is database performance the number one cause of poor application performance, but the issue is magnified in cloud environments, where I/O and bandwidth are generally slower and less predictable than in dedicated data centers. Database sharding is a highly effective method of removing the database scalability barrier, operating on top of proven RDBMS products such as MySQL and Postgres – as well as the new NoSQL database platforms. One critical aspect often given too little consideration is monitoring and continuous operation of your databases, across the full lifecycle, to ensure that they stay up.
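The sharding idea described above comes down to a routing function that maps a row's shard key to one of several database instances. A minimal sketch (plain Python; the names and DSNs are illustrative, and production systems typically use consistent hashing or a directory service so shards can be added without rehashing everything):

```python
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    """Map a shard key to a shard index, stable across processes.

    A stable hash (not Python's per-process randomized hash()) is used so
    every application server routes the same key to the same shard.
    """
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

# Route each user to one of 4 MySQL instances (hypothetical DSNs)
SHARDS = [f"mysql://db{i}.example.com/app" for i in range(4)]

def dsn_for_user(user_id: str) -> str:
    return SHARDS[shard_for(user_id, len(SHARDS))]

# The same key always lands on the same shard
assert dsn_for_user("user-42") == dsn_for_user("user-42")
```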
One of our presentations, given on the Cassandra database. Aruman implements big-data projects for its multiple clients; RDBMS-to-Cassandra conversion is a task undertaken by Aruman.
1. What are the difficulties in deploying and managing the life cycle of data-heavy applications?
2. A review of the Kubernetes landscape w.r.t. data-heavy applications
3. Robin's approach to orchestrating data-heavy applications
Pythian: My First 100 days with a Cassandra Cluster - DataStax Academy
Apache Cassandra is a massively scalable open source NoSQL database, and with the amount of data we create and copy annually doubling in size every two years (expected to reach 44 zettabytes, or 44 trillion gigabytes), we can assume that sooner or later a DBA will be handling a Cassandra database in their shop. This beginner/intermediate-level session will take you through my journey as an Oracle DBA and my first 100 days of administering a Cassandra Cluster, show several demos, and cover all the roadblocks and successes I had along this path.
3. Program Agenda
• Databases Are Exciting Again!!!
• Overview of MySQL Cluster
• MySQL Cluster - What’s New
• How is it used?
4. A converging world ...
Information · Banking · Social Networking · Messaging · Gaming · Multi-Media
5.
• 2.1BN users
• 8x data growth in 5 yrs
• 850M users
• 70+ new domains every 60 seconds
• 20M apps per day
• 40% data growth per year
• 1TR video playbacks
• $1TR by 2014
• 250M tweets per day
• $700BN in 2011
• 5.9BN mobile subs in 2011
• 1 billion iOS & Android apps downloaded per week
• 370K call minutes every 60 seconds
6. Driving new Database Requirements
• EXTREME WRITE SCALABILITY
• REAL TIME USER EXPERIENCE
• ROCK SOLID RELIABILITY
• RAPID SERVICE INNOVATION
7. No Trade-Offs: Cellular Network
HLR / HSS: Location Updates; AuC, Call Routing, Billing; Pre & Post Paid; VLR
• Massive volumes of write traffic
• <3ms database response
• Downtime & lost transactions = lost $
Billing, AuC, VLR – MySQL Cluster in Action: http://bit.ly/oRI5tF
8. No Trade-Offs
• EXTREME WRITE SCALABILITY
• REAL TIME USER EXPERIENCE
• ROCK SOLID RELIABILITY
• Transactional Integrity
• Complex Queries
• Standards & Skillsets
• ELIMINATE BARRIERS TO ENTRY
9. MySQL Cluster – Users & Applications
Extreme Scalability, Availability and Affordability
• Web
• High volume OLTP
• eCommerce
• User Profile Management
• Session Management & Caching
• Content Management
• On-Line Gaming
• Telecoms
• Subscriber Databases (HLR / HSS)
• Service Delivery Platforms
• VAS: VoIP, IPTV & VoD
• Mobile Content Delivery
• Mobile Payments
• LTE Access
http://www.mysql.com/customers/cluster/
11. Basic architectures
2-tier
Front-End
Application Logic
Data access
Data / Indexes
12. Basic architectures
3-tier
Front-End
SQL, JDBC, ADO, ...
Application Logic
Data access (e.g. SQL engine)
Data / Indexes
13. Basic architectures
4-tier
Front-End
Application Logic
Data access (e.g. SQL engine)
Data / Indexes
14. All services share the same data view
native NDB API, ClusterJ, REST/JSON, LDAP, memcached, SQL (JDBC, ADO, ...)
NDB API
MySQL Cluster Data Nodes
15. C++ example
NdbOperation *op = trx->getNdbOperation(myTable);
op->insertTuple();
op->equal("key", i);
op->setValue("value", &value);
trx->execute( NdbTransaction::Commit );
16. Java example
Character newCharacter =
session.newInstance(Character.class);
newCharacter.setName("Yoda");
newCharacter.setAttributes("Force");
session.persist(newCharacter);
17. SQL example
(requires MySQL Server)
mysql> INSERT INTO Characters (Name, Attributes)
VALUES ('Yoda', 'Force');
18. High performance and Scalability
Cluster is
• Distributed
• Event Driven
• Asynchronous
• Parallel
• Non-locking
19. Your friends / Your enemies
Your friends:
• Disks (life-saver)
• CPU cache
• RAM
• Many cores
Your enemies:
• Disks (slow fsync)
• Network latency
• Heap allocation
• NUMA
• Context switching
20. Use your friends
Disks (your job saver)
– Log your data to disk (asynchronously)
CPU cache
– Align to it
RAM
– Preallocate!
Many cores
– Distribute to cores (have a model that supports this)
21. Avoid your enemies
Disks
– Reduce fsyncs
– no swapping
Network latency
– Reduce network round trips
Slow heap allocation
– Pre-allocate all memory, avoid using it
NUMA
– Disable it
Context switching
– Lock to cores
– Get network interrupts out of your way
22. MySQL Cluster – A distributed hash table
17 Yoda
143 Albert
12 Bernd
42 Ernest
md5() % <no of nodes>
MySQL Cluster Data Nodes
Node 1: 17 Yoda, 12 Bernd
Node 2: 42 Ernest, 143 Albert
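The hash-distribution idea on this slide can be sketched in a few lines of Python. This is an illustration of the concept only, not the NDB kernel's actual partitioning code; names such as `partition_for` and `NODE_GROUPS` are mine:

```python
import hashlib

# Sketch: md5(primary key) modulo the number of nodes picks the
# node that owns a row. Illustrative only, not NDB internals.
NODE_GROUPS = 2

def partition_for(key):
    """Map a primary key to a node via md5 % <no of nodes>."""
    digest = hashlib.md5(str(key).encode()).digest()
    return int.from_bytes(digest, "big") % NODE_GROUPS

rows = {17: "Yoda", 143: "Albert", 12: "Bernd", 42: "Ernest"}
shards = {g: {} for g in range(NODE_GROUPS)}
for key, name in rows.items():
    shards[partition_for(key)][key] = name
# Every row lands on exactly one node, deterministically.
```

Because the mapping is a pure function of the key, any client can compute the owning node without a directory lookup.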
23. Best Practice : Primary Keys
• ALWAYS DEFINE A PRIMARY KEY ON THE TABLE!
• A hidden PRIMARY KEY is added if no PK is specified. BUT..
• .. NOT recommended
• The hidden primary key is for example not replicated
(between Clusters)!!
• There are problems in this area, so avoid the problems!
• So always, at least have
id BIGINT AUTO_INCREMENT PRIMARY KEY
• Even if you don't “need” it for your applications
25. Auto-Sharding (distribution)
– Application “knows” the data location
Application
find({id: 12})
{id: 12, name: Bernd}
MySQL Cluster Data Nodes
26. Auto-Sharding
• Transparent to the application and data access layer
• No need for application-layer sharding logic – built into the API & kernel
• Partitioning based on hashing all or part of the primary key
• Each node stores primary fragment for 1 partition and back-up fragment for another
• Transparency maintained during failover, upgrades and scale-out
• No need to limit application to single-shard transactions
29. Adding High Availability – Synchronous Replication
Every fragment is held on two data nodes: 17 Yoda, 12 Bernd, 42 Ernest and 143 Albert each appear on one node as the primary copy and on another node as the synchronous backup.
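The primary/backup fragment layout described above (each node stores the primary fragment for one partition and the backup fragment for another) can be sketched as follows. This is a simplified illustration under my own naming, not NDB's placement algorithm:

```python
# Sketch: within a two-node node group, each node stores the primary
# copy of one fragment and a synchronous backup of its peer's fragment.
def place_fragments(node_group, fragments):
    """Return {node: [(fragment, role), ...]} for a node group."""
    placement = {node: [] for node in node_group}
    for i, frag in enumerate(fragments):
        primary = node_group[i % len(node_group)]
        backup = node_group[(i + 1) % len(node_group)]
        placement[primary].append((frag, "primary"))
        placement[backup].append((frag, "backup"))
    return placement

layout = place_fragments(["node1", "node2"], ["F1", "F2"])
# Each fragment exists on both nodes: once as primary, once as backup,
# so losing either node leaves a complete copy of the data.
```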
30. Handling Scheduled Maintenance
On-Line Operations
• Scale the cluster (add & remove nodes on-line)
• Repartition tables
• Upgrade / patch servers & OS
• Upgrade / patch MySQL Cluster
• Back-Up
• Evolve the schema on-line, in real-time
31. Adding disk durability
Memory – In-memory tables: data kept in memory but complemented by logging to disk.
Disk – Disk-based tables: data kept on disk but cached in memory.
Logging to disk is decoupled from transaction writing.
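The decoupling of commit from disk logging can be illustrated with a toy store. This is a sketch of the model described on the slide, not NDB code; all class and method names are hypothetical:

```python
# Toy sketch: committed writes are visible in memory immediately,
# while a redo-log buffer is flushed to "disk" asynchronously in
# batches, keeping fsync cost off the transaction path.
class InMemoryStore:
    def __init__(self):
        self.data = {}         # authoritative in-memory copy
        self.redo_buffer = []  # pending log records
        self.disk_log = []     # records actually flushed

    def commit(self, key, value):
        self.data[key] = value                 # visible at once
        self.redo_buffer.append((key, value))  # logged later

    def flush(self):
        """Background task: push buffered records out in one batch."""
        self.disk_log.extend(self.redo_buffer)
        self.redo_buffer.clear()

store = InMemoryStore()
store.commit(17, "Yoda")
store.commit(12, "Bernd")
# Reads see the data before any disk write has happened.
assert store.data[17] == "Yoda" and store.disk_log == []
store.flush()
```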
32. Shared Nothing
No shared components. Cheap commodity hardware. A proper SAN is acceptable but expensive.
33. Adding High Availability – Extreme resilience
Application
Service continuing
MySQL Cluster Data Nodes
37. Doing things in parallel
• Primary key reads can be directed to the correct shard at the API/application level
– No waste of resources by doing the same operation on all shards
• Each data node can handle up to 16 operations in parallel
• One data node can fully utilize up to 51 physical CPU cores
47. Benchmark: READS & UPDATES
• READS: up to 1,056 million per minute (2-, 4- and 8-node configurations; peak at 8 nodes)
• UPDATES: up to 109 million per minute (4- and 8-node configurations; peak at 8 nodes)
Test setup:
• 8 x commodity Intel servers
• 2 x 6-core 2.93GHz x5670 processors per server (24 threads total)
• 48GB RAM, Linux
• Infiniband networking
• flexAsynch benchmark
• C++ NoSQL API (NDB API)
48. Adaptive Query Localization
Scaling Distributed Joins: 70x more performance
• Perform complex queries across shards
• JOINs pushed down to the data nodes
• Executed in parallel
• Returns a single result set to the MySQL Server (mysqld)
• Opens up new use-cases
• Real-time analytics
• Recommendation engines
• Analyze click-streams
DON’T COMPROMISE FUNCTIONALITY TO SCALE-OUT!!
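A back-of-envelope way to see why pushing joins down to the data nodes helps: without push-down, mysqld does one lookup round trip per matching outer row; with push-down, the whole join ships to the data nodes in one request. The numbers and function names below are illustrative, not measured NDB behaviour:

```python
# Sketch: network round trips for a two-table join, client-side
# vs pushed down to the data nodes (hypothetical query shape).
def roundtrips_client_side_join(outer_rows):
    # 1 scan of the outer table + 1 lookup per matching row.
    return 1 + outer_rows

def roundtrips_pushed_join():
    # The join executes on the data nodes; mysqld receives a
    # single result set back.
    return 1

outer = 1000
saving = roundtrips_client_side_join(outer) / roundtrips_pushed_join()
# For 1000 outer rows this hypothetical shape saves ~1000x round trips,
# which is where large join speed-ups can come from.
```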
49. MySQL Cluster 7.2 AQL Test Query
Web-Based Content Management System
MySQL Server
Data Node 1, Data Node 2
Copyright 2011 Oracle Corporation
50. Web-Based CMS
70x more performance: 87.23 seconds down to 1.26 seconds
Must ANALYZE tables for best results:
mysql> ANALYZE TABLE <tab-name>;
51. Memcached Key-Value API
New NoSQL Access
• Persistent, scalable, HA back-end to memcached
• No application changes: re-uses standard memcached clients & libraries
• Consolidate caching & database tiers
• Eliminates cache invalidation
• Simpler re-use of data across services
• Improved service levels
• Flexible deployment
• Schema or schema-less storage
52. Schema-Free apps
• Rapid application evolution
• New types of data constantly added
• No time to get the schema extended
• Missing skills to extend the schema
• Initially roll out to just a few users
• Constantly adding to a live system
53. Cluster & Memcached – Schema-Free
Application view: key <town:maidenhead>, value <SL6>
SQL view: the same key/value pair stored in a generic table
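The prefix idea behind schema vs schema-less access can be sketched as a small routing function: a configured key prefix maps a memcached key to a specific table and columns, and anything else falls through to a generic key/value table. The config and names here are hypothetical, not the actual memcached-NDB configuration format:

```python
# Sketch: route a memcached key either to a mapped table/columns
# (schema access) or to a generic key/value table (schema-less).
PREFIX_MAP = {"town:": ("towns", "name", "postcode")}  # hypothetical config

def route(key):
    for prefix, (table, key_col, val_col) in PREFIX_MAP.items():
        if key.startswith(prefix):
            return table, key_col, val_col, key[len(prefix):]
    return "generic_kv", "key", "value", key  # schema-less fallback

# "town:maidenhead" hits the mapped table; unknown prefixes
# land in the generic store.
assert route("town:maidenhead") == ("towns", "name", "postcode", "maidenhead")
assert route("session:abc123")[0] == "generic_kv"
```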
56. MySQL 5.5 Server Integration
• Configure storage engine per-table: choose the right tool for the job
• InnoDB: foreign keys, XA transactions, large rows
• MySQL Cluster: HA, high write rates, real-time
• Reduces complexity, simplifies DevOps
• Take advantage of MySQL 5.5
• 3x higher performance
• Improved partitioning, diagnostics, availability, etc.
58. Multi-Site Clustering
• Split data nodes across data centers
• Synchronous replication and auto-failover between sites
• Improved heartbeating to handle network partitions
• Extends HA options
• Active/Active with no need for conflict handling
Node Group 1: Data Node 1, Data Node 2 (synchronous replication between sites)
Node Group 2: Data Node 3, Data Node 4 (synchronous replication between sites)
59. Active/Active Geographic Replication
• Replicating complete clusters across data centers
• DR & data locality
• No passive resources
• Simplified Active/Active replication
• Eliminates requirement for application & schema changes
• Transaction-level rollback
61. Simplified Provisioning & Maintenance
User Privilege Consolidation
The existence, content and timing of future releases described here is included for information only and may be changed at Oracle's discretion.
October 3rd, 2011
62. MySQL Cluster Manager
Reducing TCO and creating a more agile, highly available database environment
• Automated Management
• Monitoring & Recovery
• High Availability Operation
63. How Does MySQL Cluster Manager Help?
Example: Initiating upgrade from MySQL Cluster 7.0 to 7.2
Before MySQL Cluster Manager:
• 1 x preliminary check of cluster state
• 8 x ssh commands per server
• 8 x per-process stop commands
• 4 x scp of configuration files (2 x mgmd & 2 x mysqld)
• 8 x per-process start commands
• 8 x checks for started and re-joined processes
• 8 x process completion verifications
• 1 x verify completion of the whole cluster
• Excludes manual editing of each configuration file
Total: 46 commands - 2.5 hours of attended operation
With MySQL Cluster Manager:
upgrade cluster --package=7.1 mycluster;
Total: 1 command - unattended operation
Results:
• Reduces the overhead and complexity of managing database clusters
• Reduces the risk of downtime resulting from administrator error
• Automates best practices in database cluster management
64. Bootstrap single host Cluster
1. Download MCM from edelivery.oracle.com (package including Cluster)
2. Unzip
3. Run agent, define, create & start Cluster!
$> bin\mcmd --bootstrap
MySQL Cluster Manager 1.1.2 started
Connect to MySQL Cluster Manager by running "D:\Andrew\Documents\MySQL\mcm\bin\mcm" -a NOVA:1862
Configuring default cluster 'mycluster'...
Starting default cluster 'mycluster'...
Cluster 'mycluster' started successfully
ndb_mgmd NOVA:1186
ndbd NOVA
ndbd NOVA
mysqld NOVA:3306
mysqld NOVA:3307
ndbapi *
Connect to the database by running "D:\Andrew\Documents\MySQL\mcm\cluster\bin\mysql" -h NOVA -P 3306 -u root
• Connect to Cluster & start using database
To bootstrap with Cluster 7.2, replace the contents of the mcm/cluster directory
http://www.clusterdb.com/mysql-cluster/mysql-cluster-manager-1-1-2-creating-a-cluster-is-now-trivial
66. Evaluate MySQL Cluster CGE
30-Day Trial
• Navigate to http://edelivery.oracle.com/ and step through (selecting “MySQL Database” as the Product Pack)
• Select MySQL Cluster Manager
68. When to Consider MySQL Cluster
What are the consequences of downtime or failing to meet performance requirements?
How much effort and $ is spent in developing and managing HA in your applications?
Are you considering sharding your database to scale write performance? How does that impact your application and developers?
Do your services need to be real-time?
Will your services have unpredictable scalability demands, especially for writes?
Do you want the flexibility to manage your data with more than just SQL?
69. Where would I not Use MySQL Cluster?
• “Hot” data sets >3TB
• Replicate cold data to InnoDB
• Long running transactions
• Large rows, without using BLOBs
• Foreign Keys
• Can use triggers to emulate:
• http://dev.mysql.com/tech-resources/articles/mysql-enforcing-foreign-keys.html
• Full table scans
• Savepoints
• Geo-Spatial indexes
• InnoDB storage engine would be the right choice
MySQL Cluster Evaluation Guide
http://mysql.com/why-mysql/white-papers/mysql_cluster_eval_guide.php
70. MySQL Cluster in Action: Web Reference Architectures
Reference architectures for Session Management, eCommerce Data Refinery and Content Management: Memcache / application servers and MySQL Servers in front of MySQL Cluster data node groups (fragments F1-F4 spread across node groups 1 and 2), plus an Analytics architecture with a MySQL master replicating to slaves over distributed storage.
• 4 x Data Nodes: 6k page hits per second
• Each page hit generating 8 – 12 database operations
Whitepaper: http://www.mysql.com/why-mysql/white-papers/mysql_wp_high-availability_webrefarchs.php
75. COMPANY OVERVIEW
• Leading provider of communications platforms, solutions & services
• €15.2bn revenues (2009), 77k employees across 130 countries
CHALLENGES / OPPORTUNITIES
• Converged services driving migration to next generation HLR / HSS systems
• New IMS platforms for Unified Communications
• Reduce cost per subscriber and accelerate time to value
SOLUTIONS
• MySQL Cluster Carrier Grade Edition
• MySQL Support & Consulting Services
RESULTS
• Scale out on standard ATCA hardware to support 60m+ subscribers on a single platform
• Low latency, high throughput with 99.999%+ availability
• Enabled customers to reduce cost per subscriber and improve margins
• Delivered data management solution at 10x less cost than alternatives
CUSTOMER PERSPECTIVE
“MySQL Cluster won the performance test hands-down, and it fitted our needs perfectly. We evaluated shared-disk clustered databases, but the cost would have been at least 10x more.”
-- François Leygues, Systems Manager
http://www.mysql.com/why-mysql/case-studies/mysql-alcatel-casestudy.php
76. Shopatron: eCommerce Platform
• Applications
– Ecommerce back-end, user authentication, order data & fulfilment, payment data & inventory tracking. Supports several thousand queries per second
• Key business benefits
– Scale quickly and at low cost to meet demand
– Self-healing architecture, reducing TCO
• Why MySQL?
– Low cost scalability
– High read and write throughput
– Extreme availability
“Since deploying MySQL Cluster as our eCommerce database, we have had continuous uptime with linear scalability enabling us to exceed our most stringent SLAs”
— Sean Collier, CIO & COO, Shopatron Inc
http://www.mysql.com/why-mysql/case-studies/mysql_cs_shopatron.php
77. COMPANY OVERVIEW
• Pyro provide comms technology solutions in Core Network, OSS/BSS & VAS
• Deployed in 120+ networks worldwide
• Cell C, one of the largest mobile operators in South Africa
• 560 roaming partners in 186 countries
CHALLENGES / OPPORTUNITIES
• FIFA 2010 world cup opens up network services to millions of mobile subscribers
• International roaming SDP to support up to 7m roaming subscribers per day
• Offer local pricing with home network functionality
• Minimize cost and time to market
SOLUTIONS
• MySQL Cluster 7.1 & Services
RESULTS
• Supported subscriber and traffic volumes
• Delivered continuous availability
• Implemented in 25% of the time of typical SDP solutions
• Choice in deployment platforms to eliminate vendor lock-in (migrated from Microsoft)
CUSTOMER PERSPECTIVE
”MySQL Cluster 7.1 gave us the perfect combination of extreme levels of transaction throughput, low latency & carrier-grade availability. We also reduced TCO by being able to scale out on commodity server blades and eliminate costly shared storage”
-- Phani Naik, Head of Technology at Pyro Group
78. COMPANY OVERVIEW
• Leading telecoms provider across Europe and Asia. Largest Nordic provider
• 184m subscribers (Q2, 2010)
CHALLENGES / OPPORTUNITIES
• Extend OSS & BSS platforms for new mobile services and evolution to LTE
• OSS: IP Management & AAA
• BSS: Subscriber Data Management & Customer Support
SOLUTIONS
• MySQL Cluster
• MySQL Support Services
RESULTS
• Launch new services with no downtime, due to on-line operations of MySQL Cluster
• Consolidated database supports Subscriber Data Management initiatives
• MySQL Cluster selected due to 99.999% availability, real time performance and linear scalability on commodity hardware
CUSTOMER PERSPECTIVE
“Telenor has been using MySQL for fixed IP management since 2003 and are extremely satisfied with its speed, availability and flexibility. Now we also support mobile and LTE IP management with our solution. Telenor has found MySQL Cluster to be the best performing database in the world for our applications.”
- Peter Eriksson, Manager, Network Provisioning
79. COMPANY OVERVIEW
• UK-based retail and wholesale ISP & hosting services
• 2010 awards for best home broadband and customer service
• Acquired by BT in 2007
CHALLENGES / OPPORTUNITIES
• Enter market for wholesale services, demanding more stringent SLAs
• Re-architect AAA systems for data integrity & continuous availability to support billing systems
• Consolidate data for ease of reporting and operating efficiency
• Fast time to market
SOLUTIONS
• MySQL Cluster
• MySQL Server with InnoDB
RESULTS
• Continuous system availability, exceeding wholesale SLAs
• 2x faster time to market for new services
• Agility and scale by separating database from applications
• Improved management & infrastructure efficiency through database consolidation
CUSTOMER PERSPECTIVE
“Since deploying our latest AAA platform, the MySQL environment has delivered continuous uptime, enabling us to exceed our most stringent SLAs”
-- Geoff Mitchell, Network Engineer
80. COMPANY OVERVIEW
• Division of Docudesk
• Deliver Document Management SaaS
CHALLENGES / OPPORTUNITIES
• Provide a single repository for customers to manage, archive, and distribute documents
• Implement scalable, fault tolerant, real time data management back-end
• PHP session state cached for in-service personalization
• Store document meta-data, text (as BLOBs), ACL, job queues and billing data
• Data volumes growing at 2% per day
• Support workload with 50:50 read/write ratio
SOLUTION
• MySQL Cluster deployed on EC2
RESULTS
• Successfully deployed document management solution, eliminating paper trails from legal processes
• Integrate caching and database into one layer, reducing complexity & cost
• Low latency for real-time user experience and document time-stamping
• Continuous database availability
USER PERSPECTIVE
“MySQL Cluster exceeds our requirements for low latency, high throughput performance with continuous availability, in a single solution that minimizes complexity and overall cost.”
-- Casey Brown, Manager of Dev & DBA Services, Docudesk
81. Getting Started
Learn More: Scaling Web Databases Guide
www.mysql.com/cluster/
Evaluate MySQL Cluster 7.2, Download Today:
http://www.mysql.com/downloads/cluster/
Bootstrap a Cluster! Download, No Obligation:
https://edelivery.oracle.com/
82. Summary
Scale Web Services with
Carrier-Grade Availability
Don’t Trade Functionality for Scale
Try it out Today!
85. Multi-threaded Data Node Extensions
• Scaling out on commodity hardware is the standard way to increase performance
• Add more data nodes and API nodes as required
• MySQL Cluster 7.2 increases the ability to also scale-up each data node
• Increases maximum number of utilised threads from 8 to 59
• Can deliver aX single thread performance with bX cores
86. Multi-threaded Data Node Extensions
• Threads (post GA!):
• recv: <= 8 receive threads
• tc: <= 24 transaction coordinator threads
• ldm: <= 16 local query handler threads
• send: <= 8 send threads
• main: 1 main thread
• rep: 1 replication thread
• io: 1 I/O thread
• Engineering guidelines provided to find the best configuration: ZXZX
87. Multi-threaded Data Node Extensions
ThreadConfig := <entry> [ ,<entry> ]+
entry := <type>={ [<param> ]+ }
param := count = N | cpubind = L | cpuset = L
type := ldm | main | recv | rep | maint | send | tc | io
Note that extra send, recv & tc threads will be part of a post-GA maintenance release.
Example:
ThreadConfig=ldm={count=2,cpubind=1,2},
ldm={count=2,cpuset=6-9},
main={cpubind=12},rep={cpubind=11}
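To make the grammar above concrete, here is a minimal parser for ThreadConfig strings. It is a sketch for illustration only (the real parser lives in the NDB kernel); note the small wrinkle that a `cpubind` CPU list such as `1,2` contains the same comma that separates parameters:

```python
# Sketch: parse a ThreadConfig string into (type, params) pairs.
def parse_threadconfig(s):
    result = []
    for entry in s.split("},"):          # entries end with "},"
        head, _, body = entry.partition("={")
        params, last = {}, None
        for tok in body.rstrip("}").split(","):
            if "=" in tok:
                last, val = tok.split("=", 1)
                params[last] = val
            elif last:                      # continuation of a CPU
                params[last] += "," + tok   # list, e.g. cpubind=1,2
        result.append((head.strip(), params))
    return result

cfg = parse_threadconfig(
    "ldm={count=2,cpubind=1,2},ldm={count=2,cpuset=6-9},"
    "main={cpubind=12},rep={cpubind=11}")
# cfg[0] is ("ldm", {"count": "2", "cpubind": "1,2"})
```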
88. NoSQL with Memcached
• Flexible deployment options
• Multiple Clusters
• Simultaneous SQL access
• Can still cache in the memcached server
• Flat key-value store or map to multiple tables/columns
set maidenhead 0 0 3
SL6
STORED
get maidenhead
VALUE maidenhead 0 3
SL6
END
89. Multi-Site Clustering – changes to STONITH algorithm
• When a heartbeat is not received, all data nodes will be asked to ping all other data nodes
• Each node establishes its list of ‘suspect’ data nodes from whom they don’t receive a ping response within ConnectCheckIntervalDelay msecs
• If a second period of ConnectCheckIntervalDelay passes without a ping response, then each data node will send a Fail report to all data nodes naming its suspected node(s)
• On receipt of a Fail message from a suspect node, the receiving node will consider the originating node as failed rather than the requested target
• Leaves each side of the temporarily partitioned network with a viable set of data nodes; arbitration is used to select the surviving side if there is no longer a clear majority
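The key rule above (a Fail report arriving from a node you already suspect indicts the reporter, not its named target) is what keeps each side of a partition internally consistent. A simplified sketch of that decision, with my own function names and not the actual NDB heartbeat protocol:

```python
# Sketch: given the set of nodes I suspect and the Fail reports
# I receive as (reporter, target) pairs, decide which nodes I
# consider failed.
def apply_fail_reports(suspects_of_me, reports):
    failed = set()
    for reporter, target in reports:
        if reporter in suspects_of_me:
            failed.add(reporter)  # don't trust a suspect's accusation
        else:
            failed.add(target)    # trust a reachable peer's report
    return failed

# Partition {1,2} vs {3,4}: node 1 suspects {3,4}. Reports from 3
# and 4 indict themselves; reports from 2 confirm 3 and 4.
side_a_view = apply_fail_reports({3, 4}, [(3, 1), (4, 2), (2, 3), (2, 4)])
# Node 1 ends up considering exactly {3, 4} failed, so each side
# keeps a viable, agreeing node set and arbitration picks a winner.
```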
90. Multi-Site Clustering – WAN engineering recommendations based on user experience
• (Obviously) the longer the latency between sites, the higher the impact on performance
• Target latency should be <= 10 ms; 20 ms acceptable
• Test with 1000 byte packets, under load
• Bandwidth requirements depend on traffic, but aim for 1 Gbps+ (100 Mbps for a low traffic Cluster)
• Simplest WAN topology possible (fewer points of failure / lower failover latency)
• Typical WAN failover times should be short enough not to trigger STONITH in the Cluster
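The numeric guidelines above can be folded into a quick pre-flight check. The thresholds come from the slide; the function and field names are my own and purely illustrative:

```python
# Sketch: validate a WAN link against the slide's guidelines
# (<=10 ms target latency, 20 ms acceptable; 1 Gbps+ bandwidth,
# or 100 Mbps for a low-traffic Cluster).
def check_wan_link(latency_ms, bandwidth_mbps, low_traffic=False):
    issues = []
    if latency_ms > 20:
        issues.append("latency above 20 ms acceptable limit")
    elif latency_ms > 10:
        issues.append("latency above 10 ms target (still acceptable)")
    minimum = 100 if low_traffic else 1000
    if bandwidth_mbps < minimum:
        issues.append(f"bandwidth below {minimum} Mbps recommendation")
    return issues

assert check_wan_link(8, 1000) == []          # within target
assert len(check_wan_link(15, 1000)) == 1     # acceptable, flagged
```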
96. Geographic Replication – what’s changed in conflict resolution
• Reflecting the GCI (Global Checkpoint Index) removes the requirement for applications to maintain a timestamp field in each potentially conflicting table
• One of the two masters acts as the ‘primary’ and monitors all received replication events from the ‘secondary’ (including its own ‘reflected GCI’) to establish when changes were not applied in the same order on the primary and secondary Clusters
• The primary will then overwrite all conflicting transactions (or optionally just the conflicting rows) on the secondary, as well as subsequent transactions influenced by the conflict
• To use, set the function in mysql.ndb_replication to NDB$EPOCH() or NDB$EPOCH_TRANS()
• Overview & worked example: http://bit.ly/activeactive
• Gory details: http://bit.ly/refcgci
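The core comparison behind the reflected-GCI scheme can be stated in one line: if an incoming change from the secondary reflects an epoch older than the primary's latest change to that row, the two sites diverged and the primary wins. This is a heavily simplified toy model, not the NDB$EPOCH implementation; names are illustrative:

```python
# Toy model of epoch-based conflict detection: the primary keeps
# the epoch of its own latest change per row, and compares it to
# the epoch the secondary's change was based on.
def is_conflict(primary_epoch_of_row, epoch_reflected_by_secondary):
    """Secondary acted on a version older than the primary's latest."""
    return epoch_reflected_by_secondary < primary_epoch_of_row

# Primary wrote the row at epoch 42; a secondary update still
# reflecting epoch 41 conflicts and will be overwritten.
assert is_conflict(42, 41)
assert not is_conflict(42, 42)
```

Because the epoch is maintained by the replication machinery itself, applications no longer need their own timestamp column in each conflict-prone table.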
97. How to Push Privilege Data into Data Nodes
mysql> SOURCE /usr/local/mysql/share/mysql/ndb_dist_priv.sql;
mysql> CALL mysql.mysql_cluster_move_privileges();
mysql> SHOW CREATE TABLE mysql.user\G
*************************** 1. row ***************************
Table: user
Create Table: CREATE TABLE `user` (
`Host` char(60) COLLATE utf8_bin NOT NULL DEFAULT '',
....
....
) ENGINE=ndbcluster DEFAULT CHARSET=utf8 COLLATE=utf8_bin
COMMENT='Users and global privileges'
• Fully worked example:
http://www.clusterdb.com/mysql-cluster/sharing-user-credential
(http://bit.ly/userpriv)
102. On-Line Scaling & Maintenance
1. New node group added
2. Data is re-partitioned
3. Redundant data is deleted
4. Distribution is switched to share load with new node group
• Can also update schema on-line
• Upgrade hardware & software with no downtime
• Perform back-ups on-line
103. Only MySQL Can…..
blend the agility & innovation of the web….
….with the trust & capability of the network.
104. No Trade-Offs: eCommerce
• Integrated Service Provider platform
• eCommerce
• Payment processing
• Fulfillment
• Supports 1k+ manufacturers & 18k retail partners
• Requirements
• Scaling, On-Demand
• HA: failures & on-line upgrades
• High batch & real time loads
• Low TCO: capex and opex
http://mysql.com/customers/view/?id=1080
105. No Trade-Offs: Flight Control
• US Navy aircraft carriers
• Consolidated flight operations management system
• Maintenance records
• Fuel loads
• Weather conditions
• Flight deck plans
• Requirements
• No Single Points of Failure
• Complete redundancy
• Small footprint, harsh environment
• 4 x MySQL Cluster nodes,
Linux and Windows
MySQL User Conference Session: http://bit.ly/ogeid3
106. Creating & running your first Cluster - the “manual” way (without MCM)
• Up & running in 10-15 minutes using Quick Start guides from http://dev.mysql.com/downloads/cluster/
• Versions for Linux, Windows & Solaris