Webinar slides: Free Monitoring (on Steroids) for MySQL, MariaDB, PostgreSQL ... by Severalnines
Traditional server monitoring tools are not built for modern distributed database architectures. Let’s face it, most production databases today run in some kind of high availability setup - from simpler master-slave replication to multi-master clusters fronted by redundant load balancers. Operations teams deal with dozens, often hundreds of services that make up the database environment.
This is why we built ClusterControl - to address modern, highly distributed database setups based on replication or clustering. We wanted something that could provide a systems view of all the components of a distributed cluster, including load balancers.
Watch this replay of a webinar on free database monitoring using ClusterControl Community Edition. We show you how to monitor all your MySQL, MariaDB, PostgreSQL and MongoDB systems from a single point of control - whether they are deployed as Galera Clusters, sharded clusters or replication setups across on-prem and cloud data centers. We also see how to use Advisors in order to improve performance.
AGENDA
- Requirements for monitoring distributed database systems
- Cloud-based vs On-prem monitoring solutions
- Agent-based vs Agentless monitoring
- Deep dive into ClusterControl Community Edition
- Architecture
- Metrics Collection
- Trending
- Dashboards
- Queries
- Performance Advisors
- Other features available to Community users
SPEAKER
Bartlomiej Oles is a MySQL and Oracle DBA with over 15 years' experience in managing highly available production systems at IBM, Nordea Bank, Acxiom, Lufthansa, and other Fortune 500 companies. In the past five years, his focus has been on building and applying automation tools to manage multi-datacenter database environments.
In this day and age, data grows so fast it’s not uncommon for those of us using a relational database to reach the limits of its capacity. In this session, Kwangbock Lee explains how Samsung uses ClustrixDB to handle fast-growing data without manual database sharding. He highlights lessons learned, including a few hiccups along the way, and shares Samsung's experience migrating to ClustrixDB.
The role of databases in modern application development by MariaDB plc
The rise of serverless microservices, event-driven application architecture and full-stack development with JavaScript and the MEAN stack is changing what application developers need from databases – and how they interact with them. In this session, MariaDB's Thomas Boyd discusses recent advancements in application development and architecture and explains how MariaDB supports them.
"Smooth Operator" [Bay Area NewSQL meetup] by Kevin Xu
This slide deck was presented at the Bay Area NewSQL meetup in California, covering how TiDB, an open source NewSQL distributed database, is deployed and managed on any Kubernetes-enabled cloud environment by applying the Operator pattern.
SkySQL implements a groundbreaking, state-of-the-art architecture based on Kubernetes and ServiceNow, and with a strong emphasis on cloud security – using compartmentalization and indirect access to secure and protect customer databases.
In this session, we’ll walk through the architecture of SkySQL and discuss how MariaDB leverages an advanced Kubernetes operator and powerful ServiceNow configuration/workflow management to deploy and manage databases on cloud infrastructure.
SysAdmin Working from Home? Tips to Automate MySQL, MariaDB, Postgres & MongoDB by Severalnines
Are you a SysAdmin who is now responsible for your company's database operations? Then this is the webinar for you. Learn from a Senior DBA the basics you need to know to keep things up and running, and how automation can help.
This document provides an overview and summary of TiDB, an open-source distributed SQL database compatible with MySQL. It discusses TiDB's architecture, which includes TiDB for the SQL layer, TiKV for storage, and PD (the Placement Driver) for cluster metadata and scheduling. TiDB provides features like horizontal scalability, distributed transactions, and high availability. Example use cases are also presented, like Mobike's use of TiDB for locking/unlocking bikes and real-time analytics of bike usage data across 200 cities in China.
Webinar slides: Designing Open Source Databases for High Availability by Severalnines
It is said that if you are not designing for failure, then you are heading for failure. How do you design a database system from the ground up to withstand failure? This can be a challenge as failures happen in many different ways, sometimes in ways that would be hard to imagine. This is a consequence of the complexity of today’s database environments.
At Severalnines we’re big fans of high availability databases and have seen our fair share of failure scenarios across the thousands of database deployments we enable every year.
In this webinar replay, we’ll look at the different types of failures you might encounter and what mechanisms can be used to address them. We will also look at some of the popular HA solutions used today, and how they can help you achieve different levels of availability.
AGENDA
- Why design for High Availability?
- High availability concepts
- CAP theorem
- PACELC theorem
- Trade-offs
- Deployment and operational cost
- System complexity
- Performance issues
- Lock management
- Architecting databases for failures
- Capacity planning
- Redundancy
- Load balancing
- Failover and switchover
- Quorum and split brain
- Fencing
- Multi datacenter and multi-cloud setups
- Recovery policy
- High availability solutions
- Database architecture determines Availability
- Active-Standby failover solution with shared storage or DRBD
- Master-slave replication
- Master-master cluster
- Failover and switchover mechanisms
- Reverse proxy
- Caching
- Virtual IP address
- Application connector
SPEAKER
Ashraf Sharif is a System Support Engineer at Severalnines. He was previously involved in the hosting world and the LAMP stack, where he worked as a principal consultant and head of a support team, delivering clustering solutions for large websites in the South East Asia region. His professional interests are system scalability and high availability.
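The quorum and split-brain items in the agenda above can be illustrated with a minimal sketch. The helper below is a hypothetical illustration of the majority rule, not code from the slides: a partitioned node group may only keep serving if it can see a strict majority of the cluster.

```python
def has_quorum(visible_nodes: int, cluster_size: int) -> bool:
    """A partition may continue serving writes only if it can see a
    strict majority of the cluster; otherwise it should fence itself
    to avoid a split-brain (two sides accepting conflicting writes)."""
    return visible_nodes >= cluster_size // 2 + 1

# A 3-node cluster split 2/1: only the 2-node side keeps quorum.
print(has_quorum(2, 3))  # True
print(has_quorum(1, 3))  # False

# An even-sized (4-node) cluster split 2/2: neither side has quorum,
# which is why odd cluster sizes (or an arbitrator node) are preferred.
print(has_quorum(2, 4))  # False
```

This also shows why adding a ninth node to an eight-node cluster improves availability: it removes the possibility of a tied partition.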
Webinar slides: How to Migrate from Oracle DB to MariaDB by Severalnines
This document provides an overview and agenda for a webinar on migrating from Oracle DB to MariaDB. The webinar will cover why organizations are moving to open source databases, the benefits of migrating to MariaDB from Oracle, how to plan and execute the migration process, and post-migration management topics like monitoring, backups, high availability, and scaling in MariaDB. The presentation will include discussions of data type mapping, enabling PL/SQL syntax in MariaDB, available migration tools, and testing approaches.
What if …
- Traditional, labour-intensive backup and archive practices for your MySQL, MariaDB, MongoDB and PostgreSQL databases were a thing of the past?
- You could have one backup management solution for all your business data?
- You could ensure integrity of all your backups?
- You could leverage the competitive pricing and almost limitless capacity of cloud-based backup while meeting cost, manageability, and compliance requirements from the business?
Welcome to our webinar on Backup Management with ClusterControl.
ClusterControl’s centralized backup management for open source databases provides you with hot backups of large datasets, point-in-time recovery in a couple of clicks, at-rest and in-transit data encryption, data integrity via automatic restore verification, cloud backups (AWS, Google and Azure) for Disaster Recovery, retention policies to ensure compliance, and automated alerts and reporting.
Whether you are looking at rebuilding your existing backup infrastructure, or updating it, this webinar is for you!
AGENDA
- Backup and recovery management of local or remote databases
- Logical or physical backups
- Full or Incremental backups
- Position or time-based Point in Time Recovery (for MySQL and PostgreSQL)
- Upload to the cloud (Amazon S3, Google Cloud Storage, Azure Storage)
- Encryption of backup data
- Compression of backup data
- One centralized backup system for your open source databases (Demo)
- Schedule, manage and operate backups
- Define backup policies, retention, history
- Validation - Automatic restore verification
- Backup reporting
SPEAKER
Bartlomiej Oles, Senior Support Engineer at Severalnines, is a MySQL and Oracle DBA with over 15 years' experience in managing highly available production systems at IBM, Nordea Bank, Acxiom, Lufthansa, and other Fortune 500 companies. In the past five years, his focus has been on building and applying automation tools to manage multi-datacenter database environments.
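The retention-policy item in the agenda above boils down to a simple rule: any backup older than the retention window is a candidate for pruning. The sketch below is a hypothetical illustration of that rule; ClusterControl's actual policy engine is richer (per-schedule retention, cloud copies, compliance reporting).

```python
from datetime import datetime, timedelta

def backups_to_prune(backup_dates, retention_days, now=None):
    """Return the backups that fall outside the retention window.
    A minimal sketch of a time-based retention policy."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=retention_days)
    return [d for d in backup_dates if d < cutoff]

now = datetime(2020, 6, 30)
dates = [datetime(2020, 6, 1), datetime(2020, 6, 20), datetime(2020, 6, 29)]
# With a 14-day window, only the June 1 backup is expired.
print(backups_to_prune(dates, retention_days=14, now=now))
```

In practice a policy would also keep, say, one monthly backup indefinitely for compliance; that is a separate rule layered on top of the same cutoff logic.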
Webinar slides: Migrating to Galera Cluster for MySQL and MariaDB by Severalnines
This document provides an overview of online and offline migration strategies for migrating from a standalone MySQL or MySQL master-slave setup to a Galera Cluster. It discusses preparation steps like database schema checks and compatibility. It then outlines the process for offline migration using backups and restore, as well as online migration using MySQL replication to sync data between the existing and new Galera clusters before cutting over. Testing strategies like A/B testing in read-only mode are also presented.
OpenNebulaconf2017EU: OpenNebula 5.4 and Beyond, by Tino Vázquez and Ruben S. ... (OpenNebula Project)
In this talk, Rubén and Tino will lay out the novelties (not all of them, there are many!) present in 5.4, ranging from core new functionality to the big changes in vCenter. The roadmap for 5.6 and future versions will also be laid out, as far as it is consolidated (it won't be closed yet, but nearly so).
It would also be the perfect session for feature requests, so don't miss it!
YouTube: https://youtu.be/Czzm2EimayY
Webinar slides: How to Automate & Manage PostgreSQL with ClusterControl by Severalnines
Running PostgreSQL in production comes with the responsibility for a business critical environment; this includes high availability, disaster recovery, and performance. Ops staff worry whether databases are up and running, if backups are taken and tested for integrity, whether there are performance problems that might affect end user experience, if failover will work properly in case of server failure without breaking applications, and the list goes on.
ClusterControl can be used to operationalize your PostgreSQL footprint across your enterprise. It offers a standard way of deploying high-availability replication setups with auto-failover, integrated with load balancers offering a single endpoint to applications. It provides constant health and performance monitoring through rich dashboards, as well as backup management and point-in-time recovery.
See how much time and effort can be saved, as well as risks mitigated, with the help of a unified management platform over the more traditional, manual methods.
We’ve seen a 152% increase in ClusterControl installations by PostgreSQL users last year, so make sure you don’t miss out on the trend!
AGENDA
- Managing PostgreSQL “the old way”:
- Common challenges
- Important tasks to perform
- Tools that are available to help
- PostgreSQL automation and management with ClusterControl:
- Deployment
- Backup and recovery
- HA setups
- Failover
- Monitoring
- Live Demo
SPEAKER
Sebastian Insausti, Support Engineer at Severalnines, has loved technology since his childhood, when he took his first computer course (Windows 3.11) and decided what his profession would be. He has since built up experience with MySQL, PostgreSQL, HAProxy, WAF (ModSecurity), Linux (RedHat, CentOS, OL, Ubuntu server), monitoring (Nagios), networking and virtualization (VMWare, Proxmox, Hyper-V, RHEV).
Prior to joining Severalnines, Sebastian worked as a consultant to state companies on security, database replication and high availability scenarios. He is also a speaker and has given a few local talks on InnoDB Cluster and MySQL Enterprise together with an Oracle team. Before that, he worked for a Mexican company as head of the sysadmin department, as well as for a local ISP (Internet Service Provider), where he managed customers' servers and connectivity.
Introducing the ultimate MariaDB cloud, SkySQL by MariaDB plc
SkySQL is the first and only database-as-a-service (DBaaS) engineered for MariaDB by MariaDB, to use a state-of-the-art multi-cloud architecture built on Kubernetes and ServiceNow, and to deploy databases and data warehouses for transactional, analytical and hybrid transactional/analytical workloads.
In this session, we’ll lay out the vision for SkySQL, provide an overview of its capabilities, take a tour of its architecture, and discuss the long-term roadmap. We’ll wrap things up with a live demo of SkySQL, including a preview of its deep learning–based workload analysis and visualization interface.
How to power microservices with MariaDB by MariaDB plc
Adoption of microservices is continuing at a rapid pace, but many deployments struggle when it comes to the database topology and data modeling. This session covers the pros and cons of different approaches (e.g., giving every microservice its own database or its own schema on a shared database) and various strategies for providing a consolidated view of data when different data is managed by different microservices.
Introducing the R2DBC async Java connector by MariaDB plc
Not too long ago, a reactive alternative to JDBC was released, known as Reactive Relational Database Connectivity (R2DBC for short). While R2DBC started as an experiment to enable integration of SQL databases into systems that use reactive programming models, it now specifies a full-fledged service-provider interface that can be used to retrieve data from a target data source.
In this session, we’ll take a look at the new MariaDB R2DBC connector and examine the advantages of fully reactive, non-blocking development with MariaDB. And, of course, we’ll dive in and get a first-hand look at what it’s like to use the new connector with some live coding!
Getting started in the cloud for developers by MariaDB plc
Looking to get up and running in the cloud, and start building applications with MariaDB as fast as possible? In this session, Thomas Boyd walks through the quick-start process of deploying MariaDB in the most popular public clouds. He then touches on some of the essential differences between cloud database services, helping you to create the cloud database strategy that best meets your needs.
Cloud database vendors tend to report performance numbers for the sweet spot: running on highly optimized hardware with specific workload parameters.
Moreover, many of these systems are not tested under different failure scenarios that may appear in the public cloud.
At Netflix, as a cloud-native enterprise, our focus is on high availability. We achieve high availability by deploying at multiple regions.
Hence, our data store system performance is highly affected by our global deployment model, instance types and workload patterns.
Hence, we were interested in a cloud database benchmark tool that could be deployed in a loosely coupled fashion, as a microservice, with the ability to change configuration parameters dynamically at run time. In this paper, we present Netflix Data Benchmark (NDBench). NDBench offers pluggable patterns and loads, support for different client APIs, and the ability to deploy, manage and monitor multiple instances from a single point.

NDBench was designed to run for an indefinite time. This gave us the ability to test long-running database maintenance jobs, to test database systems under conditions that affect performance (such as compactions and repairs), and to observe client-side issues like memory leaks and heap pressure. We have been running NDBench for almost three years, validating multiple database versions, testing numerous NoSQL systems running in the cloud, and testing new functionality. NDBench is a major component of our testing and validation pipelines.
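NDBench's idea of pluggable load patterns can be sketched as small driver objects that decide, tick by tick, how many operations to issue. The class and function names below are hypothetical illustrations of the concept, not NDBench's actual Java API:

```python
import math

class ConstantRate:
    """A steady load pattern: the same number of operations every tick."""
    def __init__(self, ops_per_tick: int):
        self.ops_per_tick = ops_per_tick

    def ops(self, tick: int) -> int:
        return self.ops_per_tick

class SineWave:
    """A bursty pattern oscillating between 0 and 2 * amplitude, useful
    for exercising behaviour under varying load (e.g. during compactions)."""
    def __init__(self, amplitude: int, period: int):
        self.amplitude, self.period = amplitude, period

    def ops(self, tick: int) -> int:
        return round(self.amplitude * (1 + math.sin(2 * math.pi * tick / self.period)))

def run(pattern, ticks: int) -> int:
    """Drive the pattern for a number of ticks; a real driver would issue
    reads/writes against the target datastore instead of just counting."""
    return sum(pattern.ops(t) for t in range(ticks))

print(run(ConstantRate(100), ticks=10))  # 1000
```

Because the pattern is just an object with an `ops` method, new load shapes can be plugged in without touching the driver, which is what makes this style of benchmark suitable for long-running, dynamically reconfigured tests.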
CCV: migrating our payment processing system to MariaDB by MariaDB plc
CCV is a Dutch payment processor and loyalty provider. CCV's current payment processing platform is built on top of Microsoft SQL Server, but they are currently in the process of migrating it to MariaDB. This migration project is in progress and first production transactions are expected to run in 2020. In this session, Ernst Wernicke and Harry Dijkstra of CCV share how they are using MariaDB to meet critical high availability requirements, including geographic replication, zero data-loss, zero downtime (both planned and unplanned) and no single point of failure anywhere.
What to expect from MariaDB Platform X5, part 2, by MariaDB plc
This document summarizes new features and enhancements in MariaDB MaxScale 2.5 and MariaDB ColumnStore 1.5. Some key points include:
- MaxScale 2.5 includes a new graphical user interface, improved binlog router, capability to stream binlogs to Kafka as JSON, and distributed caching between MaxScale servers.
- ColumnStore 1.5 features a new API, PowerBI direct query connector, improved replication from InnoDB, and multinode support in SkySQL.
- Configuration and installation of ColumnStore has been simplified, including using a new ColumnStore.xml utility and S3 storage manager for redundant file storage in object storage.
How Pixid dropped Oracle and went hybrid with MariaDB by MariaDB plc
Pixid replaced Oracle Database with MySQL in 2011, then soon migrated to MariaDB to get better performance, more features and synchronous clustering for high availability. In addition to high-performance transactions, their customers needed access to fast analytics for self-service reporting and data exploration. Pixid started with a separate columnar database for analytics, but with the release of MariaDB ColumnStore, they found a more elegant solution – deploying a single database platform to handle both transactions and analytics. In this session, Antoine Gosset and Jérôme Mouret share how Pixid went from Oracle Database to handling both transactional and analytical workloads with MariaDB.
TiDB Introduction - Boston MySQL Meetup Group by Morgan Tocker
This document provides an overview and summary of TiDB, an open-source distributed SQL database inspired by Google's Spanner and F1. The summary includes:
1. TiDB is a distributed SQL database that is compatible with MySQL and provides horizontal scalability, high availability, and strong consistency with a hybrid OLTP/OLAP architecture.
2. It consists of TiDB, TiKV, and PD components where TiDB is the frontend MySQL compatible database layer, TiKV is the distributed key-value storage layer, and PD is the placement driver for metadata management.
3. TiDB is being used by over 300 companies including Mobike for applications such as real-time analytics, high concurrency
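The split between the SQL layer (TiDB) and the key-value layer (TiKV) rests on mapping every table row to a key-value pair. The sketch below is a simplified, hypothetical version of that layout (the real TiDB encoding uses fixed-width, memcomparable integer encodings), but it shows the key property: rows of one table sort together, so the table forms a contiguous key range that TiKV can split into regions.

```python
def row_key(table_id: int, row_id: int) -> bytes:
    """Simplified row-key layout: prefix each row's key with its table ID
    so all rows of a table occupy one contiguous, sortable key range."""
    # Big-endian fixed-width integers preserve numeric order under
    # bytewise comparison, which is what a sorted KV store needs.
    return b"t" + table_id.to_bytes(8, "big") + b"_r" + row_id.to_bytes(8, "big")

# Keys for the same table sort together; a new table starts a new range.
k1 = row_key(10, 1)
k2 = row_key(10, 2)
k3 = row_key(11, 1)
assert k1 < k2 < k3
```

Because key ranges are contiguous per table, the placement driver can split and rebalance "regions" (key ranges) across TiKV nodes, which is where the horizontal scalability in the summary above comes from.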
Ceph Day Berlin is a conference about Ceph, an open source unified storage system. Ceph provides object, block, and file storage and has no single point of failure. It is scalable, allowing storage to grow and shrink online, and is widely used in OpenStack clouds, Kubernetes, and the Hadoop ecosystem. The Ceph Foundation was formed to support the Ceph community through events, infrastructure, and other initiatives. Upcoming events include Cephalocon in Barcelona in May 2019.
The capabilities and features of MariaDB Platform continue to expand, resulting in larger and more sophisticated production deployments – and the need for better tools. To provide DBAs with comprehensive, consolidating tooling, we created MariaDB Enterprise Tools: an easy-to-use, modular command-line interface for interacting with any part of MariaDB Platform.
In this session, we will provide a preview of the MariaDB Enterprise Client, walk through current and planned modules and discuss future plans for MariaDB Enterprise Tools – including SkySQL modules and the ability to create custom modules.
OpenNebulaConf2017EU: Hyper converged infrastructure with OpenNebula and Ceph (OpenNebula Project)
Hyperconvergence is one of the big topics in datacenters at the moment. But is it more than old wine in new bottles? Learn why we at Runtastic built a hyperconverged datacenter based on OpenNebula with Ceph, and what we learned along the way.
YouTube: https://youtu.be/50Z4bmevTpg
Introducing TiDB - Percona Live Frankfurt by Morgan Tocker
TiDB is an open-source distributed SQL database developed by PingCAP that is compatible with MySQL. It provides horizontal scalability, high availability, and consistent distributed transactions. Mobike, which has 200 million users and 9 million bikes, uses TiDB to handle over 30 TB of data per day. While TiDB aims to be compatible with MySQL, some features like stored procedures work differently or are still in development.
This document provides an agenda and overview for a technical deep dive presentation on Elastic Agent and Ingest Manager. The presentation will include demos of enrolling an agent, collecting data out of the box, and configuring data collection. It will also provide a technical overview of components like the new indexing strategy, Elastic Agent, config options, and the Elastic Package Registry for centralized package management. Questions from the audience will be taken at the end.
This document summarizes Ganglia, an open-source scalable distributed monitoring system. It introduces Ganglia's architecture which uses gmond daemons to gather metrics from servers and gmetad daemons to aggregate metrics. Metrics are stored and visualized using RRDTool and presented through a web frontend. Potential problems like a central node bottleneck are discussed along with solutions like distributing monitoring across multiple gmetad instances and storing databases in RAM.
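The RRDTool storage that Ganglia relies on is, at heart, a fixed-size circular buffer: the database file never grows because new samples overwrite the oldest ones. A minimal sketch of that idea (an illustration, not RRDTool's actual on-disk format or consolidation functions):

```python
from collections import deque

class RoundRobinSeries:
    """Fixed-capacity metric series: recording a new sample silently
    evicts the oldest once capacity is reached, so storage is bounded
    regardless of how long the monitored host runs."""
    def __init__(self, capacity: int):
        self.samples = deque(maxlen=capacity)

    def record(self, value: float) -> None:
        self.samples.append(value)  # oldest value drops out when full

    def latest(self) -> list:
        return list(self.samples)

series = RoundRobinSeries(capacity=3)
for v in [1.0, 2.0, 3.0, 4.0]:
    series.record(v)
print(series.latest())  # [2.0, 3.0, 4.0]
```

Real RRDTool additionally consolidates old samples into coarser averages (per-hour, per-day), trading resolution for history within the same fixed footprint.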
First steps in developing an application as a suite of small services, with an analysis of the tools and architecture approaches to be used.
Topics covered:
1) What is a microservice architecture
2) Advantages in code procedures, team dynamics and scaling
3) How container services such as Docker assist in its implementation
4) How to deploy code in a microservices architecture
5) Container management tools and resource efficiency (Mesos, Kubernetes, AWS Container Service)
6) Scaling up
By PeoplePerHour team
presented by CTO Spyros Lambrinidis & Senior DevOps Panagiotis Moustafellos @ Docker Athens Meetup 18/02/2015
The document discusses modern elastic datacenter architecture using Apache Mesos and DC/OS. It provides an introduction to Mesos and DC/OS, explaining how they allow building scalable, fault-tolerant distributed systems. It outlines the benefits of using Mesos and DC/OS, and describes how the speakers have implemented a solution using tools like Packer, Terraform, Ansible, and DC/OS to achieve scalability, automation, and high availability. Demos are presented on deploying and managing applications with DC/OS tools like Marathon and running Spark frameworks.
Webinar slides: How to Migrate from Oracle DB to MariaDBSeveralnines
This document provides an overview and agenda for a webinar on migrating from Oracle DB to MariaDB. The webinar will cover why organizations are moving to open source databases, the benefits of migrating to MariaDB from Oracle, how to plan and execute the migration process, and post-migration management topics like monitoring, backups, high availability, and scaling in MariaDB. The presentation will include discussions of data type mapping, enabling PL/SQL syntax in MariaDB, available migration tools, and testing approaches.
What if …
- Traditional, labour-intensive backup and archive practices for your MySQL, MariaDB, MongoDB and PostgreSQL databases were a thing of the past?
- You could have one backup management solution for all your business data?
- You could ensure integrity of all your backups?
- You could leverage the competitive pricing and almost limitless capacity of cloud-based backup while meeting cost, manageability, and compliance requirements from the business.
Welcome to our webinar on Backup Management with ClusterControl.
ClusterControl’s centralized backup management for open source databases provides you with hot backups of large datasets, point in time recovery in a couple of clicks, at-rest and in-transit data encryption, data integrity via automatic restore verification, cloud backups (AWS, Google and Azure) for Disaster Recovery, retention policies to ensure compliance, and automated alerts and reporting.
Whether you are looking at rebuilding your existing backup infrastructure, or updating it, this webinar is for you!
AGENDA
- Backup and recovery management of local or remote databases
- Logical or physical backups
- Full or Incremental backups
- Position or time-based Point in Time Recovery (for MySQL and PostgreSQL)
- Upload to the cloud (Amazon S3, Google Cloud Storage, Azure Storage)
- Encryption of backup data
- Compression of backup data
- One centralized backup system for your open source databases (Demo)
- Schedule, manage and operate backups
- Define backup policies, retention, history
- Validation - Automatic restore verification
- Backup reporting
SPEAKER
Bartlomiej Oles, Senior Support Engineer at Severalnines, is a MySQL and Oracle DBA, with over 15 years experience in managing highly available production systems at IBM, Nordea Bank, Acxiom, Lufthansa, and other Fortune 500 companies. In the past five years, his focus has been on building and applying automation tools to manage multi-datacenter database environments.
Webinar slides: Migrating to Galera Cluster for MySQL and MariaDBSeveralnines
This document provides an overview of online and offline migration strategies for migrating from a standalone MySQL or MySQL master-slave setup to a Galera Cluster. It discusses preparation steps like database schema checks and compatibility. It then outlines the process for offline migration using backups and restore, as well as online migration using MySQL replication to sync data between the existing and new Galera clusters before cutting over. Testing strategies like A/B testing in read-only mode are also presented.
OpenNebulaConf2017EU: OpenNebula 5.4 and Beyond by Tino Vázquez and Ruben S. ... (OpenNebula Project)
In this talk, Rubén and Tino will lay out the novelties (not all of them, there are many!) present in 5.4, ranging from new core functionality to the big changes in vCenter. The roadmap for 5.6 and future versions will also be laid out, as far as it is consolidated (it won't be closed yet, but nearly so).
It will also be the perfect session for feature requests, so don't miss it!
YouTube: https://youtu.be/Czzm2EimayY
Webinar slides: How to Automate & Manage PostgreSQL with ClusterControl (Severalnines)
Running PostgreSQL in production comes with the responsibility for a business critical environment; this includes high availability, disaster recovery, and performance. Ops staff worry whether databases are up and running, if backups are taken and tested for integrity, whether there are performance problems that might affect end user experience, if failover will work properly in case of server failure without breaking applications, and the list goes on.
ClusterControl can be used to operationalize your PostgreSQL footprint across your enterprise. It offers a standard way of deploying high-availability replication setups with auto-failover, integrated with load balancers offering a single endpoint to applications. It provides constant health and performance monitoring through rich dashboards, as well as backup management and point-in-time recovery.
See how much time and effort can be saved, as well as risks mitigated, with the help of a unified management platform over the more traditional, manual methods.
We’ve seen a 152% increase in ClusterControl installations by PostgreSQL users last year, so make sure you don’t miss out on the trend!
AGENDA
- Managing PostgreSQL “the old way”:
- Common challenges
- Important tasks to perform
- Tools that are available to help
- PostgreSQL automation and management with ClusterControl:
- Deployment
- Backup and recovery
- HA setups
- Failover
- Monitoring
- Live Demo
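For the backup and recovery item above, PostgreSQL's built-in point-in-time recovery boils down to a couple of settings; the archive path and target time below are placeholders, assuming PostgreSQL 12 or later (where recovery settings live in postgresql.conf and recovery is triggered by an empty recovery.signal file in the data directory):

```ini
# postgresql.conf (PostgreSQL 12+)
restore_command      = 'cp /mnt/wal_archive/%f %p'
recovery_target_time = '2019-03-01 13:59:00'
```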
SPEAKER
Sebastian Insausti, Support Engineer at Severalnines, has loved technology since his childhood, when he took his first computer course (Windows 3.11); from that moment on, he knew what his profession would be. He has since built up experience with MySQL, PostgreSQL, HAProxy, WAF (ModSecurity), Linux (RedHat, CentOS, OL, Ubuntu server), Monitoring (Nagios), Networking and Virtualization (VMWare, Proxmox, Hyper-V, RHEV).
Prior to joining Severalnines, Sebastian worked as a consultant to state companies on security, database replication and high availability scenarios. He is also a speaker and has given a few local talks on InnoDB Cluster and MySQL Enterprise together with an Oracle team. Before that, he worked for a Mexican company as head of the sysadmin department, as well as for a local ISP (Internet Service Provider), where he managed customers' servers and connectivity.
Introducing the ultimate MariaDB cloud, SkySQL (MariaDB plc)
SkySQL is the first and only database-as-a-service (DBaaS) engineered for MariaDB by MariaDB. It uses a state-of-the-art multi-cloud architecture built on Kubernetes and ServiceNow to deploy databases and data warehouses for transactional, analytical and hybrid transactional/analytical workloads.
In this session, we’ll lay out the vision for SkySQL, provide an overview of its capabilities, take a tour of its architecture, and discuss the long-term roadmap. We’ll wrap things up with a live demo of SkySQL, including a preview of its deep learning–based workload analysis and visualization interface.
How to power microservices with MariaDB (MariaDB plc)
Adoption of microservices is continuing at a rapid pace, but many deployments struggle when it comes to the database topology and data modeling. This session covers the pros and cons of different approaches (e.g., giving every microservice its own database or its own schema on a shared database) and various strategies for providing a consolidated view of data when different data is managed by different microservices.
Introducing the R2DBC async Java connector (MariaDB plc)
Not too long ago, a reactive variant of the JDBC driver was released, known as Reactive Relational Database Connectivity (R2DBC for short). While R2DBC started as an experiment to enable integration of SQL databases into systems that use reactive programming models, it now specifies a full-fledged service-provider interface that can be used to retrieve data from a target data source.
In this session, we’ll take a look at the new MariaDB R2DBC connector and examine the advantages of fully reactive, non-blocking development with MariaDB. And, of course, we’ll dive in and get a first-hand look at what it’s like to use the new connector with some live coding!
Getting started in the cloud for developers (MariaDB plc)
Looking to get up and running in the cloud, and start building applications with MariaDB as fast as possible? In this session, Thomas Boyd walks through the quick-start process of deploying MariaDB in the most popular public clouds. He then touches on some of the essential differences between cloud database services, helping you to create the cloud database strategy that best meets your needs.
Cloud database vendors tend to report performance numbers for the sweet spot, or from runs on highly optimized hardware with specific workload parameters.
Moreover, many of these systems are not tested under the different failure scenarios that may appear in the public cloud.
At Netflix, as a cloud-native enterprise, our focus is on high availability. We achieve high availability by deploying at multiple regions.
Hence, our data store system performance is highly affected by our global deployment model, instance types and workload patterns.
Hence, we were interested in a cloud database benchmark tool that could be deployed in a loosely coupled fashion, as a microservice, with the ability to change configuration parameters dynamically at run time. In this paper, we present Netflix Data Benchmark (NDBench). NDBench offers pluggable workload patterns and loads, and supports different client APIs. It provides the ability to deploy, manage and monitor multiple instances from a single point. NDBench was designed to run indefinitely, which lets us test long-running database maintenance jobs, exercise database systems under conditions that affect performance (such as compactions and repairs), and also observe client-side issues like memory leaks and heap pressure. We have been running NDBench for almost three years, validating multiple database versions, testing numerous NoSQL systems running in the cloud, and vetting new functionality. NDBench is a major component of our testing and validation pipelines.
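As a rough illustration of the pluggable, runtime-reconfigurable design described above (this is not NDBench's actual API; every class and method name here is invented for the sketch), a driver that runs a read/write mix against a swappable client plugin might look like:

```python
import random
import statistics
import time

class Plugin:
    """Hypothetical client-plugin interface, loosely modeled on pluggable benchmark clients."""
    def read(self, key): raise NotImplementedError
    def write(self, key, value): raise NotImplementedError

class InMemoryPlugin(Plugin):
    """A trivial in-memory target standing in for a real database client."""
    def __init__(self):
        self.store = {}
    def read(self, key):
        return self.store.get(key)
    def write(self, key, value):
        self.store[key] = value

class Driver:
    """Runs a read/write mix against a plugin; the mix can be changed at run time."""
    def __init__(self, plugin, write_ratio=0.5):
        self.plugin = plugin
        self.write_ratio = write_ratio
        self.latencies = []
        self.ops = 0
    def run(self, n_ops, keyspace=100):
        for _ in range(n_ops):
            key = f"key{random.randrange(keyspace)}"
            start = time.perf_counter()
            if random.random() < self.write_ratio:
                self.plugin.write(key, "v")
            else:
                self.plugin.read(key)
            self.latencies.append(time.perf_counter() - start)
            self.ops += 1
    def p99_ms(self):
        # 99th-percentile latency in milliseconds
        return statistics.quantiles(self.latencies, n=100)[98] * 1000

driver = Driver(InMemoryPlugin(), write_ratio=0.2)
driver.run(1000)
driver.write_ratio = 0.8   # reconfigure the workload mix without restarting
driver.run(1000)
print(driver.ops, round(driver.p99_ms(), 3))
```

A real deployment would swap `InMemoryPlugin` for clients speaking the database's own API and expose the mutable parameters over a management endpoint.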
CCV: migrating our payment processing system to MariaDB (MariaDB plc)
CCV is a Dutch payment processor and loyalty provider. CCV's current payment processing platform is built on top of Microsoft SQL Server, but they are currently in the process of migrating it to MariaDB. This migration project is in progress and first production transactions are expected to run in 2020. In this session, Ernst Wernicke and Harry Dijkstra of CCV share how they are using MariaDB to meet critical high availability requirements, including geographic replication, zero data-loss, zero downtime (both planned and unplanned) and no single point of failure anywhere.
What to expect from MariaDB Platform X5, part 2 (MariaDB plc)
This document summarizes new features and enhancements in MariaDB MaxScale 2.5 and MariaDB ColumnStore 1.5. Some key points include:
- MaxScale 2.5 includes a new graphical user interface, improved binlog router, capability to stream binlogs to Kafka as JSON, and distributed caching between MaxScale servers.
- ColumnStore 1.5 features a new API, PowerBI direct query connector, improved replication from InnoDB, and multinode support in SkySQL.
- Configuration and installation of ColumnStore has been simplified, including using a new ColumnStore.xml utility and S3 storage manager for redundant file storage in object storage.
How Pixid dropped Oracle and went hybrid with MariaDB (MariaDB plc)
Pixid replaced Oracle Database with MySQL in 2011, then soon migrated to MariaDB to get better performance, more features and synchronous clustering for high availability. In addition to high-performance transactions, their customers needed access to fast analytics for self-service reporting and data exploration. Pixid started with a separate columnar database for analytics, but with the release of MariaDB ColumnStore, they found a more elegant solution – deploying a single database platform to handle both transactions and analytics. In this session, Antoine Gosset and Jérôme Mouret share how Pixid went from Oracle Database to handling both transactional and analytical workloads with MariaDB.
TiDB Introduction - Boston MySQL Meetup Group (Morgan Tocker)
This document provides an overview and summary of TiDB, an open-source distributed SQL database inspired by Google's Spanner and F1. The summary includes:
1. TiDB is a distributed SQL database that is compatible with MySQL and provides horizontal scalability, high availability, and strong consistency with a hybrid OLTP/OLAP architecture.
2. It consists of TiDB, TiKV, and PD components where TiDB is the frontend MySQL compatible database layer, TiKV is the distributed key-value storage layer, and PD is the placement driver for metadata management.
3. TiDB is being used by over 300 companies, including Mobike, for applications such as real-time analytics and high-concurrency workloads.
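To make point 2 concrete, here is a deliberately simplified, single-node toy of the idea behind TiDB on TiKV: rows are encoded as ordered keys prefixed by the table ID, so a table scan becomes a key-range scan over the distributed key-value layer (the key layout below is a simplification of TiDB's real encoding):

```python
import bisect

def row_key(table_id, row_id):
    # TiDB-style key layout (simplified): t<tableID>_r<rowID>, zero-padded
    # so that lexicographic order matches numeric order.
    return f"t{table_id:08d}_r{row_id:016d}"

class TinyKV:
    """A sorted key-value store standing in for TiKV (toy, single node)."""
    def __init__(self):
        self.keys = []
        self.values = {}
    def put(self, key, value):
        if key not in self.values:
            bisect.insort(self.keys, key)
        self.values[key] = value
    def scan(self, start, end):
        # Half-open range scan [start, end) over the sorted keys.
        lo = bisect.bisect_left(self.keys, start)
        hi = bisect.bisect_left(self.keys, end)
        return [(k, self.values[k]) for k in self.keys[lo:hi]]

kv = TinyKV()
for row_id, name in [(1, "alice"), (2, "bob"), (3, "carol")]:
    kv.put(row_key(42, row_id), {"name": name})

# A "SELECT * FROM t42" becomes a range scan over that table's key prefix:
rows = kv.scan(row_key(42, 0), row_key(43, 0))
print([v["name"] for _, v in rows])  # ['alice', 'bob', 'carol']
```

In real TiDB, these key ranges are split into regions spread across TiKV nodes, with PD deciding placement.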
Ceph Day Berlin is a conference about Ceph, an open source unified storage system. Ceph provides object, block, and file storage and has no single point of failure. It is scalable, allowing storage to grow and shrink online, and is widely used in OpenStack clouds, Kubernetes, and the Hadoop ecosystem. The Ceph Foundation was formed to support the Ceph community through events, infrastructure, and other initiatives. Upcoming events include Cephalocon in Barcelona in May 2019.
The capabilities and features of MariaDB Platform continue to expand, resulting in larger and more sophisticated production deployments – and the need for better tools. To provide DBAs with comprehensive, consolidated tooling, we created MariaDB Enterprise Tools: an easy-to-use, modular command-line interface for interacting with any part of MariaDB Platform.
In this session, we will provide a preview of the MariaDB Enterprise Client, walk through current and planned modules and discuss future plans for MariaDB Enterprise Tools – including SkySQL modules and the ability to create custom modules.
OpenNebulaConf2017EU: Hyper converged infrastructure with OpenNebula and Ceph... (OpenNebula Project)
Hyperconvergence is one of the big topics in datacenters at the moment. But is it more than old wine in new bottles? We explain why we at Runtastic built a hyperconverged datacenter based on OpenNebula with Ceph, and what we learned.
YouTube: https://youtu.be/50Z4bmevTpg
Introducing TiDB - Percona Live Frankfurt (Morgan Tocker)
TiDB is an open-source distributed SQL database developed by PingCAP that is compatible with MySQL. It provides horizontal scalability, high availability, and consistent distributed transactions. Mobike, which has 200 million users and 9 million bikes, uses TiDB to handle over 30 TB of data per day. While TiDB aims to be compatible with MySQL, some features like stored procedures work differently or are still in development.
This document provides an agenda and overview for a technical deep dive presentation on Elastic Agent and Ingest Manager. The presentation will include demos of enrolling an agent, collecting data out of the box, and configuring data collection. It will also provide a technical overview of components like the new indexing strategy, Elastic Agent, config options, and the Elastic Package Registry for centralized package management. Questions from the audience will be taken at the end.
This document summarizes Ganglia, an open-source scalable distributed monitoring system. It introduces Ganglia's architecture which uses gmond daemons to gather metrics from servers and gmetad daemons to aggregate metrics. Metrics are stored and visualized using RRDTool and presented through a web frontend. Potential problems like a central node bottleneck are discussed along with solutions like distributing monitoring across multiple gmetad instances and storing databases in RAM.
First steps into developing an application as a suite of small services, and an analysis of the tools and architecture approaches to be used.
Topics covered:
1) What is a micro service architecture
2) Advantages in code procedures, team dynamics and scaling
3) How container services such as docker assist in its implementation
4) How to deploy code in a micro services architecture
5) Container Management tools and resource efficiency (mesos, kubernetes, aws container service)
6) Scaling up
By PeoplePerHour team
presented by CTO Spyros Lambrinidis & Senior DevOps Panagiotis Moustafellos @ Docker Athens Meetup 18/02/2015
The document discusses modern elastic datacenter architecture using Apache Mesos and DC/OS. It provides an introduction to Mesos and DC/OS, explaining how they allow building scalable, fault-tolerant distributed systems. It outlines the benefits of using Mesos and DC/OS, and describes how the speakers have implemented a solution using tools like Packer, Terraform, Ansible, and DC/OS to achieve scalability, automation, and high availability. Demos are presented on deploying and managing applications with DC/OS tools like Marathon and running Spark frameworks.
The document outlines the agenda for Season 3 Episode 1 of the Netflix OSS podcast, which includes lightning talks on 8 new projects: Atlas, Prana, Raigad, Genie 2, Inviso, Dynomite, Nicobar, and MSL. Representatives from Netflix, IBM Watson, Nike Digital, and Pivotal each give a 3-5 minute presentation on their featured project. The presentations describe the motivation, features and benefits of each project for observability, integration with the Netflix ecosystem, automation of Elasticsearch deployments, job scheduling, dynamic scripting for Java, message security, and developing microservices.
This document discusses building a data platform in the cloud. It covers the evolution of data platforms from monolithic architectures to distributed event-driven architectures using a data lake. Key aspects of a cloud data platform include collecting and persisting all data in a data lake for standardized access, near real-time processing using streaming technologies, and building the platform using either fully managed or DIY/hybrid approaches on AWS. Design principles focus on event-driven separation of data producers and consumers and choosing the right technology for the problem.
Thinking DevOps in the era of the Cloud (Demi Ben-Ari)
The lines between development and operations people have gotten blurry, and many skills now need to be held by both sides.
In the talk we'll cover all of the considerations involved in creating development and production environments, including Continuous Integration, Continuous Deployment and the buzzword "DevOps", along with some real implementations in the industry.
And of course we can't leave out the real enabler of it all, "the Cloud", which gives us a tool set that makes life much easier when implementing all of these practices.
Database automation guide - Oracle Community Tour LATAM 2023 (Nelson Calero)
The tasks of the DBA role are in permanent evolution. There are new and changed functionalities in database versions, cloud services, integrations, and new tools. Automation has always been a big portion of the DBA's work, and it constantly challenges our processes. This presentation explores these automation changes using examples from the experience of supporting hundreds of Oracle installations of varying size and complexity, including the process of choosing the right tool for the task, implementation, and subsequent maintenance, mainly using Ansible.
This document discusses PingCAP's Kubernetes operator for TiDB, an open source distributed SQL database. It provides a brief history of PingCAP and the TiDB community. It then gives a technical overview of TiDB's architecture before explaining how the TiDB operator works. The operator allows users to deploy and manage TiDB clusters on Kubernetes through custom resources that are controlled by custom controllers. This provides capabilities like automated scaling, updates, and failover for stateful applications running on Kubernetes. The operator is open source and TiDB is also available as a managed service on GCP Marketplace.
USENIX LISA15: How TubeMogul Handles over One Trillion HTTP Requests a Month (Nicolas Brousse)
TubeMogul grew from a few servers to over two thousand, handling over one trillion HTTP requests a month, each processed in less than 50 ms. To keep up with this fast growth, the SRE team had to implement an efficient Continuous Delivery infrastructure that allowed over 10,000 Puppet deployments and 8,500 application deployments in 2014. In this presentation, we cover the nuts and bolts of the TubeMogul operations engineering team and how they overcame these challenges.
Last Conference 2017: Big Data in a Production Environment: Lessons Learnt (Mark Grebler)
Presentation at the 2017 LAST (Lean, Agile, Systems Thinking) Conference.
A presentation about the challenges involved in building a production Big Data system used directly by customers.
Netflix Container Scheduling and Execution - QCon New York 2016 (aspyker)
Scheduling a Fuller House: Container Management At Netflix
Customers from all over the world streamed forty-two billion hours of Netflix content last year. Various Netflix batch jobs and an increasing number of service applications use containers for their processing. In this talk, Netflix presents a deep dive into the motivations for, and the technology powering, container deployment on top of the AWS EC2 service. The talk covers our approach to cloud resource management and scheduling with the open source Fenzo library, along with details on the Docker execution engine that is part of Project Titus. It also shares some of the results so far and lessons learned, and ends with a brief look at the developer experience for containers.
Introduction to Data Engineer and Data Pipeline at Credit OK (Kriangkrai Chaonithi)
The document discusses the role of data engineers and data pipelines. It begins with an introduction to big data and why data volumes are increasing. It then covers what data engineers do, including building data architectures, working with cloud infrastructure, and programming for data ingestion, transformation, and loading. The document also explains data pipelines, describing extract, transform, load (ETL) processes and batch versus streaming data. It provides an example of Credit OK's data pipeline architecture on Google Cloud Platform that extracts raw data from various sources, cleanses and loads it into BigQuery, then distributes processed data to various applications. It emphasizes the importance of data engineers in processing and managing large, complex data sets.
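The extract, transform, load (ETL) steps described above can be sketched in a few lines; the sample records and the cleansing rule below are invented for illustration:

```python
# A toy batch ETL pipeline: extract raw records, cleanse and type them,
# then load aggregates into an in-memory "warehouse".
raw_events = [
    {"user": "a", "amount": "12.50", "ts": "2023-01-01"},
    {"user": "b", "amount": "bad",   "ts": "2023-01-01"},  # malformed row
    {"user": "a", "amount": "7.25",  "ts": "2023-01-02"},
]

def extract(source):
    yield from source

def transform(records):
    for r in records:
        try:
            yield {"user": r["user"], "amount": float(r["amount"]), "ts": r["ts"]}
        except ValueError:
            continue  # cleanse: drop malformed rows instead of failing the batch

def load(records, warehouse):
    for r in records:
        warehouse[r["user"]] = warehouse.get(r["user"], 0.0) + r["amount"]

warehouse = {}
load(transform(extract(raw_events)), warehouse)
print(warehouse)  # {'a': 19.75}
```

In a production pipeline like the one described, extract would read from object storage or an API, and load would write to a warehouse such as BigQuery; a streaming variant replaces the batch generator with a message consumer.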
Oracle EBS Journey to the Cloud - What is New in 2022 (UKOUG Breakthrough 22 ...) (Andrejs Prokopjevs)
This presentation is a successor to the "Running Oracle EBS in the cloud" session held at the UKOUG Apps16 event (and repeated at later conferences). It goes through the latest updates of 2022: what is still current, what is not, key recommendations, and a comparison of the certified public cloud platforms. The cloud journey is a topic clients continually ask about, and there are still uncertainties around the cloud journey options available to Oracle E-Business Suite customers.
Build an Open Source Data Lake For Data Scientists (Shawn Zhu)
This is a talk I presented in 2019 ICSA (International Chinese Statistics Association) Applied Statistics Symposium in session "How Data Science Drives Success in Enterprises"
The document discusses the benefits and challenges of running big data workloads on cloud native platforms. Some key points discussed include:
- Big data workloads are migrating to the cloud to take advantage of scalability, flexibility and cost effectiveness compared to on-premises solutions.
- Enterprise cloud platforms need to provide centralized management and monitoring of multiple clusters, secure data access, and replication capabilities.
- Running big data on cloud introduces challenges around storage, networking, compute resources, and security that systems need to address, such as consistency issues with object storage, network throughput reductions, and hardware variations across cloud vendors.
- The open source community is helping users address these challenges to build cloud native data architectures.
Next gen software operations models in the cloud (Aarno Aukia)
This document summarizes a presentation by Aarno Aukia, CTO of VSHN - The DevOps Company. The presentation discusses next generation operations models including DevOps, containers, cloud native computing, and cloud migration. It explains how these new models enable higher levels of automation, standardization, elasticity and agility compared to traditional IT organizations.
This slide deck was delivered at the Kubernetes/Docker meetup in Cologne, Germany, hosted by Giant Swarm, on how TiDB, an open source NewSQL distributed database, is deployed and managed on any Kubernetes-enabled cloud environment by applying the Operator pattern.
Similar to Designing for operability and manageability (20)
Understanding Inductive Bias in Machine Learning (SUTEJAS)
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMS (IJNSA Journal)
The smart irrigation system represents an innovative approach to optimize water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, empowers this system to provide accurate monitoring and control of irrigation processes by leveraging real-time environmental conditions. The main objective of a smart irrigation system is to optimize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets and their functionalities in the smart irrigation system. The crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being is emphasized in this system. These sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threats and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and computational capabilities, and calculates the potential security risks. The paper suggests possible risk treatment methods for effective secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Additionally, based on the security analysis conducted, the paper recommends the implementation of countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system.
By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
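A minimal sketch of the kind of scoring such a risk assessment performs, using the common likelihood × impact matrix (the assets and their scores below are hypothetical, not taken from the paper):

```python
# Hypothetical smart-irrigation assets with likelihood and impact rated 1-5.
ASSETS = {
    "soil moisture sensor": {"likelihood": 4, "impact": 3},
    "irrigation actuator":  {"likelihood": 3, "impact": 5},
    "gateway":              {"likelihood": 2, "impact": 5},
}

def risk_level(score):
    # Thresholds on the 1-25 likelihood x impact scale (illustrative choice).
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

def assess(assets):
    # Score each asset and classify it into a risk band.
    return {name: (a["likelihood"] * a["impact"],
                   risk_level(a["likelihood"] * a["impact"]))
            for name, a in assets.items()}

for name, (score, level) in assess(ASSETS).items():
    print(f"{name}: {score} ({level})")
```

High-band assets would then be the first candidates for the risk treatment methods the paper discusses.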
Embedded machine learning-based road conditions and driving behavior monitoring (IJECEIAES)
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
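Accuracy, precision and recall figures like those quoted above are computed from true/false positives and negatives; a small self-contained sketch with toy labels (the label vectors are invented for illustration):

```python
def classification_metrics(y_true, y_pred, positive=1):
    # Count confusion-matrix cells for the positive class.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

# Toy labels: 1 = aggressive driving event detected, 0 = normal driving.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
print(classification_metrics(y_true, y_pred))  # (0.75, 0.75, 0.75)
```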
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressions (Victor Morales)
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
6th International Conference on Machine Learning & Applications (CMLA 2024) (ClaraZara1)
The 6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in the theory, methodology and applications of machine learning.
Advanced control scheme of doubly fed induction generator for wind turbine us... (IJECEIAES)
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used in wind power conversion systems. First, a doubly fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second-order sliding mode controller (SOSMC). Their results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
A review on techniques and modelling methodologies used for checking electrom... (nooriasukmaningtyas)
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from discrete devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry, and smart vehicles in particular, confront design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI, and sensors give misleading values, which can prove fatal in the case of automotives. In this paper, the authors present a non-exhaustive review of research work concerned with the investigation of EMI in ICs and the prediction of this EMI using various modelling methodologies and measurement setups.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has been the subject of fewer comprehensive studies and sustainability assessments.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT (jpsjournal1)
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been referred to as the "New Great Game." This research centres on that power struggle, considering geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil politics, and traditional and non-traditional security are all explored and explained. Using Mackinder's Heartland, Spykman's Rimland, and Hegemonic Stability theories, the study examines China's role in Central Asia. It adheres to an empirical epistemological method, takes care to remain objective, and critically analyses primary and secondary research documents to elaborate the role of China's geo-economic outreach in Central Asian countries and its future prospects. According to this study, China is seeing significant success in trade, pipeline politics, and gaining influence over other governments, thanks in large part to key instruments such as the Shanghai Cooperation Organisation and the Belt and Road Economic Initiative.
5. MeTripping - Introduction (2)
Architecture
Challenges
● Scale and performance
● Varying user traffic
● Data integration with 10s of data providers - different formats and SLAs
(architecture diagram: dynamic data and static data flows)
7. Infrastructure & Environment
● OS Standardisation
○ Latest LTS Releases / Minimal Container OS
○ Minimal Docker Images (Alpine / Atomic)
● Package Management
○ Tarball Installation vs. Package Repos
○ Adopt Docker
● Config Management
○ Hand Manage
○ Ansible vs. Chef vs. Puppet
● Service Management
○ Manual start / stop of services
○ Supervisor vs. Systemd
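As one option for the service-management bullet above, a minimal systemd unit (the service name, binary and config paths are hypothetical) looks like:

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=My application
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp --config /etc/myapp.conf
Restart=on-failure
User=myapp

[Install]
WantedBy=multi-user.target
```

Enable and start it with `systemctl enable --now myapp`, replacing manual start/stop of services with supervised, auto-restarting ones.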
8. Build & Release Process
● Build on laptops
● Using IDE For Deployment
● Hand-copying artifacts to remote servers
● Version Management
9. Metrics & Availability
● Health Checks & External Service Availability
○ Site 24x7 / Uptime Robot / Gomez
● Server Health Monitoring
○ CloudWatch, DataDog, Nagios, Sensu etc
● Application Performance Monitoring
○ Istio / Hystrix
○ New Relic, AppDynamics, Elastic APM, Stackdriver
○ CloudWatch, sysDig
● Logs (ELK)
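The slide lists tiers of checks without showing how results roll up. A minimal sketch of that roll-up logic (the statuses and the "degraded vs. down" thresholds are assumptions, not from the deck):

```python
def overall_status(checks):
    """Roll individual health-check results (name -> bool) up into one status.

    All passing  -> "healthy"
    Some failing -> "degraded: <names>"
    All failing  -> "down"
    """
    failing = sorted(name for name, ok in checks.items() if not ok)
    if not failing:
        return "healthy"
    if len(failing) < len(checks):
        return "degraded: " + ", ".join(failing)
    return "down"
```

In practice a tool like Uptime Robot or Nagios does this aggregation for you; the point is that per-service checks and the rolled-up status are separate concerns.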
10. Security & Compliance
● Secure Coding Guidelines
○ OWASP Top 10
○ Follow Industry Best Practices (PCI, HIPAA)
● Access Controls
○ Central User Management
○ Do not use shared accounts
○ Follow least privilege model
● Restrict Network Access
○ Use both Public & Private Networks
○ Restrict login access only to trusted networks
○ Protect Admin Pages with Google SSO + .htaccess
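The least-privilege bullet can be reduced to a small check: deny by default, grant per role. A sketch (role and action names are illustrative, not from the deck):

```python
# Hypothetical role -> permission mapping; anything not listed is denied.
ROLE_PERMISSIONS = {
    "dev": {"read"},
    "ops": {"read", "deploy"},
}

def allowed(role, action):
    """Least-privilege check: permit an action only if the role explicitly grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())
```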
11. Application Availability and Scalability
● Resource allocation issues
○ Compute
■ Using old generation servers
■ Using “burstable” instances for production
■ Using high CPU instances without looking at actual CPU utilisation
○ Storage
■ Using magnetic storage
■ Under-provisioning / over-provisioning of storage
■ Provisioned IOPS with Databases
■ Using ephemeral storage
○ Network
■ Ephemeral IPs for Internet facing servers
■ SSL Termination on Application (Apache / Nginx)
■ Nginx / Apache as Application Load Balancers
■ Serving static assets from application
■ Mapping domains to Load Balancer IPs
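The "without looking at actual CPU utilisation" bullet implies measuring before sizing. A toy right-sizing heuristic over utilisation samples (the 85% / half-of-target thresholds are assumptions for illustration):

```python
def suggest_resize(cpu_samples, target=0.6):
    """Suggest an instance-sizing action from CPU utilisation samples (0.0-1.0).

    Peaks above 85% -> scale up; peaks below half the target -> scale down.
    """
    peak = max(cpu_samples)
    if peak > 0.85:
        return "scale up"
    if peak < target / 2:
        return "scale down"
    return "keep"
```

Real deployments would feed this from CloudWatch or DataDog metrics rather than a hardcoded list.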
12. Managing Costs
● Use less SaaS & PaaS
○ Binpack with Docker
○ Run local MySQL, ElasticSearch, Kafka, ELK etc
● Separate Accounts For BUs & Environments
○ Non Prod Environments (staging, dev etc)
○ Prod Environments
● Shutdown Non Prod Environments when not in use
● Housekeep regularly
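The "shut down non-prod when not in use" idea is just a filter over tagged instances plus an off-hours window. A sketch of the selection logic (tag names, IDs, and the 08:00-20:00 window are assumptions; the actual stop call would go through your cloud API):

```python
from datetime import datetime, time

def instances_to_stop(instances, now):
    """Pick dev/staging instances to stop outside the 08:00-20:00 window."""
    off_hours = now.time() >= time(20, 0) or now.time() < time(8, 0)
    if not off_hours:
        return []
    return [i["id"] for i in instances if i.get("env") in ("dev", "staging")]

# Illustrative fleet: only the non-prod instances qualify.
fleet = [
    {"id": "i-app", "env": "prod"},
    {"id": "i-ci", "env": "dev"},
    {"id": "i-stage", "env": "staging"},
]
```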
13. Team Structure
● DevOps is hardest to hire (and retain)
● Training freshers in DevOps is time consuming
● What works well
○ Make Engineering Self Sufficient With Operations (Dev+Ops)
■ Make monitoring and deployment as self-service
○ Use Infrastructure As Code tools (Terraform)
○ Rotate oncall within the Dev Team
● Have a shared team to manage Infra
○ Account management
○ IT Stuff
○ Backup / Restore etc
14. Design & Architecture Best Practices
● System instrumentation - Systems and application monitoring
● Web-services architecture
● System standardisation (dockers)
○ Consistent environments
○ Simplified builds / releases
○ Scalable architecture
● Data systems best practices
○ Design for scale and performance
15. System Instrumentation - Systems / application monitoring
● Application monitoring is a "must-have" requirement for all applications
○ Helps identify system and application deficiencies
○ Helps identify problems, proactively
○ Results in efficient (performance and cost effective) systems
16. Web-services architecture
● Create web-services, not a "spider-web" of services
● Create fewer “power packed” services vs. many, many “simplistic” services
○ Push down complex data relationships into application code / database
● Create separate services for different data response times
○ Web-services for data stored in redis / memcached / elasticsearch should be kept separate from web-services for data from RDBMS
● Use tools such as Postman and Swagger to author and document web-services
[Architecture diagram: Web Crawler, Hadoop / Spark, Elasticsearch, Postgres / Mongo, Redis, Middle Tier]
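The "separate services for different data response times" bullet amounts to giving each service an explicit latency budget tied to its backend. A sketch of such a registry (service names, backends, and budgets are invented for illustration):

```python
# Hypothetical registry: cache/search-backed services get tight timeout
# budgets; RDBMS-backed services get looser ones, so a slow database
# call can never stall a "fast" endpoint sharing the same service.
SERVICES = {
    "autocomplete": {"backend": "redis", "timeout_s": 0.2},
    "search": {"backend": "elasticsearch", "timeout_s": 0.5},
    "bookings": {"backend": "rdbms", "timeout_s": 2.0},
}

def timeout_for(service):
    """Look up the response-time budget for a named service."""
    return SERVICES[service]["timeout_s"]
```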
19. System standardisation (3)
● Standard base docker image for all
dockers
○ OS: Ubuntu 16.04
○ Python: 3.4
○ Setup non-system user
20. System standardisation (4)
● Separate Git repository for build and
configurations
○ MeTrippingDeployment holds docker-compose YAML files with build
and deployment settings for the dev / stage / prod
environments
○ .env files contain environment settings (sourced in by
docker-compose)
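The `.env` files docker-compose sources are plain `KEY=VALUE` lines. A minimal parser sketch, useful when application code needs to read the same settings outside docker-compose (comment handling and whitespace stripping mirror docker-compose's simple format; edge cases like quoting are omitted):

```python
def load_env(text):
    """Parse simple KEY=VALUE lines as used in docker-compose .env files.

    Blank lines, comment lines starting with '#', and lines without '='
    are skipped; keys and values are whitespace-stripped.
    """
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env
```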
22. Data Systems Best Practices
● Embrace hybrid (SQL + NoSQL + Big Data) system design
○ Store transaction data in RDBMS
■ Consider data partitioning
■ Move archive data to Big Data systems with Long Term Storage Backend
○ Store dimension / non-transaction data in NoSQL
■ MongoDB vs. CouchDB vs. Elasticsearch / Solr
○ Move complex data joins to backend data pipelines
○ Simplify star schema
● System design considerations
○ Use “non-constrained” CPUs
○ Use SSDs for data
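The "move archive data out of the hot transactional table" bullet is an insert-then-delete within one transaction. A self-contained SQLite sketch (table and column names `orders`, `created_at` and the cutoff date are illustrative; in production this would run against your RDBMS and the archive target might be a Big Data store rather than a sibling table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, created_at TEXT, total REAL);
    CREATE TABLE orders_archive (id INTEGER PRIMARY KEY, created_at TEXT, total REAL);
""")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "2015-01-01", 9.5), (2, "2018-06-01", 20.0)])

# Move rows older than the cutoff into the archive table atomically.
cutoff = "2016-01-01"
with conn:
    conn.execute("INSERT INTO orders_archive SELECT * FROM orders WHERE created_at < ?",
                 (cutoff,))
    conn.execute("DELETE FROM orders WHERE created_at < ?", (cutoff,))

hot = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
archived = conn.execute("SELECT COUNT(*) FROM orders_archive").fetchone()[0]
```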