1. The document provides an overview of OVH LAB's Enterprise Cloud Databases offering, including its architecture, features, pricing, and roadmap.
2. The architecture is designed for high availability, with automatic failover that can occur within 30 seconds. It uses dedicated hardware in multiple availability zones for isolation and includes daily backups.
3. Key features include PostgreSQL and planned MariaDB databases, 24/7 monitoring, automatic minor version updates, IP whitelisting, encryption, and observability tools for logs and metrics. Pricing aims to be competitive with AWS RDS and Google Cloud SQL.
EDB Failover Manager is EnterpriseDB’s newest product that helps you set up reliable and highly available Postgres configurations. Based on technology that has been hardened over the last couple of years, Failover Manager is the missing piece of the high availability solution that you have been asking for.
This presentation will give you the details on this rock-solid product: what it is, how to use it, and the benefits of this lightweight product with no single point of failure.
This presentation reviews:
• What is EDB Failover Manager
• How to minimize downtime
• Key features, such as cluster health monitoring, node/database failure detection, automatic failover mechanisms, and user-customizable options.
This presentation is intended for organizations seeking a solution for High Availability with their Postgres database.
For a live demo or presentation please contact sales@enterprisedb.com.
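The node-failure detection described above boils down to a heartbeat timeout check. The following is a toy sketch in Python, not EDB Failover Manager code; the class name and threshold are invented for illustration:

```python
import time

class HeartbeatMonitor:
    """Toy heartbeat-based failure detector: a node that stays silent
    longer than the timeout window is reported as down (illustrative only)."""

    def __init__(self, timeout_s):
        self.timeout_s = timeout_s  # silence threshold before a node is suspect
        self.last_seen = {}         # node name -> last heartbeat timestamp

    def heartbeat(self, node, now=None):
        # Record a heartbeat; `now` can be injected for deterministic testing.
        self.last_seen[node] = time.monotonic() if now is None else now

    def down_nodes(self, now=None):
        # Return every node whose last heartbeat is older than the timeout.
        t = time.monotonic() if now is None else now
        return sorted(n for n, seen in self.last_seen.items()
                      if t - seen > self.timeout_s)
```

A real agent would exchange heartbeats over the network and trigger promotion of a standby when the primary goes quiet; here the timestamps are injectable so the logic itself can be tested.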
Amazon EBS provides persistent block-level storage volumes for use with Amazon EC2 instances. In this technical session, you will discover how Amazon EBS can take your application deployments on EC2 to the next level. Session attendees will learn about the Amazon EBS features and benefits, how to identify applications that are appropriate for use with Amazon EBS, best practices, and details about its performance and volume types. We discuss how to maximize Amazon EBS performance, with a special emphasis on low-latency, high-throughput applications like transactional and NoSQL databases, and big data analysis frameworks like Hadoop and Kafka. We will also dive deep and discuss Elastic Volumes, our latest EBS feature that allows you to dynamically increase capacity, tune performance, and change the type of EBS volumes on the fly. Throughout, we share tips for success.
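One performance detail worth making concrete: gp2 volumes earn a baseline IOPS proportional to their size, commonly documented as 3 IOPS per GiB with a floor of 100 and a cap of 16,000. These figures change over time, so verify them against the current EBS documentation; the sketch below just encodes that rule:

```python
def gp2_baseline_iops(size_gib):
    """Baseline IOPS for a gp2 volume: 3 IOPS/GiB, min 100, max 16,000.
    Limits as commonly documented; check current AWS docs before relying on them."""
    return max(100, min(16_000, 3 * size_gib))
```

For example, a 1,000 GiB volume gets a 3,000 IOPS baseline; Elastic Volumes lets you grow the volume, and with it the baseline, without detaching it.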
Intuitive APIs are critical success factors for modern software architectures. APIs should be easy to use, difficult to misuse, consumer friendly, easy to maintain and consistently designed.
To achieve these goals, it is important to design APIs before the actual development starts, in a collaborative approach involving the various stakeholders. This API-first design approach is especially important when exposing existing enterprise functionality, e.g. implemented as microservices, to the outside world.
But what role do APIs play in microservice architectures? How are API and microservice implementations combined, and how do I integrate them with a DevOps approach?
These questions are answered in this session. We consider a holistic development approach, from API design through to the deployment of a microservice, and present tools such as Oracle Apiary, which supports API-first design, and Oracle Wercker, which automates build and deployment.
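As a taste of what API-first design looks like in practice, here is a minimal, hypothetical API Blueprint document of the kind Apiary consumes (the resource name and fields are invented for illustration):

```
FORMAT: 1A

# Orders API

## Order [/orders/{id}]

### Retrieve an Order [GET]

+ Response 200 (application/json)

        {"id": 42, "status": "shipped"}
```

A contract like this can be reviewed by stakeholders and mocked for consumers before any microservice code exists.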
This presentation is based on Lawrence To's Maximum Availability Architecture (MAA) Oracle OpenWorld presentation covering the latest updates on high availability (HA) best practices across multiple architectures, features and products in Oracle Database 19c. It considers all workloads (OLTP, data warehousing and analytics, and mixed workloads) as well as on-premises and cloud-based deployments.
Migrating Your AD to the Cloud with AWS Directory Services for Microsoft Active Directory (Amazon Web Services)
As you continue along your cloud migration journey with AWS, moving Windows workloads to the AWS Cloud is a critical step. It is essential to have an Active Directory in the cloud to seamlessly support your group policy management, authentication, and authorization. Learn more about it in this overview session on AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD. Hear from the experts about the basics of this service, how to get started, and its various use cases, including its ability for trust-based federation.
MySQL Group Replication - Hands-On Tutorial (Kenny Gryp)
During this tutorial, attendees get hands-on with virtual machines and migrate a standard master-slave architecture to the new native MySQL Group Replication.
After a brief explanation of what Group Replication is and why it matters for MySQL HA architectures, we cover how to verify that the workload and schema are suitable for GR and how it can be configured.
Then we go through the migration steps with minimal impact on the live system.
Basic administration tasks are covered, such as adding and removing a node from the cluster. We also use performance_schema to monitor our Group Replication cluster and understand how to control it.
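For reference, the performance_schema instrumentation mentioned above exposes member health directly; a typical check looks like the query below (the MEMBER_ROLE column is available from MySQL 8.0):

```sql
-- List each group member with its state (ONLINE, RECOVERING, ...) and role
SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE, MEMBER_ROLE
FROM performance_schema.replication_group_members;
```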
by Isaiah Weiner, Sr. Manager of Solutions Architecture, AWS
Companies are using AWS to create and deploy efficient, fast, and cost-effective backup and restore capabilities to protect critical IT systems without incurring the infrastructure expense of a second physical site. In this session, we will talk about cloud-based services AWS provides to enable robust backup and rapid recovery of your IT infrastructure and data.
Learning Objectives:
- Learn how to make decisions about the service, with best practices and useful tips for success
- Learn about content-based routing, HTTP/2, and WebSockets
- Secure your web applications using TLS termination and AWS WAF on Application Load Balancer
AWS Cloud Design Patterns (a.k.a. CDP) are generally repeatable solutions to commonly occurring problems in cloud architecting. In this session, we introduce CDP and explain how you can apply CDPs in practical scenarios such as photo sharing, e-commerce, and web site campaigns.
AWS adoption in financial services is accelerating: more and more large regulated FS organisations are using AWS to transform their business at scale. Hear from HSBC on how they have been successful in doing so, the lessons learned, and recommended best practices.
apidays Paris 2022 - APIs the next 10 years: Software, Society, Sovereignty, Sustainability
December 14, 15 & 16, 2022
Event-Driven API Management – why REST isn't enough
Benjamin Gottstein, Sales Engineer at Solace
Simplifying Distributed Transactions with Sagas in Kafka (Stephen Zoio, Simpl...) - confluent
Microservices are seen as the way to simplify complex systems, until you need to coordinate a transaction across services, and in that instant, the dream ends. Transactions involving multiple services can lead to a spaghetti web of interactions. Protocols such as two-phase commit come with complexity and performance bottlenecks. The Saga pattern offers a simpler transactional model. In sagas, a sequence of actions is executed, and if any action fails, a compensating action is executed for each of the actions that have already succeeded. This is particularly well suited to long-running and cross-microservice transactions. In this talk we introduce the new Simple Sagas library (https://github.com/simplesourcing/simplesagas). Built using Kafka Streams, it provides a scalable, fault-tolerant, event-based transaction processing engine. We walk through a use case of coordinating a sequence of complex financial transactions. We demonstrate the easy-to-use DSL, show how the system copes with failure, and discuss this overall approach to building scalable transactional systems in an event-driven streaming context.
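The compensation logic at the heart of the pattern is compact enough to sketch. The following is a generic illustration of a saga executor in Python, not code from the Simple Sagas library (all names are invented for this example):

```python
class SagaAborted(Exception):
    """Raised after all compensations for completed steps have run."""

def run_saga(steps):
    """Execute (action, compensation) pairs in order. If an action fails,
    run the compensations of the already-completed steps in reverse order."""
    completed = []
    for action, compensate in steps:
        try:
            action()
        except Exception as exc:
            for comp in reversed(completed):
                comp()  # undo each successful step, most recent first
            raise SagaAborted(f"rolled back after: {exc}") from exc
        completed.append(compensate)
```

In a streaming deployment the actions and compensations would be command messages on Kafka topics rather than local calls, but the ordering guarantee is the same.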
(Jason Gustafson, Confluent) Kafka Summit SF 2018
Kafka has a well-designed replication protocol, but over the years, we have found some extremely subtle edge cases which can, in the worst case, lead to data loss. We fixed the cases we were aware of in version 0.11.0.0, but shortly after that, another edge case popped up and then another. Clearly we needed a better approach to verify the correctness of the protocol. What we found is Leslie Lamport’s specification language TLA+.
In this talk I will discuss how we have stepped up our testing methodology in Apache Kafka to include formal specification and model checking using TLA+. I will cover the following:
1. How Kafka replication works
2. What weaknesses we have found over the years
3. How these problems have been fixed
4. How we have used TLA+ to verify the fixed protocol.
This talk will give you a deeper understanding of Kafka replication internals and its semantics. The replication protocol is a great case study in the complex behavior of distributed systems. By studying the faults and how they were fixed, you will have more insight into the kinds of problems that may lurk in your own designs. You will also learn a little bit of TLA+ and how it can be used to verify distributed algorithms.
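To make the model-checking idea concrete without TLA+ itself, here is a toy explicit-state checker in Python. It exhaustively explores every interleaving of two processes performing an unsynchronized read-increment-write on a shared counter and finds the classic lost-update violation: the same style of exhaustive search TLC performs, though the protocol modeled here is invented for illustration and has nothing to do with Kafka's:

```python
from collections import deque

# State: (pc1, local1, pc2, local2, shared). Each process reads the shared
# counter into a local, then writes local + 1 back -- with no locking.
def next_states(s):
    pc1, l1, pc2, l2, shared = s
    out = []
    if pc1 == "read":
        out.append(("write", shared, pc2, l2, shared))
    elif pc1 == "write":
        out.append(("done", l1, pc2, l2, l1 + 1))
    if pc2 == "read":
        out.append((pc1, l1, "write", shared, shared))
    elif pc2 == "write":
        out.append((pc1, l1, "done", l2, l2 + 1))
    return out

def find_violation(init, invariant):
    """Breadth-first search over all reachable states; return a terminal
    state violating `invariant`, or None if the invariant always holds."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        successors = next_states(s)
        if not successors and not invariant(s):
            return s
        for n in successors:
            if n not in seen:
                seen.add(n)
                frontier.append(n)
    return None

# Invariant: once both processes are done, both increments took effect.
bad = find_violation(("read", 0, "read", 0, 0), lambda s: s[4] == 2)
```

A specification language like TLA+ lets you state the protocol and invariant declaratively and gives you this exhaustive exploration, plus liveness checking, for free.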
MySQL 8 High Availability with InnoDB Clusters (Miguel Araújo)
MySQL’s InnoDB cluster provides a high-level, easy-to-use solution for MySQL high availability. Combining MySQL Group Replication with MySQL Router and the MySQL Shell into an integrated solution, InnoDB clusters offer easy setup and management of MySQL instances into a fault-tolerant database service. In this session learn how to set up a basic InnoDB cluster, integrate it with applications, and recognize and react to common failure scenarios that would otherwise lead to a database outage.
- Workshop presentation
Enterprise Cloud Databases are fully managed, clustered databases tailored for production needs.
OVH takes care of all the infrastructure setup; you end up with your SQL access and are able to focus on your business.
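That SQL access is a standard PostgreSQL connection string; the host, database and credentials below are invented placeholders, and requiring TLS is a sensible default for a managed service:

```
postgresql://app_user:secret@my-cluster.example.net:5432/mydb?sslmode=require
```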
A Fotopedia presentation given at MongoDay 2012 in Paris, at the Xebia office.
Talk by Pierre Baillet and Mathieu Poumeyrol.
French Article about the presentation:
http://www.touilleur-express.fr/2012/02/06/mongodb-retour-sur-experience-chez-fotopedia/
Video to come.
Andrew Ryan describes how Facebook operates Hadoop to provide access as a shared resource between groups.
More information and video at:
http://developer.yahoo.com/blogs/hadoop/posts/2011/02/hug-feb-2011-recap/
Management and Automation of MongoDB Clusters - Slides (Severalnines)
Use MongoDB at Any Scale
As you scale, one of the challenges is optimizing your clusters and mitigating operational risk. Proper preparation can result in significant savings and reduced downtime.
This session covers:
* Deployment of dev/test/production environments across private data centers or public clouds
* What to monitor in production environments
* Management automation with ClusterControl from Severalnines
* How ClusterControl works with TokuMX
The session will give you the tools to more effectively manage your cluster, immediately. The presentation will include code samples and a live Q&A session.
This webinar is being delivered jointly by Severalnines & Tokutek. Severalnines provides automation and management tools to reduce the complexity of working with highly available database clusters. Tokutek provides high-performance and scalability for MongoDB, MySQL and MariaDB.
Equnix Business Solutions (Equnix) is an IT solution provider in Indonesia, offering comprehensive solution services, especially on the infrastructure side, for corporate business needs, based on research and Open Source. Equnix has three main services, known as the Trilogy of Services: support (maintenance/managed services), world-class software development, and expert consulting and assessment for high-performance transaction systems. Equnix is customer-oriented rather than product- or vendor-oriented. Equal opportunity based on merit is our credo in managing HR development.
Laine Campbell, CEO of Blackbird, will explain the options for running MySQL at high volumes on Amazon Web Services, exploring options around database as a service, hosted instances/storage and all appropriate availability, performance and provisioning considerations, using real-world examples from Call of Duty, Obama for America and many more. Laine will show how to build highly available, manageable and performant MySQL environments that scale in AWS: how to maintain them, grow them and deal with failure. Some of the specific topics covered are:
* Overview of RDS and EC2 – pros, cons and usage patterns/antipatterns.
* Implementation choices in both offerings: instance sizing, ephemeral SSDs, EBS, provisioned IOPS and advanced techniques (RAID, mixed storage environments, etc…)
* Leveraging regions and availability zones for availability, business continuity and disaster recovery.
* Scaling patterns including read/write splitting, read distribution, functional dataset partitioning and horizontal dataset partitioning (aka sharding)
* Common failure modes – AZ and Region failures, EBS corruption, EBS performance inconsistencies and more.
* Managing and mitigating cost with various instance and storage options
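The horizontal partitioning (sharding) bullet above can be sketched as a consistent-hash router. This is a generic illustration in Python, not code from the talk; the shard names and virtual-node count are arbitrary:

```python
import hashlib
from bisect import bisect

class HashRing:
    """Minimal consistent-hash ring: each key maps to the next shard point
    clockwise. Virtual nodes smooth the distribution across shards."""

    def __init__(self, shards, vnodes=64):
        # Place `vnodes` points per shard on the ring, sorted by hash value.
        self.ring = sorted(
            (self._h(f"{shard}#{i}"), shard)
            for shard in shards for i in range(vnodes)
        )
        self.points = [point for point, _ in self.ring]

    @staticmethod
    def _h(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def shard_for(self, key):
        # First ring point at or after the key's hash, wrapping around.
        i = bisect(self.points, self._h(key)) % len(self.points)
        return self.ring[i][1]
```

Adding a shard only remaps the keys falling between its new points and their predecessors, instead of reshuffling the whole keyspace: the property that makes resharding manageable at this scale.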
Learn about new features in the 19c RAC database. This session gives you a good understanding of the architecture of RAC, ASM and the Grid Infrastructure, covering processes, their communication mechanisms and startup sequences, and then moves on to common troubleshooting scenarios and how to diagnose them. We will learn how to automatically troubleshoot hangs, collect and debug traces, run best-practice checks on your stack automatically, and act on the recommendations.
Voldemort & Hadoop @ LinkedIn, Hadoop User Group Jan 2010 (Bhupesh Bansal)
Jan 22nd, 2010 Hadoop meetup presentation on Project Voldemort and how it plays well with Hadoop at LinkedIn. The talk focuses on the LinkedIn Hadoop ecosystem: how LinkedIn manages complex workflows, data ETL, data storage, and online serving of 100 GB to terabytes of data.
OpenNebulaConf 2016 - Measuring and tuning VM performance by Boyan Krosnov, S... (OpenNebula Project)
In this session we'll explore measuring VM performance and evaluating changes to settings or infrastructure that can affect performance positively. We'll also share current best practices for architecting high-performance clouds, drawn from our experience.
Redis Developers Day 2014 - Redis Labs Talks (Redis Labs)
These are the slides the Redis Labs team used to accompany the session we gave during the first ever Redis Developers Day on October 2nd, 2014, in London. They include some of the ideas we've come up with to tackle operational challenges in the hyper-dense, multi-tenant Redis deployments that our service, Redis Cloud, consists of.
Similar to OVH Lab - Enterprise Cloud Databases
OVHcloud Startup Program: discovering the ecosystem serving startups (OVHcloud)
On January 5th, the OVHcloud Startup Program France Benelux team held its first online meetup of the year.
The first of a long series!
This first session, hosted by Fanny Bouton, Startup Program Leader France Benelux, was an opportunity to discover the full breadth of the OVHcloud ecosystem serving startups, through the OVHcloud Marketplace, the Open Trusted Cloud Program and the OVHcloud Partner Program.
The meetup also made it possible to talk directly with all of OVHcloud's Program Leaders as well as with partners such as La BigAddress, Freelance Stack and SmartGlobal.
Fine tune and deploy Hugging Face NLP modelsOVHcloud
Are you currently managing AI projects that require a lot of GPU power?
Are you tired of managing the complexity of your infrastructures, GPU instances and your Kubeflow yourself?
Need flexibility for your AI platform or SaaS solution?
OVHcloud innovates in AI by offering simple and turnkey solutions to train your models and put them into production.
How can you successfully migrate to Hosted Private Cloud 2020 (OVHcloud)
OVHcloud teams are pleased to offer you this webinar dedicated to migration to HPC2020:
• What is HPC2020 and what are its features?
• What are the migration paths and steps?
• What resources are made available to you?
• Q&A
OVHcloud Partner Webinar - Data Processing (OVHcloud)
OVHcloud gives you a first look at its infrastructure expertise as we launch a new range of cloud services dedicated to data. This range drastically reduces the infrastructure constraints at the key stages of the data lifecycle, allowing data professionals (data engineers, data ops, data scientists, and so on) to focus on extracting value from it.
OVHcloud Tech Talks S01E09 - OVHcloud Data Processing: the new service... (OVHcloud)
We live in an era where everything is connected: from our light bulbs to our text editors, the objects and services around us are becoming ever smarter. To do so they generate data, which is necessary for the object or service to work, but is also useful for evolving the products.
This data can quickly reach large volumes, several tens or even hundreds of gigabytes. The question then becomes: how do you process such volumes? How do you extract meaning and value at this scale?
With OVHcloud Data Processing, a solution based on the Apache Spark framework, we answer this need. Come and discover how you too, in a few clicks, can run your code on an infrastructure sized for your needs.
Through various examples, such as an analysis of New York taxi traffic, we will see how Data Processing was designed, how it works, and how it can be used to extract value from your data.
OVHcloud Tech Talks S01E08 - GAIA-X for techs: OVHcloud & Scaleway... (OVHcloud)
Last week, the French and German economy ministers unveiled the outline of GAIA-X, which aims to lay the foundations of a European cloud ecosystem able to offer services that meet European security criteria and standards.
But what does that mean concretely for you, as a developer, devops and/or sysadmin?
How can this joint government initiative bring you something day to day? And when we talk about collaboration between French and German players, what does that mean in practice?
Pierre Gronlier, Solutions Architect at OVHcloud, and Yohann Prigent, VP Front at Scaleway, have been involved in GAIA-X for months, notably at the heart of one of the projects presented last Thursday: the GAIA-X Demonstrator (https://staging.gaia-x-demonstrator.eu/). In this Tech Talk, they take you behind the scenes of the project and explain not only why and how this demonstrator was built, with all the technical details, but above all what GAIA-X means for you as a tech!
With Enterprise Cloud Databases, discover a dedicated, fully managed and monitored service based on the PostgreSQL relational database management system, guaranteeing high availability for your most critical workloads.
OVHcloud Tech Talks S01E07 – Introduction to artificial intelligence for ... (OVHcloud)
Everyone is talking about artificial intelligence these days, and some people imagine it to be complex.
The reality is quite different: the concepts are simple, and tools exist to hide the implementation complexity.
In this episode, Jean-Louis Queguiner explains what artificial intelligence is and which methods exist, and introduces the basic concepts of neural networks.
No maths, we promise, just common sense!
A filesystem accessible over the network is a very common need on our servers, especially since the rise of containers and horizontal scalability: "I want to access the same filesystem on all my nodes!"
OVHcloud Tech Talks Fr S01E05 – The Harbor operator, a necessity for some... (OVHcloud)
Infrastructure, big data, databases, Kubernetes, load balancing, SaaS, PaaS, IaaS… Like our OVHcloud Meetups (such as those in Paris, Rennes or Brest), the OVHcloud Tech Talks cover a wide variety of topics, based on knowledge sharing and lessons learned, and are always made by techs for techs.
OVHcloud Tech-Talk S01E04 - Telemetry in the service of agility (OVHcloud)
My name is Jérémy Hennart; I am a Program Manager, Scrum Master, facilitator, and project lead. I support technical teams day to day, and in this episode of the OVHcloud Tech Talks I explain how I brought agility to a team of 29 developers with Agile Telemetry!
At OVHcloud, we use Machine Learning models internally to support decision-making, in areas ranging from fraud prevention to improving the maintenance of our infrastructure.
Leveraging standard open-source formats, such as TensorFlow SavedModels, ML Serving lets users easily deploy their models while benefiting from essential features such as instrumentation, scalability and model versioning.
Logging at OVHcloud:
Logs Data Platform is OVHcloud's platform for centralized log collection, analysis and management. It was built to meet the challenge of indexing more than 4,000 billion logs for a company like OVHcloud. This presentation describes the overall architecture of Logs Data Platform around its core components, Elasticsearch and Graylog, and covers the scalability, availability, performance and evolvability issues that are the daily life of the Observability team at OVHcloud.
A la découverte du standard OpenStack et de ses APIs
OpenStack est la brique logicielle open source sur laquelle s'appuie OVHcloud pour proposer son offre de Public Cloud (compute, storage, network, …). OpenStack permet l’administration complète des ressources à travers une API particulièrement riche. Raison pour laquelle OVHcloud en donne un accès exhaustif aux utilisateurs de son Public Cloud, nombreux à manipuler leurs ressources en lignes de commande.
Au fil de ce meetup, nous poserons les bases de l’architecture et du fonctionnement d’OpenStack et de ses différents composants. Nous parlerons ensuite du fonctionnement des APIs OpenStack, éléments clés pour interagir avec OpenStack. Nous finirons par quelques usages de ces APIs au travers d’outils connus comme Terraform, qui permettront de mettre en évidence l’importance de proposer un standard dans l’univers du Public Cloud.
OVHcloud utilise Ceph depuis cinq ans pour certains de ses besoins de stockage, bien qu'étant composée de 2000 serveurs physiques et 20000 conteneurs, cette infrastructure est gérée au quotidien par une seule personne au RUN. Nous ferons une présentation et un retour d'expérience sur les différents moyens mis en oeuvre pour y arriver.
Migrer 3 millions de sites sans maitriser leur code source ? Impossible mais ...OVHcloud
Il y a deux ans, nous apprenions notre nouvelle mission : migrer les 3 millions de sites web hébergés dans notre datacentre de Paris. Sans en maitriser le code source, les migrer sans impact nous semblait totalement irréaliste.
18 mois plus tard, c'est terminé ! Pour y arriver, nous avons du configurer des proxy SQL, des tunnels réseau, migrer des IP entre nos datacentres, livrer des milliers de serveurs, bosser durant des dizaines de nuits, mais aussi s'organiser entre plusieurs équipes qui n'ont pas l'habitude de travailler ensemble. Quels sont les soucis technique et humains que nous avons rencontrés, et comment les avons nous résolu ? Retour d'expérience sur l'une des plus grosse migration que le web ai connu !
Le machine learning et l’IA sont des buzzwords qui font maintenant partie de notre quotidien. Pourtant, rares sont les projets qui osent inclure du ML dans leur cycle de vie.
Les raisons sont multiples :
- Inquiétudes sur un niveau d’expertise trop limité en DataScience
- Difficultés d’apprécier à l’avance le gap entre difficulté de mise en place et retour sur investissement
- Inquiétudes sur la pérennité des efforts investis : (dérive des modèles entrainés)
- Peur de s’engager dans un effort trop important de maintenance sur le long terme
Bien que fondées, ces raisons n’ont plus lieu d’être après la mise en place de procédés d’industrialisation spécifiques à ce genre de problème.
Venez découvrir comment nous avons fait converger les compétences des datascientists et des devops afin de créer une plate-forme de machine learning simple, scalable et accessible aux non-experts. De l’analyse des données à la mise en production de modèles nous verrons comment industrialiser les procédés d’apprentissage automatique sans le moindre effort.
Pour plus d'informations à propos de Prescience :
https://labs.ovh.com/machine-learning-platform
OVHcloud Hosted Private Cloud Platform Network use cases with VMware NSXOVHcloud
In this workshop VMware will provide a quick reminder of the main contributions of the NSX network virtualization platform: consistent network and security management, increased application resiliency, rapid migration of workloads to and from the cloud.
VMware and OVH will then move on to practical cases with implementation of micro-segmentation, dynamic routing, automatic deployment of an application, load balancing in the OVH Hosted Private Cloud. This workshop is aimed at a technical audience.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Welocme to ViralQR, your best QR code generator.ViralQR
Welcome to ViralQR, your best QR code generator available on the market!
At ViralQR, we design static and dynamic QR codes. Our mission is to make business operations easier and customer engagement more powerful through the use of QR technology. Be it a small-scale business or a huge enterprise, our easy-to-use platform provides multiple choices that can be tailored according to your company's branding and marketing strategies.
Our Vision
We are here to make the process of creating QR codes easy and smooth, thus enhancing customer interaction and making business more fluid. We very strongly believe in the ability of QR codes to change the world for businesses in their interaction with customers and are set on making that technology accessible and usable far and wide.
Our Achievements
Ever since its inception, we have successfully served many clients by offering QR codes in their marketing, service delivery, and collection of feedback across various industries. Our platform has been recognized for its ease of use and amazing features, which helped a business to make QR codes.
Our Services
At ViralQR, here is a comprehensive suite of services that caters to your very needs:
Static QR Codes: Create free static QR codes. These QR codes are able to store significant information such as URLs, vCards, plain text, emails and SMS, Wi-Fi credentials, and Bitcoin addresses.
Dynamic QR codes: These also have all the advanced features but are subscription-based. They can directly link to PDF files, images, micro-landing pages, social accounts, review forms, business pages, and applications. In addition, they can be branded with CTAs, frames, patterns, colors, and logos to enhance your branding.
Pricing and Packages
Additionally, there is a 14-day free offer to ViralQR, which is an exceptional opportunity for new users to take a feel of this platform. One can easily subscribe from there and experience the full dynamic of using QR codes. The subscription plans are not only meant for business; they are priced very flexibly so that literally every business could afford to benefit from our service.
Why choose us?
ViralQR will provide services for marketing, advertising, catering, retail, and the like. The QR codes can be posted on fliers, packaging, merchandise, and banners, as well as to substitute for cash and cards in a restaurant or coffee shop. With QR codes integrated into your business, improve customer engagement and streamline operations.
Comprehensive Analytics
Subscribers of ViralQR receive detailed analytics and tracking tools in light of having a view of the core values of QR code performance. Our analytics dashboard shows aggregate views and unique views, as well as detailed information about each impression, including time, device, browser, and estimated location by city and country.
So, thank you for choosing ViralQR; we have an offer of nothing but the best in terms of QR code services to meet business diversity!
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™UiPathCommunity
In questo evento online gratuito, organizzato dalla Community Italiana di UiPath, potrai esplorare le nuove funzionalità di Autopilot, il tool che integra l'Intelligenza Artificiale nei processi di sviluppo e utilizzo delle Automazioni.
📕 Vedremo insieme alcuni esempi dell'utilizzo di Autopilot in diversi tool della Suite UiPath:
Autopilot per Studio Web
Autopilot per Studio
Autopilot per Apps
Clipboard AI
GenAI applicata alla Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
4. 4
Before starting…
Our goal: provide relational databases for your critical workloads, with no compromises.
This document focuses on offers, architecture, features, roadmap and pricing.
Please note that we are still in the early access phase, allowing you to test but not yet to buy.
Lab page: https://labs.ovh.com/ha-database
Contact us if you have questions (last slide)!
5. 5
Our database offers, from entry-level services to high performance:
• « Cloud Databases » — included in Web products, or standalone; no SLA, no HA; from free to 20 €/month; in production.
• Public Cloud Databases — a brick in Public Cloud; pay-as-you-go; OpenStack compliance; multi-tenant; can be HA; starting at 20 €/month; currently R&D.
• « Enterprise » Databases — no compromises; dedicated hardware (single-tenant); starting at 800 €/month; early access phase. You are here!
6. 6
Databases built with passion
During the last 2 years at OVH, our internal DBA team worked hard on the internal industrialization of critical relational database architectures, starting with PostgreSQL, for our own needs.
We now have dozens of internal clusters benefiting from our new technology, and decided to work on a public offer, because you tend to have the same needs as OVH.
We started Early Access in January 2019, allowing us to gather customer feedback and fine-tune our offers. This document explains where we are, and what we imagine, for you. Enjoy the read!
Timeline:
• R&D (done) + internal use
• January 2019 — Early Access: infra & offers fine-tuning, API, … (you are here — today: October 2019)
• General Availability (next)
7. 7
Databases for a S.M.A.R.T. Cloud
Dedicated hardware
Each node is on a dedicated server, just for you. We provide constant CPU performance, constant IOPS and real isolation.
100% Managed
We monitor your services 24/7. We perform software and hardware maintenance, and back up your critical data daily.
Vanilla software
No vendor lock-in. We use open-source and vanilla software, trusted by the community.
Simple pricing
Network traffic? Included. Storage and constant IOPS? Included. Observability tools, daily backups, and so on? Included!
Scalability
Your databases can grow with your needs. Change your database plan when you want, and add up to 50 replicas for horizontal scalability.
High-Availability by default
Your workloads are critical. Our architectures are highly available by default, with automatic failover in a few seconds. We provide a 99.99% SLA.
8. 8
You should continue to read if…
• If you recognize yourself in one of the customer profiles below, the next slides are for you!
DB Admin
– I want to take zero risks on the infra. High availability is a key point.
– I want well-managed services and observability tools.
– I want an offer that respects DB best practices (ACID, local storage, security…).
– I don't trust custom-made DBMS. Vanilla or nothing: no vendor lock-in for my company.
IT Provider company
– I need a worry-free database service for my customers.
– I need to be competitive to win RFPs.
– Pricing should be simple.
– Performance and high availability are key points.
CTO
– I need a 24/7 monitored infra and a high-level SLA.
– I need to control my costs easily (TCO).
– I want to be able to move from one cloud provider to another easily (no vendor lock-in).
10. 10
Architecture principles / schema
1 x database cluster includes by default 3 x dedicated nodes (primary, replica, backup):
• Read-Write endpoint → load balancing → Primary node (R-W)
• Read-Only endpoint → load balancing → Replica node (R-O), with horizontal scaling
• Backuper node, fed by replication, writing to 2 x filer storage
Both endpoints are reachable from the Internet; data is replicated from the primary node to the replica and backuper nodes.
11. 11
Architecture principles / roles description
Each database cluster is composed of different items:
• Load balancing: based on replicated appliances and HAProxy, they balance the network traffic to your nodes (primary and replicas). You can use different ports for Read-Only and Read-Write, or use the same one.
• Primary node: based on 1 x dedicated host (single-tenant), it accepts Read-Write operations. If you configure your application to use the same port for Read and Write operations, the primary node will also accept Read-Only operations.
• Replica nodes: based on n x dedicated hosts (single-tenant), they accept Read-Only operations and allow horizontal scaling. By default, a cluster is composed of 1 x replica node.
• Backup node: based on 1 x dedicated host (single-tenant), it will NOT accept Read nor Write operations. It replicates your data and is used for non-degrading backups: backups are performed on this dedicated node instead of the production ones.
• Cluster storage: based on local SSD storage, with RAID 10 (replicated storage), it stores your operational data. Backups are stored on OVH filer storage.
• Backup storage: based on 2 x OVH filer storage, it stores your backups and allows you to restore them.
12. 12
Architecture principles / roles discovery
Relational database clustering implies specific roles. To counter outage scenarios, such as a "primary node down", we implemented high-availability templates:
• Automatic role discovery
– Primary node for RW traffic
– Secondary nodes for RO traffic
– No traffic for the backup node
– … everything decided with a quorum (see the Wikipedia explanation)
• Fast & continuous discovery
– Probe every 30 seconds
The load balancer keeps asking each node "are you primary?": RW traffic goes to the node answering yes, RO traffic to the nodes answering no.
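The discovery rule above can be sketched as a tiny routing function (a simplified model for illustration; the node names and probe results are hypothetical, not the actual OVH implementation):

```python
# Simplified sketch of role discovery: given each node's answer to the
# "are you primary?" probe, decide where RW and RO traffic goes.
def route(probe_results, backup_node):
    """probe_results maps node name -> True if it answered 'yes, I am primary'."""
    rw_targets = [n for n, is_primary in probe_results.items() if is_primary]
    ro_targets = [n for n, is_primary in probe_results.items()
                  if not is_primary and n != backup_node]  # backuper gets no traffic
    return rw_targets, ro_targets

rw, ro = route({"node-a": True, "node-b": False, "node-c": False},
               backup_node="node-c")
print(rw, ro)  # ['node-a'] ['node-b']
```

Because the probe repeats every 30 seconds, a role change (e.g. a promoted replica) is picked up within one probe interval.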
13. 13
Architecture principles / Regions & Availability Zones
Within one region, nodes, load balancers (LB) and backup storage are spread across two availability zones (AZ).
For improved resiliency, we propose multi-AZ redundancy.
15. 15
Outage #1 : replica down
1. Replica down, no other replicas
2. Automatic failover: role discovery
3. After max 30 seconds, the primary will handle Read-Only and Read-Write
4. OVH will re-attach a new replica automatically; back to nominal mode after synchronization
Read-Write impact: no downtime, but possibly degraded performance
Read-Only impact: degraded performance (1 node accepts all RO+RW instead of 2)
16. 16
Outage #2 : primary down
1. Primary down, 1 x replica up
2. Automatic failover: role discovery
3. After max 30 seconds, the replica is elected as primary, handling Read-Only and Read-Write
4. OVH will re-attach a new replica automatically; back to nominal mode after synchronization
Read-Write impact: downtime, unable to perform operations for a few seconds
Read-Only impact: no downtime, potentially degraded performance
17. 17
Outage #3 : AZ down, quorum remain
1. Availability zone down, 1 x primary up
2. Quorum remains: after max 30 seconds, RO traffic is rerouted via the load balancer automatically
3. The primary will handle Read-Only and Read-Write
4. OVH will re-attach a new replica automatically; back to nominal mode after synchronization
Read-Write impact: no downtime, but possibly degraded performance
Read-Only impact: degraded performance (1 node accepts all RO+RW instead of 2)
18. 18
Outage #4 : AZ lost, quorum lost
1. Availability zone down, 1 x replica up
2. Quorum is lost: the cluster switches to Read-Only in order to avoid a split brain
3. OVH will automatically re-attach a primary node, in a new AZ if possible
4. Back to nominal mode after synchronization
Read-Write impact: downtime, until we re-attach a primary
Read-Only impact: no downtime, degraded performance (1 node accepts all RO+RW instead of 2)
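The quorum rule behind outages #3 and #4 can be sketched as follows (an illustrative model, not the actual OVH election code): with 3 voting nodes, the surviving side keeps accepting writes only while it still sees a strict majority.

```python
# Simplified quorum decision for a 3-voter cluster (primary, replica, backuper).
# Illustrative model only: the real election logic is more involved.
def cluster_mode(total_voters, reachable_voters):
    """Accept writes only while a strict majority of voters is reachable;
    otherwise fall back to read-only to avoid a split brain."""
    return "read-write" if reachable_voters > total_voters // 2 else "read-only"

print(cluster_mode(3, 2))  # read-write  (outage #3: quorum remains)
print(cluster_mode(3, 1))  # read-only   (outage #4: quorum lost)
```

This is why losing one AZ can be survived transparently, while losing two voters forces the cluster into read-only until OVH re-attaches nodes.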
19. 19
Outage #5 : All cluster down
1. Both availability zones down
2. If we still have access to the backups: we restore a snapshot in another region
3. If we don't have access to the backups: commitment to a 12-hour maximum RPO
Read-Write impact: downtime, until we recover
Read-Only impact: downtime, until we recover
20. 20
Planned #1 : Minor version update
1. We update host by host, to ensure that the cluster will not suffer any downtime
2. Before updating the primary, we switch over RW traffic to a replica by promoting it
Read-Write impact: downtime during the switchover (max 30 seconds)
Read-Only impact: no downtime, degraded performance (1 node accepts all RO+RW instead of 2)
22. 22
Features list — Critical Cloud Databases
Billing method: monthly
SLA: 99.99% (4 minutes per month)
Infrastructure
• DBMS proposed — available: PostgreSQL 9.6, 10, 11; planned: MariaDB
• Managed service: yes (operating system, minor DBMS versions, hardware parts, network)
• High availability: yes, by default
• Auto failover: yes, performed in 30 seconds maximum
• Geo-redundancy intra region (multiple AZs): yes, optional
• Clustering: yes, by default
• Replicas (increase RO performance): yes, by default; you can add up to 10 replicas
• DB instance resizing: yes (size up only for now)
Backups
• Daily backups: yes, 3 rolling months included
• On-demand backup: yes
• Always performed on a separate node (the backuper) to avoid noise on production
• Point-in-time recovery (PITR): yes
• Restore: yes
Security
• IP whitelisting: yes
• End-to-end TLS/SSL: yes
• Full disk encryption (LUKS): yes
Network
• Public network access: yes
• Private network (vRack): not for now, planned
Management
• Observability tools: yes (logs, full metrics)
• API management: yes
• CLI management: yes (super admin)
• Web interface management: yes
24. 24
High Availability & Automatic Failover
• Automatic failure detection
– Continuous probing
• Fault tolerant
– The failed node is removed from the cluster
• Fast failover
– Maximum 30 seconds
– No need to update DNS records
• In case of an outage (node down, AZ down, …)
– No downtime (except during the failover itself)
– Lower performance
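A quick sanity check on the SLA figure (our own arithmetic, assuming a 30-day month; only the 99.99% comes from the offer): the SLA leaves a downtime budget of roughly 4.3 minutes per month, so a single 30-second failover consumes only a fraction of it.

```python
# Downtime budget implied by a 99.99% monthly SLA (assuming a 30-day month).
month_seconds = 30 * 24 * 3600                 # 2,592,000 s in the month
budget_seconds = month_seconds * (1 - 0.9999)  # allowed downtime
print(round(budget_seconds))                   # ~259 s, i.e. about 4.3 minutes
print(round(budget_seconds / 60, 1))           # 4.3
```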
25. 25
Dedicated hardware — we guarantee performance
• Physical nodes: no noisy neighbors. Isolation.
• Network: nodes communicate in their own network, with tight control using security groups. Zero trust.
• Local storage: hardware RAID 10 for both security & speed. Constant IOPS.
• Yours only: CPU, RAM and I/O dedicated & guaranteed for your workload. Performance.
26. 26
Automatic and on-demand backups — your data, safe and sound
01 — Daily: each day, your cluster is backed up and replicated multiple times. Backups are performed on a dedicated node (the backuper) to avoid noise on production. We keep them for 3 rolling months.
02 — On demand: you can always ask for a backup whenever you want, for example before a major update in your app. On-demand backups are also performed on the dedicated backuper node.
03 — PITR: log files are also backed up. This way you can go back in time, right to the second.
27. 27
Restore
You are able to request a backup restore whenever you want:
• No downtime — the restore happens on a dedicated host.
• Close to the second — choose between your backups, or specify a date (PITR).
• Pay per restore — you select a cloud instance flavor, and you pay for your restore hourly.
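For illustration only: on the PostgreSQL versions offered here (9.6, 10, 11), a point-in-time restore target is expressed in a recovery.conf file. The WAL-fetch command below is a hypothetical placeholder, and on this managed offer OVH performs the restore for you.

```
# recovery.conf (PostgreSQL 9.6/10/11) -- illustrative PITR sketch only
restore_command = 'fetch-wal %f %p'           # hypothetical command fetching archived WAL
recovery_target_time = '2019-10-01 12:34:56'  # restore "right to the second"
recovery_target_action = 'promote'            # open read-write once the target is reached
```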
28. 28
Backups/Restore: sum-up
• Daily auto data backups — we perform daily physical ZFS snapshots (we don't use pg_dump). Perimeter: datafiles on the filesystem.
• "On demand" data backups — you can perform an on-demand backup through the API and control panel, whenever you want. Perimeter: same as daily backups.
• Data backup process — each backup is made on the backuper node, isolated from production: no impact on your performance. We stop the postgresql process on this node during the backup.
• Data backup retention — by default, we keep all your backups for 3 rolling months. Perimeter: daily backups.
• Data backup replication — we keep data backups on 2 different and autonomous spaces, called filer storage. Perimeter: daily + on-demand backups.
• Data backup integrity — we perform the backup on a dedicated host (the backuper node) and stop the postgresql process during the process, so integrity is preserved. We don't perform integrity checks afterwards yet (but soon). Perimeter: daily + on-demand backups.
• WAL backup/retention — we perform continuous backups of WAL, limited to 3 rolling months, on Object Storage. Perimeter: all WAL from the primary node.
• Logs/metrics retention — we store logs for 1 rolling month and metrics for 1 year, and give you observability tools to access them. Perimeter: logs from the PostgreSQL process; metrics from all nodes.
• PITR feature — we keep all your WAL, allowing you PITR (see the restore entry below).
• Restoring a data backup — when you ask for a restore, you can request a backup ID or a specific day+hour. If you request a backup ID, we spawn an instance with your snapshot, in read-only, and provide you an IP and ports to connect to; you pay the same prices as OVH Public Cloud, and you are then free to do what you want (dump+restore on production, …). If you ask for a specific day+hour, we use the PITR feature. Perimeter: daily + on-demand backups.
30. 30
Observability tools — have a close look at your cluster
01 — Collect: we collect several kinds of data on your cluster (logs & metrics).
02 — Store: no extra cost. You don't have to do anything; we parse, store and expose your data for you, for 2 months.
03 — Profit: use industry standards to consume your data. We provide Graylog, Kibana and Grafana for this matter (open source).
32. 32
Management
• CLI — we provide a vanilla database with super admin access: use your standard commands!
• API — our OVH API allows you to start, resize or delete a cluster, handle backups and restores, whitelist IPs, …
• Web control panel — everything you can do through the API, but from a web interface. You will also be able to access the billing console and observability tools.
33. 33
PostgreSQL extensions
• On top of the PostgreSQL default extensions, we include:
– ip4r
– pglogical
– pgRouting
– PostGIS
– wal2json
• This list is growing, as our community can ask for more extensions from the PGDG repository.
35. 35
Pricing: how-to
1. Select a plan
2. Add some options if needed
3. Done! Everything else is included
Included: managed service; dedicated nodes (1 x primary, 1 x replica, 1 x backuper); RAID 10 storage with constant IOPS; in/out network traffic; backups (3 months).
Optional: additional replicas; snapshot restore.
36. 36
Estimated pricing tables
3 dedicated nodes are included by default (primary, replica, backuper).
Plans (RAM per node — IOPS per node — storage per node):
• 16 GB — to bench — 450 GB RAID 10
• 32 GB — to bench — 450 GB RAID 10
• 64 GB — to bench — 960 GB RAID 10
• 128 GB — to bench — 1.9 TB RAID 10
• 256 GB — to bench — 1.9 TB RAID 10
Price (€ excl. tax / month): we target prices below AWS RDS / Google Cloud SQL (simulating 1 TB data out + HA + standard storage). It includes in/out network traffic, compute with constant IOPS, storage, backups with 3 rolling months retention, observability, maintenance, …
Options (price per month):
• Replica (max 50): depends on the cluster size
• Backup restore: OVH Public Cloud instance price
37. 37
Pricing comparison with RDS: 16 GB cluster
• Needs: a PostgreSQL 11 cluster in the France region, with intra-region HA (at least 1 x primary + 1 x replica), up full time
– 16 GB RAM per node
– 450 GB storage per node
– Backups: 2 months (let's say 1 TB of storage)
OVH Enterprise Cloud DB: 1 x 16 GB cluster
Included:
• 3 x nodes (primary, replica, backuper)
• 3 months of backups
• In/Out traffic (unlimited)
• Storage: 450 GB RAID10 with constant IOPS (target min. 10,000), i.e. 900 GB usable
Total: $950 USD /month estimated. Please consult us for pricing.
AWS RDS, general purpose storage:
• Compute: 2 x db.m5.xlarge (single AZ): $600
• Storage: 450 GB: $119
• Backup ($0.095 per GB, 2 TB): $190
• Network In: free
• Network Out ($0.09 per GB, 1 TB): $90
Total: $999 USD /month
/!\ You will only get 1,350 IOPS at this price (GP = 3 IOPS per GB, occasional bursts possible)
AWS RDS, provisioned IOPS storage:
• Compute: 2 x db.m5.xlarge (single AZ): $600
• Storage: 450 GB: $130
• Provisioned IOPS (5,000): $1,160
• Backup ($0.095 per GB, 2 TB): $190
• Network In: free
• Network Out ($0.09 per GB, 1 TB): $90
Total: $2,170 USD /month
Made 07/05/2019. Prices from https://calculator.s3.amazonaws.com/index.html
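The RDS totals above are plain sums of the listed line items; a quick sanity check in Python, with the figures copied from the table (May 2019 prices):

```python
# Line items from the 16 GB RDS comparison above (USD per month, May 2019).
rds_general_purpose = {
    "compute: 2 x db.m5.xlarge (single AZ)": 600,
    "storage: 450 GB": 119,
    "backup: $0.095/GB for 2 TB": 190,
    "network out: $0.09/GB for 1 TB": 90,  # network in is free
}
rds_provisioned_iops = {
    "compute: 2 x db.m5.xlarge (single AZ)": 600,
    "storage: 450 GB": 130,
    "provisioned IOPS (5,000)": 1160,
    "backup: $0.095/GB for 2 TB": 190,
    "network out: $0.09/GB for 1 TB": 90,
}

total_gp = sum(rds_general_purpose.values())
total_piops = sum(rds_provisioned_iops.values())
print(total_gp, total_piops)  # 999 2170
```

Both sums match the quoted totals of $999 and $2,170 USD per month.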
38. 38
Pricing comparison with RDS: 32 GB cluster
• Needs: a PostgreSQL 11 cluster in the France region, with intra-region HA (at least 1 x primary + 1 x replica), up full time
– 32 GB RAM per node
– 450 GB storage per node
– Backups: 2 months (let's say 1 TB of storage)
OVH Enterprise Cloud DB: 1 x 32 GB cluster
Included:
• 3 x nodes (primary, replica, backuper)
• 3 months of backups
• In/Out traffic (unlimited)
• Storage: 450 GB RAID10 with constant IOPS (target min. 10,000), i.e. 900 GB usable
Total: $1,200 USD /month estimated. Please consult us for pricing.
AWS RDS, general purpose storage:
• Compute: 2 x db.m5.2xlarge (single AZ): $1,200
• Storage: 450 GB: $119
• Backup ($0.095 per GB, 2 TB): $190
• Network In: free
• Network Out ($0.09 per GB, 1 TB): $90
Total: $1,599 USD /month
/!\ You will only get 1,350 IOPS at this price (GP = 3 IOPS per GB, occasional bursts possible)
AWS RDS, provisioned IOPS storage:
• Compute: 2 x db.m5.2xlarge (single AZ): $1,200
• Storage: 450 GB: $130
• Provisioned IOPS (5,000): $1,160
• Backup ($0.095 per GB, 2 TB): $190
• Network In: free
• Network Out ($0.09 per GB, 1 TB): $90
Total: $2,770 USD /month, with 5,000 IOPS, which is still low
Made 07/05/2019. Prices from https://calculator.s3.amazonaws.com/index.html
40. 40
Estimated Roadmap
Timeline: Today (April), August, October, April 2020 (General Availability)
• Offers: early access today (infra & offers fine-tuning, API, …), then General Availability
• DBMS: PostgreSQL today; new DBMS (MariaDB, Redis, …) and more regions later
• Features: public documentation, private network (vRack), pg_hba, custom settings (timezone, …)
42. 42
F.A.Q.
Q: How big can the database be?
A: It depends on the selected offer. Our smallest offer provides ~900 GB of storage available for PostgreSQL. The offers above provide even more storage, such as 1.8 TB. At some point it may be wise to use sharding.
Q: How many queries per second can the cluster handle?
A: There is no better benchmark than your own use case. The number of requests per second depends a lot on the workload type (size of the requests, whether your whole dataset fits in RAM, amount of reads, amount of writes, CPU consumption, etc.).
43. 43
F.A.Q.
Q: Can I customize my PostgreSQL configuration?
A: Not on your side for now, but some selected settings will be available soon. Today each cluster is configured and optimized by OVH. The configuration is crafted based on years of experience running our core databases.
Q: Can I use extensions? Can I use homemade or patched extensions?
A: As extensions can impact cluster stability and therefore QoS, you may only install extensions from an OVH-vetted list. You cannot install or use homemade or patched extensions.
44. 44
F.A.Q.
Q: How are backups managed?
A: Backups are based on ZFS. They are made daily by OVH and can be done on demand via the OVH API. To avoid impacting production, we process backup jobs on an isolated host, then archive them to another location. The PostgreSQL service is stopped on this isolated host before taking the snapshot to ensure the consistency of the backup.
Q: Is the integrity of the backups checked? How?
A: Not at the moment. To fully ensure backup integrity we would need to know the business logic behind the data (which we don't have). We plan to test every single snapshot by restoring it on a test instance and starting the PostgreSQL service.
45. 45
F.A.Q.
Q: What is the backup frequency?
A: When activated, backups are created daily. You can also perform manual backups.
Q: What is the backup retention?
A: We keep backups for 3 rolling months.
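The rolling-retention rule above is easy to sketch; a minimal illustration in Python, where the 3-month window is approximated as 90 days (the function name and that approximation are ours, not OVH's actual tooling):

```python
from datetime import date, timedelta

def backups_to_keep(backup_dates, today, retention_days=90):
    """Keep only the backups inside the rolling retention window.
    retention_days: 3 rolling months, approximated here as 90 days."""
    cutoff = today - timedelta(days=retention_days)
    return sorted(d for d in backup_dates if d > cutoff)

# 120 days of daily backups: only the most recent 90 survive pruning.
today = date(2019, 7, 5)
history = [today - timedelta(days=i) for i in range(120)]
kept = backups_to_keep(history, today)
print(len(kept))  # 90
```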
46. 46
F.A.Q.
Q: Can I access all the backups?
A: Backups are available via the OVH API, and they can be restored on a dedicated host.
Q: Can we restore a database with PITR (point-in-time recovery)?
A: Yes, transaction logs are archived. This way we can, on demand, deploy a dedicated host with a restoration of your data accurate to the second.
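For reference, a WAL-based restore to a given point in time on a generic PostgreSQL 12+ host uses recovery settings along these lines (a generic sketch, not OVH's actual tooling; the archive path and timestamp are hypothetical):

```ini
# postgresql.conf on the restore host
# (an empty recovery.signal file in the data directory triggers recovery)
restore_command = 'cp /archive/wal/%f "%p"'        # hypothetical WAL archive location
recovery_target_time = '2019-07-05 12:34:56+00'    # restore "right to the second"
recovery_target_action = 'promote'                 # open read-write once reached
```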
47. 47
F.A.Q.
Q: How is the load distributed on the cluster?
A: Read requests are distributed across the cluster using a load balancer (HAProxy on top of dedicated hardware). Two endpoints are available: one for read-write requests, one for read-only. Only one server receives read-write requests; depending on your cluster's host count, one or more servers receive read-only requests. So, in nominal mode, we provide one host for read-write and one for read-only.
Q: Is the replication synchronous or asynchronous?
A: The replication is asynchronous. Synchronous replication will be available soon.
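Client-side, the two endpoints can be used by routing statements based on whether they write; a simplified sketch in Python (the endpoint host names are made up, and real SQL needs more careful classification, e.g. `SELECT ... FOR UPDATE` or data-modifying CTEs must go to the read-write endpoint):

```python
# Hypothetical endpoint names; the real ones come from the OVH control panel.
ENDPOINTS = {
    "read-write": "postgresql-rw.example.ovh.net",
    "read-only": "postgresql-ro.example.ovh.net",
}

READ_ONLY_FIRST_WORDS = {"select", "show", "explain"}

def endpoint_for(sql: str) -> str:
    """Send reads to the read-only endpoint (load-balanced across replicas)
    and everything else to the single read-write endpoint."""
    first_word = sql.lstrip().split(None, 1)[0].lower()
    kind = "read-only" if first_word in READ_ONLY_FIRST_WORDS else "read-write"
    return ENDPOINTS[kind]

print(endpoint_for("SELECT count(*) FROM users"))      # read-only endpoint
print(endpoint_for("UPDATE users SET active = true"))  # read-write endpoint
```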
48. 48
F.A.Q.
Q: How does a failed disk impact the cluster/host?
A: Disks are set up in RAID 10, so a single disk failure won't impact its host. Disks can be hot-swapped to avoid downtime during replacement. Therefore neither the host nor the cluster is impacted by a failed disk.
Q: How does data corruption impact my cluster?
A:
• Application corruption (malformed data inserted): this is not corruption per se, but rather malformed data. This type of "corruption" will be replicated across all nodes.
• Physical corruption: we use physical replication (based on WAL), which means that unless the corruption is written into the WAL files, it will not spread across the cluster.
49. 49
F.A.Q.
Q: If an issue happens, how does the failover work? What are the impacts?
A:
• If we lose a replica node (read access), there is no need for failover. OVH will detect the failure and recover the replica.
• If we lose the primary node (read/write access), an automatic failover happens: another node is elected as primary. During the failover, some connections may be closed and some write queries may fail, but only for a few seconds.
Q: Can I perform a "manual fallback", i.e. elect another node as "primary"?
A: No, because there is no use case that needs a manual fallback. For example, if there is a hardware issue on the primary node, OVH will detect it and perform the required operations to recover from it.
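On the application side, the few seconds of failover can be absorbed with a simple retry loop; a minimal sketch in Python (the helper and back-off values are ours, not part of the OVH service):

```python
import time

def run_with_retry(operation, retries=5, base_delay=0.5, sleep=time.sleep):
    """Retry an operation that may fail for a few seconds during automatic
    failover; `operation` raises ConnectionError while the failover runs."""
    for attempt in range(retries):
        try:
            return operation()
        except ConnectionError:
            if attempt == retries - 1:
                raise
            sleep(base_delay * 2 ** attempt)  # exponential back-off

# Simulated failover: the first two attempts hit a closed connection.
attempts = []
def flaky_insert():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("server closed the connection unexpectedly")
    return "INSERT 0 1"

result = run_with_retry(flaky_insert, sleep=lambda s: None)
print(result)  # INSERT 0 1
```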
50. 50
F.A.Q.
Q: Worst-case scenario: what happens if OVH has an electrical failure in the whole datacenter?
A: We deploy clusters on regions, which group multiple availability zones, which in turn group multiple datacenters:
• if a minority of nodes is down (< 50%), the cluster stays safe
• if a majority of nodes is down (>= 50%), the cluster goes into read-only mode
• if all nodes are down (= 100%), the cluster rejects connections
In the ca-west-qc-1 region (Canada), there is only one availability zone, with electrical connections shared across datacenters. An electrical failure there can take down the whole infrastructure (extremely rare).
Q: Worst-case scenario: what happens if a cluster goes down while write operations were ongoing, causing WAL corruption?
A: The textbook way to avoid this kind of WAL corruption is hardware RAID cards plus battery-backed UPS. Our experience showed that these RAID cards caused far more trouble than benefit (freezes, hazardous rebuilds, ...). If it happens, our cluster will detect the failure and promote another node as "primary". If physical corruption has been replicated across the cluster, we will have to restore a previous backup.
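The degradation policy described above (minority down: safe; majority down: read-only; all down: reject) can be written as a small decision function; a sketch in Python with our own naming:

```python
def cluster_mode(nodes_up: int, nodes_total: int) -> str:
    """Availability mode given how many nodes are reachable: a strict
    majority up keeps read-write, >= 50% down forces read-only, and
    100% down means connections are rejected."""
    if nodes_up == 0:
        return "reject-connections"
    nodes_down = nodes_total - nodes_up
    if 2 * nodes_down >= nodes_total:  # majority (>= 50%) of nodes down
        return "read-only"
    return "read-write"

# A 3-node cluster (primary, replica, backuper):
print(cluster_mode(3, 3))  # read-write
print(cluster_mode(2, 3))  # read-write (only a minority down)
print(cluster_mode(1, 3))  # read-only  (majority down)
print(cluster_mode(0, 3))  # reject-connections
```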
51. 51
F.A.Q.
Q: Worst-case scenario: after a cluster downtime, how long does it take to be back up?
A: If the cluster went down but is able to restart properly without data corruption or hardware failures, it takes about the same time as a reboot: a few minutes.
Q: How can I monitor my cluster? May I use Zabbix, Shinken, Nagios?
A: Yes, you can monitor everything related to PostgreSQL on the cluster using the provided connection. We recommend configuring a dedicated user for monitoring. Internally, we have everything in place to monitor each cluster; our solution is based on Shinken (mostly community plugins). Furthermore, we will soon provide observability tools such as Grafana and Graylog.
52. 52
F.A.Q.
Q: What are the exact maintenance tasks performed by OVH?
A: Our goal is to simplify our customers' lives. We deliver managed database clusters, which in terms of maintenance means:
• Linux OS security patches, distribution updates, distribution upgrades
• DBMS security patches and minor version patches
• Hardware monitoring and maintenance: disk failures, network failures, host failures, ...
DBMS major version upgrades have to be defined jointly and are not included. They could imply service costs if needed (migration scenarios, etc.).
Q: Where can I find all extensions available on my cluster?
A: While connected to your cluster, you can issue the following command:
SELECT name
FROM pg_available_extensions
ORDER BY name;
53. 53
F.A.Q.
Q: Where can I find the list of available extensions offline?
A: Here is the list of available extensions at the time of
writing: address_standardizer,
address_standardizer_data_us, adminpack,
amcheck, autoinc, bloom, btree_gin, btree_gist,
citext, cube, dblink, dict_int, dict_xsyn, earthdistance,
file_fdw, fuzzystrmatch, hstore, hstore_plpython2u,
hstore_plpythonu, insert_username, intagg, intarray,
ip4r, isn, jsonb_plpython2u, jsonb_plpythonu, lo,
ltree, ltree_plpython2u, ltree_plpythonu,
moddatetime, pageinspect, pg_buffercache,
pg_freespacemap, pg_prewarm,
pg_stat_statements, pg_trgm, pg_visibility, pgcrypto,
pglogical, pglogical_origin, pgrouting, pgrowlocks,
pgstattuple, plpgsql, plpython2u, plpythonu, postgis,
postgis_sfcgal, postgis_tiger_geocoder,
postgis_topology, postgres_fdw, refint, seg, sslinfo,
tablefunc, tcn, timetravel, tsm_system_rows,
tsm_system_time, unaccent, uuid-ossp, xml2.
54. 54
Contact us
Questions, feedback?
Feel free to contact us!
Request early access:
fill in the form linked below:
@bastienOVH
databases@ml.ovh.net
https://labs.ovh.com/ha-database