After some years, MySQL with Galera became the most common solution for synchronous replication. The cloud (and EC2 in particular) was one of the platforms that most successfully employed MySQL/Galera installations.
This year, with Aurora, Amazon introduced an alternative solution that uses all the flexibility of AWS and the simplicity of RDS.
This presentation describes the behavior of both MySQL/Galera and Aurora, showing the details of how the two solutions behave when dealing with the same load. We will highlight the strong points of each, and which represents the best tool depending on the needs of the situation.
Attendees will be able to make an informed decision about which kind of solution will be the most efficient with respect to their actual requirements.
Prototyping Data Intensive Apps: TrendingTopics.org - Peter Skomoroch
Hadoop World 2009 talk on rapid prototyping of data intensive web applications with Hadoop, Hive, Amazon EC2, Python, and Ruby on Rails. Describes the process of building the open source trend tracking site trendingtopics.org
Advanced Autoscaling for Kubernetes & Amazon ECS - DEM05 - Anaheim AWS Summit - Amazon Web Services
There are many things to consider when architecting a Kubernetes or Amazon EC2 Container Service (Amazon ECS) cluster to scale effectively, especially in heterogeneous environments composed of different machine types and sizes. To increase the cluster's efficiency, it is crucial to choose the right instance size and type for the right workload. In this talk, we discuss two important concepts of Kubernetes auto-scaling: headroom and two-level scaling. In addition, we review the different Kubernetes deployment tools, including Kubernetes Operations (kops).
From the AWS Briefing that took place at Croke Park, Dublin on 20 March 2014. A short tour around a small selection of the innovative uses for AWS services.
Developing Ansible Dynamic Inventory Script - Nov 2017 - Ahmed AbouZaid
A session about my experience writing an external inventory script from scratch for Netbox (the IPAM and DCIM tool from DigitalOcean's network engineering team) and pushing it upstream so that it became an official inventory script.
Repo:
https://github.com/AAbouZaid/netbox-as-ansible-inventory
Dynamic inventory is one of the nice features in Ansible: you can use an external service as the inventory for Ansible instead of the basic text-based INI file. So you can use AWS EC2 as the inventory of your hosts, or OpenStack, or whatever else; you can actually use any source as inventory for Ansible, and you can write your own external inventory script.
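As a minimal sketch of the protocol such a script implements (the group names, host names, and addresses below are invented for illustration), Ansible runs the script with "--list" and expects inventory JSON on stdout:

```python
#!/usr/bin/env python3
# Minimal Ansible dynamic inventory sketch: emits inventory JSON on --list.
# Group/host names here are illustrative; a real script would query an
# external service (e.g. Netbox or the EC2 API) instead of hardcoding them.
import json
import sys

def build_inventory():
    return {
        "web": {
            "hosts": ["web1.example.com", "web2.example.com"],
            "vars": {"http_port": 80},
        },
        # Serving hostvars under _meta lets Ansible skip calling the
        # script once per host with --host.
        "_meta": {
            "hostvars": {
                "web1.example.com": {"ansible_host": "10.0.0.11"},
                "web2.example.com": {"ansible_host": "10.0.0.12"},
            }
        },
    }

if __name__ == "__main__":
    if "--host" in sys.argv:
        # Per-host vars are already served via _meta above.
        print(json.dumps({}))
    else:
        print(json.dumps(build_inventory()))
```

Point Ansible at it with "ansible -i ./inventory.py all -m ping"; the script must be executable and must print valid JSON.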
Talk on Apache Spark I gave at Hyderabad Software Architects meetup on 20-Jan-2018.
Source code and commands are at
http://www.mediafire.com/file/tzmzahftxnabs0g/HSA-Spark-20-Jan-2018.zip
Federated Graphite in Docker - Denver Docker Meetup - Phil Zimmerman
Slides from a talk I gave at the Denver Docker Meetup about a federated Graphite cluster in Docker using Consul, Consul-template, Registrator, Mesos and Marathon.
A quick walkthrough of InfluxDB and the TICK Stack.
Telegraf (Collect), InfluxDB (Store), Chronograf (Visualize), and Kapacitor (Process).
- What is time series data?
- Why TICK Stack?
- Where could TICK Stack be used?
This presentation was inspired by reading "Time Series Databases" by Ted Dunning & Ellen Friedman.
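As a rough sketch of how the Collect-to-Store half of the stack is wired together (the interval, database name, and URL below are assumptions, not taken from the slides), a minimal Telegraf configuration might look like:

```toml
# Collect basic host metrics every 10s and write them to a local InfluxDB.
[agent]
  interval = "10s"

[[inputs.cpu]]
  percpu = false
  totalcpu = true

[[inputs.mem]]

[[outputs.influxdb]]
  urls = ["http://localhost:8086"]   # assumed local InfluxDB 1.x endpoint
  database = "telegraf"
```

Chronograf then reads from the same InfluxDB database, and Kapacitor subscribes to it for stream processing and alerting.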
I have tried to summarize a lot of the previous benchmarks; hope others find it useful. The slides were compiled in early 2015, so some of the results might have changed, but the core literature should still hold.
apidays LIVE Paris 2021 - Building an AWS EC2 Carbon Emissions Dataset by Ben... - apidays
apidays LIVE Paris 2021 - APIs and the Future of Software
December 7, 8 & 9, 2021
Building an AWS EC2 Carbon Emissions Dataset
Benjamin Davy, Innovation Manager at Teads
Initially presented at the OpenWest 2014 conference.
Graphite and StatsD gather time series data and offer a robust set of APIs to access that data. While the tools are robust, the dashboards are straight from 1992 and alerting off the data is nonexistent. Nark, an open-source project, solves both of these problems. It provides easy-to-use dashboards and readily available alerts and notifications to users. It has been used in production at Lucid Software for almost a year. Related to Nark are the tools required to make Graphite highly available.
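For context, the StatsD wire protocol that this tooling builds on is just plain text over UDP; a hand-rolled client (the host, port, and metric name below are illustrative assumptions) fits in a few lines:

```python
# Minimal StatsD-style client: metrics are plain-text datagrams over UDP.
# Server address and metric names are illustrative assumptions.
import socket

def send_metric(name: str, value: int, metric_type: str = "c",
                host: str = "127.0.0.1", port: int = 8125) -> bytes:
    """Send a StatsD datagram like b'page.views:1|c' and return the payload."""
    payload = f"{name}:{value}|{metric_type}".encode()
    # UDP is fire-and-forget: no connection, no ack, negligible app overhead.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))
    return payload
```

The type suffix selects the metric kind: "|c" for counters, "|ms" for timers, "|g" for gauges; StatsD aggregates these and flushes them to Graphite on an interval.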
How a Particle Accelerator Monitors Scientific Experiments Using InfluxDB - InfluxData
European XFEL is the creator of the strongest X-ray beam in the world. Its 3.4-km-long X-ray free-electron laser underground tunnel is used by researchers from around the world. Scientists use its facilities to map atomic details of viruses, film chemical reactions, and study the processes in the interior of planets. Discover how European XFEL uses InfluxDB to monitor its scientific experiments and research.
In this webinar, Alessandro Silenzi will dive into:
European XFEL’s approach to empowering the worldwide community to push the boundaries of science
The evolution of their data management solution — from homegrown to InfluxDB
How a time series platform is used to analyze and validate experiment data
Business Dashboards using Bonobo ETL, Grafana and Apache Airflow - Romain Dorgueil
A zero-to-one, hands-on introduction to building a business dashboard using Bonobo ETL, Apache Airflow, and a bit of Grafana (because graphs are cool). The talk is based on an early version of our tools to visualize the apercite.fr website. Plan, implement, visualize, monitor, and iterate from there.
Amazon Aurora is a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora is a disruptive technology in the database space, bringing a new architectural model and distributed systems techniques to provide far higher performance, availability and durability than previously available using conventional monolithic database techniques. In this session, we will do a deep-dive into some of the key innovations behind Amazon Aurora, discuss best practices and configurations, and share early customer experience from the field.
Leveraging Databricks for Spark Pipelines - Rose Toomey
How Coatue Management saved time and money by moving Spark pipelines to Databricks.
Talk given at AWS + Databricks ML Dev Day workshop in NYC on 27 February 2020.
UKOUG version of a presentation trying to establish the sensible limits of parallelism on a couple of hardware configurations. Detailed white paper is at http://oracledoug.com/px_slaves.pdf
It’s been an exciting year for Amazon Aurora, the MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. In this deep dive session, we’ll discuss best practices and explore new features, including high availability options and new integrations with AWS services. We’ll also discuss the recently announced Aurora with PostgreSQL compatibility.
TupleJump: Breakthrough OLAP performance on Cassandra and Spark - DataStax Academy
Apache Cassandra is rock-solid and widely deployed for OLTP and real-time applications, but it is typically not thought of as an OLAP database for analytical queries. This talk will show architectures and techniques for combining Apache Cassandra and Spark to yield a 10-1000x improvement in OLAP analytical performance. We will then introduce a new open-source project that combines the above performance improvements with the ease of use of Apache Cassandra, and compare it to implementations based on Hadoop and Parquet.
First, the existing Cassandra Spark connector allows one to easily load data from Cassandra to Spark. We'll cover how to accelerate queries through different caching options in Spark, and the tradeoffs and limitations around performance, memory, and updating data in real time. We then dive into the use of columnar storage layout and efficient coding techniques that dramatically speed up I/O for OLAP use cases. Cassandra features like triggers and custom secondary indexes allow for easy data ingestion into columnar format. Next, we explore how to integrate this new storage with Spark SQL and its pluggable data storage API. Future developments will enable extreme analytical database performance, including smart caching of column projections, a columnar version of Spark's Catalyst execution planner, and how vectorization makes for fast cache- and GPU-friendly calculations - see Spark's Project Tungsten.
FiloDB is a new open-source database using the above techniques to combine very fast Spark SQL analytical queries with the ease of use of Cassandra. We will briefly cover interesting use cases, such as:
* Easy exactly-once ingestion from Kafka for streaming and IoT applications
* Incremental computed columns and geospatial annotations. We'll discuss how FiloDB improves aggregations needed for choropleth maps over standard PostGIS solutions.
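To illustrate why the columnar storage layout discussed above speeds up analytical scans, here is a toy comparison (not FiloDB or Cassandra code) between row-oriented and column-oriented storage of the same data:

```python
# Toy illustration of row vs. columnar layout for an analytical scan.
# Not FiloDB/Cassandra code; it just shows why column scans touch less data.
from array import array

N = 1000

# Row-oriented: one dict per row, all columns interleaved in memory.
rows = [{"user_id": i, "amount": i * 2, "country": "NL"} for i in range(N)]

# Column-oriented: each column stored contiguously; 'amount' is a packed array.
columns = {
    "user_id": array("q", range(N)),
    "amount": array("q", (i * 2 for i in range(N))),
    "country": ["NL"] * N,
}

def total_row_oriented():
    # Must walk every row object even though only one field is needed.
    return sum(r["amount"] for r in rows)

def total_columnar():
    # Scans one contiguous, cache-friendly array; other columns untouched.
    return sum(columns["amount"])

assert total_row_oriented() == total_columnar() == 999000
```

The same aggregation reads far less memory in the columnar case, which is the core of the OLAP speedups claimed above; real engines add compression and vectorized execution on top.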
FiloDB - Breakthrough OLAP Performance with Cassandra and Spark - Evan Chan
You want to ingest event, time-series, and streaming data easily, yet have flexible, fast ad-hoc queries. Is this even possible? Yes! Find out how in this talk on combining Apache Cassandra and Apache Spark using a new open-source database, FiloDB.
How to Monitor and Size Workloads on AWS i3 instances - ScyllaDB
There is a new class of machines in town! Amazon recently unveiled i3, a class of instances targeted at I/O-intensive workloads. Scylla will officially support i3, and previews are already available.
Join our webinar to learn how to build a state-of-the-art database solution. Presenters Glauber Costa and Eyal Gutkind will cover how to:
- Determine which workloads can benefit from i3 instances
- Ensure Scylla fully leverages the great resources in the i3 family
- Effectively navigate the Scylla monitoring system and identify bottlenecks
You'll also see a live demonstration with a dashboard featuring an i3 cluster with different data models and workloads.
Consolidate MySQL Shards Into Amazon Aurora Using AWS Database Migration Serv... - Amazon Web Services
Learning Objectives:
- Learn how to scale your MySQL database at reduced cost and higher elasticity, by consolidating multiple shards into one Amazon Aurora cluster
- Learn about Amazon Aurora
- Learn about AWS Database Migration Service
If you’re running a MySQL database at scale, there’s a good chance you’re sharding your database deployment. Sharding is a useful way to increase the scale of your deployment, but it has drawbacks like higher costs, high administration overhead and lower elasticity. It’s harder to grow or shrink a sharded database deployment to match your traffic patterns. In this session, we will discuss and demonstrate how to use AWS Database Migration Service to consolidate multiple MySQL shards into an Amazon Aurora cluster to reduce cost, improve elasticity and make it easier to manage your database.
AWS and Open Cloud, All Things Open, 10/25/2013, Raleigh NC - Greg DeKoenigsberg
How does open cloud compete with AWS? By recognizing that AWS has won, and by duplicating its functionality and semantics as rapidly as possible to provide users with desperately needed choice.
Scaling an invoicing SaaS from zero to over 350k customers - Speck&Tech
ABSTRACT: Fatture in Cloud was born in late 2013 on a single-server machine and scaled from zero to 35k customers by the end of 2018. Then we faced the mandatory electronic invoicing that came into effect in Italy on 1st January 2019, and we experienced huge growth to 350k customers in a few months. In these 5 years I've learned a lot about cloud architecture, scalability, optimization, and DevOps, and we eventually achieved 99.99% uptime even during the huge growth period.
BIO: Daniele Ratti is the Founder and CEO of Fatture in Cloud, currently the leading invoicing platform in Italy, counting more than 350k customers.
Amazon Aurora is a MySQL-compatible database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. This session introduces you to Amazon Aurora, explains common use cases for the service, and helps you get started with building your first Amazon Aurora–powered application.
Amazon Aurora is a cloud-optimized relational database that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. The recently announced PostgreSQL compatibility, together with the original MySQL compatibility, is perfect for new application development and for migrations from overpriced, restrictive commercial databases. In this session, we’ll do a deep dive into the new architectural model and distributed systems techniques behind Amazon Aurora, discuss best practices and configurations, look at migration options and share customer experience from the field.
AWS June 2016 Webinar Series - Amazon Aurora Deep Dive - Optimizing Database ... - Amazon Web Services
Amazon Aurora is a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora is a disruptive technology in the database space, bringing a new architectural model and distributed system techniques to provide far higher performance, availability and durability than previously available using conventional monolithic database techniques. In this session, we will do a deep-dive into some of the key innovations behind Amazon Aurora, discuss best practices and configurations, and share customer experiences from the field.
Learning Objectives:
Learn how Amazon Aurora delivers 5x the performance and 1/10th the cost
Learn best practices for using Amazon Aurora
Spark Summit EU 2015: Lessons from 300+ production users - Databricks
At Databricks, we have a unique view into over a hundred different companies trying out Spark for development and production use-cases, from their support tickets and forum posts. Having seen so many different workflows and applications, some discernible patterns emerge when looking at common performance and scalability issues that our users run into. This talk will discuss some of these common issues from an engineering and operations perspective, describing solutions and clarifying misconceptions.
NEW LAUNCH! Introducing PostgreSQL compatibility for Amazon Aurora - Amazon Web Services
After we launched Amazon Aurora, a cloud-native relational database with region-wide durability, high availability, fast failover, up to 15 read replicas, and up to five times the performance of MySQL, many of you asked us whether we could deliver the same features - but with PostgreSQL compatibility. We are now delivering a preview of Amazon Aurora with this functionality: we have built a PostgreSQL-compatible edition of Amazon Aurora, sharing the core Amazon Aurora innovations with the object-oriented capabilities, language interfaces, JSON compatibility, ANSI SQL:2008 compliance, and broad functional richness of PostgreSQL. Amazon Aurora will provide full PostgreSQL compatibility while delivering more than twice the performance of the community PostgreSQL database on many workloads. In this session, we will discuss the newest addition to Amazon Aurora in detail.
Percona XtraDB Cluster (PXC) non-blocking operations, what you need to know t... - Marco Tusa
Performing simple DDL operations such as ADD/DROP INDEX in a tightly coupled cluster like PXC can become a nightmare. The metadata lock will prevent data modifications for long periods of time, and to bypass this we need to become creative, for instance using a rolling schema upgrade or Percona's online-schema-change. With NBO we will be able to avoid such craziness, at least for a simple operation like adding an index. In this brief talk I will illustrate the negative effects of NOT using NBO, as well as what you should do to use it correctly and what to expect out of it.
The constant pressure to move DATA into containers and Kubernetes is creating a lot of confusion and misunderstanding.
This is particularly dangerous when talking about Relational Database Management Systems.
MySQL, as well as Oracle, Postgres or SQL Server, is an RDBMS, and as such subject to the erroneous interpretation caused by this crazy new shiny thing that will supposedly solve everything. In this short talk we will clarify that, first of all, we are not looking at something new, and second, why we need to be very careful when talking about using Kubernetes and containers for RDBMSs.
Comparing high availability solutions with Percona XtraDB Cluster and Percona... - Marco Tusa
Percona XtraDB Cluster (PXC) is currently the most popular solution for HA in the MySQL ecosystem, and Galera-based solutions such as PXC have been the only viable option when looking for a high grade of HA using synchronous replication.
But Oracle has worked intensively on making Group Replication more solid and easier to use.
It is time to identify whether Group Replication and attached solutions, like InnoDB Cluster, can compete with or even replace solutions based on Galera.
This presentation will focus on comparing the two solutions and how they behave when serving basic HA problems.
Attendees will be able to get a clearer understanding of which solutions will serve them better, and in which cases.
Accessing data through Hibernate: what DBAs should tell developers and vic... - Marco Tusa
Accessing data through Hibernate: what DBAs should tell developers, by Marco Tusa & Francisco Bordenave
This presentation will go through the simple process of accessing data from a Java application: what actually happens when we use a simple direct connection, and what happens instead when using an ORM/persistence layer like Hibernate, and how the latter apparently makes programmers' lives easier and DBAs' days more difficult.
Best practice-high availability-solution-geo-distributed-final - Marco Tusa
Nowadays, implementing different grades of business continuity for the data layer is a common requirement. When designing architectures that include MySQL as a data layer, we have different options to cover the required target. Nevertheless, we still see a lot of confusion when it comes to properly covering concepts such as High Availability and Disaster Recovery, confusion that often leads to improper architecture design and wrong solution implementation. This presentation aims to remove that confusion and provide clear guidelines for designing a robust, flexible, resilient architecture for your data layer.
In this presentation I illustrate how and why InnoDB performs merges and splits of pages. I will also show what can be done to reduce the impact.
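As a toy model of the mechanism (not InnoDB's actual code): a page splits when an insert no longer fits, and merges with a neighbor when its fill factor drops below a threshold; InnoDB's default MERGE_THRESHOLD is 50%. The capacity and keys below are invented for illustration:

```python
# Toy model of B-tree page split/merge, loosely inspired by InnoDB's behavior.
# Real InnoDB pages are 16KB of records; here a "page" is a sorted list with
# a tiny fixed capacity, and the merge threshold is 50% like the default.
CAPACITY = 4
MERGE_THRESHOLD = 0.5

def insert(pages, key):
    """Insert key into the right page, splitting a page that overflows."""
    for i, page in enumerate(pages):
        if not page or key <= page[-1] or i == len(pages) - 1:
            page.append(key)
            page.sort()
            if len(page) > CAPACITY:          # overflow: split in half
                mid = len(page) // 2
                pages[i:i + 1] = [page[:mid], page[mid:]]
            return

def delete(pages, key):
    """Delete key; merge an under-filled page into its right neighbor."""
    for i, page in enumerate(pages):
        if key in page:
            page.remove(key)
            if (i + 1 < len(pages)
                    and len(page) < CAPACITY * MERGE_THRESHOLD
                    and len(page) + len(pages[i + 1]) <= CAPACITY):
                pages[i:i + 2] = [page + pages[i + 1]]
            return

pages = [[]]
for k in [5, 1, 9, 3, 7]:
    insert(pages, k)      # the 5th insert overflows the page and forces a split
assert pages == [[1, 3], [5, 7, 9]]
```

The cost the talk is about comes from exactly these events: each split or merge dirties multiple pages and can cascade, so workloads that insert in random key order pay far more than append-only ones.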
Robust HA Solutions - Native Support for PXC and InnoDB cluster in ProxySQL
This talk will illustrate and discuss several MySQL reference architectures that implement different grades of tightly coupled database clusters.
We will show how ProxySQL is a natural fit in all of them, and how easily it provides additional stability and functionality improvements.
Securing our data is a complex topic. We can build very strong protection around our data, but nothing will prevent someone who can potentially access it from compromising its integrity or exposing it.
This is because we either underestimate the control we can or should impose, or because we think we do not have the tools to perform such control.
Nowadays, being able to control and manage what can access our data is a must, while doing so with standard tools is a nightmare.
The presentation will guide you on a journey where you will discover how to implement quite robust protection, more than you thought was possible.
Even better, it is possible, and your performance will even improve. Cool, right?
We will discuss:
- Access using a non-standard port
- Implement selective query access
- Define accessibility by location/IP/id
- Reduce the cost of filtering to a minimum
- Automate the query discovery
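One way to sketch the "selective query access" idea above is with ProxySQL query rules (the talk does not name its tooling, and the user name and query pattern below are hypothetical), configured through ProxySQL's admin interface:

```sql
-- Hypothetical sketch: whitelist one query shape for user 'app_ro' and
-- reject everything else that user sends.
INSERT INTO mysql_query_rules (rule_id, active, username, match_digest,
                               destination_hostgroup, apply)
VALUES (10, 1, 'app_ro', '^SELECT .* FROM orders WHERE id = \?$', 1, 1);

-- Catch-all rule: anything not matched above gets an error instead of
-- being routed to the backend.
INSERT INTO mysql_query_rules (rule_id, active, username, error_msg, apply)
VALUES (20, 1, 'app_ro', 'Query not allowed', 1);

LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```

Because the matching happens on normalized query digests in the proxy, the filtering cost stays low and the application needs no changes.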
Are we there yet?? (The long journey of migrating from closed source to opens... - Marco Tusa
Migrating from Oracle to MySQL or another open-source RDBMS like Postgres is not as straightforward as many think, if not well guided. See what it means to do it with someone who has done it already.
MySQL 8 advanced tuning with resource groups - Marco Tusa
I have a very noisy secondary application, written by a very, very bad developer, that accesses my servers, mostly with read queries and occasionally with write updates. Reads and writes are obsessive and create an impact on the MAIN application. My task is to limit the impact of this secondary application without affecting the main one. To do that, I will create two resource groups, one for WRITE and another for READ. The first group, Write_app2, will have no CPU affinity, but will have the lowest priority.
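A sketch of what that setup could look like in MySQL 8 syntax (the second group's name and both priority values are assumptions based on the description above; only Write_app2 is named in it):

```sql
-- Lowest-priority group for the noisy writer; no VCPU list, so no CPU affinity.
-- For USER resource groups THREAD_PRIORITY ranges 0..19, where 19 is lowest.
CREATE RESOURCE GROUP Write_app2 TYPE = USER THREAD_PRIORITY = 19;
CREATE RESOURCE GROUP Read_app2  TYPE = USER THREAD_PRIORITY = 15;

-- Bind a whole session to a group, or a single statement via optimizer hint.
SET RESOURCE GROUP Write_app2;
SELECT /*+ RESOURCE_GROUP(Read_app2) */ * FROM orders WHERE id = 42;
```

The main application's sessions stay in the default group, so its threads keep normal priority while the secondary application's work is deprioritized.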
Advanced Sharding Solution with ProxySQL
ProxySQL is a very powerful platform that allows us to manipulate and manage our connections and queries in a simple but effective way.
Historically, MySQL lacks sharding capability. This significant missing part has often caused developers to implement sharding at the application level, or DBAs/SAs to move on to another solution.
ProxySQL comes with an elegant and simple solution that allows us to implement sharding with MySQL without the need for significant changes in the code, or any at all.
This brief presentation will illustrate how to successfully configure and use ProxySQL to perform sharding, from a very simple approach based on connection user/IP/port, to complicated ones that need to read values inside the queries.
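The two ends of that spectrum can be sketched against ProxySQL's admin interface (user names, hostgroup numbers, and key ranges below are hypothetical):

```sql
-- Simple end: user-based sharding, each application user pinned to a shard
-- (hostgroup) at connection time.
INSERT INTO mysql_users (username, password, default_hostgroup)
VALUES ('shard_emea', 'secret', 10),
       ('shard_apac', 'secret', 20);

-- Complicated end: route by a value read inside the query text.
INSERT INTO mysql_query_rules (rule_id, active, match_pattern,
                               destination_hostgroup, apply)
VALUES (30, 1, 'WHERE customer_id BETWEEN 1 AND 499999', 10, 1),
       (31, 1, 'WHERE customer_id BETWEEN 500000 AND 999999', 20, 1);

LOAD MYSQL USERS TO RUNTIME;
LOAD MYSQL QUERY RULES TO RUNTIME;
```

The application keeps connecting to a single endpoint; the proxy decides which backend hostgroup actually serves each connection or query.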
Geographically dispersed Percona XtraDB Cluster deployment - Marco Tusa
Geographically Dispersed Percona XtraDB Cluster Deployment
Percona XtraDB Cluster is a very robust, high-performing and widely used solution to answer High Availability needs. But it can be very challenging when we need to deploy the cluster over a geographically dispersed area.
This presentation will briefly discuss the right approach to successfully deploy PXC when covering multiple geographical sites, close and far.
- What PXC is, and what happens in a set of nodes on commit
- Let us clarify "geo-dispersed"
- What to keep in mind
- How to measure it correctly
- Use the right approach (sync/async)
- Use helpers like replication_manager
This presentation shows how ProxySQL can improve HA in solutions like MySQL async and sync replication, without the need to increase platform complexity.
Scaling with sync replication using Galera and EC2 - Marco Tusa
Challenging architecture design, and a proof of concept on a real case study using a synchronous solution.
A customer asked me to investigate and design a MySQL architecture to support his application serving shops around the globe, scaling out and scaling in according to sales seasons.
Have you ever wondered how search works while visiting an e-commerce site, internal website, or searching through other types of online resources? Look no further than this informative session on the ways that taxonomies help end-users navigate the internet! Hear from taxonomists and other information professionals who have first-hand experience creating and working with taxonomies that aid in navigation, search, and discovery across a range of disciplines.
This presentation by Morris Kleiner (University of Minnesota), was made during the discussion “Competition and Regulation in Professions and Occupations” held at the Working Party No. 2 on Competition and Regulation on 10 June 2024. More papers and presentations on the topic can be found out at oe.cd/crps.
This presentation was uploaded with the author’s consent.
Sharpen existing tools or get a new toolbox? Contemporary cluster initiatives... - Orkestra
UIIN Conference, Madrid, 27-29 May 2024
James Wilson, Orkestra and Deusto Business School
Emily Wise, Lund University
Madeline Smith, The Glasgow School of Art
This presentation, created by Syed Faiz ul Hassan, explores the profound influence of media on public perception and behavior. It delves into the evolution of media from oral traditions to modern digital and social media platforms. Key topics include the role of media in information propagation, socialization, crisis awareness, globalization, and education. The presentation also examines media influence through agenda setting, propaganda, and manipulative techniques used by advertisers and marketers. Furthermore, it highlights the impact of surveillance enabled by media technologies on personal behavior and preferences. Through this comprehensive overview, the presentation aims to shed light on how media shapes collective consciousness and public opinion.
Acorn Recovery: Restore IT infra within minutesIP ServerOne
Introducing Acorn Recovery as a Service, a simple, fast, and secure managed disaster recovery (DRaaS) by IP ServerOne. A DR solution that helps restore your IT infra within minutes.
0x01 - Newton's Third Law: Static vs. Dynamic Abusers - OWASP Beja
If you offer a service on the web, odds are that someone will abuse it. Be it an API, a SaaS, a PaaS, or even a static website, someone somewhere will try to figure out a way to use it for their own needs. In this talk we'll compare measures that are effective against static attackers and how to battle a dynamic attacker who adapts to your countermeasures.
About the Speaker
===============
Diogo Sousa, Engineering Manager @ Canonical
An opinionated individual with an interest in cryptography and its intersection with secure software development.
2. Percona Live Europe 2016
Comparing synchronous replication solutions in the cloud
Marco Tusa, Manager Consulting
Amsterdam, Netherlands | October 3 – 5, 2016
3. About me
— Marco “The Grinch”
— Open source enthusiast
4. Overview
A quick overview of recent tests done in AWS comparing EC2 with PXC and Aurora.
About the tests: http://goo.gl/d2Wq06
About Aurora: https://goo.gl/o32HaV
21. Conclusions
— For small installations Aurora is not a good fit; PXC worked better.
— As the workload scales and it makes sense to use larger boxes, Aurora showed better results, scaling as expected.
— Aurora still has some (known) issues:
— Hard limit of 16k connections
— When using a hotspot it is easy to hit the 158 error problem (no, it is not the full-text error)
— I am sure that I hit some connector issues here and there; exploring alternatives to the MariaDB solution may be a good thing to keep in mind (for the Aurora dev team).
I like to provide my customers with alternatives; Aurora is one, but …