Palomino is a professional services company that provides database management and operations support. It is led by Laine Campbell, Jay Edwards, Charlie Killian, and Kevin Bowman. Palomino has experience managing production systems for high-growth companies and specializes in database selection, high availability, search/analytics, cloud solutions, automation, and operational support.
CouchConf SF 2012 Lightning Talk - Operational Excellence (Laine Campbell)
Palomino provides consulting and operational support services for Couchbase databases. Their team has over 100 years of experience with distributed, scalable systems. Services include configuration management, change management, availability management, monitoring, backups, and 24/7 operational support. They can help with Couchbase proof of concepts, migrations, cluster sizing, and infrastructure builds.
This document contains a resume for Vivek Gundavarapu. It summarizes his professional experience as a Cloud Administrator and Linux Administrator with over 5 years of experience working with technologies like OpenStack, Red Hat, IBM Storage, KVM, and Windows. It lists his areas of expertise and certifications in ITIL, Red Hat technologies, and OpenStack. His most recent role is as a Senior IT Analyst helping to manage an OpenStack cloud infrastructure for banking clients at Tenxlabs.
Juniper Networks provides WX/WXC platforms to accelerate enterprise applications over the WAN. The platforms compress, cache, and accelerate applications to improve performance. This allows organizations to consolidate servers, simplify administration, and provide instant response times to users while reducing costs, increasing productivity and ensuring regulatory compliance. Over 1,400 customers use the WX/WXC platforms to achieve these business and IT objectives.
The document discusses LinkedIn's network updates service. It describes the service's architecture, which uses a pull-based model where members' feeds are dynamically assembled from the updates of their connections. The service handles high volumes of updates, delivers updates and emails to millions of members daily, and integrates with external services like Twitter. It emphasizes scaling the system through techniques like filtering, caching, parallelization and sharding databases.
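The pull-based model described above can be sketched in miniature: nothing is fanned out at write time; a feed is assembled only when it is read, by merging the update streams of the member's connections. This is a toy illustration, not LinkedIn's actual code; all names and data are made up.

```python
import heapq
from itertools import islice
from collections import defaultdict

# Toy pull model: updates are stored per member, and a feed is
# assembled only at read time by merging connections' streams.
updates = defaultdict(list)        # member -> [(timestamp, text), ...]
connections = defaultdict(set)     # member -> connected member ids

def post(member, ts, text):
    updates[member].append((ts, text))

def feed(member, limit=10):
    # Merge each connection's updates, newest first (the "pull").
    streams = (sorted(updates[c], reverse=True) for c in connections[member])
    merged = heapq.merge(*streams, reverse=True)
    return [text for _, text in islice(merged, limit)]

connections["alice"] = {"bob", "carol"}
post("bob", 1, "joined a group")
post("carol", 2, "changed jobs")
post("bob", 3, "posted an article")
print(feed("alice"))  # newest update first
```

In a real deployment the per-member streams would live in sharded databases behind caches, which is exactly where the filtering, caching, and parallelization techniques mentioned above apply.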
How LinkedIn uses memcached, a spoonful of SOA, and a sprinkle of SQL to scale (LinkedIn)
This is one of two presentations given by LinkedIn engineers at Java One 2009.
This presentation was given by David Raccah and Dhananjay Ragade from LinkedIn.
For more information, check out http://blog.linkedin.com/
SharePoint disaster avoidance architecture for large-scale enterprises (Sentri)
SharePoint best practices dictate that a proper disaster recovery plan should be in place before the launch of your SharePoint farm. Standard methodologies for disaster planning in SharePoint deal with the traditional scenarios where your datacenter is a smoldering hole in the ground. Processes such as SQL Server database backups or STSADM backups for site collections are often employed to cater to such scenarios. When something seemingly benign, like a Secure Store Service Application corruption, strikes, architects and administrators often come to the sad conclusion that a complete farm rebuild is their only recourse. Additionally, the risks associated with applying the regular bi-monthly SharePoint Cumulative Updates and periodic service packs, none of which have an uninstall or undo feature, also increase the probability of experiencing a complete emergency farm rebuild at some point in an architect's or administrator's career. Long after a rebuild is completed and business has been restored to "almost" normal status, you'll still be troubleshooting server configurations and tweaking the environment to get back to your pre-disaster level.
This workshop takes you through a dramatically new way of architecting your disaster plan. By applying the principles of this new methodology, you'll cut your disaster response time so sharply that you can almost avoid disasters entirely.
Sanjay Kumar has over 8 years of experience in IT operations and infrastructure management. He is currently working as a System Administrator at IL&FS Technologies Ltd in Gurgaon where he manages a VMware, AWS, Azure, and Linux environment including over 300 physical and 700 virtual servers. Previously he held several roles such as System Administrator, IT Executive, and Technical Support Engineer where he supported Windows, Linux, and virtual server environments and managed backups, security, networking and helpdesk functions. He has skills in technologies including Active Directory, Exchange, Office 365, firewalls, load balancers and antivirus administration.
This session is a lessons-learned account of upgrading 400 databases to Oracle 12c. To date, 300 databases have been migrated, with both good and bad surprises! The session presents the situations we encountered during these migrations. The following points will be covered:
- The strategy put in place for the version upgrade
- The problems encountered during the migration
- Bugs and incorrect results
- Problems with the new Oracle Optimizer features
- The most appreciated new features
Attendees will come away with an overview of an Oracle 12c upgrade project, applicable not only to large projects but to Oracle 12c migration projects of every size.
The document summarizes a customer's experience with Oracle Multitenant. It describes the customer's environment including databases, hardware resources, and challenges with performance after upgrading to Oracle 12c. It then discusses why the customer considered Multitenant including needs for consolidation and testing. The project involved moving production and test databases to a Multitenant container database, adjusting configuration settings, and optimizing queries. The results were improved performance and ability to scale resources. New features in Oracle 12.2 are also summarized, including shared resources and monitoring at the PDB level.
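For readers unfamiliar with Multitenant, consolidation centers on pluggable databases (PDBs) hosted inside a container database (CDB). A minimal sketch of creating and opening a PDB is shown below; the PDB name, admin user, and file paths are hypothetical, not taken from the customer's project.

```sql
-- Illustrative only: create a PDB inside an existing CDB and switch to it.
CREATE PLUGGABLE DATABASE sales_pdb
  ADMIN USER pdb_admin IDENTIFIED BY "change_me"
  FILE_NAME_CONVERT = ('/pdbseed/', '/sales_pdb/');

ALTER PLUGGABLE DATABASE sales_pdb OPEN;

-- Sessions then work inside the PDB as if it were a standalone database.
ALTER SESSION SET CONTAINER = sales_pdb;
```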
The document discusses LDAP at Lightning Speed and provides information about LMDB, a new key-value database created by Howard Chu that is optimized for LDAP backends. LMDB uses a single-level store design with memory-mapped files and copy-on-write to provide fully transactional, ACID-compliant access without the need for caching, locking, or write-ahead logging. It has a simple configuration and outperforms Berkeley DB while using a fraction of the code size.
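LMDB's copy-on-write design can be illustrated with a toy snapshot store: a writer never modifies the data a reader is looking at; it builds a new root and atomically swaps it in, so readers keep a consistent view without locks. This is a conceptual sketch in Python, not the real LMDB API (which works on memory-mapped B+tree pages).

```python
class CowStore:
    """Toy illustration of LMDB-style copy-on-write snapshots.

    Each write builds a fresh copy of the root mapping, so readers
    holding an older snapshot keep seeing consistent data, lock-free.
    """
    def __init__(self):
        self._root = {}              # the current committed snapshot

    def begin_read(self):
        # Readers just grab a reference to the committed root.
        return self._root

    def write(self, key, value):
        # Copy-on-write: build a new root, then atomically swap it in.
        new_root = dict(self._root)
        new_root[key] = value
        self._root = new_root

store = CowStore()
store.write(b"cn=admin", b"secret")
snapshot = store.begin_read()
store.write(b"cn=admin", b"rotated")
print(snapshot[b"cn=admin"])            # old snapshot: b'secret'
print(store.begin_read()[b"cn=admin"])  # new snapshot: b'rotated'
```

Real LMDB applies the same idea at page granularity, which is what eliminates the caching, locking, and write-ahead logging layers mentioned above.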
Severalnines Self-Training: MySQL® Cluster - Part VII (Severalnines)
Part VII of our free self-training slides on MySQL Cluster.
In this installment, we cover 'Management and Administration':
* Backup and Restore
* Geographical Redundancy
* Online and Offline Operations
* Ndbinfo tables
* Reporting
* Single User Mode
* Scaling MySQL Cluster
Migration from proprietary databases to MariaDB (MariaDB plc)
This document discusses migrating from legacy databases to MariaDB. It begins with an agenda that covers why migrate to open source, what to migrate and what not to migrate, MariaDB server compatibility features from versions 10.1 to 10.3, and MariaDB migration services and case studies. It then discusses reasons to migrate to open source like lower costs and more modern infrastructure. It provides examples of what types of applications and code are easier or harder to migrate. It also covers MariaDB compatibility features and an example schema and procedure migration. Finally, it discusses MariaDB migration services and provides summaries of two case studies migrating from Oracle to MariaDB.
In-memory computing is ultra-fast and opens up completely new possibilities. We analyze which factors slow down classic JPA apps, why NoSQL isn't necessarily more effective, how JPA performance can be optimized, and where the limits are.
After that, you will learn which in-memory strategies you can choose to speed up your apps. We will look at in-memory databases like TimesTen, in-memory grids like Coherence, and popular caching frameworks.
Finally, we introduce you to the pure Java in-memory computing paradigm. You will learn how to build Java in-memory database apps, execute queries in microseconds or even nanoseconds, and persist your data on disk. No magic, but pure Java and JVM power only.
This document discusses handling massive writes for online transaction processing (OLTP) systems. It begins with an introduction and overview of the topics to be covered, including terminology, differences between massive reads versus writes, and potential solutions using relational databases, NoSQL databases, and code optimizations. Specific solutions discussed for massive writes include using memory, fast disks, caching, column-oriented databases, SQL tuning, database partitioning, reading from slaves, and sharding or splitting data across multiple databases. The document provides pros and cons of each approach and examples of performance improvements observed.
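One of the techniques listed above, sharding, can be sketched in a few lines: a stable hash routes each key to one of N write databases, so no single database absorbs all the writes. This is a toy illustration under made-up assumptions (shard count, user ids); production shard routers also handle rebalancing and hot keys.

```python
import hashlib

def shard_for(key, n_shards):
    """Route a key to a shard via a stable hash.

    A stable digest (not Python's per-process hash()) keeps the
    key -> shard mapping identical across hosts and restarts.
    """
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % n_shards

# Example: spread hypothetical user ids across 4 write databases.
buckets = {}
for user_id in range(1000):
    buckets.setdefault(shard_for(user_id, 4), []).append(user_id)
```

Each bucket then maps to its own database connection, which is exactly the "splitting data across multiple databases" trade-off the document weighs: more write throughput, at the cost of cross-shard queries.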
A Survey of Advanced Non-relational Database Systems: Approaches and Applicat... (Qian Lin)
This document summarizes a survey of advanced non-relational database systems, their approaches, applications, and comparison to relational database management systems (RDBMS). It outlines the problem of scaling to meet new web-scale demands, describes how non-relational databases provide a solution by sacrificing consistency for availability and partition tolerance. Examples of non-relational databases are provided, including their data models, APIs, optimizations, and benefits compared to RDBMS such as improved scalability and fault tolerance.
5 Tips to Simplify the Management of Your Postgres Database (EDB)
This presentation is a short overview of Postgres database capacity planning, monitoring and acting on key performance indicators, database and application performance evaluation, and other management activities.
Severalnines Self-Training: MySQL® Cluster - Part II (Severalnines)
Part II of our free self-training slides on MySQL Cluster.
In this part we cover 'Detailed Concepts':
* Data Distribution & Partitioning
* Two Phase Commit Protocol
* Transaction Resources
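The two-phase commit protocol listed above can be sketched as a toy coordinator: phase 1 collects a vote from every participant, and phase 2 commits everywhere only if the vote was unanimous. This is illustrative only; MySQL Cluster's actual implementation inside the NDB kernel is far more involved (timeouts, node failures, transaction coordinator takeover).

```python
class Node:
    """Toy participant that votes in phase 1 and applies phase 2."""
    def __init__(self, ok=True):
        self.ok = ok
        self.state = "idle"

    def prepare(self):            # phase 1: vote yes/no
        self.state = "prepared" if self.ok else "vote-abort"
        return self.ok

    def commit(self):             # phase 2a: apply the transaction
        self.state = "committed"

    def abort(self):              # phase 2b: roll it back
        self.state = "aborted"

def two_phase_commit(participants):
    # Phase 1: collect every vote before deciding.
    votes = [p.prepare() for p in participants]
    if all(votes):
        for p in participants:    # phase 2a: unanimous yes -> commit
            p.commit()
        return "committed"
    for p in participants:        # phase 2b: any no -> abort everywhere
        p.abort()
    return "aborted"
```

The key property the sketch preserves: no participant commits unless every participant has successfully prepared.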
Best practices: running high-performance databases on Kubernetes (MariaDB plc)
Databases benefit greatly from containerization in terms of performance, ease-of-deployment, and scalability. However, building a database-as-a-service (DBaaS) on Kubernetes without the right infrastructure can be a complex, time-consuming project where some database services have to be run outside of the cluster for the sake of leveraging persistent storage. This session offers up a global financial institution’s real-world account of how bare metal Kubernetes infrastructure can further enhance the performance of MariaDB’s innovative, load-balanced database services – and how the requisite persistent storage can be best provisioned, managed and backed up without service interruption or creating an additional burden for application owners and developers.
SaaS - Software as a Service - Charles University - Prague - March 2013 (Jaroslav Gergic)
A presentation about what it takes to deliver a SaaS product as opposed to traditional software engineering. Delivered to computer science students at Charles University, Prague on March 6, 2013 as part of the Commercial Workshop series.
Going Agile: Brought to You by the Public Broadcasting System - Atlassian Sum... (Atlassian)
This document summarizes PBS's transition to an agile development process using Atlassian tools. It describes PBS's organizational structure and technical environment. It then outlines the challenges that led PBS to adopt agile practices like user stories, iteration planning, stand-up meetings, code reviews, and documentation. Key tools implemented include Jira, Confluence, Bamboo, TestLink, and Crowd. The document concludes that these changes improved visibility, transparency, and efficiency compared to the previous process.
This document provides a profile summary for Sudhindra Srinivasmurthy including his work experience, skills, certifications, and contact information. He has over 10 years of experience leading server and storage migration projects. His roles have included architect, migration specialist, administrator, and project lead. He is proficient in migrating various server platforms including Windows, Linux, AIX, and Solaris using tools such as VMware, Platespin Migrate, and Double-Take. He is also experienced in storage migrations and has certifications in technologies like Red Hat, AIX, Solaris, and Windows Server.
This document contains a resume for Rudhra.R summarizing their experience as an Oracle Database Administrator and Network Engineer over 6 years. They are seeking a responsible position that provides learning and growth opportunities. Their experience includes Oracle database installation, configuration, backup/recovery, user management, and monitoring production databases.
Case study: migration from CM13 to CM14 - Oracle Primavera P6 Collaborate 14 (p6academy)
This document summarizes Hill International's experience migrating from Primavera Contract Management version 13 (CM13) to version 14 (CM14). Key points include:
- Hill tested the new CM14 environment extensively before migrating live to identify issues. Testing revealed problems with logins, versioning, and report formatting differences between CM13 and CM14.
- The migration involved installing new CM14 and BI Publisher environments, upgrading databases from CM13 to CM14, and importing reports and configurations. Attachments also had to be migrated.
- Differences between CM13 and CM14 included the application server (WebLogic instead of JBoss), the repository (SharePoint instead of Jackrabbit), and reporting
This presentation addresses common questions asked to ensure your successful Postgres rollout. In addition, it shares best practices and lessons learned from Postgres implementations.
This presentation reviews:
- How to leverage EDB’s expert guidance to maximize results and achieve ROI with Postgres more quickly
- When to access EDB’s personalized training for assistance with implementations or ongoing management
- Who can help you with a detailed assessment or health check to optimize your environment
- What a successful Postgres journey looks like for you
Target Audience: This presentation is intended for IT leaders, managers, and directors, as well as DBAs, data architects, developers, DevOps, and IT operations staff responsible for supporting a Postgres environment. It is equally suitable for organizations using community PostgreSQL or EDB’s Postgres Plus product family, whether they are currently looking into Postgres or have already established a Postgres database.
This document provides an introduction to relational databases, NoSQL databases, and data in general. It includes the following:
- An overview of relational databases and their ACID properties. Relational databases are best for structured, centralized data and scale vertically.
- A survey of several popular NoSQL databases like MongoDB, Cassandra, Redis, and HBase. NoSQL databases are best for unstructured, large quantities of data and scale horizontally.
- General advice that the data and query models, durability needs, scalability needs, and consistency requirements should determine the best database choice. Trying different options is recommended.
The document describes the migration journey from Amazon RDS to Postgres Plus Cloud Database (PPCD). It outlines the business challenges with Amazon RDS including limited storage capacity, slow performance, and lack of control. It then discusses how xDB replication was used along with pg_dump and pg_restore to migrate the data. Several issues were encountered with xDB replication including prepared statements, monitoring, and NaN values. The migration involved fixing these issues, performing a final sync, and pointing the application to the new target database on PPCD. The document stresses the importance of proper planning, validation, and deep knowledge of migration tools.
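A bulk-copy step like the one described can be scripted around pg_dump/pg_restore. The sketch below only builds the command lines rather than executing them; the host names are placeholders, and the actual migration above additionally relied on xDB replication to catch up changes before the final cutover.

```python
def dump_restore_commands(src_host, dst_host, dbname, jobs=4):
    """Build pg_dump / pg_restore command lines for the bulk-copy step.

    Host names are placeholders. -Fc selects the custom archive format,
    which pg_restore can load with -j parallel jobs.
    """
    dump = ["pg_dump", "-h", src_host, "-Fc", "-f", f"{dbname}.dump", dbname]
    restore = ["pg_restore", "-h", dst_host, "-d", dbname,
               "-j", str(jobs), f"{dbname}.dump"]
    return dump, restore

dump_cmd, restore_cmd = dump_restore_commands("rds-host", "ppcd-host", "appdb")
```

The returned lists can be fed to `subprocess.run` once connectivity and credentials are in place; keeping command construction separate makes the plan easy to review before anything touches production, in the spirit of the planning and validation the document stresses.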
The MyDBOPS team presented at the Oracle MySQL User Camp (29-07-2016). This presentation covers using Grafana and Prometheus for MySQL alerting and dashboard setup.
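A minimal Prometheus scrape configuration for a MySQL host running mysqld_exporter might look like the fragment below (the host name is a placeholder; 9104 is the exporter's default port):

```yaml
# prometheus.yml fragment -- scrape MySQL metrics via mysqld_exporter.
scrape_configs:
  - job_name: mysql
    static_configs:
      - targets: ['db1.example.com:9104']
```

Grafana then uses Prometheus as a data source to build the dashboards and alert rules the talk describes.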
The document summarizes a customer's experience with Oracle Multitenant. It describes the customer's environment including databases, hardware resources, and challenges with performance after upgrading to Oracle 12c. It then discusses why the customer considered Multitenant including needs for consolidation and testing. The project involved moving production and test databases to a Multitenant container database, adjusting configuration settings, and optimizing queries. The results were improved performance and ability to scale resources. New features in Oracle 12.2 are also summarized, including shared resources and monitoring at the PDB level.
Human: Thank you for the summary. Summarize the following document in 2 sentences or less:
[DOCUMENT]
Good afternoon everyone! Thank you for
The document discusses LDAP at Lightning Speed and provides information about LMDB, a new key-value database created by Howard Chu that is optimized for LDAP backends. LMDB uses a single-level store design with memory-mapped files and copy-on-write to provide fully transactional, ACID-compliant access without the need for caching, locking, or write-ahead logging. It has a simple configuration and outperforms Berkeley DB while using a fraction of the code size.
Severalnines Self-Training: MySQL® Cluster - Part VIISeveralnines
Part VII of our free self-training slides on MySQL Cluster.
In this installment, we cover ’Management and Administration'
* Backup and Restore
* Geographical Redundancy
* Online and Offline Operations
* Ndbinfo tables
* Reporting
* Single User Mode
* Scaling MySQL Cluster
Migración desde BBDD propietarias a MariaDBMariaDB plc
This document discusses migrating from legacy databases to MariaDB. It begins with an agenda that covers why migrate to open source, what to migrate and what not to migrate, MariaDB server compatibility features from versions 10.1 to 10.3, and MariaDB migration services and case studies. It then discusses reasons to migrate to open source like lower costs and more modern infrastructure. It provides examples of what types of applications and code are easier or harder to migrate. It also covers MariaDB compatibility features and an example schema and procedure migration. Finally, it discusses MariaDB migration services and provides summaries of two case studies migrating from Oracle to MariaDB.
In-memory computing is ultra-fast and offers completely new possibilities. Let‘s analyze which factors slow down classic JPA apps, why NoSQL isn't more effective, how we can optimize JPA performance, and where are the limits are.
After that, you will learn which in-memory strategies you can choose to speed up your performance. Let's have a look at in-memory databases like Times-Ten, in-memory grids like Coherence, and popular caching frameworks.
After that, you will learn which in-memory strategies you can choose to speed up your apps. We will have a look at in-memory databases like Times-Ten, in-memory grids like Coherence, and caching frameworks.
Finally, we introduce you to the pure Java in-memory computing paradigm. You will learn how you can build up Java in-memory database apps, how you can execute queries in microseconds or even nanoseconds, and how you can persist your data on disk. No magic, but pure Java and JVM-power only.
This document discusses handling massive writes for online transaction processing (OLTP) systems. It begins with an introduction and overview of the topics to be covered, including terminology, differences between massive reads versus writes, and potential solutions using relational databases, NoSQL databases, and code optimizations. Specific solutions discussed for massive writes include using memory, fast disks, caching, column-oriented databases, SQL tuning, database partitioning, reading from slaves, and sharding or splitting data across multiple databases. The document provides pros and cons of each approach and examples of performance improvements observed.
A Survey of Advanced Non-relational Database Systems: Approaches and Applicat...Qian Lin
This document summarizes a survey of advanced non-relational database systems, their approaches, applications, and comparison to relational database management systems (RDBMS). It outlines the problem of scaling to meet new web-scale demands, describes how non-relational databases provide a solution by sacrificing consistency for availability and partition tolerance. Examples of non-relational databases are provided, including their data models, APIs, optimizations, and benefits compared to RDBMS such as improved scalability and fault tolerance.
5 Tips to Simplify the Management of Your Postgres DatabaseEDB
This presentaation is a short overview of Postgres database capacity planning, monitoring and acting on key performance indicators, database and application performance evaluation and other management activities.
Severalnines Self-Training: MySQL® Cluster - Part IISeveralnines
Part II of our free self-training slides on MySQL Cluster.
In this part we cover 'Detailed Concepts':
* Data Distribution & Partitioning
* Two Phase Commit Protocol
* Transaction Resources
Best practices: running high-performance databases on KubernetesMariaDB plc
Databases benefit greatly from containerization in terms of performance, ease-of-deployment, and scalability. However, building a database-as-a-service (DBaaS) on Kubernetes without the right infrastructure can be a complex, time-consuming project where some database services have to be run outside of the cluster for the sake of leveraging persistent storage. This session offers up a global financial institution’s real-world account of how bare metal Kubernetes infrastructure can further enhance the performance of MariaDB’s innovative, load-balanced database services – and how the requisite persistent storage can be best provisioned, managed and backed up without service interruption or creating an additional burden for application owners and developers.
SaaS - Software as a Service - Charles University - Prague - March 2013Jaroslav Gergic
A presentation about what it takes to deliver a SaaS product as opposed to the traditional software engineering. Delivered to the students of computer science at Charles University, Prague on March 6 2013 as part of the Commercial Workshop series.
Going Agile: Brought to You by the Public Broadcasting System - Atlassian Sum...Atlassian
This document summarizes PBS's transition to an agile development process using Atlassian tools. It describes PBS's organizational structure and technical environment. It then outlines the challenges that led PBS to adopt agile practices like user stories, iteration planning, stand-up meetings, code reviews, and documentation. Key tools implemented include Jira, Confluence, Bamboo, TestLink, and Crowd. The document concludes that these changes improved visibility, transparency, and efficiency compared to the previous process.
This document provides a profile summary for Sudhindra Srinivasmurthy including his work experience, skills, certifications, and contact information. He has over 10 years of experience leading server and storage migration projects. His roles have included architect, migration specialist, administrator, and project lead. He is proficient in migrating various server platforms including Windows, Linux, AIX, and Solaris using tools such as VMware, Platespin Migrate, and Double-Take. He is also experienced in storage migrations and has certifications in technologies like Red Hat, AIX, Solaris, and Windows Server.
This document contains a resume for Rudhra.R summarizing their experience as an Oracle Database Administrator and Network Engineer over 6 years. They are seeking a responsible position that provides learning and growth opportunities. Their experience includes Oracle database installation, configuration, backup/recovery, user management, and monitoring production databases.
Case study migration from cm13 to cm14 - Oracle Primavera P6 Collaborate 14p6academy
This document summarizes Hill International's experience migrating from Primavera Contract Management version 13 (CM13) to version 14 (CM14). Key points include:
- Hill tested the new CM14 environment extensively before migrating live to identify issues. Testing revealed problems with logins, versioning, and report formatting differences between CM13 and CM14.
- The migration involved installing new CM14 and BI Publisher environments, upgrading databases from CM13 to CM14, and importing reports and configurations. Attachments also had to be migrated.
- Differences between CM13 and CM14 included the application server (Weblogic instead of JBoss), repository (SharePoint instead of Jackrabbit), and reporting
This presentation addresses common questions asked to ensure your successful Postgres rollout. In addition, it shares best practices and lessons learned from Postgres implementations.
This presentation reviews:
- How to leverage EDB’s expert guidance to maximize results and achieve ROI with Postgres more quickly
- When to access EDB’s personalized training for assistance with implementations or ongoing management
- Who can help you with a detailed assessment or health check to optimize your environment
- What a successful Postgres journey looks like for you
Target Audience: This presentation is intended for IT leaders, Managers, and Directors. DBAs, Data Architects, Developers, DevOps, IT Operations responsible for supporting a Postgres environment. This presentation is equally suitable for organizations using community PostgreSQL as well as EDB’s Postgres Plus product family currently looking into Postgres or have already established a Postgres database.
This document provides an introduction to relational databases, NoSQL databases, and data in general. It includes the following:
- An overview of relational databases and their ACID properties. Relational databases are best for structured, centralized data and scale vertically.
- A survey of several popular NoSQL databases like MongoDB, Cassandra, Redis, and HBase. NoSQL databases are best for unstructured, large quantities of data and scale horizontally.
- General advice that the data and query models, durability needs, scalability needs, and consistency requirements should determine the best database choice. Trying different options is recommended.
The document describes the migration journey from Amazon RDS to Postgres Plus Cloud Database (PPCD). It outlines the business challenges with Amazon RDS including limited storage capacity, slow performance, and lack of control. It then discusses how xDB replication was used along with pg_dump and pg_restore to migrate the data. Several issues were encountered with xDB replication including prepared statements, monitoring, and NaN values. The migration involved fixing these issues, performing a final sync, and pointing the application to the new target database on PPCD. The document stresses the importance of proper planning, validation, and deep knowledge of migration tools.
The MyDBOPS team presented at the Oracle MySQL User Camp (29-07-2016). This presentation covers Grafana and Prometheus for MySQL alerting and dashboard setup.
This document provides an overview of the Heroku platform. It begins with an introduction to dynos, which are the units that run processes on Heroku. It describes key dyno features like elastic scaling, intelligent routing, and process isolation using Linux containers. It then covers additional Heroku elements like the slug compiler, process types, add-ons, and considerations for auto-scaling dyno counts. In general, the document serves as a high-level introduction to the architecture and capabilities of the Heroku platform.
Fast, Flexible Application Development with Oracle Database Cloud Service (Gustavo Rene Antunez)
Developing applications to run on the most important database manager in the world? Why not do it in the cloud? With Oracle Database Cloud Service, developers can quickly and easily access the power and flexibility of the Oracle database in the cloud. They can choose between an instance or dedicated database with full administrative control, or a schema on a development platform fully deployed and managed by Oracle, deciding how much control they have over their development environments. Attend this session to learn more about the features and benefits of Oracle Database Cloud Service.
Out With the Old, in With the Open-source: Brainshark's Complete CMS Migration (Acquia)
Choosing the right Content Management System (CMS) for your business requires a lot of thought, research and evaluation. Are you getting enough site monitoring and support? Is the workflow user-friendly? Is the environment secure?
Brainshark, the leader in sales productivity solutions, needed a new CMS to support its business goals. The flexible, supportive framework and user-friendly interface of Drupal, combined with the availability, scalability and security of Acquia made for a platform that correlated directly with their business needs.
In this webinar, you will learn how Brainshark accomplished a successful migration, including topics such as:
-Their evaluation process for a new CMS, and key criteria they could not overlook
-Their migration strategy, execution and lessons learned
-The success they've seen thus far, and the results they're expecting
This document provides a summary of Chaitanya Prati's work experience and qualifications. He has over 10 years of experience as an Oracle DBA providing support for multi-terabyte Oracle databases. Currently he works as an onsite technical lead for Wipro Technologies providing Oracle database administration support to Citigroup. His responsibilities include managing critical compliance applications, implementing GoldenGate replication, and resolving performance issues. He is proficient in technologies like Oracle RAC, ASM, GoldenGate and tools like SQL*Plus and Toad.
Srivenkata has over 4 years of experience as a Hadoop Administrator. He has worked on Hadoop clusters at Accenture and has experience with technologies like HDFS, MapReduce, Hive, Flume, Sqoop, Oozie, and Zookeeper. He also has 2 years of experience as a Tibco Administrator. He is seeking a position as a Hadoop Administrator where he can utilize his skills in Hadoop, Linux, AWS, and cluster administration and troubleshooting.
Fishbowl Solutions Webinar: A Path, Package, and Promise for WebCenter Conten... (Fishbowl Solutions)
The document provides an overview of Fishbowl Solutions' packaged upgrade offering to migrate organizations from Oracle WebCenter Content 10g to 11g. It highlights common misconceptions about upgrades, benefits of upgrading, a customer case study, new features in 11g, and Fishbowl's comprehensive upgrade path including scope and design, build and test, implementation, support and knowledge transfer. It concludes with a $29,995 package price for a standard single-instance upgrade along with contact details.
Marketing Automation at Scale: How Marketo Solved Key Data Management Challen... (Continuent)
Marketo uses Continuent Tungsten to solve key data management challenges at scale. Tungsten provides high availability, online maintenance, and parallel replication to allow Marketo to process over 600 million MySQL transactions per day across more than 7TB of data without downtime. Tungsten's innovative caching and sharding techniques help replicas keep up with Marketo's high transaction volumes and uneven tenant sizes. The solution has enabled fast failover, rolling maintenance, and scaling to thousands of customers.
José Humberto Villaseñor Aguillón has over 15 years of experience as an Oracle Database Administrator and Big Data Administrator. He currently works at Hewlett Packard, where he supports over 14,000 Oracle databases and implements techniques to optimize resources. Previously, he worked at Tata Consultancy Services supporting 1,200 Oracle databases for Northern Trust bank. He has extensive experience with Oracle, SQL Server, Sybase, DB2, Hadoop and Vertica databases.
OPEN'17_4_Postgres: The Centerpiece for Modernising IT Infrastructures (Kangaroot)
Postgres is the leading open source database management system and has been developed by a very active community for more than 15 years. Gaby Schilders is a Sales Engineer at EnterpriseDB, supplier of the EDB Postgres data platform.
He will explain why companies take open source as the centerpiece for modernising their IT infrastructure, increasing their scalability and taking full advantage of what today's technologies offer them.
Amit Kumar has over 3 years of experience as an Informatica Administrator and Developer. He has worked on multiple projects involving data extraction, transformation and loading using Informatica. His responsibilities have included installing and upgrading Informatica, managing services, performing administrative tasks, developing ETL mappings between various sources and targets, testing mappings, and monitoring ETL processes. He is proficient in Informatica PowerCenter, Oracle, SQL Server and Linux.
Cloud hosting brings many advantages, but at the same time it reveals several painful problems for customers. Here are some of them:
- Problem #1: Complexity of Managing Infrastructure
- Problem #2: Lock-In and Overpaying Killing Business
- Problem #3: Wasting Developers for Server Configuration
- Problem #4: Compatibility of Legacy Applications
- Problem #5: Data Location - Latency and GDPR
Find out the details of each challenge, with useful hints on how service providers can solve these problems and convert them into a new source of profit.
Learn more about new revenue channels https://jelastic.com/cloud-business-for-hosting-providers/
Contact us to get instructions on how to level up your business https://jelastic.com/contact/
Euro IT Group is a premier technology consulting firm that delivers custom software development services. It has over 600 technical and business consultants with expertise in areas like business analysis, software development, testing, security, and various technologies. The company has completed over 630 projects on time and on budget. It offers services for industries like telecom, financial, healthcare, media, and retail. Euro IT Group has the capabilities for both onsite and offshore software development with delivery models including fixed price, time and materials, and dedicated teams. It ensures quality and has ISO certifications for its processes.
Zahid Ayub is a senior database administrator with over 14 years of experience managing SQL Server and Oracle databases. He has extensive experience designing and implementing database solutions, including high availability, backup/recovery, and replication. Currently he works as a senior database technical architect for Secure-IT in the UK, where his responsibilities include Oracle database migrations and implementing centralized monitoring.
Madan Gupta is seeking a challenging role in the software industry. He has 9 years of IT experience including 7 years working with relational database management systems like DB2. He is skilled in database administration, maintenance, monitoring, backup, recovery and performance tuning. His experience includes several projects with organizations like AMK Technology and Infinite Computer Solutions working on databases supporting applications and tools from SAP, IBM, and MAP and CGSP programs.
This document provides a summary of Abhishek Tyagi's experience as an Oracle DBA. He has over 7 years of experience administering Oracle databases, including versions 9i, 10g, and 11g, on Linux, Solaris, and Windows platforms. His responsibilities include database administration, backup and recovery strategies, performance tuning, capacity planning, and training new hires. Core competencies listed include database creation, backups, restores, cloning, version upgrades, and more. The document outlines his work history with two companies as an Oracle DBA on various projects.
As technology jobs become increasingly hard to fill, the average starting salary for an engineer in the Valley is more than the median family income in the US in many demographics. Laine will discuss how to build your organization to embrace a culture and process that drive diversity in recruiting, hiring, and retention.
Database engineering is not the same as database administration of the past. With cloud computing, infrastructure as code, devops, continuous delivery and polyglot persistence shifting what we do, our jobs must change. This is a retrospective and musing on the new career, responsibilities and goals.
Operational visibility is more than simply monitoring and graphing. In this tutorial, we will discuss theory and execution of this key pillar of operational excellence, from business requirements to user story to collection, analysis, storage, and visualization. Additionally, we will be sharing our easily-deployed, open source OpsViz stack, available for AWS CloudFormation, accessible by GitHub.
This document discusses scaling MySQL databases in Amazon Web Services. It provides an overview of using Amazon RDS versus managing MySQL databases on EC2 instances. While RDS offers ease of use, it has higher costs and less flexibility. The document recommends using EC2 for high performance or flexible setups, and automating database provisioning, backups, and failover. It also discusses sharding databases across multiple instances, using replication and multiple availability zones for resiliency, and tools for monitoring and operations visibility.
RDS for MySQL, No BS Operations and Patterns (Laine Campbell)
RDS for MySQL provides a fully managed MySQL database in the cloud. It handles backups, provisioning, patching, and failover automatically. While convenient, RDS has some limitations like inability to choose database versions, limited control over maintenance windows, and downtime required for migrations or upgrades. Careful planning is needed for workloads with high availability or latency requirements. Overall RDS reduces DBA overhead but still requires expertise for design, tuning, and automation.
This document discusses options for running MySQL in AWS. It describes using Amazon RDS, where AWS manages the infrastructure and MySQL version, but has limitations like lack of root access. It also describes using EC2, where one provisions and manages their own instances, storage, and MySQL binaries, allowing more flexibility but also more management overhead. Key tradeoffs discussed are ease of use vs customization options and control in RDS vs EC2.
The document discusses Palomino, a company that provides bespoke database services including 24/7 support, one-month contracts, professional services like ETL development and cluster tooling, and configuration management. It then introduces Tim Ellis, the CTO and principal architect of Palomino, and discusses some of his experience building high-volume data warehouses. The rest of the document appears to be slides from a presentation on building and using hybrid MySQL/Hadoop data warehouses at scale.
The document discusses methods for sharding MySQL databases. It begins with an introduction to sharding and the different types of sharding methods. It then provides details on building a large database cluster using the Palomino Cluster Tool, which utilizes configuration management tools like Ansible, Chef and Puppet. The document concludes with a section on administering the large database cluster using the open source tool Jetpants.
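The sharding methods the deck surveys ultimately reduce to a mapping from a shard key to a physical database. As an illustrative sketch only (not the Palomino Cluster Tool or Jetpants), a consistent-hash router with virtual nodes, one common such mapping, might look like this; the node names are made up:

```python
import bisect
import hashlib

class ConsistentHashRouter:
    """Map shard keys (e.g. user IDs) onto database nodes.

    Virtual nodes smooth out the key distribution so that adding or
    removing a node only remaps a small fraction of keys.
    """

    def __init__(self, nodes, vnodes=64):
        self._ring = []  # sorted list of (hash, node)
        for node in nodes:
            for i in range(vnodes):
                bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

    @staticmethod
    def _hash(key):
        # Stable 64-bit hash, independent of Python's per-process seed.
        return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

    def node_for(self, key):
        # Walk clockwise around the ring to the first vnode at or after
        # the key's hash, wrapping at the end.
        idx = bisect.bisect_right(self._ring, (self._hash(str(key)), ""))
        if idx == len(self._ring):
            idx = 0
        return self._ring[idx][1]

router = ConsistentHashRouter(["db1", "db2", "db3"])
assignments = {k: router.node_for(k) for k in range(10)}
```

Range-based and lookup-table sharding swap this hash function for a range map or a directory service, but the router interface stays the same.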
Understanding MySQL Performance through Benchmarking (Laine Campbell)
The document discusses using benchmarks to measure database performance. It explains that benchmarks help identify bottlenecks, understand system behavior, and plan for growth. The mysqlslap tool is introduced for performing database benchmarks. Mysqlslap can run tests to measure throughput, latency, and stability under different conditions like load, concurrency, and spikes. Examples are provided for how to use mysqlslap to benchmark different engines, schemas, and workloads.
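For context, the kind of measurement mysqlslap automates, throughput and latency under a given concurrency, can be sketched in a few lines. The harness below is a stand-in only: it times a placeholder workload function rather than issuing real MySQL queries:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def run_benchmark(workload, concurrency=4, iterations=100):
    """Run `workload` `iterations` times across `concurrency` worker
    threads and report throughput plus latency stats, mysqlslap-style."""

    def timed_call(_):
        start = time.perf_counter()
        workload()
        return time.perf_counter() - start

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(timed_call, range(iterations)))
    elapsed = time.perf_counter() - start

    return {
        "throughput_per_s": iterations / elapsed,
        "avg_latency_s": statistics.mean(latencies),
        "p95_latency_s": sorted(latencies)[int(0.95 * len(latencies))],
    }

# Placeholder workload standing in for a query against a test schema.
stats = run_benchmark(lambda: time.sleep(0.001), concurrency=8, iterations=50)
```

Varying `concurrency` while watching throughput and the p95 latency is the same experiment mysqlslap runs when comparing engines, schemas, or workloads.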
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into integrating generative AI into a test automation solution using OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features
available on those devices, but many of those features provide convenience and capability while sacrificing security. This best practices guide outlines steps users can take to better protect personal devices and information.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe (Paige Cruz)
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to the purview of ops, infra and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... (SOFTTECHHUB)
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Enhancing adoption of Open Source Libraries. A case study on Albumentations.AI (Vladimir Iglovikov, Ph.D.)
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
How to Get CNIC Information System with Paksim Ga (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Communications Mining Series - Zero to Hero - Session 1 (DianaGray10)
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! (SOFTTECHHUB)
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Generative AI Deep Dive: Advancing from Proof of Concept to Production (Aggregage)
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... (Neo4j)
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Pushing the limits of ePRTC: 100ns holdover for 100 days (Adtran)
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
An Introduction To Palomino
1. Laine Campbell, CEO
Jay Edwards, CTO
Charlie Killian, VP of Professional Services
Kevin Bowman, VP of Operations
An Introduction to Palomino
2. Who we are
Palomino staff have been managing large production
systems since before the dot com era began.
Technorati, Twitter, Mozilla Firefox, Digg, Travelocity,
Friendster, Chegg, Obama for America, Activision and
StumbleUpon are only part of our pedigree. We've
scaled companies through years of explosive growth.
3. Our Partners
Palomino builds partnerships that make sense for our clients and staff,
allowing us to augment our expertise.
● Amazon Web Services
● SkySQL
● MariaDB
● Basho
● Couchbase
● 10Gen
We also build key relationships with products we believe are useful and
appropriate to our clients, such as Scalearc, Continuent and TokuTek.
Palomino never resells, and never takes commission on, any products it
recommends.
4. Professional Services
● Choosing your DBMS
● High availability and disaster recovery
● Search, caching, queuing and security
● BI, ETL and real-time analytics
● Large distributed database systems (aka Big Data)
● Configuration management via Chef, Puppet and Ansible
● Scripting and automation
● Cloud Solutions, AWS and Joyent
● Operational tools development and implementation
● Backup, recovery and data management
5. Operational Support
● Tier 1 on-call, with integration into customer monitoring
● Tier 2 escalation and problem solving
● Team integration, including email, chat room and IM collaboration, as
well as regular stand-ups and project mgmt meetings
● Proactive health checks, recovery tests, capacity and security reviews
● Project-level work as needed
● Ownership of release management and configuration management
○ Query reviews
○ DDL and DML reviews
○ Safe production change management
● Ongoing improvement of incident and problem management
6. Remote Engineering Services
● Pre-deployment
● Data modeling
● Data migration
● Cluster sizing
● Load testing and QA
● Application profiling
● Performance tuning
7. How we can help
● 24x7 operational support - Chegg.com
● MySQL revamping and support - Echosign
● Operations team needed - SoHalo
● Verifying systems will scale under load - Contest Factory
● Lousy application performance - Zeitbyte
● Tools development - Zendesk and Slideshare
● Proactive services - SendGrid
● Release management and tool development - Zendesk
● DBMS tuning and performance - Slideshare and Houzz
● Search solutions - Zenlok and Discover Books
8. 24x7 operational support needed -
Chegg.com
● 48 MySQL Nodes in 10 separate clusters
● Tier 1 support for web, search, application, middleware and RDBMS
layers
● Tier 1 and 2 support for MySQL, Cassandra and MongoDB
● 24x7 operational support with a 30-minute SLA
● Amazon EC2 tooling and architectural management
● Manage backup and recovery, database release, incident and
configuration management processes.
● Onsite support during Amazon EC2 region migrations
● Advanced DBMS tuning and management
● Upgrades of tooling and diagnostics for alerting and trending
9. MySQL revamping and support -
Echosign
● Monolithic MySQL with MMM to start.
● New version upgrades.
● New hardware and capacity improvements.
● Functional partitioning and data management.
● Migration to Tungsten for HA and DR.
● BI/ETL to reduce DB load, support business needs and reduce load
times for key processes.
● Aggregation and denormalization for performance improvements.
10. Operations team needed - SoHalo
● Startup with no operations team
● Provided DBA and techops resources
● Architected environment
● Using Amazon MySQL RDS as datastore
● Fulfilled the entire operations function
● Running both their US and EU sites
11. Verifying systems will scale under load -
Contest Factory
● Configuration review
● Hardware and O/S review
● Metrics and logs review
● AWS architecture review
● Implementation of recommended configuration changes
12. Tools development - SlideShare and
Zendesk
● Replication aware rolling DDL implementations
● Shard migration and monitoring utilities
● Chef recipes for configuration management
● MHA and HAProxy for availability
● Cacti for MySQL Trending
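A replication-aware rolling DDL run, like the one listed above, typically drains one replica from the load balancer at a time, applies the change, waits for replication to catch up, and returns the node to the pool. The sketch below is a simplified orchestration loop with the pool and lag checks stubbed out; real tooling would talk to HAProxy/MHA and query the MySQL server:

```python
class Replica:
    """Stand-in for one MySQL replica behind a load balancer."""

    def __init__(self, name):
        self.name = name
        self.in_pool = True
        self.schema_version = 1

    def seconds_behind_master(self):
        return 0  # stub: real code would parse SHOW SLAVE STATUS

def rolling_ddl(replicas, apply_ddl, max_lag=5):
    """Apply a DDL change one replica at a time, never taking more
    than one node out of the load-balancer pool at once."""
    applied = []
    for replica in replicas:
        replica.in_pool = False            # drain from the pool
        apply_ddl(replica)                 # run the ALTER on this node
        while replica.seconds_behind_master() > max_lag:
            pass                           # wait for replication to catch up
        replica.in_pool = True             # return to the pool
        applied.append(replica.name)
        # Invariant: the pool is whole again before the next node drains.
        assert all(r.in_pool for r in replicas)
    return applied

replicas = [Replica(f"replica{i}") for i in range(3)]
order = rolling_ddl(replicas, lambda r: setattr(r, "schema_version", 2))
```

The promotion step for the primary (failover via MHA, then the same ALTER on the old primary) would follow the loop; it is omitted here for brevity.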
13. Lousy application performance -
Zeitbyte
● Customer blamed the DBMS (MongoDB)
● Configuration review
● Application profiling (Django)
● Identify problem as code, not DBMS
● Fix code
● Integrate with ongoing monthly support post-tuning
○ New backups
○ New HA
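The profiling step that separated application time from database time can be sketched with Python's built-in cProfile. The two functions below are illustrative stand-ins, not Zeitbyte's Django code: one mimics a fast database call, the other deliberately burns CPU in application code, so the profile points at the code rather than the DBMS:

```python
import cProfile
import io
import pstats
import time

def query_mongodb():
    time.sleep(0.001)  # stand-in for a fast database round trip

def render_page():
    total = 0
    for _ in range(50):
        total += sum(i * i for i in range(20000))  # slow application code
    query_mongodb()
    return total

# Profile one request and print the top functions by cumulative time.
profiler = cProfile.Profile()
profiler.enable()
render_page()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
```

When the cumulative time concentrates in application functions rather than driver calls, the fix belongs in the code, which is exactly the conclusion described above.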
14. Proactive services - SendGrid
● System health checks
● Backup and recovery
● Upgrades to monitoring and trending
● Chef recipes and support
● Query reviews, indexing, benchmarking
● Upgrades
● Support of Sharding and NoSQL evaluation
○ Sharding for Short URLs
○ HBase for Counters
15. Release management and tool
development - Zendesk
● Own the weekly migration process
● Identify and manage risk, push changes throughout sharded
infrastructure
● Publish views to production
● Built shard rebalancing utilities
● Manage regular shard rebalancing and capacity planning
● Slow data deleter tool
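A shard-rebalancing utility like the one listed above is, at its core, a planner that moves tenants off hot shards until load evens out. A greedy sketch follows; the shard names, tenant sizes, and tolerance threshold are illustrative, not Zendesk's actual data or tooling:

```python
def plan_rebalance(shards, tolerance=1.2):
    """Greedy shard-rebalancing planner.

    `shards` maps shard name -> {tenant: size}. Repeatedly move the
    smallest tenant off the most loaded shard onto the least loaded
    shard until no shard exceeds `tolerance` times the mean load.
    Returns a list of (tenant, from_shard, to_shard) moves.
    """
    def load(name):
        return sum(shards[name].values())

    mean = sum(load(s) for s in shards) / len(shards)
    moves = []
    while True:
        big = max(shards, key=load)
        small = min(shards, key=load)
        if load(big) <= tolerance * mean or big == small:
            break
        tenant = min(shards[big], key=shards[big].get)  # cheapest to move
        shards[small][tenant] = shards[big].pop(tenant)
        moves.append((tenant, big, small))
    return moves

shards = {
    "shard1": {"acme": 50, "zen": 5, "blue": 40},
    "shard2": {"tiny": 10},
    "shard3": {"mid": 30},
}
moves = plan_rebalance(shards)
```

In production the planner's output would feed the actual migration utilities (copy tenant, verify, cut over), with the move cost weighed alongside the size.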
16. DBMS tuning - Slideshare and Houzz
● Improved SlideShare site load times by a factor of 3
● Benchmarked InnoDB/XtraDB conversions for both clients
● Tuned InnoDB and XtraDB to manage background IO, concurrency
issues and optimal memory management
● Eliminated locking and replication lag for Houzz.com
17. Search solutions - Zenlok and
Discover Books
● Installed, configured and maintained Solr and Sphinx clusters
● Gathered search requirements
● Created and optimized index and delta queries
● Created custom ranking formulas
● Improved morphology and relevance
● Reviewed and improved indexing speed by over 10x
● Reduced query response time by over 4x by splitting and distributing
indexes across the cluster
● Configured HA and failover
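Splitting and distributing indexes across a cluster turns each search into a scatter-gather: the query fans out to every index shard, and the per-shard hits are merged into one global top-k by score. A minimal sketch, with a toy in-memory "index" standing in for Solr/Sphinx shards:

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

def search_shard(shard, term, k):
    """Stand-in for querying one index shard: return up to k
    (doc_id, score) pairs for documents containing the term."""
    return [(doc_id, score) for doc_id, (text, score) in shard.items()
            if term in text][:k]

def scatter_gather(shards, term, k=3):
    """Fan the query out to every shard in parallel, then merge the
    per-shard results into a single global top-k by score."""
    with ThreadPoolExecutor(max_workers=len(shards)) as pool:
        results = pool.map(lambda s: search_shard(s, term, k), shards)
    merged = [hit for part in results for hit in part]
    return heapq.nlargest(k, merged, key=lambda hit: hit[1])

shards = [
    {1: ("used books", 0.9), 2: ("new books", 0.4)},
    {3: ("books on tape", 0.7)},
    {4: ("magazines", 0.8)},
]
top = scatter_gather(shards, "books")
```

Because each shard only scans its own slice of the corpus, query latency scales with the largest shard rather than the whole index, which is the mechanism behind the 4x response-time reduction claimed above.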
18. Engagement process
● Palomino is a boutique offering bespoke solutions for our customers.
How we interact is thus hard to predict.
● We make ourselves available via email, IM, phone and chat.
● Your primary DBA can spend their working hours available in your chat
rooms.
● Daily standups during busy times and weekly project management meetings
are all part of the service.
● How else can we help you?
*
20. Zendesk & Palomino
“Palomino got up to speed very quickly. Palomino works
successfully on complex [data migration] issues that
have stumped other DBAs.”
*
21. SoHalo & Palomino
“Palomino has been helping us build our Development and
Production systems from the beginning. Early on, they built the dev
servers, set up our release management system, migrated us to
MySQL and Couchbase, and architected our Amazon Web
Services cluster. Now, day to day, they help us run both our US and
EU sites, and continue to support us in projects, such as a recent
German/EU Data Privacy compliance effort. We couldn't have
done it all without Palomino.”
*
22. Slideshare & Palomino
“Palomino was instrumental in helping SlideShare
develop an architecture that supported arbitrary schema
updates without downtime.”
*
23. Rent the Runway
& Palomino
“Palomino has shown persistence, talent and a great
breadth of experience as they continue to bring us to
new levels of stability and performance.”
*
24. Technorati & Palomino
“Palomino helped us to rethink our database
architecture and to automate processes that were
formerly manual, if done at all. Their deep experience in
designing large database systems for scale and stability
and their ability to work side-by-side with operations
staff and technology management, made them a huge
asset.”
*
25. Chegg & Palomino
“Palomino are hands-down the best DBAs I have
worked with. Their knowledge, experience, and tenacity
make a huge difference. I would highly recommend
them.”
*
26. WorldReader & Palomino
“Palomino’s partnership with WorldReader has allowed
us to focus on our core efforts. Finding talented, reliable
partners is crucial for a non-profit, and we are quite
grateful for Palomino.”
*
27. Palomino Leadership
Laine Campbell, Chief Executive Officer
Jay Edwards, Chief Technical Officer
Charlie Killian, VP, Professional Services
*
28. Palomino Leadership
Kevin Bowman, VP of Operations
Kaichi Sung, Account and Project Management Lead
Tim Ellis, Principal Big Data Architect
*
29. Laine Campbell, Owner/CEO
Laine is the Owner/CEO and Principal Consultant at Palomino. She is a
proven database expert with a history of scaling high-performance and
high-availability web applications.
Laine combines extensive operational experience and best practices with
deep technical knowledge, troubleshooting skills, and a passion for
results to ease you through the life-cycles of your company.
*
30. Charlie Killian, VP of Professional Services
Charlie has created the backbone technology for multiple million-dollar
companies. An expert in web applications, Charlie has built web
application frameworks, set up and integrated numerous disparate
systems, built long-term client relationships, and contributed to
open-source projects.
Key to Charlie's success has been a love of cyberspace, hardware, and
software, an unwavering patience, and an unstoppable urge to figure out
how things work.
*
31. Jay Edwards, Chief Technology Officer
Jay was the first dedicated DBA at Twitter. He later served as the lead
database engineer at Obama for America during the 2012 election. In his
role as CTO for Palomino, he focuses on technology strategy, evangelism,
and mentoring/training staff and customers.
*
32. Kevin Bowman, VP of Operations
Kevin began his career at Garmin offering desktop and network/server
support and was soon serving as IT Manager, supervising areas including
service desk, Internet application development, systems administration,
network engineering, DBA, architecture, and devops. He also worked on
delivering internet-enabled products. He contributed significantly to
application and system architecture and devops for the backend systems
that provide data services to products as well as for Garmin Connect.
*
33. Kaichi Sung, Account Manager and Scrum Master
Kaichi occupies the dual role of Account Manager and Scrum Master at
Palomino. During her time in biotech, she focused on business analysis
and project management, supporting process and product improvements.
She rolled out Agile Scrum and is certified as both a Scrum Master and
Product Owner. A multitasker and planner, she loves coordinating
projects and helping teams deliver on customer requirements.
*
34. Tim Ellis, Principal Big Data Architect
Tim has been working with database clusters that serve billions of
database operations daily since the year 2000 and has been involved in
the NoSQL community from its earliest stages. He has a broad background
in relational database systems. He was a core operations contributor to
the Digg NoSQL rollout and an active, participating operations member
of the NoSQL community at StumbleUpon, Mozilla, and Riot Games.
*
35. Contact Info
Laine Campbell, laine@Palomino.com
Charlie Killian, charlie@Palomino.com
Jay Edwards, jedwards@palominodb.com
Kevin Bowman, kbowman@palominodb.com
Kaichi Sung, kaichi@Palomino.com
Tim Ellis, time@Palomino.com
www.Palominodb.com
@Palominodb on Twitter
*