Ken Rugg recently talked with Rafael Knuth on the OpenStack Online Meetup. Ken provided an overview of the Trove Project along with detailed descriptions of the latest provisioning and management features.
This document discusses Integral Networks' transition from legacy SAN storage solutions like NetApp and Dell EqualLogic to NexentaStor software-defined storage. It provides an overview of Integral Networks, their infrastructure workload, and the pain points they experienced with their previous storage platforms. It then outlines how the Nexenta architecture and NexentaStor software address their scalability, availability, and cost needs without compromising on features.
This document provides guidance and best practices for migrating database workloads to infrastructure as a service (IaaS) in Microsoft Azure. It discusses choosing the appropriate virtual machine series and storage options to meet performance needs. The document emphasizes migrating the workload, not the hardware, and using cloud services to simplify management like automated patching and backup snapshots. It also recommends bringing existing monitoring and management tools to the cloud when possible rather than replacing them. The key takeaways are to understand the workload demands, choose optimal IaaS configurations, leverage cloud-enabled tools, and involve database experts when issues arise to address the root cause rather than just adding resources.
Microsoft Azure is changing, and its database component (Windows Azure SQL Database) is changing even faster. In this session I would like to show those who have not yet seen it, and remind those who already know a bit, what WASD is all about, what has changed, and what we can expect from this database. For the brave, there will be an opportunity to connect to a cloud account and test these solutions for themselves.
This document discusses connecting Oracle Analytics Cloud (OAC) Essbase data to Microsoft Power BI. It provides an overview of Power BI and OAC, describes various methods for connecting the two including using a REST API and exporting data to Excel or CSV files, and demonstrates some visualization capabilities in Power BI including trends over time. Key lessons learned are that data can be accessed across tools through various connections, analytics concepts are often similar between tools, and while partnerships exist between Microsoft and Oracle, integration between specific products like Power BI and OAC is still limited.
The document provides an introduction to SQL Azure, Microsoft's relational database service. It discusses how SQL Azure leverages existing SQL Server skills and tools while providing new cloud capabilities. Key points include SQL Azure being highly scaled and secure, providing a database as a service, and targeting scenarios such as departmental apps, web apps, and ISVs that need simple deployment and self-management. Architecturally, SQL Azure uses a shared infrastructure with scalable high availability technology.
Azure Boot Camp 21.04.2018: SQL Server in Azure - IaaS, PaaS, on-prem - Lars Platzdasch
This document provides an overview and comparison of SQL Server hosting options in Azure, including Azure SQL Database (PaaS) and SQL Server in Azure VMs (IaaS). It discusses the key differences between the two options, highlighting that Azure SQL Database is fully managed while SQL Server in VMs gives more control. It also covers topics like manageability, performance metrics, pricing tiers, security best practices, and demos of the Azure portal. The document aims to help audiences choose between the "red pill" of Azure SQL Database or the "blue pill" of SQL Server in Azure VMs.
This document summarizes optimizations for MySQL performance on Linux hardware. It covers SSD and memory performance impacts, file I/O, networking, and useful tools. The history of MySQL performance improvements is discussed from hardware upgrades like SSDs and more CPU cores to software optimizations like improved algorithms and concurrency. Optimizing per-server performance to reduce total servers needed is emphasized.
The document discusses running Hadoop clusters in the cloud and the challenges that presents. It introduces CloudFarmer, a tool that allows defining roles for VMs and dynamically allocating VMs to roles. This allows building agile Hadoop clusters in the cloud that can adapt as needs change without static configurations. CloudFarmer provides a web UI to manage roles and hosts.
Hadoop World 2011: Practical HBase - Ravi Veeramchaneni, Informatica - Cloudera, Inc.
This document discusses HBase, an open-source, non-relational, distributed database built on top of Hadoop. It provides an overview of why HBase is useful, examples of how Navteq uses HBase at scale, and considerations for designing HBase schemas and deploying HBase clusters, including hardware requirements and configuration tuning. The document also outlines some desired future features for HBase like better tools, secondary indexes, and security improvements.
VMworld 2013: Virtualizing Databases: Doing IT Right - VMworld
VMworld 2013
Michael Corey, Ntirety, Inc
Jeff Szastak, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
The document discusses various Oracle Cloud Infrastructure storage services including local NVMe storage, block volumes, file storage, object storage, and archive storage. It provides details on the type, durability, capacity, unit size, and use cases of each storage service. Local NVMe storage provides temporary SSD-based storage attached directly to compute instances, while block volumes provide durable block-level storage that can be attached to instances independently. File storage provides shared NFS-compatible file systems, object storage offers highly durable object storage, and archive storage is for long-term archival and backups.
View this presentation to gain insight into optimizing Postgres and realizing savings in your data management. Visit EnterpriseDB's Resources > Webcasts section to view the presentation by Jay Barrows, VP of Field Operations.
During this 45-minute presentation, Jay Barrows, VP of Field Operations, provides a business review of how, where, and why businesses are leveraging PostgreSQL. He also covers the primary pain points and business drivers shaping the data management landscape, such as significant cost pressures combined with recent improvements in open source database options. Oracle migration is often the most powerful cost reduction opportunity, provided you understand the migration risks and have a clear migration game plan.
Jay will discuss several use cases that highlight how enterprise customers are applying lessons learned from adopting other OSS products to bring Postgres to the most expensive and mission-critical part of their IT stack: the database. By doing so they are driving TCO down in very meaningful ways while sacrificing nothing in terms of performance, scalability, security, or reliability. Many businesses already leverage OSS in much lower-cost parts of the IT stack, such as the OS and middleware.
This presentation will be beneficial to decision-makers interested in enhancing their data management with PostgreSQL.
This document discusses how Serengeti can be used to automate the deployment and management of Hadoop clusters on VMware vSphere. Some key points:
- Serengeti is a virtual appliance that can be deployed on vSphere and automates the provisioning of Hadoop clusters within 10 minutes from templates.
- It allows separating storage and compute by deploying Hadoop data nodes on shared storage and compute nodes as VMs for better elasticity and utilization.
- Serengeti supports elastic scaling of Hadoop clusters, multi-tenancy by isolating tenant workloads, and live configuration changes with rolling upgrades and no downtime.
The success of PostgreSQL supporting enterprise workloads has put the spotlight on where PostgreSQL development is headed next. Advances in recent releases have expanded the database’s ability to support new data types and unstructured data as data professionals wrestle with bigger and more complex data loads. Analysts are predicting a strong future for open source in the enterprise while companies are increasingly adopting open source into the data center to help control and reduce costs. Marc Linster, Senior Vice President of Products and Services at EnterpriseDB, will present his perspective on how PostgreSQL will continue to evolve to meet emerging new challenges in a world of Big Data and Cloud Computing.
Microsoft released SQL Azure more than two years ago - that's enough time for testing (I hope!). So, are you ready to move your data to the Cloud? If you're considering running a business (i.e. a production environment) in the Cloud, you need to think about methods for backing up your data, a backup plan for your data and, eventually, restoring with Red Gate Cloud Services. In this session, you'll see the differences, functionality, restrictions, and opportunities in SQL Azure and on-premises SQL Server 2008/2008 R2/2012. We'll consider topics such as how to prepare for backup and restore, and which parts of a cloud environment are most important: keys, triggers, indexes, prices, security, service level agreements, etc.
OCI Storage Services provides different types of storage for various use cases:
- Local NVMe SSD storage provides high-performance temporary storage that is not persistent.
- Block Volume storage provides durable block-level storage for applications requiring SAN-like features through iSCSI. Volumes can be resized, backed up, and cloned.
- File Storage Service provides shared file systems accessible over NFSv3 that are durable and suitable for applications like EBS and HPC workloads.
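As an illustration of the block-volume service described above, creating and attaching a volume with the OCI CLI might look like the following sketch; the availability domain, OCIDs, and names are placeholders, and flags should be checked against your installed CLI version.

```shell
# Create a 100 GB block volume in a given availability domain
# (compartment OCID is a placeholder).
oci bv volume create \
    --availability-domain "Uocm:PHX-AD-1" \
    --compartment-id ocid1.compartment.oc1..exampleuniqueID \
    --size-in-gbs 100 \
    --display-name "app-data"

# Attach the volume to a compute instance over iSCSI
# (instance and volume OCIDs are placeholders).
oci compute volume-attachment attach \
    --type iscsi \
    --instance-id ocid1.instance.oc1.phx.exampleuniqueID \
    --volume-id ocid1.volume.oc1.phx.exampleuniqueID
```

After attaching, the iSCSI connect commands shown in the console (or returned by the CLI) are run on the instance before the device can be formatted and mounted.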
MySQL Enterprise Backup provides fast, consistent, online backups of MySQL databases. It allows for backing up InnoDB and MyISAM tables while the database is running, minimizing downtime. The tool takes physical backups of the data files rather than logical backups, allowing for very fast restore times compared to alternatives like mysqldump. It supports features like compressed backups, incremental backups, and point-in-time recovery.
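To make that backup workflow concrete, here is a hedged sketch of a full-plus-incremental cycle with MySQL Enterprise Backup's `mysqlbackup` client; the user, directories, and data directory are placeholders, and options should be verified against the version you have installed.

```shell
# Full online backup; backup-and-apply-log leaves the copy prepared for restore.
mysqlbackup --user=backup_admin --password \
    --backup-dir=/backups/full \
    backup-and-apply-log

# Incremental backup of changes since the full backup
# (--incremental-base reads the starting LSN from the previous backup directory).
mysqlbackup --user=backup_admin --password \
    --incremental --incremental-base=dir:/backups/full \
    --incremental-backup-dir=/backups/incr1 \
    backup

# Restore: stop mysqld, then copy the prepared backup into the data directory.
mysqlbackup --backup-dir=/backups/full \
    --datadir=/var/lib/mysql \
    copy-back
```

Because these are physical copies of the data files, the `copy-back` restore avoids replaying SQL statements, which is the main reason restores are faster than with logical tools like mysqldump.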
This document summarizes Tobiasz Janusz Koprowski's presentation on Windows Azure SQL Database. It discusses planning considerations when migrating a SQL Server database to SQL Database, including database sizes and performance tiers, compatibility with SQL Server features, and security requirements. It also provides an overview of SQL Database backup, restore, and synchronization capabilities.
Software Defined Storage: Real or BS? (2014) - Howard Marks
This document discusses software defined storage and evaluates whether it is a real technology or just hype. It defines software defined storage as storage software that runs on standard x86 server hardware and can be sold as software or as an appliance. The document examines different types of software defined storage like storage that runs on a single server, in a virtual machine, or across multiple hypervisor hosts in a scale-out cluster. It also compares the benefits and challenges of converged infrastructure solutions using software defined storage versus dedicated storage arrays.
KoprowskiT - SQLBits X - 2am: A Disaster Just Began - Tobias Koprowski
This document outlines best practices for surviving a disaster involving SQL Server infrastructure. It recommends being well prepared with regular backups stored offsite, documented restore procedures, clear roles and responsibilities, and service level agreements defining acceptable downtimes. Key aspects of preparation include backups, restore testing, documentation, contact lists, hardware and software inventory, passwords, encryption keys, defined teams, and keeping management informed. The overall message is that with proper planning, a disaster can be survived by following the best practice of being prepared.
This presentation will discuss best practices for designing and building a solid, robust, and flexible Hadoop platform on an enterprise virtual infrastructure. Attendees will learn the flexibility and operational advantages of virtual machines, such as fast provisioning, cloning, high levels of standardization, hybrid storage, vMotion, increased stabilization of the entire software stack, High Availability, and Fault Tolerance. This is a can't-miss presentation for anyone wanting to understand the design, configuration, and deployment of Hadoop in virtual infrastructures.
This document introduces the HPDA 100, a high-performance database appliance built by the NGENSTOR Alliance. It offers two server platforms, using either a proprietary 4-core 6.3GHz CPU or Intel Xeon E5 CPUs. Networking uses 40GbE, and the storage interfaces provide up to 22.4TB of raw internal PCIe SSD storage or integration with external storage arrays. Listed configurations range from 16 to 72 CPU cores and 256GB to 6TB of memory.
Data is as critical as ever. Storage costs are lower but we have more and more data to store. This is where Microsoft Azure Data Storage solutions come in. This slide deck provides an overview of the most important data storage options available in Azure.
Note: I did not create this deck. I instead combined slides from the Microsoft Azure-Readiness/DevCamp repo on GitHub (https://github.com/Azure-Readiness/DevCamp) while adding additional material from a slide deck of David Chappell's.
This talk was given at Cloud Camp Kitchener 2015.
Flexible and Fast Storage for Deep Learning with Alluxio - Alluxio, Inc.
This document discusses how Alluxio provides fast and flexible storage for deep learning workloads. It summarizes Alluxio's capabilities to accelerate data processing and machine learning workflows by enabling data to be stored, cached, and processed directly in memory across distributed environments. Alluxio uses a unified namespace and intelligent caching to provide high-speed data access to remote data sources and overcome storage bottlenecks.
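The unified-namespace idea can be illustrated with Alluxio's filesystem shell; the mount point and bucket name below are placeholders, and exact command behavior varies between Alluxio versions.

```shell
# Mount a remote object store into the Alluxio namespace
# (bucket path is a placeholder).
alluxio fs mount /training-data s3://my-dataset-bucket/images

# Training jobs then read through the Alluxio path; hot data is cached in memory.
alluxio fs ls /training-data

# Optionally pre-load a file into the Alluxio cache before training starts.
alluxio fs load /training-data/part-00000
```

The point of the sketch is that the deep learning framework only ever sees one namespace (`/training-data`), while Alluxio handles caching and the remote storage protocol behind it.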
The document provides an overview of the Tesora DBaaS Platform, which is an enterprise-grade Database as a Service platform based on the OpenStack Trove project. Some key points:
- Tesora is a major contributor to the Trove project and provides the most advanced Trove distribution for enterprise usage.
- Databases have unique management needs compared to basic systems, and Tesora's platform addresses this through features like custom guest agents for each database, optimized database images, and automation for replication and clustering.
- The Enterprise Edition of Tesora's platform adds enterprise-level features like high availability replication for production workloads as well as 24/7 support, while the Community Edition provides a simplified installation of Trove.
This document summarizes a presentation about deploying Big Data as a Service (BDaaS) in the enterprise. It discusses how BDaaS can address conflicting needs of data scientists wanting flexibility and IT wanting control. It defines different types of BDaaS and requirements for enterprise deployment such as multi-tenancy, security, and application support. The presentation covers design decisions for BDaaS including running Hadoop/Spark unmodified using containers for isolation. It provides details on the implementation including network architecture, storage, and image management. It also discusses performance testing results and demos the BDaaS platform.
The document discusses using data virtualization and masking to optimize database migrations to the cloud. It notes that traditional copying of data is inefficient for large environments and can incur high data transfer costs in the cloud. Using data virtualization allows creating virtual copies of production databases that only require a small storage footprint. Masking sensitive data before migrating non-production databases ensures security while reducing costs. Overall, data virtualization and masking enable simpler, more secure, and cost-effective migrations to cloud environments.
This document provides an introduction and overview of OpenStack Trove, which is a Database as a Service (DBaaS) component of OpenStack. It discusses what OpenStack Trove is, its architecture, supported databases, features like provisioning, backups and replication. It also covers getting started with Trove and the roles of Mirantis and Tesora in providing enterprise-hardened Trove solutions.
Percona Live 4/14/15: Leveraging OpenStack Cinder for Peak Application Performance (Tesora)
In this session, speakers Amrith Kumar (Tesora), Steven Walchek (SolidFire), and Chris Merz (SolidFire) discuss Cinder, the OpenStack block storage service, and OpenStack Trove.
Achieving Cost and Resource Efficiency within OpenStack through Trove Database-as-a-Service (DBaaS)
Trove is an OpenStack DBaaS that allows organizations to leverage their OpenStack infrastructure in a cost-effective way to deploy solutions built upon traditional databases. Trove provides a unified solution for all database types and can provide cost and resource savings through reduced complexity. It allows rapid provisioning of database instances, standardized infrastructure, and self-service capabilities for database management. Trove is integrated with OpenStack and supports both relational and non-relational databases to provide a flexible database solution.
Azure SQL Database Managed Instance is a new flavor of Azure SQL Database that is a game changer. It offers near-complete SQL Server compatibility and network isolation to easily lift and shift databases to Azure (you can literally back up an on-premises database and restore it into an Azure SQL Database Managed Instance). Think of it as an enhancement to Azure SQL Database that is built on the same PaaS infrastructure and maintains all of its features (i.e. active geo-replication, high availability, automatic backups, database advisor, threat detection, intelligent insights, vulnerability assessment, etc.) but adds support for databases up to 35 TB, VNET, SQL Agent, cross-database querying, replication, etc. So you can migrate your databases from on-prem to Azure with very little migration effort, which is a big improvement over the current Singleton and Elastic Pool flavors, which can require substantial changes.
The document describes OpenStack Trove, an OpenStack service that provides database as a service functionality. It discusses how Trove allows developers to provision and manage relational and non-relational databases in OpenStack clouds through self-service APIs. The document also provides an overview of how Trove works, how it is used in production environments today, and how users can get started with provisioning and managing databases using the Trove APIs and CLI tools.
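As a concrete illustration of the self-service provisioning described above, the sketch below assembles the JSON body a client would POST to Trove's instance-create endpoint (`/v1.0/{tenant_id}/instances`). The field names follow the Trove v1.0 API, but the flavor ID, datastore version, and volume size are placeholder values, not taken from the original presentation.

```python
import json

def build_trove_instance_request(name, flavor_ref, volume_gb, datastore, version):
    """Assemble a request body for POST /v1.0/{tenant_id}/instances.

    Field names follow the Trove v1.0 API; the concrete values passed in
    below are illustrative placeholders.
    """
    return {
        "instance": {
            "name": name,
            "flavorRef": flavor_ref,        # Nova flavor for the guest VM
            "volume": {"size": volume_gb},  # Cinder volume for datastore files
            "datastore": {"type": datastore, "version": version},
        }
    }

body = build_trove_instance_request("demo-db", "7", 2, "mysql", "5.6")
print(json.dumps(body, indent=2))
```

A real client would send this body with an authenticated HTTP call; the `trove create` CLI command builds an equivalent request under the hood.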
Postgres for Digital Transformation: NoSQL Features, Replication, FDW & More (Ashnikbiz)
This document discusses how PostgreSQL can enable digital transformation. It notes that digital transformation involves developing new types of products/services rather than just enhancing existing systems, moving to microservices architectures, and adopting data platforms. It then outlines how PostgreSQL supports these changes through its document store capabilities, foreign data wrappers for integration with other data sources, replication server for high availability, and containerized deployment options. Case studies are presented showing how enterprises have realized performance improvements, cost savings, and near real-time data exchange using PostgreSQL's unified relational and non-relational features.
This document provides an overview of a NoSQL Night event presented by Clarence J M Tauro from Couchbase. The presentation introduces NoSQL databases and discusses some of their advantages over relational databases, including scalability, availability, and partition tolerance. It covers key concepts like the CAP theorem and BASE properties. The document also provides details about Couchbase, a popular document-oriented NoSQL database, including its architecture, data model using JSON documents, and basic operations. Finally, it advertises Couchbase training courses for getting started and administration.
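The document-oriented model described above can be sketched in a few lines: values are self-describing JSON documents addressed by key, rather than rows in a fixed schema. This toy in-memory "bucket" is only an illustration of the concept; a real Couchbase application would use the Couchbase SDK against a live cluster.

```python
import json

# Toy in-memory key-value "bucket": each value is a self-describing
# JSON document rather than a row in a fixed relational schema.
bucket = {}

def upsert(key, doc):
    bucket[key] = json.dumps(doc)       # store documents as JSON strings

def get(key):
    raw = bucket.get(key)
    return json.loads(raw) if raw is not None else None

upsert("user::100", {"type": "user", "name": "Ada", "roles": ["admin"]})
doc = get("user::100")
print(doc["name"])  # -> Ada
```

Note how the document carries its own structure (`type`, nested arrays), which is what lets document stores evolve schemas without migrations.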
This document provides an overview and summary of the author's background and expertise. It states that the author has over 30 years of experience in IT working on many BI and data warehouse projects. It also lists that the author has experience as a developer, DBA, architect, and consultant. It provides certifications held and publications authored as well as noting previous recognition as an SQL Server MVP.
Leveraging OpenStack Cinder for Peak Application Performance (NetApp)
Deploying performance-sensitive, database-driven applications in OpenStack can be challenging if you are unsure how to utilize the Cinder API to get the most out of your OpenStack block storage.
This presentation:
Introduces Cinder, the OpenStack block storage service
Talks about the unique attributes of performance-sensitive applications and what this means in OpenStack
Walks you through how to use Cinder volume types and extra specs to guarantee performance to your various cloud workloads
Discusses OpenStack Trove and what it means for running database as a service in your OpenStack cloud
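The volume-type and extra-specs mechanism mentioned above can be sketched as follows: a volume type is created, extra specs are attached to it, and the scheduler only places volumes on backends whose reported capabilities satisfy every spec. The request-body shapes follow the Cinder API, but the specific extra-spec keys (`qos:minIOPS`, etc.) are driver-dependent and shown here only as placeholders.

```python
# Sketch: request bodies for defining a Cinder volume type and pinning it
# to a backend with performance guarantees. The extra-spec keys below are
# illustrative; the real keys depend on your storage driver.

volume_type_body = {"volume_type": {"name": "high-iops"}}

extra_specs_body = {
    "extra_specs": {
        "volume_backend_name": "solidfire",  # route to a specific backend
        "qos:minIOPS": "1000",               # driver-specific QoS hints
        "qos:maxIOPS": "5000",
    }
}

def backend_matches(capabilities, extra_specs):
    # The scheduler requires every extra spec to be satisfied by the backend.
    return all(capabilities.get(k) == v for k, v in extra_specs.items())

caps = {"volume_backend_name": "solidfire",
        "qos:minIOPS": "1000", "qos:maxIOPS": "5000"}
print(backend_matches(caps, extra_specs_body["extra_specs"]))  # True
```

A tenant then simply requests a volume of type `high-iops`, and placement plus QoS follow automatically, which is the "guarantee performance to your workloads" idea from the talk.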
Should I move my database to the cloud? (James Serra)
So you have been running on-prem SQL Server for a while now. Maybe you have taken the step to move it from bare metal to a VM, and have seen some nice benefits. Ready to see a TON more benefits? If you said “YES!”, then this is the session for you as I will go over the many benefits gained by moving your on-prem SQL Server to an Azure VM (IaaS). Then I will really blow your mind by showing you even more benefits by moving to Azure SQL Database (PaaS/DBaaS). And for those of you with a large data warehouse, I've also got you covered with Azure SQL Data Warehouse. Along the way I will talk about the many hybrid approaches so you can take a gradual approach to moving to the cloud. If you are interested in cost savings, additional features, ease of use, quick scaling, improved reliability and ending the days of upgrading hardware, this is the session for you!
This presentation is for those of you who are interested in moving your on-prem SQL Server databases and servers to Azure virtual machines (VM’s) in the cloud so you can take advantage of all the benefits of being in the cloud. This is commonly referred to as a “lift and shift” as part of an Infrastructure-as-a-service (IaaS) solution. I will discuss the various Azure VM sizes and options, migration strategies, storage options, high availability (HA) and disaster recovery (DR) solutions, and best practices.
Microsoft Data Platform - What's included (James Serra)
This document provides an overview of a speaker and their upcoming presentation on Microsoft's data platform. The speaker is a 30-year IT veteran who has worked in various roles including BI architect, developer, and consultant. Their presentation will cover collecting and managing data, transforming and analyzing data, and visualizing and making decisions from data. It will also discuss Microsoft's various product offerings for data warehousing and big data solutions.
A deep dive into Trove: SCALE 13x Linux Expo 2/22/15 (Tesora)
Kenneth Rugg, founder and CEO of Tesora, gave a presentation on OpenStack Trove at SCALE 13x in Los Angeles on February 22, 2015. He discussed how Trove provides database as a service capabilities for OpenStack, including self-service provisioning and management of both relational and non-relational databases. Rugg also provided an overview of Trove's architecture, key features including backups and replication, and plans for upcoming releases such as support for database clusters.
New Ceph capabilities and Reference Architectures (Kamesh Pemmaraju)
Have you heard about Inktank Ceph and are interested to learn some tips and tricks for getting started quickly and efficiently with Ceph? Then this is the session for you!
In this two-part session you'll learn details of:
• the very latest enhancements and capabilities delivered in Inktank Ceph Enterprise such as a new erasure coded storage back-end, support for tiering, and the introduction of user quotas.
• best practices, lessons learned and architecture considerations founded in real customer deployments of Dell and Inktank Ceph solutions that will help accelerate your Ceph deployment.
Software Defined Storage, Big Data and Ceph - What Is all the Fuss About? (Red_Hat_Storage)
By: Kamesh Pemmaraju, Neil Levine
Optimizing the Open Source environment to increase savings and control (EDB)
The document discusses optimizing the Open Source environment to increase savings and control. It covers evolving database infrastructure models in enterprises to get more for less. Key areas discussed include where Postgres can be most easily implemented, Postgres advances that enable new data types and challenges, and how to assess whether and how to implement Postgres. Case studies are presented that demonstrate cost savings and performance benefits organizations achieved by adopting Postgres.
Model-driven operations use models to abstract away complexity and enable reuse across organizations for large, distributed systems like OpenStack, Ceph, Hadoop and Kubernetes. Models define the long term costs of operating software by standardizing deployments and automating operations tasks.
- OpenStack started in 2010 as a software-defined infrastructure project between NASA and Rackspace, and now has over 6,200 contributors from 360 companies collaborating on common goals.
- Walmart uses OpenStack extensively, with over 170,000 cores and 30 cloud regions, and also uses OneOps for managing over 5,000 users, 3,000 applications/services, and 40,000+ monthly deployments.
- Walmart wants to move OneOps into the OpenStack community to increase innovation and collaboration through OneOps being a publicly developed project on GitHub. They will be attending various OpenStack user group and conference events to discuss OneOps in OpenStack.
OpenStack has seen success with deployments, products, and services. To ensure long term health and success, Red Hat promotes an "upstream first" mindset where investments are prioritized in the OpenStack community. This includes designing, developing, testing, and contributing all code upstream. It brings benefits like influence, quality, security, and interoperability. Horizontal teams work across projects in areas like release management, infrastructure, documentation, and more. Individuals can help by becoming active contributors and serving as liaisons between teams.
The document introduces various OpenStack resources for users including mailing lists, IRC channels, working groups, videos, and tools. It is presented by Tyler Britten from IBM Cloud and provides an overview of basic OpenStack information sources like openstack.org as well as more specialized resources for different user groups. Contact information and links are provided for exploring OpenStack communities and getting involved with various projects.
This document discusses a Total Cost of Ownership (TCO) model for OpenStack clouds that was created by Red Hat. It finds that OpenStack has the lowest costs of any private cloud option on the market. The model accounts for costs across hardware, software, staffing and other areas. It analyzes how costs are reduced through automation, server density, and other factors. The document advocates for measuring and reducing virtual machine costs as clouds scale over time.
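The core arithmetic behind a TCO model like the one described above is simple: fixed costs (staff, tooling) amortize over more VMs as the cloud grows, so the per-VM cost falls with scale. The sketch below illustrates that effect with made-up dollar figures; it is not Red Hat's actual model.

```python
# Toy TCO arithmetic: per-VM hardware/software costs stay roughly flat,
# while fixed annual staffing cost is spread over the VM count.
# All dollar figures here are illustrative placeholders.

def cost_per_vm(hardware_per_vm, software_per_vm, annual_staff_cost, vm_count):
    fixed_share = annual_staff_cost / vm_count
    return hardware_per_vm + software_per_vm + fixed_share

small = cost_per_vm(300.0, 100.0, 500_000.0, 1_000)   # 1,000-VM cloud
large = cost_per_vm(300.0, 100.0, 500_000.0, 10_000)  # 10,000-VM cloud
print(small, large)  # per-VM cost drops as the cloud scales
```

This is why the document advocates measuring VM cost over time: automation and density mostly attack the fixed-cost term, which dominates at small scale.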
Pete Chadwick discusses the past, present, and future of OpenStack. Originally, operating systems abstracted hardware and virtualization abstracted operating systems. OpenStack now abstracts virtualization by providing an API for software-defined infrastructure that reduces risk, simplifies migrations, and allows for more adoption of new technologies. The future of OpenStack includes community roadmaps that provide direction for over 25 projects and gather requirements to create user stories and implement specifications over multiple releases. Examples of the Newton Design Series community roadmap aim to present information by themes and be updated twice per release cycle.
1. Containers provide process isolation and reproducible environments for applications at scale. Container orchestration helps manage the lifecycle of containers across hosts.
2. OpenStack provides infrastructure automation for container hosts. Using containers with OpenStack allows developers to quickly deploy applications while operations can manage compliance, auditing, and networking.
3. Container orchestration systems like Kubernetes deployed with OpenStack solutions like VMware Integrated OpenStack simplify container management by providing visibility, health checks, affinity/anti-affinity, and lifecycle management of containers.
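The affinity/anti-affinity capability mentioned in point 3 can be made concrete with a Kubernetes Deployment spec. The sketch below (expressed as a plain Python dict mirroring the YAML) uses pod anti-affinity so replicas are spread across hosts; the names, labels, and image are illustrative.

```python
# Sketch of a Kubernetes Deployment using pod anti-affinity so that
# replicas land on different nodes. Names and labels are illustrative.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "affinity": {
                    "podAntiAffinity": {
                        # Hard rule: never co-locate two "web" pods on one node.
                        "requiredDuringSchedulingIgnoredDuringExecution": [
                            {
                                "labelSelector": {"matchLabels": {"app": "web"}},
                                "topologyKey": "kubernetes.io/hostname",
                            }
                        ]
                    }
                },
                "containers": [{"name": "web", "image": "nginx:1.25"}],
            },
        },
    },
}

anti = deployment["spec"]["template"]["spec"]["affinity"]["podAntiAffinity"]
print(anti["requiredDuringSchedulingIgnoredDuringExecution"][0]["topologyKey"])
```

When Kubernetes runs on OpenStack, the nodes themselves are Nova instances, so this placement rule composes with the infrastructure-level availability zones the operators manage.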
The document discusses Verizon's OpenStack-based cloud platform and the challenges of managing it at a hyperscale level. Some key points discussed include defining Verizon's cloud platform to provide on-demand, self-service infrastructure to users; the difficulties of managing large and distributed cloud deployments at scale; and facilitating easy self-service for users while also providing operators visibility into utilization, capacity, and other metrics. The document also covers Verizon's use of OpenStack metering and APIs to track usage at scale and provide reporting to stakeholders.
Stateful Applications On the Cloud: A PayPal Journey (Tesora)
1) PayPal operates a large OpenStack cloud platform with over 10,000 physical servers hosting 100,000 VMs to run over 1000 services for their business.
2) They wanted to move stateful applications like messaging, streaming, caching and databases to the cloud but faced challenges with agility, efficiency, elasticity and onboarding while preserving stateful data.
3) After evaluating options like network block storage, ephemeral disks, and hyperconverged storage, they chose VMs with attached local disks, which do not lose data when a VM is lost and have lower network bandwidth needs and costs, though storage is lost if the host fails.
So Your OpenStack Cloud is Built... Now What? (Tesora)
This document discusses automating OpenStack cloud administration tasks. It begins by reviewing common cloud decisions and day-to-day operator tasks. It then discusses how OpenStack and automation work well together and considerations for automating tasks. Several examples of automating tasks like creating users/projects and health monitoring are provided. It emphasizes adopting an "Administration DevOps" approach to automate operations and make the cloud more scalable and efficient.
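The "creating users/projects" automation example mentioned above boils down to two Identity (Keystone) v3 API calls. The sketch below builds the request bodies an automation script would POST to `/v3/projects` and `/v3/users`; the domain, project ID, and password values are placeholders, and a real script would send them through an authenticated client rather than print them.

```python
# Sketch: Keystone v3 request bodies for automated onboarding of a new
# team. Values are illustrative placeholders.

def project_payload(name, domain_id="default"):
    # Body for POST /v3/projects
    return {"project": {"name": name, "domain_id": domain_id, "enabled": True}}

def user_payload(name, project_id, password):
    # Body for POST /v3/users
    return {
        "user": {
            "name": name,
            "default_project_id": project_id,
            "password": password,
            "enabled": True,
        }
    }

proj = project_payload("team-a")
user = user_payload("alice", "proj-123", "s3cret")
print(proj["project"]["name"], user["user"]["default_project_id"])
```

Wrapping calls like these in scripts (or Ansible playbooks) is the "Administration DevOps" approach the talk advocates: the same onboarding runs identically every time.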
Secrets of Success: Building Community Through Meetups (Tesora)
Slides from the panel discussion at OpenStack Days East featuring Tassoula Kokkoris, Gary Kevorkian, Lisa-Marie Namphy, and Ken Hui on August 23, 2016.
The document discusses the State of OpenStack Product Management work group. It was formed in 2014 to improve OpenStack delivery and user experience. The work group gathers requirements, creates user stories, implements specifications with projects, and generates a multi-release community roadmap. It consists of product managers, technologists, operators, and end users from diverse organizations. The work group collects requirements from various groups and perspectives, creates user stories, and works with projects to implement stories through blueprints and specifications. It provides a community roadmap to show direction across over 25 projects.
This document discusses Comcast's use of OpenStack for cloud computing. It notes that Comcast has 34 regions, over 700 tenants, and 20,000 instances running on OpenStack. It details Comcast's history with OpenStack, including starting in 2012 with three regions on Essex and upgrading to newer versions over time. Currently, Comcast runs IceHouse across 34 regions, with over 960,000 cores, 20,000 VMs, and plans to deploy Mitaka this year across multiple regions.
This document summarizes findings from the 2016 OpenStack User Survey. It shows that:
1) OpenStack deployments are becoming more mature, with over half of production deployments now using releases from 2015 or later.
2) On average, OpenStack clouds run 11 projects, with Compute and Identity being the most widely adopted. Storage, Networking and Image projects are also popular.
3) Users see Containers and Network Functions Virtualization as important emerging technologies, though many felt they were not ready for production use in 2016.
4) User satisfaction with OpenStack has steadily increased over time, with production deployments rating it highly compared to commercial alternatives.
The document discusses best practices for running OpenStack in production from the perspective of lessons learned from real enterprise customers. It recommends setting realistic expectations given an organization's realities rather than comparing to hyperscalers. It also advocates embracing change by using orchestration to manage a hybrid and evolving technology stack across private and public clouds. Through a banking customer example, it shows how partnerships rather than outsourcing provide agility. A demo then illustrates deploying and managing applications across OpenStack and VMware and using Kubernetes for microservices on a hybrid cloud. The bank was able to deploy 5000 VMs, reduce costs by 40%, and introduce new technologies and platforms in under a year through this hybrid cloud strategy.
Leveraging OpenStack to Run Mesos/Marathon at Charter Communications (Tesora)
This document discusses Time Warner Cable's strategy to leverage OpenStack and run Mesos/Marathon on their infrastructure. The team's goal is to automate everything and make best practices easy for development teams. Their strategy involves using OpenStack for infrastructure, Mesos for resource management, Marathon for scheduling, and various other tools like Quay, Jenkins, Vault, StatsD, and ELK. They have made progress automating their setup using Ansible and proving success by fully provisioning a cluster with a single command. Their future plans include evangelizing the platform to help migrate more services and deploy monoliths, adding more shared services, and making the platform more turnkey and valuable out of the box.
This document provides a summary of a presentation about using Docker volume plugins with OpenStack Cinder block storage.
The presentation discusses:
1. The speaker introducing themselves and their background with OpenStack Cinder.
2. An overview of the Docker volume plugin API and how the speaker created a Cinder volume plugin in Golang to provide block storage to Docker containers.
3. A demonstration of deploying a sample web application on a Docker Swarm cluster using the Cinder volume plugin to persist Redis data, showing how storage can be provided to containers across nodes.
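The Docker volume plugin API referenced in point 2 is a small JSON-over-HTTP protocol: the Docker daemon POSTs to endpoints such as `/VolumeDriver.Create` and `/VolumeDriver.Mount` and expects a JSON reply whose `Err` field is empty on success. The sketch below (in Python, though the talk's plugin was written in Go) stubs out the Cinder side with an in-memory dict; a real plugin would create and attach Cinder volumes at these points.

```python
# Minimal sketch of the Docker volume plugin protocol: handlers keyed by
# endpoint path, each taking and returning a JSON-style dict. The Cinder
# create/attach calls are stubbed with an in-memory map.

volumes = {}

def create(req):
    # Real plugin: create a Cinder volume named req["Name"].
    volumes[req["Name"]] = {"mountpoint": ""}
    return {"Err": ""}

def mount(req):
    vol = volumes.get(req["Name"])
    if vol is None:
        return {"Err": "no such volume"}
    # Real plugin: attach the Cinder volume and mount its filesystem.
    vol["mountpoint"] = "/mnt/" + req["Name"]
    return {"Mountpoint": vol["mountpoint"], "Err": ""}

handlers = {"/VolumeDriver.Create": create, "/VolumeDriver.Mount": mount}

print(handlers["/VolumeDriver.Create"]({"Name": "redis-data"}))
print(handlers["/VolumeDriver.Mount"]({"Name": "redis-data"}))
```

Because the mountpoint is backed by a Cinder volume rather than the host's local disk, the same data can follow a container (such as the demo's Redis) across Swarm nodes.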
This document discusses how OpenStack can support mobile applications by providing scalable infrastructure and services. OpenStack provides storage, compute resources, and messaging that mobile apps require. It offers scalability, elasticity, resiliency and security. OpenStack components like Swift, Glance, Nova, Neutron, Heat, and Zaqar help meet the needs of mobile apps for storage, compute, orchestration, and messaging. Mobile backend as a service (MBaaS) platforms can also be implemented using OpenStack to provide user management, push notifications, and analytics for mobile apps.
- NetApp operates a large internal private cloud called the Global Engineering Cloud (GEC) using OpenStack. The GEC provides infrastructure as a service for NetApp employees.
- The GEC uses FlexPod with Cisco networking, UCS compute, and NetApp storage. It has over 75,000 VM capacity spread across multiple regions around the world.
- NetApp has automated the deployment, configuration, and upgrades of OpenStack using tools like Puppet, Jenkins, and Git to manage the large, global OpenStack cloud at scale.
Jacob Rosenberg gave a presentation on OpenStack at Bloomberg. He discussed why Bloomberg uses a private cloud (to have proximity to data and control customization). Bloomberg chose OpenStack because it had the most established community but wanted to make their own technology choices, resulting in them building their own OpenStack-based private cloud called BCPC. The cloud has seen significant adoption within Bloomberg with growth of instances and CPUs, though some challenges were faced integrating existing systems. Future plans include further promoting adoption, container hosting, and new hardware capabilities.
1. OpenStack Online Meetup: What is Trove, the Database as a Service on OpenStack?
October 14, 2014
2. Transformation of Cloud Data Management
Traditional IT:
§ Provisioning by DBAs
§ Database management by specialists
§ Waterfall development
§ Few large machines / bare metal
§ Oracle enterprise licenses
§ Captive audience
Cloud:
§ Self-service provisioning
§ Developers manage their own databases
§ Agile development
§ Many small machines / virtualization
§ Many data management technologies
§ Competition with AWS
3. What is OpenStack Trove
§ Database as a Service for OpenStack
§ Self service database provisioning
§ Full database lifecycle management
§ Multi-database support
§ Both Relational and NoSQL
4. What’s OpenStack Trove?
Mission statement:
“To provide scalable and reliable Cloud Database as a Service provisioning functionality for both relational and non-relational database engines, and to continue to improve its fully-featured and extensible open source framework.”
10/8/2014
5. OpenStack Trove Highlights
§ Designed to run entirely on OpenStack
§ Quickly and easily use relational or non-relational databases
§ Without the burden of complex administrative tasks
§ Manage multiple database instances
§ Automates admin deployment, configuration, patching,
backups, restores, and monitoring
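The self-service lifecycle described above can be sketched as a toy model: a request creates an instance record, the task manager builds it, and the instance becomes usable once the guest reports ready. This is an illustrative model only, with invented names, loosely mirroring Trove's BUILD → ACTIVE states rather than its actual code:

```python
# Toy model of self-service database provisioning, loosely mirroring
# Trove's BUILD -> ACTIVE lifecycle. All names are invented for illustration.

class DBInstance:
    def __init__(self, name, flavor, volume_gb, datastore="mysql"):
        self.name = name
        self.flavor = flavor
        self.volume_gb = volume_gb
        self.datastore = datastore
        self.status = "BUILD"

    def guest_ready(self):
        # In Trove, the guest agent reports readiness over the message bus.
        self.status = "ACTIVE"

class TroveAPI:
    """Datastore-agnostic front end: users never touch the DB host directly."""
    def __init__(self):
        self.instances = {}

    def create(self, name, flavor, volume_gb, datastore="mysql"):
        inst = DBInstance(name, flavor, volume_gb, datastore)
        self.instances[name] = inst
        # Stand-in for the asynchronous build performed by trove-taskmanager.
        inst.guest_ready()
        return inst

api = TroveAPI()
inst = api.create("orders-db", flavor="m1.small", volume_gb=10)
print(inst.name, inst.datastore, inst.status)
```

The point of the sketch is the separation of concerns: the user only ever talks to the API, never to the database host.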
6. Trove Architecture
[Architecture diagram: the Trove services (trove-api, trove-taskmanager, trove-conductor) communicate over the Message Bus with a Guest Agent running inside each Nova compute instance alongside the SQL/NoSQL datastore. Data is stored on Cinder volumes and backups in Swift; guest images come from Glance, identity from Keystone, and networking from Nova-Networking/Neutron.]
7. Trove Multi-Datastore Architecture
[Diagram: datastore-agnostic code lives in the Trove Controller and the Trove Dashboard (Horizon); all datastore-specific code is isolated to the Guest Agents, which communicate with the controller over the Message Bus.]
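The split just described — a datastore-agnostic controller issuing the same commands to datastore-specific guest agents over a message bus — can be sketched roughly as follows. All class and method names are invented for illustration; the real Trove uses oslo.messaging RPC between trove-taskmanager and the guest agents:

```python
# Illustrative sketch of Trove's controller/guest-agent split.
# All names here are invented; this is not Trove's actual code.

class GuestAgent:
    """The datastore-agnostic interface every guest agent implements."""
    def prepare(self):
        raise NotImplementedError
    def create_backup(self):
        raise NotImplementedError

class MySQLAgent(GuestAgent):
    # Datastore-specific behavior is isolated here.
    def prepare(self):
        return "installed mysql-server, wrote my.cnf"
    def create_backup(self):
        return "ran xtrabackup, streamed archive to object storage"

class MongoDBAgent(GuestAgent):
    def prepare(self):
        return "installed mongod, wrote mongod.conf"
    def create_backup(self):
        return "ran mongodump, streamed archive to object storage"

class Controller:
    """Datastore-agnostic: dispatches the same command to any agent."""
    def __init__(self, agents):
        # Stand-in for instances reachable via the message bus.
        self.agents = agents
    def broadcast(self, command):
        return {name: getattr(agent, command)() for name, agent in self.agents.items()}

controller = Controller({"db1": MySQLAgent(), "db2": MongoDBAgent()})
results = controller.broadcast("create_backup")
for name, result in results.items():
    print(f"{name}: {result}")
```

The controller never needs to know which datastore it is talking to; adding a new database means adding a new agent, not changing the controller.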
8. Managing Trove
Tuning
• Automatically tune my.cnf
• Buffer pool size
• Log file size
• max_connections
• Sane defaults
• InnoDB only
• Disable LOAD DATA INFILE
• Disable SELECT INTO OUTFILE
• New API to programmatically set configuration groups
Security
§ Security groups
§ Turn off SSH
§ Remove anonymous user
§ Remove non-localhost users
§ Remove local file access
§ Mangle root user password
§ Apply security patches automatically
Management
• Create database / schema
• Create users
• Grant a user permissions on a schema
• Enable root user
• Resize flavor
• Resize volume
• Full and incremental backups
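The "sane defaults" idea above — deriving my.cnf values from the instance flavor — can be sketched like this. The specific ratios are common rules of thumb used here for illustration, not Trove's actual formulas:

```python
# Illustrative sketch: derive MySQL tuning values from a flavor's RAM.
# The ratios below are rules of thumb, not Trove's actual defaults.

def tune_mycnf(ram_mb: int) -> dict:
    """Return a my.cnf-style dict sized to the instance flavor."""
    return {
        # InnoDB buffer pool: roughly half of RAM on a dedicated DB host.
        "innodb_buffer_pool_size_mb": ram_mb // 2,
        # Redo log sized as a fraction of the buffer pool, with a floor.
        "innodb_log_file_size_mb": max(48, ram_mb // 8),
        # Scale connections with memory, within sane bounds.
        "max_connections": min(1000, max(100, ram_mb // 12)),
        # Lock down risky filesystem access, as the slide describes.
        "local_infile": 0,
    }

for flavor_ram in (512, 2048, 8192):
    print(flavor_ram, tune_mycnf(flavor_ram))
```

Resizing an instance to a larger flavor would simply re-run the same calculation against the new RAM figure.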
9. Trove Production Deployments
§ eBay Private Cloud
§ Began mid-2013
§ Multiple databases: MySQL, MongoDB, Redis, Cassandra, Couchbase
§ Multi-region + HA
§ Working on clustering
§ Public Cloud
§ HP Cloud Relational Database (launched May 2012)
§ Rackspace Cloud Databases (launched August 2012)
10. Common Use Cases and Capabilities
§ Key Use Cases
§ Development & test
§ Web application hosting
§ On-demand analytics
§ Critical Capabilities
§ Self-service provisioning & management
§ Fleet-wide configuration
§ Multi-datastore architecture
11. What does Trove support?
§ Incubated in Havana, integrated in Icehouse
§ Supported single-instance MySQL, Cassandra, MongoDB, Couchbase and Redis
§ Basic backup & restore for MySQL, instance resizing
§ Launch instance from backup
§ New in Juno
§ Replication (MySQL), clustering (MongoDB)
§ First iteration of PostgreSQL support
§ Support for Neutron
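Full and incremental backups form a chain: restoring an incremental backup first requires applying its ancestors, back to the last full backup. A rough sketch of that lineage resolution, with invented names (Trove's real logic lives in its backup strategies):

```python
# Illustrative sketch of full/incremental backup lineage, as used when
# launching an instance from a backup. Names are invented for illustration.

class Backup:
    def __init__(self, backup_id, parent=None):
        self.backup_id = backup_id
        self.parent = parent  # None marks a full backup

def restore_chain(backup):
    """Return the backup IDs to apply, oldest (full) first."""
    chain = []
    while backup is not None:
        chain.append(backup.backup_id)
        backup = backup.parent
    return list(reversed(chain))

full = Backup("full-mon")
inc1 = Backup("inc-tue", parent=full)
inc2 = Backup("inc-wed", parent=inc1)

print(restore_chain(inc2))  # the full backup is applied first
```

This is why deleting a full backup invalidates every incremental backup that descends from it.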
10/8/2014 OpenStack Meetup: an update on Trove
12. The Future of Trove
§ Planned for Kilo
§ Additional replication and clustering capabilities
§ Support for additional databases
§ Looking ahead
§ Transitioning from basic infrastructure to a platform
§ Enterprise needs: Security, monitoring, metering/billing
§ More database support: Oracle, Vertica
13. What’s Unique About Database as a Service?
§ Databases are different
§ Different management skillsets
§ Requires significant administration
§ Each DB has its own “personality”
§ Many don’t like the cloud
§ Rely on other basic systems
§ Trove is different
§ Each DB needs its own guest agent
§ Consistent management across instances
§ Images need tuning and customization
§ Guest agents are more than just drivers
§ Trove leverages Nova, Cinder …
Tesora is addressing these differences
14. Tesora: The Trove Company
§ Enterprise DBaaS platform
§ Based on Trove
§ #1 contributor to the Trove project
§ Ten developers on the project
§ One on Trove core
[Chart: Trove contributors, Sept 2014]
Diverse community, but the other major contributors are cloud service operators, not database product specialists.
15. Tesora DBaaS Platform
Tesora DBaaS Platform Enterprise Edition: adds enterprise features, robustness and support
• Enterprise features exposing capabilities of underlying DBs
• Automation for replication and clustering
• 24/7 support with enterprise SLAs
Tesora DBaaS Platform Community Edition: Trove with simplified installation and management
• Simplified installation and configuration
• Extensive testing
• Maintenance and bug fixes
Trove: the OpenStack DBaaS project
Certified Guest Images: preconfigured database images
• Optimized Trove datastore images for supported technologies
• Tested for a wide range of databases
• Works on Enterprise or Community Edition
Underlying OpenStack services: Nova, Cinder, Swift, Heat, Glance, Keystone, Neutron, Horizon
16. Development Lifecycle
Trove (core DBaaS)
• Upstream-first development for major new functionality
Tesora Community: Trove plus…
• Better out-of-the-box experience, ease of implementing DBaaS
• Early access to new functionality
Tesora Enterprise: Tesora Community plus…
• Specialized features, high value for some enterprise requirements
• Specialized enhancements
Advanced features going mainstream; contributions back to the community
17. Edition Differences
Installation and Configuration
§ Community Edition and Enterprise Edition v1.1: automated installation and optimized DB configurations
Tested Distributions
§ OpenStack Trove: Devstack
§ Community Edition: RDO, RHOS, Ubuntu
§ Enterprise Edition v1.1: RDO, RHOS, Ubuntu
Enterprise DBaaS Functionality
§ OpenStack Trove: DB provisioning, resize, backup/restore, user management
§ Community Edition: DB provisioning, resize, backup/restore, user management
§ Enterprise Edition v1.1: DB provisioning, resize, backup/restore, user management, replication
Web-based Management
§ OpenStack Trove: DB provisioning, backup/restore
§ Community Edition: DB provisioning, backup/restore, multi-datastore
§ Enterprise Edition v1.1: DB provisioning, multi-datastore, resize, full and incremental backup/restore
Technical Support
§ OpenStack Trove: community forums
§ Community Edition: community forums, email, bug fixes/patches
§ Enterprise Edition v1.1: 24/7 support with enterprise SLAs, bug fixes/patches