This document provides an overview of DevOps and how it relates to database administrators (DBAs). It discusses key DevOps concepts like continuous delivery, configuration management, and release coordination. Agile methodologies like Scrum, Kanban, and Extreme Programming are described. DevOps tools that can help DBAs are also covered, including virtualization platforms, containers, configuration management tools like Ansible, and the periodic table of DevOps tools. The document aims to explain how DevOps impacts and involves DBAs in its goal of faster, more reliable software delivery.
The document discusses DevOps and the role of database administrators (DBAs) in a DevOps environment. It defines DevOps as emphasizing collaboration between development and IT operations to automate software delivery and infrastructure changes. Key aspects of DevOps covered include concepts like continuous delivery, configuration management, build automation, and virtualization. The document argues that DBAs should be involved in DevOps practices like testing, packaging, and monitoring databases to help ensure quality and provide value to development and operations teams.
This document discusses the role of database administrators (DBAs) in DevOps environments. It begins with an introduction to DevOps, emphasizing collaboration between developers and IT professionals. It then explores how DBAs are impacted, noting both opportunities for DBAs to influence decisions and embrace automation, as well as risks of being seen as roadblocks. The document provides overviews of various DevOps practices and tools that DBAs can learn, such as configuration management, continuous delivery, and GitHub. It argues that DBAs should update their skills while automating some traditional tasks, and embrace techniques like data virtualization, snapshots, and DataOps to remove databases as roadblocks to DevOps goals.
The Last Frontier: Virtualization, Hybrid Management and the Cloud (Kellyn Pot'Vin-Gorman)
This document discusses virtualization, hybrid management, and cloud computing. It begins with an introduction to virtualization and discusses trends showing increasing adoption of public cloud infrastructure and platforms. The document then explores how companies are migrating applications and data to the cloud using various approaches like backups, data migration tools, and virtualization. It argues that data virtualization provides benefits over traditional migration methods by reducing costs, network usage, and storage requirements when moving workloads to the cloud.
451 Research: Data Is the Key to Friction in DevOps (Delphix)
- The document discusses how data friction impacts DevOps initiatives and the benefits of using Delphix to remove data friction.
- It provides an overview of 451 Research findings that most organizations deploy code changes daily and have large, complex application changes. This puts pressure on development teams to access production-like data for testing.
- Choice Hotels' journey is presented as a case study where they implemented Delphix to automate provisioning of test databases from production data. This allowed developers faster access to fresh data for testing and removed bottlenecks in their testing cycles.
- The key benefits of Delphix are that it provides instant access to production-like data for various teams while ensuring data is secure and compliant through data masking.
The document discusses how the role of the database administrator (DBA) is evolving from a database-centric role to a DevOps and DataOps focused role. It notes that data is a source of friction for development teams due to "data gravity", but that virtualizing databases and creating "data pods" allows DBAs to remove this friction and enable self-service access to development data. This evolution is necessary for DBAs and organizations to support modern practices like DevOps in a world where data and development cycles are constantly increasing.
Accelerate Design and Development of Data Projects Using AWS (Delphix)
The document discusses accelerating data projects using AWS and Delphix. It describes how Dentegra uses Delphix on AWS to increase data agility and protection. Delphix allows Dentegra to provision development environments faster by masking and replicating only changed data from production to AWS. This reduces storage costs and speeds up application development cycles. The document also outlines benefits of AWS for data migration such as scalability, security, and cost effectiveness.
This document discusses strategies for migrating workloads to the cloud. It begins by reviewing current cloud trends, such as the growth of hybrid cloud environments. It then examines common migration approaches, such as backing up on-premises data and restoring in the cloud. However, it notes that this does not account for ongoing data loads and connectivity issues. The document emphasizes the importance of optimizing for the cloud prior to migration to avoid unexpected costs from storage, data transfer fees, and inefficient applications. It provides examples of cloud monitoring tools that can help with optimization and troubleshooting performance issues during and after migration.
This document discusses database management and cloud computing trends. It notes that most enterprises now have a multi-cloud strategy, with workloads running in both public and private clouds. Common cloud migration methods like backing up on-premises databases and restoring in the cloud are discussed. The importance of optimizing for the cloud to reduce costs is emphasized, such as minimizing data footprints and reducing data transfers. Popular cloud platforms like AWS, Azure, Google Cloud, and tools for monitoring cloud resources are also mentioned.
The document discusses challenges with moving databases to the cloud and proposes a solution using data virtualization. It summarizes that virtualizing databases with tools like Delphix and DBVisit allows for instant provisioning of development environments without physical copies. Databases are packaged into "data pods" that can be easily replicated and kept in sync. This streamlines cloud migrations by removing bottlenecks around copying and moving large amounts of database data.
The document discusses using data virtualization and masking to optimize database migrations to the cloud. It notes that traditional copying of data is inefficient for large environments and can incur high data transfer costs in the cloud. Using data virtualization allows creating virtual copies of production databases that only require a small storage footprint. Masking sensitive data before migrating non-production databases ensures security while reducing costs. Overall, data virtualization and masking enable simpler, more secure, and cost-effective migrations to cloud environments.
The Rise of DataOps: Making Big Data Bite Size with DataOps (Delphix)
Marc embraces database virtualization and containerization to help Dave's team adopt DataOps practices. This allows team members to access self-service virtual test environments on demand. It increases data accessibility by 10%, resulting in over $65 million in additional income. DataOps removes the biggest barrier by automating and accelerating data delivery to support fast development and testing cycles.
This document discusses the history of data and computing technology from the 19th century to present day. It covers early computer architecture from von Neumann and bottlenecks caused by the CPU and network. An example is given of how data collection and analysis in 1854 London could have prevented a cholera outbreak if taken seriously. The document argues that while technology has advanced, many challenges around data economics, architecture, and analysis remain. It questions whether we are still "farming dinosaurs" with outdated approaches to data management.
Enterprise Data Warehouse Optimization: 7 Keys to Success (Hortonworks)
You have a legacy system that no longer meets your current data needs, and replacing it isn’t an option. But don’t panic: modernizing your traditional enterprise data warehouse is easier than you may think.
Cloud Native, Cloud First and Hybrid: How Different Organizations are Approac... (Amazon Web Services)
The advent of highly scalable, easy-to-deploy technology is transforming both private and public entities – but it’s not a one-size-fits-all approach. Each organization has its own cloud journey to share. Some start with pilot projects, while others jump into mission-critical programs. Adopting the cloud doesn’t mean starting over – it’s about enhancing your existing infrastructure. In this session, hear firsthand from MTCnovo and the United Kingdom Data Archive (UKDA) about how they are using the cloud to build on their existing technologies and the valuable lessons they have learned along the way.
Speaker:
Nathan Cunningham, Associate Director, Big Data, UK Data Archive
Simone Hume, Business Development Manager, Amazon Web Services
Chris Martin, CTO, MTCnovo
Jonathan Snowball, CIO, MTCnovo
During the second half of 2016, IBM built a state-of-the-art Hadoop cluster with the aim of running massive scale workloads. The amount of data available to derive insights continues to grow exponentially in this increasingly connected era, resulting in larger and larger data lakes year after year. SQL remains one of the most commonly used languages for such analysis, but how do today’s SQL-over-Hadoop engines stack up to real BIG data? To find out, we decided to run a derivative of the popular TPC-DS benchmark using a 100 TB dataset, which stresses both the performance and SQL support of data warehousing solutions! Over the course of the project, we encountered a number of challenges such as poor query execution plans, uneven distribution of work, out-of-memory errors, and more. Join this session to learn how we tackled such challenges and the type of tuning that was required to the various layers in the Hadoop stack (including HDFS, YARN, and Spark) to run SQL-on-Hadoop engines such as Spark SQL 2.0 and IBM Big SQL at scale!
Speaker
Simon Harris, Cognitive Analytics, IBM Research
This presentation on Open Source and Cloud Technologies was given by Vizuri SVP Joe Dickman at the 2012 Destination Marketing Technology Forum in Raleigh, NC. For more information please visit our website at www.vizuri.com or email solutions@vizuri.com.
This document discusses virtualization, hybrid management, and cloud computing. It begins by defining virtualization as the creation of virtual versions of operating systems, servers, storage devices, databases, files or network resources rather than actual physical versions. It then discusses trends showing increasing adoption of virtualization and cloud computing over time. The document outlines methods companies use to migrate to the cloud, such as backups and restoring from backups. It argues that data virtualization is a better option as it optimizes migration for cloud cost structures and capabilities. Data virtualization can transform data across platforms, mask confidential data, and replicate data securely to the cloud. This provides a complete solution for cloud migrations.
The document discusses a webinar comparing the total cost of ownership (TCO) of hyperconverged infrastructure and public cloud solutions. It presents a TCO model comparing a SimpliVity hyperconverged cluster to an equivalent Amazon EC2 cloud configuration. Over 3 years, the analysis found that the SimpliVity solution had a TCO 22-49% lower than the AWS configuration. Key factors that make hyperconverged infrastructure more cost-effective include lower upfront capital costs through converged hardware, simplified management and expansion, and efficiency features. While public cloud remains viable for some workloads, the economics are no longer a given compared to hyperconverged appliances according to the analysis.
Cloud Computing: Powering the Future of Development and Testing (TechWell)
Developers and testers are under constant pressure to operate more efficiently, cut costs, and deliver on time. Without access to scalable, flexible, and cost-effective computing resources, these challenges are magnified. Brett Goodwin explains how to create scalable dev/test environments in the cloud, and shares best practices for reducing cycle time and decreasing project costs. Learn how scalable, cloud-based data centers can run software without complicated re-writes; enable rapid defect resolution with snapshots and clones; and provide global collaboration for multiple product and release teams. Brett presents a case study of Cushman & Wakefield, the world's largest privately held real estate services firm, which struggled with an on-premises development and testing environment. By moving its dev/test infrastructure to the cloud, the firm reduced provisioning time from days to minutes, doubled the number of projects supported within four months, and improved collaboration among dispersed teams.
There are options beyond a straightforward lift and shift into Infrastructure as a Service. This session looks at how Azure helps modernize applications faster using modern technologies like PaaS, containers, and serverless.
The document discusses containers and Docker Enterprise Edition (EE). It notes that by 2020, over 50% of organizations will be running containers in production. Containers simplify infrastructure by allowing applications to run on any infrastructure. Docker EE provides additional capabilities for enterprises like security features, automation, and support that are required beyond the open source Docker Engine. It highlights customer examples where Docker EE helped accelerate projects, increase scalability, and migrate applications to the cloud. The document promotes Docker services to help customers develop a containerization strategy and achieve benefits like cost savings, agility, and productivity gains.
The Netflix recipe for migrating your organization from building a datacenter based product to a cloud based product. First presented at the Silicon Valley Cloud Computing Meetup "Speak Cloudy to Me" on Saturday April 30th, 2011
This document discusses HPE Helion CloudSystem on HPE Hyper Converged 250. It provides an overview of HPE's hyperconverged solution and how it can be used to deploy HPE Helion CloudSystem. Some key benefits highlighted include fast time-to-value, predictable and linear scaling, lower costs and complexity of storage, and reduced footprint. Use cases discussed include small-scale virtual private clouds for production, starter clouds for fast time to value, and accelerating application testing and development with simple infrastructure as a service.
The document discusses how the author had an epiphany about using database virtualization to simplify patching and upgrades. It provides an example of how virtualizing databases with Delphix eliminates the need to repeatedly apply patches to each test environment and allows patches to be tested on virtual copies without impacting existing environments. It estimates this approach can save over 80% on storage usage and significantly reduce the time spent on routine database maintenance tasks.
The Webinar takes participants through the entire cloud migration life-cycle – from initial analysis to final migration. We evaluate the leading cloud DBMS offerings from Amazon, Microsoft and Oracle. We also compare IaaS and DBaaS to better understand the two architectures and identify the most appropriate use case for each platform.
We finish by providing RDX’s recommended database migration procedures and the vendor utilities you can leverage to ensure trouble-free cloud transitions. Learn from experts who have migrated dozens of on-premises systems to the cloud!
This document discusses DevOps and how it relates to database administrators (DBAs). It begins with a story about data corruption resulting from a lack of formal development processes. It then defines DevOps and discusses how including DBAs is important for efficiency. The document outlines common DevOps terms and tools and how database virtualization fits into the DevOps model. It addresses cultural challenges for DBAs in adopting DevOps and how DBAs can provide value through collaboration, skills updates, and familiarity with the DevOps toolchain.
Marc embraces database virtualization and containers to help Dave's development team overcome data issues slowing their work. Virtualizing the database and creating "data pods" allows self-service access and the ability to quickly provision testing environments. This enables the team to work more efficiently and meet sprint goals. DataOps is introduced to fully integrate data into DevOps practices, removing it as a bottleneck through tools that provide versioning, automation and developer-friendly interfaces.
This document discusses using virtualization and containers to improve database deployments in development environments. It notes that traditional database deployments are slow, taking 85% of project time for creation and refreshes. Virtualization allows for more frequent releases by speeding up refresh times. The document discusses how virtualization engines can track database changes and provision new virtual databases in seconds from a source database. This allows developers and testers to self-service provision databases without involving DBAs. It also discusses how virtualization and containers can optimize database deployments in cloud environments by reducing storage usage and data transfers.
This document discusses the transition from DevOps to DataOps. It begins by introducing the speaker, Kellyn Pot'Vin-Gorman, and their background. It then provides definitions and histories of DevOps and some common DevOps tools and practices. The document argues that database administrators (DBAs) need to embrace DevOps tools and practices like automation, version control, and database virtualization in order to stay relevant. It presents database virtualization and containerization as ways to overcome "data gravity" and better enable continuous delivery of database changes. Finally, it discusses how methodologies like Agile, Scrum, and Kanban can be combined with data-centric tools to transition from DevOps to DataOps.
Kellyn Pot’Vin-Gorman discusses DevOps tools for winning agility. She emphasizes that while many organizations automate testing, the DevOps journey is longer and involves additional steps like orchestration between environments, security, collaboration, and establishing a culture of continuous improvement. She also stresses that organizations should not forget about managing their data as part of the DevOps process and advocates for approaches like database virtualization to help enhance DevOps initiatives.
This document discusses virtualizing big data in the cloud using Delphix data virtualization software. It begins with an introduction of the presenter and their background. It then discusses trends in cloud adoption, including how most enterprises now use a hybrid cloud strategy. It also discusses how big data projects are increasingly being deployed in the cloud. The document demonstrates how Delphix can be used to virtualize flat files containing big data, eliminating duplication and enabling features like snapshots and cloning. It shows how files can be provisioned from a source to targets, including the cloud, and refreshed or rewound when needed. In summary, the document illustrates how Delphix virtualizes big data files to simplify deployment and management in cloud environments.
This document discusses test data management and how it can help improve testing efficiency. It notes that over 80% of organizations stated that receiving or refreshing test data took up over 90% of testing time. It then discusses how tools for test data management can help by quickly generating test data sets that match the testing cycle and help isolate failures. It also discusses challenges like data cloning being ineffective and not matching what developers and testers face in production. The document advocates for approaches like data virtualization that can deliver fast, full copies of production data for testing while ensuring security of non-production data through techniques like data masking.
Managing IT environment complexity in a Multi-Cloud World (Shashi Kiran)
IT environments are continuing to get complex. How do you better manage this to speed up digitization and application modernization efforts using environments-as-a-service?
The document provides an introduction to DevOps, including definitions of DevOps, the DevOps lifecycle, principles of DevOps, and why DevOps is needed. DevOps is a culture that promotes collaboration between development and operations teams to deploy code to production faster and more reliably through automation. The DevOps lifecycle includes development, testing, integration, deployment, and monitoring phases. Key principles are customer focus, shared responsibility, continuous improvement, automation, collaboration, and monitoring. DevOps aims to streamline software delivery, improve predictability, and reduce costs.
Enterprise DevOps and the Modern Mainframe Webcast Presentation (Compuware)
Compuware and CloudBees demonstrate how you can apply modern DevOps practices to your mainframe applications using Compuware ISPW and Topaz for Total Test with CloudBees Jenkins. Compuware Product Manager Steve Kansa and CloudBees DevOps Evangelist Brian Dawson will:
- Position the mainframe as part of your DevOps and CI/CD journey
- Explain how Jenkins automates mainframe source code management and testing
- Demo a CI/CD workflow on a COBOL application
Watch the full presentation on YouTube: https://www.youtube.com/watch?v=x4MWrPy3bKM.
The document discusses DevOps practices like continuous integration (CI) and continuous delivery/deployment (CD). It explains that DevOps aims to improve software development and operations by increasing automation, reducing deployment times, and enabling more frequent and safer software releases. CI principles include automating builds, testing, and deployments. CD builds on CI by further automating the software release process and reducing risks of major releases.
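To make the CI idea above concrete, here is a minimal sketch of a build-and-test gate written as a shell script. The make targets and deploy script are placeholders invented for illustration, not taken from the document.

    #!/usr/bin/env bash
    # Minimal continuous-integration gate: build, test, then (and only then) deploy.
    # The make targets and deploy script below are placeholders.
    set -euo pipefail        # abort on the first failing step

    echo "== Build =="
    make build               # placeholder build command

    echo "== Test =="
    make test                # the pipeline stops here if any test fails

    echo "== Deploy =="
    # In continuous delivery, this runs automatically once build and tests pass.
    ./deploy.sh staging      # placeholder deploy script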
The document discusses the Software Development Life Cycle (SDLC) and DevOps. It defines SDLC as a process used by the software industry to design, develop, and test high-quality software. SDLC aims to produce software that meets expectations within time and cost estimates. The document then discusses DevOps, defining it as a culture promoting collaboration between development and operations teams to deploy code to production faster using automation. It outlines the DevOps lifecycle and principles, why DevOps is needed to streamline the software delivery process, and tools used in DevOps.
The DevOps in Practices document provides an overview of DevOps practices and microservice architecture. It discusses that DevOps aims to reduce the time between introducing changes to a system and deploying those changes in a production environment. Microservices architecture breaks applications into smaller, independent services that are built around business capabilities. Netflix is highlighted as an example that pioneered this approach at a large scale using AWS. Key aspects of DevOps like continuous integration, infrastructure as code, and automated testing are explained in the context of enabling faster delivery with microservices.
Enabling multicloud in the enterprise with DevSecOps (Josh Boyd)
Core federal agencies are using multiple cloud providers to avoid vendor lock-in and optimize their workloads' infrastructure. Taking advantage of each cloud provider's strengths comes with some challenges: multicloud security and compliance, inventory tracking, resource utilization, and software delivery automation.
In this session, you'll see how Red Hat CloudForms and Red Hat OpenShift Container Platform, paired with Booz Allen’s Solutions Delivery Platform, addresses these challenges and brings governance to your DevOps pipeline and multicloud environment.
Amazon Web Services and PaaS - Enterprise Java for the Cloud Era? - Mark Prichard (jaxconf)
The extraordinary growth of Java during the last decade owed everything to the set of infrastructure services that application servers provided as part of the platform. However, TCO eventually drove the move to the cloud and PaaS (Platform as a Service) is set to deliver a standard run-time for the next generation of applications, replacing the proprietary infrastructure provided by the application server vendors. Now the question is: where do developers of real-world business applications look for a common set of standard infrastructure services? Is there a common framework that can provide essential application services, such as message queueing, push notifications, email integration, in-memory caching and processing? Amazon Web Services (AWS) with their highly scalable IaaS (Infrastructure as a Service) model are an obvious answer, but how best to combine Java's rich ecosystem of tools, frameworks and knowledge with the scale and cost-effectiveness of cloud-based web services? This session will help you to understand how you can deliver applications that make effective use of those services by using a Java PaaS, without being forced to support the underlying infrastructure. In this code-rich session, aimed at architects and developers, Mark Prichard of CloudBees will show how you can:
- Pass Amazon security credentials and configuration parameters to PaaS applications at run-time to provide customized environments
- Use JDBC and Amazon RDS (Relational Database Service) to provide resilient and performant relational data services
- Replace JMS queues and topics with Amazon SQS (Simple Queue Service) and SNS (Simple Notification Service) to develop cloud-based messaging applications
- Use Amazon's SES (Simple Email Service) from Java applications. We'll also look at other cloud e-mail services that offer easy integration with the PaaS model
- Run distributed caching solutions in the cloud using Amazon ElastiCache's in-memory distributed caching with Java PaaS deployments
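As a rough sketch of the messaging pattern described above, the commands below drive SQS and SNS from the AWS CLI rather than the Java SDK used in the session; the queue and topic names are placeholders invented for illustration.

    #!/usr/bin/env bash
    # Sketch: cloud-based messaging with Amazon SQS and SNS via the AWS CLI.
    # Queue and topic names are placeholders; the session itself uses Java.
    set -euo pipefail

    # Create a queue, then send and receive a message (the JMS-queue replacement).
    QUEUE_URL=$(aws sqs create-queue --queue-name demo-queue \
                --query QueueUrl --output text)
    aws sqs send-message --queue-url "$QUEUE_URL" --message-body "order created"
    aws sqs receive-message --queue-url "$QUEUE_URL" --max-number-of-messages 1

    # Create a topic and publish to it (the JMS-topic replacement).
    TOPIC_ARN=$(aws sns create-topic --name demo-topic \
                --query TopicArn --output text)
    aws sns publish --topic-arn "$TOPIC_ARN" --message "order shipped"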
This presentation talks about: What is DevOps? Why is it required for the Information Technology industry? And, more importantly, what are the DevOps trends in 2019 and later?
Managing ScaleIO as Software on Mesos - David vonThenen - Dell EMC World 2017 ({code} by Dell EMC)
Software-defined storage can be complex, but it is a key part of modern data centers. {code}'s ScaleIO Framework for Apache Mesos is a storage framework that automates the complete lifecycle of the ScaleIO storage platform on top of commodity hardware. Moving storage to a framework reduces the complexity involved and transforms the operational approach. Watch how the Mesos framework simplifies all aspects of ScaleIO to provide storage for containerized applications.
These are my keynote slides from SQL Saturday Oregon 2023 on AI and the Intersection of AI, Machine Learning and Economic Challenges as a Technical Specialist.
This document discusses migrating high IO SQL Server workloads to Azure. It begins by explaining that every company has at least one "whale" workload that requires high CPU, memory and IO. These whales can be challenging to move to the cloud. The document then provides tips on determining if a workload's issue is truly high IO or caused by another factor. It discusses various wait events that may indicate IO problems and tools for monitoring IO performance. Finally, it covers some considerations for IO in the cloud.
This document provides an overview of options for running Oracle solutions on Microsoft Azure infrastructure as a service (IaaS). It discusses architectural considerations for high availability, disaster recovery, storage, licensing, and migrating workloads from Oracle Exadata. Key points covered include using Oracle Data Guard for replication and failover, storage options like Azure NetApp Files that can support Exadata workloads, and identifying databases that are not dependent on Exadata features for lift and shift to Azure IaaS. The document aims to help customers understand how to optimize their use of Oracle solutions when deploying to Azure.
This document provides guidance and best practices for migrating database workloads to infrastructure as a service (IaaS) in Microsoft Azure. It discusses choosing the appropriate virtual machine series and storage options to meet performance needs. The document emphasizes migrating the workload, not the hardware, and using cloud services to simplify management like automated patching and backup snapshots. It also recommends bringing existing monitoring and management tools to the cloud when possible rather than replacing them. The key takeaways are to understand the workload demands, choose optimal IaaS configurations, leverage cloud-enabled tools, and involve database experts when issues arise to address the root cause rather than just adding resources.
This document discusses strategies for managing ADHD as an adult. It begins by describing the three main types of ADHD - inattentive, hyperactive-impulsive, and combined. It then lists some of the biggest challenges of ADHD like executive dysfunction, disorganization, lack of attention, procrastination, and internal preoccupation. The document provides tips and strategies for overcoming each challenge through organization, scheduling, list-making, breaking large tasks into small ones, and using technology tools. It emphasizes finding accommodations that work for the individual and their specific ADHD presentation and challenges.
This document provides guidance and best practices for using Infrastructure as a Service (IaaS) on Microsoft Azure for database workloads. It discusses key differences between IaaS, Platform as a Service (PaaS), and Software as a Service (SaaS). The document also covers Azure-specific concepts like virtual machine series, availability zones, storage accounts, and redundancy options to help architects design cloud infrastructures that meet business requirements. Specialized configurations like constrained VMs and ultra disks are also presented along with strategies for ensuring high performance and availability of database workloads on Azure IaaS.
Kellyn Gorman shares her experience living with ADHD and strategies for turning it into a positive. She discusses how ADHD impacted her childhood and how it still presents challenges as an adult. However, with the right tools and understanding of her needs, she is able to find success. She provides tips for organizing, prioritizing tasks, managing distractions, and accessing support. The key is learning about ADHD and how to structure one's environment and routine to play to one's strengths rather than fighting against the condition.
Migrating Oracle workloads to Azure requires understanding the workload and hardware requirements. It is important to analyze the workload using the Automatic Workload Repository (AWR) report to accurately size infrastructure needs. The right virtual machine series and storage options must be selected to meet the identified input/output and capacity needs. Rather than moving existing hardware, the focus should be migrating the Oracle workload to take advantage of cloud capabilities while ensuring performance and high availability.
This document discusses overcoming silos when implementing DevOps for a new product at a company. The teams involved were dispersed globally and siloed in their tools and processes. Challenges included isolating workload sizes, choosing a Linux image, and team ownership issues. The solution involved aligning teams, automating deployment with Bash scripts called by Terraform and Azure DevOps, and evolving the automation. This improved communication, decreased teams from 120 people to 7, and increased deployments and profits for the successful project.
This document discusses best practices for migrating database workloads to Azure Infrastructure as a Service (IaaS). Some key points include:
- Choosing the appropriate VM series like E or M series optimized for database workloads.
- Using availability zones and geo-redundant storage for high availability and disaster recovery.
- Sizing storage correctly based on the database's input/output needs and using premium SSDs where needed (see the sketch after this list).
- Migrating existing monitoring and management tools to the cloud to provide familiarity and automating tasks like backups, patching, and problem resolution.
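As a rough illustration of the sizing and storage points above, the Azure CLI sketch below creates an E-series VM and attaches a premium SSD data disk; the resource names, region, VM size, and disk size are placeholders, not recommendations from the document.

    #!/usr/bin/env bash
    # Sketch: provision a database VM on Azure IaaS with premium SSD storage.
    # All names, the region, the VM size, and the disk size are placeholders.
    set -euo pipefail

    az group create --name demo-db-rg --location eastus2

    # Memory-optimized E-series VM pinned to an availability zone.
    az vm create \
      --resource-group demo-db-rg \
      --name demo-db-vm \
      --image Ubuntu2204 \
      --size Standard_E8s_v5 \
      --zone 1 \
      --admin-username azureuser \
      --generate-ssh-keys

    # Separate premium SSD data disk sized for the database's IO needs.
    az disk create --resource-group demo-db-rg --name demo-db-data \
      --size-gb 1024 --sku Premium_LRS
    az vm disk attach --resource-group demo-db-rg --vm-name demo-db-vm \
      --name demo-db-data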
This document provides an overview of how to successfully migrate Oracle workloads to Microsoft Azure. It begins with an introduction of the presenter and their experience. It then discusses why customers might want to migrate to the cloud and the different Azure database options available. The bulk of the document outlines the key steps in planning and executing an Oracle workload migration to Azure, including sizing, deployment, monitoring, backup strategies, and ensuring high availability. It emphasizes adapting architectures for the cloud rather than directly porting on-premises systems. The document concludes with recommendations around automation, education resources, and references for Oracle-Azure configurations.
This document discusses the future of data and the Azure data ecosystem. It highlights that by 2025 there will be 175 zettabytes of data in the world and the average person will have over 5,000 digital interactions per day. It promotes Azure services like Power BI, Azure Synapse Analytics, Azure Data Factory and Azure Machine Learning for extracting value from data through analytics, visualization and machine learning. The document provides overviews of key Azure data and analytics services and how they fit together in an end-to-end data platform for business intelligence, artificial intelligence and continuous intelligence applications.
This is the second session of the learning pathway at PASS Summit 2019, which still works as a stand-alone session to teach you how to write proper Linux Bash scripts.
This document discusses techniques for optimizing Power BI performance. It recommends tracing queries using DAX Studio to identify slow queries and refresh times. Tracing tools like SQL Profiler and log files can provide insights into issues occurring in the data sources, Power BI layer, and across the network. Focusing on optimization by addressing wait times through a scientific process can help resolve long-term performance problems.
The document provides tips and tricks for scripting success on Linux. It begins with introducing the speaker and emphasizing that the session will focus on best practices for those already familiar with BASH scripting. It then details various tips across multiple areas: setting the shell and environment variables, adding headers and comments to scripts, validating input, implementing error handling and debugging, leveraging utilities like CRON for scheduling, and ensuring scripts continue running across sessions. The tips are meant to help authors write more readable, maintainable, and reliable scripts.
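To give a feel for the practices listed above, here is a tiny Bash skeleton that applies several of them (header comment, strict shell options, input validation, error handling, and a cron note); it is a generic sketch, not a script from the session.

    #!/usr/bin/env bash
    #---------------------------------------------------------------------
    # backup_wrapper.sh (illustrative name)
    # Purpose : show a readable, defensive script skeleton
    # Usage   : ./backup_wrapper.sh <source_dir> <target_dir>
    #---------------------------------------------------------------------
    set -euo pipefail      # fail fast on errors, unset variables, pipe failures

    log()  { printf '%s %s\n' "$(date '+%F %T')" "$*"; }
    fail() { log "ERROR: $*" >&2; exit 1; }
    trap 'fail "line $LINENO exited with status $?"' ERR

    # Validate input before doing any work.
    [[ $# -eq 2 ]] || fail "expected 2 arguments, got $#"
    [[ -d $1 ]]    || fail "source directory '$1' does not exist"

    src=$1
    dst=$2

    log "copying from $src to $dst"
    mkdir -p "$dst"
    cp -a "$src"/. "$dst"/
    log "done"

    # For unattended runs, schedule via cron, e.g.:
    #   0 2 * * * /path/to/backup_wrapper.sh /data /backup >> /var/log/backup.log 2>&1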
This document discusses connecting Oracle Analytics Cloud (OAC) Essbase data to Microsoft Power BI. It provides an overview of Power BI and OAC, describes various methods for connecting the two including using a REST API and exporting data to Excel or CSV files, and demonstrates some visualization capabilities in Power BI including trends over time. Key lessons learned are that data can be accessed across tools through various connections, analytics concepts are often similar between tools, and while partnerships exist between Microsoft and Oracle, integration between specific products like Power BI and OAC is still limited.
Mentors provide guidance and support, while sponsors use their influence to advocate for and promote a protege's career. Obtaining both mentors and sponsors is important for advancing in one's field and overcoming biases, yet women often have fewer sponsors than men. The document outlines strategies for how women can find and work with sponsors, and how men can act as allies in supporting women. Developing representation of women in technology fields through mentorship and sponsorship can help initiatives become self-sustaining over time.
Kellyn Pot'Vin-Gorman presented on GDPR compliance. Some key points include:
- GDPR went into effect in May 2018 and covers any data belonging to an EU citizen.
- Fines for non-compliance can be up to 4% of annual revenue or €20 million.
- DBAs play a role in identifying critical data, auditing processes, and reporting on compliance.
- An AI tool assessed the privacy policies of 14 major companies and found they all failed to meet GDPR requirements.
- Achieving compliance requires security frameworks, data mapping, encryption, access controls, and dedicated teams.
Generating privacy-protected synthetic data using Secludy and Milvus (Zilliz)
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Northern Engraving | Nameplate Manufacturing Process - 2024 (Northern Engraving)
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
Introduction of Cybersecurity with OSS at Code Europe 2024 (Hiroshi SHIBATA)
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
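As a generic illustration of dependency management with a package manager, the sketch below uses Bundler (which the talk covers) to pin and install gems and then scan them for known CVEs; the gem names and version constraints are placeholders, and bundler-audit is a separate gem rather than part of Bundler itself.

    #!/usr/bin/env bash
    # Sketch: managing and auditing Ruby dependencies with Bundler.
    # Gem names and version constraints are placeholders for illustration.
    set -euo pipefail

    # Declare and install dependencies with pinned version constraints.
    bundle init                          # creates a Gemfile in the current directory
    bundle add rails --version "~> 7.1"  # records the dependency and installs it
    bundle add puma  --version "~> 6.4"
    bundle install                       # resolves everything into Gemfile.lock

    # Check the locked dependency set against the public advisory database.
    gem install bundler-audit            # separate gem, not part of Bundler itself
    bundle-audit update                  # refresh the ruby-advisory-db data
    bundle-audit check                   # report any gems with published CVEs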
Skybuffer SAM4U tool for SAP license adoption (Tatiana Kojar)
Manage and optimize your license adoption and consumption with SAM4U, a complimentary SAP software asset management tool for customers.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
AppSec PNW: Android and iOS Application Security with MobSF (Ajin Abraham)
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
At this talk we will discuss DDoS protection tools and best practices, discuss network architectures and what AWS has to offer. Also, we will look into one of the largest DDoS attacks on Ukrainian infrastructure that happened in February 2022. We'll see, what techniques helped to keep the web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on Ukraine experience
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
DevOps derives its name from development and operations, two groups in which DBAs often have a foot in each camp.
There is a strong focus on collaboration, grounded in methodology, process, and practice.
The goal is to release more frequently, more successfully, and with fewer bugs.
At the Agile 2008 conference, Andrew Clay Shafer and Patrick Debois discussed "Agile Infrastructure."
The term DevOps was popularized through a series of "devopsdays" events, starting in 2009 in Belgium.
Agile and DevOps aren’t one and the same, but, as is well known, DevOps came out of Agile’s success.
Agile is largely about culture, whereas DevOps focuses more on organizational change.
Build automation is the process of automating the creation of a software build and its associated processes, including compiling source code into binary code, packaging that binary code, and running automated tests.
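None of the following code is from the slides; as a minimal sketch (with hypothetical paths and commands), a build automation step can be a single script that compiles, packages, and tests in one repeatable run:

    # Minimal build-automation sketch: compile, package, and test in one repeatable run.
    # The commands and paths are hypothetical placeholders.
    import subprocess
    import sys

    def run(step, cmd):
        print(f"== {step} ==")
        if subprocess.run(cmd, shell=True).returncode != 0:
            sys.exit(f"{step} failed")

    run("compile", "javac -d build src/*.java")        # compile source code into binary code
    run("package", "jar cf build/app.jar -C build .")  # package the binary code
    run("test", "python -m pytest tests")              # run automated tests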
Configuration management (CM) is a systems engineering process for establishing and maintaining consistency of a product's performance, functional, and physical attributes with its requirements, design, and operational information throughout its life.
A DBA’s desire for low risk and stability helps here, since we favor routines that produce expected outcomes.
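As a rough sketch of the desired-state thinking behind configuration management (the parameter file and settings below are hypothetical; real CM tools such as Ansible express this declaratively), an idempotent step only changes anything when the system has drifted:

    # Idempotent configuration step: rewrite the file only when it has drifted from the desired state.
    # File name and parameters are hypothetical.
    DESIRED = {"max_connections": "200", "log_min_duration_statement": "500"}

    def apply_config(path="db.conf"):
        try:
            with open(path) as f:
                current = dict(line.strip().split("=", 1) for line in f if "=" in line)
        except FileNotFoundError:
            current = {}
        merged = {**current, **DESIRED}
        if merged == current:
            print("already in the desired state")   # routine run, expected outcome
            return
        with open(path, "w") as f:
            for key, value in merged.items():
                f.write(f"{key}={value}\n")
        print("configuration brought back to the desired state")

    apply_config()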
Continuous delivery (CD) is a software engineering approach in which teams produce software in short cycles, pushing incremental updates to applications in production. A straightforward and repeatable deployment process is important for continuous delivery.
This is another area that introduces risk, so DBAs can be very averse to it, but the focus on a single feature helps minimize the impact and often isolates any issues.
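As a hedged sketch of a "straightforward and repeatable deployment process" (the step scripts below are hypothetical), the point is that every incremental update follows the same small, automated path and stops at the first failure:

    # Repeatable delivery pipeline: the same ordered steps run for every change.
    import subprocess

    PIPELINE = [
        ("build",   ["python", "build.py"]),       # hypothetical build step
        ("migrate", ["python", "migrate_db.py"]),  # apply the incremental database change
        ("deploy",  ["python", "deploy.py"]),      # push the single feature to production
        ("verify",  ["python", "smoke_test.py"]),  # confirm before declaring success
    ]

    for name, cmd in PIPELINE:
        print(f"running {name} ...")
        subprocess.run(cmd, check=True)            # raises and stops the release on failure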
Release orchestration is the use of tools like XLRelease, which manage software releases from the development stage through to the actual release itself.
I’m going to add to this definition with data version control.
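The slide doesn't define data version control further; as a hypothetical sketch, it can be as simple as recording which database change scripts have been applied, so the state of a database can be tied to a release just as code is:

    # Track applied change scripts so the database's "version" is known for each release.
    # Table name, directory, and scripts are hypothetical; SQLite keeps the example self-contained.
    import pathlib
    import sqlite3

    conn = sqlite3.connect("example.db")
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version ("
                 "script TEXT PRIMARY KEY, applied_at TEXT DEFAULT CURRENT_TIMESTAMP)")

    def apply_pending(migrations_dir="migrations"):
        applied = {row[0] for row in conn.execute("SELECT script FROM schema_version")}
        for script in sorted(pathlib.Path(migrations_dir).glob("*.sql")):
            if script.name not in applied:
                conn.executescript(script.read_text())   # run the change script
                conn.execute("INSERT INTO schema_version (script) VALUES (?)", (script.name,))
                conn.commit()
                print(f"applied {script.name}")

    apply_pending()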
In computing, virtualization means creating a virtual version of a device or resource, such as a server, storage device, network, or even a database. The framework divides the resource into one or more execution environments. For data, this can result in a golden copy, or source, that acts as a centralized location and removes duplicated data. Reads and writes that are unique to a given copy are stored with that copy, while duplicate blocks are kept only once.
Point out the engine and size after we’ve compressed and de-duplicated.
Note that each of the VDBs will take approximately 5-10 GB vs. 1 TB to offer a full read/write copy of the production system (see the rough arithmetic below).
It will do so in just a matter of minutes.
Note that this can also be done for the application tier!
Over 80% of the time is spent waiting for RDBMSs (relational databases) to be refreshed; developers and testers are waiting on data to perform their primary functions.
This allows for faster and less costly migrations to the cloud, too.
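As a back-of-the-envelope check on the storage claim above (assuming roughly 10 GB of unique blocks per virtual copy and ten dev/test copies, illustrative numbers only):

    # Rough storage arithmetic: full physical copies vs. virtual copies sharing one golden source.
    full_copy_gb = 1000    # ~1 TB per traditional full copy
    virtual_gb   = 10      # ~5-10 GB of unique read/write blocks per virtual copy
    copies       = 10      # e.g. ten development and test environments

    traditional = copies * full_copy_gb                # 10,000 GB
    virtualized = full_copy_gb + copies * virtual_gb   # 1,100 GB (one golden copy plus deltas)
    print(f"traditional: {traditional} GB, virtualized: {virtualized} GB")
    print(f"storage saved: {100 * (1 - virtualized / traditional):.0f}%")   # roughly 89%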
Containers package software into standardized units for development, shipment, and deployment. A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, and settings.
We refer to a container as a template in our product.
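As a small illustration of the container idea (assuming the Docker Python SDK is installed and a local Docker daemon is running; the image and command are arbitrary):

    # Run a throwaway container from a standard image: everything the software needs
    # (runtime, system tools, libraries, settings) travels inside the image itself.
    import docker

    client = docker.from_env()
    output = client.containers.run("alpine:3.19", ["echo", "hello from a container"], remove=True)
    print(output.decode().strip())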
This is a cornerstone for developers and testers. As DBAs, we know the pain when a developer comes to us to flash back a database or, before that, to recover (or logically recover, via import or Data Pump) individual objects. What if the developer or tester could do this for themselves?
This is the interface for developers and testers: they can bookmark before important tasks or rewind to any point in the process, and they can bookmark and branch to cover full development and testing needs.
This may appear to be a traffic disaster of changes, but to developers with Agile experience, a “sprint” looks just like this. Different sprints are quick runs and merges, where developers work separately on code that must merge successfully at the correct intersection and be deployed.
Versioning with source control is displayed at the top, using virtual images. You can see each iteration of the sprints.
The middle section shows the branches that occur during the development process. A virtual copy can be spun up from another virtual copy, which makes it easier for developers to build on the work another developer has produced.
Stopping points and releases via a clone take minutes vs. hours or days.
The maturity of the DevOps environment will determine how siloed or how blended your role in DevOps will be compared with your standard role as a DBA.
Methodologies provide a format or guide to work from; hybrid approaches often work best in practice.
Collaboration methods ensure that communication continues when team members return to their desks.
Deployment tools help with documentation and lessons learned.
Build tools help with automation and orchestration.
Scrum focuses on features, bug fixes, and backlog debt. It serves very large teams, including those of 800 or more.
Lean’s goal is to eliminate all waste and excess demand on resources, and to deliver faster and more effectively each time.
XP is one of the most controversial, due to its ability to deliver, even at large companies, every couple of minutes if required. It is a very disciplined approach.
Crystal is often known by its variants: Crystal Clear, Crystal Yellow, Crystal Orange, and others.
Often uses a whiteboard with sticky notes…
Like Rapid Deploy, it is more focused on delivering a single feature as the product.
Ant is another Java-based build tool that is part of the Apache open-source project.
It is similar to Make, with build files written in XML.
This Groovy script executes another script, making it valuable in environments that already have a number of mature scripts in place that should be reused in automation.
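The Groovy script itself isn't reproduced here; as a rough sketch of the same pattern in Python (the script names are hypothetical), one automation entry point simply reuses the mature scripts that already exist:

    # Wrapper that reuses existing, proven scripts instead of rewriting them for automation.
    import subprocess

    EXISTING_SCRIPTS = ["refresh_stats.sh", "rebuild_indexes.sh"]   # hypothetical, already-mature scripts

    for script in EXISTING_SCRIPTS:
        print(f"reusing {script}")
        subprocess.run(["bash", script], check=True)                # fail fast if a reused script fails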