Bringing Databases Into the DevOps Fold
USING DEVOPS TOOLS AND TECHNIQUES TO ACCELERATE DATABASE-DRIVEN PROJECTS
DevOps is all about the fast iteration of
application changes and then quickly
deploying those applications to the user
base. However, for DevOps to be a true
success in the enterprise, teams must
leverage automation as much as possible
to remove time-consuming manual steps
from the delivery pipeline.
For many, the database has become the stumbling block when it comes to
achieving DevOps nirvana, simply because databases tend to be treated
as separate entities in the development process and are often siloed in
their own world of change management and provisioning.
For businesses striving to maximize the potential of DevOps, the traditional
methodologies around database design, modification, management and
provisioning must evolve into something that works hand in hand with
DevOps teams. Otherwise, DevOps teams can expect delays or time
lapses in delivery schedules if they are forced to wait for DBAs to make
database changes.
In other words, database changes must become part of the DevOps
process to achieve the agility required by today’s businesses. Failure to
integrate database change processes as part of a DevOps practice will
slow down projects, preventing the use of DevOps practices to achieve
desired shorter iterations and faster releases.
“Agile/DevOps methodologies can greatly improve data projects,
specifically in the data testing arena,” said Bill Hayduk, founder and CEO
of RTTS. “Testing is a key DevOps emphasis and most organizations do
not properly validate the ETL processes that produce the data which they
utilize to make everyday business decisions.”
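Hayduk's point about validating ETL output lends itself to automation. As a rough sketch, a data test can compare row counts and a cheap per-column fingerprint between a source table and its ETL target and fail the pipeline on any mismatch; the table and column names below are hypothetical, SQLite stands in for the real systems, and a dedicated tool such as QuerySurge goes considerably further.

```python
# Minimal sketch of automated data testing for an ETL flow: compare row
# counts and a cheap per-column fingerprint between source and target.
# Table and column names are hypothetical; SQLite stands in for real systems.
import sqlite3

def fingerprint(conn, table, column):
    # (row count, crude content checksum); real tools compare far more.
    return conn.execute(
        f"SELECT COUNT(*), COALESCE(SUM(LENGTH(CAST({column} AS TEXT))), 0) "
        f"FROM {table}"
    ).fetchone()

def validate(source, target, table, columns):
    mismatches = []
    for col in columns:
        src, tgt = fingerprint(source, table, col), fingerprint(target, table, col)
        if src != tgt:
            mismatches.append((col, src, tgt))
    return mismatches

if __name__ == "__main__":
    source, target = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
    for db in (source, target):
        db.execute("CREATE TABLE orders (order_id INTEGER, amount TEXT)")
    source.executemany("INSERT INTO orders VALUES (?, ?)", [(1, "9.99"), (2, "15.00")])
    target.executemany("INSERT INTO orders VALUES (?, ?)", [(1, "9.99")])  # ETL dropped a row
    for col, src, tgt in validate(source, target, "orders", ["order_id", "amount"]):
        print(f"MISMATCH in {col}: source={src} target={tgt}")
```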
Tearing Down the Database Silos
DevOps has transformed how development and
operations teams work together, effectively
removing the barriers that kept development
and operations isolated from one another. The
net result of tearing down those silos was
enhanced productivity and more effective
teamwork. Enterprises found that adopting agile
methodologies and DevOps processes brought
significant improvement to the delivery cycle
of applications and better incorporated user
feedback, improving quality.
“DevOps methodologies create a cultural
expectation of repeatability that can be applied
to databases as well,” noted Justin McCarthy,
CTO and co-founder of strongDM. “For example:
structured forward data migrations are common
practice, but DevOps practices also unlock access
to repeatable testing of down migrations, partial
migration, multi-version schema compatibility,
backup recovery, and even security checks.”
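McCarthy's list of repeatable checks can be made concrete with a small test harness. The sketch below uses SQLite purely as a stand-in and a hypothetical migration pair; the pattern is what matters: apply the up migration, apply the down migration, and assert the schema returns exactly to its starting point.

```python
# Sketch of a repeatable up/down migration test, with SQLite as a stand-in
# and a hypothetical migration pair. (DROP COLUMN needs SQLite 3.35+.)
import sqlite3

UP = "ALTER TABLE customers ADD COLUMN loyalty_tier TEXT"
DOWN = "ALTER TABLE customers DROP COLUMN loyalty_tier"

def schema_snapshot(conn):
    # Column layout of every table, via PRAGMA table_info.
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
    return {t: conn.execute(f"PRAGMA table_info({t})").fetchall() for t in tables}

def test_migration_round_trip():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
    before = schema_snapshot(conn)
    conn.execute(UP)    # forward migration
    conn.execute(DOWN)  # down migration
    assert schema_snapshot(conn) == before, "down migration did not restore the schema"

if __name__ == "__main__":
    test_migration_round_trip()
    print("up/down round trip OK")
```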
However, even the most ardent adopters of
DevOps are finding that there is still a long way
to go before achieving everything that DevOps
promises. Case in point are the silos that have
been built around data. Those silos have been
reinforced by security concerns, compliance
concerns and, naturally, the DBAs, who are
responsible for databases and the data within.
“For a long time there has been a separation
between DBAs, developers, operations and
testers, which has triggered countless late-night
sessions, confusion between groups and data
corruption,” Hayduk said. “By integrating teams
and fostering collaboration, many of the mistakes
which impact deployments can be avoided.”
QuerySurge: the smart Data Testing solution

Test Automation: Automate your data validation and data testing processes
Query Wizards: Create tests visually, without any coding
Cross-Platform: Test against Hadoop, NoSQL, data warehouses, BI reports, DBs, flat files, XML, Excel, JSON files, mainframe files and more
Analytics & Intelligence: Leverage our Data Analytics Dashboard and Data Intelligence Reports to provide visualization into your data
DevOps: First-of-its-kind full DevOps solution for Continuous Data Testing

Improve your data quality at speed. QuerySurge is the smart Data Testing solution that automates data validation of Big Data, Data Warehouses, Enterprise Applications and Business Intelligence reports with full DevOps functionality for continuous testing.

Free Trial
• 3-day trial in the cloud
• 15-day trial download
• 30-day POC
For DevOps teams, the database silo has
become a point of contention, one that
introduces delays into the DevOps cycle.
Developers are finding they must wait for
DBAs to execute on database changes
before they are able to move ahead
with application iterations. While those
delays may not be intentional, they are an
indication of how databases are siloed. In
their quest to maintain quality, preserve
reliability, guarantee compatibility and
ensure security, DBAs rely on their own
manual processes, checklists and testing
methodologies, which are anything but agile.
With so much at stake, DBAs have been
resistant to change, especially if that
change introduces automation. The irony
here is that automation has the potential to
reduce errors and prevent manual mistakes
while enhancing security. Automation
has become one of the cornerstones of
DevOps, proven to bring forth efficiencies
and enhance productivity. What’s more,
automation that drives workflows frees
developers to work on more important
tasks, further enhancing productivity and
speeding the delivery pipeline.
“The goal of DevOps methodologies is
to reduce the friction between groups
dedicated to building software and
those responsible for software care and
feeding in production,” Hayduk said. “In
the data space, this means proper ETL
implementation, vetted by continuous
data testing, which lowers the risk that
business stakeholders take action based
on incorrect data.”
Tearing down the database silo
requires a few tasks, some of which are
management-centric (such as getting
buy-in from stakeholders) and others
purely technical. On the management side
of the equation, pointing to the achievements
of DevOps elsewhere in the organization goes a long way toward assuaging
DBA and C-suite fears, while the promise
of increased agility helps to drive the
narrative of what serves the business best.
Ultimately, demonstrating how DevOps
practices facilitate shorter iterations and
faster releases goes a long way toward
establishing the need to bring DevOps to
the database, which can create a leaner
and faster development process.
The technical aspects of tearing down
database silos can be summed up
as leveraging DevOps tooling for
automation, collaboration and change
management. The DevOps framework
fits into the narrative of database
development and management quite well
and introduces the concept of continuous
testing, automation and agility to
databases, which were once thought of as
monolithic lakes of data.
Bringing DevOps to Databases
The objective of bringing DevOps processes to database functions, such as change management, is to increase the speed at which database changes are delivered. Yet incorporating DevOps into the realm of databases comes with prerequisites that call for some creative thinking, along with some foundational changes.
One challenge comes in the form of tightly coupled
architectures, wherein central databases are tightly
integrated into multiple applications or systems.
Problems can arise when a change is made to
a database to support a particular application
modification, but the change has the unintended
effect of disrupting other applications. When multiple
applications are tightly and directly integrated with a
database, changes can cause a cascade of failures.
“The benefits of bringing DevOps to the data world
are similar to bringing DevOps to the development
cycle. When a proper DevOps process is put in place,
teams see faster delivery cycles, higher data quality,
faster issue identification and more collaboration
between groups,” Hayduk said.
One way to reduce the possibility of those problems is to adopt a microservices architecture, which consists of independently deployable services, each designed for a specific business function. Every microservice exposes its data through well-defined interfaces, so other services and applications no longer interact with the database directly. A suite of microservices can therefore act as the layer between applications and databases, which removes many of the problems created by database schema changes and turns a monolithic data environment into something far more adaptable and agile, able to support rapid application iterations. However, developers will still need to address how security is integrated into the process.
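A minimal sketch of that decoupling, with hypothetical names: applications call a small service that owns its data behind a well-defined interface instead of querying the shared database, so a schema change stays contained within the one service that owns it.

```python
# Sketch of a microservice that owns its data behind a well-defined interface.
# All names (OrderService, orders table, columns) are hypothetical; the point
# is that callers depend on this interface, never on the schema behind it.
import sqlite3
from dataclasses import dataclass
from typing import Optional

@dataclass
class Order:
    order_id: int
    customer: str
    total: float

class OrderService:
    """The only component allowed to touch the orders schema."""

    def __init__(self, db_path: str = ":memory:"):
        self._conn = sqlite3.connect(db_path)
        self._conn.execute(
            "CREATE TABLE IF NOT EXISTS orders "
            "(order_id INTEGER PRIMARY KEY, customer TEXT, total REAL)"
        )

    def place_order(self, customer: str, total: float) -> Order:
        cur = self._conn.execute(
            "INSERT INTO orders (customer, total) VALUES (?, ?)", (customer, total))
        self._conn.commit()
        return Order(cur.lastrowid, customer, total)

    def get_order(self, order_id: int) -> Optional[Order]:
        row = self._conn.execute(
            "SELECT order_id, customer, total FROM orders WHERE order_id = ?",
            (order_id,)).fetchone()
        return Order(*row) if row else None

# In practice other applications would reach OrderService over HTTP or gRPC
# rather than importing it, so a schema change never escapes this service.
if __name__ == "__main__":
    svc = OrderService()
    print(svc.get_order(svc.place_order("acme", 42.0).order_id))
```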
“If you're going to treat infrastructure as code, you
also need to treat security as code,” McCarthy said.
“Whenever you provision a new database, server or
k8s cluster, you should also provision permissions,
assign using role-based access controls in order to
enforce least privilege by default. Finally, the entire
process is fully auditable so you can easily answer
who did what, when & where.”
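McCarthy's "security as code" idea can be sketched as provisioning code that always pairs a new database with least-privilege roles and grants. The role names and statements below are hypothetical and follow common PostgreSQL syntax; in practice this would typically live in infrastructure-as-code tooling rather than a hand-rolled script.

```python
# Sketch of "security as code": whenever a database is provisioned, the
# least-privilege roles and grants are generated from the same versioned code.
# Role names and grants are hypothetical; statements use common PostgreSQL
# syntax and are only printed here, not executed.
ROLES = {
    "app_read": ["SELECT"],
    "app_write": ["SELECT", "INSERT", "UPDATE"],
    "migrator": ["ALL PRIVILEGES"],
}

def provisioning_sql(database: str, schema: str = "public") -> list:
    statements = [f"CREATE DATABASE {database};"]
    for role, privileges in ROLES.items():
        statements.append(f"CREATE ROLE {role} NOLOGIN;")
        statements.append(
            f"GRANT {', '.join(privileges)} ON ALL TABLES "
            f"IN SCHEMA {schema} TO {role};")
    return statements

if __name__ == "__main__":
    # Checked into source control next to the infrastructure code, so every
    # new database starts from the same auditable, least-privilege baseline.
    print("\n".join(provisioning_sql("orders_db")))
```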
Microservices aside, there are many other
challenges facing those looking to embrace DevOps
in the realm of the database, including discarding
some traditional approaches and treating change
management differently. The traditional approach to schema changes was a waterfall process, in which developers would make changes to a database only when absolutely needed and then describe those changes either in code or as SQL scripts. Only then would DBAs review the changes, test them and coordinate their deployment to production. Because DBA input arrived just prior to release, any modifications or corrections at that late stage were costly and time-consuming.
“DevOps ideals can help us be more intentional
about the full life cycle of state in our application,”
McCarthy added. “Just as blue/green deployments
force applications to address explicit HTTP
connection draining, frequently and automatically
exercising database failover procedures can verify
that an application can cope with live changes to
database connection pools, cache warming delays,
and even temporary use of read-only state.”
The traditional waterfall development and
deployment process proves to be the antithesis
of DevOps and severely limits the ability to
achieve any type of agility. One of the first steps
for bringing DevOps to databases consists of
introducing version or source control for database
changes, which then opens the door to DevOps
practices such as continuous integration.
Once source control is introduced as part of the
normal database development workflow, several
productivity gains should be realized, starting
with the synchronization of database structures
across the development, test and production
environments. Additionally, it ensures database
development teams communicate changes
with others, provides a version to roll back to if
required and helps maintain a solid audit trail. Other
advantages include faster change processing,
lower possibility that code commits to source
control will be forgotten and a reduction in task
switching, improving efficiency.
“In the DevOps ideal, data-related issues
can quickly be identified and sent to data-
knowledgeable team members who can quickly
triage the data flow,” Hayduk said.
Adopting source control allows application and
database development teams to work in parallel
and coordinate changes through the same
processes. That makes it easier for databases to
keep pace with updates in the application, helping
to release more stable, consistent builds. The
next step for achieving database DevOps comes
in the form of continuous integration and release
management.
“The goal of a DevOps team should be to enable
developers to describe, build and deploy faster
with minimal input from DevOps,” McCarthy said.
“DevOps teams should be there for guidance,
patterns, tooling, code reviews on Terraform, but let
them run their applications as they want to.”
Here, database teams, as part of the DevOps
culture, can align the development of database
scripts with application code in continuous
integration and release management. Most DevOps
developers already use continuous integration
to test their code automatically and release
management tools to automate application
deployment. Database developers will be able
to use the same tools to achieve continuous
delivery for databases. DevOps ideology promotes
continuous delivery, which encompasses software
development through to deployment. Instead of
thinking of software release as a separate activity,
continuous delivery means software is always
ready for release.
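As a rough illustration of what "always ready for release" can mean for a database, a CI job might rebuild a throwaway database from the versioned migration scripts on every commit and run a smoke check before anything ships. The directory layout and checks below are hypothetical.

```python
# Sketch of a database CI step: rebuild a throwaway database from the
# versioned migration scripts on every commit, then run a smoke check.
# The migrations/ layout and the expected table are hypothetical.
import sqlite3
from pathlib import Path

MIGRATIONS_DIR = Path("migrations")  # e.g. 0001_create_orders.sql, 0002_add_index.sql

def build_from_scratch() -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")
    for script in sorted(MIGRATIONS_DIR.glob("*.sql")):
        conn.executescript(script.read_text())  # apply in filename order
    return conn

def smoke_test(conn: sqlite3.Connection) -> None:
    tables = {r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")}
    assert "orders" in tables, "orders table missing after migrations"

if __name__ == "__main__":
    smoke_test(build_from_scratch())
    print("database build OK: migrations are release-ready")
```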
“It is very common for development teams to
want to switch to a more agile framework without
considering the impact of this change on other
groups. Many development teams try to spearhead
these changes in an ad-hoc fashion and hope that
other teams will eventually get on board,” Hayduk
said. “It is critical that all teams in the SDLC be
brought on board early to ensure a synergistic
workflow. This includes not just the development
and operations teams but the testing, analysis
and data science teams. Among these, testing
is often the most-neglected, but in the DevOps
scheme, it demands as much attention as any
other stakeholder. Culture is one of the hardest of
team features to change, but it is at the core of the
collaborative orientation that DevOps is based on.”
Different Approaches Explained
Bringing DevOps to databases is difficult.
Whether you’re doing microservices or source
code control, there are some challenges
you need to solve as a team: one part is
technology, and another is the process and
culture. What makes the whole thing much
more difficult than app code is the fact that
databases are stateful systems.
There are two philosophies for solving this
problem: one is the state-based or declarative
approach and the other is the migration-based
or imperative approach.
STATE-BASED OR DECLARATIVE APPROACH
With the declarative approach, you declare the desired end state of the infrastructure, and the tooling compares that ideal state with the target environment, then generates the scripts to be applied to bring the target in line. The problem is, these tools aren't always right.
“Here’s an example I like to use with the
declarative approach: You know your current
address, you know your current structure
and logic, you know your future address, the
future state you want to be at and it’s kind of
like using a GPS,” said Sanjay Challa, Director
of Product Marketing, Datical. “You have
zero predictability on how the tool will get
you there. Sometimes these tools are not
necessarily deterministic, in the sense
that dev environments are often a little bit
different from production environments.
So, we’re about to go to production with
code that was freshly generated, and
there’s a tremendous amount of risk.”
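Stripped to its essence, the state-based approach is a diff: compare the declared, desired schema with what the target environment actually contains and generate the statements needed to reconcile the two. The toy sketch below works only on column names and is meant to show why generated scripts deserve review, not to stand in for a real tool.

```python
# Toy sketch of the state-based (declarative) approach: diff a declared,
# desired schema against the target's actual schema and generate the DDL to
# reconcile them. Real tools handle types, indexes, constraints and data
# preservation, which is exactly where the risk creeps in.
DESIRED = {"customers": {"id", "name", "email", "loyalty_tier"}}

def current_schema():
    # Stand-in for introspecting the target environment.
    return {"customers": {"id", "name", "email"}}

def generate_change_script(desired, actual):
    statements = []
    for table, want in desired.items():
        have = actual.get(table, set())
        if not have:
            statements.append(f"CREATE TABLE {table} (...);")  # columns elided
            continue
        for col in sorted(want - have):
            statements.append(f"ALTER TABLE {table} ADD COLUMN {col};")
        for col in sorted(have - want):
            # Destructive steps are where generated scripts deserve close review.
            statements.append(f"ALTER TABLE {table} DROP COLUMN {col};")
    return statements

if __name__ == "__main__":
    print("\n".join(generate_change_script(DESIRED, current_schema())))
```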
MIGRATION-BASED OR IMPERATIVE APPROACH
It’s like getting directions to go to a new
market from your grandma’s new house.
You have no idea what the starting address
is and no idea what the ending address is,
but grandma knows how to get there and
tells you which turns to take in order to
arrive at your destination.
“You described very specifically every single
turn and you have a tremendous amount
of control, so you don’t have the lack of
predictability,” explained Challa. “You might
not know the starting address; you might
not know your ending address, but you
certainly know with absolute certainty the
path you’re going to take to get from point A
to point B. And a lot of DevOps teams prefer
this approach because you can build once,
deploy often.”
This approach gives you a high level of
control, but you also lose some of the
visibility.
“You may think, ‘We did 500 tiny little migrations, this table has evolved a lot.’ What’s the final state of the table? ‘Well, nobody knows,’” said Challa. “You have to go collect all the puzzle pieces of every little migration that was made to that table to answer what the final state is, and that starts to become a headache for the operators, DBAs, release folks, ops folks. And what does that table look like? ‘I don’t know, because we applied 50 different migrations to it.’”
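A minimal sketch of the migration-based approach, with hypothetical file names: every change is an explicit, ordered script, and a tracking table records which scripts each environment has already applied, so the exact same path from point A to point B is replayed everywhere.

```python
# Sketch of the migration-based (imperative) approach: ordered scripts plus a
# tracking table, so every environment replays exactly the same steps.
# SQLite and the migration file naming are stand-ins.
import sqlite3
from pathlib import Path

def applied_versions(conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (version TEXT PRIMARY KEY)")
    return {r[0] for r in conn.execute("SELECT version FROM schema_migrations")}

def migrate(conn, migrations_dir="migrations"):
    done = applied_versions(conn)
    for script in sorted(Path(migrations_dir).glob("*.sql")):  # 0001_..., 0002_...
        if script.stem in done:
            continue  # already applied in this environment
        conn.executescript(script.read_text())
        conn.execute(
            "INSERT INTO schema_migrations (version) VALUES (?)", (script.stem,))
        conn.commit()
        print(f"applied {script.name}")

if __name__ == "__main__":
    migrate(sqlite3.connect("app.db"))
```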
WHAT’S THE BEST APPROACH?
The declarative approach is great for ops teams
but not so good for developers or from an agile
development point of view. The imperative approach, on the other hand, is great for developers, who get a lot of control by prescribing specifically how to get from point A to point B, but it is less useful for ops teams, since they have no clear view of what the ending state is.
“The nirvana of database change is a blended
approach,” said Challa. “The best practice really is
to make the changes throughout your
pipeline with these atomic migrations.
Whether that means doing that
comparison upfront, do that comparison
only once to something like a dev
environment, decompose that
comparison (even if a tool generated
that comparison), break it down into all the
little steps so that you have the atomicity.”
You will also need visibility and transparency, and this is where tracking systems such as source code control can help. “Another best practice we recommend (and some of the tools already have it) is clearing all the objects from schema and reconstructing them, or having a baseline and resetting to the baseline, and building off of that,” said Challa. “This will allow you to rework these changes without forcing it to roll forward on every little mistake and get it right. This will help the team get the right set of changes in the right order and done the right way.”
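The baseline-and-reset practice Challa describes can be sketched as a small helper for a scratch environment: drop its objects, restore the agreed baseline, then replay only the migrations that came after it. Everything named below is hypothetical, and nothing like this should ever point at production.

```python
# Sketch of "reset to a baseline, then rebuild" for a scratch environment.
# Baseline and migration file names are hypothetical; never aim this at
# production.
import sqlite3
from pathlib import Path

def reset_to_baseline(conn, baseline="baseline/schema_v0042.sql"):
    # Drop user tables in the scratch database, then restore the baseline.
    tables = conn.execute(
        "SELECT name FROM sqlite_master "
        "WHERE type = 'table' AND name NOT LIKE 'sqlite_%'").fetchall()
    for (name,) in tables:
        conn.execute(f"DROP TABLE IF EXISTS {name}")
    conn.executescript(Path(baseline).read_text())

def replay_after_baseline(conn, migrations_dir="migrations", baseline_version="0042"):
    # Re-apply only the migrations that came after the baseline, in order.
    for script in sorted(Path(migrations_dir).glob("*.sql")):
        if script.stem.split("_")[0] > baseline_version:
            conn.executescript(script.read_text())

if __name__ == "__main__":
    conn = sqlite3.connect("scratch.db")
    reset_to_baseline(conn)
    replay_after_baseline(conn)
```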
Leveraging DevOps Tools
While many of the tools used for DevOps can be applied
across different team disciplines, databases have some
unique elements that make selecting the appropriate tools
critical for success.
“Toolsets are a major issue for implementing agile
frameworks,” Hayduk noted. “Many companies have
a hodgepodge group of tools which either poorly
communicate with one another or do not communicate
at all. It’s critical that all tools can communicate (API) with
one another and provide real time analytics on issues
as they occur. Older toolsets which lack this integration
can become a challenge for teams as they are likely to
become siloed. Replacement tools are often required (often
requiring new skillsets). Furthermore, many teams, prior to
implementing DevOps, do not have a full stable of tools to
automate as much as is possible.”
Tools geared toward instituting DevOps into database operations encompass automation, testing and access control capabilities that bring agility to the management of databases.
“As with all DevOps transformations, teams find it difficult
to alter existing paradigms,” Hayduk said. “There is a
constant concern that streamlining the change process
will either lead to missed deadlines or introduce more
defects. Additionally, by integrating existing data platforms
with other toolsets/groups, there is a fear that control will
be lost from groups. The ability to adapt to new cultural
memes across teams is key to a successful DevOps
transformation.”
http://www.devops.com
https://twitter.com/devopsdotcom
https://www.facebook.com/devopscom