This document presents patterns for successful data migration projects. Such projects face challenges such as poorly understood legacy data, data quality issues, limited resources and time constraints. The patterns presented are:
1. Develop with Production Data - Use real legacy data from the start to uncover corner cases and improve understanding of data semantics.
2. Migrate along Domain Partitions - Divide the migration into largely independent parts, such as customers and then orders, to make it manageable and allow early verification.
3. Measure Migration Quality - Define metrics that quantify migration quality and ensure they are calculated regularly so that data corruption does not go unnoticed.
4. Daily Quality Reports - Generate detailed reports about the measured quality after every test run and send them to business experts to involve them early.
Patterns for Data Migration Projects
Martin Wagner
martin.wagner@tngtech.com
http://www.tngtech.com
Tim Wellhausen
kontakt@tim-wellhausen.de
http://www.tim-wellhausen.de
March 17, 2011
Introduction
Data migration is a common operation in enterprise computing. Whenever a new system
is introduced and after each merger and acquisition, existing data has to be moved from
a legacy system to some hopefully more suitable target system.
A successful data migration project needs to manage several key challenges that appear
in most such projects:
First, you don’t know what exactly is in the legacy system. The older the legacy system is,
the less likely it is that its original developers are still members of the development team
or that they are at least available to support the data migration project. Additionally,
even if the legacy system has been well documented in the beginning, you often cannot
rely on the correctness and completeness of the documentation after years of change.
Second, data quality might be an issue. The quality constraints on the data in the old
system may be lower than the constraints in the target system. Inconsistent or missing
data entries that the legacy system somehow copes with (or ignores) might cause severe
problems in the target system. In addition, the data migration itself might corrupt the
data in a way that is not visible to the software developers but only to business users.
Third, it is difficult to get decisions from the business side. Business experts typically
already struggle to get their daily work done while, at the same time, working on the
requirements for the new system. During the development of the data migration code,
many decisions need to be made that may affect the quality and consistency of the
migrated data.
Fourth, development time is restricted. The development of the data migration project
cannot start before the target system’s domain model is well-defined and must be finished
before the go-live of the new system. As the main focus typically is on the development
of the target system, the data migration project often does not get as many resources
as needed.
Fifth, run time is restricted. Large collections of legacy data may take a long time to
migrate. During the development of the migration code, a long round trip cycle makes
it difficult to test the data migration code regularly. At go-live it may not be feasible
to stop the legacy system long enough to ensure that the legacy data to migrate is both
constant and consistent during the complete migration process.
This paper presents several patterns that deal with these issues:
• Develop with Production Data: Use real data from the production system
for tests during the development of the migration code.
• Migrate along Domain Partitions: Divide and conquer the migration effort
by migrating largely independent parts of the domain model one after another.
• Measure Migration Quality: Implement code that collects and stores all
sorts of information about the outcome of the migration during every run.
• Daily Quality Reports: Generate detailed reports about the measured quality
of the migrated data and make them available to all affected stakeholders.
In parallel to this paper, Andreas Rüping wrote a collection of patterns with the title
Transform! [5]. His patterns cope with the same forces as described here. Among these
patterns are:
• Robust Processing: To prevent the migration process from halting on unexpected failures, apply extensive exception handling to cope with all kinds of problematic
input data.
• Data Cleansing: To prevent the new application from being swamped with
useless data right from the start, enhance your transformation processes with data
cleansing mechanisms.
• Incremental Transformation: Perform an initial data migration a while before the new application goes live. Migrate data that has changed since then
immediately before the new application is launched.
The intended audience for this paper is project leads, both on the technical and the
business side, and software developers in general who need to get a data migration
project done successfully.
Test infrastructure
A data migration effort needs a supporting infrastructure. How you set up such an
infrastructure and what alternatives you have is out of the scope of this paper. Rather,
the paper assumes the existence of an infrastructure as outlined in figure 1:
Figure 1: Data flow between productive and test systems during data migrations
The final data migration takes place from the legacy system into the new system in the
production environment. Once this migration has been successfully done, the project
has succeeded. To perform test runs of the data migration at development time, test
environments for both the legacy and the new system exist and are available to test the
migration code. To set up the tests, legacy data from the production system needs to be
copied into the corresponding test system. After a migration test run, the new system
can be checked for the outcome of the test.
Running example
Throughout the paper, we will refer to an exemplary migration project to which the
patterns are applied. The task at hand is to migrate data about customers and tickets
from a legacy system to a new system with a similar yet somewhat different domain
model.
The legacy system contains just one large database with all information about the customers and the tickets that they bought. For repeated ticket sales to existing customers,
new entries were created. The new system splits both customer and ticket data into
several tables. Customers are now explicit entities and may have several addresses. For
tickets, current and historical information are now kept separately.
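To make the running example more concrete, the following sketch contrasts the two domain models as plain data classes. All names are illustrative assumptions; the paper does not prescribe a particular schema.

    from dataclasses import dataclass

    # Legacy system: one wide record that mixes customer and ticket data.
    @dataclass
    class LegacyEntry:
        id: str
        customer_name: str
        customer_address: str  # a single free-text address field
        ticket_number: str
        ticket_status: str     # current and historical states mixed together

    # New system: explicit entities; current and historical data kept apart.
    @dataclass
    class Customer:
        id: int
        name: str

    @dataclass
    class Address:
        customer_id: int
        street: str
        city: str

    @dataclass
    class Ticket:
        id: int
        customer_id: int       # enforced foreign key in the target system
        status: str

    @dataclass
    class TicketHistoryEntry:
        ticket_id: int
        old_status: str
        changed_at: str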
Develop with Production Data
Context
You need to migrate data from a legacy system to a new system whose capabilities and
domain model differ significantly from the legacy system. The exact semantics of the
legacy data are not documented.
Problem
Once the data migration has started on the production environment, the migration code
should run mostly flawlessly to avoid migrating inconsistent data into the target system.
How do you ensure that the migration code covers all cases of the legacy
data?
Forces
The following aspects make the problem difficult:
• The legacy system’s exact behavior is lost in history. Its developers may not be
available anymore or do not remember the exact requirements determining the
legacy system’s behavior.
• All existing states in the data must be revealed. During the development of the migration code, many decisions must be made about how to handle special cases. Missing knowledge about some cases must not go unnoticed for a long time. At least you should know which cases exist that you may safely ignore.
• Wrong assumptions. Unit tests are a good means to verify assumptions about how
the migration code should work. Such tests cannot guarantee, however, that these
assumptions are correct.
• Insufficient test data. Testing the migration code with hand-crafted test data does
not guarantee that all corner cases are covered because some corner cases might
just not be known.
Solution
From the beginning on, retrieve snapshots of the legacy data from the production system and develop and test the migration code using real data.
Try to get a complete snapshot of the legacy data from the production system, or, if that
is not feasible because of the data size, try to get a significantly large amount of legacy
data to cover most corner cases. If the legacy data changes during the development,
regularly update your snapshot and always develop against the most current available
data. All developers should use the same data set to avoid any inconsistencies.
Whenever your code makes wrong assumptions, the migration is likely to fail. Even
though you still have no documentation at hand for the exact semantics of the legacy
system, you are informed early on about gaps in your knowledge and therefore in the
migration code.
If the amount of legacy data is too huge to work with on a daily basis, try to define a
working set that most likely covers all cases. You could reduce the amount of data, for
example, by copying only those data sets that have been used (read/modified) in the
production system lately.
Production data often contains sensitive information that must not be available to everyone because of legal reasons, business reasons or privacy protection. Examples are credit
card information from customers or data that has a high business value such as financial
trade data. In such cases, production data needs to be modified (e.g. anonymized)
before the development team is allowed to use it.
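One way to implement such a modification is to anonymize sensitive columns while copying the snapshot into the test database. The following sketch assumes a hypothetical legacy_entries table with name and credit card columns; the actual anonymization rules must be agreed upon with the responsible legal or compliance experts.

    import hashlib
    import sqlite3

    def pseudonymize(value: str) -> str:
        # Map equal inputs to equal pseudonyms so that joins keep working.
        return hashlib.sha256(value.encode("utf-8")).hexdigest()[:12]

    def copy_snapshot_anonymized(src: sqlite3.Connection,
                                 dst: sqlite3.Connection) -> None:
        dst.execute("CREATE TABLE IF NOT EXISTS legacy_entries "
                    "(id TEXT, name TEXT, credit_card TEXT, ticket_ref TEXT)")
        rows = src.execute(
            "SELECT id, name, credit_card, ticket_ref FROM legacy_entries")
        for id_, name, card, ticket_ref in rows:
            # Names are replaced by stable pseudonyms, card numbers are masked.
            dst.execute("INSERT INTO legacy_entries VALUES (?, ?, ?, ?)",
                        (id_, pseudonymize(name), "****", ticket_ref))
        dst.commit()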
If you test your migration code on production data, you get a good feeling for how long
the migration might take at go-live.
Consequences
Applying this pattern has the following advantages:
• Detecting corner cases. When production data is fed into the migration code from
the beginning on, it is more likely that all boundary conditions are detected during
test runs.
• Increasing the understanding of data semantics. Developers improve their understanding of the implicit meaning and the quality of the existing legacy data. If
they are forced to run their code with real data, they cannot decide to silently
ignore corner cases.
You need also to consider the following liabilities:
• Slow progress in the beginning. If there are many corner cases to handle, the
development efforts may slow down significantly in the beginning because you
cannot as easily concentrate on the most common cases at first.
• More effort. Taking snapshots of legacy data might involve extra effort for extracting it from the legacy system and loading it into a test system. Setting up
such a test system might be a time-consuming task on its own.
• Too much data. The complete production data may be too voluminous to be used
in test runs. In that case, you need to find out whether you can restrict testing to
a subset of the production data. If you Migrate along Domain Partitions,
you may be able to restrict test runs to subsets of the full legacy data.
Example
In our example CRM system, we set up jobs triggered by a continuous integration system.
The jobs perform the following tasks at the push of a button or every night (a sketch of such a job follows the list):
• Dump the legacy database to a file storage and restore the database dump to a
test legacy system.
• Set up a clean test installation of the new system with an empty database only
filled with master data.
• Run the current version of the migration code on the test systems.
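Such a job could be scripted roughly as follows. The sketch assumes PostgreSQL databases and a run_migration.py entry point; the connection strings and file names are placeholders for whatever your infrastructure provides.

    import subprocess

    LEGACY_PROD = "postgresql://readonly@legacy-prod/crm"  # placeholder URLs
    LEGACY_TEST = "postgresql://ci@legacy-test/crm"
    NEW_TEST = "postgresql://ci@new-test/crm"

    def refresh_legacy_test_system() -> None:
        # Dump the production legacy database and restore it into the test system.
        subprocess.run(["pg_dump", "--format=custom", "--file=legacy.dump",
                        LEGACY_PROD], check=True)
        subprocess.run(["pg_restore", "--clean", "--dbname=" + LEGACY_TEST,
                        "legacy.dump"], check=True)

    def reset_new_test_system() -> None:
        # Recreate the target schema and load only master data.
        subprocess.run(["psql", "--file=schema.sql", NEW_TEST], check=True)
        subprocess.run(["psql", "--file=master_data.sql", NEW_TEST], check=True)

    if __name__ == "__main__":
        refresh_legacy_test_system()
        reset_new_test_system()
        # run_migration.py stands in for the project's actual migration code.
        subprocess.run(["python", "run_migration.py", "--source", LEGACY_TEST,
                        "--target", NEW_TEST], check=True)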
Migrate along Domain Partitions
Context
You want to migrate data from a legacy system to a new system. The legacy system’s
capabilities cannot be matched one-to-one onto the new system but the domain of the
target system resembles the legacy system’s domain.
Problem
The migration of a data entry of the legacy system may affect multiple domains of the
target system. Moreover, even multiple business units may be affected. The complexity
of migration projects can be overwhelming.
How can you reduce the complexity of a migration effort?
Forces
The following aspects make the problem difficult:
• The legacy system’s domain model may be ill-defined. This may be the result of
inadequate design, or of uncontrolled growth within the system’s domain model.
It is unclear how data found in the legacy system is structured and interrelated.
• Different users have contradicting views on the legacy system. Every business unit
employing the legacy system may have a different conceptual model of what the
legacy system does. It becomes very hard to analyze the legacy system in a way
that provides a consistent view on its domain model.
• The technical view on the legacy system is often counter-intuitive. More often than not, the development team only has a view of database tables or similar artifacts of the legacy system. The semantics that business users associate with special states in the raw data are not visible. However, a successful migration relies on eliciting
the intricacies of the legacy system.
Solution
Migrate the data along sub-domains defined by the target system.
Introducing a new system allows for defining a consistent domain model that is used
across business units [2]. This model can then be partitioned along meaningful boundaries, e.g. along parts used primarily by different business units.
The actual migration then takes place sub-domain by sub-domain. For example, in an
order management system there may be individual customers, each having a number of
orders, and with each order there may be tickets associated with it. It thus makes sense
to first migrate the customers, then the orders, and finally the tickets.
For each sub-domain, all necessary data has to be assembled in the legacy system,
regardless of the legacy system’s domain model. For example, the legacy system could
have been modeled such that there was no explicit customer; instead, each order contains customer information. Then in the first migration step, only the customer information
should be taken from the legacy system’s order domain. In the second migration step,
the remaining legacy order domain data should be migrated.
Consequences
To Migrate along Domain Partitions offers the following advantages:
• The data migration becomes manageable. Due to much smaller building blocks it
becomes easier to spot problems. Furthermore, individual migration steps can be
handled by more people simultaneously.
• Each step of the data migration is testable in isolation. This reduces the overall
test effort and may increase the quality of tests if additional integration tests are
provided.
• Verifying the migration results can start earlier. As soon as a single sub-domain
migration is ready, business units affected by this sub-domain can verify the migration results. It is no longer necessary to wait until all migration code is ready.
To Migrate along Domain Partitions also has the following liabilities and shortcomings:
• The migration run takes longer. It is very likely that data from the legacy system
has to be read and processed multiple times.
• Errors in migrations propagate. If there is some problem with the migration of a
sub-domain performed in the beginning, many problems in dependent sub-domain
migrations will likely result. Thus, the earlier the migration of a sub-domain
occurs, the higher its quality must be. This necessitates an iterative approach to
implementation and testing.
• Redundant information might be introduced. If data of the legacy system is read
multiple times, there is the possibility of introducing the same information into
the target system multiple times. This may be necessary if distinct artifacts have
had the same designator in the legacy system, but may also be highly problematic.
Care has to be taken to avoid any unwanted redundancies in the target system.
Example
In our example CRM system, the following data partitions might be possible:
• Customer base data
• Customer addresses
• Ticket data
• Ticket history
The data migration process first processes all entries to generate the base customer
entities in the new system. Then, it runs once more over all entries of the legacy system
to create address entries as necessary. In the next step, all current information about
sold tickets is migrated with references to the already migrated customer entries. As a last step, historical ticket information is retrieved from the legacy table and written into
the new history table.
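One way to express this ordering in code is an explicit, ordered list of migration steps that is executed sequentially, so that each sub-domain can rely on its predecessors having been migrated. The step functions below are placeholders for the project's actual migration code.

    from typing import Callable, Iterable, List, Tuple

    LegacyEntry = dict  # stand-in for one row read from the legacy database

    def migrate_customer_base(entries: Iterable[LegacyEntry]) -> None:
        pass  # create explicit customer entities in the new system

    def migrate_customer_addresses(entries: Iterable[LegacyEntry]) -> None:
        pass  # create address entries referencing the migrated customers

    def migrate_tickets(entries: Iterable[LegacyEntry]) -> None:
        pass  # migrate current ticket data with references to customers

    def migrate_ticket_history(entries: Iterable[LegacyEntry]) -> None:
        pass  # write historical ticket information into the history table

    # The order encodes the dependencies between sub-domains.
    MIGRATION_STEPS: List[Tuple[str, Callable[[Iterable[LegacyEntry]], None]]] = [
        ("customer base data", migrate_customer_base),
        ("customer addresses", migrate_customer_addresses),
        ("ticket data", migrate_tickets),
        ("ticket history", migrate_ticket_history),
    ]

    def run_all(load_entries: Callable[[], Iterable[LegacyEntry]]) -> None:
        # The legacy entries are read once per step: the price of partitioning
        # is that the legacy data is processed multiple times.
        for name, step in MIGRATION_STEPS:
            print(f"Migrating {name} ...")
            step(load_entries())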
Measure Migration Quality
Context
You need to migrate data from a legacy system to a new system. The legacy system’s
capabilities cannot be matched one-to-one onto the new system. A staging system with
production data is available so that you can Develop with Production Data at
any time. The migration code is regularly tested against the production data.
Problem
Resources are limited. Thus, there is often a trade-off between migration quality and
costs, leading to potential risks when deploying the migration code.
How do you prevent the migration code from transforming data wrongly
unnoticed?
Forces
The following aspects make the problem difficult:
• Some changes cannot be rolled back. Although you may have data backups of the
target system available, it may not be easy or possible at all to roll back all changes
to the target system. You therefore need to know the risk of migrating corrupted
data.
• Down-times need to be avoided. Even if it is possible to roll back all changes, a
migration run often requires all users of both the source and the target system to
stop using them. Such down-times may be very costly and therefore need to be
avoided.
• Know when to stop improving the quality. The 80-20 rule applies to many migration efforts. Business may not want to achieve 100% quality because it might be
cheaper to correct some broken data entries manually or even leave some data in
an inconsistent state.
• Small technical changes in the migration code often lead to significant changes in
the migration quality. Yet, the people designing and implementing the migration
code are usually not able to fully assess the implications of their changes.
Solution
In collaboration with business, define metrics that measure the quality of the
migrated data and make sure that these metrics are calculated regularly.
Integrate well-designed logging facilities into the migration code. Make sure that the
result of each migration step is easily accessible in the target system, e.g. as a special database table. For example, there may be an entry in the database table for each
migrated data set, indicating if it was migrated without any problems, or with warnings
or if some error occurred that prevented the migration.
Add validation tasks to your migration code suite, providing quantitative and qualitative
sanity checks of the migration run. For example, check that the number of data sets in
the legacy system matches the number of data sets in the new system.
Make sure that at least the most significant aggregated results of each migration test
run are archived to allow for comparing the effects of changes in the migration code.
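A minimal quantitative check and archiving step might look like the following sketch. The table names (legacy_entries, migration_log, run_results) are assumptions carried over from the running example, not prescribed by the pattern.

    import datetime
    import sqlite3

    def validate_and_archive(legacy: sqlite3.Connection,
                             target: sqlite3.Connection,
                             archive: sqlite3.Connection) -> bool:
        # Sanity check: every legacy data set must show up in the migration log.
        legacy_count = legacy.execute(
            "SELECT COUNT(*) FROM legacy_entries").fetchone()[0]
        per_result = target.execute(
            "SELECT result, COUNT(*) FROM migration_log GROUP BY result").fetchall()
        total_logged = sum(count for _, count in per_result)

        # Archive the aggregated outcome of this run so that the effect of
        # code changes on migration quality can be compared across runs.
        archive.execute("CREATE TABLE IF NOT EXISTS run_results "
                        "(run_at TEXT, legacy_count INTEGER, "
                        "logged INTEGER, details TEXT)")
        archive.execute("INSERT INTO run_results VALUES (?, ?, ?, ?)",
                        (datetime.datetime.now().isoformat(), legacy_count,
                         total_logged, repr(per_result)))
        archive.commit()
        return legacy_count == total_logged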
Consequences
To Measure Migration Quality offers the following advantages:
• Results of migration become visible. By checking the logged information, you can
quickly see what the migration actually did.
• Quick feedback cycles. The effects of each change in the migration code upon the
migration quality metrics are visible immediately after the next migration test run.
This allows the development team to quickly verify the results of their decisions.
• Implicit assumptions on structures in the legacy system are revealed. Whenever a
developer makes implicit assumptions on some properties of the legacy system that
are not correct, it is very likely that the migration will produce errors or warnings
for at least some data sets.
• Potential for legacy data clean-up is revealed. Problems that are not coming from
the migration code itself, but from data quality issues in the legacy system, will
be revealed before the actual migration in the production system. This provides
the unique opportunity to clean up the data quality in a holistic fashion.
To Measure Migration Quality also has the following liabilities and shortcomings:
• Additional up-front effort necessary. Setting up test suites to measure the quality
of the outcome requires additional effort from the beginning on.
• Challenge to find suitable metrics. You have to be careful to create meaningful
metrics. If there is a warning for every single data set, it is very likely that people
will ignore all warnings.
• Too much transparency. Not everyone may appreciate data quality problems detected by thorough quality analysis in the legacy system. Pointing people to required efforts might cause pressure against the overall migration project, along the lines of “if we have to do this much data cleaning to migrate, it is not worth the
effort”.
Example
In our example legacy CRM system, there are no database foreign key relationships
between a ticket and the customer it refers to. Instead, user agents were required
to fill in the customer’s ID in a String field of the ticket table. In the target system,
referential integrity is enforced.
The results of the migration are stored in a logging table, having the following columns:
• Source value designates the primary key of an entry in the legacy system.
• Target value designates the corresponding primary key of an entry in the target
system.
• Result is either OK or ERROR.
• Message gives additional explanations about the migration described by the log
entry.
As part of writing the migration code, we check if the target system’s database throws an error about a missing foreign key relationship whenever we try to add a migrated ticket entity to it. If it does, we write an ERROR entry into the migration log table, along with a Message such as “Ticket entity misses foreign key to customer. Customer value in legacy system is XYZ.”; otherwise we write an OK entry.
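A sketch of this logging step, using the four columns described above, might look as follows. The exception type and the insert_ticket call are stand-ins for the target system's actual API.

    import sqlite3
    from typing import Optional

    class ForeignKeyViolation(Exception):
        """Stand-in for the database error raised on a missing foreign key."""

    def log_migration(log_db: sqlite3.Connection, source_value: str,
                      target_value: Optional[str], message: str,
                      result: str) -> None:
        log_db.execute("CREATE TABLE IF NOT EXISTS migration_log "
                       "(source_value TEXT, target_value TEXT, "
                       "result TEXT, message TEXT)")
        log_db.execute("INSERT INTO migration_log VALUES (?, ?, ?, ?)",
                       (source_value, target_value, result, message))

    def migrate_ticket(target, log_db, legacy_ticket) -> None:
        try:
            new_id = target.insert_ticket(legacy_ticket)  # hypothetical API
            log_migration(log_db, legacy_ticket["id"], new_id, "", "OK")
        except ForeignKeyViolation:
            log_migration(log_db, legacy_ticket["id"], None,
                          "Ticket entity misses foreign key to customer. "
                          "Customer value in legacy system is "
                          + legacy_ticket["customer_ref"] + ".", "ERROR")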
Daily Quality Reports
Context
You need to migrate data from a legacy system to a new system. You Measure Migration Quality to get results on your current state of the migration after every test
run and are therefore able to control the risk of the data migration. Migration tests are
run regularly, e.g. every night in an effort of Continuous Integration [1]. You want
to involve business experts in testing the outcome of the data migration early on.
Problem
IT cannot fathom the quality attributes of the domain. Therefore, the decisions defining
the trade-offs are made by business units and not the IT department implementing the
migration. However, it is not always easy to get feedback on the effect of some code
change on the migration quality.
How can you closely involve business into the data migration effort?
Forces
The following aspects make the problem difficult:
• People are lazy. If any effort is involved in checking the current state of the migration quality metrics, it is likely that business won’t notice any significant changes.
• Business has little time to spare. You cannot expect business to analyze and
understand measured data about the migration quality in detail.
Solution
After every migration test run, generate a detailed report about the state of
the migration quality and send it to dedicated business experts.
To make the reports easily accessible to business, provide aggregated views of the migration results to allow for trend analysis. For example, provide key statistics indicating
the number of successfully migrated data sets, the number of data sets migrated with
warnings, the number of data sets that could not be migrated due to problems of data
quality or the migration code and the number of those not migrated due to technical
errors.
It is important to start sending daily quality reports early enough to be able to quickly
incorporate any feedback you get. It is also important not to start these reports too
early to avoid spamming the receivers with mails that they tend to ignore if the reports
are not meaningful to them. It is also crucial to find business experts who are willing and
interested in checking preliminary migration results.
Make sure that the migrated data is written into a database that is part of a test
environment of the new system. Business experts who receive quality reports may then easily check the migrated data in the new system themselves.
Consequences
Daily Quality Reports offer the following advantages:
• The state of the migration code becomes highly transparent. If the results of each
migration test run are made accessible to business units, they can closely track the
progress of the development team charged with the migration.
• Easy to obtain feedback. By triggering an automatic migration run on the test
environment, it becomes comparatively easy to receive immediate feedback on any
changes to the migration code.
• Risk is delegated to business. By receiving feedback early and often, business
experts are able to evaluate the risk of the data migration effort. If necessary,
managers can be involved to decide, for example, whether some quality issues should be ignored or resolved even if that influences the project plan.
Daily Quality Reports also have the following liabilities and shortcomings:
• Efforts necessary for generating reports. Additional up-front effort is required to
set up the reporting structures. To ensure comparability between different test
runs, the reporting structures have to be defined before the actual migration code
gets done.
• Political issues. The high degree of transparency might lead to political issues
in environments that are not used to it. If corporate culture dictates projects to
report no problems, it might be a bad idea to let people from outside the migration
development team peek into the current, potentially problematic state.
• Possible delays because of data volume. If there is much data to be migrated, the
execution time of a full migration test run may be very long. This prevents tight
feedback loops.
• Regular updates of legacy data may be needed. If the data in the legacy production
system changes regularly (for example because business users clean up the existing legacy data), you may need to also automate or at least simplify transferring production data into the migration test system so that business is able to check
the effect of their data cleaning on the new system.
Example
To provide daily quality reports in our example system, we add some code at the end of
a migration test run that aggregates the number of OK and ERROR entries and reports
the numbers such that trend reports can be built using Excel and provided to business
people for reporting purposes. If necessary, the generation of high-level trend reports
can be implemented as part of the migration code, and a continuous integration server’s
mail capabilities can be used to send the reports to business.
Also, lists are generated that show all source values leading to ERROR entries. These
lists are given to user agents to manually correct them and to the IT development team to
come up with algorithms that cleverly match lexicographically similar references. After
each subsequent test run, the procedure is repeated until a sufficient level of quality is
reached.
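The aggregation at the end of a test run can be as simple as the following sketch; handing the text over to the continuous integration server's mail facility is omitted, and the table name is the one assumed in the earlier examples.

    import sqlite3

    def build_daily_report(log_db: sqlite3.Connection) -> str:
        # Aggregate the migration log into the key statistics for the report.
        counts = dict(log_db.execute(
            "SELECT result, COUNT(*) FROM migration_log GROUP BY result"))
        failed = [row[0] for row in log_db.execute(
            "SELECT source_value FROM migration_log WHERE result = 'ERROR'")]
        lines = [
            "Migration test run report",
            "Migrated OK:    %d" % counts.get("OK", 0),
            "Migrated ERROR: %d" % counts.get("ERROR", 0),
            "",
            "Source values with errors (for manual correction):",
        ]
        lines.extend(failed)
        return "\n".join(lines)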
Related Work
Despite the importance of data migration projects, there is currently surprisingly little literature on this topic.
The pattern collection Transform! by Andreas Rüping [5] contains several patterns for
data migration projects that nicely complement the pattern collection presented here.
The technical details of data migration efforts have also been described in pattern form
by Microsoft [6]. Also, Haller gives a thorough overview of the variants in executing
data migrations [4] and the project management tasks associated with it [3].
Acknowledgments
The authors are very thankful to Hugo Sereno Ferreira who provided valuable feedback
during the shepherding process.
The workshop participants at EuroPLoP 2010 also provided lots of ideas and suggestions
for improvements, most of which are incorporated into the current version.
Special thanks also go to Andreas Rüping for his close cooperation after we discovered that we were working on the same topic at the same time.
References
[1] Paul M. Duvall, Steve Matyas, and Andrew Glover. Continuous Integration: Improving Software Quality and Reducing Risk. Addison-Wesley Longman, 2007.
[2] Eric J. Evans. Domain-Driven Design: Tackling Complexity in the Heart of Software.
Addison-Wesley Longman, 2003.
[3] Klaus Haller. Data migration project management and standard software experiences in Avaloq implementation projects. In Proceedings of the DW2008 Conference, 2008.
[4] Klaus Haller. Towards the industrialization of data migration: Concepts and patterns
for standard software implementation projects. In Proceedings of CAiSE 2009, LNCS
5565, pages 63–78, 2009.
[5] Andreas Rüping. Transform! - Patterns for Data Migration. In EuroPLoP 2010 -
Proceedings of the 15th European Conference on Pattern Languages of Programming,
2010.
[6] Philip Teale, Christopher Etz, Michael Kiel, and Carsten Zeitz. Data patterns.
Technical report, Microsoft Corporation, 2003.