Data Pipelines - The Golden Circles: a deck on the why, how, and what of building effective data pipelines, covering both the business value and the technology considerations.
Tracking and predicting scheduled vehicle journeys presents several challenges. Efficient tracking means minimizing cost while keeping the tracked state in correspondence with the actual vehicle state within accuracy guarantees, which in turn depends on effective prediction algorithms. Accurate prediction is difficult because the external factors that influence vehicle movements are hard to foresee. Statistical analysis of sparse, incomplete historical tracking data to cluster similar journey patterns and match sub-journeys adds further complexity.
Build vs buy : data pipeline - approach (spry_pradeep)
Data pipelines are the underlying infrastructure built for the data flowing through your software systems, so it is inevitable that teams will build them. Here we review which segments can be offloaded to data pipeline providers and which pieces should be built in house, and how to take a fluid approach that continuously evaluates both build and buy against parameters like speed, cost, and analytics needs.
The document describes a data brewery meetup in Dallas aimed at increasing business data awareness and literacy. It gives an overview of the data pipeline process, including data sources, extraction, transformation, modeling, and decisioning. The agenda covers aligning understanding, sharing stories and use cases, discussion, and networking; the overall goals are to get feedback and answers, make connections, and share knowledge.
Data Transformed provides data analytics services to help clients transform their data and gain insights. They offer budget planning, data preparation, and visualization services. Their data preparation services include data quality, master data management, and textual ETL. Their visualization services include dashboards, reports, and analytics. They aim to help clients make data-driven decisions and get faster, more streamlined results.
(ENT305) Develop an Enterprise-wide Cloud Adoption Strategy | AWS re:Invent 2014 (Amazon Web Services)
Taking a "cloud first" approach requires a different approach than you probably had to consider for your initial few workloads in the cloud. You'll be diving into the deep end of hybrid environments, and that means taking a broad view of your IT strategy, architecture, and organizational design.
Through our experience in helping enterprises navigate this change, AWS has developed the Cloud Adoption Framework (CAF) to assist with planning, creating, managing, and supporting the shift. In this session, we cover how the CAF offers practical guidance and comprehensive guidelines to enterprise organizations, particularly around roles, governance, and efficiency.
How to Crawl Your Way into Ranking with Indexable Content (Anton Shulke)
To get content indexed and ranked, search engines prioritize crawling important and frequently changing pages based on links and queries. Crawling is constrained by computational resources, so search engines use techniques like importance scoring, change detection, and URL patterns to efficiently discover new and updated content while being polite to websites. Maintaining consistent internal linking, sitemaps, and canonicalization helps search engines confidently discover and represent website content.
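Conceptually, the crawl scheduler behaves like a priority queue over the frontier. Here is a minimal sketch in Python; it is not any search engine's actual code, and the weighting of inlink count against observed change rate is an invented assumption:

```python
# Illustrative crawl frontier: URLs are popped in order of an importance
# score combining inlink count and how often the page has changed before.
import heapq

class CrawlFrontier:
    def __init__(self):
        self._heap = []      # (negated score, url) so heapq acts as a max-queue
        self._seen = set()

    def add(self, url: str, inlinks: int, change_rate: float) -> None:
        if url in self._seen:
            return
        self._seen.add(url)
        score = inlinks + 10.0 * change_rate   # assumed weighting, for illustration
        heapq.heappush(self._heap, (-score, url))

    def next_url(self) -> str | None:
        return heapq.heappop(self._heap)[1] if self._heap else None

frontier = CrawlFrontier()
frontier.add("https://example.com/news", inlinks=120, change_rate=0.9)
frontier.add("https://example.com/about", inlinks=15, change_rate=0.01)
print(frontier.next_url())  # the well-linked, frequently changing page comes first
```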
Data Management for High Performance Analytics (Mary Snyder)
High-performance analytics is only as good as the data management supporting it.
In fact, high-performance data management plays a key role when it comes to in-database, in-memory and in-stream analytics.
In this webinar Dan Socenau from SAS explores:
•The data management building blocks needed to succeed with high-performance analytics.
•Assessing, planning and executing these bedrock data management capabilities.
•How to deploy a modern data analysis practice.
View the on-demand webinar: http://www.sas.com/en_us/webinars/data-management-high-performance-analytics.html
The fastest way to convert etl analytics and data warehouse to AWS- Impetus W... (Impetus Technologies)
This presentation discusses Impetus Technologies' services for migrating enterprise data warehouses, analytics, and ETL workloads to AWS. It provides an overview of Impetus' cloud migration strategy and methodology, including automating workload transformation through assessment, code conversion, data migration, validation, and execution on AWS. A demo is shown of Impetus' automated tools that can reduce migration efforts by 50% by intelligently transforming queries, data, procedures, scheduling, and more to cloud-native AWS analytics services like Redshift, EMR, and Snowflake.
Data Con LA 2020
Description
The data warehouse has been an analytics workhorse for decades. Unprecedented volumes of data, new types of data, and the need for advanced analyses like machine learning brought on the age of the data lake. But Hadoop by itself doesn't really live up to the hype.

Now, many companies have a data lake, a data warehouse, or a mishmash of both, possibly combined with a mandate to go to the cloud. The end result can be a sprawling mess, a lot of duplicated effort, a lot of missed opportunities, a lot of projects that never made it into production, and a lot of financial investment without return. Technical and spiritual unification of the two opposed camps can make a powerful impact on the effectiveness of analytics for the business overall.

Over time, different organizations with massive IoT workloads have found practical ways to bridge the artificial gap between these two data management strategies. Look under the hood at how companies have gotten IoT ML projects working, and how their data architectures have changed over time. Learn about new architectures that successfully supply the needs of both business analysts and data scientists. Get a peek at the future. In this area, no one likes surprises.
*Look at successful data architectures from companies like Philips, Anritsu, and Uber
*Learn to eliminate duplication of effort between data science and BI data engineering teams
*Avoid some of the traps that have caused so many big data analytics implementations to fail
*Get AI and ML projects into production where they have real impact, without bogging down essential BI
*Study analytics architectures that work, why and how they work, and where they're going from here
Speaker
Paige Roberts, Open Source Relations Manager, Vertica
Multi-tenancy in data lakes is on the rise. Looked at through the lens of data governance, a lot is changing in the landscape, and the way we have been operating with respect to the governance model probably needs a rethink. It is time to treat governance and its various entities as first-class citizens in data architecture and bake them into the platform. We will look at the various aspects of governance, extend them to accommodate growing compliance and regulatory requirements, and suggest architectural approaches to realize them.
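As a loose sketch of what baking governance into the platform can look like, consider policy-as-code checks evaluated before a tenant's query runs; the Policy fields, tenant and dataset names, and the authorize helper are all hypothetical:

```python
# Hypothetical policy-as-code check for a multi-tenant data lake: each tenant's
# column access is verified against a declarative policy before a query runs.
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    tenant: str
    dataset: str
    allowed_columns: frozenset
    pii_masked: bool

POLICIES = {
    ("acme", "orders"): Policy("acme", "orders", frozenset({"order_id", "amount"}), True),
}

def authorize(tenant: str, dataset: str, columns: set) -> bool:
    """Allow the query only if every requested column is permitted for this tenant."""
    policy = POLICIES.get((tenant, dataset))
    return policy is not None and columns <= policy.allowed_columns

print(authorize("acme", "orders", {"order_id"}))        # True
print(authorize("acme", "orders", {"customer_email"}))  # False: not in the policy
```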
The document summarizes Microsoft's SQL Server 2005 Analysis Services (SSAS). It provides an overview of SSAS capabilities such as data mining algorithms, unified dimensional modeling, scalability features, and integrated manageability with SQL Server. It also describes demos of the OLAP and data mining capabilities and how SSAS can be deployed and managed for scalability, availability, and serviceability.
All about Big Data components and the best tools to ingest, process, store and visualize the data.
This is a keynote from the series "by Developer for Developers" powered by eSolutionsGrup.
( Big ) Data Management - Data Quality - Global concepts in 5 slides (Nicolas Sarramagna)
This document discusses data quality and management. It covers:
- The importance of data quality metrics like accuracy, completeness, conformity, and timeliness for verifying data.
- Why data quality is needed, as poor quality data can cause issues in applications and business processes.
- Approaches to improving data quality, including identifying quality issues, acting on the data directly through normalization and rules, or acting on underlying processes.
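A minimal sketch of the act-on-the-data-directly approach, expressing a few of the metrics above as executable rules. The field names, rule set, and completeness measure are illustrative assumptions, not anything from the document:

```python
# Illustrative per-field quality rules: conformity (email format, country code
# list), plausibility (age range), plus a simple completeness ratio per record.
import re

RULES = {
    "email":   lambda v: bool(v) and bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v)),
    "country": lambda v: v in {"US", "DE", "FR", "IN"},          # conformity to a code list
    "age":     lambda v: isinstance(v, int) and 0 <= v <= 120,   # plausibility check
}

def score_record(record: dict) -> dict:
    """Return per-rule pass/fail plus a completeness ratio for one record."""
    results = {name: bool(rule(record.get(name))) for name, rule in RULES.items()}
    results["completeness"] = sum(v is not None for v in record.values()) / max(len(record), 1)
    return results

print(score_record({"email": "a@b.co", "country": "US", "age": 200}))
```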
This document summarizes how AWS can help optimize Microsoft workloads. It discusses how AWS allows lifting and shifting of Windows instances, improved elasticity, optimized storage, serverless architectures, and managed services to optimize costs over time. It covers building foundations like IAM, VPC, and directory services. It also discusses platform identity, core network infrastructure, Windows identity, migrating workloads to AWS, database migration, repeatable architectures, administration at scale, keeping systems updated, and licensing options on AWS.
Accelerating Insight - Smart Data Lake Customer Success Stories (Cambridge Semantics)
At Gartner Data & Analytics Summit 2017 Alok Prasad, President, was joined by Peter Horowitz of PricewaterhouseCoopers in presenting a session on how Cambridge Semantics' in-memory, massively parallel, semantic graph-based platform delivers an accelerating edge to data-driven organizations, while maintaining trust with security and governance.
SAP BI with BO from LCC Infotech, Hyderabad (lccinfotech)
This document provides an overview of course contents for SAP BI & BO training. It covers topics such as introduction to data warehousing, SAP BW architecture and data modeling, data loading and extraction in SAP BW, SAP BO tools including Universe Designer, Information Design Tool, Crystal Reports, Web Intelligence, and dashboards. The training will provide skills in areas such as data warehousing concepts, SAP BW data management, building reports and dashboards using SAP BO tools connected to SAP BW systems.
This document summarizes a presentation about Hansen Technologies' migration of their IT infrastructure from an on-premises data center to AWS. It discusses Hansen's motivations for migrating, the process they went through with migration partner Apps Associates, and the benefits they experienced after migrating to AWS, including lower costs, improved uptime, and ability to leverage managed services. It also provides an overview of considerations for migrating applications and databases to AWS and security best practices in the cloud.
The document discusses transforming IT with AWS cloud services. It describes AWS's layered architecture with foundational, platform and application services. It provides guidance on planning a cloud transformation including developing people skills, conducting assessments, creating a roadmap, financial analysis, technology fit, and aligning with enterprise IT programs. The document recommends standardizing on cloud patterns, using the full breadth of AWS services, and investing in a discovery workshop to build a cloud strategy.
The document summarizes key technologies and concepts for business intelligence (BI), including components of a good BI system. It discusses Oracle BI Suite Enterprise Edition as the technology of choice, highlighting its common enterprise data model, Oracle BI Server, role-based dashboards and analytic applications. The document concludes with contact information for the speakers.
The document discusses challenges that organizations face after a merger, including multiple disconnected systems and applications. It proposes adopting a service-oriented architecture (SOA) using Pipeline Pilot as a solution. Pipeline Pilot provides reusable components and web services that allow for rapid application development. This helps streamline systems, reduce costs, and provide flexibility needed to adapt to changing business needs in a post-merger environment.
This document outlines a data migration process from a legacy data environment to a S/4 HANA target environment. The process involves extracting data from various legacy sources like databases, flat files, and Excel. The data is then staged and transformed using a data services platform to validate, cleanse, and parse the data according to business rules. Finally, the cleaned data is loaded into the S/4 HANA target environment using pre-built load routines and SAP configuration is extracted to complete the migration.
Cloud and Analytics - From Platforms to an EcosystemDatabricks
Zurich North America is one of the largest providers of insurance solutions and services in the world with customers representing a wide range of industries from agriculture to construction and more than 90 percent of the Fortune 500.
This document discusses managed IT services provided by GSS America. It outlines various challenges faced by organizations including budget constraints, lack of skills, and regulatory compliance issues. GSS provides a range of managed services including infrastructure management, application management, and service delivery frameworks. Case studies demonstrate how GSS has helped clients through dedicated support teams, standard operating procedures, and optimized costs while ensuring quality of service and end user satisfaction.
GSS America's Workplace Services aim at equipping customers' businesses with round-the-clock support through its Global Operations Command Center (GOCC). Its comprehensive range of workplace services gives customers the ability to reduce their costs and improve their service levels. GSS intends to help global enterprises cut down on their infrastructure maintenance costs and provide access to expert skills.
Predictably Improve Your B2B Tech Company's Performance by Leveraging Data (Kiwi Creative)
Harness the power of AI-backed reports, benchmarking and data analysis to predict trends and detect anomalies in your marketing efforts.
Peter Caputa, CEO at Databox, reveals how you can discover the strategies and tools to increase your growth rate (and margins!).
From metrics to track to data habits to pick up, enhance your reporting for powerful insights to improve your B2B tech company's marketing.
- - -
This is the webinar recording from the June 2024 HubSpot User Group (HUG) for B2B Technology USA.
Watch the video recording at https://youtu.be/5vjwGfPN9lw
Sign up for future HUG events at https://events.hubspot.com/b2b-technology-usa/
Build applications with generative AI on Google Cloud (Márton Kodok)
We will explore Vertex AI Model Garden powered experiences and learn more about the integration of these generative AI APIs. We will see in action what the Gemini family of generative models offers developers for building and deploying AI-driven applications. Vertex AI includes a suite of foundation models, referred to as the PaLM and Gemini families of generative AI models, which come in different versions. We will cover how to use them via API to:
- execute prompts in text and chat
- cover multimodal use cases with image prompts
- fine-tune and distill models to improve knowledge domains
- run function calls with foundation models to optimize them for specific tasks
At the end of the session, developers will understand how to innovate with generative AI and develop apps in line with current generative AI industry trends.
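For flavor, a text, chat, and multimodal call through the Vertex AI Python SDK might look like the sketch below; the project ID, region, model name, and image URI are placeholders:

```python
# Minimal sketch using the Vertex AI Python SDK (google-cloud-aiplatform).
# Project, location, model name, and the image URI are placeholder assumptions.
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="my-project", location="us-central1")
model = GenerativeModel("gemini-1.5-flash")

# Plain text prompt
print(model.generate_content("Explain data pipelines in one sentence.").text)

# Multi-turn chat
chat = model.start_chat()
print(chat.send_message("What is Model Garden?").text)

# Multimodal: image + text in one prompt
image = Part.from_uri("gs://my-bucket/photo.jpg", mime_type="image/jpeg")
print(model.generate_content([image, "Describe this photo."]).text)
```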
Orchestrating the Future: Navigating Today's Data Workflow Challenges with Ai... (Kaxil Naik)
Navigating today's data landscape isn't just about managing workflows; it's about strategically propelling your business forward. Apache Airflow has stood out as the benchmark in this arena, driving data orchestration forward since its early days. As we dive into the complexities of our current data-rich environment, where the sheer volume of information and its timely, accurate processing are crucial for AI and ML applications, the role of Airflow has never been more critical.
In my journey as the Senior Engineering Director and a pivotal member of Apache Airflow's Project Management Committee (PMC), I've witnessed Airflow transform data handling, making agility and insight the norm in an ever-evolving digital space. At Astronomer, our collaboration with leading AI & ML teams worldwide has not only tested but also proven Airflow's mettle in delivering data reliably and efficiently—data that now powers not just insights but core business functions.
This session is a deep dive into the essence of Airflow's success. We'll trace its evolution from a budding project to the backbone of data orchestration it is today, constantly adapting to meet the next wave of data challenges, including those brought on by Generative AI. It's this forward-thinking adaptability that keeps Airflow at the forefront of innovation, ready for whatever comes next.
The ever-growing demands of AI and ML applications have ushered in an era where sophisticated data management isn't a luxury—it's a necessity. Airflow's innate flexibility and scalability are what makes it indispensable in managing the intricate workflows of today, especially those involving Large Language Models (LLMs).
This talk isn't just a rundown of Airflow's features; it's about harnessing these capabilities to turn your data workflows into a strategic asset. Together, we'll explore how Airflow remains at the cutting edge of data orchestration, ensuring your organization is not just keeping pace but setting the pace in a data-driven future.
Session in https://budapestdata.hu/2024/04/kaxil-naik-astronomer-io/ | https://dataml24.sessionize.com/session/667627
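For readers new to Airflow, a toy DAG in the TaskFlow API shows the orchestration model the talk builds on; the pipeline steps themselves are invented for illustration:

```python
# A toy daily ETL DAG using Airflow's TaskFlow API. Only the Airflow
# constructs are real; the extract/transform/load bodies are stand-ins.
from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def toy_pipeline():
    @task
    def extract() -> list:
        return [1, 2, 3]                   # stand-in for a real source

    @task
    def transform(rows: list) -> list:
        return [r * 2 for r in rows]       # stand-in business logic

    @task
    def load(rows: list) -> None:
        print(f"loaded {len(rows)} rows")  # stand-in for a real sink

    load(transform(extract()))             # dependencies inferred from data flow

toy_pipeline()
```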
End-to-end pipeline agility - Berlin Buzzwords 2024 (Lars Albertsson)
We describe how we achieve high change agility in data engineering by eliminating the fear of breaking downstream data pipelines through end-to-end pipeline testing, and by using schema metaprogramming to safely eliminate boilerplate involved in changes that affect whole pipelines.
A quick poll on agility in changing pipelines from end to end indicated a huge span in capabilities. For the question "How long does it take for all downstream pipelines to be adapted to an upstream change?", the median response was 6 months, but some respondents could do it in less than a day. When quantitative data engineering differences between the best and the worst are measured, the span is often 100x-1000x, sometimes even more.
A long time ago, we suffered at Spotify from fear of changing pipelines due to not knowing what the impact might be downstream. We made plans for a technical solution to test pipelines end-to-end to mitigate that fear, but the effort failed for cultural reasons. We eventually solved this challenge, but in a different context. In this presentation we will describe how we test full pipelines effectively by manipulating workflow orchestration, which enables us to make changes in pipelines without fear of breaking downstream.
Making schema changes that affect many jobs also involves a lot of toil and boilerplate. Using schema-on-read mitigates some of it, but has drawbacks since it makes it more difficult to detect errors early. We will describe how we have rejected this tradeoff by applying schema metaprogramming, eliminating boilerplate but keeping the protection of static typing, thereby further improving agility to quickly modify data pipelines without fear.
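The abstract does not include code, but the schema-metaprogramming idea can be illustrated with a small Python sketch: derive the downstream record type from the upstream schema at import time, so field additions propagate without boilerplate. The names and the derived field are invented:

```python
# Rough illustration only: build a downstream record type from the upstream
# schema dynamically. A new upstream field then flows to every downstream job
# without hand-edited boilerplate, while static typing information is kept.
from dataclasses import dataclass, fields, make_dataclass

@dataclass
class UpstreamEvent:          # hypothetical upstream schema
    user_id: str
    timestamp: int
    country: str

# Downstream record = all upstream fields + derived fields, built at import time.
DownstreamRecord = make_dataclass(
    "DownstreamRecord",
    [(f.name, f.type) for f in fields(UpstreamEvent)] + [("session_count", int)],
)

rec = DownstreamRecord(user_id="u1", timestamp=1717000000, country="SE", session_count=4)
print(rec)
```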
4th Modern Marketing Reckoner by MMA Global India & Group M: 60+ experts on W... (Social Samosa)
The Modern Marketing Reckoner (MMR) is a comprehensive resource packed with POVs from 60+ industry leaders on how AI is transforming the 4 key pillars of marketing – product, place, price and promotions.
6. WHY
• Cost of data access is high
• Data should be interpretable
• Data reliability expectations rise quickly
• Data publishing is make or break
• Resilience to failures with replay, fallback and routing (see the sketch after this list)
• Enable iterative processes and replay/switch
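A hypothetical sketch of the replay/fallback/routing bullet above (nothing here comes from the deck itself): a stage wrapper that retries each record a bounded number of times, falls back to an alternate path, and routes still-failing records to a dead-letter sink for later replay:

```python
# Illustrative resilience wrapper for one pipeline stage: bounded replay,
# a fallback path, and dead-letter routing for records that keep failing.
import logging
from typing import Callable, Iterable

log = logging.getLogger("pipeline")

def run_stage(records: Iterable,
              process: Callable,
              fallback: Callable,
              dead_letter: list,
              max_replays: int = 3) -> list:
    """Process records with replay, fallback, and dead-letter routing."""
    out = []
    for rec in records:
        for attempt in range(1, max_replays + 1):
            try:
                out.append(process(rec))   # primary path
                break
            except Exception:
                log.warning("attempt %d failed for %r", attempt, rec)
        else:                              # all replays exhausted
            try:
                out.append(fallback(rec))  # fallback path
            except Exception:
                dead_letter.append(rec)    # route aside for later replay
    return out
```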