© 2022 Neo4j, Inc. All rights reserved.
Redesigning Dell Pricing Platform
Andrew Nepogodin, Cloud Architect
Bhanu Naidu, Data Engineer
Natraj Rachakonda, Data Engineer
Agenda
Dell Digital Introduction
• Pre-assembly Model value
• Separation of concerns: data preparation vs runtime execution
Evolving from SOA to Serverless Architecture – how Graph became a necessity
• Case for Consolidation & Denormalization
• Increased data density (avoiding proliferation)
Why Neo4j?
• Pure SaaS principles
• Horizontal scalability as foundation
• Flexible schema
• Ad-hoc model to leverage Engine capabilities
• Lessons learned
• Data migration strategy
• Ops management – support, monitoring, alerts, administration, backups
Success story of Pricing Engine
Dell Digital IT
Domains: Product, Price, Payment, Cart, Quote, Order
Grow, Mature, Consolidate
Dell Digital IT
[Chart: AMER, EMEA, and APJ regions plotted over six periods]
Legacy Service Oriented Architecture
Legacy SOA (cont.)
[Diagram: Client → Commerce Service (Orchestrator) → Dependency Services and Data; failing dependencies marked with X]
Serverless Architecture
Pre-compute or pre-assembly
Pre-compute vs pre-assembly
Ingestion events (zero runtime dependency)
Why Graph (Neo4j) Service?
• Data "gluing" mechanism for disconnected source systems
• No proliferation during input; denormalized output
• Natural representation of structure and relations
• Schema-less... almost
• Compliance with declarative modeling
• Efficient traversals, including recursion and circular references
• Relationships (edges) are first-class citizens
• Distributed load between reads & writes
• Support for server-side plugins
Data Density
• No proliferation at input
• Denormalized output
Data density: Authoring Scope Forest
Denormalization
• Runtime data collection is expensive
• Self-sufficient runtime packages
• Key-value storage
• Data integrity (old package used until the new one is ready)
Pricing Engine SaaS offering
• Integrating disconnected data sources
• Price stages as business constructs rather than a technology process
• Common domain problems (e.g. rounding & compensation)
• Price-mutating actions via formulas
• Capabilities over use cases
• Ad-hoc business model declaration (no code changes)
• Price explanation
• Data usage insight
CAP Theorem
Expanding: Horizontal Scaling or Parallelization
Data-driven architecture
• Input data:
• Product Structure
• Components
• Adjustments
• Connecting data:
• Joining Product and Components
• Joining Nodes of Product Structure and Adjustment
• Applying data:
• Adjustment candidate selection
• Runtime context as an Adjustment application condition
[Diagram: Item (laptop) → Modules (Memory, HD) → Options (8GB, 16GB, 1TB, 2TB) → SKUs → StandardPrices ($100, $180, $200, $300)]
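To make the diagram above concrete, here is a minimal sketch of how such a product tree could be modeled and priced in Neo4j through the official Python driver. The labels, relationship types, URI, and credentials are illustrative placeholders; the deck does not publish Dell's actual schema.

```python
# Sketch: model the laptop tree from the slide and price a configuration with
# a single traversal. Labels, relationship types, and amounts are illustrative.
from neo4j import GraphDatabase

BUILD = """
MERGE (i:Item {id: 'I1', name: 'laptop'})
WITH i
UNWIND $options AS opt
MERGE (m:Module {name: opt.module})
MERGE (i)-[:HAS_MODULE]->(m)
MERGE (o:Option {name: opt.option})
MERGE (m)-[:HAS_OPTION]->(o)
MERGE (s:SKU {code: opt.option})
MERGE (o)-[:HAS_SKU]->(s)
MERGE (p:StandardPrice {amount: opt.price})
MERGE (s)-[:HAS_PRICE]->(p)
"""

PRICE = """
MATCH (:Item {id: $item_id})-[:HAS_MODULE]->(:Module)-[:HAS_OPTION]->(o:Option)
      -[:HAS_SKU]->(:SKU)-[:HAS_PRICE]->(p:StandardPrice)
WHERE o.name IN $selected
RETURN sum(p.amount) AS price
"""

options = [
    {"module": "Memory", "option": "8GB",  "price": 100},
    {"module": "Memory", "option": "16GB", "price": 180},
    {"module": "HD",     "option": "1TB",  "price": 200},
    {"module": "HD",     "option": "2TB",  "price": 300},
]

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    session.run(BUILD, options=options)
    price = session.run(PRICE, item_id="I1", selected=["16GB", "1TB"]).single()["price"]
    print(price)  # 380 for the 16GB + 1TB configuration
driver.close()
```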
Graph as common denominator
• Modeling of relationships
• Condition Expression
• DNF (Disjunctive Normal Form)
• Problems to solve:
• For a given Product, get all applicable Adjustments
• Get all Products affected by a given Adjustment
family.id=11111 and component.id==C-123
[Diagram: Item (id=I1) with Classification (family==11111), Module, and Components (id=C-222, id=C-123); an Adjustment node attaches where the condition above matches]
Dynamic Graph schema
[Diagram – Main Schema: Item, Specification, Classification, Component, AdjustmentContext, and Adjustment nodes linked by has / is / appliedTo / contains relationships. Ad-hoc Schema: Service and Context nodes linked by includes / belongs / has relationships.]
Things we wish we knew sooner
Separating Read and Write operations
Memory footprint
Multi-label capability
Bookmarks
Disallow a slow node from becoming Leader
Migrating legacy data into the new Platform
Initial load
[Diagram: Product and Component Sources → Product/Component Crawlers (Cloud Functions on K8S) → Stream → Neo4j]
Migrating legacy data (cont.)
Change events
[Diagram: Product, Component, and Adjustment change-event Streams → Cloud Functions → Neo4j]
Production Data Volume
• Uneven load distribution (peaks and valleys)
• Up to 50 million payloads per day
80% are small payloads:
• 2-4 vertices
• 1-4 edges
15% are medium payloads:
• 4-12 vertices
• 3-12 edges
5% are large payloads:
• Up to 4,000 vertices
• Up to 4,000 edges
Production Database Infrastructure
Monitoring & Alerts
Splunk
• Ingest all Neo4j-related logs
• Configure macros to capture log events
• Create custom dashboards
• Set up alerts
Neo4j performance dashboard with Grafana
Prometheus/Grafana
• Export Neo4j host and database metrics to Prometheus
• Set up custom dashboards dedicated to host metrics and DB metrics
• Configure email alerts with Alertmanager
Backup/restore with no client interruption
• Customized bash script performs full/incremental backups
• Daily backups scheduled with crontab
• Backups saved to NAS disk
• Delphix used to refresh non-prod environments
Summary: Benefits for business
Unified pricing experience
• Across domains – authoring, shop, transact
• Across regions
Reduced time to market
• Self-service configuration (no recompilation/redeployment)
• No duplicated functionality across domains
Decoupling business from architectural constraints
• Supporting what-if scenarios
• Virtually unlimited price authoring logic
• True delta price presentation
Platform stability
• Guaranteed SLA
• 99.999% availability
• Controllable system load
Questions and Answers
Editor's Notes
1. Today we are going to talk about the evolutionary transformation that took place at the Dell Digital org. We will cover the challenges of our legacy architecture, how we decided to address them, and most importantly which technologies we had to leverage to achieve our goal. Our special focus is graph technology, which happened to fit perfectly with our architectural objectives. Our goal was to create a Pricing Engine service implemented under SaaS principles, delivering a scalable, resilient, zero-loss, flexible, and highly customizable solution for Dell's pricing needs. We will also highlight the lessons we learned along the way, a bit of stats about data volume, and our Ops model.
2. Dell Digital covers the set of services required to operate the Commerce platform. There are several domains addressing various aspects of the sale lifecycle, such as Quote, Payment, Order, Product, Cart, and Price. Each of those domains is its own large ecosystem comprising many services, tools, processes, and procedures. A wide variety of services is in use – from legacy systems to modern cloud-based solutions. An important goal is to keep those systems able to communicate with each other while executing the transition from older frameworks and architectures to newer ones. We refer to this process of gradually replacing and retiring legacy systems as Digital Transformation. To a degree, it is almost a surgical procedure – replacing pieces while the entire commerce platform keeps functioning, so that end customers experience no issues.
3. Dell Digital is spread across the globe physically and logically. In the past, each region was able to operate successfully with a great degree of isolation. Nowadays, however, isolation can make it extremely hard to unify the operating model and, most importantly, the customer experience. In today's world, all business units must function in collaboration and synchronization to achieve the company's global objectives. However, operating different market segments under a unified set of services is not a trivial task, especially with the baggage of separated and disconnected tools and services. Dell Digital is taking steps to unify the segments, starting with the price-authoring concern and ending with the prices displayed on the customer's screen. Unified and scalable data management allows Dell to expand its business year after year.
4. Let's dive a bit into history, to the time when Service Oriented Architecture was the dominant gold standard for enterprise systems. Many of you have at some point dealt with one of the variations of the Enterprise Service Bus architecture. There are many commercial and custom solutions, but the main principle is the same – an Orchestrator service is responsible for gluing together disconnected services. In some cases the Orchestrator would talk to a legacy tool, in others it would talk directly to a DB, but ultimately all obtained data had to be transformed into some common format that the Orchestrator could act upon.
5. While the idea of an orchestrator that understands data formats from multiple disconnected systems was a great advancement over monolith architecture, it had its own challenges. A typical customer request looked like this: initiate a request to a Commerce Service, which acts as an Orchestrator; the Orchestrator initiates sequential or parallel communication with external Dependency Services or raw data sources; all responses are obtained, the data is processed, and the response is passed back to the caller. However, as usually happens, reality is often not as bright as it looks on a diagram. Should one of the Dependency Services fail or simply be unhealthy, the entire customer response is compromised – either from an SLA standpoint or, in the worst case, it cannot be completed at all. This creates a challenge of indirect coupling between various external systems. Of course, each of the identified challenges can be addressed with some architectural improvements, but all that would come at the expense of complicating the solution. And as we all know, the simpler the architecture, the better it is from virtually any perspective.
6. What is the natural step in improving the runtime request experience? Eliminating runtime dependencies on external services. However, we still need data, right? The answer is to make the data available without a trip to an external system. That's where serverless, or event-driven, architecture becomes an attractive option for collecting all required data before using it. The main architectural focus switches from data preparation at runtime towards using ready-to-consume data prepared ahead of time, so that the runtime computational and communication cost is minimal. Prepare your data offline, use your data online. Upstream systems feed streams of changes into background services that are responsible for connecting data from various sources and generating self-sufficient data content. Preparing self-sufficient data content can be referred to as denormalization. Then, at runtime, a request is served by a denormalized data package that has no dependency on external systems, services, or data. All the heavy lifting that was previously executed by the runtime Orchestrator is now handled by an offline Consolidator, which does not participate in runtime execution and thus is not subject to the runtime SLA.
7. Since we mentioned the denormalization approach, let's have a brief overview of the different flavors of that concept. The foundation of data presentation in any commerce platform is the price of a product. The price you typically see on a screen is usually taken from a pricing document. However, to get that single number onto a screen, there is complex business and procedural logic involved. For large retailers like Dell, with hundreds of thousands of different products available in different geographical and business segments, setting an individual price per product would be highly inefficient. Instead, prices are authored in different systems while targeting specific properties or attributes. Once the price-decision points are consolidated from multiple authoring systems, the denormalization process generates a document that will ultimately be used to display the price to end customers. In the case of a single price point, this denormalization technique can be referred to as pre-compute. One product has one price within the given context. Pretty simple. However, Dell has its specifics. Many of Dell's offerings allow for product customization, and each selection change results in a different price. The immediate temptation might be to generate a denormalized price document for each possible configuration selection. However, many of Dell's solutions have hundreds of different configuration options. Simple math provides interesting details – a single product with just 30 multi-select configuration options would generate more than 1 billion permutations. Even for Big Data, billions of documents for a single product is probably not something we want to deal with.
8. Dell products are usually represented in the form of a Tree Structure. There is a root node, which is the product itself, comprised of different modules such as memory or hard drive, and each module may have different selection options. For example, the memory selection can be between 8GB and 16GB, etc. There is a default configuration that a customer sees when they navigate to the list of products, and the default configuration has its price. The price of a product is affected by its selected Options. If the default price were the only one required, we could leverage the pre-compute model and store the default price for each product as a record in our denormalized repo. However, Dell offers the possibility of customization. A customer may choose: I do not want the default 1TB drive, I want 2TB. As you can imagine, a 2TB drive is more expensive than a 1TB drive, therefore the price calculated for the default configuration is not applicable to the custom configuration. We could potentially find all possible permutations of the selections and calculate a price for each of them, but this quickly gets out of control as the number of configuration options increases. There is another approach. For each price-forming element in the Tree Structure, there can be an associated object containing actionable price information. With this approach we do not store the product price as a single number. Instead, we create a lightweight model of associations between Nodes and price-forming elements. Then at request time, the customer simply provides the desired configuration as input, and the model rolls up the prices to generate the resulting price. This is what we refer to as the pre-assembly model. Still no dependency on external systems, still denormalized content, but the end number is the result of a lightweight in-memory calculation.
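To make the pre-assembly roll-up concrete, here is a minimal Python sketch using the laptop example from the slides. The data model is hypothetical and deliberately simplified; the point is only that the final price is a lightweight in-memory sum over the customer's selected options.

```python
# Illustrative pre-assembly roll-up (hypothetical data model, not Dell's actual
# package format). Each Option node carries its standard price; the runtime
# price is a roll-up over the options the customer selected.
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Node:
    node_id: str
    kind: str                 # "Item", "Module", or "Option"
    price: float = 0.0        # standard price attached to Option nodes
    children: List["Node"] = field(default_factory=list)

def roll_up(node: Node, selected: Set[str]) -> float:
    """Leaf-to-root roll-up: sum prices of selected options under this node."""
    total = node.price if node.kind == "Option" and node.node_id in selected else 0.0
    for child in node.children:
        total += roll_up(child, selected)
    return total

# The laptop from the slide: Memory (8GB/$100, 16GB/$180), HD (1TB/$200, 2TB/$300).
laptop = Node("I1", "Item", children=[
    Node("Memory", "Module", children=[Node("8GB", "Option", 100.0),
                                       Node("16GB", "Option", 180.0)]),
    Node("HD", "Module", children=[Node("1TB", "Option", 200.0),
                                   Node("2TB", "Option", 300.0)]),
])

print(roll_up(laptop, {"8GB", "1TB"}))   # default configuration -> 300.0
print(roll_up(laptop, {"8GB", "2TB"}))   # customized 2TB drive  -> 400.0
```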
9. In order to generate a just-in-time price for a given product configuration, Dell's Pricing Engine needs three main types of data – the Product Structure, Price-Forming Objects (Adjustments), and optional Components. Due to the nature of the business, each of those data sources has an independent lifecycle with no direct correlation to the others. To give a simple example, an 8GB memory stick has its own price while being used in thousands of different Products. A price change for this memory stick may or may not affect the prices of the products where it is installed. Or a change in Product Structure (e.g. adding or removing configuration Options) may affect the default or custom product prices. Each of those changes is authored and scoped within its own ecosystem. Changes are streamed out in the form of events. The relation between a Product Structure and its Adjustments may not be known ahead of time. The question becomes – how do we connect those pieces of data together so that we can execute a lightweight, just-in-time price calculation? Or in other words, what do we use as the data consolidation mechanism from which we can generate denormalized pre-assembly product models? The answer is a Graph system. Graph has become the centerpiece of our data consolidation. Denormalized content is the result of graph traversal logic.
10. Why Graph? Why not an RDBMS or some other NoSQL DB? The answer lies in several major factors – resolved associations (as you might remember, a Product and its Adjustments must be associated) and a flexible, dynamically defined schema represented in the form of ad-hoc relations. In terms of data processing, we were aiming for no proliferation during intake and denormalized output. For example, an 8GB memory stick can be used in thousands of different products, so its price change may impact thousands of price packages. We intake its price change once, and we get a denormalized output of thousands of affected packages. The relationship between price Adjustments and Products is defined dynamically by a series of attributes rather than via a predefined schema. This implies that we still use some elements of a schema, but by no means are we limited to a rigid set of allowed relations. The depth of the relationship between price-forming elements is not strictly defined. Sometimes it is a direct one-to-one link; in some cases it is based on inner elements at various levels of the hierarchy; in some cases entities may be considered related if both belong to the same forest. As you can imagine, the traversal logic may become complicated, but Graph takes care of that complexity, leaving us with a simple formulation of the traversal goal.
11. Let's take a quick look at the data density problem. When an event affects one or more products, we do not want to spend intake time identifying all affected elements, and even less do we want to duplicate the event for every possible relation destination. The order of event delivery is non-deterministic: sometimes a Price Adjustment event may arrive before the Product it impacts, or the other way around. To address the disconnected nature of data relations, we simply create potential points of connection, or Context Nodes, based on the model declaration. These may or may not ever be used; the important aspect is to have them ready. For example, say two Product events were ingested and both got linked to Context objects. Once a Price Adjustment event arrives, it gets connected to the existing Context only once, but with that one link we gain the ability to identify the relationship with both Products.
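A minimal Cypher sketch of the Context-node idea, run through the official Neo4j Python driver (labels, keys, URI, and credentials are illustrative, not Dell's actual schema): MERGE makes ingestion idempotent and order-independent, so whichever event arrives first creates the Context and later events simply attach to it.

```python
# Sketch of Context-node ingestion. MERGE keeps intake idempotent and
# order-independent; labels and property names are illustrative only.
from neo4j import GraphDatabase

INGEST_PRODUCT = """
MERGE (ctx:Context {key: $context_key})
MERGE (p:Product {id: $product_id})
MERGE (p)-[:BELONGS_TO]->(ctx)
"""

INGEST_ADJUSTMENT = """
MERGE (ctx:Context {key: $context_key})
MERGE (a:Adjustment {id: $adjustment_id})
MERGE (a)-[:APPLIED_TO]->(ctx)
"""

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    # Two products share one Context; the adjustment links to that Context once,
    # yet becomes reachable from both products through it.
    session.run(INGEST_PRODUCT, context_key="family:11111", product_id="P-1")
    session.run(INGEST_PRODUCT, context_key="family:11111", product_id="P-2")
    session.run(INGEST_ADJUSTMENT, context_key="family:11111", adjustment_id="ADJ-9")
driver.close()
```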
12. Data at Dell can be authored at different levels. Sometimes it is a business catalog, sometimes a customer-specific catalog, a region, a country, a segment, or another grouping unit. Authoring Scopes are a completely separate set of data that is heavily used by the Pricing Platform. The relationship between authoring scopes is often not hierarchical in nature; it is more of a forest, with possible circular references. Proliferating pricing data across different authoring scopes can create a data explosion. In the realities of Graph, we can afford the non-linear nature of relations and membership. Membership groups form forests or clusters. This way, a relevant price-related input is ingested once and becomes available to the entire forest.
13. Even though we have all the relevant data stored in Graph, resolving relationships can be an expensive process. During a runtime request we do not want to spend time traversing the Graph to gather all the relevant pieces of data to calculate the price. Instead, we want to store all price-relevant data packages ahead of time. Preparing such packages is done via scalable background processes that are not subject to the SLA agreement. That is basically where denormalization happens. A single price adjustment element can be included in thousands of packages. Preparing an individual package can take some time; however, what matters is how many packages we can produce within a given time, and that is controlled by the degree of parallelism. Spending, for example, 800 ms on a single package does not sound too impressive; however, if within the same 800 ms we can generate 10K packages, that is already not that bad. Once a package is prepared, the Pricing Engine has all the necessary information to calculate the product price per the customer's selection – in one place, with no external dependency. Packages are stored in a key-value store. With this approach there is no undefined data state – until the new package is ready, the older one is used.
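A small sketch of the "old package serves until the new one is ready" behavior against a key-value store, here Redis purely for illustration (key names and payload shape are hypothetical): the new package is staged under a versioned key and the "current" pointer is flipped only afterwards, so readers never see a half-built package.

```python
# Sketch of package publication with no undefined data state. The freshly
# built package is written under a versioned key first; only then is the
# "current" pointer flipped, so readers keep using the old package until
# the new one is fully in place. Key names are hypothetical.
import json
from typing import Optional

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def publish_package(product_id: str, version: str, package: dict) -> None:
    data_key = f"price-package:{product_id}:{version}"
    r.set(data_key, json.dumps(package))                    # stage the new package
    r.set(f"price-package:{product_id}:current", data_key)  # flip the pointer

def read_package(product_id: str) -> Optional[dict]:
    data_key = r.get(f"price-package:{product_id}:current")
    return json.loads(r.get(data_key)) if data_key else None
```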
14. Let's review the goals we were trying to achieve while redesigning our Pricing Engine Platform and how graph technology allowed us to achieve them. In a nutshell, the Pricing Engine is a sort of calculator. For the given data, it calculates prices against any product represented as a Tree Structure. Prices are calculated in different traversal directions – from the leaves to the root, from the root to the leaves, and any combination in between. In addition, it solves common commerce problems – rounding, price compensation, currency conversion, etc. – as well as business problems such as price explanation and break-down, and grouping by price category (price vs tax vs discount vs cost). From day one we set ourselves a goal – it must be implemented as SaaS. Why? Even though the service was created to serve Dell's needs, the variety of cases among Dell's internal customers is no different from serving external customers. That meant we could not afford hard-coded use-case implementations, because hard-coded logic for one customer will not work for another. Therefore, we had to create a platform where the Pricing service could accommodate any customer via self-service configuration. We created a strict set of rules, such as "Pricing does not author data, it only serves data": all price-related data is authored in external systems, while the Platform only facilitates connecting different scopes of rules and data together. Or another rule, which is my favorite – "if we implement this capability, could we advertise it as a selling point if we put this Platform on the market?" As you can imagine, with such a degree of flexibility it is virtually impossible to predict and maintain a rigid data schema. Instead, relationships between data points are created dynamically based on business context and attributes. Graph allows us to preserve the data's meaning without going into abstraction layers that require several PhDs to comprehend the content. The principle is simple – create natural relations now, use them later. What I mean by "natural" relations is that by looking at your Graph content you should be able to formulate sentences about the data's meaning in plain English, so that even outsiders would understand.
15. Let's review some challenges we had to solve along the way. A Graph DB, just like any other DB, is subject to the CAP Theorem, according to which a data store can provide only two out of three guarantees. In the case of Neo4j, those guarantees are Consistency and Availability. So we had to decide whether we could live without Partition Tolerance. The answer was yes, but under certain conditions. Without sharding, the only option to increase data intake throughput is vertical scaling – increasing computational resources. And even though the throughput limit can be quite high, it is still a limit. So we had to ensure data ingestion throttling was in place to prevent service overload. I should mention that Neo4j Fabric allows for partitioning, but currently only for disjoint Graphs, which is not the case for Pricing data.
16. Another problem to solve. As we just mentioned, all Neo4j write transactions are executed against the Leader Core. Since real sharding is not an option, the only way to increase write throughput is vertical scaling – adding computational power to the Neo4j Cores – which of course increases hardware cost. The reality of Dell's business is that the amount of data keeps growing, so there must be a strategy to deal with ever-increasing data volume, and vertical scaling provides only temporary relief. Fortunately for Dell, there was an alternative. The data has clearly defined geographic region boundaries. So instead of physical data sharding, we were able to organize "logical sharding", where each region serves only the products for that region. Of course, a subset of the data is duplicated between regions, but as we all know – duplicated data is better than poorly organized data. Our end solution still avoids proliferation: the original message is published once, each region picks only the data it needs, and in some cases the same message may be picked up by more than one region.
17. Having Neo4j as our Graph service allowed us to achieve a truly data-driven solution. What does that mean? Any data ingested into our system has two separate categories of properties. The first category drives the connection points (or potential connection points) between data types – Product Structure, Adjustments, and Components. However, the fact that an Adjustment is linked to a Product does not mean it plays an immediate role in its price calculation. The relations defined at the Graph level are nothing more than "runtime candidates"; they mean an Adjustment has the potential to be applied. Whether or not an Adjustment actually gets applied is determined by the runtime Request Context. This second category of data drives the "final decision" on what is applicable and what is not. This way, two different categories of customers requesting the price of the same product may see two different prices.
18. Since our focus is Graph, let's look at the first category of relations, the one that defines price Adjustment candidates. Typically, the business authors price-affecting constructs in the form of a Boolean tree of attributes using and/or/not/contains/starts-with/etc. clauses. This is referred to as a condition expression, which in its raw form is just a string. The question is how this gets applied at the Graph level. Each message goes through a data decomposition phase which ultimately gets translated into vertices and edges at the Graph level, and the condition expression plays a major role in forming associations. An Adjustment becomes relevant to a Product only if one or more conditions are satisfied. A condition expression, being a binary tree, can complicate Graph traversal, and to simplify traversals we want to avoid complex logic. For this, we flatten the condition expression by converting it into Disjunctive Normal Form: a complex expression with nested clauses becomes a flat list of AND clauses combined by an OR clause. Within each AND clause there is a list of attributes to look for. Each AND clause can be processed separately during traversal, and if at least one AND clause is satisfied, the Adjustment gets associated with the Product. This significantly simplified Graph traversal. Ultimately, we want only two types of answers from our Graph storage: for a given Product, give me all relevant Price Adjustments; and for a given Adjustment, give me all Products where it is applied.
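An illustrative sketch of the DNF evaluation (not Dell's actual decomposition pipeline): the flattened condition expression becomes a list of AND clauses, and an Adjustment is a candidate for a Product if at least one clause is satisfied in full.

```python
# Illustrative DNF matching. Each AND clause is a dict of attribute -> expected
# value; the whole expression is an OR over those clauses.
from typing import Dict, List

# DNF for: (family.id == 11111 AND component.id == "C-123") OR (family.id == 22222)
dnf: List[Dict[str, object]] = [
    {"family.id": 11111, "component.id": "C-123"},
    {"family.id": 22222},
]

def adjustment_applies(product_attrs: Dict[str, object],
                       dnf_clauses: List[Dict[str, object]]) -> bool:
    """True if any AND clause is satisfied in full (OR over clauses)."""
    return any(
        all(product_attrs.get(attr) == expected for attr, expected in clause.items())
        for clause in dnf_clauses
    )

product = {"item.id": "I1", "family.id": 11111, "component.id": "C-123"}
print(adjustment_applies(product, dnf))  # True – the first AND clause matches
```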
19. While we use a loosely defined schema for ingested data, there can be edge cases that require limited customer-specific data relations or traversals. Instead of creating a separate DB, we just create an ad-hoc sub-model that exists side by side with the main data content. The question is how the ad-hoc data is processed. Again, we leverage declarative syntax. Upon data ingestion, if some attribute-matching criterion is met, our data processing pipeline can infer special instructions on how to interpret or decompose the relationships. This may include inferring Labels, defining attributes that should be exposed as separate Nodes, and special traversal instructions such as which Node Labels to hop while traversing the Graph. This is not a user-friendly type of instruction to specify, but the mechanism allows us to quickly accommodate business needs without any code recompilation or redeployment.
20. A few tips that we learned along the way. Some of these may be obvious but still were not considered until real-life situations pointed to them. These points are specific to Neo4j and may not be applicable to other Graph solutions. Neo4j executes all write operations on the Leader Core, and depending on the data load this may burn a lot of CPU. On the other hand, read operations requiring Graph traversal do not come for free either; however, unlike write operations, they can be executed on Follower Cores or Read Replicas. By explicitly specifying the type of transaction, you can redirect read transactions to less busy instances, thus giving more write room to the Leader. Additionally, read replicas can be scaled horizontally. Another factor to keep in mind is that Neo4j is most efficient when the entire content fits within the memory of a Core. In our case, we store in Graph only the data that is relevant to defining the relationship between a Product and its Price Adjustments. Any additional information is stored in other DBs such as Mongo, Redis, or Blob storage. This way you can utilize the Core's memory with maximum efficiency and not waste its CPU cycles on paging. Another useful trick is separating data by Vertex labeling. Neo4j Vertices can have multiple labels, so depending on the Label picked, a Vertex can be seen as part of the main schema or of an ad-hoc schema. Neo4j considers a write transaction successful if the majority of Cores acknowledge the write operation; however, that does not mean the Read Replicas participate in the confirmation. Newly inserted data gets propagated to the read replicas at a later stage. So how do we know whether the read replica we hit has already received the data we just inserted? Neo4j has a useful mechanism of bookmarking. Once you execute your write transactions, you get bookmarks, and you can pass those bookmarks to your read transactions. If you hit a read replica that has not yet received the new data, the bookmark will hold your request until the data gets delivered to that replica. Another tip that we learned the hard way: when you use a stretch cluster setup (Cores in separate data centers) and one data center is slower than the others, you can prevent write operations from happening on that slow Core by disallowing it from becoming the Leader. It will still participate in Leader elections but will not become the Leader itself.
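A minimal sketch of the read/write separation and bookmark mechanism using the official Neo4j Python driver (5.x API; URI, credentials, and database name are placeholders): the write goes to the Leader, the read is routed to a Follower or Read Replica, and the bookmark makes the read wait until that instance has caught up.

```python
# Sketch of routing reads away from the Leader plus read-your-own-writes via
# bookmarks (Neo4j Python driver 5.x). The neo4j:// scheme enables cluster
# routing; execute_read goes to followers/read replicas.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://cluster.example.com:7687",
                              auth=("neo4j", "password"))

# Write transaction: executed on the Leader.
with driver.session(database="pricing") as write_session:
    write_session.execute_write(
        lambda tx: tx.run("MERGE (p:Product {id: $id})", id="P-1")
    )
    bookmarks = write_session.last_bookmarks()   # causal-consistency token

# Read transaction: routed to a Follower/Read Replica; the bookmark holds the
# request until that instance has received the write above.
with driver.session(database="pricing", bookmarks=bookmarks) as read_session:
    count = read_session.execute_read(
        lambda tx: tx.run("MATCH (p:Product {id: $id}) RETURN count(p) AS c",
                          id="P-1").single()["c"]
    )
    print(count)

driver.close()
```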
21. A few words about our migration strategy. While analyzing the effort required to convert the old DB content into the Graph repo, we found that investing in a direct data load would not be practical. The reason is that there are a lot of declarative business rules that must be considered when processing data; to accommodate all the business rules for defining relations, the entire pricing functionality would have had to be duplicated inside the migration tool. Given that this would be a throw-away investment, we decided to go with a different approach. All data coming into the Pricing Platform arrives in a canonical format that is independent of the upstream formats. Conversion is achieved with a set of microservices that we call Adapters. A data event arrives at an Adapter in the upstream system's native format, and the Adapter's responsibility is to convert it into the canonical representation and pass it to the intake endpoint. So instead of investing in a throw-away data migration tool, we took a reusable approach: the intake Adapters were extended to operate in Crawling mode. That means that instead of waiting for upstream system events, an Adapter goes to the source system and explicitly pulls all available data. This was essentially a big-bang approach, with maxed-out scaling of Cloud Functions to maximize throughput while making sure we did not kill our Neo4j instance with the number of connections and write operations. So the short answer to our migration strategy – no direct DB migration, just native data intake as if it were new data.
22. Once the initial big-bang load is over, the Adapters return to their normal operating mode and simply receive change notifications, but the end action is still the same – receive data in its native format, convert it into the canonical schema, then submit it to the Pricing intake endpoint. This way, the only difference between the initial data migration and normal event processing is the volume of data.
23. A bit of detail about the data volume the Pricing Platform is serving. Per day we usually receive anywhere between 5 and 50 million events, and each event represents a Tree Structure. However, not all tree structures are equal. The vast majority of events, about 80%, are small trees representing Price Adjustments – typically no more than 4-5 vertices and edges. About 15% are medium-size messages where the number of vertices and edges can go up to 10-12. And about 5% of all events represent large tree structures, where the number of Vertices and Edges can go up to 4000. As you can imagine, each of those events has meaning and a potential impact on price, which means we must guarantee a zero-loss processing system. How that is achieved is a separate subject, but it would be very difficult without bullet-proof infrastructure stability. Let's review how we manage our infrastructure.
24. All our Neo4j clusters run version 4.4.x and are spread across 3 different data centers. Of these 3 DCs, 2 are close by and the third one is remote. Each cluster consists of 3 core nodes (one in each data center) and 4 read replicas (2 in each of the data centers close to the app layer). We enabled server groups to make sure the Leader node stays in one of the close-by DCs, as well as to prevent app calls to the remote DC. Each node has 24 cores, 192 GB RAM, and SSD storage, with OEL8 running on VMware. As for users, we use both LDAP and native authentication, with HTTPS communication enabled using the Dell certificate authority.
25. Splunk for DB log analytics. We install Splunk agents on all Neo4j servers to ingest DB logs into Splunk (neo4j logs, debug logs, query & security logs) in real time. We custom-built multiple dashboards using macros to give the app team better visibility. We also created multiple alerts to identify high-severity incidents such as a node losing communication, out of memory, running out of threads, the Neo4j service being down, etc. Splunk data retention is 90 days, so it is easy to go back and troubleshoot a specific interval.
26. Prometheus/Grafana captures both host metrics and DB metrics from the Neo4j clusters. Alertmanager sends alert emails to the DBA distro. Grafana is our main troubleshooting tool for all production incidents.
27. Backups are taken from the remote data center's core node, as that node does not serve app traffic. Full backups are performed daily to a backup NAS disk, and restore validation is performed quarterly. All non-prod refreshes happen through Delphix.
28. All the changes we have talked about are good as an academic concept, but the real question is how the business benefits from this architecture. We can mention a few benefits. A unified pricing experience guarantees that all commerce domains and regions deal with a common format understood by all stakeholders. Time to market is an essential factor in staying competitive: 90% of all new cases are addressed via configuration of existing capabilities – no code recompilation, no redeployment. In the past, each commerce segment and region had its own implementation of the Pricing Service; with the new Engine implemented as SaaS this is no longer the case, which lets us save on labor and maintenance. Another major gain is that the business now has the freedom to experiment with virtually any type of price-authoring logic without coordinating with the backend. The Pricing Engine ensures all real and virtual prices are processed equally, which allows us to accommodate non-linear logic such as tier-based discounts while still giving an accurate delta price presentation. The delta price is the price difference between the currently selected configuration and a would-be selected configuration. And system-wide we have a stable SLA, because response time no longer depends on dependency services – all relevant information is available ahead of time. In addition, the business no longer needs to coordinate data loads, because the architecture ensures a predictable load on the services that cannot scale horizontally.