This document discusses how insurance companies use MongoDB. It provides examples of how MongoDB allows insurance companies to create a single customer view, consolidate data from multiple disparate systems, and distribute claims information globally in real time. MongoDB provides a flexible schema, automatic replication, and the ability to query data locally, improving customer experience, risk analysis, fraud detection, and claims processing. The document highlights several insurance companies that have adopted MongoDB to unify customer data, modernize legacy systems, and power new data-driven applications and services.
Find out which is faster, SQL or NoSQL, for traditional reporting tasks. Discover how you can optimise MongoDB aggregation pipelines and how to push complex computation down to the database.
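A hedged sketch of the kind of push-down the session describes (the orders collection, its fields, and the index are invented for illustration): an aggregation pipeline can filter, group, and sort inside the database rather than in application code.

// Put $match first so an index on { status: 1, orderDate: 1 } can prune
// documents before any computation happens.
db.orders.aggregate([
  { $match: { status: "shipped", orderDate: { $gte: ISODate("2020-01-01") } } },
  // Push per-customer totals down to the server instead of fetching
  // every order and summing in the application.
  { $group: { _id: "$customerId", total: { $sum: "$amount" }, orders: { $sum: 1 } } },
  { $sort: { total: -1 } },
  { $limit: 10 }
])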
Cluster computing frameworks such as Hadoop or Spark are tremendously beneficial in processing and deriving insights from data. However, long query latencies make these frameworks sub-optimal choices to power interactive applications. Organizations frequently rely on dedicated query layers, such as relational databases and key/value stores, for faster query latencies, but these technologies suffer many drawbacks for analytic use cases. In this session, we discuss using Druid for analytics and why the architecture is well suited to power analytic applications.
User-facing applications are replacing traditional reporting interfaces as the preferred means for organizations to derive value from their datasets. In order to provide an interactive user experience, user interactions with analytic applications must complete on the order of milliseconds. To meet these needs, organizations often struggle to select a proper serving layer; many serving layers are chosen for their general popularity without an understanding of their architectural limitations.
Druid is an analytics data store designed for analytic (OLAP) queries on event data. It draws inspiration from Google’s Dremel, Google’s PowerDrill, and search infrastructure. Many enterprises are switching to Druid for analytics, and we will cover why the technology is a good fit for its intended use cases.
Speaker
Nishant Bangarwa, Software Engineer, Hortonworks
The openCypher Project - An Open Graph Query Language (Neo4j)
We want to present the openCypher project, whose purpose is to make Cypher available to everyone – every data store, every tooling provider, every application developer. openCypher is a continual work in progress. Over the next few months, we will move more and more of the language artifacts over to GitHub to make them available to everyone.
openCypher is an open source project that delivers four key artifacts released under a permissive license: (i) the Cypher reference documentation, (ii) a technology compatibility kit (TCK), (iii) a reference implementation (a fully functional implementation of key parts of the stack needed to support Cypher inside a data platform or tool), and (iv) the Cypher language specification.
We are also seeking to make the process of specifying and evolving the Cypher query language as open as possible, and are actively seeking comments and suggestions on how to improve the Cypher query language.
The purpose of this talk is to provide more details regarding the above-mentioned aspects.
Neo4j is a powerful and expressive tool for storing, querying and manipulating data. However, modeling data as graphs is quite different from modeling data in a relational database. In this talk, Michael Hunger will cover modeling business domains using graphs and show how they can be persisted and queried in Neo4j. We'll contrast this approach with the relational model, and discuss the impact on complexity, flexibility and performance.
Relational databases were conceived to digitize paper forms and automate well-structured business processes, and they still have their uses. But an RDBMS cannot model or store data and its relationships without complexity, which means performance degrades as the number and depth of data relationships grow and as data size increases. Additionally, new types of data and data relationships require schema redesign that increases time to market.
A native graph database like Neo4j naturally stores, manages, analyzes, and uses data within the context of connections, which means Neo4j provides faster query performance and vastly greater flexibility in handling complex hierarchies than SQL databases.
This overview presentation discusses big data challenges and provides an overview of the AWS Big Data Platform by covering:
- How AWS customers leverage the platform to manage massive volumes of data from a variety of sources while containing costs.
- Reference architectures for popular use cases, including connected devices (IoT), log streaming, real-time intelligence, and analytics.
- The AWS big data portfolio of services, including Amazon S3, Kinesis, DynamoDB, Elastic MapReduce (EMR), and Redshift.
- The latest relational database engine, Amazon Aurora: a MySQL-compatible, highly available relational database engine that provides up to five times better performance than MySQL at one-tenth the cost of a commercial database.
Created by: Rahul Pathak, Sr. Manager of Software Development
Data Architecture Best Practices for Advanced Analytics (DATAVERSITY)
Many organizations are immature when it comes to data and analytics use. The remedy lies in delivering a greater level of insight from data, straight to the point of need.
There are many Data Architecture best practices today, accumulated from years of practice. In this webinar, William will look at some Data Architecture best practices that he believes have emerged in the past two years and are not yet worked into many enterprise data programs. These are keepers that organizations will need to adopt sooner or later, so it's best to work them into the environment mindfully.
How Kafka Powers the World's Most Popular Vector Database System with Charles... (HostedbyConfluent)
We use Kafka as the data backbone to build Milvus, an open-source vector database system that has been adopted by thousands of organizations worldwide for vector similarity search. In this presentation, we will share how Milvus uses Kafka to enable both real-time processing and batch processing on vector data at scale. We will walk through the challenges of unified streaming and batching in vector data processing, as well as the design choices and the Kafka-based data architecture.
MongoDB Atlas makes it easy to set up, operate, and scale your MongoDB deployments in the cloud. From high availability to scalability, security to disaster recovery - we've got you covered.
Automated: With MongoDB Atlas, you no longer need to worry about operational tasks such as provisioning, configuration, patching, upgrades, backups, and failure recovery. MongoDB Atlas provides the functionality and reliability you need, at the click of a button.
Flexible: Only MongoDB Atlas combines the critical capabilities of relational databases with the innovations of NoSQL. Radically simplify development and operations by delivering a diverse range of capabilities in a single, managed database platform.
Secure: MongoDB Atlas provides multiple levels of security for your database. These include robust access control, network isolation using Amazon VPC, IP whitelists, encryption of data in flight using TLS/SSL, and optional encryption of the underlying filesystem.
Scalable: MongoDB Atlas grows with you, all at the click of a button. You can scale up across a range of instance sizes and scale out with automatic sharding, all with zero application downtime.
Highly Available: MongoDB Atlas is designed to offer exceptional uptime. Recovery from instance failures is transparent and fully automated. A minimum of three copies of your data are replicated across availability zones and continuously backed up.
High Performance: MongoDB Atlas provides high throughput and low latency for the most demanding workloads. Consistent, predictable performance eliminates the need for separate caching tiers, and delivers a far better price-performance ratio compared to traditional database software.
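As a minimal sketch of connecting to such a cluster (cluster host, database, and credentials are placeholders), an SRV-style connection string gives a Node.js application TLS-encrypted, replica-set-aware access:

const { MongoClient } = require("mongodb");

async function main() {
  // mongodb+srv URIs resolve the cluster members via DNS and enable TLS
  // by default, matching Atlas's encryption of data in flight.
  const uri = "mongodb+srv://appUser:<password>@cluster0.example.mongodb.net/bank";
  const client = new MongoClient(uri);
  await client.connect();
  console.log(await client.db("bank").collection("customers").countDocuments());
  await client.close();
}

main();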
These webinar slides are an introduction to Neo4j and Graph Databases. They discuss the primary use cases for Graph Databases and the properties of Neo4j which make those use cases possible. They also cover the high-level steps of modeling, importing, and querying your data using Cypher and touch on RDBMS to Graph.
Graphs in Retail: Know Your Customers and Make Your Recommendations Engine Learn (Neo4j)
At Neo4j we believe that “Graphs Are Everywhere”. In this session, we’ll be exploring graphs within the Retail industry. We’ll discuss a range of data that are commonly available within a retail organisation, both online and “brick and mortar”. We’ll illustrate some graphs which can be created by linking together different elements of that data and discuss the retail use cases those graphs can enable and transform.
We’ll specifically focus on use cases like Personalised Recommendations (with a live demo), Supply Chain Management, Logistics, and Customer 360. We'll also look at some relevant graph algorithms and talk about opportunities for integration with Artificial Intelligence/Machine Learning technologies, which can be used along with Neo4j to generate new value using retail data.
Walmart, Wobi, and others already deploy Neo4j for use cases like price comparison or real-time contextual and learning recommendation engines. Read about their use cases!
Data Catalog for Better Data Discovery and Governance (Denodo)
Watch full webinar here: https://buff.ly/2Vq9FR0
Data catalogs are in vogue, answering critical data governance questions like “Where all does my data reside?” “What other entities are associated with my data?” “What are the definitions of the data fields?” and “Who accesses the data?” Data catalogs maintain the necessary business metadata to answer these questions and many more. But that’s not enough. To be useful, data catalogs need to deliver these answers to business users right within the applications they use.
In this session, you will learn:
*How data catalogs enable enterprise-wide data governance regimes
*What key capability requirements should you expect in data catalogs
*How data virtualization combines dynamic data catalogs with delivery
This presentation is an attempt to demystify the practice of building reliable data processing pipelines. We go through the pieces needed to build a stable processing platform: data ingestion, processing engines, workflow management, schemas, and pipeline development processes. The presentation also includes component choice considerations and recommendations, as well as best practices and pitfalls to avoid, most of them learnt through expensive mistakes.
DataEd Webinar: Reference & Master Data Management - Unlocking Business Value (DATAVERSITY)
Data tends to pile up and can be rendered unusable or obsolete without careful maintenance processes. Reference and Master Data Management (MDM) has been a popular Data Management approach to effectively gain mastery over not just the data but the supporting architecture for processing it. This webinar presents MDM as a strategic approach to improving and formalizing practices around those data items that provide context for many organizational transactions—its master data. Too often, MDM has been implemented technology-first and has achieved the same very poor track record as other technology-first initiatives (only one-third succeed on time, within budget, and with the planned functionality). MDM success depends on a coordinated approach typically involving Data Governance and Data Quality activities.
Learning Objectives:
- Understand foundational reference and MDM concepts based on the Data Management Body of Knowledge (DMBOK)
- Understand why these are an important component of your Data Architecture
- Gain awareness of Reference and MDM Frameworks and building blocks
- Know what MDM guiding principles consist of and best practices
- Know how to utilize reference and MDM in support of business strategy
Webinar: How Financial Services Organizations Use MongoDB (MongoDB)
The finance industry is facing major strain on existing IT infrastructure, systems, and design practices:
New pressures and industry regulation have meant increased volume, consolidation & reconciliation, and variability of data
Mobile and other channels demand significantly more flexible programming and data design environments
The pressure to improve operational efficiency and contain costs is ever increasing
MongoDB is the alternative that allows you to efficiently create and consume data, rapidly and securely, no matter how it is structured across channels and products, and makes it easy to aggregate data from multiple systems, while lowering TCO and delivering applications faster.
In this session, we will present common MongoDB use cases, including but not limited to the following (a brief shell sketch follows the list):
Risk Analytics & Reporting
Tick Data Capture & Analysis
Product Catalogues
Cross-Asset Class Trade Stores
Reference Data Management
Private DBaaS
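As the sketch promised above, here is one hedged way the tick data capture and analysis case might look in the shell (the collection, fields, and symbol are invented for illustration):

// One document per tick; a compound index supports symbol + time-range scans.
db.ticks.createIndex({ symbol: 1, ts: 1 })

// All ticks for one symbol during a trading session, in time order.
db.ticks.find({
  symbol: "XYZ",
  ts: { $gte: ISODate("2014-06-02T09:30:00Z"), $lt: ISODate("2014-06-02T16:00:00Z") }
}).sort({ ts: 1 })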
How Financial Services Organizations Use MongoDB (MongoDB)
MongoDB is the alternative that allows you to efficiently create and consume data, rapidly and securely, no matter how it is structured across channels and products, and makes it easy to aggregate data from multiple systems, while lowering TCO and delivering applications faster.
Learn how financial services organizations are using MongoDB with this presentation.
Webinar: How to Drive Business Value in Financial Services with MongoDB (MongoDB)
Huge upheaval in the finance industry has led to a major strain on existing IT infrastructure and systems. New finance industry regulation has meant increased volume, velocity and variability of data. This coupled with cost pressures from the business has led these institutions to seek alternatives. Top tier institutions like MetLife have turned to MongoDB because of the enormous business value it enables.
In this session, hear how MongoDB enabled these successful real world examples:
Single View of a Customer - 3 months and $2M for a single view of a customer across 50 source systems
Reference Data Management - $40M in cost savings from migrating to MongoDB for reference data management
Private cloud - MongoDB as a PaaS across a tier 1 bank, enabling agility for operations, not just developers
The use cases are specific to financial services but the patterns of usage - agility, scale, global distribution - will be applicable across many industries.
Webinar: Making A Single View of the Customer Real with MongoDB (MongoDB)
Tier 1 banks, top insurance providers and other global financial services institutions have discovered that with the use of MongoDB, they are able to achieve a single view of the customer. This allows them not only to comply with KYC and other regulations, but also to engage customers efficiently, which helps reduce churn and increase wallet share while reducing costs. We will focus on how MongoDB's dynamic schema, real-time replication and auto-scaling make it possible to create a global, unified data hub aggregating disparate data sources, which can be made available to customers, customer service representatives (CSRs), and relationship managers (RMs).
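As a hedged sketch of what that dynamic schema allows (the field names are illustrative, not from the webinar), records of different shapes from different source systems can live in one customer collection and be queried together:

// From a policy administration system
db.customers.insertOne({ _id: 1, name: "A. Jones",
  policies: [{ type: "auto", premium: 1200 }] })

// From a claims system, with a different shape -- no schema migration needed
db.customers.insertOne({ _id: 2, name: "B. Lee", email: "b.lee@example.com",
  claims: [{ claimId: "C-17", status: "open" }] })

// One query surface over both shapes
db.customers.find({ $or: [{ "policies.type": "auto" }, { "claims.status": "open" }] })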
Webinar: How to Drive Business Value in Financial Services with MongoDB (MongoDB)
Huge upheaval in the finance industry has led to a major strain on existing IT infrastructure and systems. New finance industry regulation has meant increased volume, velocity and variability of data, so-called Big Data. This coupled with cost pressures from the business has led these institutions to seek alternatives. Top tier institutions like MetLife have turned to MongoDB because of the enormous business value it enables.
In this session, learn where and how you should use MongoDB to get the maximum value including specific case studies such as saving $40M in one project.
The use cases are specific to financial services but the patterns of usage - agility, scale, global distribution - will be applicable across many industries.
Webinar: How Financial Firms Create a Single Customer View with MongoDB (MongoDB)
Learn why a tier 1 bank, top 5 insurance provider and other global financial services companies are flocking to MongoDB. This webinar focuses on how firms use MongoDB to generate a single customer view not only to comply with KYC and other regulations, but also to engage customers efficiently, which helps reduce churn and increase wallet share while still reducing costs. We will focus on how MongoDB's dynamic schema, real-time replication and auto-scaling make it possible to create a global, unified data hub aggregating disparate data sources, which can be made available to customers, customer service representatives (CSRs), and relationship managers (RMs).
Webinar: An Enterprise Architect’s View of MongoDB (MongoDB)
In the world of big data, legacy modernization, siloed organizations, empowered customers, and mobile devices, making informed choices about your enterprise infrastructure has become more important than ever. The alternatives are abundant, and the successful Enterprise Architect must constantly discern which new technology is just a shiny object and which will add true business value.
MongoDB is more than just a great application database for developers; it gives Enterprise Architects new capabilities to solve previously difficult architectural requirements much more easily. Take, for example, the challenge of many siloed systems at MetLife: with MongoDB, the MetLife team was able to provide a single view into those 70 systems in only 3 months.
In this webinar, we will:
Explore real life challenges enterprises face with case studies of their solutions
Consider how best to introduce MongoDB in the enterprise
Give an overview of how to optimize the use of MongoDB
Webinar: Achieving Customer Centricity and High Margins in Financial Services... (MongoDB)
It is imperative that Financial Services firms align the organization around providing maximum value to customers across all channels and products with the agility to capitalize on new opportunities. They must do this at the same time as cutting costs, improving operational efficiency, and complying with current and future regulations. This effort is commonly referred to as Industrialization, or streamlining people, process, and technology for maximum customer value, service, and efficiency.
MongoDB can help you in this initiative by allowing you to centralize data management no matter how the data is structured across channels and products, and by making it easy to aggregate data from multiple systems, while lowering TCO and delivering applications faster. MetLife has publicly announced that it used MongoDB to enable a single view of the customer in 3 months across 70+ existing systems. We will explore case studies demonstrating these capabilities to help you industrialize your firm.
Key takeaways:
Unique capabilities, brought to you by MongoDB
Concrete use cases that help industrialization
Implementation case studies, to pave the way
Webinar: Introducing the MongoDB Connector for BI 2.0 with Tableau (MongoDB)
Pairing your real-time operational data stored in a modern database like MongoDB with first-class business intelligence platforms like Tableau enables new insights to be discovered faster than ever before.
Many leading organizations already use MongoDB in conjunction with Tableau including a top American investment bank and the world’s largest airline. With the Connector for BI 2.0, it’s never been easier to streamline the connection process between these two systems.
In this webinar, we will create a live connection from Tableau Desktop to a MongoDB cluster using the Connector for BI. Once we have Tableau Desktop and MongoDB connected, we will demonstrate the visual power of Tableau to explore the agile data storage of MongoDB.
You’ll walk away knowing:
- How to configure MongoDB with Tableau using the updated connector
- Best practices for working with documents in a BI environment
- How leading companies are using big data visualization strategies to transform their businesses
What started as a way for web giants to solve problems of serious scale has become the default way all enterprises manage Big Data. Despite having a catchy, if inaccurate, title, there really isn't a coherent "NoSQL" category, nor is there a simple future for the range of NoSQL databases. In this presentation, Matt Asay will outline the reasons for NoSQL's existence and persistence, explain how the different NoSQL technologies help enterprises get control of Big Data, and identify the trends that point to a bright future for post-relational databases.
Accelerating Data-Driven Enterprise Transformation in Banking, Financial Serv... (Denodo)
Watch full webinar here: https://bit.ly/3c6v8K7
Banking, Financial Services and Insurance (BFSI) organizations are globally accelerating their digital journey, making rapid strides with their digitization efforts, and adding key capabilities to adapt and innovate in the new normal.
Many companies find digital transformation challenging as they rely on established systems that are often not only poorly integrated but also highly resistant to modernization without downtime. Hear how the BFSI industry is leveraging data virtualization that facilitates digital transformation via a modern data integration/data delivery approach to gain greater agility, flexibility, and efficiency.
In this session from Denodo, you will learn:
- Industry key trends and challenges driving the digital transformation mandate and platform modernization initiatives
- Key concepts of Data Virtualization, and how it can enable BFSI customers to develop critical capabilities for real-time / near real-time data integration
- Success stories from organizations that already use data virtualization to differentiate themselves from the competition.
Similar to How Insurance Companies Use MongoDB
MongoDB SoCal 2020: Migrate Anything* to MongoDB Atlas (MongoDB)
During this talk we'll navigate through a customer's journey as they migrate an existing MongoDB deployment to MongoDB Atlas. While the migration itself can be as simple as a few clicks, the prep/post effort requires due diligence to ensure a smooth transfer. We'll cover these steps in detail and provide best practices. In addition, we’ll provide an overview of what to consider when migrating other cloud data stores, traditional databases and MongoDB imitations to MongoDB Atlas.
MongoDB SoCal 2020: Go on a Data Safari with MongoDB Charts! (MongoDB)
These days, everyone is expected to be a data analyst. But with so much data available, how can you make sense of it and be sure you're making the best decisions? One great approach is to use data visualizations. In this session, we take a complex dataset and show how the breadth of capabilities in MongoDB Charts can help you turn bits and bytes into insights.
MongoDB SoCal 2020: Using MongoDB Services in Kubernetes: Any Platform, Devel... (MongoDB)
MongoDB Kubernetes operator and MongoDB Open Service Broker are ready for production operations. Learn about how MongoDB can be used with the most popular container orchestration platform, Kubernetes, and bring self-service, persistent storage to your containerized applications. A demo will show you how easy it is to enable MongoDB clusters as an External Service using the Open Service Broker API for MongoDB.
MongoDB SoCal 2020: A Complete Methodology of Data Modeling for MongoDB (MongoDB)
Are you new to schema design for MongoDB, or are you looking for a more complete or agile process than what you are following currently? In this talk, we will guide you through the phases of a flexible methodology that you can apply to projects ranging from small to large with very demanding requirements.
MongoDB SoCal 2020: From Pharmacist to Analyst: Leveraging MongoDB for Real-T... (MongoDB)
Humana, like many companies, is tackling the challenge of creating real-time insights from data that is diverse and rapidly changing. This is our journey of how we used MongoDB to combine traditional batch approaches with streaming technologies to provide continuous alerting capabilities from real-time data streams.
MongoDB SoCal 2020: Best Practices for Working with IoT and Time-series Data (MongoDB)
Time series data is increasingly at the heart of modern applications - think IoT, stock trading, clickstreams, social media, and more. With the move from batch to real-time systems, the efficient capture and analysis of time series data can enable organizations to better detect and respond to events ahead of their competitors, or to improve operational efficiency to reduce cost and risk. Working with time series data is often different from working with regular application data, and there are best practices you should observe.
This talk covers:
Common components of an IoT solution
The challenges involved with managing time-series data in IoT applications
Different schema designs, and how these affect memory and disk utilization – two critical factors in application performance (a bucketing sketch follows this list).
How to query, analyze and present IoT time-series data using MongoDB Compass and MongoDB Charts
At the end of the session, you will have a better understanding of key best practices in managing IoT time-series data with MongoDB.
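One hedged illustration of the schema-design point above (the device, fields, and one-hour bucket size are assumptions, not the talk's prescription) is the bucketing pattern, which groups many readings into one document to reduce index size and per-document overhead:

// Append a sample to the current hour's bucket for this device,
// creating the bucket on first write.
db.readings.updateOne(
  { deviceId: "dev-42", hour: ISODate("2020-01-15T10:00:00Z") },
  {
    $push: { samples: { ts: ISODate("2020-01-15T10:07:31Z"), temp: 21.4 } },
    $inc: { count: 1 }
  },
  { upsert: true }
)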
MongoDB SoCal 2020: MongoDB Atlas Jumpstart (MongoDB)
Join this talk and test session with a MongoDB Developer Advocate where you'll go over the setup, configuration, and deployment of an Atlas environment. Create a service that you can take back in a production-ready state and prepare to unleash your inner genius.
MongoDB .local San Francisco 2020: Powering the new age data demands [Infosys] (MongoDB)
Our clients have unique use cases and data patterns that mandate the choice of a particular strategy. To implement these strategies, it is mandatory that we unlearn a lot of relational concepts while designing and rapidly developing efficient applications on NoSQL. In this session, we will talk about some of our client use cases, the strategies we have adopted, and the features of MongoDB that assisted in implementing these strategies.
MongoDB .local San Francisco 2020: Using Client Side Encryption in MongoDB 4.2 (MongoDB)
Encryption is not a new concept to MongoDB. Encryption may occur in-transit (with TLS) and at-rest (with the encrypted storage engine). But MongoDB 4.2 introduces support for Client Side Encryption, ensuring the most sensitive data is encrypted before ever leaving the client application. Even full access to your MongoDB servers is not enough to decrypt this data. And better yet, Client Side Encryption can be enabled at the "flick of a switch".
This session covers using Client Side Encryption in your applications. This includes the necessary setup, how to encrypt data without sacrificing queryability, and what trade-offs to expect.
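As a hedged Node.js sketch of that flow (the all-zero local master key is for demonstration only; a real deployment would use a cloud KMS, and all names are placeholders):

const { MongoClient } = require("mongodb");
const { ClientEncryption } = require("mongodb-client-encryption");

async function main() {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();

  const encryption = new ClientEncryption(client, {
    keyVaultNamespace: "encryption.__keyVault",
    // Demo-only local master key (96 zero bytes); use a cloud KMS in production.
    kmsProviders: { local: { key: Buffer.alloc(96) } },
  });

  const keyId = await encryption.createDataKey("local");
  // Deterministic encryption keeps the field queryable by equality.
  const ssn = await encryption.encrypt("123-45-6789", {
    keyId,
    algorithm: "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic",
  });

  await client.db("bank").collection("customers").insertOne({ name: "A. Jones", ssn });
  await client.close();
}

main();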
MongoDB .local San Francisco 2020: Using MongoDB Services in Kubernetes: any ... (MongoDB)
MongoDB Kubernetes operator is ready for prime time. Learn how MongoDB can be used with the most popular orchestration platform, Kubernetes, and bring self-service, persistent storage to your containerized applications.
MongoDB .local San Francisco 2020: Go on a Data Safari with MongoDB Charts! (MongoDB)
These days, everyone is expected to be a data analyst. But with so much data available, how can you make sense of it and be sure you're making the best decisions? One great approach is to use data visualizations. In this session, we take a complex dataset and show how the breadth of capabilities in MongoDB Charts can help you turn bits and bytes into insights.
MongoDB .local San Francisco 2020: From SQL to NoSQL -- Changing Your Mindset (MongoDB)
When you need to model data, is your first instinct to start breaking it down into rows and columns? Mine used to be too. When you want to develop apps in a modern, agile way, NoSQL databases can be the best option. Come to this talk to learn how to take advantage of all that NoSQL databases have to offer and discover the benefits of changing your mindset from the legacy, tabular way of modeling data. We’ll compare and contrast the terms and concepts in SQL databases and MongoDB, explain the benefits of using MongoDB compared to SQL databases, and walk through data modeling basics so you feel confident as you begin using MongoDB.
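As a hedged sketch of that mindset shift (table and field names are invented), the rows you would join across customers and phones tables in SQL become one embedded document in MongoDB:

// SQL: SELECT c.name, p.number FROM customers c
//      JOIN phones p ON p.customer_id = c.id WHERE c.id = 1;
// MongoDB: the relationship is embedded, so a single read does the work.
db.customers.findOne(
  { _id: 1 },
  { name: 1, "phones.number": 1 }
)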
MongoDB .local San Francisco 2020: MongoDB Atlas Jumpstart (MongoDB)
Join this talk and test session with a MongoDB Developer Advocate where you'll go over the setup, configuration, and deployment of an Atlas environment. Create a service that you can take back in a production-ready state and prepare to unleash your inner genius.
MongoDB .local San Francisco 2020: Tips and Tricks++ for Querying and Indexin... (MongoDB)
Query performance should be the unsung hero of an application, but without proper configuration it can become a constant headache. When used properly, MongoDB provides extremely powerful querying capabilities. In this session, we'll discuss concepts like equality, sort, and range predicates, managing query predicates versus sequential predicates, and best practices for building multikey indexes.
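A hedged sketch of those ideas (the collection and fields are invented): order the compound index as equality, then sort, then range, and remember that an index on an array field is multikey:

// Equality (status), then sort (orderDate), then range (amount).
db.orders.createIndex({ status: 1, orderDate: 1, amount: 1 })
db.orders.find({ status: "A", amount: { $gt: 100 } }).sort({ orderDate: 1 })

// An index on an array field ("tags") is multikey: one entry per element.
db.orders.createIndex({ tags: 1 })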
MongoDB .local San Francisco 2020: Aggregation Pipeline Power++ (MongoDB)
The aggregation pipeline has been able to power your analysis of data since version 2.2. In 4.2 we added more power, and now you can use it for more powerful queries, updates, and outputting your data to existing collections. Come hear how you can do everything with the pipeline, including single view, ETL, data roll-ups, and materialized views.
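A hedged sketch of that 4.2-era output stage (collection and field names are invented): summarize raw events and upsert the results into an existing collection, giving an on-demand materialized view:

db.sales.aggregate([
  { $group: {
      _id: { $dateToString: { format: "%Y-%m-%d", date: "$ts" } },
      total: { $sum: "$amount" }
  } },
  // $merge (new in 4.2) writes into an existing collection instead of
  // replacing it the way $out does.
  { $merge: { into: "dailySales", whenMatched: "replace", whenNotMatched: "insert" } }
])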
MongoDB .local San Francisco 2020: A Complete Methodology of Data Modeling fo... (MongoDB)
Are you new to schema design for MongoDB, or are you looking for a more complete or agile process than what you are following currently? In this talk, we will guide you through the phases of a flexible methodology that you can apply to projects ranging from small to large with very demanding requirements.
MongoDB .local San Francisco 2020: MongoDB Atlas Data Lake Technical Deep Dive (MongoDB)
MongoDB Atlas Data Lake is a new service offered by MongoDB Atlas. Many organizations store long term, archival data in cost-effective storage like S3, GCP, and Azure Blobs. However, many of them do not have robust systems or tools to effectively utilize large amounts of data to inform decision making. MongoDB Atlas Data Lake is a service allowing organizations to analyze their long-term data to discover a wealth of information about their business.
This session will take a deep dive into the features that are currently available in MongoDB Atlas Data Lake and how they are implemented. In addition, we'll discuss future plans and opportunities and offer ample Q&A time with the engineers on the project.
MongoDB .local San Francisco 2020: Developing Alexa Skills with MongoDB & Golang (MongoDB)
Virtual assistants are becoming the new norm when it comes to daily life, with Amazon’s Alexa being the leader in the space. As a developer, not only do you need to make web and mobile compliant applications, but you need to be able to support virtual assistants like Alexa. However, the process isn’t quite the same between the platforms.
How do you handle requests? Where do you store your data and work with it to create meaningful responses with little delay? How much of your code needs to change between platforms?
In this session we’ll see how to design and develop applications known as Skills for Amazon Alexa powered devices using the Go programming language and MongoDB.
MongoDB .local Paris 2020: Realm: the secret ingredient for better app... (MongoDB)
...to Core Data, appreciated by hundreds of thousands of developers. Learn what makes Realm special and how it can be used to build better apps faster.
MongoDB .local Paris 2020: Upply @ MongoDB: When Machine Learning... (MongoDB)
It has never been easier to order online and get delivered in under 48 hours, very often for free. This ease of use hides a complex market worth more than $8 trillion.
Data is well known in the supply chain world (routes, information about goods, customs, ...), but the value of this operational data remains largely untapped. By combining business expertise and data science, Upply is redefining the fundamentals of the supply chain, helping every player overcome the market's volatility and inefficiency.
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps means. We closed with a lovely workshop in which the participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Key Trends Shaping the Future of Infrastructure.pdf (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow, on 29 May 2024.
Cheryl Hung, ochery.com, Sr Director, Infrastructure Ecosystem, Arm.
The talk covers the key trends across hardware, cloud, and open source, exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
1. How Insurance Companies Use MongoDB
Buzz Moschetti
Financial Services Enterprise Architect, MongoDB
buzz.moschetti@mongodb.com
#MongoDB
2. 2
MongoDB
The leading NoSQL database
Document Data Model
Open Source
General Purpose
{
  name: "John Smith",
  pfxs: ["Dr.", "Mr."],
  address: "10 3rd St.",
  phone: {
    home: 1234567890,
    mobile: 1234568138
  }
}
3. 3
MongoDB Company Overview
400+ employees, 1100+ customers
Over $231 million in funding (more than other NoSQL vendors combined)
Offices in NY & Palo Alto and across EMEA and APAC
5. 5
Leading NoSQL Database
Indeed.com Trends – Top Job Trends:
1. HTML 5
2. MongoDB
3. iOS
4. Android
5. Mobile Apps
6. Puppet
7. Hadoop
8. jQuery
9. PaaS
10. Social Media
[Chart residue: MongoDB also tops LinkedIn Job Skills, Google Search, and the Jaspersoft Big Data Index of direct real-time downloads]
9. 9
Relational: ALL Data is Column/Row

Customer ID | First Name | Last Name | City
0 | John | Doe | New York
1 | Mark | Smith | San Francisco
2 | Jay | Black | Newark
3 | Meagan | White | London
4 | Edward | Daniels | Boston

Phone Number | Type | DoNotCall | Customer ID
1-212-555-1212 | home | T | 0
1-212-555-1213 | home | T | 0
1-212-555-1214 | cell | F | 0
1-212-777-1212 | home | T | 1
1-212-777-1213 | cell | (null) | 1
1-212-888-1212 | home | F | 2
10. 10
mongoDB: Model Your Data The Way
it is Naturally Used
Relational MongoDB
{ customer_id : 1,
first_name : "Mark",
last_name : "Smith",
city : "San Francisco",
phones: [ {
number : “1-212-777-1212”,
dnc : true,
type : “home”
},
{
number : “1-212-777-1213”,
type : “cell”
}]
}
Customer
ID
First Name Last Name City
0 John Doe New York
1 Mark Smith San Francisco
2 Jay Black Newark
3 Meagan White London
4 Edward Daniels Boston
Phone Number Type DNC
Customer
ID
1-212-555-1212 home T 0
1-212-555-1213 home T 0
1-212-555-1214 cell F 0
1-212-777-1212 home T 1
1-212-777-1213 cell (null) 1
1-212-888-1212 home F 2
11. 11
No SQL But Still Flexible Querying

Rich Queries
• Find everybody who opened a special account last month in NY between $100 and $1000, OR last year more than $500
Geospatial
• Find all customers that live within 10 miles of NYC
Text Search
• Find all tweets that mention the bank within the last 2 days
Aggregation
• What is the average P&L of the trading desks grouped by a set of date ranges
Map Reduce
• Calculate total amount settled position by symbol by settlement venue
(A few of these query types are sketched in the shell example below.)
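To make these concrete, here is a minimal mongo shell sketch of a few of these query types. The collection names (accounts, customers, tweets, trades) and field names are hypothetical illustrations, not taken from the deck, and the geospatial and text queries assume a 2dsphere index and a text index respectively:

// Rich query: special accounts opened in a date window in NY, balance $100-$1000
db.accounts.find({
  type: "special",
  state: "NY",
  opened: { $gte: ISODate("2014-05-01"), $lt: ISODate("2014-06-01") },
  balance: { $gte: 100, $lte: 1000 }
})

// Geospatial: customers within 10 miles (~16093 meters) of NYC
// (assumes db.customers.createIndex({ loc: "2dsphere" }))
db.customers.find({
  loc: { $near: { $geometry: { type: "Point", coordinates: [ -74.0, 40.7 ] },
                  $maxDistance: 16093 } }
})

// Text search: tweets mentioning the bank within the last 2 days
// (assumes db.tweets.createIndex({ body: "text" }))
db.tweets.find({
  $text: { $search: "bank" },
  created: { $gte: new Date(Date.now() - 2 * 24 * 60 * 60 * 1000) }
})

// Aggregation: average P&L per trading desk
db.trades.aggregate([ { $group: { _id: "$desk", avgPnl: { $avg: "$pnl" } } } ])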
12. 12
Insurance – Common Uses

Functional Areas | Use Cases to Consider
Customer Engagement | Single View of a Customer; Customer Experience Management; Loyalty/Rewards Applications; Agile Next-generation Digital Platform
Marketing | Multi-channel Customer Activity Capture; Real-time Cross-channel Next Best Offer
Agent Desktop | Responsive Customer Reporting
Risk Analysis & Reporting | Catastrophe Risk Modeling; Liquidity Risk Analysis
Regulatory Compliance | Online Long-term Audit Trail
Reference Data Management | [Global] Reference Data Distribution Hub; Policy Catalog
Fraud Detection | Aggregate Activity Repository
13. 13
Data Consolidation
Challenge: Aggregation of disparate data is difficult

[Diagram: source systems (Cards as Data Source 1, Loans as Data Source 2, Deposits as Data Source n, ...) feed a Data Warehouse and several Datamarts through batch jobs, with reporting hanging off the warehouse]

Issues
• Yesterday's data
• Details lost
• Inflexible schema
• Slow performance

Impact
• What happened today?
• Worse customer satisfaction
• Missed opportunities
• Lost revenue
14. 14
Data Consolidation
Solution: Using rich, dynamic schema and easy scaling

[Diagram: the same sources (Cards as Data Source 1, Loans as Data Source 2, Deposits as Data Source n, ...) feed an Operational Data Hub in real time or batch; the hub serves trading applications, risk applications, and operational reporting, and feeds the Data Warehouse for strategic reporting]

Benefits
• Real-time
• Complete details
• Agile
• Higher customer retention
• Increase wallet share
• Proactive exception handling
15. 15
Data Consolidation
Watch Out For The Arrow!

Traditional Approach
[Diagram: Data Source 1 → Flat Data Extractor Program → potentially many CSV files → Flat Data Loader Program → Data Mart or Warehouse → App]
• Entities in source RDBMS not extracted as entities
• CSV is brittle with no self-description
• Both Loader and RDBMS must update schema when source changes
• Application must reassemble Entities

The mongoDB Approach
[Diagram: Data Source 1 → JSON Extractor Program → fewer JSON files → mongoDB → App]
• Entities in RDBMS extracted as entities
• JSON is flexible to change and self-descriptive
• mongoDB data hub does not change when source changes
• Application can consume Entities directly
16. 16
Data Consolidation
Insurance leader generates coveted 360-degree view of customers in 90 days – “The Wall”

Problem
• No single view of customer
• 145 yrs of policy data, 70+ systems, 15+ apps
• 2 years, $25M in failing to aggregate in RDBMS
• Poor customer experience

Why MongoDB
• Agility – prototype in 9 days
• Dynamic schema & rich querying – combine disparate data into one data store
• Hot tech to attract top talent

Results
• Production in 90 days with 70 feeders
• Unified customer view available to all channels
• Increased call center productivity
• Better customer experience, reduced churn, more upsell
• Dozens more projects on same data platform
17. 17
Risk Modeling & Management
Document and Analytics Platform to capture more than 100 billion documents per year: RMS(one)

Problem
• Individual customers have very different schemas of property, policy, and business information
• Could not rapidly adapt to changes in customer information
• Very expensive to scale existing system

Why MongoDB
• Flexible data model can hold document content, rich shape content, and rich metadata
• Affordable, predictable scaling while maintaining performance and usability

Results
• Single-view of risk models, exposures, and analytics
• New efficiencies in underwriting portfolio management
• First-ever platform to offer these capabilities to the market
• Platform will scale to support TRILLIONS of documents over hundreds of TB of data
18. 18
Claims Processing Distribution
Challenge: Claim data difficult to change and distribute

[Diagram: a Golden Copy master replicated to many downstream systems via repeated batch jobs]

Common issues
• Hard to change schema of master data
• Data copied everywhere and gets out of sync

Impact
• Process breaks from out-of-sync data
• Business doesn't have data it needs
• Many copies create more management overhead
19. 19
Claims Processing Distribution
Solution: Persistent dynamic database replicated globally

[Diagram: one globally replicated database, with every region synchronized in real time]

Solution:
• Load into primary with any schema
• Replicate to and read from secondaries (see the read-preference sketch below)

Benefits
• Easy & fast change at speed of business
• Easy scale-out for one-stop shop for data
• Low TCO
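A hedged sketch of the load-into-primary, read-from-secondaries pattern in the mongo shell; the claims collection, its fields, and the hosts in the connection string are hypothetical illustrations. Writes always go to the replica set primary; replication fans the data out, and a regional application can set a read preference so its queries are served by a nearby member:

// Write claim data to the primary (the default write target)
db.claims.insert({ claimId: "C-1001", status: "OPEN", region: "EMEA" })

// A regional app can connect with a read preference, e.g.:
//   mongodb://ny1,ldn1,hkg1/claims?readPreference=nearest
// or set it in the shell so reads are served by the closest member:
db.getMongo().setReadPref("nearest")
db.claims.find({ region: "EMEA" })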
20. 20
Claims Processing Distribution
Case Study: Global Reinsurance Company
Distribute claims information globally in real-time for fast local accessing and querying

Problem
• Business workflow slowed by ETL delays
• Had to manage over 20 distributed systems with same data
• Rigid schema prevented detailed analytics on event-specific data

Why MongoDB
• Dynamic schema: easy to load initially & over time
• Auto-replication: data distributed in real-time, read locally
• Both cache and database: cache always up-to-date
• Simple data modeling & analysis: easy changes and understanding

Results
• Data in sync globally and read locally
• Capacity to move to one global shared data service
Hello all! This is Buzz Moschetti. Welcome to today’s webinar entitled “How Insurance Companies Use MongoDB”
I am an Enterprise Architect and today I’m going to talk about some popular use cases involving mongoDB that we’ve seen emerge in Financial Services – that being wholesale & retail banking and insurance -- and the reasons that motivated the use of it.
First, some quick logistics:
The presentation audio & slides will be recorded and made available to you in about 24 hours.
We have an hour set up but I’ll use about 40 minutes of that for the presentation with some time for Q & A.
You can of course use the webex Q&A box to ask those questions at any time but I will hold off answering them until the end.
If you have technical issues, please send a webex chat message to the participant ID’d as mongoDB webinar team; otherwise keep your Qs focused on the content.
Acknowledging this may be new for some percentage of the audience, I’ll spend a few minutes doing an overview of mongoDB.
What is it?
It is a general purpose document store database.
General purpose means CRUD (create, read, update, delete) works similarly to traditional databases, especially RDBMS. Content that is saved is immediately readable, indexed, and available to query through a rich query language. This is a major differentiator in the NoSQL space.
By document we mean a “rich shape” model: not a Word doc or a PDF. Instead of forcing data into a normalized set of rectangles (a.k.a. tables), mongoDB can store shapes that contain lists and subdocuments: we see some hint of that here with the pfxs and phone fields, and we'll explore it in slightly more detail later on.
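As a minimal mongo shell sketch of that round trip (the customers collection name is my assumption, and the document mirrors the shape on the slide): a rich shape goes in with no schema declaration, and it is immediately queryable and indexable, including on nested fields:

// Save a document containing a list and a subdocument; no DDL required
db.customers.insert({
  name: "John Smith",
  pfxs: [ "Dr.", "Mr." ],
  address: "10 3rd St.",
  phone: { home: 1234567890, mobile: 1234568138 }
})

// Immediately readable, including dotted paths into the subdocument
db.customers.find({ "phone.home": 1234567890 })

// Secondary indexes work on nested fields too
db.customers.createIndex({ "phone.home": 1 })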
We are also OpenSource: there is a vibrant community that contributes to and amplifies the product and solutions around it. As a company, we provide value beyond the basic features including enterprise-ready features such as commercial grade builds, monitoring & management services, authentication security, support, training, and launch services.
Here’s a little bit about us.
HQ in NY, we are 375 employees in eng, presales, consulting, documentation, and community support – and yes, sales too.
Actively supporting the mongoDB ecosystem are the people involved in the 7.2 million downloads of the product to date.
And here’s the logo page you’ve been waiting to see.
The 1000+ paying customers include most of the Fortune 500 and the top retail and wholesale banks in the country, and as you know banks are shy about their logos.
These customers span the spectrum of complexity and performance from small targeted solutions platforms to petabyte installations like CERN and the Large Hadron Collider and many billion document collections with high read/write workloads like craigslist and foursquare.
And why do they use us? Well, for a number of reasons. Our document model and the technology around it is very good – but it’s more than the technology.
Not important to point out the names of our direct competitors here but in comparison we’re clearly the most popular and commercially vibrant NoSQL database, and the talent pool is growing.
The overall community is large enough that, for example, stackoverflow.com has a very active and useful forum for mongoDB and many questions on edge use cases and integration and best practices can be found there.
And this is reflected in….. (turn page).
Here’s another reason for the popularity and strength of the platform: we have 400 partners and are growing by about 10 monthly – much more than others in the NoSQL space.
We have strategic partnerships with progressive companies like Pentaho in BI and AppDynamics for system health and performance monitoring.
And we have certification programs for systems integrators too so you can outsource with confidence.
IBM: Standardizing on BSON, MongoDB query language, and MongoDB wire protocol for DB2 integration, and that sends a very strong signal about our position in this space. Just google for IBM DB2 JSON and you’ll see.
Historically, mongoDB is very cloud friendly. Although financial services tend not to use public clouds as much due to personal info and data secrecy issues, the tools and techniques developed in the public clouds for provisioning, monitoring, multitenancy, etc. can be reproduced in private clouds inside your firewall, so financial services can get a leg up on that, so to speak.
Let’s examine where the technology is positioned.
Here are a few of the most popular types of persistence models in use today.
RDBMS, being the most mature, are deep in functionality – but they are rooted in design principles almost 40 years old. And that comes at the expense of rich interaction with today’s programming languages, design requirements, and infrastructure implementation choices.
Key-value stores, at the other end of the spectrum, act essentially like HashMaps (for those Java programmers in the audience) but are not really general purpose databases.
MongoDB trades some features of a relational database (joins, complex transactions) to enable greater scalability, flexibility, and performance for purpose. By that we mean performance for the operations as executed at the data access layer, not necessarily TPS at the database level.
To compare RDBMS and document modeling, let’s take a simple example of phone numbers for a particular customer.
Even for simple structures – a list of phone number within a customer – the data is split across 2 tables.
What are the consequences?
Managing relationship between customer and phones is non-trivial
This case is the friendly one because the same ID from the customer table is used for phones; that is not always the case, and separate foreign keys must be created and assigned to both tables.
Of course, be mindful of customers WITHOUT phones because this changes common JOIN relationships!
This approach clearly gets more complicated the more “subentities” exist for a particular design – especially those involving lists of plain scalar values, which end up as column sprawl: phone_0, phone_1, value_0, value_1, etc.
In mongoDB, you model your data the way it is naturally associated
Lists of things remain lists of things
No extra steps with foreign keys
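Here is a hedged sketch of what "lists remain lists" means in practice, using the customer shape from the slide; the $push update and the extra work phone are my illustrations, not from the deck:

// The phones live inside the customer document; no second table, no foreign key
db.customers.insert({
  customer_id: 1,
  first_name: "Mark",
  last_name: "Smith",
  city: "San Francisco",
  phones: [
    { number: "1-212-777-1212", dnc: true, type: "home" },
    { number: "1-212-777-1213", type: "cell" }
  ]
})

// Adding a phone is one update on one document, not an insert into another table
db.customers.update(
  { customer_id: 1 },
  { $push: { phones: { number: "1-212-777-1214", type: "work" } } }
)

// Query straight into the list
db.customers.find({ "phones.type": "cell" })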
Just because mongoDB is NoSQL does not mean it is without application-friendly features that are required for a general purpose database
Rich Queries and Aggregation are “expected” functions of a database and mongoDB has powerful offerings for both, complete with primary and secondary index support.
Text, Geo, and MapReduce are extended features of the platform.
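As one hedged illustration of that index support (collection and field names are hypothetical, echoing the settlement example from the query slide): a $match stage at the head of an aggregation pipeline can be satisfied by a secondary index before $group does the heavy lifting:

// Secondary index on the settlement date
db.trades.createIndex({ settleDate: 1 })

// Total settled position by symbol by settlement venue, for recent trades;
// the leading $match can use the index above
db.trades.aggregate([
  { $match: { settleDate: { $gte: ISODate("2014-01-01") } } },
  { $group: { _id: { symbol: "$symbol", venue: "$venue" },
              total: { $sum: "$qty" } } }
])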
NOW – let’s move on to use cases within financial services.
Insurance is similar to Retail Banking – a large direct customer base, a 360-degree view of the customer, and marketing/distribution channel optimization capabilities.
Many of the same themes: data consolidation, historical preservation of activity, and cross-asset flexible risk modeling.
In particular, the client-view integration of P&C, life, annuities, and other offerings across what was traditionally very separate aspects of the business (and therefore very separate systems) has had profound effects on the technology, customer relationship management, and targeted business growth.
It’s all about The Arrow. The arrow is the single most misleading thing in architecture diagrams today.
The “arrow” represents MUCH more than just “data in A going to B.”
In the traditional approach, almost from the get-go, data is extracted from the RDBMS into CSV or via ETL and immediately begins to lose fidelity. If you think back to the Customer and Phones example before, instead of extracting a complete customer entity, we will likely get two sets of files or worse – a lossy blend that perhaps only provides the first phone number!
After the extract, the loader and the target RDBMS have to have the right schema in place, and good luck to an application trying to re-engineer the relationships between these things, especially as the data shapes change. We all know what happens in CSV-based environments when data changes – a NEW feed gets made.
In the mongoDB approach, the feeder system can extract entities in as much fidelity and richness of shape as appropriate. Because JSON is self-descriptive, new fields and indeed, complete new substructures can be added without changing the feed environment OR THE TARGET mongoDB HUB!
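A minimal sketch of why the hub does not have to change when the feed does; the hub collection and the riskProfile substructure are hypothetical illustrations. The feeder simply emits a richer document and mongoDB stores it alongside the old ones, with no DDL and no loader change:

// Day 1: the feeder extracts a customer entity as one JSON document
db.hub.insert({ customer_id: 7, name: "Jane Roe",
                phones: [ { number: "1-212-555-0100", type: "home" } ] })

// Later: the source system grows a new substructure; the feed just includes it.
// No schema migration on the hub, and existing documents are untouched.
db.hub.insert({ customer_id: 8, name: "John Roe",
                phones: [ { number: "1-212-555-0101", type: "cell" } ],
                riskProfile: { score: 7, lastReviewed: ISODate("2014-06-01") } })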
One of our prouder moments: the first feeder systems were plumbed in ONE MONTH.
Risk! Compared to a distributed cache – $ and a fixed schema.