The document discusses using MongoDB to modernize mainframe systems by reducing costs and increasing flexibility. It describes 5 phases of mainframe modernization with MongoDB, from initially offloading reads to using MongoDB as the primary system of record. Case studies are presented where MongoDB helped customers increase developer productivity by 5-10x, lower mainframe costs by 80%, and transform IT strategies by simplifying technology stacks.
During this presentation, Infusion and MongoDB shared their mainframe optimization experiences and best practices. These have been gained from working with a variety of organizations, including a case study from one of the world’s largest banks. MongoDB and Infusion bring a tested approach that provides a new way of modernizing mainframe applications, while keeping pace with the demand for new digital services.
4. Let our team help you on your journey to efficiently leverage the capabilities of MongoDB, the data platform that allows innovators to unleash the power of software and data for giant ideas.
The largest Financial Services, Communications and Government organizations are working with MongoDB to modernize their mainframes, reduce cost and increase resilience.
Being successful with MongoDB for Mainframes
5-10x Developer Productivity
We help our customers to increase overall output, e.g. in terms of engineering productivity.
80% Mainframe Cost Reduction
We help our customers to dramatically lower their total cost of ownership for data storage and analytics by up to 80%.
6. Challenges of Mainframes in a Modern World
There are three areas of Data Management: adaptability, cost and risk. In the legacy world these have been disconnected, with many technologies attempting to achieve an integrated landscape. The challenges include:
• Unpredictable Loads
• Planned/Unplanned Downtime
• Expensive Ecosystem
• Change Management
• Access to Skills
• Capacity Management
• Business Process Risk
• Operational Complexity
• Customer Experience
7. 5 phases of Mainframe Modernization
MongoDB will help you simultaneously offload critical services from the mainframe, save millions in cost and increase agility for new use cases. The phases grow in scope and business benefits:
1. Offloading Reads: Operational Data Layer (ODL). Records are copied via a CDC/delta load mechanism from the mainframe into MongoDB, which serves as an Operational Data Layer (ODL), e.g. for frequent reads.
2. Offloading Reads & Writes: "Y-Loading". Writes are performed concurrently to the mainframe as well as MongoDB (Y-Loading), e.g. via a service-driven architecture.
3. Enriched ODL. The Operational Data Layer (ODL) data is enriched with additional sources to serve as an operational intelligence platform for insights and analytics.
4. "MongoDB first". Transactions are written first to MongoDB, which passes the data on to the mainframe system of record.
5. Transforming the role of the mainframe: System of Record. MongoDB serves as the system of record for a multitude of applications, with deferred writes to the mainframe if necessary.
8. Offloading Reads
Initial use cases primarily focus on offloading costly reads, e.g. for querying large numbers of transactions for analytics or historical views across customer data.
Operational Data Layer (ODL): Using a change data capture (CDC) or delta load mechanism, you create an operational data layer alongside the mainframe that serves read-heavy operations. (Diagram: the mainframe still receives 100% of writes, while 50-90% of reads shift to the ODL and only 10-50% remain on the mainframe.)
Enriched Operational Data Layer (ODL): Additional data sources, such as files, are loaded into the ODL to create an even richer picture of your existing data and enable additional use cases like advanced analytics. (Diagram: writes still go 100% to the mainframe; the read split between mainframe and enriched ODL ranges from 25-75% on each side.)
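To make the delta-load mechanism concrete, here is a minimal sketch in Python with PyMongo that applies a mainframe extract to the ODL. The transactions collection, the deltas.jsonl extract file and the txn_id business key are illustrative assumptions, not details from the presentation; a real pipeline would read from whatever CDC feed or extract format the mainframe tooling produces.

```python
# Minimal sketch: apply a mainframe delta extract to a MongoDB ODL.
# Assumed/illustrative: the "odl.transactions" namespace, the "deltas.jsonl"
# file produced by the mainframe extract job, and the "txn_id" business key.
import json

from pymongo import MongoClient, UpdateOne

client = MongoClient("mongodb://localhost:27017")
odl = client["odl"]["transactions"]

ops = []
with open("deltas.jsonl") as extract:  # one JSON record per line
    for line in extract:
        record = json.loads(line)
        # Upsert on the business key so repeated loads are idempotent:
        # new records are inserted, changed records are overwritten.
        ops.append(UpdateOne({"txn_id": record["txn_id"]},
                             {"$set": record},
                             upsert=True))

if ops:
    result = odl.bulk_write(ops, ordered=False)
    print(f"upserted={result.upserted_count} modified={result.modified_count}")
```

Upserting on a stable business key keeps replays of the same extract harmless, which matters when a failed load has to be rerun.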
9. Offloading Reads & Writes
By introducing a smarter architecture to orchestrate writes concurrently, e.g. via a microservices architecture, you can shift away from delayed CDC or delta load mechanisms.
Y-Loading: Writing (some) data concurrently into the mainframe as well as MongoDB enables you to further limit interactions with the mainframe technology. It also sets you up for a more transformational shift of the role of the mainframe with regard to your enterprise architecture. (Diagram: the application writes through a microservices / backend-as-a-service layer, which fans writes out to both systems and also pulls in additional data sources and files; the read/write splits, shown as ranges such as 10-25% vs. 75-90% of writes and 40-80% vs. 20-60% of reads, move a growing share of traffic off the mainframe.)
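As a rough illustration of Y-loading, the following Python sketch fans a single logical write out to MongoDB and to the mainframe from the same service call. mainframe_post() is a placeholder for whatever gateway fronts the legacy system (an MQ bridge, a REST adapter, etc.); it is not an API from the presentation.

```python
# Minimal sketch of Y-loading: one service call writes to both systems.
# Assumed/illustrative: the "ods.payments" namespace and mainframe_post(),
# a stand-in for the real mainframe write gateway.
from concurrent.futures import ThreadPoolExecutor

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
payments = client["ods"]["payments"]

def mainframe_post(doc: dict) -> None:
    raise NotImplementedError("call the mainframe write gateway here")

def write_payment(doc: dict) -> None:
    # Fan the write out concurrently; surface a failure from either leg so
    # the two stores cannot silently diverge.
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(payments.insert_one, doc),
                   pool.submit(mainframe_post, doc)]
        for future in futures:
            future.result()  # re-raises any exception from that leg
```

Failing the whole request when either leg fails is the simplest consistency policy; production designs typically add retries or a reconciliation queue on top.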
10. Transforming the role of the mainframe
With a shift towards writing to MongoDB first, before writing to the mainframe (if at all), you further change the meaning of "system of record" and "mainframe" within the organisation.
"MongoDB first": Transactions are first written to MongoDB, which can serve as a buffer before passing transactions on to the mainframe as the System of Record.
System of Record: MongoDB serves as the main System of Record, with writes optionally passed on to the mainframe for legacy applications only, or the mainframe is decommissioned entirely. (Diagram: the application, enriched with additional data sources and files, reads and writes through a microservices / backend-as-a-service layer; the share of reads and writes still handled by the mainframe shrinks toward 0-10% in the final stage.)
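One hedged way to implement the "MongoDB first" buffering is a transactional outbox: commit the business document and a pending forwarding record in one MongoDB transaction, then let a background worker replay the outbox to the mainframe. Collection and field names below are invented for illustration, and multi-document transactions require a replica set deployment.

```python
# Minimal sketch of "MongoDB first" with a transactional outbox.
# Assumed/illustrative: the "sor" database, the "mainframe_outbox" collection
# and its fields; the deck does not prescribe this exact design.
from datetime import datetime, timezone

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["sor"]

def record_transaction(doc: dict) -> None:
    with client.start_session() as session:
        with session.start_transaction():  # requires a replica set
            db["transactions"].insert_one(doc, session=session)
            db["mainframe_outbox"].insert_one(
                {"payload": doc,
                 "status": "pending",
                 "queued_at": datetime.now(timezone.utc)},
                session=session)

# A separate worker drains "pending" outbox entries, forwards each one to the
# mainframe, and marks it "sent"; the request path never waits on the legacy system.
```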
12. The pieces of the puzzle
• Mainframe
• MongoDB
• Synchronization
• Access
13. Experiences
• Projects usually have three phases:
– First contact: we try out the ideas and the technology.
– Operational phase: we use the technology to implement the ideas.
– Creative phase: we use the technology for ideas that had not occurred to us before.
14. Proving the value
• Mainframe
• MongoDB
– Replica set
• Synchronization
– Batch (files)
• Access
– Application
– Load testing
15. Hitting the target
• Mainframe
• MongoDB
– Sharded cluster
• Synchronization
– Real time via CDC (see the sketch after this list)
• Access
– BI Connector
– API
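A minimal sketch of what the real-time stage can look like on the MongoDB side: an apply loop that consumes CDC events one at a time instead of nightly files. next_cdc_event() is a placeholder for whichever CDC feed or queue consumer the project actually uses, and the event shape shown in the comment is an assumption.

```python
# Minimal sketch: apply CDC events to the ODL as they arrive.
# Assumed/illustrative: next_cdc_event() and the event shape
# {"op": ..., "key": {...}, "after": {...}}; swap in the real CDC consumer.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
odl = client["odl"]["transactions"]

def next_cdc_event() -> dict:
    raise NotImplementedError("read the next change event from the CDC feed")

while True:
    event = next_cdc_event()
    if event["op"] in ("INSERT", "UPDATE"):
        # Replace-with-upsert keeps the ODL copy identical to the source row.
        odl.replace_one(event["key"], event["after"], upsert=True)
    elif event["op"] == "DELETE":
        odl.delete_one(event["key"])
```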
16. Going much further
• Mainframe and more
– Other databases
– External sources
• MongoDB and more
– Data lake
• Synchronization
– Real time, distributed, rich
• CDC
• Queues
• Transformation
• Access
– BI Connector
– API
– BaaS
17. Jim Duffy, Global Director of Information Strategy, MongoDB
Transforming Information Management with MongoDB
18. How best can we navigate today's complicated technical ecosystem?
19. The Entire Stack Has Changed
The platforms your end users and customers use to engage with your applications and services have fundamentally changed at an unprecedented speed over the past 5 years.

• Business: UPFRONT → SUBSCRIBE
• Applications: YEARS / MONTHS → WEEKS / DAYS
• Customers: PC → MOBILE / BYOD
• Engagement: ADS → SOCIAL
• Infrastructure: SERVERS → CLOUD
20. Implementation Considerations
Developing a sophisticated data management strategy requires many components. The required range of expertise is very broad, and many organisations struggle to deliver using only in-house resources.

Key Architecture Components:
• Access Management
• Virtualization or Containers
• Security & Entitlements
• Accounting and chargeback
• Backup and Recovery
• Distributed computing
• Server Hardware
• Storage
• Operating System
• Infrastructure Management
• etc.
21. Reduce bloated infrastructure
MongoDB enables you to eliminate technical debt for data storage, enabling more modern deployment patterns using hybrid cloud strategies and more efficient utilization.

Under-utilization & Special Hardware: legacy systems often reside on dedicated physical hardware. Under-utilization and high maintenance costs make up a large part of overall storage costs. Typical deployment: on premise, dedicated hardware (racks of specialist servers).

Efficient Use of Commodity Infrastructure: leveraging commodity infrastructure, either on premise or in the cloud, allows for a more cost-effective model for operating data infrastructure. Typical deployment: full flexibility (on-premise, cloud, virtualized, containers) on commodity servers.
22. Simplify technology stacks
Legacy stacks have too many layers, driving complexity & time to market. MongoDB enables you to collapse several legacy layers, as the required capabilities can all be provided directly by MongoDB.

Legacy software stack (too many layers & dependencies):
• Data Warehouse
• Relational Database
• Data Caching
• Web Services / SOAP
• Object-Relational Mapping
• Application

Collapsed stack with MongoDB (a future-proof architecture that increases business & IT flexibility):
• Optional: Data Warehouse (MongoDB is capable of serving as Data Warehouse or of sitting alongside other data solutions)
• Optional: Microservices / REST (full support for microservices, or direct access via native drivers using JSON; see the sketch below)
• Application
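To illustrate the collapsed stack, a minimal PyMongo sketch in which the application reads and writes nested, JSON-like documents directly through the native driver, with no ORM, cache or SOAP layer in between; collection and field names are illustrative:

```python
# Minimal collapsed-stack sketch: native driver access, no ORM layer.
from pymongo import MongoClient

orders = MongoClient("mongodb://localhost:27017")["shop"]["orders"]

# Write and read the same nested structure the application already uses.
orders.insert_one({
    "orderId": 1042,
    "customer": {"name": "A. Example", "segment": "retail"},
    "items": [{"sku": "X-1", "qty": 2}, {"sku": "Y-9", "qty": 1}],
})
for order in orders.find({"items.qty": {"$gte": 2}}):
    print(order["orderId"], order["customer"]["name"])
```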
23. MongoDB and Enterprise IT Strategy
Our technology can help you transform your IT organisation and modernise the entire IT stack by enabling you to leverage strategic solutions on every level to drive business transformation.

Legacy vs. Modern, layer by layer:
• Apps: On-Premise → SaaS, Mobile, Social
• Data Access: Object-Relational Mapping / ODBC Access / SOAP → Native drivers / Microservices / API Access / JSON
• Database: Oracle / Microsoft → MongoDB
• Data Schemas: Relational Data / Structured → Polymorphic Data (structured, semi-structured, unstructured)
• Offline Data: Teradata → Hadoop, Spark
• Compute: Scale-Up Server → Commodity HW / Cloud
• Storage: SAN → Local Storage / Cloud
• Network: Routers and Switches → Software-Defined Networks

MongoDB sits right at the centre of strategic IT as well as business transformation, enabling full-stack modernisation. By removing layers we can:
• Reduce complexity
• Reduce cost
• Increase business agility
• Improve data quality
• Improve service quality
• Enable innovation
24. Technical Debt Limits Innovation
Legacy IT landscapes which have grown over time usually display 3 main drivers of impedance mismatches that limit an organization's capability to innovate and deliver modern IT services:

Data Duplication
• Costly data reconciliation & management workflows
• Low data quality and lack of ownership / responsibility

Bloated Infrastructure
• Reliance on the "scale up" model
• Large footprint of costly storage area networks
• Outdated, dedicated infrastructure strategy

Complicated Software Stacks
• Too many layers, driving complexity & time to market
• Hiding deficiencies, e.g. by adding caching for high-frequency access
• Clash between object-oriented development and relational data

MongoDB can help you address all 3 drivers and unleash your potential to innovate.
25. Legacy RDBMS systems are falling short
RDBMS systems were not created for today's requirements and consequently try to bolt on features to compensate for the lack of capabilities. But this strategy can't compete with data management systems designed & purpose-built to solve today's problems.

Legacy:
• Rigid Schemas: resistant to change
• Relational Model: data changes constantly, which fits poorly with a relational model
• Scale-Up: throughput & cost make scale-up impractical; scale-up clusters were never meant to handle today's volumes

Today:
• Flexible Model (JSON): a flexible, multi-structured schema designed to adapt to changes (see the sketch below)
• Scale-out: scale out to the end of the world and distribute data where it needs to be
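A small sketch of the flexible model in practice: two differently shaped documents coexist in one collection and remain queryable, with no schema migration for the new fields. All field names are illustrative:

```python
# Minimal flexible-schema sketch; field names are illustrative.
from pymongo import MongoClient

events = MongoClient("mongodb://localhost:27017")["bank"]["events"]

# Two differently shaped records in the same collection, no ALTER TABLE.
events.insert_one({"type": "card", "amount": 12.50, "merchant": "Cafe"})
events.insert_one({"type": "transfer", "amount": 200.0,
                   "iban": "ES9100000000000000000000", "memo": "rent"})

# Queries span both shapes; documents without a field simply don't match.
for event in events.find({"amount": {"$gt": 100}}):
    print(event["type"], event["amount"])
```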
26. Adoption Roadmap
Adopting MongoDB for individual projects and applications will unlock many benefits over using legacy technology. Those gains can be further increased through a more strategic adoption. (The original chart plots these stages by scope against business benefits.)

• Single projects & applications: MongoDB as the operational database for a single project is usually the first step for our customers. Many leverage our professional services to help design & deploy according to best practices.
• Multiple projects / strategic adoption: adopting MongoDB as a strategic solution will help you drive innovation and deliver on business transformation agendas through increased efficiency & capabilities.
• Database as a Service (DBaaS): automating the provisioning of databases in your organisation will considerably decrease the burden on your operations teams and increase development productivity and business agility.
• Data as a Service (DaaS & BaaS): Data as a Service is an advanced way of storing and accessing data enterprise-wide and yields a multitude of benefits, e.g. improved data quality, reduced costs, and improved governance.

Leap-frogging steps, due to faster skill adoption or new business requirements, is not uncommon.
27. Modernized Application Landscape

Typical architecture (complex & fragile): each application uses non-standard data access against a sprawl of stores, e.g. an RDBMS, files, a mainframe, a key/value store, an in-memory cache, a wide-column store, and a graph store.

Modernized architecture (simplified & resilient): applications read and write through a microservices / API layer against an Operational Data Layer (ODL) with standardised data access, while the legacy stores (RDBMS, files, mainframe) feed the ODL via near real-time CDC and message streaming/processing.
28. Characteristics: Operational Data Layer (ODL)
Data agnostic & deployment agnostic:
• Supports structured, semi-structured and unstructured data with the same level of functionality
• Native drivers connect applications to data without need for conversion (JSON); see the access sketch below
• Multi-tenancy through use of a common data model
• Native support for all deployment types: on-premise / bare metal, private, public, hybrid and cross-cloud
• Scale-out architecture supports all deployment types in mixed mode
• Information lifecycle management easily managed by workload and geography
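As one possible shape for the standardised access layer in front of such an ODL, here is a minimal read-only REST endpoint sketched with Flask and PyMongo; the framework choice, route and field names are assumptions for illustration only:

```python
# Minimal standardised-access sketch: a read-only REST endpoint over the
# ODL. Flask, the route and the field names are illustrative assumptions.
from flask import Flask, jsonify
from pymongo import MongoClient

app = Flask(__name__)
odl = MongoClient("mongodb://localhost:27017")["odl"]

@app.route("/customers/<customer_id>")
def get_customer(customer_id: str):
    # Exclude MongoDB's internal _id so the document serialises as plain JSON.
    doc = odl["customers"].find_one({"customerId": customer_id}, {"_id": 0})
    return jsonify(doc) if doc else ("not found", 404)

if __name__ == "__main__":
    app.run(port=8080)
```

Routing every consumer through such an API is what lets the stores behind the ODL change (or the mainframe retire) without breaking applications.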
30. Database-as-a-Service
Migration from Oracle & Microsoft to create a consolidated "data fabric" reduces $m in cost, speeds application development & simplifies operations.

Problem:
• High licensing costs from proprietary database and data grid technologies
• Data duplication across systems with complex reconciliation controls
• High operational complexity impacting service availability and speed of application delivery

Solution:
• Implemented a multi-tenant PaaS with a shared data service based on MongoDB, accessed via a common API with message routing via Kafka
• Standardized data structures for storage and communication based on the JSON format
• Multi-sharded, cross-data-center deployment for scalability and availability

Results:
• $ millions in savings after migration from Coherence, Oracle database and Microsoft SQL Server
• Develop new apps in days vs. months
• 100% uptime with simplified platform architecture, higher utilization and reduced data center footprint
31. RBS's Investor Report FY'16
During their FY 2016 Investor Report, RBS CEO Ross McEwan highlighted the bank's MongoDB Data Fabric platform as a key enabler in reducing cost significantly and dramatically increasing the speed at which RBS can deploy new capabilities.

"Data Fabric will help reduce cost significantly and dramatically increase the speed at which we can deploy new capabilities for our customers"
- Ross McEwan, CEO, RBS
32. Content Management
Multi-National Financial Services Institution: migrated from an RDBMS and scales to 10 million customers.

Problem:
• Unable to scale the Oracle database to meet growth in both data volumes and customers
• High TCO driven by Oracle support costs & the complexity of managing separate metadata and document stores
• Rigid relational data model inhibits agility of application development and support of diverse document types

Solution:
• Migrated to MongoDB for an elastically scalable content repository
• Flexible data model allows the bank to quickly adapt the application to add new features and support new document types
• Native JSON support enables rapid integration between the online and mobile banking platforms, eliminating the ORM layer

Results:
• The bank can scale its content repository to add 1M new documents per day and serve 10M+ users
• MongoDB provides substantial TCO savings over the legacy Oracle database
• The service can now support 2,000+ different document types, with new features added quickly and cost-effectively
33. eCommerce Transformation
Mission-critical platform powering online purchasing of all Cisco products & services globally.

Problem:
• Poor customer experience: page rendering taking 5 seconds
• Unable to scale to meet platform growth, or roll out new features at the speed demanded by the business
• Couldn't take advantage of cloud economics

Solution:
• MongoDB Enterprise Advanced with Ops Manager
• Expressive query language & secondary indexes to support complex business queries
• Flexible data model supports faster app delivery
• MongoDB Global Consulting to accelerate successful project delivery

Results:
• Improved customer experience with 10x higher performance
• No downtime: automated database upgrades completed in 5 minutes, proactive health monitoring
• Cloud-ready platform distributed across multiple data centers for scale & resilience