MongoDB Common Use Cases
Emerging NoSQL Space

The beginning: RDBMS only
Last 10 years: RDBMS + Data Warehouse
Today: RDBMS + Data Warehouse + NoSQL
Qualities of NoSQL Workloads

Flexible data models
• Lists, nested objects
• Sparse schemas
• Semi-structured data
• Agile development

High throughput
• Lots of reads
• Lots of writes

Large data sizes
• Aggregate data size
• Number of objects

Low latency
• Both reads and writes
• Millisecond latency

Cloud computing
• Run anywhere
• No assumptions about hardware
• No / few knobs

Commodity hardware
• Ethernet
• Local disks
MongoDB was designed for this

Flexible data models
• Lists, nested objects
• Sparse schemas
• Semi-structured data
• Agile development
• JSON-based object model
• Dynamic schemas

High throughput
• Lots of reads
• Lots of writes
• Replica sets to scale reads
• Sharding to scale writes

Large data sizes
• Aggregate data size
• Number of objects
• 1000's of shards in a single DB
• Partitioning of data

Low latency
• Both reads and writes
• Millisecond latency
• In-memory cache
• Scale-out working set

Cloud computing
• Run anywhere
• No assumptions about hardware
• No / few knobs
• Scale-out to overcome hardware limitations

Commodity hardware
• Ethernet
• Local disks
• Designed for "typical" OS and local file system
Example customers
• Content Management
• Operational Intelligence
• Product Data Management
• User Data Management
• High Volume Data Feeds
USE CASES THAT LEVERAGE NOSQL
High Volume Data Feeds

Machine Generated Data
• More machines, more sensors, more data
• Variably structured

Stock Market Data
• High-frequency trading

Social Media Firehose
• Multiple sources of data
• Each source changes its format constantly
High Volume Data Feed

• Multiple data sources feed into the system
• Asynchronous writes
• Flexible document model can adapt to changes in sensor format
• Write to memory with periodic disk flush
• Scale writes over multiple shards
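The asynchronous-write pattern above can be sketched as follows. This is an illustrative example, not from the deck: the `feeds` collection, field names, and values are assumptions, and the shell command that would consume these objects is shown in a comment.

```javascript
// Hypothetical sensor reading; the payload shape can vary per source,
// which the document model absorbs without schema changes.
const reading = {
  source: "turbine-17",                      // which machine produced it
  ts: new Date("2012-03-07T18:32:35Z"),      // event timestamp
  metrics: { rpm: 3142, tempC: 81.5 }        // variably structured payload
};

// Fire-and-forget write: writeConcern w:0 skips the server acknowledgement,
// trading durability guarantees for ingest throughput on high-volume feeds.
const writeOptions = { writeConcern: { w: 0 } };

// In the mongo shell this would be applied as:
//   db.feeds.insertOne(reading, writeOptions)
```

For feeds that cannot tolerate silent loss, a higher write concern (e.g. `w: 1` or `w: "majority"`) restores acknowledgement at some throughput cost.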
Operational Intelligence

Ad Targeting
• Large volume of state about users
• Very strict latency requirements

Real-Time Dashboards
• Expose report data to millions of customers
• Report on large volumes of data
• Reports that update in real time

Social Media Monitoring
• What are people talking about?
Operational Intelligence

• Low latency reads
• Parallelize queries across replicas and shards
• In-database aggregation
• Flexible schema adapts to changing input data
• Dashboards served through an API
• Can use the same cluster to collect, store, and report on data
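The in-database aggregation point can be made concrete with a pipeline sketch. The `events` collection and its fields (`page`, `ts`, `ms`) are assumptions for illustration; in the shell the pipeline would be passed to `db.events.aggregate(pipeline)`.

```javascript
// A dashboard-style rollup computed inside the database rather than in
// application code: request counts and mean latency per page, last hour.
const pipeline = [
  { $match: { ts: { $gte: new Date(Date.now() - 60 * 60 * 1000) } } }, // last hour only
  { $group: { _id: "$page",                  // one result per page
              hits: { $sum: 1 },             // request count
              avgMs: { $avg: "$ms" } } },    // mean response time
  { $sort: { hits: -1 } }                    // busiest pages first
];
```

Placing `$match` first lets the stage use an index on `ts` and keeps the documents flowing into `$group` small.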
Behavioral Profiles

1. See Ad
2. See Ad
3. Click
4. Convert

• Rich profiles collecting multiple complex actions
• Scale out to support high throughput of activities tracked
• Dynamic schemas make it easy to track vendor-specific attributes
• Indexing and querying to support matching, frequency capping

{ cookie_id: "1234512413243",
  advertiser: {
    apple: {
      actions: [
        { impression: "ad1", time: 123 },
        { impression: "ad2", time: 232 },
        { click: "ad2", time: 235 },
        { add_to_cart: "laptop",
          sku: "asdf23f",
          time: 254 },
        { purchase: "laptop", time: 354 }
      ]
    }
  }
}
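Appending a new action to a profile like the one above is a single in-place update. This is a sketch: the `profiles` collection name is an assumption, and the shell command consuming these objects is shown in a comment.

```javascript
// Match the profile by its cookie, then push a new action onto the
// embedded array for that advertiser in one atomic update.
// Shell form: db.profiles.updateOne(filter, update)
const filter = { cookie_id: "1234512413243" };
const update = {
  $push: { "advertiser.apple.actions": { click: "ad1", time: 260 } }
};

// An index on the lookup key keeps writes and frequency-capping reads fast:
//   db.profiles.createIndex({ cookie_id: 1 })
```

Frequency capping then becomes an indexed read of the profile followed by a count of matching actions, rather than a join across event tables.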
Product Data Management

E-Commerce Product Catalog
• Diverse product portfolio
• Complex querying and filtering

Flash Sales
• Scale for short bursts of high-volume traffic
• Scalable but consistent view of inventory
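A consistent view of inventory under flash-sale load can lean on single-document atomicity. The sketch below is illustrative: the `inventory` collection and field names are assumptions, with the shell command shown in a comment.

```javascript
// Decrement stock only while stock remains, in one atomic operation.
// Shell form: db.inventory.findOneAndUpdate(filter, update)
const filter = { sku: "laptop-15", qty: { $gte: 1 } }; // match only if in stock
const update = { $inc: { qty: -1 } };                  // atomic decrement
```

Because the match and the decrement happen in a single atomic operation on one document, concurrent buyers cannot drive `qty` below zero; a buyer whose match fails simply sees the item as sold out.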
Metadata

• Indexing and rich query API for easy searching and sorting
• Flexible data model for similar, but different objects

db.archives.find({ country: "Egypt" });

{ type: "Artefact",
  medium: "Ceramic",
  country: "Egypt",
  year: "3000 BC"
}

{ ISBN: "00e8da9b",
  type: "Book",
  country: "Egypt",
  title: "Ancient Egypt"
}
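The query above can be supported and extended with an index and a sort. The `archives` collection matches the slide; the compound index choice is an assumption for illustration, with shell forms shown in comments.

```javascript
// Compound index covering the filter field and the sort field:
//   db.archives.createIndex(indexSpec)
const indexSpec = { country: 1, type: 1 };

// Filter plus sort, both served by the index above:
//   db.archives.find(query).sort(order)
const query = { country: "Egypt" };
const order = { type: 1 };   // e.g. artefacts grouped before books
```

Because both documents share the fields being queried (`country`, `type`) while differing elsewhere, one index serves the whole heterogeneous collection.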
Content Management

News Site
• Comments and user-generated content
• Personalization of content, layout

Multi-Device Rendering
• Generate layout on the fly for each device that connects
• No need to cache static pages

Sharing
• Store large objects
• Simple modeling of metadata
Content Management

• GridFS for large object storage
• Flexible data model for similar, but different objects
• Geospatial indexing for location-based searches
• Horizontal scalability for large data sets

{ camera: "Nikon d4",
  location: [ -122.418333, 37.775 ]
}

{ camera: "Canon 5d mkII",
  people: [ "Jim", "Carol" ],
  taken_on: ISODate("2012-03-07T18:32:35.002Z")
}

{ origin: "facebook.com/photos/xwdf23fsdf",
  license: "Creative Commons CC0",
  size: {
    dimensions: [ 124, 52 ],
    units: "pixels"
  }
}
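The geospatial piece can be sketched as an index plus a proximity query over photo documents like those above. The `photos` collection name and the search point/radius are assumptions; shell forms appear in comments.

```javascript
// Geospatial index over the location field:
//   db.photos.createIndex(indexSpec)
const indexSpec = { location: "2dsphere" };

// Photos within 5 km of a point (coordinates are [longitude, latitude]):
//   db.photos.find(nearQuery)
const nearQuery = {
  location: {
    $near: {
      $geometry: { type: "Point", coordinates: [-122.418333, 37.775] },
      $maxDistance: 5000   // metres
    }
  }
};
```

Documents lacking a `location` field, like the second and third examples above, are simply absent from geospatial results; no placeholder values are needed.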
User Data Management

Video Games
• User state and session management

Social Graphs
• Scale out to large graphs
• Easy to search and process

Identity Management
• Authentication, Authorization, and Accounting
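For session management, expiry can be pushed into the database with a TTL index. This sketch assumes a hypothetical `sessions` collection whose documents carry a `lastActive` timestamp; the shell form is in a comment.

```javascript
// TTL index: the server purges documents whose indexed timestamp is
// older than expireAfterSeconds, so stale sessions vanish without any
// application-side cleanup job.
// Shell form: db.sessions.createIndex(keys, options)
const keys = { lastActive: 1 };
const options = { expireAfterSeconds: 3600 }; // drop sessions idle > 1 hour
```

Touching `lastActive` on each request keeps an active session alive; expiry is approximate, since the background purge runs periodically rather than at the exact deadline.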
IS MY USE CASE A GOOD FIT FOR MONGODB?
Good fits for MongoDB

• Variable data in objects: Dynamic schema and JSON data model enable flexible data storage without sparse tables or complex joins.
• Low-latency access: Memory-mapped storage engine caches documents in RAM, enabling in-memory performance. Data locality of documents can significantly improve latency over join-based approaches.
• High write or read throughput: Sharding plus replication lets you scale read and write traffic across multiple servers.
• Large number of objects to store: Sharding lets you split objects across multiple servers.
• Cloud-based deployment: Sharding and replication let you work around hardware limitations in clouds.
THANK YOU!



Free online training available for MongoDB at:

http://education.10gen.com

MongoDB SoCal 2020: From Pharmacist to Analyst: Leveraging MongoDB for Real-T...
 
MongoDB SoCal 2020: Best Practices for Working with IoT and Time-series Data
MongoDB SoCal 2020: Best Practices for Working with IoT and Time-series DataMongoDB SoCal 2020: Best Practices for Working with IoT and Time-series Data
MongoDB SoCal 2020: Best Practices for Working with IoT and Time-series Data
 
MongoDB SoCal 2020: MongoDB Atlas Jump Start
 MongoDB SoCal 2020: MongoDB Atlas Jump Start MongoDB SoCal 2020: MongoDB Atlas Jump Start
MongoDB SoCal 2020: MongoDB Atlas Jump Start
 
MongoDB .local San Francisco 2020: Powering the new age data demands [Infosys]
MongoDB .local San Francisco 2020: Powering the new age data demands [Infosys]MongoDB .local San Francisco 2020: Powering the new age data demands [Infosys]
MongoDB .local San Francisco 2020: Powering the new age data demands [Infosys]
 
MongoDB .local San Francisco 2020: Using Client Side Encryption in MongoDB 4.2
MongoDB .local San Francisco 2020: Using Client Side Encryption in MongoDB 4.2MongoDB .local San Francisco 2020: Using Client Side Encryption in MongoDB 4.2
MongoDB .local San Francisco 2020: Using Client Side Encryption in MongoDB 4.2
 
MongoDB .local San Francisco 2020: Using MongoDB Services in Kubernetes: any ...
MongoDB .local San Francisco 2020: Using MongoDB Services in Kubernetes: any ...MongoDB .local San Francisco 2020: Using MongoDB Services in Kubernetes: any ...
MongoDB .local San Francisco 2020: Using MongoDB Services in Kubernetes: any ...
 
MongoDB .local San Francisco 2020: Go on a Data Safari with MongoDB Charts!
MongoDB .local San Francisco 2020: Go on a Data Safari with MongoDB Charts!MongoDB .local San Francisco 2020: Go on a Data Safari with MongoDB Charts!
MongoDB .local San Francisco 2020: Go on a Data Safari with MongoDB Charts!
 
MongoDB .local San Francisco 2020: From SQL to NoSQL -- Changing Your Mindset
MongoDB .local San Francisco 2020: From SQL to NoSQL -- Changing Your MindsetMongoDB .local San Francisco 2020: From SQL to NoSQL -- Changing Your Mindset
MongoDB .local San Francisco 2020: From SQL to NoSQL -- Changing Your Mindset
 
MongoDB .local San Francisco 2020: MongoDB Atlas Jumpstart
MongoDB .local San Francisco 2020: MongoDB Atlas JumpstartMongoDB .local San Francisco 2020: MongoDB Atlas Jumpstart
MongoDB .local San Francisco 2020: MongoDB Atlas Jumpstart
 
MongoDB .local San Francisco 2020: Tips and Tricks++ for Querying and Indexin...
MongoDB .local San Francisco 2020: Tips and Tricks++ for Querying and Indexin...MongoDB .local San Francisco 2020: Tips and Tricks++ for Querying and Indexin...
MongoDB .local San Francisco 2020: Tips and Tricks++ for Querying and Indexin...
 
MongoDB .local San Francisco 2020: Aggregation Pipeline Power++
MongoDB .local San Francisco 2020: Aggregation Pipeline Power++MongoDB .local San Francisco 2020: Aggregation Pipeline Power++
MongoDB .local San Francisco 2020: Aggregation Pipeline Power++
 
MongoDB .local San Francisco 2020: A Complete Methodology of Data Modeling fo...
MongoDB .local San Francisco 2020: A Complete Methodology of Data Modeling fo...MongoDB .local San Francisco 2020: A Complete Methodology of Data Modeling fo...
MongoDB .local San Francisco 2020: A Complete Methodology of Data Modeling fo...
 
MongoDB .local San Francisco 2020: MongoDB Atlas Data Lake Technical Deep Dive
MongoDB .local San Francisco 2020: MongoDB Atlas Data Lake Technical Deep DiveMongoDB .local San Francisco 2020: MongoDB Atlas Data Lake Technical Deep Dive
MongoDB .local San Francisco 2020: MongoDB Atlas Data Lake Technical Deep Dive
 
MongoDB .local San Francisco 2020: Developing Alexa Skills with MongoDB & Golang
MongoDB .local San Francisco 2020: Developing Alexa Skills with MongoDB & GolangMongoDB .local San Francisco 2020: Developing Alexa Skills with MongoDB & Golang
MongoDB .local San Francisco 2020: Developing Alexa Skills with MongoDB & Golang
 
MongoDB .local Paris 2020: Realm : l'ingrédient secret pour de meilleures app...
MongoDB .local Paris 2020: Realm : l'ingrédient secret pour de meilleures app...MongoDB .local Paris 2020: Realm : l'ingrédient secret pour de meilleures app...
MongoDB .local Paris 2020: Realm : l'ingrédient secret pour de meilleures app...
 
MongoDB .local Paris 2020: Upply @MongoDB : Upply : Quand le Machine Learning...
MongoDB .local Paris 2020: Upply @MongoDB : Upply : Quand le Machine Learning...MongoDB .local Paris 2020: Upply @MongoDB : Upply : Quand le Machine Learning...
MongoDB .local Paris 2020: Upply @MongoDB : Upply : Quand le Machine Learning...
 

Recently uploaded

GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
Neo4j
 
How to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptxHow to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptx
danishmna97
 
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdfFIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance
 
Communications Mining Series - Zero to Hero - Session 1
Communications Mining Series - Zero to Hero - Session 1Communications Mining Series - Zero to Hero - Session 1
Communications Mining Series - Zero to Hero - Session 1
DianaGray10
 
UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5
DianaGray10
 
A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...
sonjaschweigert1
 
Climate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing DaysClimate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing Days
Kari Kakkonen
 
Introduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - CybersecurityIntroduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - Cybersecurity
mikeeftimakis1
 
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
Neo4j
 
Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !
KatiaHIMEUR1
 
National Security Agency - NSA mobile device best practices
National Security Agency - NSA mobile device best practicesNational Security Agency - NSA mobile device best practices
National Security Agency - NSA mobile device best practices
Quotidiano Piemontese
 
Essentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FMEEssentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FME
Safe Software
 
Artificial Intelligence for XMLDevelopment
Artificial Intelligence for XMLDevelopmentArtificial Intelligence for XMLDevelopment
Artificial Intelligence for XMLDevelopment
Octavian Nadolu
 
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
名前 です男
 
Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
Ana-Maria Mihalceanu
 
20240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 202420240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 2024
Matthew Sinclair
 
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
Neo4j
 
Building RAG with self-deployed Milvus vector database and Snowpark Container...
Building RAG with self-deployed Milvus vector database and Snowpark Container...Building RAG with self-deployed Milvus vector database and Snowpark Container...
Building RAG with self-deployed Milvus vector database and Snowpark Container...
Zilliz
 
Epistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI supportEpistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI support
Alan Dix
 
Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdfUni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems S.M.S.A.
 

Recently uploaded (20)

GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
 
How to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptxHow to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptx
 
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdfFIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
 
Communications Mining Series - Zero to Hero - Session 1
Communications Mining Series - Zero to Hero - Session 1Communications Mining Series - Zero to Hero - Session 1
Communications Mining Series - Zero to Hero - Session 1
 
UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5
 
A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...
 
Climate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing DaysClimate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing Days
 
Introduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - CybersecurityIntroduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - Cybersecurity
 
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
 
Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !
 
National Security Agency - NSA mobile device best practices
National Security Agency - NSA mobile device best practicesNational Security Agency - NSA mobile device best practices
National Security Agency - NSA mobile device best practices
 
Essentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FMEEssentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FME
 
Artificial Intelligence for XMLDevelopment
Artificial Intelligence for XMLDevelopmentArtificial Intelligence for XMLDevelopment
Artificial Intelligence for XMLDevelopment
 
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
 
Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
 
20240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 202420240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 2024
 
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
 
Building RAG with self-deployed Milvus vector database and Snowpark Container...
Building RAG with self-deployed Milvus vector database and Snowpark Container...Building RAG with self-deployed Milvus vector database and Snowpark Container...
Building RAG with self-deployed Milvus vector database and Snowpark Container...
 
Epistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI supportEpistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI support
 
Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdfUni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdf
 

Common MongoDB Use Cases Webinar

  • 2. Emerging NoSQL Space: In the beginning there was the RDBMS. Over the last 10 years the data warehouse split off from the RDBMS. Today, NoSQL stores are emerging as a third tier alongside the RDBMS and the data warehouse.
  • 3. Qualities of NoSQL Workloads
    - Flexible data models: lists, nested objects; sparse schemas; semi-structured data; agile development
    - High throughput: lots of reads; lots of writes
    - Large data sizes: aggregate data size; number of objects
    - Low latency: both reads and writes; millisecond latency
    - Cloud computing: run anywhere; no assumptions about hardware; no / few knobs
    - Commodity hardware: Ethernet; local disks
  • 4. MongoDB was designed for this
    - Flexible data models: JSON-based object model; dynamic schemas
    - High throughput: replica sets to scale reads; sharding to scale writes
    - Large data sizes: 1000's of shards in a single DB; partitioning of data
    - Low latency: in-memory cache; scale-out working set
    - Cloud computing: scale-out to overcome hardware limitations
    - Commodity hardware: designed for "typical" OS and local file system; no / few knobs
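The dynamic-schema point can be illustrated without a server: documents in one MongoDB collection need not share fields. A minimal plain-JavaScript sketch, where an in-memory array stands in for a collection and `insert`/`find` are hypothetical stand-ins for the real driver calls:

```javascript
// In-memory stand-in for a MongoDB collection: documents with
// different shapes coexist, and an equality query matches only
// documents that actually carry the queried field.
const collection = [];
const insert = (doc) => collection.push(doc);
const find = (query) =>
  collection.filter((doc) =>
    Object.entries(query).every(([k, v]) => doc[k] === v)
  );

// Two products with different schemas in the same collection,
// no schema migration required to mix them.
insert({ type: "Book", ISBN: "00e8da9b", title: "Ancient Egypt" });
insert({ type: "Artefact", medium: "Ceramic", year: "3000 BC" });

// Querying on a field only one document has simply skips the others.
console.log(find({ medium: "Ceramic" }).length); // 1
```

The real database adds indexing, persistence, and a richer query language on top, but the data-model flexibility is exactly this: heterogeneous objects side by side.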
  • 5. Example customers: Content Management; Operational Intelligence; Product Data Management; User Data Management; High Volume Data Feeds
  • 7. High Volume Data Feeds
    - Machine generated data: more machines, more sensors, more data; variably structured
    - Stock market data: high frequency trading
    - Social media firehose: multiple sources of data; each changes its format constantly
  • 8. High Volume Data Feed
    - Flexible document model can adapt to changes in sensor format
    - Asynchronous writes from many data sources
    - Write to memory with periodic disk flush
    - Scale writes over multiple shards
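The "write to memory with periodic disk flush" idea on this slide can be sketched in a few lines of plain JavaScript. This is an illustration of the buffering pattern, not mongod's actual storage engine; the flush target is just an array standing in for disk:

```javascript
// Buffered writer: accumulate documents in RAM and flush them as
// one large sequential batch, trading a little durability for the
// ability to absorb a high-volume feed.
class BufferedWriter {
  constructor(flushFn, maxBuffered = 3) {
    this.buffer = [];
    this.flushFn = flushFn;
    this.maxBuffered = maxBuffered;
  }
  write(doc) {
    this.buffer.push(doc);                 // fast in-memory append
    if (this.buffer.length >= this.maxBuffered) this.flush();
  }
  flush() {
    if (this.buffer.length === 0) return;
    this.flushFn(this.buffer.splice(0));   // one big batch "to disk"
  }
}

const disk = [];
const writer = new BufferedWriter((batch) => disk.push(batch));
for (let i = 0; i < 7; i++) writer.write({ sensor: i });
writer.flush(); // drain the final partial batch
console.log(disk.length); // 3 batches: 3 + 3 + 1 documents
```

Grouping writes this way converts many small random I/Os into a few large sequential ones, which is the property the speaker notes below explain in terms of disk seek cost.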
  • 9. Operational Intelligence
    - Ad targeting: large volume of state about users; very strict latency requirements
    - Real time dashboards: expose report data to millions of customers; report on large volumes of data; reports that update in real time
    - Social media monitoring: what are people talking about?
  • 10. Operational Intelligence
    - Low latency reads; parallelize queries across replicas and shards
    - In-database aggregation feeding dashboards and an API
    - Flexible schema adapts to changing input data
    - Can use the same cluster to collect, store, and report on data
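"In-database aggregation" here refers to MongoDB's aggregation pipeline. A tiny plain-JavaScript model of what a `$match` + `$group` pipeline computes, over hypothetical click-stream events (the field names are illustrative, and this is the semantics only, not driver code):

```javascript
// Sample events as they might land from a tracking feed.
const events = [
  { page: "/home", status: 200 },
  { page: "/home", status: 200 },
  { page: "/buy",  status: 200 },
  { page: "/buy",  status: 500 },
];

// Stage 1 ($match): filter documents by a predicate.
const match = (docs, pred) => docs.filter(pred);
// Stage 2 ($group with $sum: 1): count documents per key.
const group = (docs, keyFn) =>
  docs.reduce((acc, d) => {
    const k = keyFn(d);
    acc[k] = (acc[k] || 0) + 1;
    return acc;
  }, {});

// Equivalent in spirit to:
//   db.events.aggregate([
//     { $match: { status: 200 } },
//     { $group: { _id: "$page", count: { $sum: 1 } } }
//   ])
const counts = group(match(events, (d) => d.status === 200), (d) => d.page);
console.log(counts); // { '/home': 2, '/buy': 1 }
```

Running this inside the database, next to the data, is what lets the same cluster collect, store, and report without a separate ETL hop.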
  • 11. Behavioral Profiles
    - Rich profiles collecting multiple complex actions (see ad, see ad again, click, convert)
    - Scale out to support high throughput of activities tracked
    - Dynamic schemas make it easy to track vendor specific attributes
    - Indexing and querying to support matching, frequency capping

    { cookie_id: "1234512413243",
      advertiser: {
        apple: {
          actions: [
            { impression: "ad1", time: 123 },
            { impression: "ad2", time: 232 },
            { click: "ad2", time: 235 },
            { add_to_cart: "laptop", sku: "asdf23f", time: 254 },
            { purchase: "laptop", time: 354 }
          ]
        }
      }
    }
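Recording a new tracked action against a profile like this maps naturally onto an update that appends to the nested `actions` array (in MongoDB, a `$push` update). A plain-JavaScript sketch of that operation's shape; the `trackAction` helper is hypothetical, not a driver API:

```javascript
// Profile document shaped like the slide's example.
const profile = {
  cookie_id: "1234512413243",
  advertiser: { apple: { actions: [] } },
};

// Sketch of what an update like
//   db.profiles.updateOne({ cookie_id: ... },
//     { $push: { "advertiser.apple.actions": action } })
// accomplishes: append one event to a nested array in place,
// without rewriting the rest of the profile.
function trackAction(doc, advertiser, action) {
  doc.advertiser[advertiser].actions.push(action);
}

trackAction(profile, "apple", { impression: "ad1", time: 123 });
trackAction(profile, "apple", { click: "ad1", time: 235 });
console.log(profile.advertiser.apple.actions.length); // 2
```

Because each event is a free-form sub-document, vendor-specific attributes (like the `sku` on the add-to-cart event above) ride along without any schema change.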
  • 12. Product Data Management
    - E-commerce product catalog: diverse product portfolio; complex querying and filtering
    - Flash sales: scale for short bursts of high-volume traffic; scalable but consistent view of inventory
  • 13. Meta data
    - Indexing and rich query API for easy searching and sorting:
      db.archives.find({ "country": "Egypt" });
    - Flexible data model for similar, but different objects:
      { type: "Artefact", medium: "Ceramic", country: "Egypt", year: "3000 BC" }
      { ISBN: "00e8da9b", type: "Book", country: "Egypt", title: "Ancient Egypt" }
  • 14. Content Management
    - News site: comments and user generated content; personalization of content and layout
    - Multi-device rendering: generate layout on the fly for each device that connects; no need to cache static pages
    - Sharing: store large objects; simple modeling of metadata
  • 15. Content Management
    - Geospatial indexing for location based searches
    - GridFS for large object storage
    - Flexible data model for similar, but different objects
    - Horizontal scalability for large data sets

    { camera: "Nikon d4", location: [ -122.418333, 37.775 ] }
    { camera: "Canon 5d mkII", people: [ "Jim", "Carol" ], taken_on: ISODate("2012-03-07T18:32:35.002Z") }
    { origin: "facebook.com/photos/xwdf23fsdf", license: "Creative Commons CC0",
      size: { dimensions: [ 124, 52 ], units: "pixels" } }
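The geospatial point can be illustrated with a brute-force "near" query in plain JavaScript. MongoDB's geospatial indexes answer this kind of query efficiently server-side (e.g. via `$near`); the sketch below uses a simple equirectangular distance approximation, which is adequate only over short distances:

```javascript
// Photos with [longitude, latitude] locations, as in the slide.
const photos = [
  { camera: "Nikon d4",      location: [-122.418333, 37.775] }, // San Francisco
  { camera: "Canon 5d mkII", location: [-73.9857, 40.7484] },   // New York
];

// Approximate distance in km between two [lng, lat] points
// (equirectangular projection; fine for city-scale distances).
function distanceKm([lng1, lat1], [lng2, lat2]) {
  const kmPerDeg = 111.32;
  const x = (lng2 - lng1) * Math.cos(((lat1 + lat2) / 2) * Math.PI / 180);
  const y = lat2 - lat1;
  return Math.hypot(x, y) * kmPerDeg;
}

// Brute-force equivalent of a max-distance "near" query.
const near = (point, maxKm) =>
  photos.filter((p) => distanceKm(p.location, point) <= maxKm);

console.log(near([-122.42, 37.77], 50).length); // 1 (only the SF photo)
```

A linear scan like this is O(n) per query; the whole value of a geospatial index is replacing that scan with an index lookup.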
  • 16. User Data Management
    - Video games: user state and session management
    - Social graphs: scale out to large graphs; easy to search and process
    - Identity management: authentication, authorization, and accounting
  • 17. IS MY USE CASE A GOOD FIT FOR MONGODB?
  • 18. Good fits for MongoDB
    - Variable data in objects: dynamic schema and the JSON data model enable flexible data storage without sparse tables or complex joins
    - Low latency access: the memory-mapped storage engine caches documents in RAM, enabling in-memory performance; data locality of documents can significantly improve latency over join-based approaches
    - High write or read throughput: sharding plus replication lets you scale read and write traffic across multiple servers
    - Large number of objects to store: sharding lets you split objects across multiple servers
    - Cloud based deployment: sharding and replication let you work around hardware limitations in clouds
  • 19. THANK YOU! Free online training available for MongoDB at: http://education.10gen.com

Editor's Notes

  1. In the beginning, there was the RDBMS, and if you needed to store data, that was what you used. There really wasn't much other choice, short of building a custom database solution for your problem. But the RDBMS is performance critical, and BI workloads tended to eat up system resources with long-running queries that touched every value in a table. So we carved off the data warehouse as a place to store a copy of the operational data for analytical queries. This offloaded work from the RDBMS and bought us cycles to scale higher.

Today, we're seeing another split. There's a new set of workloads that don't fit well into an RDBMS. They might not fit for data-modeling reasons, for example loosely structured data that is difficult to model in a relational database. Or they might have different workload requirements around working set, required performance, and the size and scale of the data. These are being carved off into yet another tier of the data architecture: the NoSQL store. We don't think MongoDB is going to completely replace the database architectures we see today, but just as we saw a split between OLTP and OLAP databases in the past, we think most companies will end up with three different datastores in the enterprise datacenter. We'll dig into the workloads that are driving people to NoSQL datastores.
  2. These are some of the qualities of workloads that necessitate a move to NoSQL. Each of these qualities is difficult to achieve in an RDBMS but is well addressed by NoSQL data stores.

One aspect is the data model. As our code frameworks get more sophisticated, the entities we store in our applications get more complex, and the number of fields and values we try to store within an object keeps increasing. Sometimes that data is well structured, but we're seeing an increase in semi-structured and unstructured data: models where every object might have a variable set of fields, which are difficult to model in a relational database. If you're adopting an agile development methodology and constantly pushing out new releases, a relational database can be painful because you have to change the schema with every new version. That might work fine in development, but in production adding a column can be a very slow operation on a database with a terabyte or more of data.

Another aspect is throughput. Traditionally we could scale an RDBMS by adding replicas to get plenty of read capacity. This works well if you have a relatively small number of updates but need high read volume. If you need high write scalability, however, an RDBMS isn't going to cut it. For example, imagine an application consuming stock tick data: that is a very high volume of data to write to your database, and traditional replication doesn't give you write throughput. You need a datastore that can scale write capacity beyond what a single server can handle.
The old way of scaling up a single server to a huge number of cores and a huge amount of RAM doesn't work: it's extremely expensive and still hits a limit. You could invest a lot of time building your own sharding or data-distribution architecture (as Google did), but MongoDB does this for you out of the box.

Data size is another aspect. "Big Data" now means people want to store and process terabytes, petabytes, and even exabytes of data, which would be very hard to do in a relational world. The total number of objects being stored is also going up. Imagine a social media application with hundreds of millions of users constantly updating their status: that means billions of objects living in your database, and you need something that can handle that volume of data. Hadoop can help with offline batch-processing workloads, but increasingly people want that data available online as well. These aren't static files to process later; these are objects you want to modify and query in real time.

Latency is incredibly important if you're trying to deliver a great user experience. Maintaining low latency over large datasets becomes difficult in a relational model if you are doing lots of joins, so rethinking the problem and adopting a new model usually becomes necessary.

Cloud deployments are becoming the de facto deployment architecture for many applications today, whether it's a startup that doesn't want to invest in a hardware infrastructure or an enterprise application group that doesn't want the long lead time of internal IT procurement cycles. Cloud computing architectures don't give you the same options for scaling relational databases as running in your own datacenter, where scaling an Oracle database might involve buying a 128-core server and upgrading your SAN infrastructure with InfiniBand or FC interconnects.
Scaling up to a VM with 128 cores and a terabyte of RAM will be difficult (if not impossible) and very expensive in a cloud hosting environment. Instead you need an infrastructure that can scale out horizontally, combining the compute capacity of multiple virtual machines. And since you're scaling out sideways and can have many servers in your NoSQL environment, you want fewer configuration parameters and fewer knobs to tweak, to keep configuration complexity down. The move to commodity hardware also drives a lot of cost savings, for example moving away from expensive and complicated SAN infrastructures to local disks.
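The horizontal scale-out this note describes boils down to partitioning documents across servers by a shard key. A toy hash-based router in plain JavaScript; MongoDB's real routing is range- or hash-based and handled by mongos, and the names here are purely illustrative:

```javascript
// Route each document to one of N shards by hashing its shard key.
const NUM_SHARDS = 4;
const shards = Array.from({ length: NUM_SHARDS }, () => []);

// Tiny string hash (djb2-style); real systems use stronger hashes.
function hashKey(key) {
  let h = 5381;
  for (const ch of String(key)) h = (h * 33 + ch.codePointAt(0)) >>> 0;
  return h;
}

// Writes (and key-based reads) go straight to one shard, so adding
// shards adds both write capacity and aggregate RAM.
function insert(doc, shardKey) {
  const shardId = hashKey(doc[shardKey]) % NUM_SHARDS;
  shards[shardId].push(doc);
  return shardId;
}

for (let i = 0; i < 100; i++) insert({ user_id: "u" + i, n: i }, "user_id");
const total = shards.reduce((sum, s) => sum + s.length, 0);
console.log(total); // 100, spread across the 4 shards
```

The key design choice is that routing depends only on the shard key, so no single node ever has to see the whole write stream.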
  3. MongoDB was designed with these things in mind. JSON lets you easily start adding new objects with different types of data to the system without a schema migration; you can store things that look more like objects than rows in a table. MongoDB allows you to scale out to handle not just increased read load but also increased write load, and since you're scaling out, you can always add more shards to handle a constantly increasing amount of data. Many older databases are architected around a model where everything lives on disk, with a second caching layer on top to make them fast. MongoDB was designed to be a very fast database: it makes heavy use of the RAM in the system and only goes to disk when it really needs to. MongoDB was designed around the scale-out architecture approach from the beginning; we assume we're running on commodity hardware with slower disk drives, and it performs very well in that environment.
  4. Five major areas where people are doing interesting things with MongoDB.

Content management exploits the flexible-schema nature of MongoDB. For example, a web application serving audio, video, images, text, and user-generated social media content can store all those objects in MongoDB without every object needing the same fields. If you want to add a Twitter stream, you can just add a collection of tweets to your database; you don't need to go back and modify a schema to make it work.

Operational intelligence is a variation on business intelligence. We think most BI workloads still live in data warehouse platforms, but a number of applications today are characterized by real-time access to data. For example, social media sentiment analysis consumes social media data in real time to figure out what people are saying about your brand. Real-time analytics is usually not possible in a traditional BI environment, where an ETL process bulk-loads data into the warehouse and a reporting or dashboard infrastructure analyzes it after the fact. If you're building something customer-facing, exposing dashboards not just to a handful of analysts but to thousands of customers, or better yet via an API back into your operational systems, you need something like MongoDB.

Product data management typically means storing things like product catalogs. If you're building a site like Amazon, selling books, shoes, movies, and televisions, it is difficult to come up with a standard schema for all those different products, so you would usually have a different table for each product type. We also see the emergence of flash-sale sites with very spiky traffic depending on when the sale occurs.
It isn't enough just to replication-scale for millions of reads: because a flash sale has a limited amount of inventory, you need a strictly consistent view of that inventory so you don't oversell the product.

User data management comes in many forms. Typically we think of it like a directory in an enterprise, handling user names, passwords, and so on. But now apps like Foursquare have users checking in at locations, which goes beyond static user information to status updates and information about activities that are constantly changing. It is also big in online gaming: imagine a Facebook game where millions of people are building a virtual world; if the game takes a second or two to save every change, that is a terrible user experience. You need to scale out to handle a large aggregate number of users and a very large working set, with low-latency access to that data.

High volume data feeds show up online with click tracking and analytics, and offline with financial applications, such as high-frequency trading systems needing low-latency access to quotes coming off the stock market, or social media applications trying to ingest the Twitter firehose and cope with that volume of data. This is all about write scaling and coping with increased write load.
  5. Let’s look at some examples of applications and architectures that people are actually using in these areas.
6. Machine-generated data: for example, hundreds or thousands of web servers generating log data, or an energy company with thousands of temperature or electricity-flow sensors in customers' houses generating a constant stream of sensor data. In many cases it isn't good enough to just dump this data to a file, because you want realtime access to it for analysis as it is being generated.

Stock market data: if you are a financial house ingesting L1 or L2 quote feeds, that is a massive volume of data. If you want to do complex event processing, such as keeping the last hour of data queryable, then without a NoSQL database like MongoDB you are probably building a custom system so that your high-frequency trading applications can access that amount of data quickly.

Social media firehose: if you connect to the Twitter firehose and want to store and use every tweet generated across Twitter, that alone is a massive write load. And once you extend your infrastructure to handle Facebook updates, Google+ updates, and whatever social media source comes along next, you need to scale out your write load even further. Each of these sources also has a different data format and different metadata, and you want to be able to query on fields that are not present in every format, to make use of each source's specialized nature.
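The "keep the last hour of data" idea above is a rolling time window. A toy sketch of that buffer in plain Python (in MongoDB this role is often played by a TTL index or a capped collection; the timestamps and window size here are illustrative):

```python
# Minimal rolling "last hour" window for incoming quotes.
# Old entries are evicted as new ones arrive.
from collections import deque

WINDOW_SECONDS = 3600  # one hour

class QuoteWindow:
    def __init__(self, window=WINDOW_SECONDS):
        self.window = window
        self.quotes = deque()  # (timestamp, price) pairs, oldest first

    def add(self, ts, price):
        self.quotes.append((ts, price))
        # Drop anything older than the window relative to the newest quote.
        while self.quotes and ts - self.quotes[0][0] > self.window:
            self.quotes.popleft()

w = QuoteWindow()
w.add(0, 100.0)
w.add(1800, 101.0)
w.add(4000, 99.5)     # the quote from t=0 is now over an hour old: evicted
print(len(w.quotes))  # 2
```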
7. We architect a system that uses MongoDB's scale-out architecture. We use sharding, which allows you to:

Aggregate the disk drive performance of all of those servers.

Aggregate the RAM of all of those servers to stage writes and buffer the incoming flow of documents. MongoDB's storage engine works off a periodic flush: data is written to RAM and periodically flushed to disk. This lets the OS structure those I/Os much more efficiently and work within the capabilities of the disk drives. Disks tend to be bad at random I/O, since there is high latency in seeking to a particular place and doing a single write. By grouping writes together, when we go to disk we seek to a location and write out a few hundred MB of data at once. Even on cheap drives, the bandwidth for writing out a few hundred MB is pretty good; it is the seek time that kills performance, so MongoDB avoids it as much as possible.

MongoDB also supports asynchronous writes. In a traditional database you often wait for confirmation that a write succeeded before continuing in your application. You can get that synchronous, acknowledged behavior in MongoDB, but you can also choose not to wait for acknowledgement. Yes, that means you could lose some data, but in a high-volume data feed losing an occasional entry often isn't so bad: if you are collecting data from thousands of web servers and miss one line from an access log, it may not affect your application in a statistically significant way. This is a good alternative depending on the value of your data and whether you absolutely need to guarantee every write, which can be much more expensive, especially for a high-volume feed. A lot of people are building out this kind of architecture.
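The "group small writes into large sequential flushes" behavior described above can be sketched with a simple buffer. This is emphatically not MongoDB's actual storage engine, just an illustration of the batching idea: many small writes land in memory, and one large sequential write goes to "disk" when the buffer fills.

```python
# Toy write buffer: individual writes accumulate in RAM and are flushed
# as one batch, the way a periodic flush turns many random writes into
# one large sequential write. Threshold of 4 is arbitrary.

class BufferedWriter:
    def __init__(self, flush_threshold=4):
        self.buffer = []
        self.flushes = []  # each entry represents one sequential disk write
        self.flush_threshold = flush_threshold

    def write(self, doc):
        self.buffer.append(doc)
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        if self.buffer:
            self.flushes.append(list(self.buffer))  # one grouped write
            self.buffer.clear()

w = BufferedWriter(flush_threshold=4)
for i in range(10):
    w.write({"seq": i})   # ten small writes...
w.flush()                  # ...flush the tail
print([len(batch) for batch in w.flushes])  # [4, 4, 2] -> three disk seeks, not ten
```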
8. High-volume data feeds tend to go hand in hand with operational intelligence workloads. If you don't need the data in realtime, it may be cheaper to write it out to flat files for later processing; but if you want to make use of a high volume of data in realtime, MongoDB can be excellent for this.

For example, if you are building an ad network with an ad-targeting system, you need to keep behavior profiles for hundreds of millions of users, updating a record every time a user visits a page. You also operate under a very tight latency budget: on an ad exchange you have about 100 ms to receive a request for an ad, figure out who the user is and how much they are worth to you, and submit a bid. You need a database that can handle this volume and respond in just a few milliseconds.

If you are building an analytics application and exposing a dashboard to your customers, you have many clients running range queries over a huge amount of data. Traditional BI systems are architected around a handful of people accessing dashboards; modern web apps expose them to lots of users in realtime. Users don't want a daily report mailed to ten people, they want to watch the charts change in realtime.

In social media monitoring, you are trying to understand what people are saying about your brand in realtime. That is a very large dataset to sift through with relatively complex queries.
9. In operational intelligence these are usually dashboards or APIs for viewing and monitoring the data in realtime.

Using replication we can natively scale out more read copies of the data, allowing us to run more of these complex analytical queries in realtime. Since MongoDB keeps your working set in RAM, it tends to answer those queries very quickly; as your browser polls for updates, the data in your working set is served from memory.

MongoDB has a rich query language for this, including the new aggregation framework and a built-in MapReduce framework for answering queries or generating new collections. Because a MapReduce job can take the output of a query as its input, we don't need a full table scan for every job, which makes it suitable for realtime aggregation.

The aggregation framework released in 2.2 gives us the kinds of capabilities we are used to in SQL engines, such as GROUP BY, SUM, MIN, MAX, and HAVING clauses. It is coded in C++ and runs very fast.

If your dataset changes frequently, managing schema migrations in a column-oriented database, or in a petabyte-scale data warehouse with a star schema, can be a nightmare. Here we can take advantage of MongoDB's flexible schema as well.
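To make the GROUP BY / SUM comparison concrete, here is a hedged sketch of what an aggregation `$group` stage computes, written as a toy evaluator over plain Python dicts. The document shapes mirror MongoDB's `{"$group": {"_id": "$field", "total": {"$sum": "$other"}}}` idea, but the fields and data are invented for illustration and this is not the real C++ engine:

```python
# Toy equivalent of a $group stage with a $sum accumulator
# (SQL: SELECT campaign, SUM(cost) ... GROUP BY campaign).

clicks = [
    {"campaign": "a", "cost": 2},
    {"campaign": "b", "cost": 5},
    {"campaign": "a", "cost": 3},
]

def group_sum(docs, key_field, sum_field):
    """Group documents by key_field and sum sum_field within each group."""
    totals = {}
    for doc in docs:
        key = doc[key_field]
        totals[key] = totals.get(key, 0) + doc[sum_field]
    # Emit results shaped like $group output: one doc per distinct _id.
    return [{"_id": k, "total": v} for k, v in sorted(totals.items())]

print(group_sum(clicks, "campaign", "cost"))
# [{'_id': 'a', 'total': 5}, {'_id': 'b', 'total': 5}]
```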
10. We talked about behavioral profiles and ad targeting. Here is a case where we are keeping a complex model of what a user is doing. We want to track that a user had an impression for a specific ad; three days later they might see a different ad from the same advertiser, and we track another impression, and this time they actually click on the ad. Maybe they go to the site and fill a shopping cart but leave before checking out. Many advertisers want to know what was in that cart so they can show you another ad later to incentivize you to come back and actually purchase those items.

You are storing a lot of rich data about a particular user so you can use it in realtime: look up the user's cookie, see which ads they have responded well to in the past and which they have ignored, see what was in their cart on their last visit, and decide how to bid.

MongoDB makes it easy to store a complex data model, and since all of these fields are stored contiguously on disk, you are not doing expensive joins to pull the data together for a single user, so this is very high performance. MongoDB also lets you build indexes on all of this data to make your queries even faster.
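A minimal sketch of that per-user profile document, with field names I have invented for illustration. The point is that impressions, clicks, and the abandoned cart all live in one nested document keyed by cookie, so a single lookup retrieves everything with no joins (in MongoDB the appends would be `$push`/`$set` updates on that one document):

```python
# One nested profile document per user, keyed by cookie.
profiles = {}  # cookie -> profile document

def record_impression(cookie, ad_id, clicked=False):
    """Append an impression event to the user's embedded history."""
    doc = profiles.setdefault(cookie, {"impressions": [], "cart": []})
    doc["impressions"].append({"ad": ad_id, "clicked": clicked})

def set_cart(cookie, items):
    """Remember what the user left in their shopping cart."""
    profiles.setdefault(cookie, {"impressions": [], "cart": []})["cart"] = items

record_impression("cookie-42", "ad-1")
record_impression("cookie-42", "ad-2", clicked=True)
set_cart("cookie-42", ["shoes", "socks"])

# At bid time: one lookup by cookie returns the whole profile.
doc = profiles["cookie-42"]
print(len(doc["impressions"]), doc["cart"])  # 2 ['shoes', 'socks']
```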
11. We see this in product catalogs storing a diverse portfolio of product data. Some attributes, like SKU and price, might be shared across all products, but a book has an author while an MP3 has an artist or band name. You can send a query that says "find all entries where the author field is Walter Isaacson", and even if only 5% of your database entries have an author field, MongoDB copes with this just fine.

That way, when your business starts selling a new kind of product, you don't need to modify your database to handle product attributes, like shoe sizes, that were never built into your schema.

Flash-sale sites combine what we have seen in the previous two examples, coping with very high load.
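A sketch of querying a sparse, optional field across heterogeneous product documents, mimicking `find({"author": "Walter Isaacson"})`. The catalog entries are invented for illustration; what matters is that documents missing the queried field simply don't match, and no shared schema is needed:

```python
# Heterogeneous product documents: shared fields (sku, price) plus
# per-type fields (author, artist, size) that only some documents have.
catalog = [
    {"sku": "b1", "price": 18.99, "type": "book", "author": "Walter Isaacson"},
    {"sku": "m1", "price": 0.99,  "type": "mp3",  "artist": "Some Band"},
    {"sku": "s1", "price": 59.00, "type": "shoe", "size": 10},
    {"sku": "b2", "price": 24.50, "type": "book", "author": "Walter Isaacson"},
]

def find(docs, query):
    """Match documents containing every key/value pair in the query;
    documents missing a queried field just fail to match."""
    return [d for d in docs if all(d.get(k) == v for k, v in query.items())]

hits = find(catalog, {"author": "Walter Isaacson"})
print([d["sku"] for d in hits])  # ['b1', 'b2']
```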
12. Moving to the broader area of content management.

Most news organizations are moving online, and it isn't enough just to put the news article online; they also want to handle user-generated data. They want to personalize the content and layout for what specific users want to see, based on their past browsing history, to make the site more useful to them.

You also want your content rendered specially for computers, iPads, iPhones, Android devices, and set-top boxes so that it looks good on all devices, and you want to do this rendering as the requests come in.

You might want to upload large objects like photos or videos and share them with other people. This often involves complex metadata modeling, where the different data types have different attributes from each other.
13. Similar to the product-catalog use case, people also build content management systems on MongoDB. GridFS is functionality built on top of MongoDB that breaks a large file into smaller chunk documents (working within MongoDB's 16 MB document size limit) and stores the pieces across your cluster; when somebody fetches that video or picture, it is reconstructed with what is essentially a parallel query that reassembles all the parts into the original file. This makes it very simple to store large objects in MongoDB along with their metadata inside your database, so you don't need to deal with a separate filesystem for your images or videos.

You might also let people store arbitrary tags about the files in their metadata, for example tagging who is in a photo, so the image metadata can be both dynamic and user-generated. And as you add more features to the site, you will add even more metadata tags into your schema. This works really well in MongoDB.
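The split-and-reassemble idea behind GridFS can be sketched in a few lines. Real GridFS stores a `files` document plus ordered `chunks` documents (with a default chunk size of 255 KB); the 16-byte chunk size below is purely for demonstration:

```python
# Toy GridFS: split a byte stream into ordered chunk records,
# then reassemble them the way a retrieval query would.

CHUNK_SIZE = 16  # absurdly small, just so the example produces several chunks

def split_into_chunks(file_id, data, chunk_size=CHUNK_SIZE):
    """Produce chunk documents shaped like GridFS's {files_id, n, data}."""
    return [
        {"files_id": file_id, "n": i, "data": data[off:off + chunk_size]}
        for i, off in enumerate(range(0, len(data), chunk_size))
    ]

def reassemble(chunks):
    """Sort by sequence number and concatenate back into the original bytes."""
    return b"".join(c["data"] for c in sorted(chunks, key=lambda c: c["n"]))

original = b"a rather large media file, at least for this sketch"
chunks = split_into_chunks("video-1", original)
assert reassemble(chunks) == original
print(len(chunks))
```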
14. We see user data management in things like video games, especially online games where you need to maintain user state, as in a social game. Maybe you keep track of a farm where you plant things and want it to look the same when you come back a week later, or you are playing ten Scrabble games against different people simultaneously and need to track each game's state as it evolves. People commonly store the entire state of a game in a single document. Because we have a sharded environment, we can make sure we have enough RAM to hold the working set for millions of concurrent players.

When Facebook started, they had to build their own custom database for storing the social graphs of millions of people who are constantly adding and removing connections or friends. That works for them, and they have a large engineering team just maintaining that custom technology. With MongoDB, sharding is built in, so you no longer need to maintain that kind of technology yourself. Social graphs tend to be modeled as nodes and edges, but in MongoDB you can simply store a document for every user containing a list of all their friends. Querying a user object then returns their whole friends list in one read, because the data is co-located on disk, so these lookups are very fast. You can also index on array fields in MongoDB, so you can easily run a query to find all the people who have me in their friends list.

In a telco environment you may want to manage the authentication and authorization information for millions of users, with a large database of every cell phone you have issued; every time a phone attaches to your network you look up its dial plan, which network it belongs to, and so on.

Or, if you are building a web application or enterprise application, you might be building a single-sign-on facility for millions of users.
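The friends-as-embedded-array model and the reverse lookup ("who has me in their friends list?") can be sketched as follows. In MongoDB the reverse lookup is just `find({"friends": "alice"})` against a multikey index on the array field; the user names below are invented for illustration:

```python
# Each user document embeds its full friends list as an array.
users = [
    {"_id": "alice", "friends": ["bob", "carol"]},
    {"_id": "bob",   "friends": ["alice"]},
    {"_id": "carol", "friends": ["alice", "bob"]},
    {"_id": "dave",  "friends": []},
]

def followers_of(user_id):
    """All users whose friends array contains user_id
    (what a query on an indexed array field would return)."""
    return sorted(u["_id"] for u in users if user_id in u["friends"])

# Forward lookup: one document read returns the whole friends list.
alice = next(u for u in users if u["_id"] == "alice")
print(alice["friends"])         # ['bob', 'carol']

# Reverse lookup over the array field.
print(followers_of("alice"))    # ['bob', 'carol']
```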
15. If you are looking at an application and trying to figure out whether it is a good fit for MongoDB, here are some characteristics where we see MongoDB really excelling.

If you have lots of similar but slightly different objects, with variation in what data you keep about each object, MongoDB is a great fit because you will never find yourself doing a schema migration.

Many databases consider a slow query to be anything that takes longer than a second; in MongoDB we consider a slow query to be anything that takes longer than 100 ms. MongoDB was designed for very low-latency access to data.

High write throughput lets you scale to tens or hundreds of thousands, or even millions, of writes per second, because sharding gives you near-linear write scaling.

If you are looking to deploy in the cloud, MongoDB is a great fit because you can scale out horizontally, cobbling together small cloud VM instances into a very large database instance.
16. When to use an RDBMS and when to use MongoDB?

Use an RDBMS when you absolutely must have joins that can't easily be replaced by a de-normalized, nested data model.

Use an RDBMS when you absolutely need multi-statement transactions.

Are there standard CMSes available backed by MongoDB?

MongoPress, which is essentially a WordPress clone based on MongoDB.

There is work on Drupal to allow it to be backed by MongoDB.

Usually, though, people build custom CMSes backed by MongoDB.