Introduction to Big Data and NoSQL
NJ SQL Server User Group
May 15, 2012

Melissa Demsak, SQL Architect, Realogy (www.sqldiva.com)
Don Demsak, Advisory Solutions Architect, EMC Consulting (www.donxml.com)
Meet Melissa

• SQL Architect
   – Realogy
• SqlDiva, Twitter: sqldiva
• Email – melissa@sqldiva.com
Meet Don

• Advisory Solutions Architect
   – EMC Consulting
      • Application Architecture, Development & Design
• DonXml.com, Twitter: donxml
• Email – don@donxml.com
• SlideShare - http://www.slideshare.net/dondemsak
The era of Big Data
How did we get here?
• Expensive
  o   Processors
  o   Disk space
  o   Memory
  o   Operating systems
  o   Software
  o   Programmers

• Culture of Limitations
  o   Limit CPU cycles
  o   Limit disk space
  o   Limit memory
  o   Limited OS development
  o   Limited software
  o   Programmers
        • One language
        • One persistence store
Typical RDBMS Implementations
• Fixed table schemas
• Small but frequent reads/writes
• Large batch transactions
• Focus on ACID
  o   Atomicity
  o   Consistency
  o   Isolation
  o   Durability
How we scale RDBMS implementations
1st Step – Build a relational database

  [Diagram: a single Relational Database]

2nd Step – Table Partitioning

  [Diagram: Relational Database with table partitions p1 | p2 | p3]
3rd Step – Database Partitioning

  Customer #1: Browser → Web Tier → B/L Tier → Relational Database
  Customer #2: Browser → Web Tier → B/L Tier → Relational Database
  Customer #3: Browser → Web Tier → B/L Tier → Relational Database
4th Step – Move to the cloud?

  Customer #1: Browser → Web Tier → B/L Tier → SQL Azure Federation
  Customer #2: Browser → Web Tier → B/L Tier → SQL Azure Federation
  Customer #3: Browser → Web Tier → B/L Tier → SQL Azure Federation
Problems created by too much data
• Where to store
• How to store
• How to process
• Organization, searching, and
  metadata
• How to manage access
• How to copy, move, and backup
• Lifecycle
Polyglot Programmer

Polyglot Persistence
(how to store)
• Atlanta 2009 - No:sql(east) conference
   select fun, profit from real_world
   where relational=false
• Billed as “conference of no-rel
  datastores”

             (loose) Definition

•   (often) Open source
•   Non-relational
•   Distributed
•   (often) does not guarantee ACID
Types Of NoSQL Data Stores
5 Groups of Data Models
Relational


Document


Key Value


Graph


Column Family
Document?
• Think of a web page...
  o Relational model requires a column per tag
  o Lots of empty columns
  o Wasted space and processing time

• Document model just stores the pages as is
  o Saves on space
  o Very flexible

• Document Databases
  o   Apache Jackrabbit
  o   CouchDB
  o   MongoDB
  o   SimpleDB
  o   XML Databases
       • MarkLogic Server
       • eXist.
Key/Value Stores
• Simple Index on Key
• Value can be any serialized form of data
• Lots of different implementations
   o Eventually Consistent
       • “If no updates occur for a period, eventually all updates will propagate
          through the system and all replicas will be consistent”
   o Cached in RAM
   o Cached on disk
   o Distributed Hash Tables

• Examples
   o Azure AppFabric Cache
   o Memcached
   o VMWare vFabric GemFire
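A minimal sketch of key/value semantics (a toy in-memory store, not any of the products above): the store indexes only the key, treats the value as an opaque serialized blob, and therefore forces a read-modify-write of the whole value to change one field.

```python
import json

class KeyValueStore:
    """Toy in-memory key/value store: one implicit index on the key;
    values are opaque serialized blobs (here, JSON strings)."""
    def __init__(self):
        self._data = {}  # the simple index on key

    def put(self, key, value):
        # The engine is unaware of the value's internal structure,
        # so the whole value is written as one serialized blob.
        self._data[key] = json.dumps(value)

    def get(self, key):
        blob = self._data.get(key)
        return json.loads(blob) if blob is not None else None

    def delete(self, key):
        self._data.pop(key, None)

store = KeyValueStore()
store.put("user:42", {"name": "melissa", "tags": ["sql", "nosql"]})
# Updating one field means fetching, modifying, and rewriting the whole value:
user = store.get("user:42")
user["tags"].append("bigdata")
store.put("user:42", user)
```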
Graph?
• Graph consists of
   o Nodes ('stations' of the graph)
   o Edges (lines between them)

• Graph Stores
   o AllegroGraph
   o Core Data
   o Neo4j
   o DEX
   o FlockDB
       • Created by the Twitter folks
       • Nodes = Users
       • Edges = Nature of relationship between nodes.
   o Microsoft Trinity (research project)
       • http://research.microsoft.com/en-us/projects/trinity/
Column Family?
• Lots of variants
   o  Object Stores
       • Db4o
       • GemStone/S
       • InterSystems Caché
       • Objectivity/DB
       • ZODB
   o Tabular
       • BigTable
       • Mnesia
       • HBase
       • Hypertable
       • Azure Table Storage
   o Column-oriented
       • Greenplum
       • Microsoft SQL Server 2012
Okay, Got It. Now Let’s Compare Some Real World Scenarios
You Need Constant Consistency
•     You're dealing with financial transactions
•     You're dealing with medical records
•     You're dealing with bonded goods
•     Best you use an RDBMS




You Need Horizontal Scalability
•     You're working across defined time zones
•     You're aggregating large quantities of data
•     You're maintaining a chat server (Facebook chat)
•     Use Column Family Storage.




Frequently Written, Rarely Read
•     Think web counters and the like
•     Every time a user comes to a page: ctr++
•     But it's only read when the report is run
•     Use Key-Value Storage.




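The counter pattern above can be sketched with a plain in-process dict standing in for the key-value store (the `page_view`/`run_report` names are illustrative, not any product's API):

```python
from collections import defaultdict

# Toy key-value counter store: every page hit is a cheap single-key
# increment; the values are only read when the report runs.
counters = defaultdict(int)

def page_view(page):
    # Called on every hit: ctr++
    counters[page] += 1

def run_report():
    # The rare read: dump all counters at once.
    return dict(counters)

for _ in range(3):
    page_view("/home")
page_view("/about")
```

In a real key-value store the increment would be a single keyed write, which is exactly the workload these stores are optimized for.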
Here Today, Gone Tomorrow
• Transient data like...
    o Web Sessions
    o Locks
    o Short Term Stats
       • Shopping cart contents

• Use Key-Value Storage




Where to store
• RAM
   o Fast
   o Expensive
   o Volatile

• Local Disk
   o SSD – super fast
   o Fast spinning disks (7200+ RPM)
   o High bandwidth possible
   o Persistent

• Parallel File System
   o HDFS (Hadoop)
   o Auto-replicated for parallel decentralized I/O

• SAN
   o Storage Area Network
   o Fully managed
   o Expensive

• Cloud
   o Amazon
   o Box.Net
   o DropBox
Big Data
Big Data Definition
• Volume – beyond what traditional environments can handle
• Velocity – need decisions fast
• Variety – many formats
Additional Big Data Concepts
• Volumes & volumes of data
• Unstructured
• Semi-structured
• Not suited for Relational Databases
• Often utilizes MapReduce frameworks
Big Data Examples
• Cassandra
• Hadoop
• Greenplum
• Azure Storage
• EMC Atmos
• Amazon S3
• SQL Azure (with Federations support)?
Real World Example
• Twitter
  o The challenges
     • Needs to store many graphs
            Who you are following
            Who's following you
            Who you receive phone
             notifications from, etc.
     • To deliver a tweet requires
       rapid paging of followers
     • Heavy write load as
       followers are added and
       removed
     • Set arithmetic for @mentions
       (intersection of users).
What did they try?
• Started with Relational
  Databases
• Tried Key-Value storage
  of denormalized lists
• Did it work?
   o Nope
      • Either good at
            Handling the write load
            Or paging large
             amounts of data
            But not both
What did they need?
• Simplest possible thing that would work
• Allow for horizontal partitioning
• Allow write operations to:
  o Arrive out of order
  o Be processed more than once
• Failures should result in redundant work
  o Not lost work!
The Result was FlockDB
• Stores graph data
• Not optimized for graph traversal operations
• Optimized for large adjacency lists
  o List of all edges in a graph
      • Key is the edge; value is a set of the node endpoints

• Optimized for fast read and write
• Optimized for page-able set arithmetic.
How Does it Work?
• Stores graphs as sets of edges between nodes
• Data is partitioned by node
  o All queries can be answered by a single partition

• Write operations are idempotent
  o Can be applied multiple times without changing the result

• And commutative
  o Changing the order of operands doesn't change the result.
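The idempotent, commutative, partitioned-by-node design can be illustrated with a small sketch (hypothetical names; not FlockDB's actual implementation):

```python
# Hypothetical sketch of FlockDB-style edge storage: adjacency sets
# partitioned by source node, with idempotent, commutative writes.
NUM_PARTITIONS = 4
partitions = [{} for _ in range(NUM_PARTITIONS)]  # node -> set of neighbors

def _partition(node):
    # All edges out of `node` live on one partition, so any
    # "who does X follow?" query hits a single partition.
    return partitions[hash(node) % NUM_PARTITIONS]

def add_edge(src, dst):
    # Idempotent: applying the same write twice changes nothing.
    _partition(src).setdefault(src, set()).add(dst)

def remove_edge(src, dst):
    _partition(src).get(src, set()).discard(dst)

def out_edges(src):
    return _partition(src).get(src, set())

# Writes may arrive out of order or be replayed; the result is the same.
add_edge("alice", "bob")
add_edge("alice", "carol")
add_edge("alice", "bob")          # replayed write - no effect
add_edge("dave", "carol")

# Set arithmetic, e.g. for @mentions: intersection of adjacency sets.
common = out_edges("alice") & out_edges("dave")
```

Because every write is a set insert or delete, replaying a failed operation produces redundant work rather than lost or corrupted data.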
How to Process Big Data
ACID
• Atomicity
  o All or Nothing

• Consistency
  o Valid according to all defined rules

• Isolation
  o No transaction should be able to interfere with another transaction

• Durability
  o Once a transaction has been committed, it will remain so, even in
    the event of power loss, crashes, or errors
BASE
• Basically Available
  o High availability but not always consistent

• Soft state
  o Background cleanup mechanism

• Eventual consistency
  o Given a sufficiently long period of time over which no changes are
    sent, all updates can be expected to propagate eventually through
    the system and all the replicas will be consistent.
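One way to make eventual consistency concrete is a last-writer-wins sketch (an illustrative assumption - BASE doesn't mandate any particular conflict-resolution scheme): replicas can apply the same timestamped updates in any order, any number of times, and still converge.

```python
import random

class Replica:
    """Toy replica keeping the newest write per key (last-writer-wins)."""
    def __init__(self):
        self.state = {}  # key -> (timestamp, value)

    def apply(self, update):
        ts, key, value = update
        current = self.state.get(key)
        # Only a newer timestamp changes state, so applying updates
        # out of order or more than once converges to the same result.
        if current is None or ts > current[0]:
            self.state[key] = (ts, value)

updates = [(1, "x", "a"), (2, "x", "b"), (1, "y", "c")]
replicas = [Replica() for _ in range(3)]
for r in replicas:
    shuffled = updates[:]
    random.shuffle(shuffled)  # updates propagate in arbitrary order
    for u in shuffled:
        r.apply(u)
# Once no new updates are sent, all replicas hold identical state.
```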
Traditional (relational) Approach

  Transactional Data Store → Extract → Transform → Load → Data Warehouse
Big Data Approach
• MapReduce Pattern/Framework
 o Input Reader
 o Map Function – transforms input to a common shape (format)
 o Partition Function
 o Compare Function
 o Reduce Function
 o Output Writer
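The components above can be sketched as a minimal single-process pipeline (illustrative names; real frameworks distribute the map and reduce phases across a cluster):

```python
from itertools import groupby
from operator import itemgetter

def input_reader():                      # Input Reader
    yield from [{"tags": ["nosql", "bigdata"]},
                {"tags": ["nosql"]}]

def map_fn(doc):                         # Map: emit (key, value) pairs
    for tag in doc["tags"]:
        yield (tag, 1)

def partition_fn(key, num_partitions):   # Partition: route keys to reducers
    return hash(key) % num_partitions

def reduce_fn(key, values):              # Reduce: combine values per key
    return (key, sum(values))

def run(num_partitions=2):
    partitions = [[] for _ in range(num_partitions)]
    for doc in input_reader():
        for key, value in map_fn(doc):
            partitions[partition_fn(key, num_partitions)].append((key, value))
    output = {}                          # Output Writer
    for part in partitions:
        part.sort(key=itemgetter(0))     # Compare: sort so equal keys group
        for key, group in groupby(part, key=itemgetter(0)):
            k, total = reduce_fn(key, [v for _, v in group])
            output[k] = total
    return output
```

This is the same tag-counting job the MongoDB example that follows expresses in its map/reduce shell syntax.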
MongoDB Example

> // map function
> m = function(){
...    this.tags.forEach(
...        function(z){
...            emit( z , { count : 1 } );
...        }
...    );
...};

> // reduce function
> r = function( key , values ){
...    var total = 0;
...    for ( var i=0; i<values.length; i++ )
...        total += values[i].count;
...    return { count : total };
...};

> // execute
> res = db.things.mapReduce(m, r, { out : "myoutput" } );
What is Hadoop?
• A scalable fault-tolerant grid operating system for
  data storage and processing
• Its scalability comes from the marriage of:
  o HDFS: Self-Healing High-Bandwidth Clustered Storage
  o MapReduce: Fault-Tolerant Distributed Processing
• Operates on unstructured and structured data
• A large and active ecosystem (many developers
  and additions like HBase, Hive, Pig, …)
• Open source under the friendly Apache License
• http://wiki.apache.org/hadoop/
Hadoop Design Axioms
1. System Shall Manage and Heal Itself
2. Performance Shall Scale Linearly
3. Compute Should Move to Data
4. Simple Core, Modular and Extensible
Hadoop Core Components

     Store             Process


     HDFS           Map/Reduce



  Self-healing      Fault-tolerant
High-bandwidth       distributed
Clustered storage    processing
HDFS: Hadoop Distributed File System
Block Size = 64MB
Replication Factor = 3

Cost/GB is a few ¢/month vs $/month
Hadoop Map/Reduce
Hadoop Job Architecture
[Diagram: YARN-style job architecture – Clients submit jobs to a central Resource
Manager, which launches an App Master for each job inside a Container on a Node
Manager; the App Master requests further Containers for its tasks. Arrows show
MapReduce status, job submission, node status, and resource requests.]
Microsoft embraces Hadoop




Good for enterprises & developers
Great for end users!
HADOOP [Azure and Enterprise]

[Diagram: programming surfaces – Java OM, Streaming OM, HiveQL, PigLatin,
.NET/C#/F#, (T)SQL – over an "ocean of data" (NoSQL: unstructured, semi-structured,
structured; plus ETL) stored in HDFS: "a seamless ocean of information processing
and analytics", fed by EIS/ERP, RDBMS, file systems, OData [RSS], and Azure Storage.]
Hive Plug-in for Excel




THANK YOU

Editor's Notes

  • #19 At least four groups of data model: key-value, document, column-family, and graph. Looking at this list, there's a big similarity between the first three - all have a fundamental unit of storage which is a rich structure of closely related data: for key-value stores it's the value, for document stores it's the document, and for column-family stores it's the column family. In DDD terms, this group of data is an aggregate. A graph database stores data structured in the nodes and relationships of a graph. Column family (BigTable-style) databases are an evolution of key-value, using "families" to allow grouping of rows. The rise of NoSQL databases has been driven primarily by the desire to store data effectively on large clusters - such as the setups used by Google and Amazon. Relational databases were not designed with clusters in mind, which is why people have cast around for an alternative. Storing aggregates as fundamental units makes a lot of sense for running on a cluster. Aggregates make natural units for distribution strategies such as sharding, since you have a large clump of data that you expect to be accessed together.

    The Relational Model: The relational model provides for the storage of records that are made up of tuples. Records are stored in tables. Tables are defined by a schema, which determines what columns are in the table. Columns have a name and a type. All records within a table fit that table's definition. SQL is a query language designed to operate over tables. SQL provides syntax for finding records that meet criteria, as well as for relating records in one table to another via joins; a join finds a record in one table based on its relationship to a record in another table. Records can be created (inserted) or deleted. Fields within a record can be updated individually. Implementations of the relational model usually provide transactions, which provide a means to make modifications spanning multiple records atomically. In terms of what programming languages provide, tables are like arrays or lists of records or structures. For high performance access, tables can be indexed in various ways using b-trees or hash maps.

    Key-Value Stores: Key-value stores provide access to a value based on a key. The key-value pair can be created (inserted) or deleted, and the value associated with a key may be updated. Key-value stores don't usually provide transactions. In terms of what programming languages provide, key-value stores resemble hash tables; these have many names: HashMap (Java), hash (Perl), dict (Python), associative array (PHP), boost::unordered_map<...> (C++). Key-value stores provide one implicit index on the key itself. A key-value store may not sound like the most useful thing, but a lot of information can be stored in the value. It is quite common for the value to be an XML document, a JSON object, or some other serialized form. The key point here is that the storage engine is not aware of the internal structure of the value. It is up to the client application to interpret the value and manage its contents. The value can only be written as a whole; if the client is storing a JSON object and only wants to update one field, the entire value must be fetched, the new value substituted, and then the entire value must be written back. The inability to fetch data by anything other than one key may appear limiting, but there are workarounds. If the application requires a secondary index, the application can maintain one itself. To do this, the application manages a second collection of key-value pairs where the key is the value of another field in the first collection, and the value is the primary key in the first collection. Because there are no transactions that can be used to make sure that the secondary index is kept synchronized with the original collection, any application that does this would be wise to have a periodic syncing process to clean up after any partial changes that occur due to application crashes, bugs, or errors.

    Document Stores: Document stores provide access to structured data, but unlike the relational model, there may not be a schema that is enforced. In essence, the application stores bags of key-value pairs. In order to operate in this environment, the application adopts some conventions about how to deal with differing bags it may retrieve, or it may take advantage of the storage engine's ability to put different documents in different collections, which the application will use to manage its data. Unlike a relational store, document stores usually support nested structures. For example, for document stores that support XML or JSON documents, the value of a field may be something that looks like another document. Document stores can also support array- or list-valued keys. Unlike a key-value store, document stores are aware of the internal structure of the document. This allows the storage engine to support secondary indexes directly, allowing for efficient queries on any field. The ability to support nested document storage leads to query languages that can be used to search for items nested inside others; XQuery is one example of this. MongoDB supports some similar functionality by allowing the specification of JSON field paths in queries.

    Column Stores: Column stores are like relational stores, except that they flip the data around. Instead of storing records, column stores store all the values for a column together in a stream. An index provides a means to get column values for any particular record. Map-reduce implementations such as Hadoop are most efficient if they can stream in their data, and column stores work particularly well for that. As a result, stores like HBase and Hypertable are often used as non-relational data warehouses to feed map-reduce for analytics. A relational-style column scalar may not be the most useful for analytics, so users often store more complex structures in columns. This manifests directly in Cassandra, which introduces the notion of "column families," which get treated as a "super-column." Column-oriented stores support retrieving records, but this requires fetching the column values from their individual columns and re-assembling the record.

    Graph Databases: Graph databases store vertices and the edges between them. Some support adding annotations to the vertices and/or edges. This can be used to model things like social graphs (people are represented by vertices, and their relationships are the edges) or real-world objects (components are represented by vertices, and their connectedness is represented by edges). The content on IMDB is tied together by a graph: movies are related to the actors in them, and actors are related to the movies they star in, forming a large complex graph. The access and query languages for graph databases are the most different of the set discussed here. Graph database query languages are generally about finding paths in the graph based on either endpoints or constraints on attributes of the paths between endpoints; one example is SPARQL.
  • #48 Pool commodity servers in a single hierarchical namespace. Designed for large files that are written once and read many times. The example here shows what happens with a replication factor of 3: each data block is present in at least 3 separate data nodes. A typical Hadoop node is eight cores with 16GB RAM and four 1TB SATA disks. The default block size is 64MB, though most folks now set it to 128MB.