Building Applications with DynamoDB

An Online Seminar - 16th May 2012
Dr Matt Wood, Amazon Web Services
Thank you!
Building Applications with DynamoDB




               Getting started


               Data modeling


               Partitioning


               Analytics
Getting started with
     DynamoDB

                 quick review
DynamoDB is a managed
NoSQL database service.
Store and retrieve any amount of data.
Serve any level of request traffic.
Without the
operational burden.
Consistent, predictable
performance.
Single-digit millisecond latencies.
Backed by solid-state drives.
Flexible data model.

Key/attribute pairs.
No schema required.
Easy to create. Easy to adjust.
Seamless scalability.

No table size limits. Unlimited storage.
No downtime.
Durable.

Consistent, disk-only writes.
Replication across data centres and
availability zones.
Without the
operational burden.

             FOCUS ON YOUR APP
Two decisions + three clicks
= ready for use

Primary keys +
level of throughput
Provisioned throughput.

Reserve IOPS for reads and writes.
Scale up (or down) at any time.
Pay per capacity unit.

Priced per hour of
provisioned throughput.
Write throughput.

Units = size of item x writes/second
$0.01 per hour for 10 write units
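
A hedged worked example using the numbers above (and assuming item size is rounded up to the nearest KB, as the provisioned-throughput model does): writing 1 KB items at 50 writes/second needs 1 x 50 = 50 write units, which at $0.01 per hour for 10 write units costs 50 / 10 x $0.01 = $0.05 per hour.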
Consistent writes.

Atomic increment/decrement.
Optimistic concurrency control.
aka: “conditional writes”.
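
A hedged sketch of a conditional write using the AWS SDK for PHP, in the same style as the create_table example later in the deck; the 'Price' attribute, its values, and the exact parameter layout are assumptions for illustration.

// Hedged sketch: write the item only if Price is still 20. If another writer
// changed it first, DynamoDB rejects the call with a ConditionalCheckFailed error.
$response = $dynamodb->put_item(array(
    'TableName' => 'ProductCatalog',
    'Item' => array(
        'Id'    => array(AmazonDynamoDB::TYPE_NUMBER => '100'),
        'Price' => array(AmazonDynamoDB::TYPE_NUMBER => '25')
    ),
    'Expected' => array(
        'Price' => array('Value' => array(AmazonDynamoDB::TYPE_NUMBER => '20'))
    )
));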
Transactions.

Item level transactions only.
Puts, updates and deletes are ACID.
Read throughput.

Strongly consistent reads:
provisioned units = size of item x reads/second.
$0.01 per hour for 50 read units.

Eventually consistent reads:
provisioned units = (size of item x reads/second) / 2.
$0.01 per hour for 100 read units.

Same latency expectations.
Mix and match at “read time”.
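
A hedged sketch of choosing the consistency mode per read, in the same SDK style as the create_table example that follows; the parameter layout is assumed from the 2011-12-05 low-level API.

// Strongly consistent read: reflects all prior successful writes.
$strong = $dynamodb->get_item(array(
    'TableName' => 'ProductCatalog',
    'Key' => array(
        'HashKeyElement' => array(AmazonDynamoDB::TYPE_NUMBER => '100')
    ),
    'ConsistentRead' => true
));

// Eventually consistent read (the default): half the provisioned-throughput cost.
$eventual = $dynamodb->get_item(array(
    'TableName' => 'ProductCatalog',
    'Key' => array(
        'HashKeyElement' => array(AmazonDynamoDB::TYPE_NUMBER => '100')
    )
));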
Two decisions + three clicks
= ready for use

Two decisions + one API call
= ready for use
$create_response = $dynamodb->create_table(array(
    'TableName' => 'ProductCatalog',
    // Hash-only primary key: the numeric 'Id' attribute.
    'KeySchema' => array(
        'HashKeyElement' => array(
            'AttributeName' => 'Id',
            'AttributeType' => AmazonDynamoDB::TYPE_NUMBER
        )
    ),
    // Reserved read and write capacity for the table.
    'ProvisionedThroughput' => array(
        'ReadCapacityUnits' => 10,
        'WriteCapacityUnits' => 5
    )
));
Two decisions + one API call
= ready for development, production, and scale.
Authentication.

Session-based to minimize latency.
Uses the Amazon Security Token Service.
Handled by AWS SDKs.
Integrates with IAM.
Monitoring.

CloudWatch metrics:
latency, consumed read and write
throughput, errors and throttling.
Libraries, mappers & mocks.
ColdFusion, Django, Erlang, Java, .Net,
Node.js, Perl, PHP, Python, Ruby

http://j.mp/dynamodb-libs
DynamoDB data models
DynamoDB semantics.

Tables, items and attributes.
Tables contain items.

Unlimited items per table.
Items are a collection of
attributes.

Each attribute has a key and a value.
An item can have any number of
attributes, up to 64 KB in total.
Two scalar data types.

String: Unicode, UTF-8 binary encoding.
Number: 38-digit precision.

Multi-value strings and numbers.
id = 100   date = 2012-05-16-09-00-10   total = 25.00
id = 101   date = 2012-05-15-15-00-11   total = 35.00
id = 101   date = 2012-05-16-12-00-10   total = 100.00
id = 102   date = 2012-03-20-18-23-10   total = 20.00
id = 102   date = 2012-03-20-18-23-10   total = 120.00
Table, item, attribute.

Table: the whole collection of rows above.
Item: a single row (for example: id = 100, date = 2012-05-16-09-00-10, total = 25.00).
Attribute: one key/value pair within an item (for example: total = 25.00).
Where is the schema?

Tables do not require a formal schema.
Items are an arbitrarily sized hash.
Just need to specify the primary key.
Items are indexed by
primary key.

Single hash keys and composite keys.
Hash Key

In the example table above, id is the hash key.
Range key for queries.

Querying items by composite key.

Hash Key + Range Key

In the example table, id is the hash key and date is the range key.
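
Creating a table with a composite primary key is the same CreateTable call with a range key element added. A hedged sketch for the id + date example table; the 'Orders' table name and the exact parameter layout are assumptions, following the hash-only example shown earlier.

// Hedged sketch: an Orders-style table keyed by id (hash) + date (range).
$create_response = $dynamodb->create_table(array(
    'TableName' => 'Orders',
    'KeySchema' => array(
        'HashKeyElement' => array(
            'AttributeName' => 'id',
            'AttributeType' => AmazonDynamoDB::TYPE_NUMBER
        ),
        'RangeKeyElement' => array(
            'AttributeName' => 'date',
            'AttributeType' => AmazonDynamoDB::TYPE_STRING
        )
    ),
    'ProvisionedThroughput' => array(
        'ReadCapacityUnits'  => 10,
        'WriteCapacityUnits' => 5
    )
));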
Programming DynamoDB.

Small but perfectly formed.
Whole programming interface
fits on one slide.
CreateTable           PutItem

UpdateTable           GetItem

DeleteTable        UpdateItem

DescribeTable      DeleteItem

ListTables       BatchGetItem

Query           BatchWriteItem

Scan
Conditional updates.
PutItem, UpdateItem, DeleteItem can
take optional conditions for operation.

UpdateItem performs atomic
increments.
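
A hedged sketch of an atomic increment against the Scores table from the gaming example later in the deck, in the same SDK style as before; the parameter layout is assumed from the 2011-12-05 low-level API.

// Hedged sketch: ADD on a number attribute performs an atomic increment.
$response = $dynamodb->update_item(array(
    'TableName' => 'Scores',
    'Key' => array(
        'HashKeyElement'  => array(AmazonDynamoDB::TYPE_STRING => 'mza'),
        'RangeKeyElement' => array(AmazonDynamoDB::TYPE_STRING => 'angry-birds')
    ),
    'AttributeUpdates' => array(
        'score' => array(
            'Action' => 'ADD',
            'Value'  => array(AmazonDynamoDB::TYPE_NUMBER => '500')
        )
    )
));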
One API call, multiple items.
BatchGet returns multiple items by
primary key.

BatchWrite performs up to 25 put or
delete operations.

Throughput is measured by IO,
not API calls.
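
A hedged sketch of fetching two items in one BatchGetItem call, under the same SDK assumptions as the earlier examples.

// Hedged sketch: fetch items 100 and 101 from ProductCatalog in one call.
$response = $dynamodb->batch_get_item(array(
    'RequestItems' => array(
        'ProductCatalog' => array(
            'Keys' => array(
                array('HashKeyElement' => array(AmazonDynamoDB::TYPE_NUMBER => '100')),
                array('HashKeyElement' => array(AmazonDynamoDB::TYPE_NUMBER => '101'))
            )
        )
    )
));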
Query vs Scan
Query for composite key queries.
Scan for full table scans, exports.

Both support pages and limits.
Maximum response is 1 MB in size.
Query patterns.
Retrieve all items by hash key.

Range key conditions:
==, <, >, >=, <=, begins with, between.

Counts. Top and bottom n values.
Paged responses.
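
A hedged sketch of a Query against the id + date example table; the 'Orders' table name is an assumption. It fetches the latest orders for id 101 placed on 2012-05-16, newest first.

// Hedged sketch: orders for id 101 on 2012-05-16, newest first, at most 10.
$response = $dynamodb->query(array(
    'TableName'    => 'Orders',
    'HashKeyValue' => array(AmazonDynamoDB::TYPE_NUMBER => '101'),
    'RangeKeyCondition' => array(
        'ComparisonOperator' => 'BEGINS_WITH',
        'AttributeValueList' => array(
            array(AmazonDynamoDB::TYPE_STRING => '2012-05-16')
        )
    ),
    'ScanIndexForward' => false,
    'Limit'            => 10
));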
Modeling patterns
Patterns



 1. Mapping relationships
 with range keys.
 No cross-table joins in DynamoDB.

 Use composite keys to model
 relationships.
Data model example: online gaming.
Storing scores and leader boards.

Players with high scores.
Leader board for each game.

  Players: hash key
   user_id = mza        location = Cambridge   joined = 2011-07-04
   user_id = jeffbarr   location = Seattle     joined = 2012-01-20
   user_id = werner     location = Worldwide   joined = 2011-05-15

  Scores: composite key
   user_id = mza      game = angry-birds   score = 11,000
   user_id = mza      game = tetris        score = 1,223,000
   user_id = werner   game = bejewelled    score = 55,000

  Leader boards: composite key
   game = angry-birds   score = 11,000      user_id = mza
   game = tetris        score = 1,223,000   user_id = mza
   game = tetris        score = 9,000,000   user_id = jeffbarr

Scores: look up scores by user (and by game).
Leader boards: look up high scores by game.
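
A hedged sketch of reading a leader board: the top 10 scores for one game, queried from the leader board table (hash key = game, range key = score) in descending range-key order. The 'LeaderBoards' table name is an assumption.

// Hedged sketch: top 10 tetris scores, highest first (range key = score).
$response = $dynamodb->query(array(
    'TableName'        => 'LeaderBoards',
    'HashKeyValue'     => array(AmazonDynamoDB::TYPE_STRING => 'tetris'),
    'ScanIndexForward' => false,
    'Limit'            => 10
));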
Patterns




 2. Handling large items.
 Unlimited attributes per item.
 Unlimited items per table.

 Max 64 KB per item.
Data model example: large items.
Storing more than 64 KB across items.


  Large messages: composite keys
   message_id = 1   part = 1   message = <first 64k>
   message_id = 1   part = 2   message = <second 64k>
   message_id = 1   part = 3   message = <third 64k>


                    Split attributes across items.
              Query by message_id and part to retrieve.
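
A hedged sketch of reassembling a large message: query on the hash key alone and the parts come back ordered by the part range key. The 'LargeMessages' table name is an assumption.

// Hedged sketch: all parts of message 1, returned in part (range key) order.
$response = $dynamodb->query(array(
    'TableName'    => 'LargeMessages',
    'HashKeyValue' => array(AmazonDynamoDB::TYPE_NUMBER => '1')
));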
Patterns



 Store a pointer to objects in
 Amazon S3.
 Large data stored in S3.
 Location stored in DynamoDB.

 99.999999999% data durability in S3.
Patterns



 3. Managing secondary
 indices.

 Not supported by DynamoDB.

 Create your own.
Data model example: secondary indices.
Maintaining your own index tables.


  Users: hash key
   user_id = mza       first_name = Matt     last_name = Wood
   user_id = mattfox   first_name = Matt     last_name = Fox
   user_id = werner    first_name = Werner   last_name = Vogels


  First name index: composite keys
   first_name = Matt     user_id = mza
   first_name = Matt     user_id = mattfox
   first_name = Werner   user_id = werner

  Last name index: composite keys
   last_name = Wood     user_id = mza
   last_name = Fox      user_id = mattfox
   last_name = Vogels   user_id = werner
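
Keeping a hand-rolled index in step with the main table means writing to both on every change. A hedged sketch under the same SDK assumptions as before; the table names follow the example above, and in practice the two writes are not atomic, so failures need retry or cleanup logic.

// Hedged sketch: write the user item...
$dynamodb->put_item(array(
    'TableName' => 'Users',
    'Item' => array(
        'user_id'    => array(AmazonDynamoDB::TYPE_STRING => 'mza'),
        'first_name' => array(AmazonDynamoDB::TYPE_STRING => 'Matt'),
        'last_name'  => array(AmazonDynamoDB::TYPE_STRING => 'Wood')
    )
));

// ...and mirror the indexed attributes into the index table
// (hash = first_name, range = user_id) so a Query can find users by first name.
$dynamodb->put_item(array(
    'TableName' => 'FirstNameIndex',
    'Item' => array(
        'first_name' => array(AmazonDynamoDB::TYPE_STRING => 'Matt'),
        'user_id'    => array(AmazonDynamoDB::TYPE_STRING => 'mza')
    )
));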
Patterns




 4. Time series data.
 Logging, click-through, ad views,
 game-play data, application usage.

 Non-uniform access patterns.
 Newer data is ‘live’.
 Older data is read-only.
Data model example: time series data.
Rolling tables for hot and cold data.


  Events table: composite keys
   event_id = 1000   timestamp = 2012-05-16-09-59-01   key = value
   event_id = 1001   timestamp = 2012-05-16-09-59-02   key = value
   event_id = 1002   timestamp = 2012-05-16-09-59-02   key = value

  Events table for April: composite keys
   event_id = 400   timestamp = 2012-04-01-00-00-01
   event_id = 401   timestamp = 2012-04-01-00-00-02
   event_id = 402   timestamp = 2012-04-01-00-00-03

  Events table for January: composite keys
   event_id = 100   timestamp = 2012-01-01-00-00-01
   event_id = 101   timestamp = 2012-01-01-00-00-02
   event_id = 102   timestamp = 2012-01-01-00-00-03
Patterns


       Hot and cold tables.


Dec    Jan       Feb   Mar   April       May

         lower                         higher
       throughput                    throughput

       Archive cold data to S3, delete the cold tables.

       As time rolls forward, create a table for the new month and retire the
       oldest (Jan-June, Feb-July, Mar-Aug, and so on).
Patterns


 Not out of mind.

 DynamoDB and S3 data can be
 integrated for analytics.

 Run queries across hot and cold data
 with Elastic MapReduce.
Partitioning best practices
Uniform workloads.
DynamoDB divides table data into
multiple partitions.

Data is distributed primarily by
hash key.

Provisioned throughput is divided
evenly across the partitions.
Uniform workloads.
To achieve and maintain full
provisioned throughput for a table,
spread your workload evenly across
the hash keys.
Non-uniform workloads.

Some requests might be throttled,
even at high levels of provisioned
throughput.


Some best practices...
Patterns



 1. Distinct values for hash
 keys.

 Hash key elements should have a
 high number of distinct values.
Data model example: hash key selection.
Well-distributed workloads.


  Users
   user_id = mza        first_name = Matt     last_name = Wood
   user_id = jeffbarr   first_name = Jeff     last_name = Barr
   user_id = werner     first_name = Werner   last_name = Vogels
   user_id = mattfox    first_name = Matt     last_name = Fox
   ...


             Lots of users with unique user_id.
             Workload well distributed across user partitions.
Patterns



 2. Avoid limited hash key
 values.

 Hash key elements should have a
 high number of distinct values.
Data model example: small hash value range.
Non-uniform workload.


  Status responses
   status = 200   date = 2012-04-01-00-00-01
   status = 404   date = 2012-04-01-00-00-01
   status = 404   date = 2012-04-01-00-00-01
   status = 404   date = 2012-04-01-00-00-01


              Small number of status codes.
              Uneven, non-uniform workload.
Patterns



 3. Model for even
 distribution of access.

 Access by hash key value should be
 evenly distributed across the dataset.
Data model example: uneven access pattern by key.
Non-uniform access workload.


  Devices
   mobile_id = 100   access_date = 2012-04-01-00-00-01
   mobile_id = 100   access_date = 2012-04-01-00-00-02
   mobile_id = 100   access_date = 2012-04-01-00-00-03
   mobile_id = 100   access_date = 2012-04-01-00-00-04
   ...

            Large number of devices.
            A small number are much more popular than the others.
            Workload unevenly distributed.
Data model example: randomize access pattern by key.
Towards a uniform workload.


  Devices
   mobile_id = 100.1   access_date = 2012-04-01-00-00-01
   mobile_id = 100.2   access_date = 2012-04-01-00-00-02
   mobile_id = 100.3   access_date = 2012-04-01-00-00-03
   mobile_id = 100.4   access_date = 2012-04-01-00-00-04
   ...


            Randomize the access pattern by adding a suffix to the hash key.
            Workload randomized across hash keys.
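
A hedged sketch of writing with a randomized hash key suffix, under the same SDK assumptions as the earlier examples; the 'Devices' table name and the suffix range are illustrative only.

// Hedged sketch: spread a hot device across partitions by appending a suffix
// (mobile_id "100" becomes "100.1" .. "100.10"); reads must then query each
// suffix and merge the results.
$suffix   = rand(1, 10);
$response = $dynamodb->put_item(array(
    'TableName' => 'Devices',
    'Item' => array(
        'mobile_id'   => array(AmazonDynamoDB::TYPE_STRING => '100.' . $suffix),
        'access_date' => array(AmazonDynamoDB::TYPE_STRING => '2012-04-01-00-00-01')
    )
));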
Design for a uniform
workload.
Analytics with DynamoDB
Seamless scale.
Scalable methods for data processing.
Scalable methods for backup/restore.
Amazon Elastic MapReduce.
Managed Hadoop service for
data-intensive workflows.

http://aws.amazon.com/emr
Hadoop under the hood.
Take advantage of the Hadoop
ecosystem: streaming interfaces,
Hive, Pig, Mahout.
Distributed data processing.

API driven. Analytics at any scale.
Query flexibility with Hive.
create external table items_db
  (id string, votes bigint, views bigint) stored by
  'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
   tblproperties
  ("dynamodb.table.name" = "items",
   "dynamodb.column.mapping" =
   "id:id,votes:votes,views:views");
Query flexibility with Hive.

select id, votes, views
from items_db
order by views desc;
Data export/import.

Use EMR for backup and restore
to Amazon S3.
Data export/import.
CREATE EXTERNAL TABLE orders_s3_new_export ( order_id
string, customer_id string, order_date int, total
double )
PARTITIONED BY (year string, month string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://export_bucket';

INSERT OVERWRITE TABLE
orders_s3_new_export
PARTITION (year='2012', month='01')
SELECT * from orders_ddb_2012_01;
Integrate live and
archive data
Run queries across external Hive tables
on S3 and DynamoDB.

Live & archive. Metadata & big objects.
In summary...




DynamoDB                                 Partitioning
Predictable performance                  Automatic partitioning
Provisioned throughput                   Hot and cold data
Libraries & mappers                      Size/throughput ratio


Data modeling                            Analytics
Tables & items                           Elastic MapReduce
Read & write patterns                    Hive queries
Time series data                         Backup & restore
DynamoDB free tier
5 writes, 10 consistent reads per second
           100 MB of storage
aws.amazon.com/dynamodb
aws.amazon.com/documentation/dynamodb


         best practice + sample code
Thank you!
Q&A
matthew@amazon.com
      @mza

 
Crea la tua prima serverless ledger-based app con QLDB e NodeJS
Crea la tua prima serverless ledger-based app con QLDB e NodeJSCrea la tua prima serverless ledger-based app con QLDB e NodeJS
Crea la tua prima serverless ledger-based app con QLDB e NodeJS
 
API moderne real-time per applicazioni mobili e web
API moderne real-time per applicazioni mobili e webAPI moderne real-time per applicazioni mobili e web
API moderne real-time per applicazioni mobili e web
 
Database Oracle e VMware Cloud™ on AWS: i miti da sfatare
Database Oracle e VMware Cloud™ on AWS: i miti da sfatareDatabase Oracle e VMware Cloud™ on AWS: i miti da sfatare
Database Oracle e VMware Cloud™ on AWS: i miti da sfatare
 
Tools for building your MVP on AWS
Tools for building your MVP on AWSTools for building your MVP on AWS
Tools for building your MVP on AWS
 
How to Build a Winning Pitch Deck
How to Build a Winning Pitch DeckHow to Build a Winning Pitch Deck
How to Build a Winning Pitch Deck
 
Building a web application without servers
Building a web application without serversBuilding a web application without servers
Building a web application without servers
 
Fundraising Essentials
Fundraising EssentialsFundraising Essentials
Fundraising Essentials
 
AWS_HK_StartupDay_Building Interactive websites while automating for efficien...
AWS_HK_StartupDay_Building Interactive websites while automating for efficien...AWS_HK_StartupDay_Building Interactive websites while automating for efficien...
AWS_HK_StartupDay_Building Interactive websites while automating for efficien...
 
Introduzione a Amazon Elastic Container Service
Introduzione a Amazon Elastic Container ServiceIntroduzione a Amazon Elastic Container Service
Introduzione a Amazon Elastic Container Service
 

Building Applications with DynamoDB

  • 1. Building Applications with DynamoDB An Online Seminar - 16th May 2012 Dr Matt Wood, Amazon Web Services
  • 4. Building Applications with DynamoDB Getting started
  • 5. Building Applications with DynamoDB Getting started Data modeling
  • 6. Building Applications with DynamoDB Getting started Data modeling Partitioning
  • 7. Building Applications with DynamoDB Getting started Data modeling Partitioning Analytics
  • 8. Getting started with DynamoDB quick review
  • 9. DynamoDB is a managed NoSQL database service. Store and retrieve any amount of data. Serve any level of request traffic.
  • 11. Consistent, predictable performance. Single-digit millisecond latencies. Backed by solid-state drives.
  • 12. Flexible data model. Key/attribute pairs. No schema required. Easy to create. Easy to adjust.
  • 13. Seamless scalability. No table size limits. Unlimited storage. No downtime.
  • 14. Durable. Consistent, disk-only writes. Replication across data centres and availability zones.
  • 16. Without the operational burden. FOCUS ON YOUR APP
  • 17. Two decisions + three clicks = ready for use
  • 18. Primary keys + level of throughput. Two decisions + three clicks = ready for use
  • 19. Provisioned throughput. Reserve IOPS for reads and writes. Scale up (or down) at any time.
  • 20. Pay per capacity unit. Priced per hour of provisioned throughput.
  • 21. Write throughput. Units = size of item x writes/second $0.01 per hour for 10 write units
  • 22. Consistent writes. Atomic increment/decrement. Optimistic concurrency control. aka: “conditional writes”.
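
A quick sketch of the conditional write / optimistic concurrency idea described above, written in the same array-request style as the create_table example in this deck (API version 2011-12-05). The orders table, its attributes and the version counter are illustrative assumptions, and the snake_case put_item method name is assumed to match the SDK used elsewhere in the deck.

// Hypothetical optimistic-concurrency write: only overwrite the item if the
// stored version still matches the version this client last read.
$response = $dynamodb->put_item(array(
    'TableName' => 'orders',
    'Item' => array(
        'id'      => array('N' => '100'),
        'total'   => array('N' => '30.00'),
        'version' => array('N' => '4')                       // new version being written
    ),
    'Expected' => array(
        'version' => array('Value' => array('N' => '3'))     // version previously read
    )
));
// If another writer got there first, the condition fails and the application
// re-reads the item and retries.
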
  • 23. Transactions. Item level transactions only. Puts, updates and deletes are ACID.
  • 24. strongly consistent eventually consistent Read throughput.
  • 25. strongly consistent eventually consistent Read throughput. Provisioned units = size of item x reads/second $0.01 per hour for 50 read units
  • 26. strongly consistent eventually consistent Read throughput. Provisioned units = size of item x reads/second 2 $0.01 per hour for 100 read units
  • 27. strongly consistent eventually consistent Read throughput. Same latency expectations. Mix and match at “read time”.
  • 28. Two decisions + three clicks = ready for use
  • 32. Two decisions + three clicks = ready for use
  • 33. Two decisions + one API call = ready for use
  • 34. $create_response = $dynamodb->create_table(array( 'TableName' => 'ProductCatalog', 'KeySchema' => array( 'HashKeyElement' => array( 'AttributeName' => 'Id', 'AttributeType' => AmazonDynamoDB::TYPE_NUMBER ) ), 'ProvisionedThroughput' => array( 'ReadCapacityUnits' => 10, 'WriteCapacityUnits' => 5 ) ));
  • 35. Two decisions + one API call = ready for use
  • 36. Two decisions + one API call = ready for development
  • 37. Two decisions + one API call = ready for production
  • 38. Two decisions + one API call = ready for scale
  • 40. Authentication. Session based to minimize latency. Uses Amazon Security Token Service. Handled by AWS SDKs. Integrates with IAM.
  • 41. Monitoring. CloudWatch metrics: latency, consumed read and write throughput, errors and throttling.
  • 42. Libraries, mappers & mocks. ColdFusion, Django, Erlang, Java, .Net, Node.js, Perl, PHP, Python, Ruby http://j.mp/dynamodb-libs
  • 46. Items are a collection of attributes. Each attribute has a key and a value. An item can have any number of attributes, up to 64k total.
  • 47. Two scalar data types. String: Unicode, UTF8 binary encoding. Number: 38 digit precision. Multi-value strings and numbers.
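
To make the attribute types concrete, here is a hypothetical put_item request (not from the deck) storing string ('S'), number ('N') and multi-value string ('SS') attributes in the ProductCatalog table created earlier; the non-key attribute names are assumptions.

$response = $dynamodb->put_item(array(
    'TableName' => 'ProductCatalog',
    'Item' => array(
        'Id'    => array('N'  => '201'),                  // numbers travel as strings, 38-digit precision
        'Title' => array('S'  => 'DynamoDB in Action'),   // UTF-8 string
        'Price' => array('N'  => '19.99'),
        'Tags'  => array('SS' => array('nosql', 'aws'))   // multi-value string attribute
    )
));
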
  • 48-51. Example table of order items (the deck highlights the table, a single item, and a single attribute in turn): id = 100, date = 2012-05-16-09-00-10, total = 25.00; id = 101, date = 2012-05-15-15-00-11, total = 35.00; id = 101, date = 2012-05-16-12-00-10, total = 100.00; id = 102, date = 2012-03-20-18-23-10, total = 20.00; id = 102, date = 2012-03-20-18-23-10, total = 120.00
  • 52. Where is the schema? Tables do not require a formal schema. Items are an arbitrary sized hash. Just need to specify the primary key.
  • 53. Items are indexed by primary key. Single hash keys and composite keys.
  • 54. Hash Key: in the example table above, id is the hash key.
  • 55. Range key for queries. Querying items by composite key.
  • 56. Hash Key + Range Key: in the example table above, id is the hash key and date is the range key.
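
As a sketch of querying by composite key (assuming the example table above is named orders, with id as the hash key and date as the range key; parameter names follow the 2011-12-05 Query API, and the query method name mirrors the SDK's snake_case style):

// All of customer 101's orders placed on 16 May 2012.
$response = $dynamodb->query(array(
    'TableName'    => 'orders',
    'HashKeyValue' => array('N' => '101'),
    'RangeKeyCondition' => array(
        'ComparisonOperator' => 'BEGINS_WITH',
        'AttributeValueList' => array(array('S' => '2012-05-16'))
    )
));
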
  • 57. Programming DynamoDB. Small but perfectly formed. Whole programming interface fits on one slide.
  • 58. CreateTable PutItem UpdateTable GetItem DeleteTable UpdateItem DescribeTable DeleteItem ListTables BatchGetItem Query BatchWriteItem Scan
  • 59. CreateTable PutItem UpdateTable GetItem DeleteTable UpdateItem DescribeTable DeleteItem ListTables BatchGetItem Query BatchWriteItem Scan
  • 60. CreateTable PutItem UpdateTable GetItem DeleteTable UpdateItem DescribeTable DeleteItem ListTables BatchGetItem Query BatchWriteItem Scan
  • 61. Conditional updates. PutItem, UpdateItem, DeleteItem can take optional conditions for operation. UpdateItem performs atomic increments.
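
A sketch of an atomic increment with UpdateItem: the ADD action and AttributeUpdates shape are from the 2011-12-05 API, while the items table and views counter are illustrative assumptions.

// Atomically add 1 to a page-view counter, with no read-modify-write cycle.
$response = $dynamodb->update_item(array(
    'TableName' => 'items',
    'Key'       => array('HashKeyElement' => array('N' => '100')),
    'AttributeUpdates' => array(
        'views' => array(
            'Action' => 'ADD',
            'Value'  => array('N' => '1')
        )
    )
));
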
  • 62. CreateTable PutItem UpdateTable GetItem DeleteTable UpdateItem DescribeTable DeleteItem ListTables BatchGetItem Query BatchWriteItem Scan
  • 63. One API call, multiple items. BatchGet returns multiple items by primary key. BatchWrite performs up to 25 put or delete operations. Throughput is measured by IO, not API calls.
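
A sketch of BatchGetItem fetching three items by primary key in one round trip; the key values are assumptions and the RequestItems shape follows the 2011-12-05 API.

$response = $dynamodb->batch_get_item(array(
    'RequestItems' => array(
        'ProductCatalog' => array(
            'Keys' => array(
                array('HashKeyElement' => array('N' => '100')),
                array('HashKeyElement' => array('N' => '101')),
                array('HashKeyElement' => array('N' => '102'))
            )
        )
    )
));
// Throughput is consumed per item, not per call; any keys the service could not
// process come back as UnprocessedKeys and should be retried.
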
  • 64. CreateTable PutItem UpdateTable GetItem DeleteTable UpdateItem DescribeTable DeleteItem ListTables BatchGetItem Query BatchWriteItem Scan
  • 65. Query vs Scan Query for composite key queries. Scan for full table scans, exports. Both support pages and limits. Maximum response is 1Mb in size.
  • 66. Query patterns. Retrieve all items by hash key. Range key conditions: ==, <, >, >=, <=, begins with, between. Counts. Top and bottom n values. Paged responses.
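
Two of those patterns sketched against the assumed orders table: Count returns only the number of matching items, and paging is driven by LastEvaluatedKey / ExclusiveStartKey.

// Count of customer 101's orders, without returning the items themselves.
$response = $dynamodb->query(array(
    'TableName'    => 'orders',
    'HashKeyValue' => array('N' => '101'),
    'Count'        => true
));
// For result sets over the 1 MB response limit, the response carries a
// LastEvaluatedKey; pass it back as ExclusiveStartKey to fetch the next page.
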
  • 68. Patterns 1. Mapping relationships with range keys. No cross-table joins in DynamoDB. Use composite keys to model relationships.
  • 69. Data model example: online gaming. Storing scores and leader boards: players with high scores, and a leader board for each game.
  • 70-74. The gaming data model, built up over several slides. Players (hash key: user_id): mza, Cambridge, joined 2011-07-04; jeffbarr, Seattle, joined 2012-01-20; werner, Worldwide, joined 2011-05-15. Scores (composite key: user_id + game): mza, angry-birds, 11,000; mza, tetris, 1,223,000; werner, bejewelled, 55,000. Leader boards (composite key: game + score): angry-birds, 11,000, mza; tetris, 1,223,000, mza; tetris, 9,000,000, jeffbarr. Scores by user (and by game) come from the Scores table; high scores by game come from the Leader boards table.
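
The "high scores by game" access pattern above maps to a descending range-key query on the leader boards table. A sketch, assuming that table is named leader_boards with game as the hash key and score as the range key:

// Top 10 tetris scores, highest first.
$response = $dynamodb->query(array(
    'TableName'        => 'leader_boards',
    'HashKeyValue'     => array('S' => 'tetris'),
    'ScanIndexForward' => false,   // walk the score range key in descending order
    'Limit'            => 10
));
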
  • 75. Patterns 2. Handling large items. Unlimited attributes per item. Unlimited items per table. Max 64k per item.
  • 76. Data model example: large items. Storing more than 64k across items. Large messages (composite key: message_id + part): message_id = 1, part = 1, message = <first 64k>; message_id = 1, part = 2, message = <second 64k>; message_id = 1, part = 3, message = <third 64k>. Split attributes across items. Query by message_id and part to retrieve.
  • 77. Patterns Store a pointer to objects in Amazon S3. Large data stored in S3. Location stored in DynamoDB. 99.999999999% data durability in S3.
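
A sketch of that pointer pattern: the payload lives in S3 and the DynamoDB item holds only its location plus queryable metadata; the bucket, key layout and attribute names are assumptions.

// Metadata in DynamoDB, large payload in S3.
$response = $dynamodb->put_item(array(
    'TableName' => 'messages',
    'Item' => array(
        'message_id' => array('N' => '1'),
        's3_bucket'  => array('S' => 'my-message-bodies'),
        's3_key'     => array('S' => 'messages/1.json'),
        'size_bytes' => array('N' => '5242880')
    )
));
// Readers fetch the item first, then GET the object from S3 using s3_bucket/s3_key.
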
  • 78. Patterns 3. Managing secondary indices. Not supported by DynamoDB. Create your own.
  • 79-83. Data model example: secondary indices, maintained by the application as extra tables. Users (hash key: user_id): mza, Matt, Wood; mattfox, Matt, Fox; werner, Werner, Vogels. First name index (composite key: first_name + user_id): Matt, mza; Matt, mattfox; Werner, werner. Second name index (composite key: last_name + user_id): Wood, mza; Fox, mattfox; Vogels, werner.
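
Because there are no native secondary indexes here, the application keeps the index table in step itself. A sketch, assuming the tables above are named users and first_name_index; there is no cross-table transaction, so a failed second write has to be retried or repaired by the application.

// 1. Write the user item.
$dynamodb->put_item(array(
    'TableName' => 'users',
    'Item' => array(
        'user_id'    => array('S' => 'mza'),
        'first_name' => array('S' => 'Matt'),
        'last_name'  => array('S' => 'Wood')
    )
));

// 2. Mirror the indexed attribute into the index table
//    (hash key: first_name, range key: user_id).
$dynamodb->put_item(array(
    'TableName' => 'first_name_index',
    'Item' => array(
        'first_name' => array('S' => 'Matt'),
        'user_id'    => array('S' => 'mza')
    )
));
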
  • 84. Patterns 4. Time series data. Logging, click through, ad views, game play data, application usage. Non-uniform access patterns. Newer data is ‘live’. Older data is read only.
  • 85-86. Data model example: time series data. Rolling tables for hot and cold data. Events table (composite keys): event_id = 1000, timestamp = 2012-05-16-09-59-01, key = value; event_id = 1001, timestamp = 2012-05-16-09-59-02, key = value; event_id = 1002, timestamp = 2012-05-16-09-59-02, key = value. Older data moves to per-month tables, e.g. an April table (event_id = 400/401/402, timestamps 2012-04-01-00-00-01/02/03) and a January table (event_id = 100/101/102, timestamps 2012-01-01-00-00-01/02/03).
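
A sketch of writing into the current rolling table: the table name is derived from the event's month, so each month's data lands in its own table (events_2012_04, events_2012_05, and so on); the naming scheme and attributes are assumptions.

// Route each write to the table for the current month.
$table = 'events_' . gmdate('Y_m');   // e.g. events_2012_05

$response = $dynamodb->put_item(array(
    'TableName' => $table,
    'Item' => array(
        'event_id'  => array('N' => '1003'),
        'timestamp' => array('S' => gmdate('Y-m-d-H-i-s')),
        'payload'   => array('S' => 'value')
    )
));
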
  • 87-95. Patterns: hot and cold tables. A rolling window of monthly tables (Dec, Jan, Feb, Mar, April, May; then Jan-June, Feb-July, and so on as the months roll forward). The current month's table gets higher provisioned throughput, older months get lower throughput, and the coldest tables are exported to S3 and then deleted.
  • 96. Patterns Not out of mind. DynamoDB and S3 data can be integrated for analytics. Run queries across hot and cold data with Elastic MapReduce.
  • 98. Uniform workloads. DynamoDB divides table data into multiple partitions. Data is distributed primarily by hash key. Provisioned throughput is divided evenly across the partitions.
  • 99. Uniform workloads. To achieve and maintain full provisioned throughput for a table, spread your workload evenly across the hash keys.
  • 100. Non-uniform workloads. Some requests might be throttled, even at high levels of provisioned throughput. Some best practices...
  • 101. Patterns 1. Distinct values for hash keys. Hash key elements should have a high number of distinct values.
  • 102-103. Data model example: hash key selection. Well-distributed workloads. Users: mza, Matt, Wood; jeffbarr, Jeff, Barr; werner, Werner, Vogels; mattfox, Matt, Fox; and so on. Lots of users, each with a unique user_id, so the workload is well distributed across partitions.
  • 104. Patterns 2. Avoid limited hash key values. Hash key elements should have a high number of distinct values.
  • 105-106. Data model example: small hash value range. Non-uniform workload. Status responses: status = 200, date = 2012-04-01-00-00-01; status = 404, date = 2012-04-01-00-00-01; status = 404, date = 2012-04-01-00-00-01; status = 404, date = 2012-04-01-00-00-01. A small number of status codes gives an uneven, non-uniform workload.
  • 107. Patterns 3. Model for even distribution of access. Access by hash key value should be evenly distributed across the dataset.
  • 108-109. Data model example: uneven access pattern by key. Non-uniform access workload. Devices: mobile_id = 100, access_date = 2012-04-01-00-00-01; mobile_id = 100, access_date = 2012-04-01-00-00-02; mobile_id = 100, access_date = 2012-04-01-00-00-03; mobile_id = 100, access_date = 2012-04-01-00-00-04; and so on. A large number of devices, but a small number are much more popular than the others, so the workload is unevenly distributed.
  • 110. Data model example: randomize the access pattern by key. Towards a uniform workload. Devices: mobile_id = 100.1, access_date = 2012-04-01-00-00-01; mobile_id = 100.2, access_date = 2012-04-01-00-00-02; mobile_id = 100.3, access_date = 2012-04-01-00-00-03; mobile_id = 100.4, access_date = 2012-04-01-00-00-04. Appending a suffix randomizes the hash key, so the workload is spread evenly across partitions.
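
A sketch of the suffixing trick on the slide above: writes for one hot device are spread over several hash key values by appending a random suffix, so they land on different partitions; the suffix range and key format are assumptions, and reads for that device must query every suffix and merge the results.

// Spread one hot device across ten hash key values: 100.1, 100.2, ... 100.10.
$mobile_id = '100';
$hash_key  = $mobile_id . '.' . mt_rand(1, 10);

$response = $dynamodb->put_item(array(
    'TableName' => 'devices',
    'Item' => array(
        'mobile_id'   => array('S' => $hash_key),
        'access_date' => array('S' => gmdate('Y-m-d-H-i-s'))
    )
));
// To read device 100's history, query mobile_id 100.1 through 100.10 and merge.
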
  • 111. Design for a uniform workload.
  • 113. Seamless scale. Scalable methods for data processing. Scalable methods for backup/restore.
  • 114. Amazon Elastic MapReduce. Managed Hadoop service for data-intensive workflows. http://aws.amazon.com/emr
  • 115. Hadoop under the hood. Take advantage of the Hadoop ecosystem: streaming interfaces, Hive, Pig, Mahout.
  • 116. Distributed data processing. API driven. Analytics at any scale.
  • 117. Query flexibility with Hive. create external table items_db (id string, votes bigint, views bigint) stored by 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler' tblproperties ("dynamodb.table.name" = "items", "dynamodb.column.mapping" = "id:id,votes:votes,views:views");
  • 118. Query flexibility with Hive. select id, votes, views from items_db order by views desc;
  • 119. Data export/import. Use EMR for backup and restore to Amazon S3.
  • 120. Data export/import. CREATE EXTERNAL TABLE orders_s3_new_export ( order_id string, customer_id string, order_date int, total double ) PARTITIONED BY (year string, month string) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LOCATION 's3://export_bucket'; INSERT OVERWRITE TABLE orders_s3_new_export PARTITION (year='2012', month='01') SELECT * from orders_ddb_2012_01;
  • 121. Integrate live and archive data Run queries across external Hive tables on S3 and DynamoDB. Live & archive. Metadata & big objects.
  • 123-125. In summary... DynamoDB: predictable performance, provisioned throughput, libraries & mappers. Data modeling: tables & items, read & write patterns, time series data. Partitioning: automatic partitioning, hot and cold data, size/throughput ratio. Analytics: Elastic MapReduce, Hive queries, backup & restore.
  • 126. DynamoDB free tier: 5 writes and 10 consistent reads per second, and 100 MB of storage.