Maginatics @ SDC 2013: Architecting An Enterprise Storage Platform Using Object Stores

How did Maginatics build a strongly consistent and secure distributed file system? Niraj Tolia, Chief Architect at Maginatics, gave this presentation on the design of MagFS at the Storage Developer Conference on September 16, 2013.

For more information about MagFS—The File System for the Cloud, visit maginatics.com or contact us directly at info@maginatics.com.

    1. Architecting an Enterprise Storage Platform Using Object Stores © mekuria getinet / www.mekuriageti.net. Niraj Tolia, Chief Architect, Maginatics, @nirajtolia
    2. These gray slides are equivalent to speaker notes. Normally invisible, they are provided for non-presentation settings. Hope they help.
    3. A Whirlwind Tour
    4. This presentation provides an end-to-end overview of MagFS and therefore might not be deep enough in certain areas. Contact @nirajtolia for Comments, Questions, Flames.
    5. 80% YoY Growth in Unstructured Data. 41% Growth in IaaS Systems through 2016. Sources: Gartner, IT Marketing Clock for Storage, Sep 2011; Gartner, Forecast Overview: Public Cloud Services, Worldwide, 2011-2016, Feb 2013
    6. Data growth is impressive! It requires centralization for protection, analysis, and cost management. Infrastructure-as-a-Service systems are rapidly growing. Apart from leveraging new storage paradigms (object storage) to deal with this data growth, workloads are migrating and need to use cloud storage. Storage systems also need to support elastic workloads (capacity and scale).
    7. MagFS – The File System for the Cloud. Consistent, Elastic, Secure, Mobile-Enabled. Layered on Object Stores. "Software-Defined"
    8. To respond to the earlier trends, we built a system that, at its core, is a distributed file system. It differs from legacy systems in a number of ways, primarily through an end-to-end (E2E) security perspective, the ability to both be elastic and support elastic workloads, elevating mobility to a first-class citizen, and exploiting object stores. Further, while "software-defined" is an oft-abused buzzword, MagFS does fit the definition: software-only, packaged as VMs, and a clean separation of data and control planes.
    9. No (Initial) Legacy Support (NFS/CIFS). Native Clients: Push Intelligence to Edges. Strong Consistency w/ Full-Spectrum Caching
    10. Three Early Decisions:
        1. No legacy (NFS, CIFS) support on purpose: file systems must evolve (e.g., dedup, caching, scaling). MagFS transparently replaces legacy distributed file systems though.
        2. Client agents allow MagFS to push smarts to edges. No significant IT pushback anymore. A common codebase reduces development costs.
        3. Enable data & metadata caching with strong consistency.
    11. File System Design Goals: Low Cost, High Scale. Intelligent Clients. Span Devices and Networks. Support Rapid Iteration
    12. Design Goals:
        1. Deliver scale cost-effectively.
        2. Make clients intelligent: modern computing platforms have enough horsepower.
        3. Span server-grade hardware to mobile clients and fast to bandwidth-challenged networks.
        4. Rapidly iterate on the product and add new features without disruption to users.
    13. Use Cases: In-Cloud File System. NAS Replacement and Consolidation. Enterprise File Sharing
    14. MagFS, a general purpose system, is used for many different use cases. The majority are Tier 2/3 workloads (e.g., home directory, media, nearline storage, etc.). In-Cloud File System: Allow unmodified applications to Just Work™ in the cloud. Provide a distributed file system where no filer can be racked in. NAS: Both serve as a more cost-effective filer as well as allow globally distributed workforces to leverage our WAN optimization. Enterprise File Sharing: Related to NAS, secure file sharing that meets compliance and regulatory concerns, as MagFS is a product and not a service.
    15. 10,000 Foot View: Clients exchange Metadata with the Metadata Servers and Data with Object Storage (public, on-premises, or hybrid)
    16. The previous slide presents a very high-level overview of MagFS. Note the split data and metadata planes: MagFS does not try to resolve scalability issues already tackled by the object storage system and therefore will not intercept data on the fast path. The metadata servers provide a single pane of glass for admins, integrate with native AD or LDAP setups, and also store encryption keys.
    17. Heavy (Data) Lifting via Clients: Encryption, Inline Deduplication, Compression, Persistent Data Caching, Bulk Data Transfers (Koukouvaya / flickr.com/photos/jackoughton/6535137981/)
    18. Push a lot of smarts to increasingly-powerful clients. Clients do heavy data lifting: chunking for deduplication, encryption, optional compression, on-disk caching, etc. Available resources are generally proportional to workloads for different device types. The server doesn't see data on the read OR write path!
    19. Cloud Object Storage: Scale Out, Low Cost. Handles Placement + Replication. Tolerates Failures. High Aggregate Performance
    20. Object Storage has a number of very useful properties: Cost, Commodity, Scale Out (aggregate performance, fault tolerance, etc.). We directly expose clients to the object store. Similar to clients, we also push functionality to the object storage system: data placement and replication, fault-tolerance, repairs, etc., as we do not want to reinvent the wheel.
    21. Virtualized Metadata Servers: Enforce Strong Consistency. Enforce Authentication and Integrity. Runtime Performance Optimization. Share-level Deduplication. Data Scrubbing & Garbage Collection
    22. The VM-based metadata servers are where consistency and user authentication are enforced. They also allow clients to dynamically cache read and write data, lock objects and byte ranges, etc. They work with clients to prevent duplicated data transfers or redundant data copies. Data is scrubbed and unused data deleted in the background.
    23. Architecture
    24. We will now branch off into details about the client and server architecture and how they interact with object storage.
    25. Client Architecture
    26. MagFS supports different Linux, Windows, OS X, Android, and iOS versions. The majority of code is shared across platforms with platform-specific glue layers. The next few slides talk about desktop/server platforms but the same structure applies to all.
    27. Client Architecture: Application → Redirector (e.g., FUSE) in the kernel → File System + OS Glue in userspace → Data Manager (local/remote) and Metadata Transport Layer. Features: Deduplication, Encryption, Compression, Locking, Leases
    28. Traditional platforms have a thin in-kernel redirector (FUSE on Linux; we ship the equivalent on Windows and OS X). Modulo glue, the file system layer contains core functionality. The data manager is used for local persistent data caching and optimized remote object store fetches. The metadata transport layer manages the MagFS control plane.
    29. Simplified Write: Deduplication + Encryption (File System Layer / Data Manager)
        Write Request → Plaintext → Variable-Length Chunking
        SHA-256(Plaintext) → Encryption Key (K); AES-256 with K → Encrypted Text (E); SHA-256(E) → Object Name (N)
        E goes to the Local Cache and Remote Transfer paths
    30. Very simple example! In reality, most operations are not synchronous, are batched, and clients get acks early. Incoming data is broken up into smaller variable-length chunks for deduplication. Per-chunk encryption is used, where the per-chunk key is derived from a cryptographic hash of the unencrypted data. The chunk name is derived from a hash of the encrypted data.
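    A minimal sketch of this write path in Python, assuming the third-party "cryptography" package. Chunking, nonce handling, and key wrapping are unspecified in the deck, so the key-derived CTR nonce below is an illustrative stand-in, not MagFS's actual scheme.

        import hashlib
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        def write_chunk(plaintext: bytes):
            # Per-chunk key (K) derived from a hash of the plaintext, so identical
            # chunks encrypt identically and deduplicate across writers.
            key = hashlib.sha256(plaintext).digest()
            # Deterministic nonce derived from K (illustrative stand-in only).
            nonce = hashlib.sha256(key).digest()[:16]
            enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
            ciphertext = enc.update(plaintext) + enc.finalize()   # Encrypted Text (E)
            name = hashlib.sha256(ciphertext).hexdigest()         # Object Name (N)
            # The client caches <N, E> locally, sends <File, Offset, N, K> to the
            # metadata server, and uploads E only if the chunk is new.
            return name, key, ciphertext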
    31. Simplified Write: Deduplication + Encryption (continued)
        To server: <File, Offset, N, K>; server returns Optional(<URI>)
        Local Cache stores <N, E>; Remote Transfer uploads <URI, E>
        No Encryption Keys in the Cloud. No Encryption Keys in the Local Cache.
    32. Encrypted data (but not the key) is written to the local cache. A write request with offset, chunk name, and encryption key is made to the server. If it is a new chunk, a secure write URI is sent to the client. The data manager queues and writes the chunk to the cloud. No encryption keys in the local cache or object store.
    33. Simplified Read: Deduplication + Encryption (File System Layer / Data Manager)
        Read Request: <File, Offset, Range>; server returns <N, K, URI>
        Local Cache hit returns <E>; on a miss, Remote Transfer fetches <E> via <URI>
        AES-256 decryption with K → Plaintext
    34. Another very simple example. It does not include metadata caching either. The server responds to a read request with the chunk name, decryption key, and secure read URI. A local cache miss causes an object storage fetch. The encrypted chunk is decrypted using the server-provided key and unencrypted data is returned to the application. All deduplication and encryption is always transparent to the application.
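    A companion sketch of the read path, under the same assumptions as the write sketch above; fetch_uri stands in for a hypothetical HTTP GET of the signed URI, and cache handling is elided.

        import hashlib
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        def read_chunk(name: str, key: bytes, uri: str, fetch_uri) -> bytes:
            ciphertext = fetch_uri(uri)                    # cache miss -> object store GET
            # The object name is a content hash of the ciphertext, so the client
            # can verify integrity before decrypting.
            assert hashlib.sha256(ciphertext).hexdigest() == name
            nonce = hashlib.sha256(key).digest()[:16]      # must mirror the write side
            dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
            return dec.update(ciphertext) + dec.finalize() # plaintext to the application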
    35. The Client in Real Life Does a Lot More!
        • File and Directory Leases (data and metadata caching)
        • Asynchronous Operations (including writes)
        • Operation Compounding
        • Runtime Optimizations (e.g., read ahead)
        • Optimizing for High Bandwidth Delay Product (BDP)
        • …
    36. There is a separate discussion on leases later when we talk about how clients and servers optimize performance at runtime.
    37. Communication Details: Clients speak Thrift (HTTPS) to the Metadata Servers and REST (HTTPS) to Object Storage (public, on-premises, or hybrid)
    38. Important: Split Data and Metadata paths (always, not optional). Clients directly access the object store. MagFS does not need to scale the data plane. The client technically speaks REST over HTTPS to the object store but has no knowledge of the actual API (server-provided URIs). The MagFS protocol uses Thrift over HTTPS (firewall and proxy friendly). Enables efficient encoding and easy protocol extension without breaking compatibility.
    39. Server Architecture
    40. The next few slides cover how we virtualize file namespaces, the distributed system deployment, a view into internals, and a brief overview of leases.
    41. Metadata Server Internals: Metadata Storage Layer, Storage Core, Cloud Abstraction Layer; Backups, GC, Scrubbing, Quotas, Dedup, Leases, Security, HA, MagFS, Ext. Sharing, Multi-Cloud, Versioning, Offline Mode (Legend: Production vs. Development)
    42. The metadata server internals have been modularized to provide both development and runtime agility. For example, adding support for a new object storage system doesn't impact the rest of the code. Runtime background operations (e.g., hot backups, garbage collection, scrubbing) do not impact clients. The file system protocol is separate from file-system-agnostic features (e.g., quotas, lease, and lock management).
    43. Bootstrapping: Virtualized Namespaces
        Legacy: \\server.example.com\share is tied to the host FQDN and folder
        MagFS: \\server.example.com\share is a dynamic mapping to host:port
    44. With both Windows UNC paths and NFS server/share exports, the exported file system would be tied to a DNS name. Instead, MagFS virtualizes the access path. Nothing changes with respect to applications, but a virtualized server:share combination can map to any host:port (no DNS support required). This is extremely useful for High Availability failover and Disaster Recovery, and makes it easy to stitch together namespaces.
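    A toy illustration of that virtualization; the mapping structure and names below are hypothetical, but the idea matches the slide: the client-visible server:share path never changes, only the discovery mapping does.

        # Discovery service state: virtual (server, share) -> live metadata server.
        # Values here are made up for illustration.
        DISCOVERY = {
            ("server.example.com", "share"): ("mds-primary.internal", 7443),
        }

        def resolve(host_fqdn: str, folder: str) -> tuple[str, int]:
            # Map a virtual server:share namespace to the host:port serving it.
            return DISCOVERY[(host_fqdn, folder)]

        def fail_over(host_fqdn: str, folder: str, new_host: str, port: int) -> None:
            # HA failover or DR: repoint the mapping; clients re-resolve and
            # reconnect while applications keep using the same access path.
            DISCOVERY[(host_fqdn, folder)] = (new_host, port)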
    45. Discovery Service: Metadata Server, Metadata Server (HA), Metadata Server; ZooKeeper ensemble; Monitoring; Management Console; Config + Scheduler. Virtual Filer → Host:Port Mapping
    46. MagFS is a distributed system. It has a number of backend services: VM and service monitoring, ZooKeeper for server registration and discovery, an admin management console, a job scheduler, AD integration, etc. Shares are deployed in HA or non-HA configurations. HA comes with automatic failover. Clients use a discovery service to map a namespace to a server.
    47. One of the big challenges in any distributed file system is the tradeoff between consistency and performance. In a naïve strongly consistent system, every operation needs to be centralized on a server. This is obviously bad for performance. The MagFS metadata server therefore hands out leases to clients for data and metadata caching (including caching writes and updates).
    48. Leases: Performance and Strong Consistency
        Lease Types: Read, Write, Handle
        Valid File Lease States: Read; Read + Handle; Read + Write + Handle
        Valid Directory Leases as well
    49. Lease Types: READ allows a client to cache reads locally, WRITE allows local write caching, and HANDLE lets files be closed and reopened locally. Valid lease type combinations are READ, READ + HANDLE, and READ + WRITE + HANDLE (plus the NONE state: no caching rights). Others don't really apply (e.g., WRITE is exclusive, and READ + HANDLE come for free if a WRITE lease is held). MagFS also supports WRITE directory leases.
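    A small sketch of these combinations as a flag enum; the representation is an assumption, only the set of valid states comes from the slide.

        from enum import Flag, auto

        class Lease(Flag):
            NONE = 0          # no caching rights
            READ = auto()     # cache reads locally
            WRITE = auto()    # cache writes locally (exclusive)
            HANDLE = auto()   # close/reopen files locally

        # The only lease states the server hands out for files:
        VALID_FILE_LEASES = {
            Lease.NONE,
            Lease.READ,
            Lease.READ | Lease.HANDLE,
            Lease.READ | Lease.WRITE | Lease.HANDLE,
        }

        def is_valid_file_lease(state: Lease) -> bool:
            return state in VALID_FILE_LEASES

        assert not is_valid_file_lease(Lease.WRITE)  # WRITE alone is never granted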
    50. Cloud Storage Interaction
    51. While Maginatics does not provide an object storage system itself, it works with a number of different products. The next few slides will talk about the challenges of interoperating with a large number of systems as well as the technical challenges of layering a file system on top of them.
    52. Object Storage (public, on-premises, or hybrid)
    53. Today, MagFS supports a large number of object storage systems: private and public Swift and Atmos deployments, AWS S3, public and private S3 clones, Azure, and others not mentioned here. We are seeing an increasing shift towards vendors providing S3 and Swift API compatibility layers even if they originally had their own REST-style protocols.
    54. Object Storage systems are like snowflakes!
    55. MagFS also works hard to address inter-object store variance and hide the complexity from the end user. MagFS uses very basic API calls (GET/PUT/DELETE object/bucket and Signed URLs) and we discovered a number of differences in vendor implementations. MagFS also optimizes data layout for different object stores to obtain the best performance. For example, data layout on S3, Atmos, and Swift differs to match the underlying platform.
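    One way to picture the cloud abstraction layer mentioned in the internals slide, given the narrow API surface described here; the interface below is a sketch with assumed method names, not MagFS's actual code.

        from abc import ABC, abstractmethod

        class ObjectStore(ABC):
            # Narrow surface MagFS relies on: basic object CRUD plus signed URLs.

            @abstractmethod
            def put_object(self, container: str, name: str, data: bytes) -> None: ...

            @abstractmethod
            def get_object(self, container: str, name: str) -> bytes: ...

            @abstractmethod
            def delete_object(self, container: str, name: str) -> None: ...

            @abstractmethod
            def signed_url(self, verb: str, container: str, name: str,
                           expires_seconds: int) -> str: ...

        # Per-provider subclasses (S3, Swift, Atmos, Azure, ...) absorb differing
        # API subsets, interpretations, bugs, and data-layout tuning behind this
        # one interface, so the rest of the server never sees the variance.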
    56. Object Store API Compatibility. Q: Has anyone come across a near-100% Amazon S3 API compatible object storage system? A: It is hard to find a near-100% compatible product… - Vendor w/ S3 Compatible Product
    57. Even vendors claiming to support the same API have differences, bugs, or interpretation differences. For example, most S3-compatible systems for which we have added support are different from one another (e.g., subsets of API supported, differing API interpretations, bugs, etc.; see http://serverfault.com/questions/283914/s3-compatible-object-storage-systems). Swift is similar. The same code cannot be used with a generic Swift setup and the public cloud providers that are based on Swift. Swift authentication (Keystone, TempAuth, etc.) also differs between vendors.
    58. Direct Client Access: Security Problem? Clients exchange Metadata with the Metadata Servers and Data with Object Storage (public, on-premises, or hybrid)
    59. One of the challenges with providing clients direct object store access is security. There is generally one (or a few) master API key(s) that can delete or read arbitrary data. However, as different MagFS users have different access rights to files, we should not provide the master key to clients (even though the data is encrypted). Further, a malicious client would be able to wipe all data with the master key!
    60. Request Signing
    61. The solution to providing secure and time-limited data access to clients is to use Request Signing, a feature found in all mature object storage systems today. The next few slides will walk through an example of how Request Signing works for a write.
    62. Server-Driven Request Signing
        SignString = HTTP-Verb    + "\n"
                   + Content-MD5  + "\n"
                   + Content-Type + "\n"
                   + Date         + "\n"
                   + Resource     + "\n"
                   + ...
    63. Client read or write requests are authorized by the MagFS server, which shares the master key with the object storage system. Signing is done by the metadata server creating a request string in a pre-defined order.
    64. Server-Driven Request Signing
        SignString = PUT          + "\n"
                   + Content-MD5  + "\n"
                   + Content-Type + "\n"
                   + Date         + "\n"
                   + Resource     + "\n"
                   + ...
    65. The first component of the signature string is the HTTP verb used. This would be GET for a read and generally PUT for a write (some providers like Atmos use POST). DELETEs are never performed by the client.
    66. Server-Driven Request Signing
        SignString = PUT                       + "\n"
                   + 07BzhNET7exJ6qYjitX/AA== + "\n"
                   + Content-Type              + "\n"
                   + Date                      + "\n"
                   + Resource                  + "\n"
                   + ...
    67. The second component is a cryptographic hash of the data. A number of object storage systems will reject data whose cryptographic hash doesn't match the request. This is useful to protect against TCP errors that the TCP checksum doesn't catch, buggy clients, and even malicious clients. A common hash algorithm used at this step is MD5 but some object storage systems are now supporting stronger cryptographic algorithms.
    68. Server-Driven Request Signing
        SignString = PUT                       + "\n"
                   + 07BzhNET7exJ6qYjitX/AA== + "\n"
                   + image/jpeg                + "\n"
                   + Date                      + "\n"
                   + Resource                  + "\n"
                   + ...
    69. The next component is the content-type of the object. We are using the JPEG type in this example but, in MagFS, this would be "application/octet-stream" for all our objects as they are encrypted binary data.
    70. Server-Driven Request Signing
        SignString = PUT                       + "\n"
                   + 07BzhNET7exJ6qYjitX/AA== + "\n"
                   + image/jpeg                + "\n"
                   + Tue, 11 Jun 2013 00:27:41 + "\n"
                   + Resource                  + "\n"
                   + ...
    71. Following the content-type, we now add a timestamp field. This is very useful because it puts a time limit on this request to prevent replay attacks. Most object stores place a reasonable time limit on request validity (e.g., 15 minutes) but a number also allow configurable values. MagFS supports both.
    72. Server-Driven Request Signing
        SignString = PUT                       + "\n"
                   + 07BzhNET7exJ6qYjitX/AA== + "\n"
                   + image/jpeg                + "\n"
                   + Tue, 11 Jun 2013 00:27:41 + "\n"
                   + /container/example.jpeg   + "\n"
                   + ...
    73. The final component in this example is the resource name, and this includes both the container name and the object name within the container. More options are possible in signature strings and these options differ from provider to provider.
    74. Server-Driven Request Signing
        SignString = ... (as above)
        HMAC-SHA1(MasterKey, SignString)
    75. Following the construction of the signature string, a keyed-hash message authentication code (HMAC) is generated using the signature string and the master key. This is a one-way transform and obtaining the HMAC value does not leak information about the master key.
    76. Server-Driven Request Signing
        SignString = ... (as above)
        Signature = Base64(HMAC-SHA1(MasterKey, SignString))
    77. A Base64-encoded representation (signature) of this HMAC is sent to the client to prove that this request was authorized by the server.
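    Written out with Python's standard library, the whole computation from slides 62-77 is a few lines (this mirrors S3-style signing; the master key below is obviously fake):

        import base64
        import hashlib
        import hmac

        MASTER_KEY = b"not-a-real-master-key"  # shared only by server and object store

        sign_string = "\n".join([
            "PUT",                         # HTTP verb (GET for reads)
            "07BzhNET7exJ6qYjitX/AA==",    # Content-MD5 of the object data
            "image/jpeg",                  # Content-Type (octet-stream in MagFS)
            "Tue, 11 Jun 2013 00:27:41",   # Date: bounds the request's validity
            "/container/example.jpeg",     # Resource: container + object name
        ])

        mac = hmac.new(MASTER_KEY, sign_string.encode(), hashlib.sha1).digest()
        signature = base64.b64encode(mac).decode()
        # Only `signature` travels to the client; the master key never leaves
        # the server, and the HMAC does not leak information about it.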
    78. Safe Direct Client Access via Request Signing (Clients, Metadata Servers, Object Storage (public, on-premises, or hybrid)):
        1. Read/Write Request (client → metadata server)
        2. HTTP Request + Signature (metadata server → client)
        3. HTTP Request + Signature + Encrypted Data (client → object store)
    79. To summarize, read or write operations not serviced from the local cache require server authorization. Using the server-provided request and signature, a client can safely read and write data but only for the specified object. The object store recalculates the signature based on the request, compares it to the received signature, and rejects the request in case of a mismatch (e.g., wrong HTTP verb, stale/old request, swapped object names).
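    Step 3 from the client's side might look like the sketch below, using the third-party "requests" package. The endpoint is made up, and the way the signature is attached (an S3-v2-style Authorization header here) varies by provider:

        import requests

        ACCESS_KEY_ID = "EXAMPLE_KEY_ID"  # identifies which master key signed this
        signature = "..."                 # Base64 HMAC from the metadata server
        encrypted_chunk = b"..."          # ciphertext (E) from the client write path

        resp = requests.put(
            "https://objects.example.com/container/example.jpeg",
            data=encrypted_chunk,
            headers={
                # These must match the signed string exactly, or the store will
                # recompute a different signature and reject the request.
                "Content-MD5": "07BzhNET7exJ6qYjitX/AA==",
                "Content-Type": "image/jpeg",
                "Date": "Tue, 11 Jun 2013 00:27:41",
                "Authorization": f"AWS {ACCESS_KEY_ID}:{signature}",
            },
        )
        resp.raise_for_status()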
    80. Dealing with Lost Client Writes
        • Clients can lose connectivity or, in the worst case, be malicious
        • Naïvely trusting client writes can "corrupt" w/ global dedup
        • MagFS server scrubs all writes:
          • Client acknowledges write
          • Server verifies object existence (object store performed MD5 at PUT)
          • Server can also read and verify object data (stronger SHA-256 check)
        • The object will be available for deduplication only after scrubbing
    81. MagFS exposes global deduplication and therefore needs to handle buggy or malicious clients that might have claimed to have written data but did not. The server therefore waits for a client to acknowledge the write, checks the object store to verify that the object was written (which implies success for the cryptographic hash check), and can optionally scrub the data using a stronger cryptographic hash. Modulo optimizations for the same client (really, user), the data is only used for deduplication after scrubbing.
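    A sketch of that scrub step; `store` is any client exposing the hypothetical head_object/get_object helpers, and the expected SHA-256 comes from the client's original write request:

        import hashlib

        def scrub_write(store, container: str, name: str, expected_sha256: str,
                        deep: bool = False) -> bool:
            # Return True only when the object is safe to expose for global dedup.
            if not store.head_object(container, name):
                # Lost or never-completed write: the claimed chunk is not usable.
                return False
            if deep:
                # Optional stronger check: re-read the data and verify SHA-256
                # (existence alone only implies the store's MD5 check at PUT).
                data = store.get_object(container, name)
                if hashlib.sha256(data).hexdigest() != expected_sha256:
                    return False
            return True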
    82. Handling Object Store Eventual Consistency
        • Treat objects as immutable (even if modifications are allowed)
        • Use content-based names (generated using cryptographic hashes)
        • Tombstone names after Garbage Collection
        • Suffix a generation number to content-based names in case of resurrection
    83. Some object stores have eventually consistent properties that can lead to interesting read-after-write behaviors where what you read might not be the most recent write. (This varies not just between providers but sometimes between data centers of the same provider, e.g., US Standard for AWS vs. all other regions.) To address this, we treat all objects as immutable, use content-based names, and use a suffix-based method to tombstone names so that they are never reused. AWS S3 supporting read-after-first-put consistency in most regions also really helps with the above scheme.
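    A sketch of generation-suffixed content names; the exact suffix format is an assumption:

        import hashlib

        def object_name(ciphertext: bytes, generation: int) -> str:
            # Content-based name plus a generation suffix. Objects are immutable,
            # so a name, once written, always refers to one exact byte sequence.
            return f"{hashlib.sha256(ciphertext).hexdigest()}-g{generation}"

        # If garbage collection deletes ...-g1 and the same chunk is later
        # written again ("resurrected"), it becomes ...-g2: a stale, eventually
        # consistent read of the tombstoned name can't be mistaken for live data.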
    84. Security Architecture
    85. In theory, this is where we would discuss MagFS's security architecture. However, as you observed, security is baked into the product at every level and has been covered throughout the deck. We will therefore only recap here.
    86. Recap: On-Premises Security Model
        • User authentication and permissions derived from the native Active Directory setup
        • Encryption keys are never exposed to the cloud
        • Data and metadata are always encrypted: At-Rest and In-Flight
    87. A quick point about Active Directory (AD): the fact that all our user permissions, group membership information, and other authentication information are derived from AD makes it very simple for admins, and using MagFS does not change their workflows.
    88. Try MagFS at http://maginatics.com
    89. Please send questions or follow-ups to info@maginatics.com
