NFS and CIFS Options for AWS (STG401) | AWS re:Invent 2013

In this session, you learn about the use cases for the Network File System (NFS) and Common Internet File System (CIFS) protocols, and when NFS and CIFS are appropriate on AWS. We cover the use cases for ephemeral storage, Amazon EBS, Amazon EBS Provisioned IOPS, and Amazon S3 as the persistent stores for NFS and CIFS shares. We share AWS CloudFormation templates that build multiple solutions (a single instance with Amazon EBS, clustered instances with Amazon EBS, and a Gluster cluster), and introduce AWS partner solutions.

STG401 - NFS and CIFS options on AWS
Craig Carl, AWS
November 15, 2013

© 2013 Amazon.com, Inc. and its affiliates. All rights reserved. May not be copied, modified, or distributed in whole or in part without the express consent of Amazon.com, Inc.

What are NFS and CIFS?
• Protocols used to implement shared access to files
• Different from block and object storage
• Current versions
  – NFS v4
  – SMB v3

Do I really need a POSIX file system?
• Legacy applications
• Shared/clustered databases
• Multi-instance read and write access to the same data set

Important considerations
• Availability
  – Single AZ = no durability commitments
  – Dual AZ = 99.95% available
• Durability
  – Backing store
• Performance
  – Network interface
  – EBS interface
  – EBS performance
• Consistency
  – Pay attention to replication types

Backing stores
• EBS
  – between 0.1% and 0.5% AFR per volume
• Ephemeral
  – hs1.8xlarge
    • 48 terabytes of storage across 24 hard disk drives
  – i2.8xlarge
    • ~5.7 terabytes of storage across 8 SSDs
• S3
  – designed for 99.999999999% durability

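A RAID 0 stripe has no redundancy, so the per-volume AFR compounds: the array is lost if any one member fails. A back-of-the-envelope check, assuming independent failures at the 0.5% upper bound and the six-volume stripe used later in this deck:

P(array loss) = 1 - (1 - 0.005)^6 ≈ 3.0% per year

which is why snapshotting (or replicating) the array matters more than the per-volume figure suggests.
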
Single EBS-backed instance
[Diagram: a single instance exporting NFS and CIFS (Samba) from an MDADM RAID 0 array of EBS volumes; raidformer.py builds the array, ec2-consistent-snapshot backs it up]

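Snapshotting a striped array only yields a usable backup if every member volume is captured at the same instant. A minimal sketch of the freeze/snapshot/thaw cycle using fsfreeze and the AWS CLI; the volume IDs, mount point, and region are placeholders, and it assumes a single DRBD-free ext4 or XFS filesystem on the array:

#!/bin/sh
# Quiesce the filesystem so all EBS members see a consistent state.
sudo fsfreeze --freeze /exports
for VOL in vol-11111111 vol-22222222 vol-33333333; do
    aws ec2 create-snapshot --volume-id $VOL \
        --description "exports RAID member $VOL" --region us-east-1
done
# EBS snapshots are point-in-time as of the API call, so thaw immediately.
sudo fsfreeze --unfreeze /exports

ec2-consistent-snapshot packages much the same sequence (plus MySQL flushing) into a single command.
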
Public facing IP interface
• Low performance
  – t1.micro, m1.small, m1.medium, m1.large, c1.medium, m2.xlarge, m2.2xlarge, m3.xlarge
• Moderate performance
  – c1.xlarge, m1.xlarge, m3.2xlarge, m2.4xlarge
• 10 Gigabit interface
  – cc1.4xlarge, cc2.8xlarge, cg1.4xlarge, cr1.8xlarge, hi1.4xlarge, i2.8xlarge, hs1.8xlarge

EBS facing interface
• Variable
  – everything except EBS-Optimized instances
• 500 megabit, committed (EBS Optimized = yes)
  – m1.large, m2.2xlarge, m3.xlarge
• 1 gigabit, committed (EBS Optimized = yes)
  – m1.xlarge, m2.4xlarge, c1.xlarge, m3.2xlarge
• 10 gigabit, shared with public traffic
  – cc1.4xlarge, cc2.8xlarge, cg1.4xlarge, cr1.8xlarge, hi1.4xlarge, i2.8xlarge, hs1.8xlarge

Single EBS-backed instance
# sudo yum install nfs-utils nfs-utils-lib samba samba-common cups-libs
# raidformer.py --size 100 --count 6 --raidlevel 0 --mountpoint /exports --wipe --attach
# sudo vim /etc/exports
# sudo vim /etc/samba/smb.conf

raidformer.py -- https://github.com/jsmartin/raidformer

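What goes into those two files is the heart of the setup. A minimal sketch, assuming a 10.1.0.0/16 VPC and the /exports mount point from the raidformer.py run above; the share name and options are illustrative and should be tuned for your clients:

# /etc/exports -- export the array over NFS to the VPC
/exports 10.1.0.0/16(rw,async,no_root_squash,no_subtree_check)

# /etc/samba/smb.conf -- export the same tree over CIFS
[exports]
    path = /exports
    read only = no
    browseable = yes

# sudo exportfs -ra
# sudo service nfs start && sudo service smb start
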
Ephemeral backed instance with DRBD to EBS
[Diagram: NFS/Samba served from an MDADM RAID 0 array of ephemeral disks, replicated by DRBD protocol A to a second MDADM RAID 0 array of EBS volumes; snapshot the EBS array]
• This is asynchronous replication
• Monitoring the latency of this replication is critical!
  – # cat /proc/drbd   # look for 'oos'

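The 'oos' (out-of-sync) counter in /proc/drbd is the number of KiB the EBS side is lagging behind. A minimal cron-able sketch that flags excessive lag; it assumes a single DRBD resource, and the threshold and alert mechanism are placeholders:

#!/bin/sh
# Alert if DRBD out-of-sync data exceeds ~100 MiB (102400 KiB).
THRESHOLD=102400
OOS=$(awk -F'oos:' '/oos:/ {split($2, a, " "); print a[1]}' /proc/drbd)
if [ "${OOS:-0}" -gt "$THRESHOLD" ]; then
    logger -t drbd-lag "DRBD replication lag: ${OOS} KiB out of sync"
fi
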
Ephemeral backed instance with DRBD to EBS
# /etc/drbd.d/global_common.conf
global {
    usage-count yes;
}
common {
    net {
        protocol A;   # asynchronous: writes are acknowledged once they reach
                      # the local disk and the TCP send buffer
    }
}

Ephemeral backed instance with DRBD to EBS
# /etc/drbd.d/r0.res
# Both sides of the resource live on this one instance, so the host
# sections are distinguished by address:port rather than by hostname.
resource r0 {
    floating 10.1.1.1:7789 {
        device    /dev/drbd0;
        disk      /dev/md0;    # ephemeral MDADM RAID 0 array
        meta-disk internal;
    }
    floating 10.1.1.1:7790 {
        device    /dev/drbd1;
        disk      /dev/md1;    # EBS MDADM RAID 0 array
        meta-disk internal;
    }
}

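Bringing the resource online follows the usual DRBD sequence; a hedged sketch, assuming the configuration above, with the ephemeral side forced primary for the initial sync:

# sudo drbdadm create-md r0         # write metadata on both backing arrays
# sudo drbdadm up r0                # attach disks and start replication
# sudo drbdadm primary --force r0   # force one side primary for the first sync
# sudo mkfs.ext4 /dev/drbd0 && sudo mount /dev/drbd0 /exports
# watch cat /proc/drbd              # wait for the initial sync to finish
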
Pacemaker clustered with EBS
[Diagram: one NFS/Samba instance in AZ-A and one in AZ-B, each on its own EBS array, coordinated by Pacemaker; DRBD protocol C (synchronous) replication between the arrays]

Pacemaker clustered with EBS
#!/bin/sh
# Claim the cluster's virtual IP by reassigning it to this instance's ENI.
VIP=10.1.1.1
REGION=us-east-1
Instance_ID=`/usr/bin/curl --silent http://169.254.169.254/latest/meta-data/instance-id`
ENI_ID=`aws ec2 describe-instances --instance-ids $Instance_ID --region $REGION | grep NetworkInterfaceId | cut -d '"' -f 4`
aws ec2 assign-private-ip-addresses --network-interface-id $ENI_ID --private-ip-addresses $VIP --allow-reassignment --region $REGION

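Pacemaker runs a script like this from a resource agent so the VIP follows whichever node is promoted. A minimal crm configuration sketch, assuming the DRBD resource r0 from earlier and a custom OCF agent (here called ocf:custom:aws-vip, a hypothetical name) wrapping the script above, since the stock IPaddr2 agent cannot move an EC2 secondary private IP:

# crm configure
primitive drbd_r0 ocf:linbit:drbd params drbd_resource=r0 \
    op monitor interval=30s
ms ms_drbd_r0 drbd_r0 meta master-max=1 clone-max=2 notify=true
primitive fs_exports ocf:heartbeat:Filesystem \
    params device=/dev/drbd0 directory=/exports fstype=ext4
primitive vip_exports ocf:custom:aws-vip   # wraps the assign-private-ip script
primitive nfs_server lsb:nfs
group grp_exports fs_exports vip_exports nfs_server
colocation col_exports inf: grp_exports ms_drbd_r0:Master
order ord_exports inf: ms_drbd_r0:promote grp_exports:start
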
Pacemaker clustered with Ephemeral
[Diagram: the same two-node Pacemaker layout spanning AZ-A and AZ-B, backed by ephemeral storage]

Pacemaker clustered with Ephemeral + EBS
[Diagram: two-node Pacemaker cluster spanning AZ-A and AZ-B; each node serves from an ephemeral MDADM RAID 0 array and replicates to EBS with DRBD protocol A (asynchronous)]

Gluster
[Diagram: GlusterFS bricks spread across AZ-A and AZ-B, each node also exporting NFS]

Gluster
# on the server
# gluster volume create glu-volume replica 2 \
    10.0.0.1:/gluster 10.0.1.1:/gluster 10.0.0.2:/gluster \
    10.0.1.2:/gluster 10.0.0.3:/gluster 10.0.1.3:/gluster \
    10.0.0.4:/gluster 10.0.1.4:/gluster 10.0.0.5:/gluster \
    10.0.1.5:/gluster
# with replica 2, consecutive bricks are paired as mirrors, so listing
# bricks in alternating AZs puts every replica pair across AZ-A and AZ-B

# on the client
# mount -t glusterfs 10.0.0.1:/glu-volume /mnt/glusterfs

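The server named in the mount command is only contacted to fetch the volume file; after that the client talks to all bricks directly. A hedged sketch of a more resilient mount, assuming the stock backupvolfile-server mount option:

# mount -t glusterfs -o backupvolfile-server=10.0.1.1 \
    10.0.0.1:/glu-volume /mnt/glusterfs
# gluster volume info glu-volume   # verify brick order and replica pairing
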
Windows Server 2012
[Diagram: a single Windows Server 2012 instance sharing EBS-backed storage over SMB v3]

Windows Server 2012 with DFS
[Diagram: Windows Server 2012 instances in AZ-A and AZ-B, each sharing over SMB v3 and backed by EBS, kept in step by DFS (synchronous replication)]

partner options

Red Hat Storage
• A supported version of Gluster
• Try it now
  – https://testdrive.redhat.com/

Elastic File System for the Cloud
• Strongly consistent cross-OS distributed file system
• Migrate unmodified applications to AWS
• Multi-AZ HA and cross-region DR
• Inline deduplication and end-to-end security
• Clients access S3 directly for scale and performance
• TestDrive now at testdrive.maginatics.com

• Multi-AZ redundant
• Exports NFS, CIFS and iSCSI
• Supports ZFS to tier between ephemeral and EBS
• Supports S3 as a backing store
• Available now in AWS Marketplace

Noam Shendar
noam@zadarastorage.com
@noamshen

Virtual Private Storage Array (VPSA™) Service
• Private Storage-as-a-Service for AWS customers
• Billed hourly, with no AMIs needed
• Low-latency (1-2 ms) attach to AWS instances
• Global footprint: US East/West, Europe, Japan
• File (NFS and CIFS) and Block (iSCSI)
• Ridiculously high QoS
• True HA (no single point of failure, 100% SLA)

Zadara via AWS Direct Connect
[Diagram: AWS Regions A and N, each with Availability Zones X and Y, attached to Zadara Cloud A and Zadara Cloud N over AWS Direct Connect (San Jose & N. Va., Tokyo; Dublin and L.A. coming soon), with secure remote replication between the Zadara clouds]

Easy Provisioning!

Why Zadara VPSA?
• SSD read/write caching
• 100 TB+ volumes
• Shared volumes
• Low-impact snapshots
• NFS and CIFS
• Remote replication (5-minute RPO!)

Why Zadara VPSA?
• High random write performance
  – Write cache assisted
• Data-at-Rest Encryption
• Zero-capacity instant cloning, e.g. for test/dev
• 100s of volumes

Business Continuity & Disaster Recovery
Protect
• Low-impact snapshots, available immediately
• Snapshot based, latency-tolerant Remote Replication for multi-region Disaster Recovery
Recover
• Instant, zero-capacity cloning of snapshots
• RPO: 5 minutes

iG
• The largest Internet portal in Brazil
• 5 TB NFS volumes shared by 170 instances connected to a single VPSA

questions?
craig carl
crcarl@amazon.com
STG401

Please give us your feedback on this presentation (STG401). As a thank you, we will select prize winners daily for completed surveys!