
NFS and CIFS Options for AWS (STG401) | AWS re:Invent 2013


In this session, you learn about the use cases for the Network File System (NFS) and the Common Internet File System (CIFS), and when NFS and CIFS are appropriate on AWS. We cover the use cases for ephemeral storage, Amazon EBS, Amazon EBS Provisioned IOPS, and Amazon S3 as the persistent stores for NFS and CIFS shares. We share AWS CloudFormation templates that build multiple solutions (a single instance with Amazon EBS, clustered instances with Amazon EBS, and a Gluster cluster) as well as introduce AWS partner solutions.


  1. STG401 - NFS and CIFS options on AWS. Craig Carl, AWS. November 15, 2013. © 2013, Amazon Web Services, Inc. and its affiliates. All rights reserved. May not be copied, modified, or distributed in whole or in part without the express consent of Amazon Web Services, Inc.
  2. What are NFS and CIFS?
     • Protocols used to implement shared access to files
     • Different from block and object storage
     • Current versions: NFS v4, SMB v3
  3. Do I really need a POSIX file system?
     • Legacy applications
     • Shared/clustered databases
     • Multi-instance read and write access to the same data set
  4. Important considerations
     • Availability: single AZ = no durability commitments; dual AZ = 99.95% available
     • Durability: depends on the backing store
     • Performance: network interface, EBS interface, EBS performance
     • Consistency: pay attention to replication types
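The dual-AZ availability figure above can be translated into an annual downtime budget. A quick sketch (the 99.95% figure is from the slide; the arithmetic and helper function are mine):

```shell
# Convert an availability percentage into allowed downtime per year.
# 99.95% available leaves 0.05% of 525,600 minutes/year as downtime.
downtime_minutes() {
  awk -v a="$1" 'BEGIN { printf "%.1f\n", (1 - a/100) * 525600 }'
}
downtime_minutes 99.95   # dual-AZ figure from the slide -> 262.8 minutes/year
downtime_minutes 99.0    # for comparison -> 5256.0 minutes/year
```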
  5. Backing stores
     • EBS: between 0.1% and 0.5% AFR per volume
     • Ephemeral:
       – hs1.8xlarge: 48 terabytes of storage across 24 hard disk drives
       – i2.8xlarge: ~5.7 terabytes of storage across 8 SSDs
     • S3: designed for 99.999999999% durability
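Striping EBS volumes with MDADM RAID 0 (as the following slides do) multiplies exposure to the per-volume AFR quoted above, since losing any one volume loses the array. A rough arithmetic sketch (the 0.1%-0.5% AFR figures are from the slide; the formula assumes independent volume failures):

```shell
# Approximate annual failure rate of an N-volume RAID 0 array:
# the array fails if any single volume fails, so AFR = 1 - (1 - p)^n.
combined_afr() {
  n=$1; p=$2
  awk -v n="$n" -v p="$p" 'BEGIN { printf "%.4f\n", 1 - (1 - p)^n }'
}
combined_afr 6 0.001   # 6 volumes at 0.1% AFR -> ~0.60% per year
combined_afr 6 0.005   # 6 volumes at 0.5% AFR -> ~2.96% per year
```

This is why the single-instance design pairs the RAID 0 array with regular EBS snapshots.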
  6. Single EBS-backed instance (diagram: NFS and Samba served from an MDADM RAID 0 array of EBS volumes, backed up with ec2-consistent-snapshot)
  7. Public-facing IP interface
     • Low performance: t1.micro, m1.small, m1.medium, m1.large, c1.medium, m2.xlarge, m2.2xlarge, m3.xlarge
     • Moderate performance: c1.xlarge, m1.xlarge, m3.2xlarge, m2.4xlarge
     • 10 gigabit interface: cc1.4xlarge, cc2.8xlarge, cg1.4xlarge, cr1.8xlarge, hi1.4xlarge, i2.8xlarge, hs1.8xlarge
  8. EBS-facing interface
     • Variable everything, except EBS-Optimized instances
     • 500 megabit, committed (EBS Optimized = yes): m1.large, m2.2xlarge, m3.xlarge
     • 1 gigabit, committed (EBS Optimized = yes): m1.xlarge, m2.4xlarge, c1.xlarge, m3.2xlarge
     • 10 gigabit, shared with public traffic: cc1.4xlarge, cc2.8xlarge, cg1.4xlarge, cr1.8xlarge, hi1.4xlarge, i2.8xlarge, hs1.8xlarge
  9. Single EBS-backed instance
     # sudo yum install nfs-utils nfs-utils-lib samba samba-common cups-libs
     # --size 100 --count 6 --raidlevel 0 --mountpoint /exports --wipe --attach
     # sudo vim /etc/exports
     # sudo vim /etc/samba/smb.conf
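The two files edited above might look like this minimal sketch; the /exports path is from the slide, while the client network range, export options, and share name are hypothetical:

```
# /etc/exports -- export /exports to a hypothetical VPC subnet
/exports 10.0.0.0/16(rw,sync,no_root_squash)

# /etc/samba/smb.conf -- minimal share definition (share name is hypothetical)
[exports]
   path = /exports
   writable = yes
   guest ok = no
```

After editing, `exportfs -ra` reloads the NFS exports and a Samba restart picks up the new share.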
  10. Ephemeral-backed instance with DRBD to EBS (diagram: NFS and Samba served from an ephemeral MDADM RAID 0 array, replicated with asynchronous DRBD protocol A to an EBS MDADM RAID 0 array; snapshot the EBS array). Monitoring the latency of this replication is critical!
      # cat /proc/drbd   # look for 'oos'
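The 'oos' (out-of-sync) counter the slide points at is one field of a /proc/drbd status line. A small sketch of extracting it for monitoring; the sample line is illustrative, not captured from a real cluster:

```shell
# A /proc/drbd status line carries an oos: counter (out-of-sync data).
# In production, read the real file: sample=$(cat /proc/drbd)
sample='ns:12345 nr:0 dw:12345 dr:6789 al:5 bm:2 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:2048'
# Pull out just the numeric value after "oos:".
oos=$(echo "$sample" | grep -o 'oos:[0-9]*' | cut -d: -f2)
echo "$oos"
```

If this value keeps growing, protocol A replication is falling behind the write rate and the EBS copy is increasingly stale.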
  11. Ephemeral-backed instance with DRBD to EBS
      # /etc/drbd.d/global_common.conf
      global { usage-count yes; }
      common { net { protocol A; } }
  12. Ephemeral-backed instance with DRBD to EBS
      # /etc/drbd.d/r0.res
      resource r0 {
        on az-a {
          device /dev/drbd0;
          disk /dev/md0;
          address;
          meta-disk internal;
        }
        on az-b {
          device /dev/drbd1;
          disk /dev/md1;
          address;
          meta-disk internal;
        }
      }
  13. Pacemaker clustered with EBS (diagram: NFS and Samba instances in AZ-A and AZ-B, clustered with Pacemaker, each backed by EBS and replicated with DRBD protocol C)
  14. Pacemaker clustered with EBS
      #!/bin/sh
      VIP=
      REGION=us-east-1
      Instance_ID=`/usr/bin/curl --silent http://169.254.169.254/latest/meta-data/instance-id`
      ENI_ID=`aws ec2 describe-instances --instance-id $Instance_ID --region $REGION | grep NetworkInterfaceId | cut -d '"' -f 4`
      aws ec2 assign-private-ip-addresses --network-interface-id $ENI_ID --private-ip-addresses $VIP --allow-reassignment --region $REGION
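The grep/cut extraction in the failover script above depends on the exact layout of the pretty-printed describe-instances JSON. A small sketch of what it pulls out, using an illustrative response line (the sample value is not from the deck):

```shell
# Sample line as it appears in pretty-printed describe-instances output
# (illustrative ENI id; the real value comes from the AWS CLI).
sample='        "NetworkInterfaceId": "eni-0123abcd",'
# Same pipeline as the slide's script: field 4 when split on double quotes.
eni=$(echo "$sample" | grep NetworkInterfaceId | cut -d '"' -f 4)
echo "$eni"
```

A more robust alternative is the CLI's built-in JMESPath filter, e.g. `--query 'Reservations[].Instances[].NetworkInterfaces[].NetworkInterfaceId' --output text`, which avoids parsing JSON with grep.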
  15. Pacemaker clustered with Ephemeral (diagram: instances in AZ-A and AZ-B)
  16. Pacemaker clustered with Ephemeral + EBS (diagram: MDADM RAID 0 arrays in AZ-A and AZ-B, replicated with DRBD protocol A, asynchronous)
  17. Gluster (diagram: GlusterFS nodes exporting NFS, spread across AZ-A and AZ-B)
  18. Gluster
      # on the server
      # gluster volume create replica 2 glu-volume / / / /
      # on the client
      # mount -t glusterfs /mnt/glusterfs
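The brick arguments are missing from the scraped slide. A sketch of how the full command might be assembled, with hypothetical hostnames (server-a1/a2 in AZ-A, server-b1/b2 in AZ-B) and the volume name placed first as the gluster CLI expects; with `replica 2`, bricks are paired in listed order, so alternating AZs keeps each replica pair split across AZs:

```shell
# Hypothetical brick hosts; alternate AZ-A and AZ-B so each
# consecutive replica pair spans both availability zones.
bricks="server-a1:/brick server-b1:/brick server-a2:/brick server-b2:/brick"
cmd="gluster volume create glu-volume replica 2 $bricks"
echo "$cmd"   # run on a GlusterFS server node
```

On the client, the mount then names any one server and the volume, e.g. `mount -t glusterfs server-a1:/glu-volume /mnt/glusterfs`.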
  19. Windows Server 2012 (diagram: SMB v3 share backed by EBS)
  20. Windows Server 2012 with DFS (diagram: SMB v3 shares in AZ-A and AZ-B, each backed by EBS, with Windows Server 2012 DFS synchronous replication between them)
  21. Partner options
  22. Red Hat Storage
      • A supported version of Gluster
      • Try it now
  23. Elastic File System for the Cloud
      • Strongly consistent cross-OS distributed file system
      • Migrate unmodified applications to AWS
      • Multi-AZ HA and cross-region DR
      • Inline deduplication and end-to-end security
      • Clients access S3 directly for scale and performance
      • TestDrive now at
  24. • Multi-AZ redundant
      • Exports NFS, CIFS, and iSCSI
      • Supports ZFS to tier between ephemeral and EBS
      • Supports S3 as a backing store
      • Available now in AWS Marketplace
  25. Noam Shendar @noamshen
  26. Virtual Private Storage Array (VPSA™) Service
      • Private Storage-as-a-Service for AWS customers
      • Billed hourly, with no AMIs needed
      • Low-latency (1~2 ms) attach to AWS instances
      • Global footprint: US East/West, Europe, Japan
      • File (NFS and CIFS) and block (iSCSI)
      • Ridiculously high QoS
      • True HA (no single point of failure, 100% SLA)
  27. Zadara via AWS Direct Connect (diagram: availability zones X and Y in AWS Regions A and N, connected over AWS Direct Connect to Zadara Cloud A in San Jose, N. Virginia, and L.A. (coming soon) and Zadara Cloud N in Tokyo and Dublin, with secure remote replication between Zadara clouds)
  28. Easy provisioning!
  29. Why Zadara VPSA?
      • SSD read/write caching
      • 100 TB+ volumes
      • Shared volumes
      • Low-impact snapshots
      • NFS and CIFS
      • Remote replication with a 5-minute RPO!
  30. Why Zadara VPSA?
      • High random write performance, assisted by a write cache
      • Data-at-rest encryption
      • Zero-capacity instant cloning, e.g. for test/dev
      • 100s of volumes
  31. Business continuity and disaster recovery
      • Protect: low-impact snapshots, available immediately; snapshot-based, latency-tolerant remote replication for multi-region disaster recovery
      • Recover: instant, zero-capacity cloning of snapshots; RPO: 5 minutes
  32. iG
      • The largest Internet portal in Brazil
      • 5 TB NFS volumes shared by 170 instances connected to a single VPSA
  33. Questions? Craig Carl, STG401
  34. Please give us your feedback on this presentation (STG401). As a thank-you, we will select prize winners daily for completed surveys!