CloudStack S3

A tutorial on how to set up CloudStack to expose an S3 interface.

S3 is the Amazon Web Services Simple Storage Service. It is used to create buckets on a backend storage system and store objects in them. S3 is one of the most successful AWS web services (if not the most successful): it scales to billions of objects and serves millions of users.

In this talk we show how to enable an S3 service with the CloudStack management server. This is a tech preview to show the compatibility between CloudStack and AWS services. CloudStack does not implement a distributed data store behind this S3-compatible service but instead uses a traditional file system like NFS to store the objects. This has the advantage of giving users an S3-compatible interface to their CloudStack-based cloud.
In future Apache CloudStack releases a true S3 service will be available via storage systems such as Riak CS, GlusterFS and Ceph.



  1. CloudStack S3 configuration, Tech Preview. Sebastien Goasguen, August 23rd
  2. Introduction
     • CloudStack provides an S3-compatible interface.
     • In Apache CloudStack 4.0 (soon out), Cloudbridge is now an integral part of the management server and not a separate server.
     • This is not saying that CloudStack provides an S3 implementation. CloudStack supports object stores (e.g. Swift, GlusterFS) but is not itself an object store.
  3. Steps to use S3 in CloudStack
     • Specify the mount point where you want to store the objects
     • Enable the service via global configuration settings
     • Generate API keys for the user(s)
     • Register the user and associate a certificate
     • Use boto or other S3 clients
  4. S3 mount point
     • S3 properties are set in /path/to/source/awsapi/conf/cloud-bridge.properties, or on the management server at $CATALINA_HOME/conf/cloud-bridge.properties:

       host=http://localhost:8080/awsapi
       storage.root=/Users/john1/S3-Mount
       storage.multipartDir=__multipart__uploads__
       bucket.dns=false
       serviceEndpoint=localhost:8080

     • Edit storage.root to point to a file system mount point on the management server.
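Since the properties file is plain key=value lines, a short script can sanity-check the settings before restarting the management server. A minimal sketch, assuming a Java-style properties file; the `load_properties` helper and the /tmp path are illustrative, not part of CloudStack:

```python
import os

def load_properties(path):
    """Parse a simple Java-style key=value properties file into a dict."""
    props = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            key, _, value = line.partition("=")  # split on the first '=' only
            props[key.strip()] = value.strip()
    return props

# Write a sample file like the one on the slide, then check it.
sample = """\
host=http://localhost:8080/awsapi
storage.root=/tmp/s3mount
storage.multipartDir=__multipart__uploads__
bucket.dns=false
serviceEndpoint=localhost:8080
"""
with open("/tmp/cloud-bridge.properties", "w") as f:
    f.write(sample)

props = load_properties("/tmp/cloud-bridge.properties")
print(props["storage.root"])                  # /tmp/s3mount
print(os.path.isabs(props["storage.root"]))   # True: storage.root should be absolute
```

A check like the last line catches the most common mistake here: pointing storage.root at a relative path or a mount that does not exist on the management server.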
  5. Enabling S3
     • Via the GUI
     • Via an API call on the integration API port 8096:

       http://localhost:8096/client/api?command=updateConfiguration&name=enable.s3.api&value=true
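The integration port accepts unauthenticated calls, so the request above is just a GET with query parameters. A small sketch that builds the URL (the `integration_url` helper is illustrative; the host and port are the slide's defaults):

```python
from urllib.parse import urlencode

def integration_url(command, host="localhost", port=8096, **params):
    """Build an unauthenticated integration-API URL for a CloudStack command."""
    query = urlencode({"command": command, **params})
    return f"http://{host}:{port}/client/api?{query}"

url = integration_url("updateConfiguration", name="enable.s3.api", value="true")
print(url)
# Against a running management server, the call itself would be:
#   import urllib.request
#   urllib.request.urlopen(url).read()
```

Note that the integration port must itself be enabled in the global settings for this to work.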
  6. Enabling S3
     • Via an authenticated API call on port 8080 (e.g. using a Python client):

       apiurl = 'http://localhost:8080/client/api'
       cloudstack = CloudStack.Client(apiurl, apikey, secretkey)
       cloudstack.updateConfiguration({'name': 'enable.s3.api', 'value': 'true'})
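If no client library is at hand, the authenticated call can also be made by hand: CloudStack signs requests by sorting the query parameters, lowercasing the resulting string, computing an HMAC-SHA1 over it with the secret key and base64-encoding the digest. A minimal sketch of that signing scheme; the key values are placeholders:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote, urlencode

def sign_request(params, secretkey):
    """Compute a CloudStack API signature for a dict of query parameters."""
    # Sort parameters by name, URL-encode values, lowercase the whole
    # string, then HMAC-SHA1 it with the secret key and base64-encode.
    pairs = sorted(params.items())
    to_sign = "&".join(f"{k}={quote(str(v), safe='*')}" for k, v in pairs).lower()
    digest = hmac.new(secretkey.encode(), to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

params = {
    "command": "updateConfiguration",
    "name": "enable.s3.api",
    "value": "true",
    "apikey": "my-api-key",    # placeholder: the user's API key
    "response": "json",
}
signature = sign_request(params, "my-secret-key")  # placeholder secret key
url = "http://localhost:8080/client/api?" + urlencode({**params, "signature": signature})
print(url)
```

The signed URL can then be fetched with any HTTP client against the management server on port 8080.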
  7. Generate Keys
     • Via the GUI
  8. Generate Keys
     • Via the API:

       http://localhost:8096/client/api?command=registerUserKeys&id=<id of the user>
  9. Register the user
     • Get the script from the source at /path/to/source/awsapi-setup/setup/cloudstack-aws-api-register

       cloud-bridge-register --apikey=<User's CloudStack API key>
                             --secretkey=<User's CloudStack Secret key>
                             --cert=</path/to/cert.pem>
                             --url=http://<cloudstack-server-ip>:8080/awsapi
  10. S3 Boto example 1/4
      • Import the boto S3 modules:

        >>> from boto.s3.key import Key
        >>> from boto.s3.connection import S3Connection
        >>> from boto.s3.connection import OrdinaryCallingFormat

      • Set your API keys, calling format and create the connection to the S3 endpoint:

        >>> apikey='ChOw-pwdcCFy6fpeyv6kUaR0NnhzmG3tE7HLN2z3OB_s-ogF5HjZtN4rnzKnq2UjtnHeg_RjeDgdDAPyLA5gOw'
        >>> secretkey='IMY8R7CJQiSGFk4cHwfXXN3DUFXz07cCiU80eM3MCmfLs7kusgyOfm0g9qzXRXhoAPCOllGt637cWH-IRxXc3w'
        >>> cf=OrdinaryCallingFormat()
        >>> conn=S3Connection(aws_access_key_id=apikey, aws_secret_access_key=secretkey, is_secure=False, host='localhost', port=8080, calling_format=cf, path='/awsapi/rest/AmazonS3')
  11. S3 Boto example 2/4
      • Note the path of the connection: /awsapi/rest/AmazonS3. This is not consistent with the EC2 endpoint and will probably be fixed soon; it is also not consistent with the information in the configuration file. That's why it's a Tech Preview.
      • Help welcome!
  12. S3 Boto example 3/4
      • Once you have the connection, start by creating a bucket, get a key and store a value for that key in the bucket:

        >>> conn.create_bucket('test')
        <Bucket: test>
        >>> b=conn.get_bucket('test')
        >>> k=Key(b)
        >>> k.set_contents_from_string('This is a test')
        >>> k.get_contents_as_string()
        'This is a test'
  13. S3 Boto example 4/4
      • Same thing with a file:

        >>> conn.create_bucket('cloud')
        <Bucket: cloud>
        >>> b=conn.get_bucket('cloud')
        >>> k=Key(b)
        >>> k.set_contents_from_filename('/Users/runseb/Desktop/code/s3cs.py')
        >>> k.get_contents_to_filename('/Users/runseb/Desktop/code/foobar')
        >>> conn.get_all_buckets()
        [<Bucket: test>, <Bucket: cloud>]
  14. Example of S3 database tables
      • The cloudbridge database on the management server contains information about the users registered:

        mysql> select * from usercredentials;
        | ID | AccessKey | SecretKey | CertUniqueId |
        | 1 | ChOw-pwdcCFy6fpeyv6kUaR0NnhzmG3tE7HLN2z3OB_s-ogF5HjZtN4rnzKnq2UjtnHeg_RjeDgdDAPyLA5gOw | IMY8R7CJQiSGFk4cHwfXXN3DUFXz07cCiU80eM3MCmfLs7kusgyOfm0g9qzXRXhoAPCOllGt637cWH-IRxXc3w | CN=AWS Limited-Assurance CA, OU=AWS, O=Amazon.com, C=US, serial=570614354026 |

      • As well as the buckets (snippet cut):

        mysql> select * from sbucket;
        | ID | Name | OwnerCanonicalID | SHostID | CreateTime |
        | 1 | test | ChOw-pwdcCFy6fpeyv6kUaR0NnhzmG3tE7HLN2z... | | ...23:42:21 |
        | 2 | cloud | ChOw-pwdcCFy6fpeyv6kUaR0NnhzmG3tE7HLN2z3OB_s-... | | |
  15. Mount point
      • The mount point now contains a flat directory structure with two buckets, and in each bucket a file containing the value for that key:

        root@devcloud:/tmp/s3mount# ls -l
        total 8
        drwxr-xr-x 2 root root 4096 Aug 23 16:45 cloud
        drwxr-xr-x 2 root root 4096 Aug 23 16:47 test
        root@devcloud:/tmp/s3mount# cat test/2
        This is a test
  16. Conclusions
      • This was all tested with DevCloud
      • Join the discussion on the future of the EC2/S3 compatibility of CloudStack:
        cloudstack-dev@incubator.apache.org
        #cloudstack on irc.freenode.net
        @CloudStack on Twitter
