Swift is an open source object storage system that provides scalable, reliable, and inexpensive storage and retrieval of any amount of unstructured data over HTTP. Some key uses of Swift include storing backups, web content such as images, and large scientific data objects. Swift uses a ring architecture to distribute and replicate data across multiple servers for high availability.
OpenStack Swift is a very powerful object store that is used in several of the largest object storage deployments around the globe. It ensures a very high level of data durability and can withstand epic disasters if set up in the right way.
2. Index
● What is Object storage
● A quick look to Amazon S3
● Swift use cases
● History & Architecture
● Swift features
● The API
● Demo using Cyberduck
3. What is object storage
● HTTP-accessible storage of objects (files) in buckets (folders)
● Like FTP or WebDAV
● Added access security and metadata
● Everything is a URL
● Cheap and hassle-free
○ Notion of unlimited capacity
○ No fragmentation or integrity checks
○ No locks or concurrency problems
○ Support for partial reads and writes
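Since every account, container, and object maps to a URL, a plain HTTP client is all that is needed, and partial reads are ordinary Range requests. A minimal sketch of the URL and header shapes (the endpoint and token below are hypothetical placeholders, not real credentials):

```python
# Sketch of the "everything is a URL" model. The storage URL and token
# are made-up placeholders for illustration.
STORAGE_URL = "https://swift.example.com/v1/AUTH_demo"  # account URL (assumed)

def object_url(container, obj):
    """Objects live at <storage-url>/<container>/<object>."""
    return f"{STORAGE_URL}/{container}/{obj}"

def ranged_read_headers(token, start, end):
    """Partial reads are plain HTTP Range requests."""
    return {"X-Auth-Token": token, "Range": f"bytes={start}-{end}"}

print(object_url("photos", "moon.jpg"))
# A GET on that URL with these headers fetches only the requested bytes.
print(ranged_read_headers("tok123", 0, 1023))
```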
4. What is object storage
● Designed for cloud-era requirements
○ Secure
○ Reliable
○ Scalable
○ Fast
○ Inexpensive
○ Simple
6. Some Amazon S3 use cases
● Content storage and distribution
○ Serve static files or whole websites from S3 directly
● Better scalability for the web server tier
○ Reduces 'data gravity': low I/O on the server, all HTTP
● Storage for data analysis
● Fine-grained access control to buckets
● Backup, archiving and disaster recovery
○ even if Amazon Glacier is a cheaper option
● ... but it's not a Content Delivery Network
○ doesn't optimize routing for lowest latency
○ is not optimized for content streaming
○ that's why Amazon CloudFront exists
7. The cost of Amazon S3
● Main reason to use S3: price
● Example: 1 TB stored, 100 GB modified per month
○ Storage cost: $85/month
○ Data transfer (upload): $0
○ Data transfer (download): $12, at $0.12/GB
● A cheaper option: reduced redundancy (99.9% durability instead of 99.999999999%)
○ Storage cost: $68
● Even cheaper, but just for backups (very limited functionality): Glacier
○ Storage cost: $10
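The figures above follow directly from the per-GB rates; a quick check of the arithmetic (prices as quoted on the slide, circa the talk — they do not reflect current AWS pricing):

```python
# Reproduce the slide's S3 cost example with the per-GB rates it quotes.
GB_MONTH_STANDARD = 0.085   # $/GB/month, standard redundancy
GB_DOWNLOAD = 0.12          # $/GB transferred out

stored_gb = 1000            # ~1 TB
downloaded_gb = 100         # modified/downloaded per month

storage_cost = stored_gb * GB_MONTH_STANDARD   # ~$85/month
download_cost = downloaded_gb * GB_DOWNLOAD    # ~$12/month
print(f"storage ${storage_cost:.0f}/month, download ${download_cost:.0f}/month")
```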
8. Swift use cases
What it is
● An object storage system
● Massively scalable
● Runs on commodity hardware
● An S3-like solution
What it is NOT
● A hard drive or filesystem
● An NFS / SMB share
● Block storage
● Any SAN/NAS/DAS
● Not even a CDN
9. Swift use cases
● Multi-tenancy
○ Ideal for public or private clouds
○ Different URLs, groups of users, access codes, fine-grained privileges
● Backups
○ Write once, read never (long-term archiving)
○ Disaster recovery
● Web content
○ Write many, read many
○ File-sharing websites (temporary access)
○ Static websites or media-focused blogs (e.g. imgur)
● Large objects
○ Medical/scientific images
○ Store your fancy images from the moon (e.g. NASA)
○ Store your VMs from the cloud
10. History
● Rackspace Cloud Files V1
○ Distributed storage
○ Centralized metadata
○ PostgreSQL DB
● 2009: Rackspace Cloud Files V2 (Swift)
○ Full redesign and rewrite; open source
○ API compatible with Amazon S3
○ Worked closely with ops
○ Distributed storage and metadata
○ Logical placement, based on an algorithm
11. Swift architecture
● Highly available, distributed, eventually consistent object storage, using commodity servers
● Eventually consistent: a write is acknowledged before full replication is confirmed
○ In terms of the CAP theorem, Swift chose:
■ availability and partition tolerance
■ dropping consistency
● 3 rings to replicate:
○ Accounts
○ Containers
○ Objects
12. Swift architecture
[Diagram: a tier of proxy nodes in front of a tier of storage nodes, connected through the Ring]
● Multiple components, usually on two types of nodes
○ Proxy servers: run the swift-proxy-server process, which proxies requests to the appropriate storage nodes. It also hosts the TempAuth service as WSGI middleware.
○ Storage servers: run the swift-account-server, swift-container-server, and swift-object-server processes, which control storage of the account databases, the container databases, and the actual stored objects.
13. Swift architecture
● Proxy tier
○ Handles incoming requests
○ Scales horizontally
14. Swift architecture
● The Ring
○ Maps data (accounts, containers, objects) to storage servers
○ Example: 3-way replication
15. Swift architecture
● Storage zones
○ Isolate physical failures
16. Swift architecture
● Quorum writes
○ The proxy acknowledges after the 2nd replica is OK; it does not wait for the 3rd
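The quorum rule can be sketched as: with N replicas the proxy waits for a majority of successful writes before acknowledging the client, so one slow or failed replica does not block the request. A simplified model of that decision:

```python
def quorum(replica_count):
    """Majority quorum: smallest number of successful replica writes
    the proxy waits for before acknowledging the client."""
    return replica_count // 2 + 1

def proxy_ack(write_results):
    """write_results: one boolean per replica write attempt.
    The proxy acknowledges once a majority succeeded; the remaining
    replica is brought up to date later by the replication process."""
    return sum(write_results) >= quorum(len(write_results))

print(quorum(3))                       # 2: ack after the 2nd replica
print(proxy_ack([True, True, False]))  # True: 3rd replica repaired later
```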
18. Swift architecture
● Replication
○ A process that runs continuously and also checks integrity
19. Swift features
● ACL
○ Free-form, implemented by the auth system middleware
● Healthcheck
○ Simple healthcheck page for load balancers
● Ratelimit
○ Rate-limits requests
● Staticweb
○ Serves index.html in containers
● TempURL
○ Temporary URL generation for objects
● FormPost
○ Translates a browser form post into a regular Swift object PUT
● Domain Remap
○ Pretty URLs with domain-based containers
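A TempURL is just an HMAC-SHA1 signature over the request method, an expiry timestamp, and the object path, appended to the URL as query parameters. A sketch of a client-side generator (the account key and object path here are made up for illustration):

```python
import hmac
from hashlib import sha1
from time import time

def temp_url(path, key, method="GET", ttl=3600, now=None):
    """Build a Swift-style temporary URL query string.
    The signature covers the method, the expiry timestamp, and the
    object path, keyed with the account's TempURL secret."""
    expires = int((now if now is not None else time()) + ttl)
    body = f"{method}\n{expires}\n{path}"
    sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
    return f"{path}?temp_url_sig={sig}&temp_url_expires={expires}"

# Hypothetical account key and object path, for illustration only.
print(temp_url("/v1/AUTH_demo/photos/moon.jpg", "secret-key", now=0))
```

Anyone holding this URL can GET the object until the expiry time, with no auth token required.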
20. Swift features
● Bulk operations
○ Multiple DELETEs, uploads, or even tar.(b|g)z uploads
● Account quotas
○ Give the operator the ability to limit accounts or set them read-only
● Container quotas
○ Allow a user to restrict a public container (e.g. with FormPost)
● Large objects (uploads > 5 GB)
○ Split internally on upload; downloaded as a single assembled object, supporting files of virtually unlimited size
● CORS
○ Upload directly from the browser via JavaScript to Swift
● Versioning
○ Allows versioning of all objects in a container
● Swift3
○ S3-compatible API, though it has since been pulled out of Swift
21. The API
● Bindings for different languages: Python, Ruby, Java…
● Multiple CLI tools: python-swiftclient, jclouds, fog
22. The API
● Swift CLI:
○ delete, download, list, post, stat, upload, capabilities
○ post: updates meta information for the account, container, or object
● Examples of metadata (HTTP headers)
○ X-Account-Access-Control (for ACLs)
○ X-Account-Sysmeta-Global-Write-Ratelimit (for ratelimiting)
○ X-Object-Manifest (for dynamic large objects)
○ X-Versions-Location (for object versioning)
○ X-Container-Sync-* (used internally for container synchronisation)
○ X-Delete-At and X-Delete-After (for object expiration)
○ X-Container-Meta-Access-Control (for CORS)
● Other
○ crossdomain.xml (for cross-domain policies)
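Most of these features are driven purely by headers. Object expiration, for instance, is set with a POST carrying either a relative delay (X-Delete-After, in seconds) or an absolute Unix timestamp (X-Delete-At). A small sketch of building those headers (the token is a placeholder):

```python
def expiration_headers(token, delete_after=None, delete_at=None):
    """Headers for a Swift object POST that sets expiration.
    Exactly one of delete_after (seconds from now) or delete_at
    (absolute Unix epoch) should be given; the cluster removes the
    object once the deadline passes."""
    headers = {"X-Auth-Token": token}
    if delete_after is not None:
        headers["X-Delete-After"] = str(delete_after)
    elif delete_at is not None:
        headers["X-Delete-At"] = str(delete_at)
    return headers

print(expiration_headers("tok123", delete_after=86400))  # expire in 1 day
```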
27. Proxy Servers
● Swift's public face
○ The entry point, and it has a lot of work to do, too
● Determine the appropriate storage nodes
○ By using a logical map
● Coordinate responses
○ Ensure at least two replicas have succeeded in writing the object to disk before confirming to the client
28. The ring
● Used by proxies and replication processes
● Maps requests to storage nodes
● Availability zones
○ Ensure your replicas are placed as far apart as possible
● Regions
○ Support for global clusters and multi-region replication
● Scale out without affecting most entities
○ Only a fraction of the data needs to be moved around
○ Still, it's better to use the weighting system
● Up to you how to synchronise the ring across nodes
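The mapping the ring performs can be sketched as a consistent-hash lookup: hash the object path, keep the top bits as a partition number, then look that partition up in a precomputed partition-to-devices table. This is a toy model of the idea, not Swift's actual ring format; the real ring builder also balances device weights and keeps replicas in separate zones:

```python
from hashlib import md5

PART_POWER = 4                      # 2**4 = 16 partitions (toy value)
DEVICES = ["sdb1", "sdc1", "sdd1", "sde1", "sdf1", "sdg1"]
REPLICAS = 3

def partition(account, container, obj):
    """Hash the object path and keep the top PART_POWER bits."""
    h = md5(f"/{account}/{container}/{obj}".encode()).hexdigest()
    return int(h, 16) >> (128 - PART_POWER)

def devices_for(part):
    """Toy placement: walk the device list starting at the partition
    offset, yielding REPLICAS distinct devices."""
    return [DEVICES[(part + i) % len(DEVICES)] for i in range(REPLICAS)]

part = partition("AUTH_demo", "photos", "moon.jpg")
print(part, devices_for(part))      # same input always maps the same way
```

Because placement is pure computation over the ring, proxies never need a central metadata lookup to find an object.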
30. Account / Container Servers
● Stored using a SQLite database
● Simple schema
○ A table for listings
○ A table for metadata
○ Stats information
● Scaling
○ With high concurrency, SQLite generates a lot of I/O wait; this is when you use 'ratelimit'
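A container database along those lines can be modeled with two tables: one row per object for listings, plus a key/value metadata table. A toy version (Swift's real schema has more columns, e.g. content type, etag, and deletion markers):

```python
import sqlite3

# Toy model of a container DB: one table for the object listing,
# one for container metadata.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE object (name TEXT PRIMARY KEY, size INTEGER, created_at REAL)")
db.execute("CREATE TABLE container_info (key TEXT PRIMARY KEY, value TEXT)")

db.execute("INSERT INTO object VALUES ('moon.jpg', 204800, 1400000000.0)")
db.execute("INSERT INTO container_info VALUES ('X-Container-Meta-Owner', 'demo')")

# A container GET is essentially a listing query:
names = [row[0] for row in db.execute("SELECT name FROM object ORDER BY name")]
print(names)
```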
31. Object Servers
● Use the filesystem to store files
○ The file (object) is dumped on disk 'as is'
● Use 'xattrs' to store metadata
○ On ext4, XFS
● Files named by timestamp
○ The last write always wins
○ A deletion is treated as a new version of the file: a tombstone object
● Directory structure
○ /mount/data_dir/partition/hash_suffix/hash/object.ts
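That directory layout can be derived from the object's hash: the partition picks the top-level directory, the last few hex characters of the hash form the suffix directory, and the file name is the write timestamp, with the extension distinguishing data files from tombstones. An approximate sketch (the mount point and partition size are assumptions, not Swift's exact defaults):

```python
from hashlib import md5

PART_POWER = 4  # toy value; real clusters use a much larger partition space

def disk_path(device, account, container, obj, timestamp, deleted=False):
    """Approximate the object server's on-disk layout:
    /srv/node/<device>/objects/<partition>/<suffix>/<hash>/<timestamp>.<ext>
    A .ts file is a tombstone marking a deletion; .data holds the object."""
    h = md5(f"/{account}/{container}/{obj}".encode()).hexdigest()
    part = int(h, 16) >> (128 - PART_POWER)
    suffix = h[-3:]                       # last hex chars of the hash
    ext = "ts" if deleted else "data"
    return f"/srv/node/{device}/objects/{part}/{suffix}/{h}/{timestamp}.{ext}"

print(disk_path("sdb1", "AUTH_demo", "photos", "moon.jpg", "1400000000.00000"))
```

Because the newest timestamp always wins, writing a tombstone file is enough to make a deletion propagate like any other update.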
32. Replication
● N-factor, configurable; 3 by default
● Asynchronous, peer-to-peer replicator process
○ Traverses the local filesystem to detect changes
○ Performs operations concurrently, balancing load across physical disks
● Push-model system
○ Records and files are generally only copied from local to remote replicas
○ It's the duty of a node holding data to ensure its data gets to where it belongs
○ Replica placement is handled by the ring
33. Replication
● DB replication
○ Hash comparison of DB files
○ Replicates the whole database file using rsync; a new unique id is assigned
● Object replication
○ Uses rsync for transport
○ Syncs only subsets of directories
○ Hash based
○ Bound by the number of uncached directories it has to traverse
34. Middleware
● Standard WSGI
○ A pipeline composed of a succession of middleware, ending with one application
● Usually provided by the proxy
○ But it can be provided by other server roles
● Auth is pluggable via middleware
○ swauth
○ keystone
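The pipeline idea is plain WSGI: each middleware wraps the next callable and the chain ends at the application. A minimal sketch with a healthcheck-style middleware in front of a trivial stand-in app (not Swift's actual middleware code):

```python
def app(environ, start_response):
    """Stand-in for the WSGI application at the end of the pipeline."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"proxied object data"]

def healthcheck(next_app):
    """Middleware: answer /healthcheck itself, pass everything else on."""
    def wrapped(environ, start_response):
        if environ.get("PATH_INFO") == "/healthcheck":
            start_response("200 OK", [("Content-Type", "text/plain")])
            return [b"OK"]
        return next_app(environ, start_response)
    return wrapped

# Pipeline: healthcheck -> app. A real proxy pipeline chains many such
# middleware (auth, ratelimit, staticweb, ...) before the proxy app.
pipeline = healthcheck(app)

def call(path):
    status = []
    body = pipeline({"PATH_INFO": path}, lambda s, h: status.append(s))
    return status[0], b"".join(body)

print(call("/healthcheck"))  # ('200 OK', b'OK')
print(call("/v1/AUTH_demo/photos/moon.jpg"))
```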
35. Swift cost estimation
Amazon S3 in the initial slides: price of $0.085 per GB per month. ROI after 5-6 months.
http://www.slideshare.net/joearnold/7-steps-to-roll-out-a-private-open-stack-swift-cluster-joe-arnold-swiftstack-20120417
36. Swift cost estimation
Amazon S3 in the initial slides: price of $0.085 per GB per month. ROI after barely 9 months.
○ Monthly S3 cost for 145 TB = $10,600 ($8.5k with reduced redundancy)
○ Monthly S3 cost for 1.3 PB = $82,600 ($66k with reduced redundancy)
http://www.slideshare.net/joearnold/7-steps-to-roll-out-a-private-open-stack-swift-cluster-joe-arnold-swiftstack-20120417
37. Connecting to Swift (I)
1. (Example using a ca.enocloud.com account)
2. Download your openrc.sh file
3. Source it (i.e. source marcos.garcia-openrc.sh)
4. Enter your password
5. Run "keystone catalog" to validate the Keystone public URL
6. Recover the object-store public URL (i.e. http://198.154.188.142:8080/v1/AUTH_17698de747ea403283730999605716c9)
7. Use the swift CLI to validate (i.e. swift list)
8. In Cyberduck, set up an 'Openstack Swift (Keystone HTTP)' connection, with tenant:username (i.e. marcos.garcia:marcos.garcia) and password, server ca.enocloud.com and port 5000
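The manual steps above boil down to one Keystone request: POST your credentials to the identity endpoint, then read the object-store public URL out of the returned service catalog. A sketch of the Keystone v2.0 request body, matching the slide's example account (no network call is made here, and the password is a placeholder):

```python
import json

def keystone_v2_auth_body(tenant, username, password):
    """Request body for POST http://<keystone-host>:5000/v2.0/tokens.
    The response's serviceCatalog contains the object-store public URL
    that the swift CLI and Cyberduck then talk to."""
    return json.dumps({
        "auth": {
            "tenantName": tenant,
            "passwordCredentials": {"username": username, "password": password},
        }
    })

# Tenant and username from the slide's example; the password is made up.
body = keystone_v2_auth_body("marcos.garcia", "marcos.garcia", "s3cret")
print(body)
```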