This white paper discusses the EMC Isilon scale-out storage platform that provides object storage by exposing the OpenStack Object Storage API as a set of Representational State Transfer (REST) web services over HTTP.
EMC Isilon Multitenancy for Hadoop Big Data AnalyticsEMC
This white paper discusses the EMC Isilon scale-out storage platform, which provides multitenancy through access zones that segregate tenants and their data sets for a scalable, multitenant storage solution for Hadoop and other analytics applications.
The Enterprise File Fabric for Red Hat Ceph StorageHybrid Cloud
The SME Cloud File Server layers collaboration, synchronization, governance, BYOD, audit, security, encryption, back-up, and migration capabilities on top of Red Hat Ceph to give organizations an enterprise-grade EFSS solution.
Security and Compliance for Scale-Out Hadoop Data LakesEMC
This paper describes how the EMC Isilon scale-out NAS platform protects the confidentiality, availability, and integrity of Hadoop data to help meet compliance regulations.
Big Data: SQL query federation for Hadoop and RDBMS dataCynthia Saracco
Explore query federation capabilities in IBM Big SQL, which enable programmers to transparently join Hadoop data with relational database management system (RDBMS) data.
Hadoop and object stores: Can we do it better?gvernik
Strata Data Conference, London, May 2017
Trent Gray-Donald and Gil Vernik explain the challenges of current Hadoop and Apache Spark integration with object stores and discuss Stocator, an open source (Apache License 2.0) object store connector for Hadoop and Apache Spark specifically designed to optimize their performance with object stores. Trent and Gil describe how Stocator works and share real-life examples and benchmarks that demonstrate how it can greatly improve performance and reduce the quantity of resources used.
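As a sketch of how such a connector is wired in, the Hadoop configuration fragment below registers Stocator for a hypothetical `cos://` scheme. The property names follow the Stocator README, while the service name, endpoint, and credentials are placeholder values; verify everything against the Stocator release you deploy.

```
<!-- core-site.xml fragment (hypothetical values; property names per the
     Stocator README; verify against your Stocator release) -->
<configuration>
  <!-- Tell Hadoop that cos:// URIs are handled by Stocator -->
  <property>
    <name>fs.stocator.scheme.list</name>
    <value>cos</value>
  </property>
  <property>
    <name>fs.cos.impl</name>
    <value>com.ibm.stocator.fs.ObjectStoreFileSystem</value>
  </property>
  <property>
    <name>fs.stocator.cos.impl</name>
    <value>com.ibm.stocator.fs.cos.COSAPIClient</value>
  </property>
  <property>
    <name>fs.stocator.cos.scheme</name>
    <value>cos</value>
  </property>
  <!-- Placeholder endpoint and credentials for a service named "myservice" -->
  <property>
    <name>fs.cos.myservice.endpoint</name>
    <value>https://s3.example-objectstore.net</value>
  </property>
  <property>
    <name>fs.cos.myservice.access.key</name>
    <value>ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.cos.myservice.secret.key</name>
    <value>SECRET_KEY</value>
  </property>
</configuration>
```

With a configuration like this, Spark and Hadoop jobs can read and write paths such as `cos://mybucket.myservice/data.txt`, and Stocator writes task output directly to the object store rather than relying on rename-based commit, which is the source of the performance gains the talk describes.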
A lecture on Apache Spark, the well-known open source cluster computing framework. The course consisted of three parts: a) installing the environment through Docker, b) an introduction to Spark and its advanced features, and c) hands-on training on three (out of five) of its APIs, namely Core, SQL/DataFrames, and MLlib.
Getting started with Hadoop on the Cloud with BluemixNicolas Morales
Silicon Valley Code Camp -- October 11, 2014.
Session: Getting started with Hadoop on the Cloud.
Hadoop and the cloud are an almost perfect marriage. Hadoop is a distributed computing framework that leverages a cluster built on commodity hardware. The cloud simplifies provisioning of machines and software. Getting started with Hadoop on the cloud makes it simple to provision your environment quickly and actually get started using Hadoop. IBM Bluemix has democratized Hadoop for the masses! This session will provide a brief introduction to what Hadoop is and how the cloud works, and will then focus on how to get started via a series of demos. We will conclude with a discussion of the tutorials and public datasets - all of the tools needed to get you started quickly.
Learn more about BigInsights for Hadoop: https://developer.ibm.com/hadoop/
Alluxio Innovations for Structured DataAlluxio, Inc.
Data Orchestration Summit
www.alluxio.io/data-orchestration-summit-2019
November 7, 2019
Alluxio Innovations for Structured Data
Speaker:
Gene Pang, Alluxio
For more Alluxio events: https://www.alluxio.io/events/
Apache Phoenix combines standard SQL and JDBC APIs with the scalability of HBase’s NoSQL data store to provide the best of both worlds.
In the same vein, we looked to combine Apache Phoenix with Apache Ignite as a way to offer in-memory performance with large volumes of data, including support for 3D XPoint memory.
Ignite exposes database caching functionality as a key-value in-memory data grid that plugs seamlessly into a distributed system configuration, with the added support of distributed UDFs on SQL.
All in all, both projects packaged together neatly in a Spring Boot application make for a high-performance, scalable storage solution for users, without straying from the comfort of SQL.
Spring Boot enables an elegant and standards-compliant usage pattern for security, caching, metrics, session management, and many more necessary features.
In this talk we will divulge the internal workings of the project, discuss benchmarks, and showcase a few live demos.
Oracle Cloud is Best for Oracle Database - High AvailabilityMarkus Michalewicz
This presentation looks behind the covers and evaluates the offerings provided by various cloud vendors and compares them to the Oracle Database offerings available in the Oracle Cloud. The comparison includes Oracle Database in general, focusing on High Availability (HA) and Disaster Recovery (DR), as those areas have historically distinguished the Oracle Database from other databases and will likely continue to be some of the most distinguishing features when it comes to operating the Oracle Database in the cloud.
Using your DB2 SQL Skills with Hadoop and SparkCynthia Saracco
Learn about Big SQL, IBM's SQL interface for Apache Hadoop based on DB2's query engine. We'll walk through some code examples and discuss Spark integration for JDBC data sources (DB2 and Big SQL) using examples from a hands-on lab. Explore benchmark results comparing Big SQL and Spark SQL at 100 TB. This presentation was created for the DB2 LUW TRIDEX Users Group meeting in NYC in June 2017.
This white paper discusses the various cyber threats targeting healthcare organizations and the challenges security professionals face in securing access to protected health information.
DIGITALLY MODIFIED CHILDREN - THE NEW CHALLENGE OF THE HUMANITY Dr. Raju M. Mathew
The brains of little children are being conditioned to become 'one-dimensional and linear', like a data structure, rather than 'multi-dimensional and non-linear', like a knowledge structure, and they are losing their minds in the digital age. This will affect their personality and their mental, physical, and emotional growth, sometimes even leading to sexual impotency. The earlier smartness of these children will soon wither.
Over the past two decades, the Big Data stack has reshaped and evolved quickly, with numerous innovations driven by the rise of many different open source projects and communities. In this meetup, speakers from Uber, Alibaba, and Alluxio will share best practices for addressing the challenges and opportunities in developing data architectures using new and emerging open source building blocks. Topics include data format (ORC) optimization, storage security (HDFS), data format (Parquet) layers, and unified data access (Alluxio) layers.
Meetup at AI NextCon 2019: In-Stream data process, Data Orchestration & MoreAlluxio, Inc.
Alluxio - Data Orchestration for Analytics and AI in the Cloud
Oct 8, 2019
Speakers:
Haoyuan Li & Bin Fan, Alluxio
Visit https://www.alluxio.io/events/ for more Alluxio events.
Building Fast SQL Analytics on Anything with Presto, AlluxioAlluxio, Inc.
Alluxio Bay Area Meetup @ Galvanize | SF
Aug 20, 2019
Interactive Analytics in the Cloud with Presto and Alluxio
Speaker:
Bin Fan, Founding Engineer, Alluxio
Getting Started with Apache Spark and Alluxio for Blazingly Fast AnalyticsAlluxio, Inc.
Alluxio Austin Meetup
Aug 15, 2019
Speaker: Bin Fan
Apache Spark and Alluxio are cousin open source projects that originated from UC Berkeley’s AMPLab. Running Spark with Alluxio is a popular stack particularly for hybrid environments. In this session, I will briefly introduce Apache Spark and Alluxio, share the top ten tips for performance tuning for real-world workloads, and demo Alluxio with Spark.
Alluxio 2.0 Deep Dive – Simplifying data access for cloud workloadsAlluxio, Inc.
Alluxio Tech Talk
Aug 7, 2019
Speaker:
Dipti Borkar, Alluxio
Alluxio 2.0 is the most ambitious platform upgrade since the inception of Alluxio with greatly expanded capabilities to empower users to run analytics and AI workloads on private, public or hybrid cloud infrastructures leveraging valuable data wherever it might be stored.
This release, now available for download, includes many advancements that will allow users to push the limits of their data-workloads in the cloud.
In this tech talk, we will introduce the key new features and enhancements such as:
- Support for hyper-scale data workloads with tiered metadata storage, distributed cluster services, and adaptive replication for increased data locality
- Machine learning and deep learning workloads on any storage with the improved POSIX API
- Better storage abstraction with support for HDFS clusters across different versions & active sync with Hadoop
The Enterprise File Fabric for OpenStackHybrid Cloud
The SME Cloud File Server layers collaboration, synchronization, governance, BYOD, audit, security, encryption, back-up and migration capabilities on top of OpenStack to give organizations an enterprise grade EFSS solution.
Unveiling the Evolution: Proprietary Hardware to Agile Software-Defined Solut...MaryJWilliams2
Embark on a captivating journey through the evolution of data center technology. Our webinar delves deep into the transformative shift from traditional proprietary hardware setups to dynamic, software-defined solutions. Join us as we unravel the convergence of compute virtualization, Software-Defined Networking (SDN), and Software-Defined Storage (SDS), reshaping the very foundations of modern data infrastructure. Explore how this revolution is empowering businesses with unparalleled flexibility, scalability, and efficiency, and gain insights into navigating the rapidly evolving landscape of data center architecture. Whether you're a seasoned IT professional or an enthusiast eager to embrace the future of technology, this webinar promises to enlighten and inspire. For more information, visit: https://stonefly.com/white-papers/software-defined-data-center-sddc/#wpcf7-f206423-p263417-o2
Open Cloud Storage @ OpenStack Summit Parisit-novum
These are the original slides from Michael Kienle's talk at the OpenStack Summit in Paris, November 2014, focusing on Open Cloud Storage: building a flexible, large-scale software-defined storage platform for OpenStack.
Liberate Your Files with a Private Cloud Storage Solution powered by Open SourceIsaac Christoffersen
Many of today's enterprises are working under a false assumption that there is a trade-off between consumer-centric file sharing and corporate IT policy compliance. This is because most market-leading SaaS solutions for file sync and share are not designed around enterprise IT's needs. They represent growing risks with vendor lock-in, data security, compliance and data ownership.
With a track record of delivering innovative open source solutions, Vizuri has an answer to help enterprises overcome these hurdles. By leveraging innovative Red Hat and ownCloud open source solutions, this offering helps corporate IT provide a simple-to-use file sync and share solution for employees. As a result, organizations retain greater control over valuable intellectual property.
EMC Isilon Best Practices for Hadoop Data StorageEMC
This white paper describes the best practices for setting up and managing the HDFS service on an Isilon cluster to optimize data storage for Hadoop analytics.
Achieving Separation of Compute and Storage in a Cloud WorldAlluxio, Inc.
Alluxio Tech Talk
Feb 12, 2019
Speaker:
Dipti Borkar, Alluxio
The rise of compute-intensive workloads and the adoption of the cloud have driven organizations to adopt a decoupled architecture for modern workloads – one in which compute scales independently from storage. While this enables elastic scaling, it introduces new problems: how do you co-locate data with compute, how do you unify data across multiple remote clouds, how do you keep storage and I/O service costs down, and many more.
Enter Alluxio, a virtual unified file system that sits between compute and storage and allows you to realize the benefits of a hybrid cloud architecture with the same performance and lower costs.
In this webinar, we will discuss:
- Why leading enterprises are adopting hybrid cloud architectures with compute and storage disaggregated
- The new challenges that this new paradigm introduces
- An introduction to Alluxio and the unified data solution it provides for hybrid environments
INDUSTRY-LEADING TECHNOLOGY FOR LONG TERM RETENTION OF BACKUPS IN THE CLOUDEMC
CloudBoost is a cloud-enabling solution from EMC that facilitates secure, automatic, and efficient data transfer to private and public clouds for Long-Term Retention (LTR) of backups. It seamlessly extends existing data protection solutions to elastic, resilient, scale-out cloud storage.
Transforming Desktop Virtualization with Citrix XenDesktop and EMC XtremIOEMC
With EMC XtremIO all-flash array, improve
1) your competitive agility with real-time analytics & development
2) your infrastructure agility with elastic provisioning for performance & capacity
3) your TCO with 50% lower capex and opex and double the storage lifecycle.
• Citrix & EMC XtremIO: Better Together
• XtremIO Design Fundamentals for VDI
• Citrix XenDesktop & XtremIO
-- Image Management & Storage
-- Demonstrations
-- XtremIO XenDesktop Integration
EMC FORUM RESEARCH GLOBAL RESULTS - 10,451 RESPONSES ACROSS 33 COUNTRIES EMC
Explore findings from the EMC Forum IT Study and learn how cloud computing, social, mobile, and big data megatrends are shaping IT as a business driver globally.
Reference architecture with the Mirantis OpenStack platform. IT is being disrupted by changes in technology, business, and culture; to solve these issues, IT has to move from traditional models to a broker-provider model.
Force Cyber Criminals to Shop Elsewhere
Learn the value of having an Identity Management and Governance solution and how retailers today are benefiting by strengthening their defenses and bolstering their Identity Management capabilities.
Container-based technology has experienced a recent revival and is being adopted at an explosive rate. For those who are new to the conversation, containers offer a way to virtualize an operating system. This virtualization isolates processes, giving each limited visibility and resource utilization, such that the processes appear to be running on separate machines. In short, containers allow more applications to run on a single machine. Here is a brief timeline of key moments in container history.
This white paper provides an overview of EMC's data protection solutions for the data lake - an active repository to manage varied and complex Big Data workloads
This infographic highlights key stats and messages from the analyst report from J.Gold Associates that addresses the growing economic impact of mobile cybercrime and fraud.
This white paper describes how an intelligence-driven governance, risk management, and compliance (GRC) model can create an efficient, collaborative enterprise GRC strategy across IT, Finance, Operations, and Legal areas.
The Trust Paradox: Access Management and Trust in an Insecure AgeEMC
This white paper discusses the results of a CIO UK survey on a “Trust Paradox,” defined as employees and business partners being both the weakest link in an organization’s security and trusted agents in achieving the company’s goals.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps means. We closed with a lovely workshop in which participants explored different ways to think about quality and testing across the DevOps infinity loop.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Welcome to ViralQR, your best QR code generator.ViralQR
Welcome to ViralQR, your best QR code generator available on the market!
At ViralQR, we design static and dynamic QR codes. Our mission is to make business operations easier and customer engagement more powerful through the use of QR technology. Be it a small-scale business or a huge enterprise, our easy-to-use platform provides multiple choices that can be tailored according to your company's branding and marketing strategies.
Our Vision
We are here to make the process of creating QR codes easy and smooth, thus enhancing customer interaction and making business more fluid. We very strongly believe in the ability of QR codes to change the world for businesses in their interaction with customers and are set on making that technology accessible and usable far and wide.
Our Achievements
Ever since its inception, we have successfully served many clients by offering QR codes for their marketing, service delivery, and feedback collection across various industries. Our platform has been recognized for its ease of use and rich features, which help businesses create QR codes.
Our Services
At ViralQR, here is a comprehensive suite of services that caters to your very needs:
Static QR Codes: Create free static QR codes. These QR codes are able to store significant information such as URLs, vCards, plain text, emails and SMS, Wi-Fi credentials, and Bitcoin addresses.
Dynamic QR codes: These also have all the advanced features but are subscription-based. They can directly link to PDF files, images, micro-landing pages, social accounts, review forms, business pages, and applications. In addition, they can be branded with CTAs, frames, patterns, colors, and logos to enhance your branding.
Pricing and Packages
Additionally, ViralQR comes with a 14-day free trial, an exceptional opportunity for new users to get a feel for the platform. From there, one can easily subscribe and experience the full dynamics of using QR codes. The subscription plans are not meant only for large businesses; they are priced flexibly so that virtually any business can afford to benefit from our service.
Why choose us?
ViralQR will provide services for marketing, advertising, catering, retail, and the like. The QR codes can be posted on fliers, packaging, merchandise, and banners, as well as to substitute for cash and cards in a restaurant or coffee shop. With QR codes integrated into your business, improve customer engagement and streamline operations.
Comprehensive Analytics
Subscribers of ViralQR receive detailed analytics and tracking tools in light of having a view of the core values of QR code performance. Our analytics dashboard shows aggregate views and unique views, as well as detailed information about each impression, including time, device, browser, and estimated location by city and country.
So, thank you for choosing ViralQR; we offer nothing but the best in QR code services to meet your business's diverse needs!
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
How world-class product teams are winning in the AI era by CEO and Founder, P...
OpenStack Swift Object Storage on EMC Isilon Scale-Out NAS
EMC WHITE PAPER
OPENSTACK SWIFT OBJECT STORAGE
ON EMC ISILON SCALE-OUT NAS
ABSTRACT
The EMC Isilon scale-out storage platform provides object storage by exposing the
OpenStack Object Storage API as a set of Representational State Transfer (REST) web
services over HTTP. The objects that you store through the Swift API can be accessed
as directories and files through NFS, SMB, and HDFS. The result is a standard method
of securely integrating data-intensive applications with the Isilon storage platform and
then sharing the data with other applications, such as Hadoop and Apache Spark.
November 2014
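The appendix of this paper submits Swift requests with curl; purely as an illustration, the standard-library Python helper below builds the same storage URLs and auth headers. The host name, port, account, and token values are placeholders, not OneFS defaults.

```python
# Minimal sketch of building OpenStack Swift (v1) object-storage requests
# of the kind OneFS exposes. Host, port, account, and token values are
# placeholders; a real deployment supplies them after TempAuth or
# Keystone authentication.
from urllib.parse import quote


def swift_url(host, account, container=None, obj=None, port=8083):
    """Build a Swift storage URL: /v1/<account>[/<container>[/<object>]]."""
    path = "/v1/" + quote(account)
    if container:
        path += "/" + quote(container)
        if obj:
            path += "/" + quote(obj)
    return f"https://{host}:{port}{path}"


def auth_headers(token):
    """Every request after authentication carries the X-Auth-Token header."""
    return {"X-Auth-Token": token}


# Example: the URL and headers a GET-Container request would use.
url = swift_url("isilon.example.com", "AUTH_test", container="logs")
print(url)  # https://isilon.example.com:8083/v1/AUTH_test/logs
print(auth_headers("AUTH_tk0123"))
```

Issuing the request itself is then a plain HTTPS GET of that URL with those headers, which is exactly what the curl examples in the appendix do; because the payload is ordinary HTTP, the same objects remain reachable as files over NFS, SMB, and HDFS.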
TABLE OF CONTENTS
INTRODUCTION........................................................................................5
USE CASES................................................................................................5
SOLVING DATA MANAGEMENT PROBLEMS................................................6
BENEFITS .................................................................................................7
THE ONEFS SWIFT IMPLEMENTATION ......................................................7
Data Protection Overview .................................................................................. 8
Client Libraries ................................................................................................. 8
Supported HTTP Requests.................................................................................. 8
AUTHENTICATION ....................................................................................8
Token Generation ............................................................................................. 9
CONCLUSION............................................................................................9
APPENDIX ..............................................................................................10
Standard TempAuth Authentication ....................................................................10
Libcloud Authentication with Python Code ...........................................................10
Libcloud Authentication with OpenStack Identity Service .......................................10
Libcloud Authentication with the RackSpace Extension ..........................................11
OBJECT STORAGE REQUESTS ..................................................................12
Example Requests Submitted with Curl...............................................................12
GET Account ............................................................................................... 12
GET Container ............................................................................................. 13
GET Object ................................................................................................. 13
PUT Container ............................................................................................. 14
PUT Object.................................................................................................. 14
Copy Object ................................................................................................ 14
POST Account.............................................................................................. 14
POST Container ........................................................................................... 14
POST Object................................................................................................ 15
HEAD Account ............................................................................................. 15
HEAD Container........................................................................................... 15
HEAD Object ............................................................................................... 15
DELETE Container ........................................................................................ 16
DELETE Object ............................................................................................ 16
Paging Through Objects................................................................................ 16
Working with Libcloud ......................................................................................16
Create a Swift Driver.................................................................................... 16
Configure a Swift Driver for OneFS ................................................................. 16
Submitting Example Requests with Libcloud .................................................... 17
INTRODUCTION
The EMC® OneFS® distributed operating system integrates OpenStack Object Storage with the EMC Isilon® scale-out storage
platform. The OpenStack Object Storage project, code named Swift, stores content and metadata as objects by using an application
programming interface. The API furnishes a set of Representational State Transfer (REST) web services over HTTP.
OneFS exposes the Swift API by implementing the OpenStack Object Storage proxy server on every storage node in an Isilon cluster.
The proxy server handles API requests. The Swift API presents the distributed OneFS file system as a set of accounts, containers,
and objects.
Although OneFS flattens the multi-tiered architecture of OpenStack Object Storage into a single scale-out storage cluster, the
containers and objects that you store with the Swift API can be simultaneously accessed as directories and files by using the other
protocols that OneFS supports—NFS, SMB, HTTP, FTP, and HDFS.
This seamless, shared access to Swift containers and objects lets applications interact with all the data stored on an Isilon cluster,
regardless of the protocol that stored the data. You can, for example, store application data as objects through the Swift API,
analyze the data with Hadoop through the OneFS HDFS interface, and then export the results of MapReduce jobs to Microsoft
Windows workstations with SMB.
USE CASES
By implementing OpenStack Object Storage, OneFS provides a standard, cost-effective method of integrating data-intensive
applications with the EMC Isilon scale-out storage platform to securely manage application data with enterprise solutions. Object
storage on an Isilon cluster addresses the following use cases:
• Consolidate storage for applications regardless of protocol.
• Automate data-processing applications to store objects on an Isilon cluster and then analyze the data through the OneFS HDFS
interface.
• Automate the sharing of information by storing data with Swift and then seamlessly access the objects as files with SMB, NFS,
HTTP, FTP, and HDFS.
• Store files with SMB, NFS, and other protocols and then access the files as objects through Swift.
• Provide secure multitenancy for applications while uniformly protecting the data with enterprise security capabilities like
Kerberos authentication, fine-grained access control, and identity management.
• Manage data from Swift applications with enterprise storage features like deduplication, tiering, performance monitoring,
snapshots, and NDMP backups.
• Protect data reliably, efficiently, and cost-effectively with forward error correction instead of inefficient replication.
• Automate the dissemination of data to web sites and mobile devices.
• Support second- and third-platform workloads.
Storing objects instead of files on OneFS improves scalability and performance to handle the velocity and volume of large workloads.
More importantly, Swift empowers you to automate the collection of petabytes of data and store them in an Isilon data lake for later
analysis through other protocols, such as HDFS, SMB, and NFS. OneFS seamlessly interoperates between object and file, as the
following diagram illustrates:
Figure 1. OneFS seamlessly interoperates between object and file.
A Swift connection processes the data as an object, while an HDFS, SMB, or NFS connection processes exactly the same data as a file.
SOLVING DATA MANAGEMENT PROBLEMS
Object storage's place in the Isilon architecture helps you solve an array of problems. The Isilon architecture combines a secure,
scalable multiprotocol data lake with enterprise storage solutions like deduplication to produce a unique storage platform, and the
result is that the platform can help solve unique problems.
You can, for example, give a set of users access to the results of MapReduce jobs by having them connect to the cluster with their
smartphones and access the files with a REST application.
Another use case is to automate applications to retrieve objects that contain data. The applications can then process the data as
business objects, for example, and distribute the information to other applications downstream.
Another Swift use case relates to big data, including metadata. The Swift protocol gives you control over vast amounts of metadata,
the data that describes your objects. You can exploit that control to tap into your metadata for analysis, potentially yielding the
transformational insights that are the promise of big data analytics. For example, after you store your objects and their metadata on
an Isilon cluster with Swift—metadata is stored on Isilon as an alternate data stream—you can automate an application to request
the metadata of the stored objects through the REST API. You can then transform the data into a structure that suits your
objectives.
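The metadata-retrieval step described above amounts to filtering the custom tags out of a HEAD response's headers. Here is a minimal sketch in Python; the `user_metadata` helper and the sample headers are illustrative assumptions, not part of the OneFS API.

```python
# Sketch: pull the user-defined metadata tags out of a Swift HEAD response.
# The helper name is illustrative, not part of OneFS or the Swift API.

def user_metadata(headers, prefix="x-object-meta-"):
    """Return only the custom metadata tags from a response-header mapping."""
    return {k[len(prefix):]: v for k, v in headers.items()
            if k.lower().startswith(prefix)}

# Applied to headers like those of a HEAD Object response:
sample_headers = {
    "X-Object-Meta-Name1": "Value1",
    "X-Object-Meta-Name2": "Value2",
    "Content-Type": "text/html; charset=UTF-8",
}
# user_metadata(sample_headers) -> {"Name1": "Value1", "Name2": "Value2"}
```

An application can apply such a filter across many objects to build the structure that suits its analysis.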
Swift data ingestion and Hadoop analytics are two components of a data lake, a technology strategy that lays the
foundation to transform your organization into an information-driven enterprise:
Figure 2. Swift data ingestion and Hadoop analytics are two components of a data lake technology
strategy.
The Swift protocol, coupled with a scale-out data lake powered by an Isilon cluster, streamlines the collection of vast amounts of
data for analysis. Data stored with Swift can be analyzed in place with Hadoop or Apache Spark through the OneFS HDFS interface.
For more information, see EMC Isilon Scale-Out NAS for In-Place Hadoop Data Analytics.
BENEFITS
With an Isilon cluster, you can point applications that use Swift at the cluster to store your data, saving the time, expense, risk, and
complexity of building and supporting a new storage infrastructure for the data.
Because Isilon bases its Swift protocol on the open OpenStack Object Storage standard, you can use an existing Isilon cluster to store
data from Swift applications without vendor lock-in.
Whether you use a new or an existing Isilon cluster, storing object data on an Isilon cluster produces efficiencies at multiple levels:
• Manage only one storage system to avoid additional operating expenses.
• Store object data more efficiently with forward error correction instead of Swift's replication.
• Tap the excess capacity of an existing Isilon cluster to keep storage costs low.
• Easily manage the object data with enterprise capabilities like security, storage pools, tiering, snapshots, replication, and NDMP.
• Increase the return on investment for your Isilon cluster by supporting object data.
• Eliminate storage silos that undermine the benefits of an enterprise data hub.
• Set up and support object storage with ease.
The OneFS implementation of OpenStack Object Storage disregards georeplication as a use case: You cannot use Swift to distribute
data over geographically dispersed storage sites.
THE ONEFS SWIFT IMPLEMENTATION
OneFS exposes the Swift API by implementing an instance of the OpenStack Object Storage Proxy Server and the OpenStack
Storage Server on every storage node in an Isilon cluster. The distributed OneFS operating system combines both the OpenStack
Proxy Server and the OpenStack Storage Server into a single server that runs on every Isilon node. As a result, client computers that
connect to an Isilon cluster with Swift gain access to the distributed Isilon file system's single volume, ifs, while taking advantage of
the entire cluster's performance.
To work with object storage, you connect to an Isilon cluster with HTTP and then use standard REST calls such as PUT, GET, and
POST to perform API operations. The API presents the home directories as accounts, directories as containers, and files as objects.
All objects have metadata. The API operations that you submit with REST can store and manage containers, objects, and metadata
in the OneFS file system.
Each home directory in the OneFS file system maps to a Swift account. The directories and subdirectories in a home directory map to
containers and subcontainers. Files appear as objects. Since each object has a URL, you can access a file by its URL. A file in a user's
home directory, for example, might look something like this:
/ifs/home/admin/engineering/samplefile.txt
With Swift, you can access the file as an object by using a GET, PUT, or POST operation at the following URL, which contains the
SmartConnect zone of an example cluster:
http://examplezonename:28080/v1/AUTH_admin/engineering/samplefile.txt
OneFS uses Port 28080 for all Swift requests. Each object can be as large as 5 GB, which is the Swift default.
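The path-to-URL mapping described above can be expressed as a small helper. This is an illustrative sketch under the paper's example layout; `swift_url` is a hypothetical name, not part of OneFS.

```python
# Sketch: map a OneFS home-directory path to its Swift object URL.
# The function name and defaults are illustrative, not part of OneFS.

def swift_url(path, zone="examplezonename", port=28080, home_root="/ifs/home"):
    """Translate /ifs/home/<user>/<container...>/<object> into a Swift URL."""
    user, _, rest = path[len(home_root) + 1:].partition("/")
    return f"http://{zone}:{port}/v1/AUTH_{user}/{rest}"

# swift_url("/ifs/home/admin/engineering/samplefile.txt")
# -> "http://examplezonename:28080/v1/AUTH_admin/engineering/samplefile.txt"
```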
By default, OneFS uses a round-robin algorithm to distribute API requests across all the nodes in the cluster. For optimal
performance, you should provision the external network interfaces of the Isilon nodes with dual high-throughput (10GbE) interfaces.
DATA PROTECTION OVERVIEW
The OneFS operating system efficiently and reliably protects data with forward error correction (FEC) codes, which consume less
space than replication but provide better protection. Swift replicates an object three times to protect it and to make it highly
available. Instead of replicating the object, OneFS stripes the object's data across the cluster over its internal InfiniBand network.
FEC is a highly efficient method of reliably protecting data. FEC encodes an object's data in a distributed set of symbols, adding
space-efficient redundancy. With only a part of the symbol set, OneFS can recover the object's data. In an Isilon cluster with five or
more nodes, FEC delivers as much as 80 percent efficiency. As you add nodes to a cluster, the efficiency improves.
Striping data with FEC codes consumes much less storage space than replicating every object three times. With the Isilon data
protection scheme, more than 80 percent of an Isilon cluster’s capacity can be used, bringing efficiency to applications that store
objects with Swift. See High Availability and Data Protection with EMC Isilon Scale-Out NAS.
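A little arithmetic makes the space comparison concrete. The sketch below assumes an illustrative stripe layout of 8 data symbols plus 2 parity symbols; actual OneFS protection levels vary by cluster configuration.

```python
# Sketch: usable-capacity comparison, triple replication vs. FEC striping.
# The 8+2 stripe layout is an illustrative assumption, not a fixed OneFS setting.

def replication_efficiency(copies=3):
    """Fraction of raw capacity holding unique data under replication."""
    return 1 / copies

def fec_efficiency(data_symbols, parity_symbols=2):
    """Fraction of raw capacity holding unique data under FEC striping."""
    return data_symbols / (data_symbols + parity_symbols)

# Triple replication: 1/3, so roughly 33% of raw capacity is usable.
# FEC with 8 data symbols and 2 parity symbols: 8/10 = 80% usable,
# in line with the efficiency figure cited above.
```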
CLIENT LIBRARIES
The OneFS Swift implementation is compatible with the following Swift client applications and APIs:
• The Swift command-line client and the Python-Swift client library; see http://docs.openstack.org/developer/python-swiftclient.
• Apache Libcloud, which is a Python library that supports Swift along with many other different object storage provider APIs
through a unified API; see http://libcloud.readthedocs.org.
SUPPORTED HTTP REQUESTS
Once activated with a license key, the first version of the OneFS Swift service supports the following Swift HTTP requests for use
with these libraries:
• Authentication: use TempAuth or Libcloud.
• GET: retrieve an object or the contents of a container or account.
• PUT: upload an object or create a container.
• DELETE: delete an object or container.
• POST: store metadata for an object, container, or account.
• HEAD: retrieve metadata for an object, container, or account.
• COPY: create a copy of an object.
To obtain a license key, contact your EMC Isilon representative. For more information about supported requests and the capabilities
of the OneFS Swift service, see support.emc.com. For information on the Swift RESTful API, see OpenStack Object Storage API
documentation.
AUTHENTICATION
Authentication for a Swift connection takes place in an access zone—a virtual security context in which OneFS connects to directory
services, authenticates users, and controls access to a segment of the file system. By default, a cluster has a single access zone for
the entire file system. You may create additional access zones to give users from different identity management systems, such as
two untrusted Active Directory domains, access to different OneFS resources by using a destination IP address or SmartConnect zone
name. Access zones provide multitenancy: You can set up a cluster to work with multiple identity management systems, Swift
namespaces, SMB namespaces, and HDFS namespaces.
The main purpose of an access zone is to define a list of identity management systems that apply only in the context of the zone. A
key use is therefore consolidating data sets from different storage silos into a single storage system while continuing to expose each
data set through a unique root directory and restricting access to a defined group of users.
When a Swift user submits an authentication request to the cluster, OneFS checks the directory services to which the user’s access
zone is connected for an account for the user. If OneFS finds an account that matches the user’s login name, OneFS authenticates
the user.
During authentication, OneFS creates an access token for a Swift user in the same way that OneFS creates tokens for users who
connect with other protocols. The token contains the user’s full identity, including group memberships, and OneFS uses the token
later to check access to directories and files. The OneFS Swift implementation will not, however, create a home directory for
a user, even if a home directory has been specified in, for example, Active Directory. Before a user connects with Swift, you must
explicitly create a home directory for the user in the ifs global directory with a method other than the Swift protocol.
For more information on how OneFS authenticates connections, creates access tokens, and authorizes access to resources with
access control lists, see OneFS Multiprotocol Security Untangled. For more information on multitenancy, see EMC Isilon Multitenancy
for Hadoop Big Data Analytics.
With OneFS, you can submit a Swift authentication request in three ways:
• The standard Swift format, which is used by the OpenStack TempAuth and Swauth modules.
• The OpenStack Identity Service, which libcloud uses as its primary authentication method.
• The Rackspace extension to the OpenStack Identity Service.
The appendix includes examples of how to authenticate by using the standard format and libcloud.
TOKEN GENERATION
When a user authenticates successfully, OneFS generates a unique authentication token for the user. A string of 32 hex characters
prefixed with AUTH_tk, the token lets a user perform Swift requests without providing a username and password each time. Instead,
the token appears in the X-Auth-Token or X-Storage-Token fields of the header. By default, a token lasts for 24 hours. After it
expires, a new token must be obtained by submitting an authentication request. A token works only in the OneFS access zone in
which the authentication request that generated it took place.
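Handling the token programmatically reduces to reading two response headers and replaying one of them on later requests. A minimal sketch; the helper names are illustrative, and the sample values come from the TempAuth example in the appendix.

```python
# Sketch: extract and reuse a OneFS Swift authentication token.
# Helper names are illustrative; sample values come from this paper's appendix.

def parse_token(headers):
    """Pull the token and storage URL out of a TempAuth response's headers."""
    return headers["X-Auth-Token"], headers["X-Storage-Url"]

def authed_headers(token):
    """Build the header block for a subsequent object storage request."""
    return {"X-Auth-Token": token}

# Example, using the sample TempAuth response from the appendix:
sample = {
    "X-Auth-Token": "AUTH_tk9b2f1d4d640b31fee0b3f6d644aab52e",
    "X-Storage-Url": "http://examplezonename:28080/v1/AUTH_admin",
}
token, storage_url = parse_token(sample)
# authed_headers(token) attaches the token to every request until it expires.
```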
CONCLUSION
The EMC Isilon OneFS operating system exposes the Swift API by implementing an instance of the OpenStack Object Storage Proxy
Server and the OpenStack Storage Server on every storage node in an Isilon cluster. Client computers that connect to an Isilon
cluster with Swift gain access to the distributed Isilon file system's single volume as well as data stored by other protocols, including
NFS, SMB, and HDFS. The result is a standard method of securely integrating data-intensive applications with the Isilon scale-out
storage platform and exchanging data with other applications.
APPENDIX
STANDARD TEMPAUTH AUTHENTICATION
You can authenticate a user with the TempAuth method by submitting a GET request in the following format:
http://examplezonename:28080/auth/v1.0
The request provides the username and password for the authentication request in the HTTP header in the following format:
X-Auth-User: <account>:<username> X-Auth-Key: <password>
OneFS ignores the value of the account field. It can be set to any value, but the field must include a value to maintain compatibility
with the OpenStack Object Storage standard. The standard uses the account field to support multitenancy, but because OneFS
provides multitenancy with access zones, the value is irrelevant.
The following example demonstrates how to authenticate with TempAuth by using curl, a command-line utility for transferring data
with URL syntax (see the curl web site at http://curl.haxx.se/). The first instance of 'admin' in 'X-Auth-User: admin:admin' is
where the client specifies the account name—the field that OneFS ignores. The username is 'admin' and the password is 'test'.
curl -H "X-Auth-User: admin:admin" -H "X-Auth-Key: test"
-v "http://examplezonename:28080/auth/v1.0" -X GET
A successful response looks like the following example. The response includes an access token and a storage access token and URL,
which the client uses for subsequent requests.
HTTP/1.1 200 OK
Content-Length: 101
Content-Type: application/json; charset=utf-8
Date: Thu, 19 Jun 2014 09:21:34 PDT
X-Auth-Token: AUTH_tk9b2f1d4d640b31fee0b3f6d644aab52e
X-Storage-Token: AUTH_tk9b2f1d4d640b31fee0b3f6d644aab52e
X-Storage-Url: http://examplezonename:28080/v1/AUTH_admin
X-Trans-Id: tx070e772fe79c4d948a016-0053a30e0e
{
"storage": {
"cluster_name": "http://examplezonename:28080/v1/AUTH_admin",
"default": "cluster_name"
}
}
LIBCLOUD AUTHENTICATION WITH PYTHON CODE
Here is an example that demonstrates how to authenticate with Libcloud by using code written in Python:
from libcloud.storage.types import Provider
from libcloud.storage.providers import get_driver
import sys
ip = "192.0.2.250"
cls = get_driver(Provider.OPENSTACK_SWIFT)
my_username = "admin"
my_password = "testpwd"
driver = cls(my_username, my_password, region='ISILON',
ex_force_auth_url='http://'+ip+':28080',
ex_force_base_url='http://'+ip+':28080/v1/AUTH_ISILON',
ex_force_auth_version='2.0',
ex_force_service_type='object-store',
ex_force_service_name='swift')
LIBCLOUD AUTHENTICATION WITH OPENSTACK IDENTITY SERVICE
You can authenticate a user with the OpenStack Identity Service by submitting a request with Curl in the following format:
curl -H "Content-Type: application/json; charset=utf-8" -v http://<node-ip>:28080/v2.0/tokens -X POST -d
'{"auth":{"tenantName":"<any-value-works-here>", "passwordCredentials":{"username": "<username>",
"password":"<password>"}}}'
Here is an example that replaces the variables with sample values:
curl -H "Content-Type: application/json; charset=utf-8" -v http://examplezonename:28080/v2.0/tokens -X POST -d
'{"auth":{"tenantName":"isilon", "passwordCredentials":{"username": "myusername", "password":"mypassword"}}}'
The response looks like this:
HTTP/1.1 200 OK
Content-Length: 467
Content-Type: text/html; charset=UTF-8
Date: Mon, 18 Aug 2014 13:13:54 PDT
X-Trans-Id: txdc10f3db5b1444a6a4521-0053f25e82
* Connection #0 to host examplezonename left intact
{ "access":
{ "token":
{
"expires":"1600-12-31 16:02:20",
"id":"AUTH_tk71e9c58f3e432da1adf3ba70dc236dd8",
"tenant":{ "id":"10", "name":"isilon" } },
"serviceCatalog":
[
{ "endpoints":
[
{
"region":"ISILON", "internalURL":"http://192.0.2.250/v1/AUTH_isilon",
"publicURL":"http://examplezonename:28080/v1/AUTH_isilon"
}
],
"type":"object-store", "name":"swift"
}
],
"user":
{
"id":"10",
"roles":[ { "tenantId":"10", "id":"0", "name":"object-store:myusername" } ],
"name":"mypassword"
}
}
}
LIBCLOUD AUTHENTICATION WITH THE RACKSPACE EXTENSION
You can authenticate a user with the Rackspace method by submitting a POST request in the following format:
http://examplezonename:28080/v2.0/tokens
With the Rackspace method, you format the username and password in JSON and set it in the content section of the HTTP request so
it looks like this:
{
"auth":
{
"RAX-KSKEY:apiKeyCredentials":
{
"username": "<username>",
"apiKey": "<password>"
}
}
}
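Building that JSON body programmatically is straightforward. A sketch in Python; the `rax_auth_body` helper is a hypothetical name, not part of any Rackspace or OneFS library.

```python
import json

def rax_auth_body(username, password):
    """Serialize Rackspace-style apiKeyCredentials for the token request."""
    return json.dumps({
        "auth": {
            "RAX-KSKEY:apiKeyCredentials": {
                "username": username,
                "apiKey": password,
            }
        }
    })

# rax_auth_body("admin", "test") produces a payload with the structure
# shown above, ready to POST to /v2.0/tokens.
```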
Here is an example of how to submit an authentication request using the Rackspace method. The username is 'admin' and the
password is 'test'.
curl -H "Content-Type: application/json; charset=UTF-8"
-v "http://examplezonename:28080/v2.0/tokens" -X POST
-d '{"auth": {"RAX-KSKEY:apiKeyCredentials":
{"username": "admin", "apiKey": "test"}}}'
A successful response looks like the following example. The response includes an access token and a storage access URL, which the
client uses for subsequent requests.
HTTP/1.1 200 OK
Content-Length: 467
Content-Type: text/html; charset=UTF-8
Date: Thu, 19 Jun 2014 09:33:21 PDT
X-Trans-Id: tx3cfed971658f46eeb33c4-0053a310d1
{
"access": {
"serviceCatalog": [
{
"endpoints": [
{
"internalURL": "http://examplezonename/v1/AUTH_isilon",
"publicURL": "http://examplezonename:28080/v1/AUTH_isilon",
"region": "ISILON"
}
],
"name": "swift",
"type": "object-store"
}
],
"token": {
"expires": "1600-12-31 16:02:20",
"id": "AUTH_tk9b2f1d4d640b31fee0b3f6d644aab52e",
"tenant": {
"id": "10",
"name": "isilon"
}
},
"user": {
"id": "10",
"name": "admin",
"roles": [
{
"id": "0",
"name": "object-store:admin",
"tenantId": "10"
}
]
}
}
}
OBJECT STORAGE REQUESTS
After you acquire a token, you can submit object storage requests by using REST over HTTP. The syntax of the requests is described
in the OpenStack Object Storage API Reference and other documents at http://docs.openstack.org/.
EXAMPLE REQUESTS SUBMITTED WITH CURL
The following examples submit requests with curl from the command line of a client computer. The examples use a SmartConnect
zone name of examplezonename for all requests.
GET Account
The following example submits a GET request to obtain account information:
curl -H "X-Auth-Token: AUTH_tk9b2f1d4d640b31fee0b3f6d644aab52e" -v
"http://examplezonename:28080/v1/AUTH_admin?format=json" -X GET
A successful response looks like this:
HTTP/1.1 200 OK
Content-Length: 84
Content-Type: application/json; charset=utf-8
Date: Thu, 19 Jun 2014 14:54:32 PDT
Last-Modified: 2014-06-19 14:54:29
X-Account-Bytes-Used: 13
X-Account-Container-Count: 2
X-Account-Object-Count: 2
X-Timestamp: 130476887138427115
X-Trans-Id: tx83b138929e134d24a4cd7-0053a35d1b
[
{
"count": 1,
"bytes": 4,
"name": "container"
},
{
"count": 1,
"bytes": 9,
"name": "container2"
}
]
GET Container
The following example submits a GET request to obtain information about a container:
curl -H "X-Auth-Token: AUTH_tk9b2f1d4d640b31fee0b3f6d644aab52e" -v
"http://examplezonename:28080/v1/AUTH_admin/container2?format=json" -X GET
A successful response looks like this:
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 217
Content-Type: application/json; charset=utf-8
Date: Thu, 19 Jun 2014 14:59:44 PDT
Last-Modified: 2014-06-19 14:58:48
X-Container-Bytes-Used: 9
X-Container-Object-Count: 1
X-Timestamp: 130476887284112371
X-Trans-Id: tx169d4d5da58647dea7ef0-0053a35d50
[
{
"hash": "ef614d88b44d9b768ca06befb206cf3c",
"last_modified": "2014-06-19 14:58:48",
"bytes": "9",
"name": "obj2.txt",
"content_type": "application/octet-stream"
}
]
GET Object
The following example submits a GET request to obtain information about an object:
curl -H "X-Auth-Token: AUTH_tk9b2f1d4d640b31fee0b3f6d644aab52e" -v
"http://examplezonename:28080/v1/AUTH_admin/container2/obj2.txt" -X GET
A successful response looks like this:
HTTP/1.1 200 OK
Content-Length: 9
Content-Type: application/octet-stream
Date: Thu, 19 Jun 2014 15:01:13 PDT
ETag: ef614d88b44d9b768ca06befb206cf3c
Last-Modified: 2014-06-19 14:58:48
X-Timestamp: 130476887284112371
X-Trans-Id: tx2572012006224369ae08f-0053a35da9
obj2 data
PUT Container
The following example submits a PUT request for a container:
curl -H "X-Auth-Token: AUTH_tk9b2f1d4d640b31fee0b3f6d644aab52e" -v
"http://examplezonename:28080/v1/AUTH_admin/container3" -X PUT
A successful response looks like this:
HTTP/1.1 201 Created
Content-Length: 0
Content-Type: text/html; charset=UTF-8
Date: Fri, 20 Jun 2014 10:54:50 PDT
X-Trans-Id: tx0ceffc19132b45d1ba91d-0053a4756a
PUT Object
The following example submits a PUT request for an object:
curl -H "X-Auth-Token: AUTH_tk9b2f1d4d640b31fee0b3f6d644aab52e" -v
"http://examplezonename:28080/v1/AUTH_admin/container2/obj2.txt" -X PUT -d "obj content"
A successful response looks like this:
HTTP/1.1 201 Created
Content-Length: 0
Content-Type: text/html; charset=UTF-8
Date: Fri, 20 Jun 2014 10:47:52 PDT
ETag: e3cf5b79108fc1cc822bd0f9b5b67cef
X-Trans-Id: tx5e366ec5f04041f9b8500-0053a473c8
Copy Object
The following example submits a COPY request to create a server-side copy of container2/obj2 and name it obj2_copy.txt in
container/sub-container:
curl -H "X-Auth-Token: AUTH_tk9b2f1d4d640b31fee0b3f6d644aab52e" -v
"http://examplezonename:28080/v1/AUTH_admin/container2/obj2.txt" -X COPY -H "Destination: container/sub-
container/obj2_copy.txt"
A successful response looks like this:
HTTP/1.1 201 Created
Content-Length: 0
Content-Type: text/html; charset=UTF-8
Date: Fri, 20 Jun 2014 11:38:18 PDT
X-Trans-Id: tx5c765fa5e1e64f5bb6864-0053a47f9a
POST Account
The following example submits a POST request to set the custom metadata tags of an account to Value1 and Value2:
curl -H "X-Auth-Token: AUTH_tk9b2f1d4d640b31fee0b3f6d644aab52e" -v
"http://examplezonename:28080/v1/AUTH_admin" -X POST -H "X-Account-Meta-Name1: Value1" -H "X-Account-Meta-Name2:
Value2"
A successful response looks like this:
HTTP/1.1 204 No Content
Content-Length: 0
Content-Type: text/html; charset=UTF-8
Date: Fri, 20 Jun 2014 11:07:42 PDT
X-Trans-Id: txa819a723b6f449d593966-0053a4786e
POST Container
The following example submits a POST request to set the custom metadata tags of a container to Value1 and Value2:
curl -H "X-Auth-Token: AUTH_tk9b2f1d4d640b31fee0b3f6d644aab52e" -v
"http://examplezonename:28080/v1/AUTH_admin/container3" -X POST -H "X-Container-Meta-Name1: Value1"
-H "X-Container-Meta-Name2: Value2"
A successful response looks like this:
HTTP/1.1 204 No Content
Content-Length: 0
Content-Type: text/html; charset=UTF-8
Date: Fri, 20 Jun 2014 11:09:56 PDT
X-Trans-Id: tx1250a35c0ac14f2480612-0053a478f4
POST Object
The following example submits a POST request to set the custom metadata tags of an object to Value1 and Value2:
curl -H "X-Auth-Token: AUTH_tk9b2f1d4d640b31fee0b3f6d644aab52e" -v
"http://examplezonename:28080/v1/AUTH_admin/container2/obj2.txt" -X POST -H "X-Object-Meta-Name1: Value1"
-H "X-Object-Meta-Name2: Value2"
A successful response looks like this:
HTTP/1.1 204 No Content
Content-Length: 0
Content-Type: text/html; charset=UTF-8
Date: Fri, 20 Jun 2014 11:12:36 PDT
X-Trans-Id: tx7205bce69fe547508c176-0053a47994
HEAD Account
The following example submits a HEAD request to retrieve account statistics, metadata, and the values of the custom metadata tags:
curl -H "X-Auth-Token: AUTH_tk9b2f1d4d640b31fee0b3f6d644aab52e" -v
"http://examplezonename:28080/v1/AUTH_admin" -X HEAD
A successful response looks like this:
HTTP/1.1 204 No Content
X-Account-Meta-Name1: Value1
X-Account-Meta-Name2: Value2
Content-Length: 0
Content-Type: text/html; charset=UTF-8
Date: Fri, 20 Jun 2014 11:16:23 PDT
Last-Modified: 2014-06-20 11:07:42
X-Account-Bytes-Used: 15
X-Account-Container-Count: 3
X-Account-Object-Count: 2
X-Timestamp: 130477604900823291
X-Trans-Id: txe0e5f861dca641268ce4e-0053a47a76
HEAD Container
The following example submits a HEAD request to retrieve a container's metadata:
curl -H "X-Auth-Token: AUTH_tk9b2f1d4d640b31fee0b3f6d644aab52e" -v
"http://examplezonename:28080/v1/AUTH_admin/container2" -X HEAD
A successful response looks like this:
HTTP/1.1 204 No Content
X-Container-Meta-Name1: Value1
X-Container-Meta-Name2: Value2
Accept-Ranges: bytes
Content-Length: 0
Content-Type: text/html; charset=UTF-8
Date: Fri, 20 Jun 2014 11:14:40 PDT
Last-Modified: 2014-06-20 11:09:56
X-Container-Bytes-Used: 0
X-Container-Object-Count: 0
X-Timestamp: 130477613968019069
X-Trans-Id: tx56d23577d9e04dad95d73-0053a47a10
HEAD Object
The following example submits a HEAD request to retrieve an object’s metadata, including its etag:
curl -H "X-Auth-Token: AUTH_tk9b2f1d4d640b31fee0b3f6d644aab52e" -v
"http://examplezonename:28080/v1/AUTH_admin/container2/obj2.txt" -X HEAD
A successful response looks like this:
HTTP/1.1 204 No Content
X-Object-Meta-Name1: Value1
X-Object-Meta-Name2: Value2
Content-Length: 0
Content-Type: text/html; charset=UTF-8
Date: Fri, 20 Jun 2014 11:13:57 PDT
ETag: e3cf5b79108fc1cc822bd0f9b5b67cef
Last-Modified: 2014-06-20 11:12:36
X-Timestamp: 130477615561086205
X-Trans-Id: tx4951a255ec3e43e5b6c09-0053a479e5
DELETE Container
The following example submits a DELETE request to remove a container:
curl -H "X-Auth-Token: AUTH_tk9b2f1d4d640b31fee0b3f6d644aab52e" -v
"http://examplezonename:28080/v1/AUTH_admin/container3" -X DELETE
A successful response looks like this:
HTTP/1.1 204 No Content
Content-Length: 0
Content-Type: text/html; charset=UTF-8
Date: Fri, 20 Jun 2014 11:34:26 PDT
X-Trans-Id: tx04f0072a9573448ea0e52-0053a47eb2
DELETE Object
The following example submits a DELETE request to remove an object:
curl -H "X-Auth-Token: AUTH_tk9b2f1d4d640b31fee0b3f6d644aab52e" -v
"http://examplezonename:28080/v1/AUTH_admin/container2/obj2.txt" -X DELETE
A successful response looks like this:
HTTP/1.1 204 No Content
Content-Length: 0
Content-Type: text/html; charset=UTF-8
Date: Fri, 20 Jun 2014 11:33:35 PDT
X-Trans-Id: txbfe6a6b594904ad1a55fd-0053a47e7f
Paging Through Objects
You can page through a large list of objects in a container or a large list of containers in an account by using the limit and marker
URL parameters. For example, say there are five objects in a container named small_container and the names of the objects are
obj1, obj2, obj3, obj4, and obj5. To page through the items in blocks of two items at a time, you can use the following command:
curl -H "X-Auth-Token: <token>" "http://<ip>:28080/v1/<account>/small_container?limit=2" -X GET
This command returns obj1 and obj2. To get obj3 and obj4, you can run this command:
curl -H "X-Auth-Token: <token>" "http://<ip>:28080/v1/<account>/small_container?limit=2&marker=obj2" -X GET
And then you can get obj5 like this:
curl -H "X-Auth-Token: <token>" "http://<ip>:28080/v1/<account>/small_container?limit=2&marker=obj4" -X GET
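The limit/marker pattern generalizes to a loop that keeps requesting pages until a short page signals the end of the listing. A sketch; `fetch_page` is an injected stand-in for whatever HTTP call you use (such as the curl requests above), so the loop itself is protocol-agnostic.

```python
# Sketch: page through all objects in a container with limit/marker.
# fetch_page is injected; it would wrap a GET like the curl calls above.

def iter_objects(fetch_page, limit=2):
    """Yield object names page by page until a short page ends the listing."""
    marker = None
    while True:
        page = fetch_page(limit=limit, marker=marker)
        yield from page
        if len(page) < limit:
            return
        marker = page[-1]

# Example with an in-memory stand-in for the small_container listing:
names = ["obj1", "obj2", "obj3", "obj4", "obj5"]

def fake_fetch(limit, marker):
    start = 0 if marker is None else names.index(marker) + 1
    return names[start:start + limit]

# list(iter_objects(fake_fetch)) -> ["obj1", "obj2", "obj3", "obj4", "obj5"]
```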
Working with Libcloud
The OneFS ObjectStorage API accepts requests from libcloud. For more information on libcloud, see the libcloud reference
documentation.
Create a Swift Driver
Here is an example of how to create a Swift driver so that you can submit requests with libcloud:
from libcloud.storage.types import Provider
from libcloud.storage.providers import get_driver
import sys
cls = get_driver(Provider.OPENSTACK_SWIFT)
Configure a Swift Driver for OneFS
Here is an example of how to configure a driver. The username is 'admin' and the password is 'test':
driver = cls('admin', 'test', region='ISILON',
ex_force_auth_url='http://examplezonename:28080',
ex_force_base_url='http://examplezonename:28080/v1/AUTH_ISILON',
ex_force_service_type='object-store',
ex_force_service_name='swift')
Another method of configuring a driver is to use a connection endpoint URL instead of a base URL:
driver = cls('admin', 'test', region='ISILON',
ex_force_auth_url='http://examplezonename:28080',
ex_force_service_type='object-store',
ex_force_service_name='swift')
driver.connection.endpoint_url = "publicURL"
Submitting Example Requests with Libcloud
Get a list of containers in the account:
containers = driver.list_containers()
Get a list of objects in the first container:
objs = driver.list_container_objects(containers[0])
Download the first object to /tmp/test.txt:
driver.download_object(objs[0], "/tmp/test.txt")
Upload /tmp/test.txt to the first container and name it libcloud.txt:
driver.upload_object("/tmp/test.txt", containers[0], "libcloud.txt")
Delete the first object:
driver.delete_object(objs[0])
Create a new container named libcloud_container:
driver.create_container("libcloud_container")
Iterate through containers and delete the one named libcloud_container:
for container in driver.iterate_containers():
    if container.name == "libcloud_container":
        driver.delete_container(container)
        break