According to Gartner, big data will drive $232 billion in IT spending through 2016. For organizations, the benefits of adding big data to their information management and analytics infrastructure will force a more rapid cycle of replacing existing solutions.
Learn more about:
• Provisioning a data-intensive application cluster (Hadoop or Spark) on top of OpenStack.
• Building an architecture that combines the Hadoop and OpenStack ecosystems.
• Building an OpenStack cloud and implementing big data architectures, with their benefits compared to other architectures.
1. We manage learning.
“Building an Innovative Learning Organization. A Framework to Build a Smarter
Workforce, Adapt to Change, and Drive Growth”. Download now!
“Power of OpenStack and Hadoop”
5. What is OpenStack?
OpenStack is a collection of open source
projects that provides an operating platform
for orchestrating clouds at massive scale.
6. OpenStack History
"Founded by Rackspace Hosting and NASA, OpenStack has grown to be a global software
community of developers collaborating on a standard and massively scalable open source
cloud operating system."
"All of the code for OpenStack is freely available under the Apache 2.0 license. Anyone can
run it, build on it, or submit changes back to the project."
8. OpenStack Services
• Nova - Compute Service
• Swift - Storage Service
• Glance - Imaging Service
• Keystone - Identity Service
• Horizon - UI Service
34. Compute (Nova)
• Nova is the Computing Fabric controller for the OpenStack Cloud.
• All activities needed to support the life cycle of instances within the OpenStack cloud are handled
by Nova.
• This makes Nova a Management Platform that manages compute resources, networking,
authorization, and scalability needs of the OpenStack cloud.
• But Nova does not provide any virtualization capabilities by itself; instead, it uses the libvirt API to
interact with supported hypervisors.
• Nova exposes all its capabilities through a web services API that is compatible with the EC2 API of
Amazon Web Services.
35. Nova: Functions and Features
• Instance life cycle management
• Management of compute resources
• Networking and authorization
• REST-based API
• Asynchronous, eventually consistent communication
• Hypervisor agnostic: support for Xen, XenServer/XCP, KVM, UML, VMware vSphere and Hyper-V
36. Components of OpenStack Compute
• API Server (nova-api)
• Message Queue (rabbitmq server)
• Compute Workers (nova-compute)
• Network Controller (nova-network)
• Volume Worker (nova-volume)
• Scheduler (nova-scheduler)
37. API Server (nova-api)
• The API Server provides an interface for the outside world to interact with the cloud infrastructure.
• The API Server is the only component the outside world uses to manage the infrastructure.
• Management is done through web service calls using the EC2 API.
• The API Server in turn communicates with the relevant components of the cloud infrastructure through the message queue.
• As an alternative to the EC2 API, OpenStack also provides a native API called the "OpenStack API".
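As an illustration (not part of the original deck), here is a minimal sketch of the JSON body a client would POST to the native OpenStack Compute API's /servers endpoint through nova-api; the server name and the image and flavor IDs are placeholders:

```python
import json

def build_server_create_request(name, image_ref, flavor_ref):
    """Build the JSON body for a server-create call against the
    native OpenStack Compute API (POST /servers)."""
    return {"server": {"name": name,
                       "imageRef": image_ref,
                       "flavorRef": flavor_ref}}

# Hypothetical IDs; in practice they come from Glance and the flavor list.
body = build_server_create_request("web-1", "img-1234", "flv-small")
payload = json.dumps(body)
```

The API server validates such a request and then hands the work to the other components over the message queue.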
38. Message Queue (RabbitMQ Server)
• Components of OpenStack communicate among themselves through the message queue using AMQP.
• Nova uses asynchronous calls for request-response, with a callback that gets triggered once a response is received.
• Because communication is asynchronous, no user action gets stuck waiting for long.
• This approach is effective because many API calls, such as launching an instance or uploading an image, are long-running.
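The asynchronous request-response pattern described above can be sketched with plain Python queues and threads standing in for RabbitMQ; this is an illustrative simulation, not Nova's actual RPC code:

```python
import queue
import threading

request_q = queue.Queue()

def worker():
    # Simulates a Nova component consuming requests from the message
    # queue and sending each response back on a per-request reply queue.
    while True:
        action, reply_q = request_q.get()
        reply_q.put(f"{action}: done")

threading.Thread(target=worker, daemon=True).start()

def submit(action, callback):
    # The caller returns immediately; the callback fires when the
    # response arrives, so no user action blocks in a wait state.
    reply_q = queue.Queue()
    request_q.put((action, reply_q))
    threading.Thread(target=lambda: callback(reply_q.get()),
                     daemon=True).start()

results = []
done = threading.Event()
submit("launch-instance", lambda r: (results.append(r), done.set()))
done.wait(timeout=5)
```

The point of the pattern is the non-blocking `submit`: the long-running work happens on the other side of the queue.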
39. Compute Worker (nova-compute)
• Compute workers deal with the instance management life cycle.
• They receive requests for instance life cycle management via the message queue and carry out the operations.
• A typical production environment has many compute workers.
• An instance is deployed on one of the available compute workers based on the scheduling algorithm in use.
40. Network Controller (nova-network)
• It deals with the network configuration of host machines.
• It performs operations such as allocating IP addresses, configuring VLANs for projects, implementing security groups and configuring networks for compute nodes.
41. Volume Workers (nova-volume)
• Volume workers are used for the management of LVM-based volumes.
• They perform volume-related functions such as creation, deletion, attaching a volume to an instance, and detaching a volume from an instance.
• Volumes provide persistent storage for instances, since the root partition is non-persistent and any changes made are lost when an instance is terminated.
42. Scheduler (nova-scheduler)
• The scheduler maps nova-api calls to the appropriate OpenStack components.
• It runs as a daemon named nova-scheduler and picks a compute server from a pool of available resources depending on the scheduling algorithm in place.
• A scheduler can base its decisions on various factors such as load, memory, physical distance, availability zone, CPU architecture, etc.
• The Nova scheduler implements a pluggable architecture.
• The built-in scheduling algorithms are based on chance, availability zone and load.
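A toy version of the decision the scheduler makes can clarify the idea: filter out hosts that cannot fit the instance, then weigh the survivors. The host attributes and the least-load policy below are simplifications, not nova-scheduler's real drivers:

```python
def schedule(hosts, required_ram_mb):
    """Pick a compute host: filter by free RAM, then choose the
    least-loaded candidate (a simplified filter/weigh scheduler)."""
    candidates = [h for h in hosts if h["free_ram_mb"] >= required_ram_mb]
    if not candidates:
        raise RuntimeError("no valid host found")
    return min(candidates, key=lambda h: h["load"])["name"]

# Hypothetical host pool.
hosts = [
    {"name": "node1", "free_ram_mb": 2048, "load": 0.7},
    {"name": "node2", "free_ram_mb": 8192, "load": 0.2},
    {"name": "node3", "free_ram_mb": 1024, "load": 0.1},
]
chosen = schedule(hosts, required_ram_mb=2048)  # node3 is filtered out
```

The pluggable architecture means this filter/weigh logic can be swapped for chance- or zone-based policies without touching the rest of Nova.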
43. OpenStack Imaging Service (Glance)
• The OpenStack Imaging Service is a lookup and retrieval system for virtual machine images.
• Glance can be configured to use any of the following storage backends:
• Local filesystem
• OpenStack Object Store
• S3 storage directly
• S3 storage with the Object Store as the implementation for S3 access
• HTTP (read-only)
• Glance has two components: Glance-control and Glance-registry.
44. OpenStack Storage Infrastructure (Swift)
• Swift provides a distributed, eventually consistent virtual object store for OpenStack.
• It is similar to Amazon's S3.
• Swift is capable of storing billions of objects distributed across nodes.
• Swift has built-in redundancy and failover management and is capable of archiving and media streaming.
• Swift is extremely scalable in terms of both size (several petabytes) and capacity (number of objects).
45. OpenStack Storage Infrastructure (Swift): Features
• Storage of a large number of objects
• Storage of large-sized objects
• Data redundancy
• Archival capabilities: work with large datasets
• Data container for virtual machines and cloud apps
• Media streaming capabilities
• Secure storage of objects
• Backup and archival
• Extreme scalability
46. Components of Swift
• Swift Account Server
• Swift Container Server
• Swift Object Server
• Swift Proxy Server
• The Ring
47. Swift Proxy Server
• Consumers interact with the Swift setup through the proxy server using the Swift API.
• The proxy server acts as a gatekeeper and receives requests from the outside world.
• It looks up the location of the appropriate entities and routes the requests to them.
• The proxy server also handles failures of entities by rerouting requests to failover (handoff) entities.
48. Swift Object Server
• The object store is a blob store.
• Its responsibility is to handle storage, retrieval and deletion of objects stored in local storage.
• Objects are typically binary files stored in the filesystem, with metadata contained in extended file attributes (xattr).
• xattr is supported in several filesystems such as ext3, ext4, JFS, etc.
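A minimal sketch of the object server's blob-plus-metadata model follows; it is illustrative only, and a Python dict stands in for the xattr layer so the example runs on any filesystem (Swift itself keeps the metadata in xattrs next to the file data):

```python
import hashlib

class ToyObjectServer:
    """Minimal blob store: objects are opaque bytes plus metadata."""
    def __init__(self):
        self._blobs = {}
        self._meta = {}   # stands in for xattrs such as user.swift.metadata

    def put(self, name, data, **metadata):
        # Store the bytes and record metadata, including a content hash
        # (Swift records an ETag the same way).
        self._blobs[name] = data
        self._meta[name] = {"etag": hashlib.md5(data).hexdigest(),
                            **metadata}

    def get(self, name):
        return self._blobs[name], self._meta[name]

    def delete(self, name):
        del self._blobs[name]
        del self._meta[name]

store = ToyObjectServer()
store.put("photo.jpg", b"...bytes...", content_type="image/jpeg")
data, meta = store.get("photo.jpg")
```

Keeping metadata beside the blob is what lets the object server answer HEAD requests without reading the object body.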
49. Swift Container Server
• The container server lists the objects in a container.
• Lists are stored as SQLite files.
• The container server also tracks statistics such as the number of objects contained and the storage size occupied by a container.
50. Swift Account Server
• The account server lists containers the same way a container server lists objects.
51. The Ring
• The ring contains information about the physical location of the objects stored inside Swift.
• It is a virtual representation of the mapping from names of entities to their real physical locations.
• It is analogous to an indexing service that various processes use to look up the real physical location of entities within the cluster.
• Entities such as accounts, containers and objects each have their own separate ring.
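The name-to-location mapping the ring provides can be sketched as a hash-based partition table; this toy ring is a simplification that ignores the replicas, device weights and zone placement the real Swift ring handles:

```python
import hashlib

class ToyRing:
    """Map entity names to devices the way Swift's ring does:
    hash the name into one of 2**part_power partitions, then look
    the partition up in a precomputed partition-to-device table."""
    def __init__(self, devices, part_power=4):
        self.part_power = part_power
        parts = 2 ** part_power
        # Round-robin assignment; the real ring also balances by
        # weight and spreads replicas across failure domains.
        self.part2dev = [devices[p % len(devices)] for p in range(parts)]

    def get_device(self, account, container, obj):
        path = f"/{account}/{container}/{obj}".encode()
        digest = hashlib.md5(path).digest()
        # Take the top part_power bits of the hash as the partition.
        part = int.from_bytes(digest[:4], "big") >> (32 - self.part_power)
        return self.part2dev[part]

ring = ToyRing(["sdb1", "sdc1", "sdd1"])
dev = ring.get_device("AUTH_alice", "photos", "cat.jpg")
```

Because the table is precomputed, every proxy and storage process can answer "where does this object live?" locally and deterministically.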
52. OpenStack Identity Service (Keystone)
• Keystone provides identity and access policy services for all components in the OpenStack family.
• Keystone implements its own REST-based API, called the Identity API.
• It provides authentication and authorization for all components of OpenStack, including Swift, Glance and Nova.
• Authentication verifies that a request actually comes from who it claims to.
• Authorization verifies whether the authenticated user has access to the services he is asking for.
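As an illustration of the authentication step, here is a sketch of the body a client would POST to Keystone's (v2.0-era) /v2.0/tokens endpoint to exchange credentials for a token; the username, password and tenant name are placeholders:

```python
import json

def build_auth_request(username, password, tenant):
    """Body for a Keystone v2.0 token request (POST /v2.0/tokens)."""
    return {"auth": {"passwordCredentials": {"username": username,
                                             "password": password},
                     "tenantName": tenant}}

# Placeholder credentials; Keystone would answer with a token that the
# client then presents to Nova, Glance or Swift on every call.
body = json.dumps(build_auth_request("demo", "secret", "demo-project"))
```

The returned token is what the other services check with Keystone to authorize each request.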
54. Components of the Identity Service
• Endpoints - Every OpenStack service runs on a dedicated port at a dedicated URL; these are called endpoints.
• Regions - A region defines a dedicated physical location inside a data center. In a typical cloud setup, most if not all services are distributed across data centers/servers, which are also called regions.
• User - A Keystone-authenticated user.
• Services - Each component that is connected to or administered via Keystone can be called a service; for example, Glance can be called a Keystone service.
• Role - To restrict what a particular user can do inside the cloud infrastructure, each user has a role associated with them.
• Tenant - A tenant is a project with all the service endpoints and a role associated with each user who is a member of that particular tenant.
55. Administrative Web Interface: Horizon
• Horizon is the web-based dashboard used to manage and administer OpenStack services.
• Some of its features:
• Instance management
• Access and security management
• Flavor management
• Image management
• View the service catalog
• Manage users, quotas and usage for projects
• Volume management
• Object store management
61. HPC-ABDS
• ~120 capabilities
• >40 from Apache
• Green layers have strong HPC integration opportunities
• Goal: the functionality of ABDS with the performance of HPC
62. Broad Layers in HPC-ABDS
• Workflow-Orchestration
• Application and Analytics: Mahout, MLlib, R…
• High level Programming
• Basic Programming model and runtime
- SPMD, Streaming, MapReduce, MPI
• Inter process communication
- Collectives, point-to-point, publish-subscribe
• In-memory databases/caches
• Object-relational mapping
• SQL and NoSQL, File management
• Data Transport
• Cluster Resource Management (Yarn, Slurm, SGE)
• File systems (HDFS, Lustre …)
• DevOps (Puppet, Chef …)
• IaaS Management from HPC to hypervisors (OpenStack)
• Cross Cutting
- Message Protocols
- Distributed Coordination
- Security & Privacy
- Monitoring
63. Useful Set of Analytics Architectures
• Pleasingly parallel: includes local machine learning, as in parallelizing over images and applying image processing to each image
- Hadoop could be used, but so could many other HTC or many-task tools
• Search: includes collaborative filtering and motif finding, implemented using classic MapReduce (Hadoop)
• Map-Collective or iterative MapReduce using collective communication (clustering) - Hadoop with Harp, Spark …
• Map-Communication or iterative Giraph: MapReduce with point-to-point communication (most graph algorithms, such as maximum clique, connected components, finding diameter, community detection)
- These vary in the difficulty of finding a partitioning (classic parallel load balancing)
• Shared memory: thread-based (event-driven) graph algorithms (shortest path, betweenness centrality)
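The pleasingly parallel case above can be sketched by mapping an image-processing function over independent images with a worker pool; the "images" here are just small lists of pixel values, and the function is a stand-in:

```python
from concurrent.futures import ThreadPoolExecutor

def process_image(pixels):
    # Stand-in "image processing": threshold each pixel value.
    return [1 if p > 128 else 0 for p in pixels]

# Each "image" is independent, so the work is pleasingly parallel.
images = [[0, 130, 255], [200, 50, 129], [128, 129, 127]]

# Threads keep this sketch portable; real CPU-bound image processing
# would use a process pool, Hadoop tasks or another HTC/many-task tool.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(process_image, images))
```

No communication happens between tasks, which is exactly why Hadoop is sufficient here but unnecessary: any many-task tool gives the same result.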
64. Getting High Performance on Data Analytics
(e.g. Mahout, R…)
• On the systems side, we have two principles:
• The Apache Big Data Stack, with ~120 projects, has important broad functionality and a vital, large support organization
• HPC, including MPI, has striking success in delivering high performance, but with a fragile sustainability model
• There are key systems abstractions (levels in the HPC-ABDS software stack) where the Apache approach needs careful integration with HPC:
• Resource management
• Storage
• Programming model -- horizontal scaling parallelism
• Collective and Point-to-Point communication
• Support of iteration
• Data interface (not just key-value)
• In application areas, we define application abstractions to support:
• Graphs/network
• Geospatial
• Genes
• Images, etc.
65. HPC-ABDS Hourglass
[Diagram: an hourglass with high-performance applications at the top, the ~120 ABDS software projects at the bottom, and the HPC-ABDS system (middleware) at the narrow waist]
• High-performance system (middleware): HPC Yarn for resource management; horizontally scalable parallel programming model; collective and point-to-point communication; support of iteration (in-memory databases)
• System abstractions/standards: data format; storage
• Application abstractions/standards: graphs, networks, images, geospatial …; SPIDAL (Scalable Parallel Interoperable Data Analytics Library) or high-performance Mahout, R, Matlab …
67. Increasing Communication, Identical Computation
• Mahout and Hadoop MR: slow due to MapReduce
• Python: slow as scripting
• Spark: iterative MapReduce, non-optimal communication
• Harp: Hadoop plug-in with ~MPI collectives
• MPI: fastest, as C not Java
68. WDA SMACOF MDS (Multidimensional Scaling) using Harp on Big Red 2
Parallel efficiency on 100K-300K sequences; conjugate gradient (the dominant time) and matrix multiplication.
[Figure: parallel efficiency (0.00-1.25) vs. number of nodes (0-140) for 100K, 200K and 300K points]
69. Features of the Harp Hadoop Plugin
• Hadoop plugin (on Hadoop 1.2.1 and Hadoop 2.2.0)
• Hierarchical data abstraction over arrays, key-values and graphs for easy programming expressiveness
• Collective communication model supporting various communication operations on the data abstractions
• Caching with buffer management for the memory allocation required by computation and communication
• BSP-style parallelism
• Fault tolerance with checkpointing
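Harp's BSP-style collective model (Harp itself is a Java plugin) can be illustrated with threads that compute locally, synchronize at a barrier, and then all see an allreduce result; this toy uses Python threads, not Harp's API:

```python
import threading

def bsp_allreduce(local_values):
    """Toy BSP superstep: each worker computes locally, hits a
    barrier, then reads the combined (allreduce) result - the
    collective pattern Harp adds on top of Hadoop."""
    n = len(local_values)
    partials = [None] * n
    total = []
    # The barrier action runs once, when all workers have arrived.
    barrier = threading.Barrier(n, action=lambda: total.append(sum(partials)))

    def worker(rank, value):
        partials[rank] = value * value     # local computation phase
        barrier.wait()                     # synchronize: end of superstep

    threads = [threading.Thread(target=worker, args=(r, v))
               for r, v in enumerate(local_values)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]   # every worker could now read this combined value

result = bsp_allreduce([1, 2, 3])   # squares 1 + 4 + 9
```

In MPI terms this is `MPI_Allreduce`; the point of Harp is making such collectives available inside a Hadoop job instead of shuffling through MapReduce.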
70. Building a Big Data Ecosystem that is
broadly deployable
72. Software-Defined Distributed System (SDDS) as a Service
[Diagram: the FutureGrid layered architecture and the SDDS-aaS tools FutureGrid uses at each layer]
• Infrastructure (IaaS): software-defined computing (virtual clusters); hypervisor, bare metal; operating system
• Platform (PaaS): cloud, e.g. MapReduce; HPC, e.g. PETSc, SAGA; computer science, e.g. compiler tools, sensor nets, monitors
• Network (NaaS): software-defined networks; OpenFlow; GENI
• Software (application or usage, SaaS): CS research use, e.g. testing a new compiler or storage model; class usages, e.g. running GPU and multicore; applications
• FutureGrid uses SDDS-aaS tools: provisioning; image management; IaaS interoperability; NaaS and IaaS tools; experiment management; dynamic IaaS and NaaS; DevOps
CloudMesh is an SDDSaaS tool that uses dynamic provisioning and image management to provide custom environments for general target systems. It involves (1) creating, (2) deploying, and (3) provisioning one or more images in a set of machines on demand. http://cloudmesh.futuregrid.org/
73. Maybe a Big Data Initiative would include
• OpenStack
• Slurm
• Yarn
• HBase
• MySQL
• iRods
• Memcached
• Kafka
• Harp
• Hadoop, Giraph, Spark
• Storm
• Hive
• Pig
• Mahout – lots of different analytics
• R – lots of different analytics
• Kepler, Pegasus, Airavata
• Zookeeper
• Ganglia, Nagios, Inca
74. CloudMesh Architecture
• Cloudmesh is an SDDSaaS toolkit to support:
• A software-defined distributed system encompassing virtualized and bare-metal infrastructure, networks, application, systems and platform software, with the unifying goal of providing Computing as a Service
• The creation of a tightly integrated mesh of services targeting multiple IaaS frameworks
• The ability to federate a number of resources from academia and industry, including existing FutureGrid infrastructure, Amazon Web Services, Azure, HP Cloud, and Karlsruhe, using several IaaS frameworks
• The creation of an environment in which it becomes easier to experiment with platforms and software services while assisting with their deployment
• The exposure of information to guide the efficient utilization of resources (monitoring)
• Support for reproducible computing environments
• An IPython-based workflow as an interoperable on-ramp
• Cloudmesh exposes both hypervisor-based and bare-metal provisioning to users and administrators
• Access is through command line, API, and web interfaces.
75. Cloudmesh Architecture
FutureGrid, SDSC Comet, IU Juliet
• Cloudmesh Management
Framework for monitoring
and operations, user and
project management,
experiment planning and
deployment of services
needed by an experiment
• Provisioning and execution environments to be deployed on (or interfaced with) resources to enable experiment management.
• Resources.
81. SDDS Software Defined Distributed Systems
• Cloudmesh builds infrastructure as SDDS consisting of one or more virtual clusters or
slices with extensive built-in monitoring
• These slices are instantiated on infrastructures with various owners
• Controlled by roles/rules of Project, User, infrastructure
[Diagram: a user in a project issues requests through a Python or REST API; CMPlan, CMProv, CMMon and CMExec plan, provision, monitor and execute the requested SDDS, subject to security checks that depend on user roles and infrastructure rules; the requested SDDS is planned as federated virtual infrastructures (e.g. Linux, Windows and Mac OS X virtual clusters, described in SDDSL) built from an image and template library repository on infrastructure (cluster, storage, network, CPS) characterized by instance type, current state, management structure, provisioning rules and usage rules that depend on user roles; results are returned to the user]
• One needs general hypervisor and bare-metal slices to support FutureGrid research.
• The experiment management system is intended to integrate ISI Precip, FG Cloudmesh and the tools the latter invokes.
• This enables reproducibility in experiments.
82. What is SDDSL?
• There is an OASIS standard activity, TOSCA (Topology and Orchestration Specification for Cloud Applications)
• But this is similar to mash-ups or workflow (Taverna, Kepler, Pegasus, Swift ..), and we know that workflow itself is very successful but workflow standards are not
- OASIS WS-BPEL (Business Process Execution Language) didn't catch on
• As basic tools (Cloudmesh) use Python, and Python is a popular scripting language for workflow, we suggest Python as SDDSL
- IPython Notebooks are a natural log of execution provenance
83. Cloudmesh as an On-Ramp
• As an on-ramp, CloudMesh deploys recipes on multiple platforms, so you can test in one place and run production on others
• Its multi-host support makes it effective for distributed systems
• It will support traditional workflow functions such as:
• Specification of an execution dataflow
• Customization of recipes
• Specification of program parameters
• Workflow is quite well explored in Python:
https://wiki.openstack.org/wiki/NovaOrchestration/WorkflowEngines
• IPython notebook preserves provenance of activity
84. CloudMesh Administrative View of SDDS aaS
• CM-BMPaaS (Bare Metal Provisioning aaS) is a systems view and allows Cloudmesh to
dynamically generate anything and assign it as permitted by user role and resource policy
• FutureGrid machines India, Bravo, Delta, Sierra, Foxtrot are like this
• Note this only implies user-level bare metal access if a given user is authorized, and this is done on a per-machine basis
• It does imply dynamic retargeting of nodes to typically safe modes of operation
(approved machine images) such as switching back and forth between OpenStack,
OpenNebula, HPC on Bare metal, Hadoop etc.
• CM-HPaaS (Hypervisor based Provisioning aaS) allows Cloudmesh to generate "anything"
on the hypervisor allowed for a particular user
• Platform determined by images available to user
• Amazon, Azure, HPCloud, Google Compute Engine
• CM-PaaS (Platform as a Service) makes available an essentially fixed Platform with
configuration differences
• XSEDE with MPI HPC nodes could be like this as is Google App Engine and Amazon HPC
Cluster. Echo at IU (ScaleMP) is like this
• In such a case a system administrator can statically change base system but the dynamic
provisioner cannot
85. CloudMesh User View of SDDS aaS
• Note we always consider virtual clusters or slices with nodes that may or may
not have hypervisors
• BM-IaaS: Bare Metal (root access) Infrastructure as a Service, with variants, e.g. whether firmware can be changed
• H-IaaS: Hypervisor-based Infrastructure (Machine) as a Service; the user is provided a collection of hypervisors on which to build a system
• The classic commercial cloud view
• PSaaS: Physical or Platformed System as a Service, where the user is provided a configured image on either bare metal or a hypervisor
• User could request a deployment of Apache Storm and Kafka to control a
set of devices (e.g. smartphones)
86. Cloudmesh Infrastructure Types
• Nucleus Infrastructure:
• Persistent Cloudmesh Infrastructure with defined provisioning rules and
characteristics and managed by CloudMesh
• Federated Infrastructure:
• Outside infrastructure that can be used by special arrangement such as
commercial clouds or XSEDE
• Typically persistent and often batch scheduled
• CloudMesh can use it within prescribed provisioning rules, with users restricted to those with permitted access; interoperable templates allow common images to be shared with the nucleus
• Contributed Infrastructure
• Outside contributions to a particular Cloudmesh project managed by Cloudmesh
in this project
• Typically strong user role restrictions – users must belong to a particular project
• Can implement a PlanetLab-like environment by contributing hardware that can be generally used with bare-metal provisioning
87. Lessons / Insights
• Integrate (don’t compete) HPC with “Commodity Big data” (Google to Amazon to
Enterprise Data Analytics)
• i.e. improve Mahout; don’t compete with it
• Use Hadoop plug-ins rather than replacing Hadoop
• Enhanced Apache Big Data Stack HPC-ABDS has ~120 members
• Opportunities at Resource management, Data/File, Streaming, Programming,
monitoring, workflow layers for HPC and ABDS integration
• These need to be captured as services; we are developing an HPC-cloud interoperability environment
• Data-intensive algorithms do not have the well-developed high-performance libraries familiar from HPC
• Need to develop needed services at all levels of stack from users of Mahout to
those developing better run time and programming environments
88. Recommended Courses
NetCom Learning offers a comprehensive portfolio of Big Data training options.
Please see below the list of recommended courses with upcoming schedules:
Deploying OpenStack Cloud - Fundamentals v3.0 – OCDCU
Check out more Big Data training options with NetCom Learning. CLICK HERE
89. Our live webinars will help you touch base on a wide variety of IT, soft-skills and business productivity topics, and keep you up to date on the latest IT industry trends. Register now for our upcoming webinars:
BIG DATA | How to explain it & how to use it for your career? – March 30
A Brief on Benefits of ITIL for the Organization – April 4
Visualization with Tableau to Enhance Efficiency in Organization – April 6
How Machine Learning Helps Organizations to Work More Efficiently? – April 11
Why Certified Associate in Project Management (CAPM) and How to Prepare? - April 18
90. Special Promotion
Whether you're learning new IT or business skills, or you are developing a learning plan for your team, for a limited time, register for our Guarantee to Run classes and get 25% off the course price.
Learn more»
91. To get latest technology updates, please follow our social media pages!
93. THANK YOU !!!
Editor's Notes
Due to its integrated services Cloudmesh provides the ability to be an onramp for other clouds.
It provides information services to various system level sensors to give access to sensor and utilization data. Internally, it can be used to optimize the system usage.
The provisioning experience from FutureGrid has taught us that we need to provide:
the creation of new clouds (rain);
the repartitioning of resources between services (cloud shifting);
and the integration of external cloud resources in case of over-provisioning (cloud bursting).
As we deal with many IaaS frameworks, we need an abstraction layer on top of the IaaS framework.
Experiment management is conducted with workflows controlled in shells, Python/IPython, as well as systems such as OpenStack's Heat.
Accounting is supported through additional services such as user management and charge rate management.
Not all features are implemented yet. The figure shows the main functionality that we currently target for implementation.
A starting window allows the user to choose from the different functionality.
Yes, Azure is also there. Our GUI can easily handle searching for images; we can set defaults for each cloud (images and flavors), and pressing the + button will give us a new server with the specified defaults.
Cloudmesh provides more than shell commands; it has an integrated shell.